Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.
Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad
2016-02-01
In the current research, the muscle equivalent linear damping coefficient, which enters the muscle model through the force-velocity relation, and the corresponding time constant are investigated. To this end, a 1D skeletal muscle model was used. Two characterizations of this model have been implemented: one using a linear force-stiffness relationship (Hill-type model) and one using a nonlinear relationship. The OpenSim platform was used to verify the model, and isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results, providing better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.
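As a rough illustration of how an equivalent linear damping coefficient follows from a force-velocity relation, the sketch below linearizes the classic Hill hyperbola about zero shortening velocity. The Hill constants are made-up values for demonstration, not parameters from the paper.

```python
import numpy as np

# Classic Hill force-velocity hyperbola (shortening): (F + a)(v + b) = (F0 + a) * b
# Parameter values below are illustrative only.
F0, a, b = 1.0, 0.25, 0.35   # max isometric force and Hill constants

def hill_force(v):
    """Muscle force at shortening velocity v."""
    return (F0 + a) * b / (v + b) - a

# Equivalent linear damping = negative slope of the force-velocity curve at v = 0.
# Analytically: dF/dv at v=0 is -(F0 + a)/b, so c_eq = (F0 + a)/b.
c_eq_analytic = (F0 + a) / b

# Numerical check by central differences
h = 1e-6
c_eq_numeric = -(hill_force(h) - hill_force(-h)) / (2 * h)
```

The larger the curvature constant b, the softer the hyperbola and the smaller the equivalent damping at zero velocity.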
NASA Astrophysics Data System (ADS)
Kim, Euiyoung; Cho, Maenghyo
2017-11-01
In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
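A minimal sketch of the polynomial internal-force identification that STEP-style methods build on, assuming a Duffing-like cubic stiffness and noiseless sampled forces (all values are illustrative, not from the paper):

```python
import numpy as np

# Fit a polynomial internal-force model f(q) = k1*q + k3*q**3 from sampled
# displacement/force pairs, as a stand-in for a STEP-style stiffness evaluation.
k1_true, k3_true = 4.0, 1.5

q = np.linspace(-1.0, 1.0, 41)            # prescribed displacements
f = k1_true * q + k3_true * q**3          # "evaluated" internal forces

# Least-squares fit of the polynomial coefficients (the equivalent model)
A = np.column_stack([q, q**3])
coef, *_ = np.linalg.lstsq(A, f, rcond=None)
k1_est, k3_est = coef
```

In an element-wise scheme such as E-STEP, a fit of this kind would be performed per element, which is what makes the construction parallel and easy to parameterize.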
On the Relation between the Linear Factor Model and the Latent Profile Model
ERIC Educational Resources Information Center
Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul
2011-01-01
The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…
Equivalent linearization for fatigue life estimates of a nonlinear structure
NASA Technical Reports Server (NTRS)
Miles, R. N.
1989-01-01
An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise, and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization that avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.
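The Gaussian-closure step of conventional equivalent linearization can be sketched for a unit-mass Duffing oscillator under white noise; the fixed-point iteration below uses illustrative parameters and the standard SDOF variance formula.

```python
import math

# Equivalent linearization of a unit-mass Duffing oscillator
#   x'' + c x' + k x + eps x^3 = w(t),  S_w = S0 (two-sided white noise).
# Gaussian closure gives k_eq = k + 3*eps*sigma2, and the SDOF response
# variance is sigma2 = pi*S0 / (c*k_eq); iterate to the fixed point.
c, k, eps, S0 = 0.1, 1.0, 0.5, 0.01   # illustrative values

sigma2 = math.pi * S0 / (c * k)       # linear (eps = 0) starting guess
for _ in range(200):
    k_eq = k + 3.0 * eps * sigma2
    sigma2 = math.pi * S0 / (c * k_eq)

residual = abs(sigma2 - math.pi * S0 / (c * (k + 3.0 * eps * sigma2)))
```

The hardening cubic raises the equivalent stiffness and therefore lowers the predicted response variance relative to the linear system.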
Semilinear programming: applications and implementation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohan, S.
Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility location, goal programming and L/sub 1/ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
Biological effects and equivalent doses in radiotherapy: A software solution
Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline
2013-01-01
Background: The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim: We therefore propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods: The methodology draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions: The results are obtained from an algorithm that minimizes an ad hoc cost function and are then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
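The core linear-quadratic equivalent-dose formulas such a calculator rests on (without the repopulation and linear-quadratic-linear extensions the paper adds) can be sketched as:

```python
# Standard linear-quadratic fractionation formulas (textbook forms, not the
# paper's extended algorithm).
def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of d Gy."""
    return n * d * (1.0 + d / alpha_beta)

def eqd2(n, d, alpha_beta):
    """Equivalent total dose delivered in 2-Gy fractions."""
    return bed(n, d, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Example: 30 fractions of 2 Gy at alpha/beta = 10 Gy
print(bed(30, 2, 10), eqd2(30, 2, 10))
```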
Assessing Measurement Equivalence in Ordered-Categorical Data
ERIC Educational Resources Information Center
Elosua, Paula
2011-01-01
Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…
A single-degree-of-freedom model for non-linear soil amplification
Erdik, Mustafa Ozder
1979-01-01
For proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have been favorably compared with the actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high frequency regions, where the equivalent-linear methods may underestimate the surface motion by as much as a factor of two or more. Although such studies are complete in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of the soil response through the use of a single-degree-of-freedom non-linear hysteretic model. Although the investigation is based on a specific type of nonlinearity and a set of dynamic soil properties, the method described does not limit itself to these assumptions and is equally applicable to other types of nonlinearity and soil parameters.
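The strain-compatibility iteration at the heart of equivalent-linear procedures can be sketched for a single soil element under a fixed cyclic stress amplitude, assuming a hyperbolic modulus-reduction curve (all values are illustrative):

```python
# Toy strain-compatibility iteration: one soil element under a fixed cyclic
# shear stress amplitude tau, with the hyperbolic modulus-reduction curve
# G(gamma) = Gmax / (1 + gamma/gamma_ref).
Gmax, gamma_ref, tau = 60e6, 1e-3, 40e3   # Pa, dimensionless strain, Pa

gamma = tau / Gmax                  # start from the small-strain solution
for _ in range(100):
    G = Gmax / (1.0 + gamma / gamma_ref)   # strain-compatible secant modulus
    gamma = tau / G                        # strain implied by that modulus

# At convergence, gamma satisfies gamma = tau*(1 + gamma/gamma_ref)/Gmax
residual = abs(gamma - tau * (1.0 + gamma / gamma_ref) / Gmax)
```

In a full site-response analysis the same update is applied layer by layer, with an effective (rather than peak) strain and with the damping ratio updated alongside the modulus.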
ERIC Educational Resources Information Center
Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.
Current interest in the assessment of measurement equivalence emphasizes two methods of analysis, linear, and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…
Equivalent reduced model technique development for nonlinear system dynamic response
NASA Astrophysics Data System (ADS)
Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet
2013-04-01
The dynamic response of structural systems commonly involves nonlinear effects. Often, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the System Equivalent Reduction Expansion Process (SEREP). Extremely reduced-order models that provide the dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to demonstrate the usefulness of the technique for a variety of analytical cases.
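A minimal sketch of the SEREP reduction the technique relies on, for a toy 6-DOF spring-mass chain with unit masses; with as many active DOFs as retained modes, the reduced model reproduces the retained eigenvalues exactly. The model and the choice of active DOFs are illustrative.

```python
import numpy as np

# 6-DOF fixed-fixed spring-mass chain, unit masses (M = I), illustrative stiffness.
n, m = 6, 2                       # full size, number of retained modes
k = 1000.0
K = 2*k*np.eye(n) - k*np.eye(n, k=1) - k*np.eye(n, k=-1)
M = np.eye(n)

lam, Phi = np.linalg.eigh(K)      # with M = I this is the modal problem
Phi_m = Phi[:, :m]                # retained mode shapes

active = [0, 3]                   # active DOFs (as many as retained modes)
T = Phi_m @ np.linalg.pinv(Phi_m[active, :])   # SEREP transformation matrix

K_r = T.T @ K @ T                 # reduced stiffness
M_r = T.T @ M @ T                 # reduced mass

lam_r = np.sort(np.linalg.eigvals(np.linalg.solve(M_r, K_r)).real)
```

The reduced eigenvalues `lam_r` match the two lowest full-model eigenvalues, which is the property that lets the reduced components be coupled through nonlinear connection elements without losing their linear dynamics.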
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
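For the i.i.d. special case of such a jump linear controller (an independent Bernoulli upset each sample), mean-square stability of the discrete-time representation reduces to a spectral-radius test on the second-moment dynamics; the matrices and upset probability below are illustrative.

```python
import numpy as np

# Closed-loop matrix switches i.i.d. between A0 (nominal) and A1 (upset) with
# probabilities p0, p1. Mean-square stable iff
#   rho( p0*kron(A0, A0) + p1*kron(A1, A1) ) < 1.
A0 = np.array([[0.5, 0.2],
               [0.0, 0.6]])      # nominal controller in the loop
A1 = np.array([[1.1, 0.0],
               [0.3, 1.0]])      # controller upset (unstable on its own)
p1 = 0.05                        # upset probability per sample
p0 = 1.0 - p1

S = p0 * np.kron(A0, A0) + p1 * np.kron(A1, A1)
rho = np.abs(np.linalg.eigvals(S)).max()
mean_square_stable = rho < 1.0
```

Even though the upset dynamics A1 are unstable, the rare-upset mixture remains mean-square stable here; a Markov (rather than i.i.d.) upset model leads to an analogous but larger block test.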
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling the general strategy to characterize the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads that are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. It is the main result that in this way full-wave simulation results can be reproduced. Furthermore it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between HPEM field excitation and nonlinearly loaded loop antenna.
Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.
Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z
Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.
Flatness-based control and Kalman filtering for a continuous-time macroeconomic model
NASA Astrophysics Data System (ADS)
Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.
2017-11-01
The article proposes flatness-based control for a nonlinear macroeconomic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied on the linearized equivalent model of the financial system, and of an inverse transformation based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.
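Pole placement on the Brunovsky (chain-of-integrators) form that flatness-based linearization produces can be sketched with Ackermann's formula; the third-order chain below is an illustrative stand-in, not the paper's macroeconomic model.

```python
import numpy as np

def ackermann(A, B, poles):
    """Single-input pole placement via Ackermann's formula (illustrative)."""
    n = A.shape[0]
    # Controllability matrix [B, AB, ..., A^(n-1) B]
    C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    # Desired characteristic polynomial evaluated at A
    coeffs = np.poly(poles)            # [1, c1, ..., cn]
    pA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e = np.zeros((1, n)); e[0, -1] = 1.0
    return e @ np.linalg.inv(C) @ pA

# Brunovsky form: a chain of integrators with the input at the last state
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
B = np.array([[0.], [0.], [1.]])

K = ackermann(A, B, [-1.0, -2.0, -3.0])
cl_poles = np.sort(np.linalg.eigvals(A - B @ K).real)
```

For this canonical form the gain is simply the desired characteristic-polynomial coefficients; the nonlinear feedback and the flatness diffeomorphism then map the linear law back onto the original states.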
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, are used to validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
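The equivalent viscous damping of a hysteretic element can be sketched by simulating a Bouc-Wen loop under an imposed sinusoidal displacement and converting the energy dissipated per cycle; all parameters are illustrative and the linear relaxation model of the paper is not reproduced here.

```python
import numpy as np

# Bouc-Wen hysteretic variable z driven by x(t) = X sin(omega t):
#   z' = A*x' - beta*|x'|*|z|^(n-1)*z - gamma*x'*|z|^n
A_bw, beta, gamma, n_bw = 1.0, 0.5, 0.5, 1.0
omega, X, dt = 2.0, 1.0, 1e-4
t = np.arange(0.0, 2 * (2*np.pi/omega), dt)   # two cycles; keep the second

x = X * np.sin(omega * t)
xdot = X * omega * np.cos(omega * t)

z = np.zeros_like(t)
for i in range(len(t) - 1):
    zdot = (A_bw * xdot[i]
            - beta * abs(xdot[i]) * abs(z[i]) ** (n_bw - 1.0) * z[i]
            - gamma * xdot[i] * abs(z[i]) ** n_bw)
    z[i + 1] = z[i] + dt * zdot           # explicit Euler, small step

half = len(t) // 2                        # steady-state (second) cycle
E_loop = np.sum(z[half:-1] * xdot[half:-1]) * dt   # energy dissipated per cycle
c_eq = E_loop / (np.pi * omega * X**2)             # equivalent viscous damping
```

Matching the dissipated energy per cycle in this way is the usual route from a hysteresis loop to an equivalent linear damper.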
Variable structure control of nonlinear systems through simplified uncertain models
NASA Technical Reports Server (NTRS)
Sira-Ramirez, Hebertt
1986-01-01
A variable structure control approach is presented for the robust stabilization of feedback equivalent nonlinear systems whose proposed model lies in the same structural orbit of a linear system in Brunovsky's canonical form. An attempt to linearize exactly the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single link manipulator with an elastic joint.
Nature of size effects in compact models of field effect transistors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050
Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of the so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established between the linear model parameters of the equivalent circuit elements of internal transistors and the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that the size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects at levels below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved in designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models of not only field effect transistors but also arbitrary semiconductor devices with nonlinear instrumental characteristics.
On the equivalence of case-crossover and time series methods in environmental epidemiology.
Lu, Yun; Zeger, Scott L
2007-04-01
The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover using conditional logistic regression is a special case of time series analysis when there is a common exposure such as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
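The time-series side of the equivalence is a Poisson log-linear regression for daily counts; a self-contained IRLS fit on simulated data (illustrative coefficients, and without the smooth confounder terms a real air-pollution analysis would include) might look like:

```python
import numpy as np

# Simulated daily counts vs. a standardized exposure, fitted by IRLS
# (iteratively reweighted least squares) for a Poisson log-linear model.
rng = np.random.default_rng(42)
days = 2000
exposure = rng.normal(size=days)               # e.g. standardized pollution level
beta_true = np.array([2.0, 0.15])              # intercept, log rate ratio

X = np.column_stack([np.ones(days), exposure])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for _ in range(50):                            # IRLS / Fisher scoring
    mu = np.exp(X @ beta)
    W = mu                                     # Poisson working weights
    z = X @ beta + (y - mu) / mu               # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
```

A conditional logistic case-crossover fit on the same data would target the same exposure coefficient, which is the equivalence the paper formalizes; the Poisson formulation additionally allows an overdispersion correction.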
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
2015-08-01
Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and the energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
A Note on Equivalence Among Various Scalar Field Models of Dark Energies
NASA Astrophysics Data System (ADS)
Mandal, Jyotirmay Das; Debnath, Ujjal
2017-08-01
In this work, we have tried to identify similarities among the various available models of scalar field dark energy (e.g., quintessence, k-essence, tachyon, phantom, quintom, and dilatonic dark energy). We have defined an equivalence relation, in the sense of elementary set theory, between scalar field models of dark energy and used fundamental ideas from linear algebra to set up our model. Consequently, we have obtained mutually disjoint subsets of scalar field dark energies with similar properties and discussed our observations.
Agent based reasoning for the non-linear stochastic models of long-range memory
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Gontis, V.
2012-02-01
We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. The stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, providing a matching macroscopic description, serves as a microscopic reasoning for the earlier proposed stochastic model exhibiting power-law statistics.
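A minimal version of Kirman's herding model, before the variable event time scale the paper introduces, can be simulated as follows (parameters and time scale are illustrative):

```python
import numpy as np

# Kirman's ant/herding model: N agents in two states; per step at most one
# agent switches, either spontaneously (eps) or by imitation (h).
rng = np.random.default_rng(1)
N, eps, h, steps = 100, 0.01, 0.05, 20000

n_up = N // 2
frac = np.empty(steps)
for t in range(steps):
    p_up   = (N - n_up) / N * (eps + h * n_up / N)        # a "down" agent flips up
    p_down = n_up / N * (eps + h * (N - n_up) / N)        # an "up" agent flips down
    u = rng.random()
    if u < p_up:
        n_up += 1
    elif u < p_up + p_down:
        n_up -= 1
    frac[t] = n_up / N
```

With small eps relative to h the occupation fraction wanders between herding extremes, which is the microscopic behavior matched to the macroscopic stochastic models in the paper.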
The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krmar, M.; Kuzmanović, A.; Nikolić, D.
2013-08-15
Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of the paraffin screen, which covered only 27.5% of the cross-sectional area of the maze. The neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered: the first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons in an isotropic manner; in the second model, the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen placed inside the maze. It was also determined that the contributions to the dose from areas not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. It was also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters.
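The "explode" step of the piecewise-exponential (Poisson) representation can be sketched as a function that splits each survival record at the cut points; the cut points and records below are made up, and the SAS macro itself is not reproduced.

```python
# One output row per subject and traversed interval: (piece index, exposure
# time in the piece, event indicator). An offset of log(exposure) then turns
# a Poisson fit on these rows into the piecewise-exponential survival model.
def explode(records, cuts):
    """records: list of (time, event); cuts: increasing boundaries starting
    at 0 and covering all times. Returns (piece, exposure, event) rows."""
    rows = []
    for time, event in records:
        for j in range(len(cuts) - 1):
            lo, hi = cuts[j], cuts[j + 1]
            if time <= lo:
                break
            exposure = min(time, hi) - lo
            died_here = event and time <= hi
            rows.append((j, exposure, int(died_here)))
    return rows

rows = explode([(2.5, 1), (4.0, 0), (0.5, 1)], cuts=[0.0, 1.0, 3.0, 5.0])
total_exposure = sum(r[1] for r in rows)
total_events = sum(r[2] for r in rows)
```

Total exposure and total events are preserved by the explosion, which is what makes the Poisson likelihood equivalent to the original survival likelihood.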
NASA Astrophysics Data System (ADS)
Guo, Mengchao; Zhou, Kan; Wang, Xiaokun; Zhuang, Haiyan; Tang, Dongming; Zhang, Baoshan; Yang, Yi
2018-04-01
In this paper, the impact of coupling between unit cells on the performance of a linear-to-circular polarization conversion metamaterial with half transmission and half reflection is analyzed by changing the distance between the unit cells, and an equivalent electrical circuit model is then built to explain the observed behavior. The simulated results show that, when the distance between the unit cells is 23 mm, the metamaterial converts half of the incident linearly polarized wave into a reflected left-hand circularly polarized wave and the other half into a transmitted left-hand circularly polarized wave at 4.4 GHz; when the distance is 28 mm, the metamaterial reflects all of the incident linearly polarized wave at 4.4 GHz; and when the distance is 32 mm, the metamaterial converts half of the incident linearly polarized wave into a reflected right-hand circularly polarized wave and the other half into a transmitted right-hand circularly polarized wave at 4.4 GHz. Tunability is thus realized successfully. The analysis shows that changes in the coupling between unit cells lead to the changes in performance, so this coupling is taken into account when building the equivalent electrical circuit model. The resulting circuit model explains the simulated results well, which confirms its validity, and it can aid the design of tunable polarization conversion metamaterials.
A computational algorithm for spacecraft control and momentum management
NASA Technical Reports Server (NTRS)
Dzielski, John; Bergmann, Edward; Paradiso, Joseph
1990-01-01
Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
Baldwin, Alex S.; Baker, Daniel H.; Hess, Robert F.
2016-01-01
The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system’s input has on its output one can estimate the variance of this internal noise. By applying this simple “linear amplifier” model to the human visual system, one can entirely explain an observer’s detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system’s internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies. PMID:26953796
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Cucinotta, F. A.; Wilson, J. W. (Principal Investigator)
1998-01-01
A matched set of five tissue-equivalent proportional counters (TEPCs), embedded at the centers of 0 (bare), 3, 5, 8 and 12-inch-diameter polyethylene spheres, were flown on the Shuttle flight STS-81 (inclination 51.65 degrees, altitude approximately 400 km). The data obtained were separated into contributions from trapped protons and galactic cosmic radiation (GCR). From the measured linear energy transfer (LET) spectra, the absorbed dose and dose-equivalent rates were calculated. The results were compared to calculations made with the radiation transport model HZETRN/NUCFRG2, using the GCR free-space spectra, orbit-averaged geomagnetic transmission function and Shuttle shielding distributions. The comparison shows that the model fits the dose rates to a root mean square (rms) error of 5%, and dose-equivalent rates to an rms error of 10%. Fairly good agreement between the LET spectra was found; however, differences are seen at both low and high LET. These differences can be understood as due to the combined effects of chord-length variation and detector response function. These results rule out a number of radiation transport/nuclear fragmentation models. Similar comparisons of trapped-proton dose rates were made between calculations made with the proton transport model BRYNTRN using the AP-8 MIN trapped-proton model and Shuttle shielding distributions. The predictions of absorbed dose and dose-equivalent rates are fairly good. However, the prediction of the LET spectra below approximately 30 keV/microm shows the need to improve the AP-8 model. These results have strong implications for shielding requirements for an interplanetary manned mission.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.
2003-01-01
The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.
NASA Astrophysics Data System (ADS)
Lahaie, Sébastien; Parkes, David C.
We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.
Feedback-Equivalence of Nonlinear Systems with Applications to Power System Equations.
NASA Astrophysics Data System (ADS)
Marino, Riccardo
The key concept of the dissertation is feedback equivalence among systems affine in control. Feedback equivalence to linear systems in Brunovsky canonical form and the construction of the corresponding feedback transformation are used to: (i) design a nonlinear regulator for a detailed nonlinear model of a synchronous generator connected to an infinite bus; (ii) establish which power system network structures enjoy the feedback linearizability property and design a stabilizing control law for these networks with a constraint on the control space which comes from the use of d.c. lines. It is also shown that the feedback linearizability property allows the use of state feedback to construct a linear controllable system with a positive definite linear Hamiltonian structure for the uncontrolled part if the state space is even; a stabilizing control law is derived for such systems. The feedback linearizability property is characterized by the involutivity of certain nested distributions for strongly accessible analytic systems; if the system is defined on a manifold M diffeomorphic to the Euclidean space, it is established that the set where the property holds is a submanifold open and dense in M. If an analytic output map is defined, a set of nested involutive distributions can always be defined, which allows the introduction of an observability property which is the dual concept, in some sense, to feedback linearizability: the goal is to investigate when a nonlinear system affine in control with an analytic output map is feedback equivalent to a linear controllable and observable system. Finally, a nested involutive structure of distributions is shown to guarantee the existence of a state feedback that takes a nonlinear system affine in control to a single input one, both feedback equivalent to linear controllable systems, preserving one controlled vector field.
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
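A minimal sketch of the Eigensystem Realization Algorithm step mentioned above, assuming scalar Markov parameters h_k = C A^(k-1) B: the SVD of a Hankel matrix built from the Markov parameters yields a state-space realization. The one-state test system is illustrative.

```python
import numpy as np

# Minimal Eigensystem Realization Algorithm (ERA) sketch: recover a
# state-space model (A, B, C) from Markov parameters via the SVD of a
# Hankel matrix. The scalar test system below is illustrative.

def era(markov, r, n):
    """Realize an n-state model from a list of scalar Markov parameters."""
    H0 = np.array([[markov[i + j] for j in range(r)] for i in range(r)])
    H1 = np.array([[markov[i + j + 1] for j in range(r)] for i in range(r)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :n], s[:n], Vt[:n, :]      # truncate to model order n
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = S_inv_sqrt @ U.T @ H1 @ Vt.T @ S_inv_sqrt
    B = np.diag(np.sqrt(s)) @ Vt[:, :1]
    C = U[:1, :] @ np.diag(np.sqrt(s))
    return A, B, C

# Markov parameters of the system a = 0.9, b = c = 1: h_k = 0.9**k
h = [0.9 ** k for k in range(10)]
A, B, C = era(h, r=4, n=1)
print(round(float(A[0, 0]), 6))
```

The realized state matrix recovers the original pole at 0.9.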
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru
2015-10-28
An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
Computation of linear acceleration through an internal model in the macaque cerebellum
Laurens, Jean; Meng, Hui; Angelaki, Dora E.
2013-01-01
A combination of theory and behavioral findings has supported a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Here we exploit un-natural motion stimuli, which induce incorrect self-motion perception and eye movements, to explore the neural correlates of an internal model proposed to compensate for Einstein’s equivalence principle and generate neural estimates of linear acceleration and gravity. We show that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encode erroneous linear acceleration, as expected from the internal model hypothesis, even when no actual linear acceleration occurs. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized by theorists. PMID:24077562
Research on the time-temperature-damage superposition principle of NEPE propellant
NASA Astrophysics Data System (ADS)
Han, Long; Chen, Xiong; Xu, Jin-sheng; Zhou, Chang-sheng; Yu, Jia-quan
2015-11-01
To describe the relaxation behavior of NEPE (Nitrate Ester Plasticized Polyether) propellant, we analyzed the equivalent relationships between time, temperature, and damage. We conducted a series of uniaxial tensile tests and employed a cumulative damage model to calculate the damage values for relaxation tests at different strain levels. The damage evolution curve of the tensile test at 100 mm/min was obtained through numerical analysis. Relaxation tests were conducted over a range of temperature and strain levels, and the equivalent relationship between time, temperature, and damage was deduced based on free volume theory. The equivalent relationship was then used to generate predictions of the long-term relaxation behavior of the NEPE propellant. Subsequently, the equivalent relationship between time and damage was introduced into the linear viscoelastic model to establish a nonlinear model which is capable of describing the mechanical behavior of composite propellants under a uniaxial tensile load. The comparison between model prediction and experimental data shows that the presented model provides a reliable forecast of the mechanical behavior of propellants.
NASA Technical Reports Server (NTRS)
Vivian, H. C.
1985-01-01
Charge-state model for lead/acid batteries proposed as part of effort to make equivalent of fuel gage for battery-powered vehicles. The model is based on equations that approximate observable characteristics of battery electrochemistry. It uses linear equations, which are easier to simulate on a computer, and gives smooth transitions between charge, discharge, and recuperation.
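A hypothetical sketch of such a linear charge-state model: state of charge integrates current linearly (coulomb counting), and terminal voltage is an affine function of charge, giving smooth charge/discharge/rest transitions. The capacity and voltage constants are illustrative assumptions, not values from the work.

```python
# Hypothetical linear charge-state model in the spirit of the abstract.
# All coefficients are illustrative, not fitted values.

def simulate_soc(soc0, currents, dt=1.0, capacity_ah=50.0):
    """Coulomb-counting update: positive current discharges the battery.

    dt is in seconds; currents in amperes; soc clamped to [0, 1].
    """
    soc = soc0
    trace = []
    for i_amps in currents:
        soc -= i_amps * dt / (capacity_ah * 3600.0)
        soc = min(max(soc, 0.0), 1.0)
        trace.append(soc)
    return trace

def terminal_voltage(soc, v_empty=11.8, v_full=12.7):
    """Affine open-circuit-voltage approximation for a 12 V lead/acid pack."""
    return v_empty + (v_full - v_empty) * soc

trace = simulate_soc(1.0, [10.0] * 3600)   # 10 A discharge for one hour
print(round(trace[-1], 4), round(terminal_voltage(trace[-1]), 4))
```

Discharging 10 Ah from a 50 Ah pack leaves a state of charge of 0.8, and the "fuel gage" voltage moves linearly with it.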
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
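The nonlinear stiffness evaluation idea can be sketched for a single degree of freedom with quadratic and cubic stiffness: prescribing a few static displacements to a black-box force routine (standing in for the commercial finite element program) and solving the resulting linear system recovers the polynomial stiffness coefficients. The coefficient values are illustrative.

```python
# Sketch of a stiffness evaluation procedure in the spirit of ELSTEP:
# prescribe static displacements, record the nonlinear restoring force,
# and solve for the polynomial stiffness coefficients. A single-DOF
# cubic "element" stands in for the finite element solver.

def restoring_force(q, k1=100.0, k2=-15.0, k3=40.0):
    """Black-box nonlinear force (plays the role of the FE program)."""
    return k1 * q + k2 * q ** 2 + k3 * q ** 3

def evaluate_stiffness(force, q=0.1):
    """Recover k1, k2, k3 from forces at prescribed displacements q, -q, 2q."""
    fp, fm, f2 = force(q), force(-q), force(2 * q)
    k2 = (fp + fm) / (2 * q ** 2)          # even part isolates k2
    d = (fp - fm) / 2                      # odd part: d = k1*q + k3*q**3
    k3 = (f2 - 4 * k2 * q ** 2 - 2 * d) / (6 * q ** 3)
    k1 = (d - k3 * q ** 3) / q
    return k1, k2, k3

k1, k2, k3 = evaluate_stiffness(restoring_force)
print(round(k1, 6), round(k2, 6), round(k3, 6))
```

For a cubic polynomial, three prescribed displacements determine the coefficients exactly.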
NASA Astrophysics Data System (ADS)
Beardsell, Alec; Collier, William; Han, Tao
2016-09-01
There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
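A sketch of the equivalent two-sample construction for logistic regression under the stated rule: the two groups' linear predictors differ by the slope times twice the standard deviation of the covariate, and a standard normal-approximation two-proportion formula then gives the sample size. All parameter values are illustrative.

```python
import math

# Equivalent two-sample sketch for logistic regression power analysis:
# two equally sized groups whose linear predictors differ by
# slope * 2 * SD(x), centered to keep the overall response near the
# target probability. The two-proportion sample-size formula is the
# standard normal approximation.

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def equivalent_two_sample_n(p_overall, slope, sd_x, alpha=0.05, power=0.8):
    """Total n for the approximately equivalent two-proportion problem."""
    delta = slope * 2.0 * sd_x
    base = math.log(p_overall / (1.0 - p_overall))
    p1 = logistic(base - delta / 2.0)
    p2 = logistic(base + delta / 2.0)
    z_a, z_b = 1.959964, 0.841621      # N(0,1) quantiles for alpha/2, power
    pbar = (p1 + p2) / 2.0
    n_per_group = ((z_a * math.sqrt(2 * pbar * (1 - pbar))
                    + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                   / (p2 - p1) ** 2)
    return 2 * math.ceil(n_per_group)

n = equivalent_two_sample_n(p_overall=0.3, slope=0.5, sd_x=1.0)
print(n)
```

This reuses the familiar two-proportion formula for a regression problem, exactly the simplification the abstract describes.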
Impacts analysis of car following models considering variable vehicular gap policies
NASA Astrophysics Data System (ADS)
Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke
2018-07-01
Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were incorporated into the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car-following models are put forward by building the constant time headway (CTH) policy and the variable time headway (VTH) policy into the optimal velocity function, separately. Through steady-state analysis of the new models, an equivalent optimal velocity function is defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions for the safe vehicular gap and then apply small-amplitude perturbation analysis and long-wave expansion techniques. Additionally, first-order approximate solutions of the new models are derived in the stable region by transforming the models into typical Burgers partial differential equations with the reductive perturbation method. Numerical simulations based on the FVDM indicate that variable vehicular gap policies with proper parameters directly improve the stability of traffic flow and help avoid unstable traffic phenomena.
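A minimal sketch of a full velocity difference model with a constant-time-headway optimal velocity function, in which the steady-state gap is s0 + T*v. All parameters are illustrative assumptions, not values from the paper.

```python
# Illustrative FVDM with a constant time headway (CTH) policy folded
# into the optimal velocity function. Parameters are assumed values.

def v_cth(gap, vmax=30.0, s0=2.0, T=1.5):
    """CTH optimal velocity: at steady state, gap = s0 + T*v."""
    return max(0.0, min(vmax, (gap - s0) / T))

def step(positions, velocities, dt=0.1, a=0.8, lam=0.5):
    """One semi-implicit Euler step of dv/dt = a*(V(gap) - v) + lam*dv."""
    acc = [0.0]                          # lead vehicle cruises
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        dv = velocities[i - 1] - velocities[i]
        acc.append(a * (v_cth(gap) - velocities[i]) + lam * dv)
    new_v = [max(0.0, v + ac * dt) for v, ac in zip(velocities, acc)]
    new_x = [x + v * dt for x, v in zip(positions, new_v)]
    return new_x, new_v

# Three-car platoon: followers relax toward the leader's 10 m/s
x = [100.0, 80.0, 60.0]
v = [10.0, 5.0, 5.0]
for _ in range(2000):
    x, v = step(x, v)
print([round(s, 2) for s in v], round(x[0] - x[1], 2))
```

At equilibrium each follower matches the leader's speed and settles at the CTH gap s0 + T*v = 17 m.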
Estimating Causal Effects with Ancestral Graph Markov Models
Malinsky, Daniel; Spirtes, Peter
2017-01-01
We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244
Wave propagation in equivalent continuums representing truss lattice materials
Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...
2015-07-29
Stiffness scales linearly with density in stretch-dominated lattice meta-materials offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long wavelength characteristics of the response such as anisotropic elastic soundspeeds. Finally, the formulation presented here also improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability by eliminating vertices in the effective yield surface.
Modification of the USLE K factor for soil erodibility assessment on calcareous soils in Iran
NASA Astrophysics Data System (ADS)
Ostovari, Yaser; Ghorbani-Dashtaki, Shoja; Bahrami, Hossein-Ali; Naderi, Mehdi; Dematte, Jose Alexandre M.; Kerry, Ruth
2016-11-01
The measurement of soil erodibility (K) in the field is tedious, time-consuming and expensive; therefore, its prediction through pedotransfer functions (PTFs) could be far less costly and time-consuming. The aim of this study was to develop new PTFs to estimate the K factor using multiple linear regression, Mamdani fuzzy inference systems, and artificial neural networks. For this purpose, K was measured in 40 erosion plots with natural rainfall. Various soil properties including the soil particle size distribution, calcium carbonate equivalent, organic matter, permeability, and wet-aggregate stability were measured. The results showed that the mean measured K was 0.014 t h MJ^-1 mm^-1, 2.08 times smaller than the estimated mean K (0.030 t h MJ^-1 mm^-1) using the USLE model. Permeability, wet-aggregate stability, very fine sand, and calcium carbonate were selected as independent variables by forward stepwise regression in order to assess the ability of multiple linear regression, Mamdani fuzzy inference systems and artificial neural networks to predict K. The calcium carbonate equivalent, which is not accounted for in the USLE model, had a significant impact on K in multiple linear regression due to its strong influence on the stability of aggregates and soil permeability. Statistical indices in validation and calibration datasets determined that the artificial neural networks method with the highest R2, lowest RMSE, and lowest ME was the best model for estimating the K factor. A strong correlation (R2 = 0.81, n = 40, p < 0.05) between the estimated K from multiple linear regression and measured K indicates that the use of calcium carbonate equivalent as a predictor variable gives a better estimation of K in areas with calcareous soils.
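A sketch of the multiple-linear-regression pedotransfer approach on synthetic data: regress K on permeability, wet-aggregate stability, very fine sand, and calcium carbonate equivalent by ordinary least squares. The data and coefficients are synthetic, not the study's measurements.

```python
import numpy as np

# Pedotransfer-function sketch: multiple linear regression of
# erodibility K on four soil predictors. Synthetic data only.

rng = np.random.default_rng(0)
n = 40                                   # one row per erosion plot
X = rng.uniform(0.0, 1.0, size=(n, 4))   # perm, WAS, VFS, CaCO3 (scaled)
true_beta = np.array([0.02, -0.015, 0.01, -0.012])   # illustrative
K = 0.014 + X @ true_beta + rng.normal(0.0, 1e-4, n)

A = np.column_stack([np.ones(n), X])     # intercept + predictors
beta_hat, *_ = np.linalg.lstsq(A, K, rcond=None)
pred = A @ beta_hat
r2 = 1 - np.sum((K - pred) ** 2) / np.sum((K - K.mean()) ** 2)
print(np.round(beta_hat, 3), round(float(r2), 3))
```

With low noise the OLS fit recovers the generating intercept and coefficients almost exactly; on field data, the same calculation is what yields the R2 values the abstract reports.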
Examining Factor Score Distributions to Determine the Nature of Latent Spaces
ERIC Educational Resources Information Center
Steinley, Douglas; McDonald, Roderick P.
2007-01-01
Similarities between latent class models with K classes and linear factor models with K-1 factors are investigated. Specifically, the mathematical equivalence between the covariance structure of the two models is discussed, and a Monte Carlo simulation is performed using generated data that represents both latent factors and latent classes with…
NASA Technical Reports Server (NTRS)
Shavers, M. R.; Poston, J. W.; Cucinotta, F. A.; Wilson, J. W.
1996-01-01
During manned space missions, high-energy nucleons of cosmic and solar origin collide with atomic nuclei of the human body and produce a broad linear energy transfer spectrum of secondary particles, called target fragments. These nuclear fragments are often more biologically harmful than the direct ionization of the incident nucleon. That these secondary particles increase tissue absorbed dose in regions adjacent to the bone-soft tissue interface was demonstrated in a previous publication. To assess radiological risks to tissue near the bone-soft tissue interface, a computer transport model for nuclear fragments produced by high energy nucleons was used in this study to calculate integral linear energy transfer spectra and dose equivalents resulting from nuclear collisions of 1-GeV protons traversing bone and red bone marrow. In terms of dose equivalent averaged over trabecular bone marrow, target fragments emitted from interactions in both tissues are predicted to be at least as important as the direct ionization of the primary protons, and twice as important if recently recommended radiation weighting factors and "worst-case" geometry are used. The use of conventional dosimetry (absorbed dose weighted by a linear energy transfer-dependent quality factor) as an appropriate framework for predicting risk from low fluences of high-linear energy transfer target fragments is discussed.
NASA Technical Reports Server (NTRS)
Haynes, Davy A.; Miller, David S.; Klein, John R.; Louie, Check M.
1988-01-01
A method by which a simple equivalent faired body can be designed to replace a more complex body with flowing inlets has been demonstrated for supersonic flow. An analytically defined, geometrically simple faired inlet forebody has been designed using a linear potential code to generate flow perturbations equivalent to those produced by a much more complex forebody with inlets. An equivalent forebody wind-tunnel model was fabricated and a test was conducted in NASA Langley Research Center's Unitary Plan Wind Tunnel. The test Mach number range was 1.60 to 2.16 for angles of attack of -4 to 16 deg. Test results indicate that, for the purposes considered here, the equivalent forebody simulates the original flowfield disturbances to an acceptable degree of accuracy.
Linear network representation of multistate models of transport.
Sandblom, J; Ring, A; Eisenman, G
1982-01-01
By introducing external driving forces in rate-theory models of transport we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. With such a formalism, the state diagram of a multioccupancy multicomponent system can be directly converted into a linear network with resistors connecting nodal (branch) points and with capacitances connecting each nodal point to a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships of linear networks lead directly to Onsager's relationships in the near equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425
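The equivalent-circuit reading can be sketched for the simplest case, a single transport cycle: each transition contributes a resistance 1/g, an external driving force enters as an emf in one branch, and the steady-state flux is the loop current given by Ohm's and Kirchhoff's laws. The conductances and emf below are illustrative.

```python
# Linear-network sketch of a rate-theory transport cycle: states are
# nodes, transitions are conductances, and the driving force is an emf.
# For a single series loop the steady-state flux is the loop current.
# All values are illustrative.

g = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 4.0}   # branch conductances
E = 1.5                                        # emf in one branch

# Kirchhoff's voltage law around the loop: flux = E / sum of resistances.
R_total = sum(1.0 / gij for gij in g.values())
flux = E / R_total
print(round(flux, 4))
```

For branched, multi-occupancy diagrams the same idea generalizes to solving the full nodal conductance matrix, which is exactly where the algebraic machinery of linear network theory pays off.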
Portfolio optimization using fuzzy linear programming
NASA Astrophysics Data System (ADS)
Pandit, Purnima K.
2013-09-01
Portfolio Optimization (PO) is a problem in Finance, in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
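One standard route to an equivalent linear form is the mean-absolute-deviation risk measure, which replaces the quadratic variance objective with a linear program; a sketch with synthetic return scenarios follows (the asset statistics and return target are illustrative assumptions, and the fuzzy-parameter extension of the paper is not modeled here).

```python
import numpy as np
from scipy.optimize import linprog

# Mean-absolute-deviation portfolio LP sketch (Konno-Yamazaki style):
# minimize the average deviation d_t subject to d_t >= |(R - mu) w|,
# full investment, no shorting, and a minimum expected return.
# Synthetic scenario returns; all numbers are illustrative.

rng = np.random.default_rng(1)
T, n = 60, 3                              # scenarios, assets
R = rng.normal([0.01, 0.02, 0.015], [0.02, 0.05, 0.03], size=(T, n))
mu = R.mean(axis=0)
target = 0.005

# Variables: [w_1..w_n, d_1..d_T]; objective is mean of deviations.
c = np.concatenate([np.zeros(n), np.ones(T) / T])
D = R - mu                                # centered scenario returns
A_ub = np.block([[D, -np.eye(T)],         #  D w - d <= 0
                 [-D, -np.eye(T)]])       # -D w - d <= 0
b_ub = np.zeros(2 * T)
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
b_ub = np.append(b_ub, -target)           # mu.w >= target
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]
b_eq = [1.0]                              # weights sum to one

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))
w = res.x[:n]
print(res.status, np.round(w, 3), round(float(mu @ w), 4))
```

The absolute value in the risk term is linearized with one auxiliary variable per scenario, which is what makes the whole problem an LP.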
Influential Nonegligible Parameters under the Search Linear Model.
1986-04-25
lack of fit as SSLOF(i) in Eq. (12) and the sum of squares due to pure error as SSPE in Eq. (13). For i = 1, 2, ..., we define F(i) as a ratio involving SSLOF(i) and SSE(i) in Eq. (14). Noting that the numerator on the RHS of the above expression does not depend on i, we get the equivalence of (a) and (b). Again, SSE(i) = SSLOF(i) + SSPE, and SSPE does not depend on i. Therefore (a) and (c) are equivalent. From (14), the equivalence of (c) and (d) is clear. From (3), (6
NASA Astrophysics Data System (ADS)
Liu, Richeng; Li, Bo; Jiang, Yujing; Yu, Liyuan
2018-01-01
Hydro-mechanical properties of rock fractures are core issues for many geoscience and geo-engineering practices. Previous experimental and numerical studies have revealed that shear processes could greatly enhance the permeability of single rock fractures, yet the shear effects on hydraulic properties of fractured rock masses have received little attention. In most previous fracture network models, single fractures are typically presumed to be formed by parallel plates and flow is presumed to obey the cubic law. However, related studies have suggested that the parallel plate model cannot realistically represent the surface characters of natural rock fractures, and the relationship between flow rate and pressure drop will no longer be linear at sufficiently large Reynolds numbers. In the present study, a numerical approach was established to assess the effects of shear on the hydraulic properties of 2-D discrete fracture networks (DFNs) in both linear and nonlinear regimes. DFNs considering fracture surface roughness and variation of aperture in space were generated using an originally developed code DFNGEN. Numerical simulations by solving Navier-Stokes equations were performed to simulate the fluid flow through these DFNs. A fracture that cuts through each model was sheared and by varying the shear and normal displacements, effects of shear on equivalent permeability and nonlinear flow characteristics of DFNs were estimated. The results show that the critical condition quantifying the transition from a linear flow regime to a nonlinear flow regime is 10^-4 < J < 10^-3, where J is the hydraulic gradient. When the fluid flow is in the linear regime (i.e., J < 10^-4), the relative deviation of equivalent permeability induced by shear, δ2, is linearly correlated with J with small variations, while for fluid flow in the nonlinear regime (J > 10^-3), δ2 is nonlinearly correlated with J.
A shear process would reduce the equivalent permeability significantly in the orientation perpendicular to the sheared fracture as much as 53.86% when J = 1, shear displacement Ds = 7 mm, and normal displacement Dn = 1 mm. By fitting the calculated results, the mathematical expression for δ2 is established to help choose proper governing equations when solving fluid flow problems in fracture networks.
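The regime boundaries quoted above can be captured in a small helper. A minimal sketch, with the two thresholds taken from the abstract; the function itself and its names are illustrative, not from the paper's code:

```python
# Classify the flow regime in a fracture network by hydraulic gradient J,
# using the critical range reported in the abstract (10^-4 < J < 10^-3).

def flow_regime(J):
    """Return the flow regime for hydraulic gradient J."""
    if J < 1e-4:
        return "linear"        # cubic-law (Darcy-like) regime
    elif J > 1e-3:
        return "nonlinear"     # inertial effects dominate
    else:
        return "transitional"  # critical range between the two regimes
```

In the linear range the authors report δ2 varying linearly with J; past the upper threshold the δ2-J relation itself becomes nonlinear, which is why the fitted expression for δ2 is needed when choosing governing equations.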
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2017-08-01
Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), based on matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments more efficiently even under strong cross-correlation. The proposed method for clustering fragments by their structural features is suitable not only for binary but also for color images, and it combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed on the basis of recursive processing algorithms and the k-average method. The experimental results confirmed that large images and 2D binary fragments with a large number of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment was carried out for an image with dimensions of 256x256 (a reference array) and fragments with dimensions of 7x7 and 21x21. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and is promising. Thus, it is important to understand the mechanisms of self-learning equivalence-convolutional clustering, the competitive processes among neurons that accompany it, and the principles of neural auto-encoding-decoding and recognition with self-learning cluster patterns, which rely on algorithms and principles of non-linear processing of two-dimensional spatial functions for image comparison.
These SI EMs can simply describe signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if it is based on the proposed equivalental two-dimensional functions of image similarity. The clustering efficiency in such models, and their implementation, depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the types of non-linear processing applied and on the function used for image comparison or for adaptive-equivalental weighting of input patterns. Real model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions makes it possible to determine the winner neurons and adjust the weight matrix. Experimental results have shown that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed, and the parameters and performance of such architectures are estimated.
There's a Green Glob in Your Classroom.
ERIC Educational Resources Information Center
Dugdale, Sharon
1983-01-01
Discusses computer games (called intrinsic models) focusing on mathematics rather than on unrelated motivations (flashing lights or sounds). Games include "Green Globs," (equations/linear functions), "Darts"/"Torpedo" (fractions), "Escape" (graphing), and "Make-a-Monster" (equivalent fractions and…
Analysis and modeling of a family of two-transistor parallel inverters
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1973-01-01
A family of five static dc-to-square-wave inverters, each employing a square-loop magnetic core in conjunction with two switching transistors, is analyzed using piecewise-linear models for the nonlinear characteristics of the transistors, diodes, and saturable-core devices. Four of the inverters are analyzed in detail for the first time. These analyses show that, by proper choice of a frame of reference, each of the five quite differently appearing inverter circuits can be described by a common equivalent circuit. This equivalent circuit consists of a five-segment nonlinear resistor, a nonlinear saturable reactor, and a linear capacitor. Thus, by proper interpretation and identification of the parameters in the different circuits, the results of a detailed solution for one of the inverter circuits provide similar information and insight into the local and global behavior of each inverter in the family.
Moment method analysis of linearly tapered slot antennas
NASA Technical Reports Server (NTRS)
Koeksal, Adnan
1993-01-01
A method of moments (MOM) model for the analysis of the Linearly Tapered Slot Antenna (LTSA) is developed and implemented. The model employs an unequal size rectangular sectioning for conducting parts of the antenna. Piecewise sinusoidal basis functions are used for the expansion of conductor current. The effect of the dielectric is incorporated in the model by using equivalent volume polarization current density and solving the equivalent problem in free-space. The feed section of the antenna including the microstripline is handled rigorously in the MOM model by including slotline short circuit and microstripline currents among the unknowns. Comparison with measurements is made to demonstrate the validity of the model for both the air case and the dielectric case. Validity of the model is also verified by extending the model to handle the analysis of the skew-plate antenna and comparing the results to those of a skew-segmentation model of the same structure and to available data in the literature. Variation of the radiation pattern for the air LTSA with length, height, and taper angle is investigated, and the results are tabulated. Numerical results for the effect of the dielectric thickness and permittivity are presented.
NASA Astrophysics Data System (ADS)
Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego
2017-04-01
In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. 
Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.
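The k-distribution speedup mentioned above can be illustrated on a synthetic absorption spectrum: instead of integrating transmittance over thousands of spectral points, the absorption coefficients are sorted into their cumulative distribution k(g) and integrated with a few quadrature points. This is a generic sketch of the method; the lognormal spectrum, path length, and 8-point quadrature are assumptions, not EPIC data:

```python
import numpy as np

# Synthetic absorption coefficient spectrum vs. wavenumber (illustrative).
rng = np.random.default_rng(2)
k_nu = np.exp(rng.normal(0.0, 2.0, 20000))
u = 0.05                                     # absorber amount (path)

# "Exact" line-by-line (LBL) band-mean transmittance.
T_lbl = np.mean(np.exp(-k_nu * u))

# k-distribution: sorted k vs. cumulative probability g, then a coarse
# Gauss-Legendre quadrature over g in [0, 1] replaces the LBL sum.
k_g = np.sort(k_nu)
g = (np.arange(k_nu.size) + 0.5) / k_nu.size
nodes, weights = np.polynomial.legendre.leggauss(8)
g_q = 0.5 * (nodes + 1.0)                    # map nodes from [-1, 1] to [0, 1]
k_q = np.interp(g_q, g, k_g)
T_kdist = 0.5 * np.sum(weights * np.exp(-k_q * u))
```

Eight evaluations stand in for 20,000, which is the same trade that makes the k-distribution plus PCA combination hundreds of times faster than LBL in the paper.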
Numerical prediction of turbulent flame stability in premixed/prevaporized (HSCT) combustors
NASA Technical Reports Server (NTRS)
Winowich, Nicholas S.
1990-01-01
A numerical analysis of combustion instabilities that induce flashback in a lean, premixed, prevaporized dump combustor is performed. KIVA-II, a finite volume CFD code for the modeling of transient, multidimensional, chemically reactive flows, serves as the principal analytical tool. The experiment of Proctor and T'ien is used as a reference for developing the computational model. An experimentally derived combustion instability mechanism is presented on the basis of the observations of Proctor and T'ien and other investigators of instabilities in low speed (M less than 0.1) dump combustors. The analysis comprises two independent procedures that begin from a calculated stable flame: the first is a linear increase of the equivalence ratio and the second is a linear decrease of the inflow velocity. The objective is to observe changes in the aerothermochemical features of the flow field prior to flashback. It was found that only the linear increase of the equivalence ratio elicits a calculated flashback result. Though this result did not exhibit large scale coherent vortices in the turbulent shear layer coincident with a flame flickering mode as was observed experimentally, there were interesting acoustic effects which were resolved quite well in the calculation. A discussion of the k-ε turbulence model used by KIVA-II is prompted by the absence of combustion instabilities in the model as the inflow velocity is linearly decreased. Finally, recommendations are made for further numerical analysis that may improve correlation with experimentally observed combustion instabilities.
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
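For a single degree of freedom, the equivalent linearization idea validated above can be made concrete. The sketch below applies Gaussian-closure equivalent linearization to a Duffing-type oscillator under white noise; the parameter values are illustrative, not the clamped-clamped beam model:

```python
import math

# Stochastic equivalent linearization for m x'' + c x' + k (x + eps x^3) = w(t),
# with w white noise of (two-sided) spectral density S0. Under Gaussian
# closure the equivalent stiffness satisfies
#   k_eq = k (1 + 3 eps sigma^2),   sigma^2 = pi S0 / (c k_eq),
# which is solved here by fixed-point iteration.

def equivalent_stiffness(k, c, eps, S0, tol=1e-10, max_iter=200):
    k_eq = k
    for _ in range(max_iter):
        sigma2 = math.pi * S0 / (c * k_eq)      # mean-square displacement
        k_new = k * (1.0 + 3.0 * eps * sigma2)  # E[d(eps x^3)/dx] closure
        if abs(k_new - k_eq) < tol:
            return k_new
        k_eq = k_new
    return k_eq

k_eq = equivalent_stiffness(k=1.0, c=0.1, eps=0.5, S0=0.001)
```

The hardening term raises the equivalent stiffness above k; comparing response statistics from this linearized system against direct numerical integration is exactly the kind of validation the abstract describes, here in miniature.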
Effects of Optical Blur Reduction on Equivalent Intrinsic Blur
Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz
2015-01-01
Purpose To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods Twelve visually normal individuals (age: 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although the difference was only marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001) and the two parameters were related linearly with a slope of 0.46. Conclusions Reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone. PMID:25785538
Effects of optical blur reduction on equivalent intrinsic blur.
Kord Valeshabad, Ali; Wanek, Justin; McAnany, J Jason; Shahidi, Mahnaz
2015-04-01
To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Twelve visually normal subjects (mean [±SD] age, 31 [±12] years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) caused by high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. σopt and σint were significantly reduced and visual acuity was significantly improved after AO correction (p ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, p ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although it was marginally significant (p = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, p < 0.001), and the two parameters were related linearly with a slope of 0.46. Reduction in equivalent intrinsic blur was greater than the reduction in optical blur after AO correction of wavefront error. This finding implies that visual acuity in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone.
Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.
NASA Astrophysics Data System (ADS)
Le, Loc Xuan
1987-09-01
A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single -machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
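The truncated-SVD least-squares step described above can be sketched generically: small singular values are discarded so that noise in an ill-conditioned identification problem does not blow up the parameter estimates. The near-rank-deficient system below is illustrative, not machine data:

```python
import numpy as np

# Ill-conditioned least-squares problem A x = b solved via truncated SVD:
# singular values below a tolerance are dropped, giving the minimum-norm
# solution on the well-conditioned subspace.
A = np.array([[1.0, 1.0],
              [1.0, 1.0000001],
              [2.0, 2.0]])          # nearly rank-1 system matrix
b = np.array([2.0, 2.0, 4.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-3 * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)   # drop tiny singular values
x = Vt.T @ (s_inv * (U.T @ b))            # minimum-norm solution
residual = np.linalg.norm(A @ x - b)
```

The ratio s[0]/s[-1] is the condition number the abstract uses to decide whether a more advanced machine model is warranted; with the truncation, the fitted parameters stay small and physically meaningful even when that ratio is huge.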
MICRO-U 70.1: Training Model of an Instructional Institution, Users Manual.
ERIC Educational Resources Information Center
Springer, Colby H.
MICRO-U is a student demand driven deterministic model. Student enrollment, by degree program, is used to develop an Instructional Work Load Matrix. Linear equations using Weekly Student Contact Hours (WSCH), Full Time Equivalent (FTE) students, FTE faculty, and number of disciplines determine library, central administration, and physical plant…
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling Percolation in Polymer Nanocomposites by Stochastic Microstructuring
Soto, Matias; Esteva, Milton; Martínez-Romero, Oscar; Baez, Jesús; Elías-Zúñiga, Alex
2015-01-01
A methodology was developed for the prediction of the electrical properties of carbon nanotube-polymer nanocomposites via Monte Carlo computational simulations. A two-dimensional microstructure that takes into account waviness, fiber length and diameter distributions is used as a representative volume element. Fiber interactions in the microstructure are identified and then modeled as an equivalent electrical circuit, assuming one-third metallic and two-thirds semiconductor nanotubes. Tunneling paths in the microstructure are also modeled as electrical resistors, and crossing fibers are accounted for by assuming a contact resistance associated with them. The equivalent resistor network is then converted into a set of linear equations using nodal voltage analysis, which is then solved by means of the Gauss–Jordan elimination method. Nodal voltages are obtained for the microstructure, from which the percolation probability, equivalent resistance and conductivity are calculated. Percolation probability curves and electrical conductivity values are compared to those found in the literature. PMID:28793594
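The nodal-voltage step can be sketched directly: assemble the conductance (Laplacian) matrix of a resistor network, ground one node, inject a current, and solve for node voltages. The three-resistor network below is illustrative, not a nanotube microstructure, and numpy's dense solver stands in for Gauss-Jordan elimination:

```python
import numpy as np

# Resistor network as (node_i, node_j, resistance) edges.
edges = [(0, 1, 100.0), (1, 2, 200.0), (0, 2, 300.0)]
n = 3
G = np.zeros((n, n))
for i, j, R in edges:
    g = 1.0 / R
    G[i, i] += g; G[j, j] += g        # conductance Laplacian assembly
    G[i, j] -= g; G[j, i] -= g

I = np.zeros(n); I[0] = 1e-3          # inject 1 mA at node 0
keep = [0, 1]                          # ground node 2: delete its row/column
v = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
R_eq = v[0] / I[0]                     # equivalent resistance, node 0 to 2
```

Here 300 Ω in parallel with the 100 + 200 Ω branch gives 150 Ω; in the Monte Carlo setting the same solve, repeated over random microstructures, yields the percolation probability and conductivity statistics.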
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
Thermospheric dynamics - A system theory approach
NASA Technical Reports Server (NTRS)
Codrescu, M.; Forbes, J. M.; Roble, R. G.
1990-01-01
A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.
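The unit-sample-response idea can be sketched with a toy linear system: the output of the full model is approximated as a convolution of the input with an identified response function. The 2-tap FIR "model" below is illustrative, not the TGCM:

```python
import numpy as np

# Toy "full model": a known 2-tap FIR system driven by a random input.
rng = np.random.default_rng(3)
h_true = np.array([0.5, 0.3])
u = rng.standard_normal(200)
y = np.convolve(u, h_true)[:200]      # y[n] = 0.5 u[n] + 0.3 u[n-1]

# Identify a 2-tap unit-sample response from (u, y) by least squares;
# the identified h then predicts outputs by convolution alone.
U = np.column_stack([u, np.concatenate([[0.0], u[:-1]])])
h_est = np.linalg.lstsq(U, y, rcond=None)[0]
```

Once h is in hand, evaluating the equivalent system is a cheap convolution, which is what lets the linearized TGCM surrogate run on a personal computer.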
Linear and nonlinear ARMA model parameter estimation using an artificial neural network
NASA Technical Reports Server (NTRS)
Chon, K. H.; Cohen, R. J.
1997-01-01
This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.
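The conventional least-squares ARMA identification that the paper compares against can be sketched for a pure AR(2) case; the coefficients and simulation are illustrative:

```python
import numpy as np

# Simulate a stationary AR(2) process y[t] = a1 y[t-1] + a2 y[t-2] + e[t].
rng = np.random.default_rng(1)
a1, a2 = 0.6, -0.2
N = 20000
y = np.zeros(N)
e = 0.01 * rng.standard_normal(N)
for t in range(2, N):
    y[t] = a1 * y[t-1] + a2 * y[t-2] + e[t]

# Regressor matrix of lagged outputs; least-squares parameter estimate.
X = np.column_stack([y[1:-1], y[:-2]])
theta = np.linalg.lstsq(X, y[2:], rcond=None)[0]
```

The paper's observation is that a feedforward network with polynomial activations, trained on the same (input, output) data, recovers these same parameters; the linear-activation special case reduces exactly to this regression.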
Couto, José Guilherme; Bravo, Isabel; Pirraco, Rui
2011-09-01
The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) in cervical cancer regarding the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. In the first phase we studied the influence of the pulse dose and the pulse time in the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD(2)) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). In the analyses of the influence of PDR parameters in the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with 1 hour interval between pulses, the pulse time did not influence significantly the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2) we concluded that they were not equivalent, because in the PDR group the total EQD(2) for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose.
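The EQD(2) conversion at the heart of this comparison follows from the Linear Quadratic Model. A minimal sketch of the fractionation formula; the doses are illustrative, and a full LDR/PDR comparison additionally needs the incomplete-repair extension of the LQM for dose-rate effects, which is not shown here:

```python
# EQD2 = D * (d + a/b) / (2 + a/b), where D is total dose, d the dose per
# fraction, and a/b the alpha/beta ratio (commonly taken as ~10 Gy for
# tumor and ~3 Gy for late-responding normal tissue such as rectum/bladder).

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 30 fractions of 1.8 Gy, tumor alpha/beta = 10 Gy: slightly below 54 Gy
# because d < 2 Gy.
tumor = eqd2(54.0, 1.8, 10.0)
```

Expressing both the LDR and PDR schedules on this common 2 Gy/fraction scale is what allows the 6% to 10% prescribed-dose correction to be read off directly.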
Bravo, Isabel; Pirraco, Rui
2011-01-01
Purpose The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) in cervical cancer regarding the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. Material and methods In the first phase we studied the influence of the pulse dose and the pulse time in the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD2) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). Results In the analyses of the influence of PDR parameters in the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with 1 hour interval between pulses, the pulse time did not influence significantly the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2) we concluded that they were not equivalent, because in the PDR group the total EQD2 for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. Conclusions A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose. PMID:23346123
Stochastic stability properties of jump linear systems
NASA Technical Reports Server (NTRS)
Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.
1992-01-01
Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
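The testable second-moment stability condition can be sketched for a discrete-time Markov jump linear system: stacking the mode-wise covariance recursions gives a single linear operator whose spectral radius decides mean-square stability. The mode matrices and transition probabilities below are illustrative:

```python
import numpy as np

# Discrete-time MJLS x_{k+1} = A_{r_k} x_k with Markov chain transition
# matrix P. The covariance recursion Q_j(k+1) = sum_i p_ij A_i Q_i(k) A_i'
# vectorizes to q(k+1) = M q(k) with M = (P' kron I) blkdiag(A_i kron A_i);
# the system is second-moment stable iff the spectral radius of M is < 1.
A = [np.array([[0.5, 0.2], [0.0, 0.4]]),   # mode 1 dynamics
     np.array([[0.9, 0.0], [0.3, 0.8]])]   # mode 2 dynamics
P = np.array([[0.7, 0.3],                  # row-stochastic transition matrix
              [0.4, 0.6]])

n = 2
blocks = [np.kron(Ai, Ai) for Ai in A]
D = np.zeros((len(A) * n * n, len(A) * n * n))
for i, B in enumerate(blocks):
    D[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = B
M = np.kron(P.T, np.eye(n * n)) @ D
ms_stable = max(abs(np.linalg.eigvals(M))) < 1.0
```

Note that each mode here is individually Schur stable and the coupled check also passes; the interesting jump-system cases are those where the verdicts differ, which this same test detects.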
Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.
2002-01-01
The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid, low-frequency, high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (> 1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω^-2 displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles together with other soil parameters are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.
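The equivalent-linear procedure named above can be sketched in one dimension: run a linear analysis, soften the modulus according to the resulting strain, and repeat until the properties stop changing. The hyperbolic modulus-reduction curve and the stress/modulus values are toy assumptions, not the Seattle soil profiles:

```python
# One-dimensional caricature of equivalent-linear site response: find the
# strain gamma consistent with an applied shear stress tau when the shear
# modulus G degrades with strain as G = G_max / (1 + gamma / gamma_ref).

def modulus_ratio(gamma, gamma_ref=1e-3):
    return 1.0 / (1.0 + gamma / gamma_ref)   # hyperbolic G/Gmax curve

def equivalent_linear_strain(tau, G_max, tol=1e-10):
    gamma = tau / G_max                       # start from the linear answer
    for _ in range(100):
        G = G_max * modulus_ratio(gamma)      # soften modulus at this strain
        gamma_new = tau / G                   # re-run the "linear" problem
        if abs(gamma_new - gamma) < tol:
            break
        gamma = gamma_new
    return gamma

gamma = equivalent_linear_strain(tau=30e3, G_max=60e6)  # Pa
```

The converged strain (here 1e-3, double the purely linear estimate) is exactly why linear site corrections over-predict strong shaking while equivalent-linear and nonlinear runs agree better with observations.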
The Shock and Vibration Digest. Volume 18, Number 7
1986-07-01
…long-term dynamic irregularity of a soluble quantum mechanical model known as the Jaynes-Cummings model (Los Alamos, NM, July 21-23, 1981). …substructure models are obtained by approximating each state space vector as a … substructure computation can be performed independently of the others … Nonlinear joint behavior is modeled by an equivalent … and rotational residual flexibilities at the interface. Data were taken in the form of …
Linear energy transfer in water phantom within SHIELD-HIT transport code
NASA Astrophysics Data System (ADS)
Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.
2017-02-01
The effect of irradiation in tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is described by an equivalent dose H, which depends on the Linear Energy Transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various types of transport codes have been used for modeling and simulation of the interaction of beams of protons and heavier ions with tissue-equivalent materials. In this presentation, we use the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.
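The relation H = K·D can be made concrete with a published quality-factor curve. The sketch below uses the ICRP 60 Q(L) function as an illustrative choice of K; the abstract does not specify which quality factor was actually applied:

```python
import math

# ICRP 60 quality factor as a function of unrestricted LET L in keV/um:
#   Q = 1              for L < 10
#   Q = 0.32 L - 2.2   for 10 <= L <= 100
#   Q = 300 / sqrt(L)  for L > 100

def quality_factor(L):
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def equivalent_dose(absorbed_dose_gray, L):
    """Equivalent dose H = Q(L) * D, in sievert for D in gray."""
    return quality_factor(L) * absorbed_dose_gray
```

Decomposing the absorbed dose by LET, as the simulation does, is what allows each dose bin to be weighted by its own Q(L) before summing to the total H.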
Panel Flutter Emulation Using a Few Concentrated Forces
NASA Astrophysics Data System (ADS)
Dhital, Kailash; Han, Jae-Hung
2018-04-01
The objective of this paper is to study the feasibility of emulating panel flutter using a few concentrated forces. The concentrated forces are considered equivalent to the aerodynamic forces; the equivalence is established using the surface spline method and the principle of virtual work. The structural modeling of the plate is based on classical plate theory, and the aerodynamic modeling is based on piston theory. The present approach differs from linear panel flutter analysis in how the modal aerodynamic forces are constructed, with the structural properties unchanged. Solutions of the flutter problem are obtained numerically using the standard eigenvalue procedure. A few concentrated forces were considered, with an optimization effort to decide their optimal locations. The optimization process minimizes the error between the flutter bounds obtained from the emulated and the linear flutter analysis methods. The emulated flutter results for a square plate with four different boundary conditions, using six concentrated forces, agree with the reference values with minimal error. The results demonstrate the feasibility of using concentrated forces to emulate real panel flutter. In addition, the paper includes parametric studies of linear panel flutter for which adequate literature is not available.
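The linear flutter eigenvalue problem referenced above has a classical two-mode form. A sketch for a simply supported panel under first-order piston theory, a textbook reduction rather than the paper's concentrated-force model:

```python
import numpy as np

# Two-mode Galerkin panel flutter: eigenvalues of K + lam*A, where K holds
# the modal stiffnesses, A the antisymmetric aerodynamic coupling from
# piston theory, and lam the nondimensional dynamic pressure. Flutter
# onset is where the two eigenvalues coalesce and turn complex.
k = np.pi ** 4 * np.array([1.0, 16.0])      # modal stiffnesses, modes 1-2
a = 8.0 / 3.0                                # mode 1-2 coupling integral

def eigs(lam):
    K = np.diag(k)
    A = np.array([[0.0, a], [-a, 0.0]])
    return np.linalg.eigvals(K + lam * A)

lam_cr = (k[1] - k[0]) / (2.0 * a)           # coalescence condition
flutter = np.iscomplex(eigs(1.01 * lam_cr)).any()
```

This two-mode reduction gives lam_cr = 45π⁴/16 ≈ 274; emulating flutter with concentrated forces amounts to reproducing the coupling matrix A, and hence this coalescence point, from a handful of point loads.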
Analog Microcontroller Model for an Energy Harvesting Round Counter
2012-07-01
densities representing the duration of ≥ for all scaled piezos ... 1 INTRODUCTION An accurate count ... limited surface area available for mounting piezos on the gun system. Figure 1. Equivalent circuit model for a piezoelectric transducer ... circuit model for the linear I-V relationships is a parallel combination of six stages, each of which is comprised of a series combination of a resistor, DC
Parallel But Not Equivalent: Challenges and Solutions for Repeated Assessment of Cognition over Time
Gross, Alden L.; Inouye, Sharon K.; Rebok, George W.; Brandt, Jason; Crane, Paul K.; Parisi, Jeanine M.; Tommet, Doug; Bandeen-Roche, Karen; Carlson, Michelle C.; Jones, Richard N.
2013-01-01
Objective Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research. Study Design and Setting The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies. Results Using visual displays, raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05). Conclusion Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms. PMID:22540849
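The equipercentile method preferred in the conclusion can be sketched in a few lines: a score on form X is mapped to the form-Y score with the same percentile rank via the empirical CDFs. The data below are simulated, not the ACTIVE or ADNI AVLT scores; with form Y uniformly 5 points harder, equating should subtract roughly 5.

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x):
    """Map a score x on form X to the form-Y score at the same
    percentile rank (empirical CDF plus linear interpolation)."""
    sx = np.sort(np.asarray(scores_x, float))
    sy = np.sort(np.asarray(scores_y, float))
    p = np.searchsorted(sx, x, side="right") / sx.size  # percentile rank on form X
    return float(np.quantile(sy, min(p, 1.0)))          # form-Y score at that rank

rng = np.random.default_rng(0)
form_x = rng.normal(50.0, 10.0, 5000)
form_y = form_x - 5.0                                   # form Y is 5 points harder
equated = equipercentile_equate(form_x, form_y, 60.0)   # approximately 55
```

Unlike mean or linear equating, this mapping can differ across the score range, which is why it accommodates forms that are harder than a reference only at some percentiles.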
Advanced analysis technique for the evaluation of linear alternators and linear motors
NASA Technical Reports Server (NTRS)
Holliday, Jeffrey C.
1995-01-01
A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.
Vibration mitigation in partially liquid-filled vessel using passive energy absorbers
NASA Astrophysics Data System (ADS)
Farid, M.; Levy, N.; Gendelman, O. V.
2017-10-01
We consider possible solutions for vibration mitigation in a reduced-order model (ROM) of a partially filled liquid tank under impulsive forcing. Such excitations may lead to strong hydraulic impacts applied to the tank inner walls. The finite stiffness of the tank walls is taken into account. In order to mitigate the dangerous internal stresses in the tank walls, we explore both linear (Tuned Mass Damper) and nonlinear (Nonlinear Energy Sink) passive vibration absorbers; the mitigation performance in both cases is examined numerically. The liquid sloshing mass is modeled by an equivalent mass-spring-dashpot system, which can both perform small-amplitude linear oscillations and hit the vessel walls. We use parameters of the equivalent mass-spring-dashpot system for the well-explored case of cylindrical tanks. The hydraulic impacts are modeled by high-power potential and dissipation functions. The critical location in the tank structure is determined, and an expression for the corresponding local mechanical stress is derived. We use a finite element approach to assess the natural frequencies for specific system parameters. Numerical evaluation criteria are suggested to determine the energy absorption performance.
NASA Technical Reports Server (NTRS)
Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert
2017-01-01
Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals, both in the direct calculation of cloud fractions and in the indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that affect the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for the retrieval of trace gases. Geometry-dependent LER (GLER) captures these effects through its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets, as they take measurements at much higher spatial and spectral resolutions. Lookup table (LUT) interpolation improves the speed of radiative transfer calculations, but its complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
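The LUT-interpolation baseline that a neural network surrogate is meant to replace can be sketched as follows. The radiance function and the two grid axes here are hypothetical stand-ins for actual radiative transfer output, chosen only to show the bilinear-interpolation mechanics.

```python
import numpy as np

# Hypothetical 2-D lookup table of a sun-normalized radiance I/F as a
# function of solar zenith angle (degrees) and surface reflectivity.
sza_grid = np.linspace(0.0, 80.0, 9)
refl_grid = np.linspace(0.0, 1.0, 11)

def radiance(sza, refl):
    """Stand-in for an expensive radiative transfer calculation."""
    return np.cos(np.radians(sza)) * (0.1 + 0.9 * refl)

table = radiance(sza_grid[:, None], refl_grid[None, :])  # precompute once

def lut_interp(sza, refl):
    """Bilinear interpolation into the precomputed table."""
    i = int(np.clip(np.searchsorted(sza_grid, sza) - 1, 0, sza_grid.size - 2))
    j = int(np.clip(np.searchsorted(refl_grid, refl) - 1, 0, refl_grid.size - 2))
    tx = (sza - sza_grid[i]) / (sza_grid[i + 1] - sza_grid[i])
    ty = (refl - refl_grid[j]) / (refl_grid[j + 1] - refl_grid[j])
    return ((1 - tx) * (1 - ty) * table[i, j] + tx * (1 - ty) * table[i + 1, j]
            + (1 - tx) * ty * table[i, j + 1] + tx * ty * table[i + 1, j + 1])
```

For a smooth function this is fast and accurate, but the interpolation error grows where the function is strongly non-linear between grid nodes, which is the regime where a trained network can do better at comparable cost.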
Investigation of Cepstrum Analysis for Seismic/Acoustic Signal Sensor Range Determination.
1981-01-01
distorted by transmission through a linear system. For example, the effect of multipath and reverberation may be modeled in terms of a signal that is ... called the short time averaged cepstrum. To derive some analytical expressions for short time average cepstrums we choose some functions of interest ... linear process applied to the time series or any equivalent time function. Repiod: Period; the amount of time required for one cycle of a time series. Saphe
2003-01-01
ambient conditions prior to testing. A masterbatch for hydrosilylation-curable model systems was prepared by combining 200 g of hexamethyldisilazane-treated ... fumed silica and 800 g of vinyl-terminated polydimethylsiloxane (equivalent weight = 4111). The masterbatch was combined with additional vinyl polymer ... followed by 10 ml of Karstedt's catalyst (10.9% Pt, 4.8 mmol Pt). The amounts of masterbatch, linear vinyl, linear hydride, and crosslinkable hydride
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
An Airborne Radar Model For Non-Uniformly Spaced Antenna Arrays
2006-03-01
Department of Defense, or the United States Government. AFIT-GE-ENG-06-58 An Airborne Radar Model For Non-Uniformly Spaced Antenna Arrays THESIS Presented ... different circular arrays, one containing 24 elements and one containing 15 elements. The circular array performance is compared to that of a 6 × 6 ... model and compared to the radar model of [5, 6, 13]. The two models are mathematically equivalent when the uniformly spaced array is linear. The two
Multi-temperature state-dependent equivalent circuit discharge model for lithium-sulfur batteries
NASA Astrophysics Data System (ADS)
Propp, Karsten; Marinescu, Monica; Auger, Daniel J.; O'Neill, Laura; Fotouhi, Abbas; Somasundaram, Karthik; Offer, Gregory J.; Minton, Geraint; Longo, Stefano; Wild, Mark; Knap, Vaclav
2016-10-01
Lithium-sulfur (Li-S) batteries are described extensively in the literature, but existing computational models aimed at scientific understanding are too complex for use in applications such as battery management. Computationally simple models are vital for exploitation. This paper proposes a non-linear state-of-charge dependent Li-S equivalent circuit network (ECN) model for a Li-S cell under discharge. Li-S batteries are fundamentally different to Li-ion batteries, and require chemistry-specific models. A new Li-S model is obtained using a 'behavioural' interpretation of the ECN model; as Li-S exhibits a 'steep' open-circuit voltage (OCV) profile at high states-of-charge, identification methods are designed to take into account OCV changes during current pulses. The prediction-error minimization technique is used. The model is parameterized from laboratory experiments using a mixed-size current pulse profile at four temperatures from 10 °C to 50 °C, giving linearized ECN parameters for a range of states-of-charge, currents and temperatures. These are used to create a nonlinear polynomial-based battery model suitable for use in a battery management system. When the model is used to predict the behaviour of a validation data set representing an automotive NEDC driving cycle, the terminal voltage predictions are judged accurate with a root mean square error of 32 mV.
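A first-order Thevenin-style ECN of the general kind described can be sketched as below: terminal voltage is the state-of-charge-dependent OCV minus the ohmic drop and one RC relaxation branch, with coulomb counting for the state of charge. The OCV polynomial and circuit parameters here are illustrative placeholders, not the identified Li-S values from the paper.

```python
import numpy as np

def simulate_ecn(current, dt, capacity_ah, ocv_poly, r0, r1, c1, soc0=1.0):
    """First-order Thevenin ECN under discharge (I > 0):
    V = OCV(soc) - I*R0 - V1, with dV1/dt = I/C1 - V1/(R1*C1)."""
    soc, v1 = soc0, 0.0
    out = []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)     # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))       # RC branch, forward-Euler step
        out.append(np.polyval(ocv_poly, soc) - i * r0 - v1)
    return np.array(out)

# Illustrative parameters only: OCV(soc) = 0.4*soc + 1.7 V, 3.4 Ah cell,
# constant 2 A discharge for one hour at 1 s resolution.
v = simulate_ecn(current=np.full(3600, 2.0), dt=1.0, capacity_ah=3.4,
                 ocv_poly=[0.4, 1.7], r0=0.03, r1=0.02, c1=800.0)
```

In the identification described in the abstract, the parameters r0, r1, c1, and the OCV curve would themselves be nonlinear functions of state of charge, current, and temperature rather than constants.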
Cardinal Equivalence of Small Number in Young Children.
ERIC Educational Resources Information Center
Kingma, J.; Roelinga, U.
1982-01-01
Children completed three types of equivalent cardination tasks which assessed the influence of different stimulus configurations (linear, linear-nonlinear, and nonlinear), and density of object spacing. Prior results reported by Siegel, Brainerd, and Gelman and Gallistel were not replicated. Implications for understanding cardination concept…
Equivalent circuit-based analysis of CMUT cell dynamics in arrays.
Oguz, H K; Atalar, Abdullah; Köymen, Hayrettin
2013-05-01
Capacitive micromachined ultrasonic transducers (CMUTs) are usually composed of large arrays of closely packed cells. In this work, we use an equivalent circuit model to analyze CMUT arrays with multiple cells. We study the effects of mutual acoustic interactions through the immersion medium caused by the pressure field generated by each cell acting upon the others. To do this, all the cells in the array are coupled through a radiation impedance matrix at their acoustic terminals. An accurate approximation for the mutual radiation impedance is defined between two circular cells, which can be used in large arrays to reduce computational complexity. Hence, a performance analysis of CMUT arrays can be accurately done with a circuit simulator. By using the proposed model, one can very rapidly obtain the linear frequency and nonlinear transient responses of arrays with an arbitrary number of CMUT cells. We performed several finite element method (FEM) simulations for arrays with small numbers of cells and showed that the results are very similar to those obtained by the equivalent circuit model.
Guo, J.; Tsang, L.; Josberger, E.G.; Wood, A.W.; Hwang, J.-N.; Lettenmaier, D.P.
2003-01-01
This paper presents an algorithm that estimates the spatial distribution and temporal evolution of snow water equivalent and snow depth based on passive remote sensing measurements. It combines the inversion of passive microwave remote sensing measurements, via dense media radiative transfer modeling results, with snow accumulation and melt model predictions to yield improved estimates of snow depth and snow water equivalent at a pixel resolution of 5 arc-min. In the inversion, snow grain size evolution is constrained by pattern matching using the local snow temperature history. The algorithm is applied to produce spatial snow maps of the Upper Rio Grande River basin in Colorado. The simulation results are compared with those of the snow accumulation and melt model and a linear regression method. A quantitative comparison with ground truth measurements from four Snowpack Telemetry (SNOTEL) sites in the basin shows that the algorithm improves the estimation of snow parameters.
A linear polarization converter with near unity efficiency in microwave regime
NASA Astrophysics Data System (ADS)
Xu, Peng; Wang, Shen-Yun; Geyi, Wen
2017-04-01
In this paper, we present a linear polarization converter operating in the reflective mode with near-unity conversion efficiency. The converter is designed in array form from a pair of orthogonally arranged three-dimensional split-loop resonators sharing a common terminal coaxial port and a continuous metallic ground slab. It converts a linearly polarized incident electromagnetic wave at resonance into its orthogonal counterpart upon reflection. The conversion mechanism is explained by an equivalent circuit model, and the conversion efficiency can be tuned by changing the impedance of the terminal port. Such a linear polarization converter has potential applications in microwave communications, remote sensing, and imaging.
Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics
NASA Astrophysics Data System (ADS)
Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.
2018-04-01
We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation and demonstrates that volcanic eruption tremor can exhibit other scalings even during the same eruption.
NASA Astrophysics Data System (ADS)
Hälg, R. A.; Besserer, J.; Boschung, M.; Mayer, S.; Lomax, A. J.; Schneider, U.
2014-05-01
In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume regarding the long term health of cancer patients. Due to the high biological effectiveness of neutrons in regards to cancer induction, small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.
Hälg, R A; Besserer, J; Boschung, M; Mayer, S; Lomax, A J; Schneider, U
2014-05-21
In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume regarding the long term health of cancer patients. Due to the high biological effectiveness of neutrons in regards to cancer induction, small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.
Space Radiation Organ Doses for Astronauts on Past and Future Missions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
2007-01-01
We review methods and data used for determining astronaut organ dose equivalents on past space missions, including Apollo, Skylab, Space Shuttle, NASA-Mir, and the International Space Station (ISS). Expectations for future lunar missions are also described. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra, or a related quantity, the lineal energy (y) spectra measured by a tissue equivalent proportional counter (TEPC). These data are used in conjunction with space radiation transport models to project the organ-specific doses used in cancer and other risk projection models. Biodosimetry data from Mir, STS, and ISS missions provide an alternative estimate of organ dose equivalents based on chromosome aberrations. The physical environments inside spacecraft are currently well understood, with errors in organ dose projections estimated as less than plus or minus 15%; however, understanding the biological risks from space radiation remains a difficult problem because of the many radiation types involved, including protons, heavy ions, and secondary neutrons, for which there are no human data to estimate risks. The accuracy of the projections of organ dose equivalents described here must be supplemented with research on the health risks of space exposure to properly assess crew safety for exploration missions.
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension, using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give the minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvres and the angle calculated for the same wheel from the Ackermann principle. For a given linear rack displacement, the corresponding angular displacements of the steering arms are determined while simultaneously ensuring the best transmission angle characteristics (i) without and (ii) with imposing a linear correlation between input and output. Numerical examples are used to illustrate the proposed method.
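The Ackermann reference angle used in the steering-error definition can be computed directly from the linkage geometry. This is a minimal sketch with a hypothetical wheelbase and track; the paper's optimization of the full linkage is not reproduced.

```python
import math

def ackermann_outer_angle(inner_deg, wheelbase, track):
    """Ideal outer-wheel angle from the Ackermann condition:
    cot(delta_outer) = cot(delta_inner) + track / wheelbase."""
    cot_outer = 1.0 / math.tan(math.radians(inner_deg)) + track / wheelbase
    return math.degrees(math.atan(1.0 / cot_outer))

def steering_error(actual_outer_deg, inner_deg, wheelbase, track):
    """Difference between the actual outer-wheel angle and the Ackermann ideal."""
    return actual_outer_deg - ackermann_outer_angle(inner_deg, wheelbase, track)

# Hypothetical geometry: 2.6 m wheelbase, 1.5 m track, inner wheel at 30 degrees
ideal = ackermann_outer_angle(30.0, 2.6, 1.5)
```

The outer wheel always turns less than the inner one, and the linkage optimization in the paper amounts to making the actual outer angle track this ideal curve over the rack travel.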
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Data-Driven Method to Estimate Nonlinear Chemical Equivalence.
Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A
2015-01-01
There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach.
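The core construction, inverting one concentration-response curve through another, can be sketched for the sigmoid (Hill) case; the parameters below are illustrative, not fitted toxicological values.

```python
def hill(c, ec50, n):
    """Sigmoid (Hill) concentration-response curve, response in (0, 1)."""
    return c**n / (ec50**n + c**n)

def hill_inverse(r, ec50, n):
    """Concentration producing response r under a Hill curve."""
    return ec50 * (r / (1.0 - r)) ** (1.0 / n)

def equivalent_concentration(c_a, ec50_a, n_a, ec50_b, n_b):
    """Concentration of chemical B producing the same response as c_a of A:
    c_b = f_B^{-1}(f_A(c_a)). Nonlinear unless the two curves are 'parallel'."""
    return hill_inverse(hill(c_a, ec50_a, n_a), ec50_b, n_b)
```

When the Hill slopes are equal (n_a = n_b), the relation collapses to the constant ratio ec50_b / ec50_a, recovering the classical linear equivalency factor; with unequal slopes the concentration-concentration relation is manifestly nonlinear at every concentration, which is the paper's central point.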
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. 
Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. 
The programs con
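The weighted least-squares minimization described above, a modified Gauss-Newton iteration with perturbation sensitivities, can be sketched as follows. This is a minimal illustration with a hypothetical exponential process model, not UCODE_2005 itself; the step-halving damping stands in for the code's more elaborate modified Gauss-Newton and double-dogleg options.

```python
import numpy as np

def gauss_newton(model, params, obs, weights, n_iter=20, h=1e-6):
    """Minimize sum_i w_i * (obs_i - model(p)_i)^2 by Gauss-Newton with
    forward-difference perturbation sensitivities and simple step halving."""
    p = np.asarray(params, float)
    w = np.asarray(weights, float)
    y = np.asarray(obs, float)
    for _ in range(n_iter):
        r = y - model(p)                            # residuals
        J = np.empty((r.size, p.size))              # sensitivity (Jacobian) matrix
        for j in range(p.size):
            dp = p.copy()
            dp[j] += h * max(1.0, abs(p[j]))        # forward-difference perturbation
            J[:, j] = (model(dp) - model(p)) / (dp[j] - p[j])
        JTW = J.T * w
        step = np.linalg.solve(JTW @ J, JTW @ r)    # weighted normal equations
        lam = 1.0                                   # halve the step if it overshoots
        while lam > 1e-6:
            rt = y - model(p + lam * step)
            if (w * rt) @ rt <= (w * r) @ r:
                break
            lam *= 0.5
        p = p + lam * step
    return p

# Hypothetical process model: exponential decay y = a * exp(-b * t)
t = np.linspace(0.0, 5.0, 20)
y_obs = 2.0 * np.exp(-0.7 * t)
est = gauss_newton(lambda p: p[0] * np.exp(-p[1] * t), [1.0, 0.3],
                   y_obs, np.ones_like(t))
```

The forward-difference Jacobian mirrors the report's caveat: perturbation sensitivities are general but less accurate than sensitivities computed by the process model itself.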
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
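The bias from ignoring censoring, which the grouped method avoids, can be demonstrated with a short simulation; the genotype coding, effect size, and censoring scheme below are illustrative assumptions, and the grouped regression itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
genotype = rng.integers(0, 3, n)                 # QTL genotype coded 0/1/2
age_onset = 60.0 - 4.0 * genotype + rng.normal(0.0, 8.0, n)
censor_age = rng.uniform(40.0, 80.0, n)          # follow-up ends at a random age
observed = np.minimum(age_onset, censor_age)     # what a censoring-blind analysis sees

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

slope_true = ols_slope(genotype, age_onset)      # recovers the simulated effect of about -4
slope_naive = ols_slope(genotype, observed)      # attenuated when censoring is ignored
```

Late-onset genotypes are censored more often, so their observed ages are pulled down more, shrinking the apparent genotype effect toward zero; this is the failure mode of the standard linear regression comparator in the abstract.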
Rigatos, Gerasimos G
2016-06-01
It is proven that the model of the p53-mdm2 protein synthesis loop is differentially flat, and using a diffeomorphism (change of state variables) proposed by differential flatness theory it is shown that the protein synthesis model can be transformed into the canonical (Brunovsky) form. This enables the design of a feedback control law that maintains the concentration of the p53 protein at the desired levels. To estimate the non-measurable elements of the state vector describing the p53-mdm2 system dynamics, the derivative-free non-linear Kalman filter is used. Moreover, to compensate for modelling uncertainties and external disturbances that affect the p53-mdm2 system, the derivative-free non-linear Kalman filter is re-designed as a disturbance observer. The filter consists of the Kalman filter recursion applied to the linearised equivalent of the protein synthesis model, together with an inverse transformation based on differential flatness theory that makes it possible to retrieve estimates of the state variables of the initial non-linear model. The proposed non-linear feedback control and perturbation compensation method for the p53-mdm2 system can result in more efficient chemotherapy schemes in which the infusion of medication is better administered.
Characterizing hydrochemical properties of springs in Taiwan based on their geological origins.
Jang, Cheng-Shin; Chen, Jui-Sheng; Lin, Yun-Bin; Liu, Chen-Wuing
2012-01-01
This study was performed to characterize the hydrochemical properties of springs in Taiwan based on their geological origins. Stepwise discriminant analysis (DA) was used to establish a linear classification model of springs using hydrochemical parameters. Two hydrochemical datasets, ion concentrations and relative proportions of equivalents per liter of major ions, were used to predict the geological origins of the springs. The results reveal that DA using relative proportions of equivalents per liter of major ions yields 95.6% correct assignment, which is superior to DA using ion concentrations. This indicates that relative proportions of equivalents of major hydrochemical parameters in spring water are more strongly associated with geological origin than ion concentrations are. Low percentages of Na(+) equivalents are common to springs emerging from acid-sulfate and neutral-sulfate igneous rock. Springs emerging from metamorphic rock show low percentages of Cl(-) equivalents and high percentages of HCO3(-) equivalents, and springs emerging from sedimentary rock exhibit high Cl(-)/SO4(2-) ratios.
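The feature set that classified best, relative proportions of equivalents per liter, is a simple transformation of ion concentrations: divide each concentration by the ion's equivalent weight and normalize. A sketch with a reduced ion list and made-up sample values, not data from the study:

```python
# Molar masses (g/mol) and charges for a few major ions
MASS = {"Na": 22.99, "Cl": 35.45, "HCO3": 61.02, "SO4": 96.06}
CHARGE = {"Na": 1, "Cl": 1, "HCO3": 1, "SO4": 2}

def equivalent_proportions(conc_mg_per_l):
    """Convert mg/L concentrations to relative proportions of meq/L."""
    meq = {ion: c / MASS[ion] * CHARGE[ion] for ion, c in conc_mg_per_l.items()}
    total = sum(meq.values())
    return {ion: v / total for ion, v in meq.items()}

props = equivalent_proportions({"Na": 46.0, "Cl": 71.0, "HCO3": 122.0, "SO4": 96.0})
print(props)
```

The proportions sum to one by construction, which removes overall dilution effects and leaves only the ionic signature that DA discriminates on.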
Shindul-Rothschild, Judith; Gregas, Matt
2013-01-01
The Affordable Care Act is modeled after Massachusetts insurance reforms enacted in 2006. A linear mixed effect model examined trends in patient turnover and nurse employment in Massachusetts, New York, and California nonfederal hospitals from 2000 to 2011. The analysis found that the rate of increase in hospital admissions was significantly higher in Massachusetts hospitals than in California (p<.001) and New York (p=.007). The rate of change in registered nurse full-time equivalent hours per patient day was significantly lower in Massachusetts than in California (p=.02) and was not different from zero. The rate of change in admissions relative to registered nurse full-time equivalent hours per patient day was significantly greater in Massachusetts than in California (p=.001) and New York (p<.01). Nurse staffing remained flat in Massachusetts despite a significant increase in hospital admissions. The implications of these findings for nurse employment and hospital utilization following the implementation of national health insurance reform are discussed.
The Prediction of Scattered Broadband Shock-Associated Noise
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.
Cotton-type and joint invariants for linear elliptic systems.
Aslam, A; Mahomed, F M
2013-01-01
Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that the Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Y; Waldron, T; Pennington, E
Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor that are equivalent to 85 Gy of I125-BT for 20 patients. The corresponding organ-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from the tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The percentage doses of the OARs relative to the tumor dose were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than for I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy for the USC and L-Q models, respectively, and can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger than with the USC model (p < 0.0274) due to the large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses are feasible to deliver within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than with the I125-BT technique. Alternative dose rates or fractionation schedules should be explored.
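The L-Q conversions used above follow the standard textbook formulas: BED = nd(1 + d/(α/β)), EQD2 = BED/(1 + 2/(α/β)), and the SFED is the positive root of the single-fraction BED equation. A minimal sketch; the α/β of 10 in the demo line is illustrative only (the abstract quotes α/β values only for the OARs), and the USC model is not reproduced.

```python
def bed_lq(n, d, ab):
    """Biologically effective dose of n fractions of d Gy (linear-quadratic)."""
    return n * d * (1.0 + d / ab)

def eqd2(bed, ab):
    """Equivalent dose delivered in 2 Gy fractions."""
    return bed / (1.0 + 2.0 / ab)

def sfed_lq(bed, ab):
    """Single fraction dose D solving D*(1 + D/ab) = BED (positive root)."""
    return (-1.0 + (1.0 + 4.0 * bed / ab) ** 0.5) * ab / 2.0

# e.g., the single fraction matching the BED of 30 x 2 Gy at alpha/beta = 10:
print(sfed_lq(bed_lq(30, 2.0, 10.0), 10.0))
```

Because the quadratic term grows with fraction size, large single fractions (> 14 Gy here) inflate L-Q BED values, which is exactly the divergence from the USC model noted in the results.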
A physics-based model of the electrical impedance of ionic polymer metal composites
NASA Astrophysics Data System (ADS)
Cha, Youngsu; Aureli, Matteo; Porfiri, Maurizio
2012-06-01
In this paper, we analyze the chemoelectrical behavior of ionic polymer metal composites (IPMCs) in the small voltage range with a novel hypothesis on the charge dynamics in proximity of the electrodes. In particular, we homogenize the microscopic properties of the interfacial region through a so-called composite layer which extends between the polymer membrane and the metal electrode. This layer accounts for the dissimilar properties of its constituents by describing the charge distribution via two species of charge carriers, that is, electrons and mobile counterions. We model the charge dynamics in the IPMC by adapting the multiphysics formulation based on the Poisson-Nernst-Planck (PNP) framework, which is enriched through an additional term to capture the electron transport in the composite layer. Under the hypothesis of small voltage input, we use the linearized PNP model to derive an equivalent IPMC circuit model with lumped elements. The equivalent model comprises a resistor connected in series with the parallel of a capacitor and a Warburg impedance element. These elements idealize the phenomena of charge build up in the double layer region and the faradaic impedance related to mass transfer, respectively. We validate the equivalent model through measurements on in-house fabricated samples addressing both IPMC step response and impedance, while assessing the influence of repeated plating cycles on the electrical properties of IPMCs. Experimental results are compared with theoretical findings to identify the equivalent circuit parameters. Findings from this study are compared with alternative impedance models proposed in the literature.
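The lumped equivalent circuit described above, a resistor in series with the parallel combination of a capacitor and a Warburg element, can be evaluated directly in the frequency domain. All element values below are illustrative placeholders, not parameters identified from the fabricated samples.

```python
import numpy as np

def ipmc_impedance(freq_hz, R=50.0, C=1e-4, sigma=200.0):
    """Impedance of R in series with (C parallel Warburg).
    R [ohm], C [F], and the Warburg coefficient sigma are illustrative."""
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    Zc = 1.0 / (1j * w * C)                 # double-layer capacitance
    Zw = sigma * (1 - 1j) / np.sqrt(w)      # Warburg (mass-transfer) element
    return R + 1.0 / (1.0 / Zc + 1.0 / Zw)

f = np.logspace(-1, 5, 7)
print(abs(ipmc_impedance(f)))
```

At high frequency the parallel branch shorts out and |Z| approaches the series resistance R; at low frequency the capacitive and diffusive terms dominate, reproducing the qualitative shape of measured IPMC impedance spectra.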
Linear decentralized systems with special structure. [for twin lift helicopters
NASA Technical Reports Server (NTRS)
Martin, C. F.
1982-01-01
Certain fundamental structures associated with linear systems having internal symmetries are outlined. It is shown that the theory of finite-dimensional algebras and their representations are closely related to such systems. It is also demonstrated that certain problems in the decentralized control of symmetric systems are equivalent to long-standing problems of linear systems theory. Even though the structure imposed arose in considering the problems of twin-lift helicopters, any large system composed of several identical intercoupled control systems can be modeled by a linear system that satisfies the constraints imposed. Internal symmetry can be exploited to yield new system-theoretic invariants and a better understanding of the way in which the underlying structure affects overall system performance.
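The exploitation of internal symmetry can be sketched for the simplest case of two identical intercoupled subsystems: a change to symmetric/antisymmetric coordinates block-diagonalizes the composite dynamics, so each half can be analysed independently. The matrices below are illustrative, not a twin-lift helicopter model.

```python
import numpy as np

# Two identical subsystems with symmetric coupling:
#   [x1']   [A0 Ac] [x1]
#   [x2'] = [Ac A0] [x2]
A0 = np.array([[0.0, 1.0], [-2.0, -0.5]])   # subsystem dynamics (illustrative)
Ac = np.array([[0.0, 0.0], [0.3, 0.1]])     # symmetric coupling (illustrative)
A = np.block([[A0, Ac], [Ac, A0]])

# Orthogonal change of basis to symmetric/antisymmetric coordinates:
#   s = (x1 + x2)/sqrt(2),  a = (x1 - x2)/sqrt(2)
I2 = np.eye(2)
T = np.block([[I2, I2], [I2, -I2]]) / np.sqrt(2)
Ad = T @ A @ T.T                             # block-diagonal: A0+Ac and A0-Ac
print(np.round(Ad, 10))
```

The two decoupled blocks, A0 + Ac and A0 - Ac, are the system-theoretic invariants the symmetry exposes: stability and control design reduce to two problems of half the size.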
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to a simpler, identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of an interrupter experiment. The final result, a reduced analog for the interrupter technique, is especially worthy of note as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial, and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamic behavior of the respiratory system in response to a quasi-step excitation by valve closure.
A MODEL FOR INTERFACE DYNAMOS IN LATE K AND EARLY M DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullan, D. J.; MacDonald, J.; Houdebine, E. R., E-mail: mullan@udel.edu
2015-09-10
Measurements of the equivalent width EW(CaK) of emission in the Ca II K line have been obtained by Houdebine et al. for stars with spectral types from dK5 to dM4. In order to explain the observed variations of EW(CaK) with spectral sub-type, we propose a quantitative model of interface dynamos in low-mass stars. Our model leads to surface field strengths B_s that turn out to be essentially linearly proportional to EW(CaK). This result is reminiscent of the Sun, where Skumanich et al. found that the intensity of CaK emission in solar active regions is linearly proportional to the local field strength.
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for achieving decoupling control of complex multiple-input multiple-output nonlinear systems, such as airplane and spacecraft models. In this work, a flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of an airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, a flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control combined with a neural network and a nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, simulation results demonstrate the effectiveness of this controller. PMID:29410680
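Feedback linearization, the first step above, can be sketched on a toy single-input system (a damped pendulum, not the longitudinal airplane model): the control input cancels the nonlinearity, leaving an equivalent linear system that an outer linear law stabilizes. All parameters and gains are illustrative.

```python
import numpy as np

a, b = 9.81, 0.2           # illustrative pendulum parameters, not the airplane model
k1, k2 = 4.0, 4.0          # gains placing the linearised closed-loop poles at -2, -2
dt = 0.001
theta, omega = 1.0, 0.0    # initial angle (rad) and angular rate

for _ in range(10000):     # 10 s of forward-Euler simulation
    v = -k1 * theta - k2 * omega                   # outer linear control
    u = a * np.sin(theta) + b * omega + v          # inversion: cancels nonlinearity
    domega = -a * np.sin(theta) - b * omega + u    # plant dynamics; equals v exactly
    theta += dt * omega
    omega += dt * domega

print(theta, omega)
```

After cancellation the closed loop obeys the linear equation theta'' = -k1*theta - k2*theta', so the state decays to the origin regardless of the original nonlinearity.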
Dong, Jing; Zhang, Zhe-chen; Zhou, Guo-liang
2015-06-01
The aim was to analyze the stress distribution in the periodontal ligament of the maxillary first molar during distal movement using nonlinear finite element analysis, to compare it with the result of linear finite element analysis, and thereby to provide biomechanical evidence for clinical application. A 3-D finite element model including a maxillary first molar, periodontal ligament, alveolar bone, cancellous bone, cortical bone, and a buccal tube was built using Mimics, Geomagic, ProE, and Ansys Workbench. The periodontal ligament was modeled as a nonlinear material and as a linear elastic material, respectively. Loads in different combinations were applied to simulate the clinical situation of distalizing the maxillary first molar. In the nonlinear finite element model there were channels of low stress in the peak distribution of the Von Mises equivalent stress and the compressive stress of the periodontal ligament. The peak Von Mises equivalent stress was lower when Mt/F minus Mr/F was approximately equal to 2. The peak compressive stress was lower when Mt/F was approximately equal to Mr/F. In the linear finite element model the stress in the periodontal ligament was higher and more abruptly distributed, and there were no channels of low stress in the peak distribution. In conclusion, there are channels in which the stress in the periodontal ligament is lower, and the low-stress condition should be satisfied by the applied M/F during the course of distalizing the maxillary first molar.
Study on static and dynamic characteristics of moving magnet linear compressors
NASA Astrophysics Data System (ADS)
Chen, N.; Tang, Y. J.; Wu, Y. N.; Chen, X.; Xu, L.
2007-09-01
With the development of high-strength NdFeB magnetic material, moving magnet linear compressors have been gradually introduced in the fields of refrigeration and cryogenic engineering, especially in Stirling and pulse tube cryocoolers. This paper presents simulation and experimental investigations on the static and dynamic characteristics of a moving magnet linear motor and a moving magnet linear compressor. Both equivalent magnetic circuits and finite element approaches have been used to model the moving magnet linear motor. Subsequently, the force and equilibrium characteristics of the linear motor have been predicted and verified by detailed static experimental analyses. In combination with a harmonic analysis, experimental investigations were conducted on a prototype of a moving magnet linear compressor. A voltage-stroke relationship, the effect of charging pressure on the performance and dynamic frequency response characteristics are investigated. Finally, the method to identify optimal points of the linear compressor has been described, which is indispensable to the design and operation of moving magnet linear compressors.
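The dynamic frequency response investigated above can be sketched with the usual lumped model of a linear compressor's moving assembly, a driven mass-spring-damper; all parameter values below are illustrative, not the prototype's.

```python
import numpy as np

def stroke_amplitude(f_hz, F0=30.0, m=0.4, k=1.5e4, c=8.0):
    """Steady-state stroke |X| of m*x'' + c*x' + k*x = F0*sin(w*t).
    Motor force F0, moving mass m, spring stiffness k, and damping c
    are made-up values, not measured compressor data."""
    w = 2 * np.pi * np.asarray(f_hz, dtype=float)
    return F0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

f_res = np.sqrt(1.5e4 / 0.4) / (2 * np.pi)   # undamped natural frequency, ~30.8 Hz
print(f_res, stroke_amplitude(f_res))
```

Sweeping the drive frequency through f_res reproduces the resonant stroke peak that the optimal-operating-point identification exploits.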
Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn
2013-09-06
The purpose of the study was to investigate the use of the equivalent square formula for determining the surface dose from a rectangular photon beam. A 6 MV therapeutic photon beam delivered from a Varian Clinac 23EX medical linear accelerator was modeled using the EGS4nrc Monte Carlo simulation package. It was then used to calculate the dose in the build-up region from both square and rectangular fields. The field patterns were defined by various settings of the X- and Y-collimator jaws ranging from 5 to 20 cm. Dose measurements were performed using a thermoluminescence dosimeter and a Markus parallel-plate ionization chamber on the four square fields (5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm²). The surface dose was acquired by extrapolating the build-up doses to the surface. An equivalent square for a rectangular field was determined using the area-to-perimeter formula, and the surface dose of the equivalent square was estimated using the square-field data. The surface dose of the square field increased approximately linearly from 10% to 28% as the side of the square field increased from 5 to 20 cm. The influence of collimator exchange on the surface dose was found to be insignificant. The difference in the percentage surface dose of a rectangular field compared to that of its equivalent square was insignificant and can be clinically neglected. The use of the area-to-perimeter formula for an equivalent square field can provide a clinically acceptable surface dose estimation for a rectangular field from a 6 MV therapy photon beam.
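The area-to-perimeter rule, side = 4A/P = 2xy/(x + y), plus a linear interpolation through the reported square-field surface doses (about 10% at 5 cm up to 28% at 20 cm) gives a quick numeric sketch of the procedure; the interpolation is a rough illustration, not the measured build-up data.

```python
def equivalent_square_side(x_cm, y_cm):
    """Area-to-perimeter rule: side = 4*A/P = 2*x*y/(x + y)."""
    return 2.0 * x_cm * y_cm / (x_cm + y_cm)

def surface_dose_percent(side_cm):
    """Linear interpolation through the reported square-field values
    (about 10% at 5 cm to 28% at 20 cm); a rough illustration only."""
    return 10.0 + (28.0 - 10.0) * (side_cm - 5.0) / 15.0

side = equivalent_square_side(5.0, 20.0)   # 8.0 cm
print(side, surface_dose_percent(side))
```

A square field maps to its own side, and the collimator-exchange effect is ignored here, consistent with the finding above that it is not significant.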
Higgs potential from derivative interactions
NASA Astrophysics Data System (ADS)
Quadri, A.
2017-06-01
A formulation of the linear σ model with derivative interactions is studied. The classical theory is on-shell equivalent to the σ model with the standard quartic Higgs potential. The mass of the scalar mode only appears in the quadratic part and not in the interaction vertices, unlike in the ordinary formulation of the theory. Renormalization of the model is discussed. A nonpower-counting renormalizable extension, obeying the defining functional identities of the theory, is presented. This extension is physically equivalent to the tree-level inclusion of a dimension-six effective operator ∂μ(Φ†Φ)∂μ(Φ†Φ). The resulting UV divergences are arranged in a perturbation series around the power-counting renormalizable theory. The application of the formalism to the Standard Model in the presence of the dimension-six operator ∂μ(Φ†Φ)∂μ(Φ†Φ) is discussed.
Cyclic Plasticity Constitutive Model for Uniaxial Ratcheting Behavior of AZ31B Magnesium Alloy
NASA Astrophysics Data System (ADS)
Lin, Y. C.; Liu, Zheng-Hua; Chen, Xiao-Min; Long, Zhi-Li
2015-05-01
Investigating the ratcheting behavior of magnesium alloys is significant for the structure's reliable design. The uniaxial ratcheting behavior of AZ31B magnesium alloy is studied by the asymmetric cyclic stress-controlled experiments at room temperature. A modified kinematic hardening model is established to describe the uniaxial ratcheting behavior of the studied alloy. In the modified model, the material parameter m i is improved as an exponential function of the maximum equivalent stress. The modified model can be used to predict the ratcheting strain evolution of the studied alloy under the single-step and multi-step asymmetric stress-controlled cyclic loadings. Additionally, due to the significant effect of twinning on the plastic deformation of magnesium alloy, the relationship between the material parameter m i and the linear density of twins is discussed. It is found that there is a linear relationship between the material parameter m i and the linear density of twins induced by the cyclic loadings.
Estimating linear-nonlinear models using Rényi divergences
Kouh, Minjoon; Sharpee, Tatyana O.
2009-01-01
This paper compares a family of methods for characterizing neural feature selectivity using natural stimuli in the framework of the linear-nonlinear model. In this model, the spike probability depends in a nonlinear way on a small number of stimulus dimensions. The relevant stimulus dimensions can be found by optimizing a Rényi divergence that quantifies a change in the stimulus distribution associated with the arrival of single spikes. Generally, good reconstructions can be obtained based on optimization of Rényi divergence of any order, even in the limit of small numbers of spikes. However, the smallest error is obtained when the Rényi divergence of order 1 is optimized. This type of optimization is equivalent to information maximization, and is shown to saturate the Cramér-Rao bound describing the smallest error allowed for any unbiased method. We also discuss conditions under which information maximization provides a convenient way to perform maximum likelihood estimation of linear-nonlinear models from neural data. PMID:19568981
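The linear-nonlinear setup itself can be sketched with a simpler moment-based estimator, the spike-triggered average, which recovers the relevant dimension for a monotone nonlinearity under spherically symmetric Gaussian stimuli; the Rényi-divergence optimization of the paper is more general (it also handles natural, non-Gaussian stimuli). The filter, nonlinearity, and stimulus statistics below are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n = 20, 100000
k = rng.normal(size=dim)                # true relevant stimulus dimension
k /= np.linalg.norm(k)

stim = rng.normal(size=(n, dim))        # spherical Gaussian stimuli
proj = stim @ k                         # projection onto the relevant dimension
p_spike = 1.0 / (1.0 + np.exp(-(proj - 1.0)))   # LN model: sigmoidal nonlinearity
spikes = rng.random(n) < p_spike

sta = stim[spikes].mean(axis=0)         # spike-triggered average
sta /= np.linalg.norm(sta)
print(abs(sta @ k))                     # cosine similarity with the true filter
```

With enough spikes the recovered direction aligns almost perfectly with k; divergence-based estimators trade this simplicity for robustness to stimulus correlations.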
Mixed effect Poisson log-linear models for clinical and epidemiological sleep hypnogram data
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian; Punjabi, Naresh M.
2013-01-01
Bayesian Poisson log-linear multilevel models scalable to epidemiological studies are proposed to investigate population variability in sleep state transition rates. Hierarchical random effects are used to account for pairings of subjects and repeated measures within those subjects, as comparing diseased to non-diseased subjects while minimizing bias is of importance. Essentially, non-parametric piecewise constant hazards are estimated and smoothed, allowing for time-varying covariates and segment of the night comparisons. The Bayesian Poisson regression is justified through a re-derivation of a classical algebraic likelihood equivalence of Poisson regression with a log(time) offset and survival regression assuming exponentially distributed survival times. Such re-derivation allows synthesis of two methods currently used to analyze sleep transition phenomena: stratified multi-state proportional hazards models and log-linear models with GEE for transition counts. An example data set from the Sleep Heart Health Study is analyzed. Supplementary material includes the analyzed data set as well as the code for a reproducible analysis. PMID:22241689
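The likelihood equivalence invoked above can be checked numerically: an intercept-only Poisson log-linear model with a log(time) offset and the exponential survival MLE give the identical hazard estimate. The data below are synthetic sojourn times, not the Sleep Heart Health Study data.

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.exponential(2.0, 500)               # true sojourn times in a sleep state
c = rng.exponential(2.0, 500)               # censoring times
time = np.minimum(t, c)                     # observed exposure
event = (t <= c).astype(int)                # 1 = transition observed

# Exponential survival MLE: hazard = events / total exposure time
rate_surv = event.sum() / time.sum()

# Intercept-only Poisson log-linear model with offset log(time):
# maximise l(b0) = sum(y*(b0 + log t)) - sum(t*exp(b0)) by Newton iteration
b0 = 0.0
for _ in range(50):
    grad = event.sum() - np.exp(b0) * time.sum()
    hess = -np.exp(b0) * time.sum()
    b0 -= grad / hess
rate_pois = np.exp(b0)

print(rate_surv, rate_pois)
```

The Newton iterate converges to exp(b0) = sum(y)/sum(t), the same closed form as the survival estimator, which is the algebraic equivalence the Bayesian model rests on.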
Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, thus prompting necessary corrections/modifications to the models, both mathematical and, if necessary, physical.
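Steps (i)-(iv) can be sketched in a few lines of numpy, with a hand-made two-unknown system and an arbitrary residual threshold standing in for the algorithm's inconsistency index (the paper's actual index is not reproduced here).

```python
import numpy as np

# Illustrative system: rows 0-2 are mutually consistent (x = [1, 2]),
# row 3 duplicates row 0 (redundant), row 4 contradicts row 2.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 1.0, 9.0])

# (i) prune numerically redundant rows of the augmented matrix
Ab = np.column_stack([A, b])
_, keep = np.unique(np.round(Ab, 12), axis=0, return_index=True)
A1, b1 = A[sorted(keep)], b[sorted(keep)]

# (ii) flag equations with large residual against the min-norm LS solution
x = np.linalg.pinv(A1) @ b1
resid = np.abs(A1 @ x - b1)
contradictory = resid > 3.0            # illustrative inconsistency threshold

# (iii)-(iv) drop the contradictory equations and re-solve
x_clean = np.linalg.pinv(A1[~contradictory]) @ b1[~contradictory]
print(x_clean)
```

Here only the b = 9 row exceeds the threshold; removing it restores the exact solution [1, 2] of the consistent subsystem.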
A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.
Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L
2014-01-01
We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
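For reference, the plain between-groups d that the single-case statistic is designed to match looks like this; the corrections of Shadish et al. for small samples and within-case autocorrelation are deliberately not reproduced here.

```python
import math

def cohens_d(a, b):
    """Plain between-groups standardised mean difference with pooled SD.
    The single-case d described above adds small-sample and
    autocorrelation corrections that this sketch omits."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (mb - ma) / sp

# e.g., baseline (A) vs treatment (B) observations from one hypothetical case
print(cohens_d([3, 4, 5, 4], [6, 7, 8, 7]))
```

Because the result is in pooled-SD units, effects computed this way can be pooled across studies with different outcome measures, which is the property the single-case d preserves.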
Aeroelastic Stability of Rotor Blades Using Finite Element Analysis
NASA Technical Reports Server (NTRS)
Chopra, I.; Sivaneri, N.
1982-01-01
The flutter stability of flap bending, lead-lag bending, and torsion of helicopter rotor blades in hover is investigated using a finite element formulation based on Hamilton's principle. The blade is divided into a number of finite elements. Quasi-steady strip theory is used to evaluate the aerodynamic loads. The nonlinear equations of motion are solved for steady-state blade deflections through an iterative procedure. The equations of motion are linearized assuming blade motion to be a small perturbation about the steady deflected shape. The normal mode method based on the coupled rotating natural modes is used to reduce the number of equations in the flutter analysis. First the formulation is applied to single-load-path blades (articulated and hingeless blades). Numerical results show very good agreement with existing results obtained using the modal approach. The second part of the application concerns multiple-load-path blades, i.e. bearingless blades. Numerical results are presented for several analytical models of the bearingless blade. Results are also obtained using an equivalent beam approach wherein a bearingless blade is modelled as a single beam with equivalent properties. Results show the equivalent beam model.
[From clinical judgment to linear regression model].
Palacios-Cruz, Lino; Pérez, Marcela; Rivas-Ruiz, Rodolfo; Talavera, Juan O
2013-01-01
When we think about mathematical models, such as the linear regression model, we assume that these terms are only used by those engaged in research, a notion that is far from the truth. Legendre described the first mathematical model in 1805, and Galton introduced the formal term in 1886. Linear regression is one of the most commonly used regression models in clinical practice. It is useful for predicting or showing the relationship between two or more variables, as long as the dependent variable is quantitative and normally distributed. Stated another way, regression is used to predict a measure based on the knowledge of at least one other variable. Linear regression has as its first objective to determine the slope or inclination of the regression line: Y = a + bx, where "a" is the intercept or regression constant, equivalent to the value of "Y" when "X" equals 0, and "b" (also called the slope) indicates the increase or decrease that occurs when the variable "x" increases or decreases by one unit. In the regression line, "b" is called the regression coefficient. The coefficient of determination (R²) indicates the importance of the independent variables in the outcome.
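A short numeric illustration of Y = a + bX and R², with made-up clinical-style data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # predictor (e.g., dose level)
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])    # outcome, roughly y = 2x

b, a = np.polyfit(x, y, 1)                  # slope b and intercept a of Y = a + bX
y_hat = a + b * x
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(a, b, r2)
```

Here b says the outcome rises by about 1.99 units per unit of x, a is the predicted outcome at x = 0, and R² close to 1 indicates the predictor explains nearly all the variation.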
Color Sparse Representations for Image Processing: Review, Models, and Prospects.
Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I
2015-11-01
Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is given here, detailing the differences between the models and comparing their results on real and simulated data. These models are considered in a unifying framework based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this framework allows it to be shown that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced that uses unconstrained filters. In this model, the spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size, but by the color filters, which gives an efficient color representation.
The Short Form 36 English and Chinese versions were equivalent in a multiethnic Asian population.
Tan, Maudrene L S; Wee, Hwee-Lin; Lee, Jeannette; Ma, Stefan; Heng, Derrick; Tai, E-Shyong; Thumboo, Julian
2013-07-01
The primary aim of this article was to evaluate the measurement equivalence of the English and Chinese versions of the Short Form 36 version 2 (SF-36v2) and Short Form 6D (SF-6D). In this cross-sectional study, health-related quality of life (HRQoL) was measured in 4,973 ethnic Chinese subjects using the SF-36v2 questionnaire. Measurement equivalence of the domain and utility scores of the English- and Chinese-language SF-36v2 and SF-6D was assessed by examining the score differences between the two languages using linear regression models, with and without adjustment for known determinants of HRQoL. Equivalence was achieved if the 90% confidence interval (CI) of the difference in scores due to language fell within a predefined equivalence margin. Compared with English-speaking Chinese, Chinese-speaking Chinese were significantly older (55.5 vs. 47.6 years). All SF-36v2 domains were equivalent after adjusting for known determinants of HRQoL. The 90% CIs for the SF-6D utility/items either fully or partially overlapped their predefined equivalence margins. The English- and Chinese-language versions of the SF-36v2 and SF-6D demonstrated equivalence. Copyright © 2013 Elsevier Inc. All rights reserved.
Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-01-01
Equivalent fields are frequently used for central-axis depth-dose calculations of rectangular and irregularly shaped photon beams. As most of the proposed models for calculating the equivalent square field are dosimetry based, a simple physically based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the squares equivalent to rectangular fields was constructed and then compared with the well-known tables of BJR and of Venselaar et al., with average relative errors of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and for their equivalent squares on a Siemens Primus Plus linear accelerator at both energies, 6 and 18 MV. The mean relative difference in the measured PDDs between these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by American Association of Medical Dosimetrists. All rights reserved.
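The abstract does not state the formula used, but a common physically based rule equates fields with the same area-to-perimeter ratio (side = 4A/P); for an a × b rectangle this reduces to the one-liner below. This is an illustrative assumption, not necessarily the authors' exact method.

```python
def equivalent_square_side(a, b):
    """Side s of the square field equivalent to an a x b rectangular field
    under the area-to-perimeter rule: s = 4*A/P = 4*(a*b) / (2*(a+b))."""
    return 2.0 * a * b / (a + b)

# Example: a 10 cm x 20 cm rectangular field.
side = equivalent_square_side(10.0, 20.0)  # ~13.3 cm
```

A sanity check of the rule: a square field is, as expected, its own equivalent square.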
Classes of Split-Plot Response Surface Designs for Equivalent Estimation
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2006-01-01
When planning an experimental investigation, we are frequently faced with factors that are difficult or time-consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy to change, and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted-randomization context. In this paper, we propose classes of split-plot response surface designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented, enabling design construction strategies that transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
Farney, Robert J.; Walker, Brandon S.; Farney, Robert M.; Snow, Gregory L.; Walker, James M.
2011-01-01
Background: Various models and questionnaires have been developed for screening specific populations for obstructive sleep apnea (OSA) as defined by the apnea/hypopnea index (AHI); however, almost every method is based upon dichotomizing a population, and none functions ideally. We evaluated the possibility of using the STOP-Bang model (SBM) to classify the severity of OSA into 4 categories ranging from none to severe. Methods: Anthropometric data and the presence of snoring, tiredness/sleepiness, observed apneas, and hypertension were collected from 1426 patients who underwent diagnostic polysomnography. Questionnaire data for each patient were converted to the STOP-Bang equivalent with an ordinal rating of 0 to 8. Proportional odds logistic regression analysis was conducted to predict the severity of sleep apnea based upon the AHI: none (AHI < 5/h), mild (AHI ≥ 5 to < 15/h), moderate (AHI ≥ 15 to < 30/h), and severe (AHI ≥ 30/h). Results: Linear, curvilinear, and weighted models (R2 = 0.245, 0.251, and 0.269, respectively) were developed that predicted AHI severity. The linear model showed a progressive increase in the probability of severe OSA (4.4% to 81.9%) and a progressive decrease in the probability of none (52.5% to 1.1%). The probability of mild or moderate OSA initially increased from 32.9% and 10.3%, respectively (SBM score 0), to 39.3% (SBM score 2) and 31.8% (SBM score 4), after which the probabilities progressively decreased as more patients fell into the severe category. Conclusions: The STOP-Bang model may be useful to categorize OSA severity, triage patients for diagnostic evaluation, or exclude patients from harm. Citation: Farney RJ; Walker BS; Farney RM; Snow GL; Walker JM. The STOP-Bang equivalent model and prediction of severity of obstructive sleep apnea: relation to polysomnographic measurements of the apnea/hypopnea index. J Clin Sleep Med 2011;7(5):459-465. PMID:22003340
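The ordinal 0-8 rating the study uses is a simple count of positive items. A sketch of that scoring is below; the thresholds follow the commonly published STOP-Bang cutoffs (BMI > 35 kg/m², age > 50 y, neck circumference > 40 cm, male sex), which may differ in detail from the paper's exact conversion of questionnaire data.

```python
def stop_bang_score(snoring, tiredness, observed_apnea, hypertension,
                    bmi, age, neck_cm, male):
    """Ordinal STOP-Bang score, 0 to 8: one point per positive item.
    The four 'Bang' thresholds are the commonly published cutoffs and are
    assumptions here, not values taken from the paper."""
    items = [
        bool(snoring),            # S: loud snoring
        bool(tiredness),          # T: daytime tiredness/sleepiness
        bool(observed_apnea),     # O: observed apneas
        bool(hypertension),       # P: blood pressure treated/elevated
        bmi > 35.0,               # B: body mass index > 35 kg/m^2
        age > 50,                 # A: age > 50 years
        neck_cm > 40.0,           # N: neck circumference > 40 cm
        bool(male),               # G: male sex
    ]
    return sum(items)

score = stop_bang_score(True, False, False, True, 36.0, 45, 41.0, True)
```

The score would then feed the proportional odds model as an ordinal predictor of the four AHI severity categories.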
Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A
2015-01-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcome some of the numerical difficulties that arise during the global optimization task. PMID:21867520
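As a toy illustration of the recasting idea (not the authors' algorithm), a saturable Michaelis-Menten-type rate law becomes an exact product of power laws, as GMA requires, once an auxiliary variable z = Km + S is introduced with dz/dt = dS/dt:

```python
import numpy as np

Vmax, Km = 2.0, 0.5  # illustrative kinetic constants

def v_saturable(S):
    # Original saturable (Michaelis-Menten-type) rate law.
    return Vmax * S / (Km + S)

def v_gma(S, z):
    # Recast GMA form: a pure product of power laws, v = Vmax * S^1 * z^-1,
    # where the auxiliary variable z = Km + S obeys dz/dt = dS/dt.
    return Vmax * S * z ** -1.0

S = np.linspace(0.1, 10.0, 50)
max_err = np.max(np.abs(v_saturable(S) - v_gma(S, Km + S)))
```

The recast system is exactly equivalent to the original, which is what makes the global optimization results transposable back to the non-linear problem.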
Soil amplification with a strong impedance contrast: Boston, Massachusetts
Baise, Laurie G.; Kaklamanos, James; Berry, Bradford M; Thompson, Eric M.
2016-01-01
In this study, we evaluate the effect of strong sediment/bedrock impedance contrasts on soil amplification in Boston, Massachusetts, for typical sites along the Charles and Mystic Rivers. These sites can be characterized by artificial fill overlying marine sediments overlying glacial till and bedrock, where the depth to bedrock ranges from 20 to 80 m. The marine sediments generally consist of organic silts, sand, and Boston Blue Clay. We chose these sites because they represent typical foundation conditions in the city of Boston, and the soil conditions are similar to other high impedance contrast environments. The sediment/bedrock interface in this region results in an impedance ratio on the order of ten, which in turn results in a significant amplification of the ground motion. Using stratigraphic information derived from numerous boreholes across the region paired with geologic and geomorphologic constraints, we develop a depth-to-bedrock model for the greater Boston region. Using shear-wave velocity profiles from 30 locations, we develop average velocity profiles for sites mapped as artificial fill, glaciofluvial deposits, and bedrock. By pairing the depth-to-bedrock model with the surficial geology and the average shear-wave velocity profiles, we can predict soil amplification in Boston. We compare linear and equivalent-linear site response predictions for a soil layer of varying thickness over bedrock, and assess the effects of varying the bedrock shear-wave velocity (VSb) and quality factor (Q). In a moderate seismicity region like Boston, many earthquakes will result in ground motions that can be modeled with linear site response methods. We also assess the effect of bedrock depth on soil amplification for a generic soil profile in artificial fill, using both linear and equivalent-linear site response models. 
Finally, we assess the accuracy of the model results by comparing the predicted (linear site response) and observed site response at the Northeastern University (NEU) vertical seismometer array during the 2011 M 5.8 Mineral, Virginia, earthquake. Site response at the NEU vertical array results in amplification on the order of 10 times at a period between 0.7-0.8 s. The results from this study provide evidence that the mean short-period and mean intermediate-period amplification used in design codes (i.e., from the Fa and Fv site coefficients) may underpredict soil amplification in strong impedance contrast environments such as Boston.
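The role of the impedance ratio in the amplifications described above can be sketched with the textbook linear transfer function for one uniform, undamped soil layer over elastic bedrock; the parameters below are generic illustrations, not the paper's Boston profiles, and real equivalent-linear analyses add damping and layering.

```python
import numpy as np

# Illustrative single-layer site (assumed values, not from the paper):
vs_soil, rho_soil = 200.0, 1800.0   # shear-wave velocity (m/s), density (kg/m^3)
vs_rock, rho_rock = 2000.0, 2600.0  # stiff bedrock
H = 40.0                            # soil thickness (m)

# Soil/rock impedance ratio (small alpha = strong contrast).
alpha = (rho_soil * vs_soil) / (rho_rock * vs_rock)

def amplification(f):
    """Undamped surface/rock-outcrop amplification for vertically propagating
    SH waves through one uniform soil layer over elastic bedrock:
    |A(f)| = 1 / sqrt(cos^2(kH) + alpha^2 sin^2(kH)), kH = 2*pi*f*H/Vs."""
    kH = 2.0 * np.pi * f * H / vs_soil
    return 1.0 / np.sqrt(np.cos(kH) ** 2 + alpha ** 2 * np.sin(kH) ** 2)

f0 = vs_soil / (4.0 * H)   # fundamental site frequency (Hz)
peak = amplification(f0)   # equals 1/alpha in the undamped case
```

With these numbers the undamped peak amplification is 1/alpha (about 14), which is why a sediment/bedrock impedance ratio of order ten produces the large amplifications reported for Boston.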
Prediction of nonlinear soil effects
Hartzell, S.; Bonilla, L.F.; Williams, R.A.
2004-01-01
Mathematical models of soil nonlinearity in common use and recently developed nonlinear codes are compared to investigate the range of their predictions. We consider equivalent linear formulations with and without frequency-dependent moduli and damping ratios, and nonlinear formulations for total and effective stress. Average velocity profiles to 150 m depth with midrange National Earthquake Hazards Reduction Program site classifications (B, BC, C, D, and E) in the top 30 m are used to compare the response of a wide range of site conditions from rock to soft soil. Nonlinear soil models are compared using the amplification spectrum, calculated as the ratio of surface ground motion to the input motion at the base of the velocity profile. Peak input motions from 0.1g to 0.9g are considered. For site class B, no significant differences exist between the models considered in this article. For site classes BC and C, differences are small at low input motions (0.1g to 0.2g) but become significant at higher input levels. For site classes D and E, the overdamping of frequencies above about 4 Hz by the equivalent linear solution with frequency-independent parameters is apparent for the entire range of input motions considered. The equivalent linear formulation with frequency-dependent moduli and damping ratios underdamps, relative to the nonlinear models considered, for site class C at larger input motions and at most input levels for site classes D and E. At larger input motions the underdamping for site classes D and E is not as severe as the overdamping with the frequency-independent formulation, but there are still significant differences in the time domain. A nonlinear formulation is recommended for site classes D and E, and for site classes BC and C with input motions greater than a few tenths of the acceleration of gravity. The type of nonlinear formulation to use is driven by considerations of the importance of water content and the availability of laboratory soils data.
Our average amplification curves from a nonlinear effective stress formulation compare favorably with observed spectral amplification at class D and E sites in the Seattle area for the 2001 Nisqually earthquake.
NASA Astrophysics Data System (ADS)
Bisdom, Kevin; Bertotti, Giovanni; Nick, Hamidreza M.
2016-05-01
Predicting equivalent permeability in fractured reservoirs requires an understanding of the fracture network geometry and apertures. There are different methods for defining aperture, based on outcrop observations (power law scaling), fundamental mechanics (sublinear length-aperture scaling), and experiments (Barton-Bandis conductive shearing). Each method predicts heterogeneous apertures, even along single fractures (i.e., intrafracture variations), but most fractured reservoir models imply constant apertures for single fractures. We compare the relative differences in aperture and permeability predicted by three aperture methods, where permeability is modeled in explicit fracture networks with coupled fracture-matrix flow. Aperture varies along single fractures, and geomechanical relations are used to identify which fractures are critically stressed. The aperture models are applied to real-world large-scale fracture networks. (Sub)linear length scaling predicts the largest average aperture and equivalent permeability. Barton-Bandis aperture is smaller, predicting on average a sixfold increase compared to matrix permeability. Application of critical stress criteria results in a decrease in the fraction of open fractures. For the applied stress conditions, Coulomb predicts that 50% of the network is critically stressed, compared to 80% for Barton-Bandis peak shear. The impact of the fracture network on equivalent permeability depends on the matrix hydraulic properties, as in a low-permeable matrix, intrafracture connectivity, i.e., the opening along a single fracture, controls equivalent permeability, whereas for a more permeable matrix, absolute apertures have a larger impact. Quantification of fracture flow regimes using only the ratio of fracture versus matrix permeability is insufficient, as these regimes also depend on aperture variations within fractures.
Manimaran, S
2007-06-01
The aim of this study was to compare the biological equivalence of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy in terms of the more recent linear-quadratic (LQ) model, which leads to a theoretical estimation of biological equivalence. One of the key features of the LQ model is that it allows a more systematic radiobiological comparison between different types of treatment, because its main parameters, α/β and μ, are tissue-specific. Such comparisons also allow assessment of the likely change in the therapeutic ratio when switching between LDR and HDR treatments. The main application of LQ methodology, spurred by the increasing availability of remote afterloading units, has been to design fractionated HDR treatments that can replace existing LDR techniques. At the experimental level, with LDR treatments (39 Gy in 48 h) equivalent to 11 fractions of HDR irradiation, there are increasing reports of reproducible animal models that may be used to investigate the biological basis of brachytherapy and to help confirm theoretical predictions. This is a timely development owing to the lack of sufficient retrospective patient data for analysis. It appears that HDR brachytherapy is likely to be a viable alternative to LDR only if it can be delivered without a prohibitively large number of fractions (e.g., fewer than 11). With increased scientific understanding and technological capability, the prospect of dose-equivalent HDR brachytherapy will allow greater utilization of the concepts discussed in this article.
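The LQ bookkeeping behind such comparisons can be sketched with the biologically effective dose (BED) for acute fractions, BED = n·d·(1 + d/(α/β)); the parameter values below are illustrative, and the continuous LDR case additionally needs an incomplete-repair (dose-rate) factor not shown here.

```python
def bed_fractionated(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n acute fractions of size d (Gy),
    basic LQ model without repopulation: BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1.0 + d / alpha_beta)

def isoeffective_dose_per_fraction(target_bed, n, alpha_beta):
    """Solve BED = n*d*(1 + d/ab) for the positive root d:
    (n/ab)*d^2 + n*d - BED = 0."""
    a, b, c = n / alpha_beta, float(n), -float(target_bed)
    return (-b + (b * b - 4.0 * a * c) ** 0.5) / (2.0 * a)

# Illustrative HDR schedule: 11 fractions of 3 Gy with alpha/beta = 10 Gy.
bed = bed_fractionated(11, 3.0, 10.0)                 # 42.9 Gy
d_back = isoeffective_dose_per_fraction(bed, 11, 10.0)  # recovers 3.0 Gy
```

Matching BED values between an LDR schedule and a candidate HDR fractionation is the kind of equivalence calculation the abstract refers to.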
Study on Standard Fatigue Vehicle Load Model
NASA Astrophysics Data System (ADS)
Huang, H. Y.; Zhang, J. P.; Li, Y. H.
2018-02-01
Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model, applicable to areas in the middle and late stages of industrialization, was obtained using the equivalent-damage principle, Miner's linear cumulative damage law, the water discharge method, and damage-ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability. It is of reference value for the fatigue design of bridges in China.
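The equivalent-damage machinery cited above can be sketched generically: Miner's rule sums fractional damage, and a damage-equivalent constant load follows from the S-N exponent. The exponent m = 3 below is a typical assumption for welded steel details, not a value from the paper.

```python
def miner_damage(cycles, capacities):
    """Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i),
    where n_i cycles are applied at a level whose fatigue life is N_i."""
    return sum(n / N for n, N in zip(cycles, capacities))

def equivalent_constant_range(ranges, counts, m=3.0):
    """Damage-equivalent constant load range for a spectrum of ranges S_i
    applied n_i times, S-N slope exponent m (assumed m = 3 here):
    S_eq = (sum(n_i * S_i^m) / sum(n_i))^(1/m)."""
    num = sum(n * s ** m for s, n in zip(ranges, counts))
    return (num / sum(counts)) ** (1.0 / m)

damage = miner_damage([1e5, 2e5], [1e6, 5e5])   # D = 0.1 + 0.4 = 0.5
```

A spectrum with a single load level is, by construction, its own equivalent constant range, which is a quick sanity check on the formula.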
Turrini, Enrico; Carnevale, Claudio; Finzi, Giovanna; Volta, Marialuisa
2018-04-15
This paper introduces the MAQ (Multi-dimensional Air Quality) model, aimed at defining cost-effective air quality plans at different scales (urban to national) and assessing the co-benefits for GHG emissions. The model implements and solves a non-linear, multi-objective, multi-pollutant decision problem in which the decision variables are the application levels of emission abatement measures, allowing the reduction of energy consumption, end-of-pipe technologies, and fuel-switch options. The objectives of the decision problem are the minimization of exposure to tropospheric secondary pollution and of internal costs. The model assesses CO2-equivalent emissions in order to support decision makers in the selection of win-win policies. The methodology is tested on the Lombardy region, a heavily polluted area in northern Italy. Copyright © 2017 Elsevier B.V. All rights reserved.
Charge-based MOSFET model based on the Hermite interpolation polynomial
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt
2017-04-01
An accurate charge-based compact MOSFET model is developed using the third-order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the simplicity of the most advanced charge-based compact MOSFET models, such as BSIM, ACM, and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity of the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM, and EKV. Furthermore, thanks to this new mathematical approach, the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
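The third-order Hermite interpolant itself, matching function values and first derivatives at two endpoints, can be written in a few lines; this is the generic mathematical construction, not the paper's specific surface-potential/inversion-charge fit.

```python
def cubic_hermite(x0, x1, f0, f1, d0, d1, x):
    """Third-order Hermite interpolant on [x0, x1] matching the values
    (f0, f1) and first derivatives (d0, d1) at the two endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    # Standard cubic Hermite basis functions in the local coordinate t.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1
```

Because the interpolant is itself a cubic, it reproduces any cubic exactly, e.g., f(x) = x³ on [0, 2] given f(0) = 0, f(2) = 8, f'(0) = 0, f'(2) = 12.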
Independent contrasts and PGLS regression estimators are equivalent.
Blomberg, Simon P; Lefevre, James G; Wells, Jessie A; Waterhouse, Mary
2012-05-01
We prove that the slope parameter of the ordinary least squares regression of phylogenetically independent contrasts (PICs) conducted through the origin is identical to the slope parameter of the method of generalized least squares (GLSs) regression under a Brownian motion model of evolution. This equivalence has several implications: 1. Understanding the structure of the linear model for GLS regression provides insight into when and why phylogeny is important in comparative studies. 2. The limitations of the PIC regression analysis are the same as the limitations of the GLS model. In particular, phylogenetic covariance applies only to the response variable in the regression and the explanatory variable should be regarded as fixed. Calculation of PICs for explanatory variables should be treated as a mathematical idiosyncrasy of the PIC regression algorithm. 3. Since the GLS estimator is the best linear unbiased estimator (BLUE), the slope parameter estimated using PICs is also BLUE. 4. If the slope is estimated using different branch lengths for the explanatory and response variables in the PIC algorithm, the estimator is no longer the BLUE, so this is not recommended. Finally, we discuss whether or not and how to accommodate phylogenetic covariance in regression analyses, particularly in relation to the problem of phylogenetic uncertainty. This discussion is from both frequentist and Bayesian perspectives.
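The stated equivalence is easy to check numerically on a small tree. The three-taxon example below (with invented trait values) computes Felsenstein's phylogenetically independent contrasts by hand and compares the origin-constrained OLS slope with the GLS slope under the Brownian-motion covariance.

```python
import numpy as np

# Three-taxon tree ((A:1, B:1):1, C:2) under Brownian motion.
# BM covariance = shared path length from the root, for taxa (A, B, C):
C = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
x = np.array([1.0, 2.0, 4.0])   # explanatory trait (invented values)
y = np.array([2.0, 3.0, 7.0])   # response trait (invented values)

def pics(v):
    """Felsenstein's standardized contrasts for this particular tree."""
    u1 = (v[0] - v[1]) / np.sqrt(1.0 + 1.0)     # contrast A vs. B
    anc = 0.5 * (v[0] + v[1])                   # ancestral value of (A, B)
    # Parent branch lengthened by b1*b2/(b1+b2) = 0.5, giving 1.5:
    u2 = (anc - v[2]) / np.sqrt(1.5 + 2.0)      # contrast (A,B) vs. C
    return np.array([u1, u2])

ux, uy = pics(x), pics(y)
b_pic = (ux @ uy) / (ux @ ux)                   # OLS through the origin on PICs

# GLS regression y = a + b*x with covariance C:
X = np.column_stack([np.ones(3), x])
W = np.linalg.inv(C)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
b_gls = beta[1]
```

Both estimators return the same slope (1.625 for these values), in line with the theorem proved in the paper.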
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
On Discontinuous Piecewise Linear Models for Memristor Oscillators
NASA Astrophysics Data System (ADS)
Amador, Andrés; Freire, Emilio; Ponce, Enrique; Ros, Javier
2017-06-01
In this paper, we provide for the first time rigorous mathematical results regarding the rich dynamics of piecewise linear memristor oscillators. In particular, for each nonlinear oscillator given in [Itoh & Chua, 2008], we show the existence of an infinite family of invariant manifolds and that the dynamics on such manifolds can be modeled without resorting to discontinuous models. Our approach provides topologically equivalent continuous models with one dimension less but with one extra parameter associated with the initial conditions. It is possible to justify the periodic behavior exhibited by three-dimensional memristor oscillators by taking advantage of known results for planar continuous piecewise linear systems. The analysis developed not only confirms the numerical results contained in previous works [Messias et al., 2010; Scarabello & Messias, 2014] but also goes much further by showing the existence of closed surfaces in the state space which are foliated by periodic orbits. The important role of the initial conditions in explaining the infinite number of periodic orbits exhibited by these models is stressed. The possibility of unsuspected bistable regimes under specific configurations of parameters is also emphasized.
The evaluation of the neutron dose equivalent in the two-bend maze.
Tóth, Á Á; Petrović, B; Jovančević, N; Krmar, M; Rutonjski, L; Čudić, O
2017-04-01
The purpose of this study was to explore the effect of the second bend of the maze on the neutron dose equivalent in a 15 MV linear accelerator vault with a two-bend maze. The two bends of the maze were covered by 32 points at which the neutron dose equivalent was measured. One available method for estimating the neutron dose equivalent at the entrance door of a two-bend maze was tested using the results of the measurements. The results of this study show that the neutron dose equivalent at the door of the two-bend maze was reduced by almost three orders of magnitude. The measured tenth-value distance (TVD) in the first bend (closer to the inner maze entrance) is about 5 m. This measured TVD is close to the TVD values usually used in the proposed models for estimating the neutron dose equivalent at the entrance door of a single-bend maze. The results also show that the TVD in the second bend (next to the maze entrance door) is significantly lower than the TVD values found in the first maze bend. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casarini, L.; Bonometto, S.A.; Tessarotto, E.
2016-08-01
We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w_0 + (1 - a) w_a. The extension is based on the mapping rule between non-linear spectra of DE models with a constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations, we show that the spectral equivalence is accurate to the sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w_0-w_a parametrization. Following the same criteria, we have developed a numerical code, implemented in a dedicated module for the CAMB code, that can be used in combination with the Coyote emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.
Exhibit D modular design attitude control system study
NASA Technical Reports Server (NTRS)
Chichester, F.
1984-01-01
A dynamically equivalent four-body approximation of the NASTRAN finite element model supplied for the hybrid deployable truss was investigated, to support the digital computer simulation of the ten-body model of the flexible space platform that incorporates the four-body truss model. Sensitivity coefficients of the state variables of the linearized model of the three-axis rotational dynamics of the prototype flexible spacecraft were generated with respect to the model's parameters. The software changes required to accommodate the addition of another rigid body to the five-body model of the rotational dynamics of the prototype flexible spacecraft were evaluated.
Highway traffic estimation of improved precision using the derivative-free nonlinear Kalman Filter
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Zervos, Nikolaos; Melkikh, Alexey
2015-12-01
The paper proves that the PDE dynamic model of highway traffic is differentially flat, and by applying spatial discretization it shows that the model can be transformed into an equivalent linear canonical state-space form. For the latter representation of the traffic dynamics, state estimation is performed with the use of the Derivative-free nonlinear Kalman Filter. The proposed filter consists of the Kalman Filter recursion applied to the transformed state-space model of the highway traffic. Moreover, it makes use of an inverse transformation, based again on differential flatness theory, which enables estimates of the state variables of the initial nonlinear PDE model to be obtained. By avoiding approximate linearizations and the truncation of nonlinear terms from the PDE model of the traffic dynamics, the proposed filtering method outperforms, in terms of accuracy, other nonlinear estimators such as the Extended Kalman Filter. The article's theoretical findings are confirmed through simulation experiments.
NASA Astrophysics Data System (ADS)
Giaccu, Gian Felice
2018-05-01
Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is used chiefly for stiffening purposes in steel and timber structures. The pre-stressing force imparted to the braces provides the system with a considerable increase in stiffness. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. The dynamic properties of these systems are, however, affected by non-linear behavior due to potential slackening of the pre-tensioned braces. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and yields an equivalent linearized frequency that approximately characterizes, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces, and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique capable of linearizing the response of structural systems whose non-linearity is induced by the slackening of pre-tensioned braces.
NASA Technical Reports Server (NTRS)
Mickens, R. E.
1985-01-01
The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost scales linearly with system size. However, the accuracy of OF-DFT strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), KGAP performs much better than SM, and results are close to the state-of-the-art functionals with sophisticated density-dependent kernels.
Origin of nonsaturating linear magnetoresistivity
NASA Astrophysics Data System (ADS)
Kisslinger, Ferdinand; Ott, Christian; Weber, Heiko B.
2017-01-01
The observation of nonsaturating classical linear magnetoresistivity has been an enigmatic phenomenon in solid-state physics. We present a study of a two-dimensional ohmic conductor, including the local Hall effect and a self-consistent consideration of the environment. An equivalent-circuit scheme delivers a simple and convincing argument for why the magnetoresistivity is linear in a strong magnetic field, provided that the current and the biasing electric field are misaligned by a nonlocal mechanism. A finite-element model of a two-dimensional conductor is suited to display the situations that create such deviating currents. Besides edge effects next to electrodes, charge carrier density fluctuations efficiently generate this effect. However, mobility fluctuations, which have frequently been related to linear magnetoresistivity, are barely relevant. Despite its rare observation, linear magnetoresistivity is rather the rule than the exception in a regime of low charge carrier densities, misaligned current pathways, and strong magnetic field.
Second cancer risk after 3D-CRT, IMRT and VMAT for breast cancer.
Abo-Madyan, Yasser; Aziz, Muhammad Hammad; Aly, Moamen M O M; Schneider, Frank; Sperk, Elena; Clausen, Sven; Giordano, Frank A; Herskind, Carsten; Steil, Volker; Wenz, Frederik; Glatting, Gerhard
2014-03-01
Second cancer risk after breast conserving therapy is becoming more important due to improved long-term survival rates. In this study, we estimate the risks for developing a solid second cancer after radiotherapy of breast cancer using the concept of organ equivalent dose (OED). Computed tomography scans of 10 representative breast cancer patients were selected for this study. Three-dimensional conformal radiotherapy (3D-CRT), tangential intensity modulated radiotherapy (t-IMRT), multibeam intensity modulated radiotherapy (m-IMRT), and volumetric modulated arc therapy (VMAT) were planned to deliver a total dose of 50 Gy in 2 Gy fractions. Differential dose volume histograms (dDVHs) were created and the OEDs calculated. Second cancer risks for the ipsilateral lung, contralateral lung, and contralateral breast were estimated using linear, linear-exponential, and plateau models for second cancer risk. Compared to 3D-CRT, cumulative excess absolute risks (EAR) for t-IMRT, m-IMRT and VMAT were increased by 2 ± 15%, 131 ± 85%, and 123 ± 66% for the linear-exponential risk model; 9 ± 22%, 82 ± 96%, and 71 ± 82% for the linear model; and 3 ± 14%, 123 ± 78%, and 113 ± 61% for the plateau model, respectively. Second cancer risk after 3D-CRT or t-IMRT is lower than for m-IMRT or VMAT by about 34% for the linear model and 50% for the linear-exponential and plateau models. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
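The OED concept weights each dose bin of the differential DVH by a dose-response model before averaging over the organ. A minimal sketch, assuming Schneider-style linear, linear-exponential, and plateau weightings and purely illustrative DVH data and parameter values (not the study's patient data):

```python
import math

# Toy differential DVH for one organ: (dose in Gy, fractional volume).
# Values are illustrative only, not patient data.
ddvh = [(1.0, 0.40), (5.0, 0.30), (15.0, 0.20), (40.0, 0.10)]

def oed_linear(ddvh):
    # Linear risk model: OED is just the mean organ dose.
    return sum(v * d for d, v in ddvh)

def oed_linear_exponential(ddvh, alpha):
    # Risk rises linearly, then falls off exponentially (cell kill).
    return sum(v * d * math.exp(-alpha * d) for d, v in ddvh)

def oed_plateau(ddvh, delta):
    # Risk rises linearly, then saturates at high dose.
    return sum(v * (1.0 - math.exp(-delta * d)) / delta for d, v in ddvh)

alpha = 0.08  # organ-specific model parameter (illustrative)
print(oed_linear(ddvh),
      oed_linear_exponential(ddvh, alpha),
      oed_plateau(ddvh, alpha))
```

Because the exponential and plateau weightings discount high-dose bins, plans that spread low dose over large volumes (as m-IMRT and VMAT do) are penalized relative to the linear model, which is the pattern reported above.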
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
Nonlinear Site Response Validation Studies Using KIK-net Strong Motion Data
NASA Astrophysics Data System (ADS)
Asimaki, D.; Shi, J.
2014-12-01
Earthquake simulations are nowadays producing realistic ground motion time-series in the range of engineering design applications. Of particular significance to engineers are simulations of near-field motions and large magnitude events, for which observations are scarce. With the engineering community slowly adopting the use of simulated ground motions, site response models need to be re-evaluated in terms of their capabilities and limitations to 'translate' the simulated time-series from rock surface output to structural analysis input. In this talk, we evaluate three one-dimensional site response models: linear viscoelastic, equivalent linear, and nonlinear. We evaluate the performance of the models by comparing predictions to observations at 30 downhole stations of the Japanese KiK-net network that have recorded several strong events, including the 2011 Tohoku earthquake. Velocity profiles are used as the only input to all models, while additional parameters such as quality factor, density, and nonlinear dynamic soil properties are estimated from empirical correlations. We quantify the differences between ground surface predictions and observations in terms of both seismological and engineering intensity measures, including bias ratios of peak ground response, visual comparisons of elastic spectra, and the inelastic-to-elastic deformation ratio for multiple ductility ratios. We observe that PGV/Vs,30, as a measure of strain, is a better predictor of site nonlinearity than PGA, and that incremental nonlinear analyses are necessary to produce reliable estimates of high-frequency ground motion components at soft sites. We finally discuss the implications of our findings for the parameterization of nonlinear amplification factors in GMPEs, and for the extensive use of equivalent linear analyses in probabilistic seismic hazard procedures.
Gauge invariance of excitonic linear and nonlinear optical response
NASA Astrophysics Data System (ADS)
Taghizadeh, Alireza; Pedersen, T. G.
2018-05-01
We study the equivalence of four different approaches to calculate the excitonic linear and nonlinear optical response of multiband semiconductors. These four methods derive from two choices of gauge, i.e., length and velocity gauges, and two ways of computing the current density, i.e., direct evaluation and evaluation via the time-derivative of the polarization density. The linear and quadratic response functions are obtained for all methods by employing a perturbative density-matrix approach within the mean-field approximation. The equivalence of all four methods is shown rigorously, when a correct interaction Hamiltonian is employed for the velocity gauge approaches. The correct interaction is written as a series of commutators containing the unperturbed Hamiltonian and position operators, which becomes equivalent to the conventional velocity gauge interaction in the limit of infinite Coulomb screening and infinitely many bands. As a case study, the theory is applied to hexagonal boron nitride monolayers, and the linear and nonlinear optical response found in different approaches are compared.
1988-12-01
PERFORMANCE IN REAL TIME* Dr. James A. Barnes, Austron, Boulder, CO. Abstract: Kalman filters and ARIMA models provide optimum control and evaluation tech... estimates of the model parameters (e.g., the phi's and theta's for an ARIMA model). These model parameters are often evaluated in a batch mode on a... random walk FM, and linear frequency drift. In ARIMA models, this is equivalent to an ARIMA (0,2,2) with a non-zero average second difference. Using
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
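The exponential transformation behind LMDPs can be sketched on a toy problem rather than the robot tasks above. Assuming a first-exit formulation on a small 1-D chain with made-up costs (all values illustrative): the desirability z(x) = exp(-v(x)) satisfies a linear fixed-point equation under the passive dynamics, and the optimal policy reweights the passive transition probabilities by z:

```python
import math

# First-exit LMDP on a 1-D chain: states 0..4, terminals at 0 and 4.
# Passive dynamics: unbiased random walk from the interior states.
q = 0.1                      # running state cost per step (illustrative)
phi = {0: 3.0, 4: 0.0}       # terminal costs (illustrative)

z = [math.exp(-phi[0]), 1.0, 1.0, 1.0, math.exp(-phi[4])]
# Linear desirability equation: z_i = exp(-q_i) * sum_j p(j|i) * z_j,
# solved here by simple fixed-point sweeps (a contraction).
for _ in range(500):
    for i in (1, 2, 3):
        z[i] = math.exp(-q) * 0.5 * (z[i - 1] + z[i + 1])

v = [-math.log(zi) for zi in z]   # optimal cost-to-go

def policy(i):
    # Optimal controlled dynamics: u*(j|i) proportional to p(j|i)*z_j.
    left, right = 0.5 * z[i - 1], 0.5 * z[i + 1]
    s = left + right
    return left / s, right / s

print(v)
print(policy(2))  # biased toward the cheap terminal (state 4)
```

The linearity is what makes the discrete case an eigenvalue (here, linear-system) problem instead of a nonlinear Bellman recursion.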
Multigrid methods in structural mechanics
NASA Technical Reports Server (NTRS)
Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.
1986-01-01
Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.
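The ingredients described above, Gauss-Seidel smoothing, linear prolongation, and a restriction taken as the (scaled) transpose of prolongation, can be sketched on a 1-D Poisson model problem rather than the beam of the study; grid sizes and cycle counts here are illustrative:

```python
import math

# Solve -u'' = f on (0,1), u(0)=u(1)=0, with a recursive V(1,1)-style cycle.

def gauss_seidel(u, f, h, sweeps):
    n = len(u)
    for _ in range(sweeps):
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            u[i] = 0.5 * (left + right + h * h * f[i])
    return u

def residual(u, f, h):
    n = len(u)
    r = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        r.append(f[i] - (2.0 * u[i] - left - right) / (h * h))
    return r

def restrict(r):
    # Full weighting: scaled transpose of linear interpolation.
    return [0.25 * r[2*j] + 0.5 * r[2*j + 1] + 0.25 * r[2*j + 2]
            for j in range((len(r) - 1) // 2)]

def prolong(e, n_fine):
    # Linear interpolation of the coarse-grid correction.
    u = [0.0] * n_fine
    for j, val in enumerate(e):
        u[2*j + 1] += val
        u[2*j] += 0.5 * val
        u[2*j + 2] += 0.5 * val
    return u

def v_cycle(u, f, h):
    if len(u) == 1:
        u[0] = 0.5 * h * h * f[0]      # exact solve on the coarsest grid
        return u
    u = gauss_seidel(u, f, h, 2)       # pre-smoothing
    e = v_cycle([0.0] * ((len(u) - 1) // 2), restrict(residual(u, f, h)), 2.0 * h)
    u = [ui + ei for ui, ei in zip(u, prolong(e, len(u)))]
    return gauss_seidel(u, f, h, 2)    # post-smoothing

n = 63                                  # interior points (2^6 - 1)
h = 1.0 / (n + 1)
f = [math.pi**2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = [0.0] * n
for _ in range(8):
    u = v_cycle(u, f, h)
exact = [math.sin(math.pi * (i + 1) * h) for i in range(n)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err)  # dominated by O(h^2) discretization error
```

A handful of cycles reduces the algebraic error below the discretization error, which is the behavior the beam study quantifies in Gauss-Seidel work units.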
Rollinson, Njal; Holt, Sarah M; Massey, Melanie D; Holt, Richard C; Nancekivell, E Graham; Brooks, Ronald J
2018-05-01
Temperature has a strong effect on ectotherm development rate. It is therefore possible to construct predictive models of development that rely solely on temperature, which have applications in a range of biological fields. Here, we leverage a reference series of development stages for embryos of the turtle Chelydra serpentina, which was described at a constant temperature of 20 °C. The reference series acts to map each distinct developmental stage onto embryonic age (in days) at 20 °C. By extension, an embryo taken from any given incubation environment, once staged, can be assigned an equivalent age at 20 °C. We call this concept "Equivalent Development", as it maps the development stage of an embryo incubated at a given temperature to its equivalent age at a reference temperature. In the laboratory, we used the concept of Equivalent Development to estimate development rate of embryos of C. serpentina across a series of constant temperatures. Using these estimates of development rate, we created a thermal performance curve measured in units of Equivalent Development (TPC_ED). We then used the TPC_ED to predict developmental stage of embryos in several natural turtle nests across six years. We found that 85% of the variation of development stage in natural nests could be explained. Further, we compared the predictive accuracy of the model based on the TPC_ED to the predictive accuracy of a degree-day model, where development is assumed to be linearly related to temperature and the amount of accumulated heat is summed over time. Information theory suggested that the model based on the TPC_ED better describes variation in developmental stage in wild nests than the degree-day model. We suggest the concept of Equivalent Development has several strengths and can be broadly applied.
In particular, studies on temperature-dependent sex determination may be facilitated by the concept of Equivalent Development, as developmental age maps directly onto the developmental series of the organism, allowing critical periods of sex determination to be delineated without invasive sampling, even under fluctuating temperature. Copyright © 2018 Elsevier Ltd. All rights reserved.
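The contrast between the two model classes can be sketched in a few lines. The performance curve, thresholds, and temperature trace below are hypothetical illustrations, not the paper's fitted values; both rates are normalized so one unit equals one day of development at the 20 °C reference:

```python
# Degree-day accumulation vs a TPC-based "Equivalent Development" sum.
T_MIN, T_MAX, T_REF = 15.0, 33.0, 20.0

def tpc_rate(temp_c):
    # Hump-shaped thermal performance curve in units of "days of
    # development at 20 C per real day"; tpc_rate(T_REF) == 1 by design.
    if temp_c <= T_MIN or temp_c >= T_MAX:
        return 0.0
    raw = (temp_c - T_MIN) * (T_MAX - temp_c)
    return raw / ((T_REF - T_MIN) * (T_MAX - T_REF))

def dd_rate(temp_c):
    # Degree-day model: development proportional to heat above T_MIN,
    # scaled so that dd_rate(T_REF) == 1 (same equivalent-day units).
    return max(0.0, temp_c - T_MIN) / (T_REF - T_MIN)

temps = [18.0, 22.0, 26.0, 30.0, 24.0, 20.0, 19.0]  # daily means, deg C

eq_days_tpc = sum(tpc_rate(t) for t in temps)
eq_days_dd = sum(dd_rate(t) for t in temps)
print(eq_days_tpc, eq_days_dd)
```

Because the degree-day rate keeps rising linearly above the optimum while the TPC declines, the degree-day sum overshoots at warm temperatures, which is the kind of divergence the information-theoretic comparison detects.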
Segmented and "equivalent" representation of the cable equation.
Andrietti, F; Bernardini, G
1984-11-01
The linear cable theory has been applied to a modular structure consisting of n repeating units each composed of two subunits with different values of resistance and capacitance. For n going to infinity, i.e., for infinite cables, we have derived analytically the Laplace transform of the solution by making use of a difference method and we have inverted it by means of a numerical procedure. The results have been compared with those obtained by the direct application of the cable equation to a simplified nonmodular model with "equivalent" electrical parameters. The implication of our work in the analysis of the time and space course of the potential of real fibers has been discussed. In particular, we have shown that the simplified ("equivalent") model is a very good representation of the segmented model for the nodal regions of myelinated fibers in a steady situation and in every condition for muscle fibers. An approximate solution for the steady potential of myelinated fibers has been derived for both nodal and internodal regions. The applications of our work to other cases dealing with repeating structures, such as earthworm giant fibers, have been discussed and our results have been compared with other attempts to solve similar problems.
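The steady-state agreement between the segmented and "equivalent" representations can be sketched directly. Assuming a uniform (nonmodular) ladder with toy per-segment axial resistance ra and membrane resistance rm (illustrative values, not fitted to any fiber), the per-segment attenuation factor mu from the discrete characteristic equation approaches the continuous-cable factor exp(-dx/lambda) when ra << rm:

```python
import math

# Discrete ladder: each unit has axial resistance ra and membrane
# resistance rm (illustrative values, arbitrary units).
ra, rm = 0.01, 1.0

# Steady state: V[k+1] - (2 + ra/rm)*V[k] + V[k-1] = 0, so the decaying
# solution is V[k] = V[0] * mu**k, with mu the root < 1 of
#   mu + 1/mu = 2 + ra/rm.
b = 2.0 + ra / rm
mu = (b - math.sqrt(b * b - 4.0)) / 2.0

# Equivalent continuous cable: space constant lam = sqrt(rm/ra), in
# units of the segment length, gives V(x) = V(0) * exp(-x/lam).
lam = math.sqrt(rm / ra)
mu_cont = math.exp(-1.0 / lam)
print(mu, mu_cont)  # nearly identical for ra << rm
```

The two factors separate as ra/rm grows, which is the regime where the segmented model matters (e.g., internodal structure of myelinated fibers).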
A Linear Electromagnetic Piston Pump
NASA Astrophysics Data System (ADS)
Hogan, Paul H.
Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time-domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also agreed well with FEA; at zero piston displacement, the magnetic circuit calculates the optimized motor force within 10% of FEA in less than 1/1000 the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that an improvement of 400% over state-of-the-art power density is attainable with as high as 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has the potential to serve as a more compact and efficient supply of fluid power at the human scale.
NASA Technical Reports Server (NTRS)
Guo, Tong-Yi; Hwang, Chyi; Shieh, Leang-San
1994-01-01
This paper deals with the multipoint Cauer matrix continued-fraction expansion (MCFE) for model reduction of linear multi-input multi-output (MIMO) systems with various numbers of inputs and outputs. A salient feature of the proposed MCFE approach to model reduction of MIMO systems with square transfer matrices is its equivalence to the matrix Pade approximation approach. The Cauer second form of the ordinary MCFE for a square transfer function matrix is generalized in this paper to a multipoint and nonsquare-matrix version. An interesting connection of the multipoint Cauer MCFE method to the multipoint matrix Pade approximation method is established. Also, algorithms for obtaining the reduced-degree matrix-fraction descriptions and reduced-dimensional state-space models from a transfer function matrix via the multipoint Cauer MCFE algorithm are presented. Practical advantages of using the multipoint Cauer MCFE are discussed and a numerical example is provided to illustrate the algorithms.
Optical and biometric relationships of the isolated pig crystalline lens.
Vilupuru, A S; Glasser, A
2001-07-01
To investigate the interrelationships between optical and biometric properties of the porcine crystalline lens, to compare these findings with similar relationships found for the human lens, and to attempt to fit these data to a geometric model of the optical and biometric properties of the pig lens. Weight, focal length, spherical aberration, surface curvatures, thickness, and diameters of 20 isolated pig lenses were measured, and equivalent refractive index was calculated. These parameters were compared and used to geometrically model the pig lens. Linear relationships were identified between many of the lens biometric and optical properties. The existence of these relationships allowed a simple geometrical model of the pig lens to be calculated which offers predictions of the optical properties. The linear relationships found and the agreement observed between measured and modeled results suggest that the pig lens conforms to a predictable, preset developmental pattern and that its optical and biometric properties are predictably interrelated.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
Linear models for sound from supersonic reacting mixing layers
NASA Astrophysics Data System (ADS)
Chary, P. Shivakanth; Samanta, Arnab
2016-12-01
We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far field is uncertain, and this is our focus. On keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, as in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.
NASA Astrophysics Data System (ADS)
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-06-01
A robust supplier selection problem is proposed in a scenario-based approach, in which demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear programming model is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We determine decision variables, respectively, by a two-stage stochastic planning model, a robust stochastic optimization planning model which integrates the worst-case scenario in the modeling approach, and finally by an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
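The qualitative point that a robust (worst-case) criterion can select a different supplier than an expected-cost criterion can be sketched with a toy scenario table; the suppliers, costs, and probabilities below are invented for illustration and are unrelated to the case study:

```python
# Toy scenario-based supplier choice: unit cost depends on an uncertain
# exchange-rate scenario (all data illustrative).
costs = {          # supplier -> cost in each of three scenarios
    "A": [8.0, 9.0, 40.0],    # cheap on average, bad in the worst case
    "B": [13.0, 13.0, 14.0],  # pricier on average, stable
}
probs = [0.6, 0.3, 0.1]       # scenario probabilities

expected = {s: sum(p * c for p, c in zip(probs, cs)) for s, cs in costs.items()}
worst_case = {s: max(cs) for s, cs in costs.items()}

best_expected = min(expected, key=expected.get)    # stochastic criterion
best_robust = min(worst_case, key=worst_case.get)  # robust min-max criterion
print(best_expected, best_robust)  # -> A B
```

Here the stochastic criterion picks the cheap-on-average supplier while the min-max criterion pays a premium to cap the worst scenario, mirroring the paper's observation that the robust model selects different suppliers.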
Nonlinear effects of stretch on the flame front propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.
2010-10-15
In all experimental configurations, flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrect. The present study aims at evaluating the error in the laminar burning speed induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperatures were respectively 300 K and 400 K for methane and iso-octane. Both methodologies (linear and nonlinear) are applied and results in terms of laminar speed and burned gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions of their Markstein length as the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the linear methodology starts to induce substantial errors above an equivalence ratio of 1.1 for methane/air mixtures and below an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology in these critical cases consists in reducing the number of points used in the fit by increasing the initial flame radius used.
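The bias of the linear extrapolation can be reproduced synthetically. Assuming flame speeds generated from a nonlinear stretch relation of the Kelley-Law form (S/S0)^2 ln((S/S0)^2) = -2*Lb*kappa/S0, with made-up values for the unstretched speed, Markstein length, and stretch range (not the paper's measurements), a least-squares linear fit overestimates the unstretched speed:

```python
import math

S_B0, L_B = 2.0, 0.001     # unstretched speed (m/s), Markstein length (m);
                           # illustrative values only

def speed_nonlinear(kappa):
    # Solve (s)^2 * ln(s^2) = -2*L_B*kappa/S_B0 for s = S/S_B0 on the
    # physical branch s <= 1, where the left side is increasing.
    c = -2.0 * L_B * kappa / S_B0
    lo, hi = math.exp(-0.5) + 1e-9, 1.0
    for _ in range(80):                  # bisection
        mid = 0.5 * (lo + hi)
        if mid * mid * math.log(mid * mid) < c:
            lo = mid
        else:
            hi = mid
    return S_B0 * 0.5 * (lo + hi)

kappas = [30.0 * i for i in range(11)]   # stretch rates, 1/s
speeds = [speed_nonlinear(k) for k in kappas]

# Traditional linear methodology: fit S_b = S_b0 - L_b * kappa.
n = len(kappas)
mx, my = sum(kappas) / n, sum(speeds) / n
sxx = sum((x - mx) ** 2 for x in kappas)
sxy = sum((x - mx) * (y - my) for x, y in zip(kappas, speeds))
slope = sxy / sxx
s_b0_linear = my - slope * mx
print(s_b0_linear)   # overestimates the true value of 2.0
```

Because the nonlinear curve bends downward with increasing stretch, the fitted intercept lands above the true unstretched speed; restricting the fit to larger flame radii (smaller stretch) shrinks this bias, as the abstract suggests.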
Adaptive Control Allocation in the Presence of Actuator Failures
NASA Technical Reports Server (NTRS)
Liu, Yu; Crespo, Luis G.
2010-01-01
In this paper, a novel adaptive control allocation framework is proposed. In the adaptive control allocation structure, cooperative actuators are grouped and treated as an equivalent control effector. A state feedback adaptive control signal is designed for the equivalent effector and allocated to the member actuators adaptively. Two adaptive control allocation algorithms are proposed, which guarantee closed-loop stability and asymptotic state tracking in the presence of uncertain loss of effectiveness and constant-magnitude actuator failures. The proposed algorithms can be shown to reduce the controller complexity with proper grouping of the actuators. The proposed adaptive control allocation schemes are applied to two linearized aircraft models, and the simulation results demonstrate the performance of the proposed algorithms.
Ramsahoi, L; Gao, A; Fabri, M; Odumeru, J A
2011-07-01
Automated electronic milk analyzers for rapid enumeration of total bacteria counts (TBC) are widely used for raw milk testing by many analytical laboratories worldwide. In Ontario, Canada, Bactoscan flow cytometry (BsnFC; Foss Electric, Hillerød, Denmark) is the official anchor method for TBC in raw cow milk. Penalties are levied at the BsnFC equivalent level of 50,000 cfu/mL, the standard plate count (SPC) regulatory limit. This study was conducted to assess the BsnFC for TBC in raw goat milk, to determine the mathematical relationship between the SPC and BsnFC methods, and to identify probable reasons for the difference in the SPC:BsnFC equivalents for goat and cow milks. Test procedures were conducted according to International Dairy Federation Bulletin guidelines. Approximately 115 farm bulk tank milk samples per month were tested for inhibitor residues, SPC, BsnFC, psychrotrophic bacteria count, composition (fat, protein, lactose, lactose and other solids, and freezing point), and somatic cell count from March 2009 to February 2010. Data analysis of the results for the samples tested indicated that the BsnFC method would be a good alternative to the SPC method, providing accurate and more precise results with a faster turnaround time. Although a linear regression model showed good correlation and prediction, tests for linearity indicated that the relationship was linear only beyond log 4.1 SPC. The logistic growth curve best modeled the relationship between the SPC and BsnFC for the entire sample population. The BsnFC equivalent to the SPC 50,000 cfu/mL regulatory limit was estimated to be 321,000 individual bacteria count (ibc)/mL. This estimate differs considerably from the BsnFC equivalent for cow milk (121,000 ibc/mL). 
Because of the low frequency of bulk tank milk pickups at goat farms, 78.5% of the samples had their oldest milking in the tank at 6.5 to 9.0 d old when tested, compared with the cow milk samples, whose oldest milking was 4 d old when tested. This may be one of the major factors contributing to the larger goat milk BsnFC equivalence. Correlations and interactions between various test results were also discussed to further understand differences between the 2 methods for goat and cow milks. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations
NASA Technical Reports Server (NTRS)
Tsao, Nai-Kuan
1989-01-01
A novel general approach to round-off error analysis using error complexity concepts is described. It is applied to the analysis of the Gaussian elimination and Gauss-Jordan schemes for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence when parallel computers are available.
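For readers comparing the two schemes, a minimal Gauss-Jordan solver (a generic textbook sketch, not the authors' error-complexity analysis) shows the structural difference from Gaussian elimination: elimination is applied both above and below the pivot, so the reduction goes straight to the identity and no sequential back-substitution is needed, which is what makes the scheme attractive on parallel machines:

```python
def gauss_jordan_solve(a, b):
    # Solve a*x = b by Gauss-Jordan reduction with partial pivoting.
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]            # partial pivoting
        p = m[col][col]
        m[col] = [v / p for v in m[col]]           # normalize pivot row
        for r in range(n):                         # eliminate in ALL rows,
            if r != col and m[r][col] != 0.0:      # above and below pivot
                f = m[r][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[r][n] for r in range(n)]

x = gauss_jordan_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
print(x)  # -> [1.0, 3.0]
```

Each column step touches every row independently, whereas Gaussian elimination defers the upper triangle to a back-substitution sweep; the paper's result is that the extra eliminations do not worsen the round-off behavior by their error complexity measures.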
Smirnova, Olga A; Cucinotta, Francis A
2018-02-01
A recently developed biologically motivated dynamical model of the assessment of the excess relative risk (ERR) for radiogenic leukemia among acutely/continuously irradiated humans (Smirnova, 2015, 2017) is applied to estimate the ERR for radiogenic leukemia among astronauts engaged in long-term interplanetary space missions. Numerous scenarios of space radiation exposure during space missions are used in the modeling studies. The dependence of the ERR for leukemia among astronauts on several mission parameters including the dose equivalent rates of galactic cosmic rays (GCR) and large solar particle events (SPEs), the number of large SPEs, the time interval between SPEs, mission duration, the degree of astronaut's additional shielding during SPEs, the degree of their additional 12-hour's daily shielding, as well as the total mission dose equivalent, is examined. The results of the estimation of ERR for radiogenic leukemia among astronauts, which are obtained in the framework of the developed dynamical model for various scenarios of space radiation exposure, are compared with the corresponding results, computed by the commonly used linear model. It is revealed that the developed dynamical model along with the linear model can be applied to estimate ERR for radiogenic leukemia among astronauts engaged in long-term interplanetary space missions in the range of applicability of the latter. In turn, the developed dynamical model is capable of predicting the ERR for leukemia among astronauts for the irradiation regimes beyond the applicability range of the linear model in emergency cases. As a supplement to the estimations of cancer incidence and death (REIC and REID) (Cucinotta et al., 2013, 2017), the developed dynamical model for the assessment of the ERR for leukemia can be employed on the pre-mission design phase for, e.g., the optimization of the regimes of astronaut's additional shielding in the course of interplanetary space missions. 
The developed model can also be used in the real-time response phase of a space mission to make decisions on the operational application of appropriate countermeasures to minimize the risk of leukemia, especially in emergency cases. Copyright © 2017 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Quantile equivalence to evaluate compliance with habitat management objectives
Cade, Brian S.; Johnson, Pamela R.
2011-01-01
Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha, n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect.
We demonstrate how the quantile equivalence analyses also can help refine the numerical specification of habitat objectives and explore specification of spatial scales for objectives with respect to sampling scales used to evaluate those objectives.
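The benefit-of-doubt versus fail-safe reading of a quantile's confidence interval against an equivalence region can be sketched in a few lines. This is a hypothetical illustration on simulated data (the normal distribution, sample size, and criterion interval [20, 30] are all assumptions, not refuge data), using a simple bootstrap in place of quantile regression:

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.normal(25.0, 5.0, size=300)        # simulated % canopy cover on transects

# Bootstrap a 95% confidence interval for the sample median (0.5 quantile)
boots = np.array([np.quantile(rng.choice(cover, cover.size), 0.5)
                  for _ in range(1000)])
ci_lo, ci_hi = np.quantile(boots, [0.025, 0.975])

# Liberal (benefit-of-doubt): the CI merely overlaps the criterion region.
# Conservative (fail-safe): the entire CI must fall inside the criterion region.
benefit_of_doubt = ci_hi >= 20.0 and ci_lo <= 30.0
fail_safe = ci_lo >= 20.0 and ci_hi <= 30.0
```

The two Boolean flags correspond to the two risk-minimizing interpretations described in the abstract; per-component flags could then be combined with Boolean conjunction for a multivariable objective.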
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kok, H. Petra, E-mail: H.P.Kok@amc.uva.nl; Crezee, Johannes; Franken, Nicolaas A.P.
2014-03-01
Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy{sup −1}) and β (Gy{sup −2}) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented.
This model is particularly instructive to estimate the potential effects of interaction from different treatment modalities.
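The voxel-wise conversion can be sketched directly from the linear–quadratic model: equate the biological effect computed with hyperthermia-modified parameters α(T), β(T) to the effect at the reference parameters α₀, β₀, and solve the resulting quadratic for the equivalent dose. A minimal sketch, with illustrative parameter values (not the paper's prostate estimates):

```python
import numpy as np

def equivalent_dose(d, alpha_t, beta_t, alpha0, beta0):
    # LQ biological effect of dose d with temperature-modified radiosensitivity
    effect = alpha_t * d + beta_t * d**2
    # Solve alpha0*D + beta0*D^2 = effect for the non-negative equivalent dose D
    return (-alpha0 + np.sqrt(alpha0**2 + 4.0 * beta0 * effect)) / (2.0 * beta0)
```

Because the arithmetic is element-wise, `d` can be a whole voxel array, giving the equivalent dose distribution in one call; when α(T) = α₀ and β(T) = β₀ the function returns the physical dose unchanged.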
Millar, W T; Davidson, S E
2013-01-01
Objective: To consider the implications of the use of biphasic rather than monophasic repair in calculations of biologically-equivalent doses for pulsed-dose-rate brachytherapy of cervix carcinoma. Methods: Calculations are presented of pulsed-dose-rate (PDR) doses equivalent to former low-dose-rate (LDR) doses, using biphasic vs monophasic repair kinetics, both for cervical carcinoma and for the organ at risk (OAR), namely the rectum. The linear-quadratic modelling calculations included effects due to varying the dose per PDR cycle, the dose reduction factor for the OAR compared with Point A, the repair kinetics and the source strength. Results: When using the recommended 1 Gy per hourly PDR cycle, different LDR-equivalent PDR rectal doses were calculated depending on the choice of monophasic or biphasic repair kinetics pertaining to the rodent central nervous and skin systems. These differences virtually disappeared when the dose per hourly cycle was increased to 1.7 Gy. This made the LDR-equivalent PDR doses more robust and independent of the choice of repair kinetics and α/β ratios as a consequence of the described concept of extended equivalence. Conclusion: The use of biphasic and monophasic repair kinetics for optimised modelling of the effects on the OAR in PDR brachytherapy suggests that an optimised PDR protocol with the dose per hourly cycle nearest to 1.7 Gy could be used. Hence, the durations of the new PDR treatments would be similar to those of the former LDR treatments and not longer as currently prescribed. Advances in knowledge: Modelling calculations indicate that equivalent PDR protocols can be developed which are less dependent on the different α/β ratios and monophasic/biphasic kinetics usually attributed to normal and tumour tissues for treatment of cervical carcinoma. PMID:23934965
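The fractionation-sensitivity effect underlying these calculations comes from the linear-quadratic model. A minimal sketch of the biologically effective dose (BED) for two doses per hourly cycle, deliberately ignoring incomplete repair between pulses (the paper's full calculations include mono- and biphasic repair kinetics, omitted here for brevity; the α/β value and doses are illustrative):

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    # Linear-quadratic biologically effective dose, complete repair assumed
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

# Same 34 Gy total delivered at 1.0 Gy vs 1.7 Gy per hourly PDR cycle,
# for a late-responding tissue with alpha/beta = 3 Gy (illustrative):
bed_low = bed(34.0, 1.0, 3.0)    # 34 cycles of 1.0 Gy
bed_high = bed(34.0, 1.7, 3.0)   # 20 cycles of 1.7 Gy
```

The larger dose per cycle yields a higher BED at the same total dose, which is why matching an LDR-equivalent effect at 1.7 Gy per cycle changes the required dose and shortens the overall treatment duration.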
NASA Astrophysics Data System (ADS)
Zhao, Yanlin; Yao, Jun; Wang, Mi
2016-07-01
On-line monitoring of crystal size in the crystallization process is crucial to many pharmaceutical and fine-chemical industrial applications. In this paper, a novel method is proposed for on-line monitoring of the cooling crystallization process of L-glutamic acid (LGA) using electrical impedance spectroscopy (EIS). The EIS method can monitor the growth of crystal particles by relying on the presence of an electrical double layer on the charged particle surface and the polarization of this double layer under the excitation of an alternating electric field. The electrical impedance spectra and crystal size were measured on-line simultaneously by an impedance analyzer and focused beam reflectance measurement (FBRM), respectively. The impedance spectra were analyzed using an equivalent circuit model, and the equivalent circuit elements in the model can be obtained by fitting the experimental data. Two equivalent circuit elements, the capacitance (C2) and resistance (R2) arising from the dielectric polarization of the LGA solution and the crystal particle/solution interface, are related to the crystal size. The mathematical relationship between the crystal size and the equivalent circuit elements can be obtained by a non-linear fitting method, and this function can be used to predict the change of crystal size during the crystallization process.
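The fitting step can be sketched with a common minimal equivalent circuit: a series resistance R1 with one parallel R2–C2 relaxation element. The circuit topology and parameter values below are assumptions for illustration (the paper does not specify its circuit here); real and imaginary parts are stacked so a standard least-squares fitter can handle the complex spectrum:

```python
import numpy as np
from scipy.optimize import curve_fit

def z_model(f, r1, r2, c2):
    # Series R1 plus a parallel R2-C2 element (one dielectric relaxation),
    # returned as [Re(Z); Im(Z)] so curve_fit can fit a complex spectrum.
    w = 2.0 * np.pi * f
    z = r1 + r2 / (1.0 + 1j * w * r2 * c2)
    return np.concatenate([z.real, z.imag])

# Synthetic noiseless spectrum for illustration (R1=50 ohm, R2=200 ohm, C2=1e-6 F)
f = np.logspace(1, 5, 40)
data = z_model(f, 50.0, 200.0, 1e-6)
popt, _ = curve_fit(z_model, f, data, p0=[10.0, 100.0, 1e-7])
```

With measured spectra in place of the synthetic data, the recovered R2 and C2 would be the elements that the paper then maps to crystal size via a second non-linear fit.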
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
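The equivalent-linearization idea reduces, in the single-degree-of-freedom case, to a fixed-point iteration between the linearized stiffness and the mean-square response. A minimal sketch for a Duffing oscillator c·x' + k(x + ε·x³) = w(t) under white noise of two-sided spectral density S0, using the standard result σ² = πS0/(c·k_eq) for the linearized system; all parameter values are illustrative, and this is far simpler than the paper's MDOF energy-based formulation:

```python
import math

def rms_duffing(c=0.05, k=1.0, eps=0.5, S0=0.01, tol=1e-10):
    # Iterate: mean-square response of the linearized system, then
    # re-linearized stiffness k_eq = k (1 + 3 eps E[x^2]), until converged.
    k_eq = k
    for _ in range(200):
        var = math.pi * S0 / (c * k_eq)      # E[x^2] of linearized system
        k_new = k * (1.0 + 3.0 * eps * var)  # equivalent linear stiffness
        if abs(k_new - k_eq) < tol:
            break
        k_eq = k_new
    return math.sqrt(var), k_eq

sigma_rms, k_equivalent = rms_duffing()
```

The hardening nonlinearity raises the equivalent stiffness above k and so predicts a smaller RMS displacement than the purely linear system, which is the qualitative behavior the RMS comparisons in the paper quantify.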
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.
2010-04-01
The advantages of equivalence models (EMs) of neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with the basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence", and "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several times over. Such neuroparadigms are very promising for processing, recognizing, and storing large and strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations, based on the generalized operations of fuzzy negation, t-norm, and s-norm, is elaborated. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections, together with sample-and-storage devices using pulse-width photoconverters, have allowed us to design generalized structures for realizing the family of normalized linear vector operations "equivalence"-"nonequivalence". Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, have a low supply voltage (1-3 V), low power consumption (milliwatts), and low input signal levels (microwatts), admit integrated construction, and satisfy the requirements of interconnection and cascading.
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.
2018-03-01
Biologically motivated self-learning equivalence-convolutional recurrent multilayer neural structures (BLM_SL_EC_RMNS) for clustering and recognizing image fragments are discussed. We consider these neural structures and their spatially invariant equivalence models (SIEMs), based on proposed equivalent two-dimensional image-similarity functions and the corresponding matrix-matrix (or tensor) procedures, with operations of continuous logic and nonlinear processing as the basic operations. These SIEMs can simply describe signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. The clustering efficiency in such models, and their implementation, depends on the discriminant properties of the neural elements in the hidden layers. The main model and architecture parameters and characteristics therefore depend on the applied types of nonlinear processing and on the function used for image comparison or for adaptive equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and similarity signs of fragments, and the ability to cluster and recognize image fragments efficiently even when they are strongly mutually correlated. The proposed combined learning-recognition clustering method for fragments, which accounts for their structural features, is suitable not only for binary but also for color images, and combines self-learning with the formation of weight-clustered matrix patterns. Its model is constructed on the basis of recursive continuous-logic and nonlinear processing algorithms, together with the k-means method or the winner-takes-all (WTA) method. The experimental results confirmed that fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown.
Experiments on images of different dimensions (a reference array) and on fragments of different dimensions were carried out for clustering. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges well in a small number of iterations, maps easily onto the matrix structure, and confirmed its promise. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding-decoding and recognition using self-learned cluster patterns is therefore very important; these mechanisms rely on the algorithm and principles of nonlinear processing of two-dimensional spatial image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as the "reinforcement-inhibition concept". We also demonstrate real model experiments confirming that nonlinear processing by the equivalence function allows the winner neurons to be determined and the weight matrix to be adjusted. At the end of the report, we show how to use the obtained results and propose a new, more efficient hardware architecture for SL_EC_RMNS based on matrix-tensor multipliers, and we estimate the parameters and performance of such architectures.
Equivalent circuit modeling of a piezo-patch energy harvester on a thin plate with AC-DC conversion
NASA Astrophysics Data System (ADS)
Bayik, B.; Aghakhani, A.; Basdogan, I.; Erturk, A.
2016-05-01
As an alternative to beam-like structures, piezoelectric patch-based energy harvesters attached to thin plates can be readily integrated into plate-like structures in automotive, marine, and aerospace applications, in order to directly exploit structural vibration modes of the host system without the mass loading and volumetric occupancy of cantilever attachments. In this paper, a multi-mode equivalent circuit model of a piezo-patch energy harvester integrated into a thin plate is developed and coupled with a standard AC-DC conversion circuit. Equivalent circuit parameters are obtained in two different ways: (1) from the modal analysis solution of a distributed-parameter analytical model and (2) from the finite-element numerical model of the harvester by accounting for two-way coupling. After the analytical modeling effort, a multi-mode equivalent circuit representation of the harvester is obtained via the electronic circuit simulation software SPICE. Using the SPICE software, the electromechanical response of the piezoelectric energy harvester connected to linear and nonlinear circuit elements is computed. Simulation results are validated for the standard AC-AC and AC-DC configurations. For the AC input-AC output problem, voltage frequency response functions are calculated for various resistive loads, and they show excellent agreement with the modal analysis-based analytical closed-form solution and with the finite-element model. For the standard ideal AC input-DC output case, a full-wave rectifier and a smoothing capacitor are added to the harvester circuit for conversion of the AC voltage to a stable DC voltage, which is also validated against an existing solution by treating the single-mode plate dynamics as a single-degree-of-freedom system.
A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Watts, Stephen R.
1995-01-01
This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear engine model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and the H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.
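The LQG/LTR procedure builds on an optimal state-feedback (LQR) core obtained from an algebraic Riccati equation. A minimal sketch of that core on a toy 2-state plant (the matrices below are illustrative, not the published turbofan engine model, and H-Infinity synthesis is not shown):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state plant (illustrative values only)
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)             # state weighting
R = np.array([[1.0]])     # control weighting

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal gain: u = -K x
eigs = np.linalg.eigvals(A - B @ K)    # closed-loop poles
```

The closed-loop poles land in the left half-plane by construction; robustness and loop-shaping properties, which the paper compares against H-Infinity, are assessed on top of such a design.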
Koda, Shin-ichi
2016-03-21
We theoretically investigate a possibility that the symmetry of the repetitively branched structure of light-harvesting dendrimers creates the energy gradient descending toward inner generations (layers of pigment molecules) of the dendrimers. In the first half of this paper, we define a model system using the Frenkel exciton Hamiltonian that focuses only on the topology of dendrimers and numerically show that excitation energy tends to gather at inner generations of the model system at a thermal equilibrium state. This indicates that an energy gradient is formed in the model system. In the last half, we attribute this result to the symmetry of the model system and propose two symmetry-origin mechanisms creating the energy gradient. The present analysis and proposition are based on the theory of the linear chain (LC) decomposition [S. Koda, J. Chem. Phys. 142, 204112 (2015)], which equivalently transforms the model system into a set of one-dimensional systems on the basis of the symmetry of dendrimers. In the picture of the LC decomposition, we find that energy gradient is formed both in each linear chain and among linear chains, and these two mechanisms explain the numerical results well.
A nonlinear Kalman filtering approach to embedded control of turbocharged diesel engines
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Siano, Pierluigi; Arsie, Ivan
2014-10-01
The development of efficient embedded control for turbocharged Diesel engines requires the programming of elaborate nonlinear control and filtering methods. To this end, in this paper nonlinear control for turbocharged Diesel engines is developed using differential flatness theory and the derivative-free nonlinear Kalman filter. It is shown that the dynamic model of the turbocharged Diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. To compensate for modeling errors and external disturbances, the derivative-free nonlinear Kalman filter is used and redesigned as a disturbance observer. The filter consists of the Kalman filter recursion applied to the linearized equivalent of the Diesel engine model, together with an inverse transformation based on differential flatness theory that yields estimates of the state variables of the initial nonlinear model. Once the disturbance variables are identified, it is possible to compensate for them by including an additional control term in the feedback loop. The efficiency of the proposed control method is tested through simulation experiments.
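The Kalman filter recursion run on the linearized equivalent model is the standard predict/update cycle. A minimal sketch on an illustrative 2-state random-walk example (not the Diesel engine model, and without the flatness-based inverse transformation):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # One predict/update cycle of the discrete Kalman filter recursion
    x_pred = F @ x                        # state prediction
    P_pred = F @ P @ F.T + Q              # covariance prediction
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred) # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative 2-state example: random-walk model, first state measured
F = np.eye(2)
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])
x, P = kalman_step(np.zeros(2), np.eye(2), np.array([0.5]), F, H, Q, R)
```

In the paper's scheme the same recursion runs on the Brunovsky-form model, and the disturbance terms are appended to the state vector so the filter doubles as a disturbance observer.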
Durango delta: Complications on San Juan basin Cretaceous linear strandline theme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zech, R.S.; Wright, R.
1989-09-01
The Upper Cretaceous Point Lookout Sandstone generally conforms to a predictable cyclic shoreface model in which prograding linear strandline lithosomes dominate formation architecture. Multiple transgressive-regressive cycles result in systematic repetition of lithologies deposited in beach to inner-shelf environments. Deposits of approximately five cycles are locally grouped into bundles. Such bundles extend at least 20 km along depositional strike and change from foreshore sandstone to offshore, time-equivalent Mancos mudrock in a downdip distance of 17 to 20 km. Excellent hydrocarbon reservoirs exist where well-sorted shoreface sandstone bundles stack and the formation thickens. This depositional model breaks down in the vicinity of Durango, Colorado, where a fluvial-dominated delta front and associated large distributary channels characterize the Point Lookout Sandstone and overlying Menefee Formation.
Observational tests of non-adiabatic Chaplygin gas
NASA Astrophysics Data System (ADS)
Carneiro, S.; Pigozzo, C.
2014-10-01
In a previous paper [1] it was shown that any dark sector model can be mapped into a non-adiabatic fluid formed by two interacting components, one with zero pressure and the other with equation-of-state parameter ω = -1. It was also shown that the latter does not cluster and, hence, the former is identified as the observed clustering matter. This guarantees that the dark matter power spectrum does not suffer from oscillations or instabilities. It applies in particular to the generalised Chaplygin gas, which was shown to be equivalent to interacting models at both background and perturbation levels. In the present paper we test the non-adiabatic Chaplygin gas against the Hubble diagram of type Ia supernovae, the position of the first acoustic peak in the anisotropy spectrum of the cosmic microwave background and the linear power spectrum of large scale structures. We consider two different compilations of SNe Ia, namely the Constitution and SDSS samples, both calibrated with the MLCS2k2 fitter, and for the power spectrum we use the 2dFGRS catalogue. The model parameters to be adjusted are the present Hubble parameter, the present matter density and the Chaplygin gas parameter α. The joint analysis best fit gives α ≈ - 0.5, which corresponds to a constant-rate energy flux from dark energy to dark matter, with the dark energy density decaying linearly with the Hubble parameter. The ΛCDM model, equivalent to α = 0, stands outside the 3σ confidence interval.
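The background expansion rate used in such fits follows from the generalised Chaplygin gas density, ρ(a) ∝ [A + (1 − A)·a^(−3(1+α))]^(1/(1+α)). A minimal sketch with illustrative parameters (H0, the density parameter `As`, and α below are assumptions, not the paper's best-fit values):

```python
import math

def hubble(z, H0=70.0, As=0.7, alpha=-0.5):
    # H(z) for a flat universe filled with a generalised Chaplygin gas:
    # rho(a)/rho0 = [As + (1 - As) * a^(-3(1+alpha))]^(1/(1+alpha))
    a = 1.0 / (1.0 + z)
    rho = (As + (1.0 - As) * a ** (-3.0 * (1.0 + alpha))) ** (1.0 / (1.0 + alpha))
    return H0 * math.sqrt(rho)
```

Setting α = 0 recovers the ΛCDM expansion history (a constant term plus pressureless matter), which is the nested comparison model the joint analysis places outside the 3σ interval.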
Iino, Fukuya; Takasuga, Takumi; Touati, Abderrahmane; Gullett, Brian K
2003-01-01
The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and operating municipal waste incinerators (114 data points). Regardless of the three scales and types of equipment, the different temperature profiles, sampling emissions and/or solids (fly ash), and the various chemical and physical properties of the fuels, all the PCDF plots showed highly linear correlations (R(2)>0.99). The fitting lines of the reactor and the boiler data were almost linear with slope of unity, whereas the slope of the municipal waste incinerator data was 0.86, which is caused by higher predicted values for samples with high measured TEQ. The strong correlation also implies that each of the 10 toxic PCDF congeners has a constant concentration relative to its respective total homologue concentration despite a wide range of facility types and combustion conditions. The PCDD plots showed significant scatter and poor linearity, which implies that the relative concentration of PCDD TEQ congeners is more sensitive to variations in reaction conditions than that of the PCDF congeners.
Method for extracting long-equivalent wavelength interferometric information
NASA Technical Reports Server (NTRS)
Hochberg, Eric B. (Inventor)
1991-01-01
A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
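The extraction chain can be reproduced in a toy one-dimensional form: sum fringes of two nearby spatial frequencies, apply a nonlinearity standing in for the film's recording response (modeled here as squaring, an assumption for illustration), then Fourier transform, low-pass filter, and inverse transform. The difference-frequency component that survives plays the role of the long-equivalent-wavelength fringe:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1024, endpoint=False)
fringes = np.cos(2 * np.pi * 100 * x) + np.cos(2 * np.pi * 110 * x)  # two wavelengths
recorded = fringes ** 2          # film nonlinearity creates the 10-cycle beat term

spec = np.fft.rfft(recorded)
spec[50:] = 0.0                  # low-pass spatial filter (cutoff: 50 cycles)
long_eq = np.fft.irfft(spec)     # low-frequency fringe at 110 - 100 = 10 cycles
```

A purely linear recording would leave no energy below the cutoff; it is the nonlinear sum that generates the difference-frequency fringes the process isolates.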
Learning epistatic interactions from sequence-activity data to predict enantioselectivity
NASA Astrophysics Data System (ADS)
Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael
2017-12-01
Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91 respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87 respectively. The study demonstrates that linear models perform well, however the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
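The linear-versus-degree-2 comparison can be sketched on synthetic data with a built-in pairwise interaction standing in for epistasis. Everything below is a hypothetical re-creation (the features, target function, and hyperparameters are assumptions; nothing uses the AnEH dataset):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = X[:, 0] + X[:, 1] + 3.0 * X[:, 0] * X[:, 1]   # non-additive (epistatic) term

# A linear kernel cannot represent the product term; a degree-2 polynomial
# kernel contains it in its feature space.
r2_linear = SVR(kernel="linear", C=10.0).fit(X, y).score(X, y)
r2_poly = SVR(kernel="poly", degree=2, coef0=1.0, C=10.0).fit(X, y).score(X, y)
```

The degree-2 model recovers the interaction and scores a markedly higher R², mirroring the gain the paper reports when epistatic interactions are represented in the kernel.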
A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.
Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad
2012-01-01
The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.
Linear and nonlinear equivalent circuit modeling of CMUTs.
Lohfink, Annette; Eccardt, Peter-Christian
2005-12-01
Using piston radiator and plate capacitance theory, capacitive micromachined ultrasound transducer (CMUT) membrane cells can be described by one-dimensional (1-D) model parameters. This paper describes in detail a new method that derives a 1-D model for CMUT arrays from finite-element method (FEM) simulations. A few static and harmonic FEM analyses of a single CMUT membrane cell are sufficient to derive the mechanical and electrical parameters of an equivalent piston as the moving part of the cell area. For an array of parallel-driven cells, the acoustic parameters are derived as a complex mechanical fluid impedance that depends on the membrane shape. As a main advantage, the nonlinear behavior of the CMUT can be investigated much more easily and quickly than with FEM simulations, e.g., for a design of the maximum applicable voltage depending on the input signal. The 1-D parameter model allows an easy description of the CMUT behavior in air and fluids and simplifies the investigation of wave propagation within the connecting fluid represented by FEM or transmission line matrix (TLM) models.
Peripheral refraction profiles in subjects with low foveal refractive errors.
Tabernero, Juan; Ohlendorf, Arne; Fischer, M Dominik; Bruckmann, Anna R; Schiefer, Ulrich; Schaeffel, Frank
2011-03-01
To study the variability of peripheral refraction in a population of 43 subjects with low foveal refractive errors. A scan of the refractive error in the vertical pupil meridian of the right eye of 43 subjects (age range, 18 to 80 years; foveal spherical equivalent, < ±2.5 diopters) over the central ±45° of the visual field was performed using a recently developed angular scanning photorefractor. Refraction profiles across the visual field were fitted with four different models: (1) a "flat model" (refraction approximately constant across the visual field), (2) a "parabolic model" (refraction following an approximately parabolic function), (3) a "bi-linear model" (linear change of refraction with eccentricity from the fovea to the periphery), and (4) a "box model" (a "flat" central area with a linear change in refraction beyond a certain peripheral angle). Based on the minimal residuals of each fit, the subjects were classified into one of the four models. The "box model" accurately described the peripheral refractions in about 50% of the subjects. Peripheral refractions in six subjects were better characterized by the "bi-linear model," in eight subjects by the "flat model," and in eight by the "parabolic model." Even after assignment to one of the models, the variability remained strikingly large, ranging from -0.75 to 6 diopters in the temporal retina at 45° eccentricity. The most common peripheral refraction profile (observed in nearly 50% of our population) was best described by the "box model." The high variability among subjects may limit attempts to reduce myopia progression with a uniform lens design and may rather call for a customized approach.
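The classify-by-minimal-residual idea can be sketched as follows (pure Python; the angle grid, the synthetic subject, and the breakpoint grid are illustrative, and only three of the four candidate models are fitted for brevity): each candidate profile is fitted by least squares and the model with the smallest residual sum of squares wins.

```python
# Sketch of model selection by minimal fit residuals (illustrative, not the
# study's fitting code). Models: "flat" (constant), "parabolic" (a + b*ecc^2),
# and "box" (flat centre, linear beyond a breakpoint, breakpoint grid-searched).

def solve2(a11, a12, a22, b1, b2):
    """Solve the symmetric 2x2 normal equations [[a11,a12],[a12,a22]] x = [b1,b2]."""
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

def fit_sse(feats, y):
    """Least-squares SSE for a two-feature linear model."""
    u, v = zip(*feats)
    a, b = solve2(sum(x * x for x in u), sum(x * z for x, z in zip(u, v)),
                  sum(x * x for x in v), sum(x * z for x, z in zip(u, y)),
                  sum(x * z for x, z in zip(v, y)))
    return sum((a * x + b * z - yi) ** 2 for x, z, yi in zip(u, v, y))

def classify(angles, refr):
    n = len(refr)
    mean = sum(refr) / n
    sse = {"flat": sum((r - mean) ** 2 for r in refr),
           "parabolic": fit_sse([(1.0, a * a) for a in angles], refr),
           # Box model: flat centre, linear beyond breakpoint t (grid search).
           "box": min(fit_sse([(1.0, max(0.0, abs(a) - t)) for a in angles], refr)
                      for t in range(5, 41, 5))}
    return min(sse, key=sse.get), sse

# Synthetic subject following a box profile: flat to 20 deg, then 0.1 D/deg.
angles = list(range(-45, 46, 5))
refr = [0.1 * max(0.0, abs(a) - 20.0) for a in angles]
best, _ = classify(angles, refr)
print(best)  # box
```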
Kanematsu, Nobuyuki
2009-03-07
Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving the scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between the RMS end-point displacement and the range of incident particles in water, found empirically in measurements and detailed calculations. This fact was translated into a simple linear formula, from which a scattering power that is inversely proportional to the residual range was derived. The simplicity enabled an analytical formulation for ions stopping in water, which was designed to be equivalent to the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
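The link between the two statements above can be checked numerically (illustrative constant c, arbitrary units): if the scattering power is T(s) = c / (R - s), i.e. inversely proportional to residual range, then the RMS end-point displacement y_rms = sqrt( ∫₀ᴿ T(s) (R - s)² ds ) = sqrt(c R² / 2), which is linear in range R, exactly the empirical linearity the abstract starts from.

```python
import math

# Numerical check: with T(s) = c/(R - s), the displacement integrand
# T(s)*(R - s)^2 = c*(R - s) is smooth, so a midpoint rule suffices.
def rms_displacement(R, c=1e-3, n=100000):
    ds = R / n
    total = sum(c * (R - (i + 0.5) * ds) for i in range(n)) * ds
    return math.sqrt(total)

r10 = rms_displacement(10.0)
r20 = rms_displacement(20.0)
print(r20 / r10)  # ~2.0: doubling the range doubles the RMS displacement
```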
Trading strategies for distribution company with stochastic distributed energy resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chunyu; Wang, Qi; Wang, Jianhui
2016-09-01
This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO's profit across these markets. The PDISCO's strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.
Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics
NASA Technical Reports Server (NTRS)
Wang, John T.
2010-01-01
The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR, the area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small-scale yielding condition required for an LEFM analysis to be valid, the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.
Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation as well as between a nonlinear sampled-data system and a linearized representation holds even in a stochastic framework.
Yan, Liang; Peng, Juanjuan; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming
2014-10-01
This paper proposes a novel permanent magnet linear motor possessing two movers and one stator. The two movers are isolated and can interact with the stator poles to generate independent forces and motions. Compared with conventional multiple-motor driving systems, it helps to increase the system compactness and thus improve the power density and working efficiency. The magnetic field distribution is obtained by using the equivalent magnetic circuit method. Following that, the formulation of force output considering armature reaction is carried out. Then the inductances are analyzed with the finite element method to investigate the relationship between the two movers. It is found that the mutual inductances are nearly zero, and thus the interaction between the two movers is negligible. A research prototype of the linear motor and an apparatus for measuring thrust force have been developed. Both numerical computation and experimental measurement are conducted to validate the analytical model of thrust force. Comparison shows that the analytical model matches the numerical and experimental results well.
NASA Astrophysics Data System (ADS)
Nonato, Fábio; Cavalca, Katia L.
2014-12-01
This work presents a methodology for including elastohydrodynamic (EHD) film effects in a lateral vibration model of a deep groove ball bearing, using a novel approximation of the EHD contacts by an equivalent set of a nonlinear spring and a viscous damper. The fitting of the equivalent contact model used the results of a transient multi-level finite difference EHD algorithm to adjust the dynamic parameters. The comparison between the approximated model and the finite difference results showed a suitable representation of the stationary and dynamic contact behaviors. The linear damping hypothesis could be shown to be a rough representation of the actual hysteretic behavior of the EHD contact. Nevertheless, the overall accuracy of the model was not impaired by the use of this approximation. Further on, the inclusion of the equivalent EHD contact model is formulated for both the restoring and the dissipative components of the bearing's lateral dynamics. The derived model was used to investigate the effects of rolling element bearing lubrication on the vibration response of a rotor's lumped parameter model. The fluid film stiffening effect, previously only observable by experimentation, could be quantified using the proposed model, as could the portion of the bearing damping provided by the EHD fluid film. Results from a laboratory rotor-bearing test rig were used to indirectly validate the proposed contact approximation. A finite element model of the rotor accounting for the lubricated bearing formulation adequately portrayed the frequency content of the bearing orbits observed on the test rig.
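The parameter-fitting step can be sketched with a linearised stand-in (pure Python; the stiffness/damping values and the harmonic motion are synthetic, and a linear spring replaces the paper's nonlinear one): given sampled displacement, velocity, and contact force, the equivalent stiffness k and damping c of F = k x + c v follow from a 2x2 least-squares problem.

```python
import math

# Fit F = k*x + c*v to sampled (x, v, F) triples by 2x2 least squares.
# Synthetic, linearised stand-in for the paper's nonlinear spring-damper fit.
def fit_spring_damper(samples):
    sxx = sum(x * x for x, _, _ in samples)
    sxv = sum(x * v for x, v, _ in samples)
    svv = sum(v * v for _, v, _ in samples)
    sxf = sum(x * f for x, _, f in samples)
    svf = sum(v * f for _, v, f in samples)
    det = sxx * svv - sxv * sxv
    k = (svv * sxf - sxv * svf) / det
    c = (sxx * svf - sxv * sxf) / det
    return k, c

# Synthetic harmonic contact motion with known k = 2.0e6 N/m, c = 150.0 N.s/m.
k_true, c_true = 2.0e6, 150.0
samples = []
for i in range(200):
    t = i * 1e-4
    x = 1e-5 * math.sin(200 * t)            # displacement
    v = 1e-5 * 200 * math.cos(200 * t)      # velocity
    samples.append((x, v, k_true * x + c_true * v))

k, c = fit_spring_damper(samples)
print(round(k), round(c))  # recovers 2000000 and 150
```

In the paper's setting the force samples would come from the transient EHD finite-difference solution rather than a closed-form model.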
Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.
Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard
2011-02-01
The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.
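The compressive-summation model referred to above can be sketched with a generic power-law input/output function (the exponent values here are illustrative, not the study's fitted parameters): each masker's masking in dB is passed through the compressive function, the outputs add linearly, and the sum is mapped back to dB; compression (exponent p < 1) produces excess masking relative to linear additivity.

```python
import math

# Additivity-of-masking sketch with a power-law compressive I/O function.
# p = 1 gives linear (intensity) additivity; p < 1 gives excess masking.
def combined_masking(maskings_db, p):
    s = sum(10 ** (p * m / 10.0) for m in maskings_db)
    return 10.0 * math.log10(s) / p

four = [8.0] * 4                      # four maskers, ~8 dB of masking each
linear = combined_masking(four, 1.0)  # 8 + 10*log10(4) ~ 14.0 dB
compressive = combined_masking(four, 0.2)
print(round(linear, 1), round(compressive, 1))
```

Making the exponent more compressive (smaller p) is exactly the adjustment the study needed to predict the strong excess masking observed with very short Gaussian maskers.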
Wang, Ye; Tan, Ngiap-Chuan; Tay, Ee-Guan; Thumboo, Julian; Luo, Nan
2015-07-16
This study aimed to assess the measurement equivalence of the 5-level EQ-5D (EQ-5D-5L) among the English, Chinese, and Malay versions. A convenience sample of patients with type 2 diabetes mellitus was enrolled from a public primary health care institution in Singapore. The survey questionnaire comprised the EQ-5D-5L and questions assessing participants' socio-demographic and clinical characteristics. Multiple linear regression models were used to assess the difference in EQ-5D-5L index (calculated using an interim algorithm) and EQ-visual analog scale (EQ-VAS) scores across survey language (Chinese vs. English, Malay vs. English, and Malay vs. Chinese). Measurement equivalence was examined by comparing the 90% confidence interval of difference in the EQ-5D-5L index and EQ-VAS scores with a pre-determined equivalence margin. Multiple logistic regression models were used to assess the response patterns of the 5 Likert-type items of the EQ-5D-5L across survey language. Equivalence was demonstrated between the Chinese and English versions and between the Malay and English versions of the EQ-5D-5L index scores. Equivalence was also demonstrated between the Chinese and English versions and between the Malay and Chinese versions of the EQ-VAS scores. Equivalence could not be determined between the Malay and Chinese versions of the EQ-5D-5L index score and between the Malay and English versions of the EQ-VAS score. No significant difference was found in responses to EQ-5D-5L items between any languages, except that patients who chose to complete the Chinese version were more likely to report "no problems" in mobility compared to those who completed the Malay version of the questionnaire. This study provided evidence for the measurement equivalence of the different language versions of EQ-5D-5L in Singapore.
Use of AMMI and linear regression models to analyze genotype-environment interaction in durum wheat.
Nachit, M M; Nachit, G; Ketata, H; Gauch, H G; Zobel, R W
1992-03-01
The joint durum wheat (Triticum turgidum L. var. durum) breeding program of the International Maize and Wheat Improvement Center (CIMMYT) and the International Center for Agricultural Research in the Dry Areas (ICARDA) for the Mediterranean region employs extensive multilocation testing. Multilocation testing produces significant genotype-environment (GE) interaction, which reduces the accuracy of estimating yield and selecting appropriate germ plasm. The sum of squares (SS) of the GE interaction was partitioned by linear regression techniques into joint, genotypic, and environmental regressions, and by the Additive Main effects and Multiplicative Interaction (AMMI) model into five significant Interaction Principal Component Axes (IPCA). The AMMI model was more effective in partitioning the interaction SS than the linear regression technique: the SS contained in the AMMI model was 6 times higher than the SS for all three regressions. Postdictive assessment recommended the use of the first five IPCA axes, while predictive assessment recommended AMMI1 (main effects plus IPCA1). After elimination of random variation, AMMI1 estimates for genotypic yields within sites were more precise than unadjusted means; this increased precision was equivalent to increasing the number of replications by a factor of 3.7.
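The AMMI decomposition can be sketched in a few lines (pure Python; the 3x3 yield table is invented for illustration, not the CIMMYT/ICARDA trial data): additive main effects are removed by double-centring the genotype-by-environment table, and the first multiplicative axis (IPCA1) of the interaction residuals is extracted by power iteration.

```python
# Minimal AMMI sketch: y_ge = mean + genotype + environment + sum_k lam_k u_gk v_ek.
# Main effects come from double-centring; IPCA1 from power iteration on residuals.
def ammi_ipca1(table):
    g, e = len(table), len(table[0])
    grand = sum(map(sum, table)) / (g * e)
    ge = [sum(row) / e - grand for row in table]               # genotype effects
    ee = [sum(table[i][j] for i in range(g)) / g - grand for j in range(e)]
    R = [[table[i][j] - grand - ge[i] - ee[j] for j in range(e)]
         for i in range(g)]                                    # interaction residuals
    v = [1.0, 0.5, 0.25][:e]                                   # generic start vector
    for _ in range(200):                                       # power iteration
        u = [sum(R[i][j] * v[j] for j in range(e)) for i in range(g)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / nu for x in u]
        v = [sum(R[i][j] * u[i] for i in range(g)) for j in range(e)]
        lam = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / lam for x in v]
    fitted = [[lam * u[i] * v[j] for j in range(e)] for i in range(g)]
    sse_before = sum(x * x for row in R for x in row)
    sse_after = sum((R[i][j] - fitted[i][j]) ** 2
                    for i in range(g) for j in range(e))
    return sse_before, sse_after

# Yields = grand mean + genotype + environment + rank-1 interaction (hypothetical).
gm, gv, ev = 5.0, [1.0, -1.0, 0.0], [2.0, 0.0, -2.0]
a, b = [1.0, 0.0, -1.0], [1.0, -1.0, 0.0]
Y = [[gm + gv[i] + ev[j] + a[i] * b[j] for j in range(3)] for i in range(3)]
before, after = ammi_ipca1(Y)
print(before, after)  # IPCA1 absorbs essentially all the interaction SS
```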
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent, but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low-velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.
NASA Astrophysics Data System (ADS)
Bressan, José Divo; Liewald, Mathias; Drotleff, Klaus
2017-10-01
Forming limit strain curves of conventional aluminium alloy AA6014 sheets after loading along non-linear strain paths are presented and compared with the D-Bressan macroscopic model of sheet metal rupture by a critical shear stress criterion. AA6014 exhibits good formability at room temperature and is therefore mainly employed in car body external parts manufactured at room temperature. Following Weber et al., experimental bi-linear strain paths were realized in specimens of 1 mm thickness by pre-stretching in the uniaxial and biaxial directions up to 5%, 10% and 20% strain levels before performing Nakajima tests to obtain the forming limit strain curves (FLCs). In addition, FLCs of AA6014 were predicted by employing the D-Bressan critical shear stress criterion for bi-linear strain paths, and comparisons with the experimental FLCs were analyzed and discussed. In order to obtain the material coefficients of plastic anisotropy and of strain and strain-rate hardening, and to calibrate the D-Bressan model, tensile tests at two different strain rates on specimens cut at 0°, 45° and 90° to the rolling direction, as well as bulge tests, were carried out at room temperature. The correlation of the experimental bi-linear strain path FLCs with the limit strains predicted by the D-Bressan model, assuming equivalent pre-strain calculated by the Hill 1979 yield criterion, is reasonably good.
NASA Astrophysics Data System (ADS)
Tang, F. R.; Zhang, Rong; Li, Huichao; Li, C. N.; Liu, Wei; Bai, Long
2018-05-01
The trade-off criterion is used to systematically investigate the performance features of two chemical engine models (the low-dissipation model and the endoreversible model). The optimal efficiencies, the dissipation ratios, and the corresponding ratios of the dissipation rates for the two models are analytically determined. Furthermore, the performance properties of the two kinds of chemical engines are precisely compared and analyzed, and some interesting physics is revealed. Our investigations show that a certain universal equivalence between the two models holds within the framework of linear irreversible thermodynamics, and that their differences are rooted in their different physical contexts. Our results can contribute to a precise understanding of the general features of chemical engines.
New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design
ERIC Educational Resources Information Center
von Davier, Alina A.
2008-01-01
The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…
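The linear part referred to above is the standard linear observed-score equating function (the form statistics below are hypothetical, and this sketch omits the kernel machinery of von Davier et al.): a form-X score x is mapped to the form-Y score with the same z-score, lin(x) = mu_Y + (sigma_Y / sigma_X)(x - mu_X).

```python
# Linear observed-score equating: match z-scores across forms X and Y.
def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Hypothetical form statistics.
mu_x, sd_x, mu_y, sd_y = 50.0, 10.0, 47.0, 12.0
print(linear_equate(50.0, mu_x, sd_x, mu_y, sd_y))  # mean maps to mean: 47.0
print(linear_equate(60.0, mu_x, sd_x, mu_y, sd_y))  # +1 SD maps to 47 + 12 = 59.0
```

An equipercentile function applies the same idea percentile-by-percentile rather than through the first two moments only, which is why its deviation from this line is the "nonlinear part" of the decomposition.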
NASA Technical Reports Server (NTRS)
Sheen, Jyh-Jong; Bishop, Robert H.
1992-01-01
The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.
Response of a tissue equivalent proportional counter to neutrons
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Robbins, D. E.; Gibbons, F.; Braby, L. A.
2002-01-01
The absorbed dose as a function of lineal energy was measured at the CERN-EC Reference-field Facility (CERF) using a 512-channel tissue equivalent proportional counter (TEPC), and the neutron dose equivalent response was evaluated. Although there are some differences, the measured dose equivalent is in agreement with that measured by the 16-channel HANDI tissue equivalent counter. A comparison of TEPC measurements with those made by a silicon solid-state detector for low linear energy transfer particles produced by the same beam is presented. The measurements show that about 4% of the dose equivalent is delivered by particles heavier than protons generated in the conducting tissue equivalent plastic. © 2002 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Xi; Lu, Jinling; Yuan, Shifei; Yang, Jun; Zhou, Xuan
2017-03-01
This paper proposes a novel parameter identification method for the lithium-ion (Li-ion) battery equivalent circuit model (ECM) considering the electrochemical properties. An improved pseudo-two-dimensional (P2D) model is established on the basis of partial differential equations (PDEs); the electrolyte potential is simplified from a nonlinear to a linear expression, while the terminal voltage can be divided into the electrolyte potential, open circuit voltage (OCV), overpotential of the electrodes, internal resistance drop, and so on. The model order reduction is implemented by simplifying the PDEs using the Laplace transform, inverse Laplace transform, Pade approximation, etc. A unified second-order transfer function between cell voltage and current is obtained for comparability with that of the ECM. The final objective is to obtain the relationship between the ECM resistances/capacitances and electrochemical parameters so that, in various conditions, ECM precision can be improved by incorporating the battery's interior properties for further applications, e.g., SOC estimation. Finally, simulation and experimental results prove the correctness and validity of the proposed methodology.
2011-01-01
Background: Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results: Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions: A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
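The difference-versus-equivalence logic can be sketched as follows (normal-approximation confidence intervals and invented numbers; the paper derives the equivalence limits from a linear mixed model over the reference varieties rather than taking them as given): a difference is declared when the 95% CI of the mean difference excludes zero, and equivalence when the 90% CI lies inside the equivalence limits, so all four combinations are possible.

```python
# Difference test (CI excludes 0) vs equivalence test (CI inside +/- eq_limit),
# both on the GM-minus-counterpart mean difference. Illustrative numbers only.
def classify(mean_diff, se, eq_limit, z90=1.645, z95=1.96):
    lo90, hi90 = mean_diff - z90 * se, mean_diff + z90 * se
    lo95, hi95 = mean_diff - z95 * se, mean_diff + z95 * se
    different = lo95 > 0 or hi95 < 0                      # 0 outside 95% CI
    equivalent = -eq_limit <= lo90 and hi90 <= eq_limit   # 90% CI inside limits
    return different, equivalent

# Small difference, well inside the limits: not different, equivalent.
print(classify(0.05, 0.04, 0.30))  # (False, True)
# Large difference that also exceeds the limits: different, not equivalent.
print(classify(0.50, 0.08, 0.30))  # (True, False)
```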
NASA Technical Reports Server (NTRS)
Douglass, A. R.; Stolarski, R. S.; Strahan, S. E.; Polansky, B. C.
2006-01-01
The sensitivity of Arctic ozone loss to polar stratospheric cloud volume (V(sub PSC)) and chlorine and bromine loading is explored using chemistry and transport models (CTMs). A simulation using multi-decadal output from a general circulation model (GCM) in the Goddard Space Flight Center (GSFC) CTM complements one recycling a single year's GCM output in the Global Modeling Initiative (GMI) CTM. Winter polar ozone loss in the GSFC CTM depends on equivalent effective stratospheric chlorine (EESC) and polar vortex characteristics (temperatures, descent, isolation, and polar stratospheric cloud amount). Polar ozone loss in the GMI CTM depends only on changes in EESC, as the dynamics repeat annually. The GSFC CTM simulation reproduces a linear relationship between ozone loss and V(sub PSC) derived from observations for 1992-2003, which holds for EESC within approx. 85% of its maximum (approx. 1990-2020). The GMI simulation shows that ozone loss varies linearly with EESC for constant, high V(sub PSC).
AC impedance analysis of polypyrrole thin films
NASA Technical Reports Server (NTRS)
Penner, Reginald M.; Martin, Charles R.
1987-01-01
The AC impedance spectra of thin polypyrrole films were obtained at open circuit potentials from -0.4 to 0.4 V vs SCE. Two limiting cases are discussed for which simplified equivalent circuits are applicable. At very positive potentials, the predominantly nonfaradaic AC impedance of polypyrrole is very similar to that observed previously for finite porous metallic films. Modeling of the data with the appropriate equivalent circuit permits the effective pore diameter and pore number density of the oxidized film to be estimated. At potentials from -0.4 to -0.3 V, the polypyrrole film is essentially not electronically conductive, and diffusion of polymer oxidized sites with their associated counterions can be assumed to be linear from the film/substrate electrode interface. The equivalent circuit for the polypyrrole film at these potentials is that previously described for metal oxide, lithium intercalation thin films. Using this model, counterion diffusion coefficients are determined for both semi-infinite and finite diffusion domains. In addition, the limiting low-frequency resistance and capacitance of the polypyrrole thin films were determined and compared to those obtained previously for thicker films of the polymer. The origin of the observed potential dependence of these low-frequency circuit components is discussed.
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of MPKCs. In 1998, Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms to generate them. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Piecewise affine models of chaotic attractors: the Rossler and Lorenz systems.
Amaral, Gleison F V; Letellier, Christophe; Aguirre, Luis Antonio
2006-03-01
This paper proposes a procedure by which it is possible to synthesize Rossler [Phys. Lett. A 57, 397-398 (1976)] and Lorenz [J. Atmos. Sci. 20, 130-141 (1963)] dynamics by means of only two affine linear systems and an abrupt switching law. Comparison of different (valid) switching laws suggests that parameters of such a law behave as codimension one bifurcation parameters that can be changed to produce various dynamical regimes equivalent to those observed with the original systems. Topological analysis is used to characterize the resulting attractors and to compare them with the original attractors. The paper provides guidelines that are helpful to synthesize other chaotic dynamics by means of switching affine linear systems.
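The mechanics of switching among affine linear subsystems can be sketched with Chua's circuit (a stand-in, not the paper's Rossler/Lorenz constructions: its piecewise-linear element switches among three affine regions rather than the paper's two, and forward Euler with the classic double-scroll parameters is used purely for illustration).

```python
# Chua's circuit as a switched affine system: the piecewise-linear element
# h(x) has slope m0 inside |x| <= 1 and slope m1 outside, so the flow is
# affine within each region and the region boundary acts as the switching law.
def chua_step(state, dt, alpha=9.0, beta=100.0 / 7.0, m0=-8.0 / 7.0, m1=-5.0 / 7.0):
    x, y, z = state
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return (x + dt * alpha * (y - x - h),
            y + dt * (x - y + z),
            z + dt * (-beta * y))

state = (0.1, 0.0, 0.0)
traj = []
for _ in range(40000):            # forward Euler, dt = 0.005
    state = chua_step(state, 0.005)
    traj.append(state)

# The trajectory should remain bounded while visiting the outer affine
# regions (|x| > 1), i.e. the switching law is actually exercised.
outer = sum(1 for x, _, _ in traj if abs(x) > 1.0)
print(outer > 0, max(abs(x) for x, _, _ in traj) < 10.0)
```

In the paper's procedure the affine subsystem matrices and the switching surface are instead synthesized so that the composed flow reproduces a target (Rossler- or Lorenz-like) attractor.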
Exact Solution of Klein-Gordon and Dirac Equations with Snyder-de Sitter Algebra
NASA Astrophysics Data System (ADS)
Merad, M.; Hadj Moussa, M.
2018-01-01
In this paper, we present the exact solution of the (1+1)-dimensional relativistic Klein-Gordon and Dirac equations with linear vector and scalar potentials in the framework of the deformed Snyder-de Sitter model. By introducing some changes of variables, we show that a one-dimensional linear potential for the relativistic system in a deformed space can be equivalent to the trigonometric Rosen-Morse potential in a regular space. In both cases, we determine explicitly the energy eigenvalues and their corresponding eigenfunctions, expressed in terms of Romanovski polynomials. The limiting cases α1 → 0 and α2 → 0 are analyzed and compared with those of the literature.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
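The ED transform above can be illustrated compactly: each parameter is mapped through a cumulative-distribution estimate so every coordinate lies on a linear [0, 1] scale, after which distances are plain Euclidean. The paper uses kernel-smoothed CDF estimates; this sketch substitutes the empirical CDF and made-up catalogue values.

```python
# Sketch of the equivalent-dimension (ED) transform: map each parameter
# through an (empirical) CDF so all coordinates live on [0, 1].
def to_equivalent_dimension(values):
    rank = {v: i for i, v in enumerate(sorted(values))}   # assumes distinct values
    n = len(values)
    return [(rank[v] + 1) / (n + 1) for v in values]

mags = [2.1, 3.4, 2.8, 5.0, 2.3]       # illustrative magnitudes
log_dt = [0.2, 1.5, 0.9, 2.4, 0.5]     # illustrative log inter-event times

ed = list(zip(to_equivalent_dimension(mags), to_equivalent_dimension(log_dt)))

# Distance between events in ED space is ordinary Euclidean distance.
d01 = ((ed[0][0] - ed[1][0]) ** 2 + (ed[0][1] - ed[1][1]) ** 2) ** 0.5
```

Because both coordinates are CDF values, magnitude and inter-event time contribute on commensurate scales, which is the point of the construction.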
ERIC Educational Resources Information Center
Arntzen, Erik; Grondahl, Terje; Eilifsen, Christoffer
2010-01-01
Previous studies comparing groups of subjects have indicated differential probabilities of stimulus equivalence outcome as a function of training structures. One-to-Many (OTM) and Many-to-One (MTO) training structures seem to produce positive outcomes on tests for stimulus equivalence more often than a Linear Series (LS) training structure does.…
NASA Astrophysics Data System (ADS)
Ryzhikov, I. S.; Semenkin, E. S.; Akhmedova, Sh A.
2017-02-01
A novel order reduction method for linear time invariant systems is described. The method is based on reducing the initial problem to an optimization one, using the proposed model representation, and solving the problem with an efficient optimization algorithm. The proposed method of determining the model allows all the parameters of the model with lower order to be identified and by definition, provides the model with the required steady-state. As a powerful optimization tool, the meta-heuristic Co-Operation of Biology-Related Algorithms was used. Experimental results proved that the proposed approach outperforms other approaches and that the reduced order model achieves a high level of accuracy.
Observational tests of non-adiabatic Chaplygin gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carneiro, S.; Pigozzo, C., E-mail: saulo.carneiro@pq.cnpq.br, E-mail: cpigozzo@ufba.br
2014-10-01
In a previous paper [1] it was shown that any dark sector model can be mapped into a non-adiabatic fluid formed by two interacting components, one with zero pressure and the other with equation-of-state parameter ω = -1. It was also shown that the latter does not cluster and, hence, the former is identified as the observed clustering matter. This guarantees that the dark matter power spectrum does not suffer from oscillations or instabilities. It applies in particular to the generalised Chaplygin gas, which was shown to be equivalent to interacting models at both background and perturbation levels. In the present paper we test the non-adiabatic Chaplygin gas against the Hubble diagram of type Ia supernovae, the position of the first acoustic peak in the anisotropy spectrum of the cosmic microwave background and the linear power spectrum of large scale structures. We consider two different compilations of SNe Ia, namely the Constitution and SDSS samples, both calibrated with the MLCS2k2 fitter, and for the power spectrum we use the 2dFGRS catalogue. The model parameters to be adjusted are the present Hubble parameter, the present matter density and the Chaplygin gas parameter α. The joint analysis best fit gives α ≈ - 0.5, which corresponds to a constant-rate energy flux from dark energy to dark matter, with the dark energy density decaying linearly with the Hubble parameter. The ΛCDM model, equivalent to α = 0, stands outside the 3σ confidence interval.
A comparison of linear and non-linear data assimilation methods using the NEMO ocean model
NASA Astrophysics Data System (ADS)
Kirchgessner, Paul; Tödter, Julian; Nerger, Lars
2015-04-01
The assimilation behavior of the widely used LETKF is compared with that of the Equivalent Weights Particle Filter (EWPF) in a data assimilation application with an idealized configuration of the NEMO ocean model. The experiments show how the different filter methods behave when applied to a realistic ocean test case. The LETKF is an ensemble-based Kalman filter, which assumes Gaussian error distributions and hence implicitly requires model linearity. In contrast, the EWPF is a fully nonlinear data assimilation method that does not rely on a particular error distribution. The EWPF has been demonstrated to work well in highly nonlinear situations, such as a model solving a barotropic vorticity equation, but it is still unknown how its assimilation performance compares to that of ensemble Kalman filters in realistic situations. Twin assimilation experiments with a square basin configuration of the NEMO model are performed. The configuration simulates a double gyre, which exhibits significant nonlinearity. The LETKF and EWPF are both implemented in PDAF (Parallel Data Assimilation Framework, http://pdaf.awi.de), which ensures identical experimental conditions for both filters. To account for the nonlinearity, the assimilation skill of the two methods is assessed using statistical metrics such as the CRPS and histograms.
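The CRPS mentioned above is a probabilistic skill score that reduces to the absolute error for a deterministic forecast, which makes it suitable for comparing a Kalman-type ensemble with a particle filter. A minimal sketch using the standard kernel (energy-form) estimator for an ensemble:

```python
# Ensemble CRPS via the kernel estimator:
#   CRPS = E|X - y| - 0.5 * E|X - X'|
# where X, X' are independent draws from the forecast ensemble and y the
# verifying observation. Smaller is better.
def crps_ensemble(members, obs):
    n = len(members)
    term1 = sum(abs(m - obs) for m in members) / n
    term2 = sum(abs(a - b) for a in members for b in members) / (2 * n * n)
    return term1 - term2
```

For a one-member "ensemble" the second term vanishes and the score collapses to the absolute error, so deterministic and probabilistic forecasts can be ranked on the same scale.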
On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra
NASA Technical Reports Server (NTRS)
Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.
2007-01-01
This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and single event effect rates to estimate radiation effects on electronics. The errors in the estimations of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in interplanetary quiet-time environment regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable and the use of lineal energy spectra for single event rate estimation should be avoided.
NASA Astrophysics Data System (ADS)
Avitabile, Peter; O'Callahan, John
2009-01-01
Generally, response analysis of systems containing discrete nonlinear connection elements, such as typical mounting connections, requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may be essentially linear compared to the total assembled system. However, joining these linear subsystems with highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.
Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon
2010-02-01
A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various first- and second-generation antipsychotic medications, and these charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear, with R² values greater than 0.9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) × (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
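The dose-year bookkeeping above is simple enough to sketch. The linear factor of 50 for haloperidol follows from the 100 mg/2 mg anchors quoted in the abstract; the paper's actual conversions are drug-specific regression formulas (with a power transformation), so the table below is a hypothetical placeholder.

```python
# Hypothetical chlorpromazine-equivalent factors (mg CPZ per mg drug);
# derived only from the 100 mg CPZ == 2 mg haloperidol anchor above.
CPZ_EQUIV_PER_MG = {"chlorpromazine": 1.0, "haloperidol": 50.0}

def dose_years(drug, mg_per_day, years):
    cpz_mg = mg_per_day * CPZ_EQUIV_PER_MG[drug]
    return (cpz_mg / 100.0) * years   # 100 mg/day CPZ-equivalent for 1 yr = 1 dose-year

# Cumulative lifetime exposure is the sum over treatment episodes.
total = dose_years("chlorpromazine", 100, 1.0) + dose_years("haloperidol", 2, 0.5)
```

One year at 100 mg/day chlorpromazine plus half a year at 2 mg/day haloperidol sums to 1.5 dose-years.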
Dumas, J L; Lorchel, F; Perrot, Y; Aletti, P; Noel, A; Wolf, D; Courvoisier, P; Bosset, J F
2007-03-01
The goal of our study was to quantify the limits of the EUD models for use in score functions in inverse planning software and in clinical application, focusing on oesophagus cancer irradiation. Our evaluation was based on theoretical dose volume histograms (DVH), which we analyzed using volumetric and linear quadratic EUD models, average and maximum dose concepts, the linear quadratic model, and the differential area between each DVH. We evaluated the models using theoretical and more complex DVHs for the regions of interest. We studied three types of DVH for the target volume: the first followed the ICRU dose homogeneity recommendations; the second was built from the first's requirements with the same average dose imposed in all cases; the third was truncated by a small dose hole. We also built theoretical DVHs for the organs at risk, in order to evaluate the limits of, and the ways to use, both EUD(1) and EUD/LQ models, comparing them to the traditional ways of scoring a treatment plan. For each volume of interest we built theoretical treatment plans with differences in fractionation. We concluded that both volumetric and linear quadratic EUDs should be used. Volumetric EUD(1) takes into account neither hot-cold spot compensation nor differences in fractionation, but it is more sensitive to an increase in the irradiated volume. With linear quadratic EUD/LQ, a volumetric analysis of fractionation variation effects can be performed.
Zhou, Gaochao; Tao, Xudong; Shen, Ze; Zhu, Guanghao; Jin, Biaobing; Kang, Lin; Xu, Weiwei; Chen, Jian; Wu, Peiheng
2016-01-01
We propose a general framework for the design of a perfect linear polarization converter that works in transmission mode. Using an intuitive picture based on the method of bi-directional polarization mode decomposition, it is shown that when the device simultaneously possesses two complementary symmetry planes, one equivalent to a perfect electric conducting surface and the other to a perfect magnetic conducting surface, linear polarization conversion can occur with an efficiency of 100% in the absence of absorptive losses. The proposed framework is validated by two design examples that operate near 10 GHz, where the numerical, experimental and analytic results are in good agreement. PMID:27958313
Solvent effects in time-dependent self-consistent field methods. I. Optical response calculations
Bjorgaard, J. A.; Kuzmenko, V.; Velizhanin, K. A.; ...
2015-01-22
In this study, we implement and examine three excited state solvent models in time-dependent self-consistent field methods using a consistent formalism which unambiguously shows their relationship. These are the linear response, state specific, and vertical excitation solvent models. Their effects on energies calculated with the equivalent of COSMO/CIS/AM1 are given for a set of test molecules with varying excited state charge transfer character. The resulting solvent effects are explained qualitatively using a dipole approximation. It is shown that the fundamental differences between these solvent models are reflected by the character of the calculated excitations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...
Adaptive Channel Measurement Study
1975-09-01
[Abstract excerpt garbled in extraction. Recoverable content: behavior of P3 as a function of step size and iteration number, with and without noise, using the LMS algorithm and a quadratic model at a fade; when the input is real, one term vanishes and the linear term is a filtered version of the input signal, with a filter identical to the lowpass equivalent of the original; the remaining derivation (through Eq. (2.71)) is not recoverable.]
Galactic cosmic ray radiation levels in spacecraft on interplanetary missions
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Nealy, J. E.; Townsend, L. W.; Wilson, J. W.; Wood, J.S.
1994-01-01
Using the Langley Research Center Galactic Cosmic Ray (GCR) transport computer code (HZETRN) and the Computerized Anatomical Man (CAM) model, crew radiation levels inside manned spacecraft on interplanetary missions are estimated. These radiation-level estimates include particle fluxes, LET (Linear Energy Transfer) spectra, absorbed dose, and dose equivalent within various organs of interest in GCR protection studies. Changes in these radiation levels resulting from the use of various different types of shield materials are presented.
Massive gravity in three dimensions.
Bergshoeff, Eric A; Hohm, Olaf; Townsend, Paul K
2009-05-22
A particular higher-derivative extension of the Einstein-Hilbert action in three spacetime dimensions is shown to be equivalent at the linearized level to the (unitary) Pauli-Fierz action for a massive spin-2 field. A more general model, which also includes "topologically-massive" gravity as a special case, propagates the two spin-2 helicity states with different masses. We discuss the extension to massive N-extended supergravity, and we present a "cosmological" extension that admits an anti-de Sitter vacuum.
Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops
NASA Technical Reports Server (NTRS)
Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram
2017-01-01
The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
NASA Astrophysics Data System (ADS)
Torres Cedillo, Sergio G.; Bonello, Philip
2016-01-01
The high pressure (HP) rotor in an aero-engine assembly cannot be accessed under operational conditions because of the restricted space for instrumentation and high temperatures. This motivates the development of a non-invasive inverse problem approach for unbalance identification and balancing, requiring prior knowledge of the structure. Most such methods in the literature necessitate linear bearing models, making them unsuitable for aero-engine applications which use nonlinear squeeze-film damper (SFD) bearings. A previously proposed inverse method for nonlinear rotating systems was highly limited in its application (e.g. assumed circular centered SFD orbits). The methodology proposed in this paper overcomes such limitations. It uses the Receptance Harmonic Balance Method (RHBM) to generate the backward operator using measurements of the vibration at the engine casing, provided there is at least one linear connection between rotor and casing, apart from the nonlinear connections. A least-squares solution yields the equivalent unbalance distribution in prescribed planes of the rotor, which is consequently used to balance it. The method is validated on distinct rotordynamic systems using simulated casing vibration readings. The method is shown to provide effective balancing under hitherto unconsidered practical conditions. The repeatability of the method, as well as its robustness to noise, model uncertainty and balancing errors, are satisfactorily demonstrated and the limitations of the process discussed.
NASA Astrophysics Data System (ADS)
Gomez, Jamie; Nelson, Ruben; Kalu, Egwu E.; Weatherspoon, Mark H.; Zheng, Jim P.
2011-05-01
An equivalent circuit model (ECM) of a high-power Li-ion battery that accounts for both temperature and state of charge (SOC) effects known to influence battery performance is presented. Electrochemical impedance measurements of a commercial high-power Li-ion battery, obtained in the temperature range 20 to 50 °C at various SOC values, were used to develop a simple ECM, which was used in combination with a non-linear least squares fitting procedure with thirteen parameters for the analysis of the Li-ion cell. The experimental results show that the solution and charge transfer resistances decreased with increasing cell operating temperature and decreasing SOC. On the other hand, the Warburg admittance increased with increasing temperature and decreasing SOC. Model correlations capable of being used in process control algorithms are presented for the observed impedance behavior with respect to temperature and SOC. The predicted model parameters for the impedance elements Rs, Rct and Y0 show a low variance of 5% when compared to the experimental data, indicating good statistical agreement between the correlation model and the experimental values.
Obusek, J P; Holt, K G; Rosenstein, R M
1995-07-01
Human leg swinging is modeled as the harmonic motion of a hybrid mass-spring pendulum. The cycle period is determined by a gravitational component and an elastic component, which is provided by the attachment of a soft-tissue/muscular spring of variable stiffness. To confirm that the stiffness of the spring changes with alterations in the inertial properties of the oscillator and that stiffness is relevant for the control of cycle period, we conducted this study in which the simple pendulum equivalent length was experimentally manipulated by adding mass to the ankle of a comfortably swinging leg. Twenty-four young, healthy adults were videotaped as they swung their right leg under four conditions: no added mass and with masses of 2.27, 4.55, and 6.82 kg added to the ankle. Strong, linear relationships between the acceleration and displacement of the swinging leg within subjects and conditions were found, confirming the motion's harmonic nature. Cycle period significantly increased with the added mass. However, the observed increases were not as large as would be predicted by the induced changes in the gravitational component alone. These differences were interpreted as being due to increases in the active muscular stiffness. Significant linear increases in the elastic component (and hence stiffness) were demonstrated with increases in the simple pendulum equivalent length in 20 of the individual subjects, with r² values ranging between 0.89 and 0.99. Significant linear relationships were also demonstrated between the elastic and gravitational components in 22 subjects, with individual r² values between 0.90 and 0.99. (ABSTRACT TRUNCATED AT 250 WORDS)
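The hybrid pendulum's small-angle period can be written down directly: a gravitational restoring term m·g·L plus an added elastic stiffness k acting on the same angular coordinate. The point-mass inertia I = m·L² below is a simplifying assumption; the study works with the leg's actual inertial properties.

```python
import math

# Small-angle period of a hybrid mass-spring pendulum:
#   I * theta'' = -(m*g*L + k) * theta  =>  T = 2*pi*sqrt(I / (m*g*L + k))
# Point-mass inertia I = m*L**2 is assumed for this sketch.
def hybrid_pendulum_period(m, L, k, g=9.81):
    return 2.0 * math.pi * math.sqrt((m * L ** 2) / (m * g * L + k))
```

With k = 0 this reduces to the simple gravitational pendulum, T = 2π√(L/g); a nonzero k shortens the period, which is why an unchanged-or-smaller period after adding ankle mass implies an increase in active muscular stiffness.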
NASA Astrophysics Data System (ADS)
Serrat-Capdevila, A.; Valdes, J. B.
2005-12-01
An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses Stochastic Dynamic Programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the obtained operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon International Reservoir System as part of a binational dynamic modeling effort to develop a decision support system tool for better management of the water resources in the Lower Rio Grande Basin, currently enduring a severe drought.
Primal/dual linear programming and statistical atlases for cartilage segmentation.
Glocker, Ben; Komodakis, Nikos; Paragios, Nikos; Glaser, Christian; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel approach for automatic segmentation of cartilage using a statistical atlas and efficient primal/dual linear programming. To this end, a novel statistical atlas construction is considered from registered training examples. Segmentation is then solved through registration which aims at deforming the atlas such that the conditional posterior of the learned (atlas) density is maximized with respect to the image. Such a task is reformulated using a discrete set of deformations and segmentation becomes equivalent to finding the set of local deformations which optimally match the model to the image. We evaluate our method on 56 MRI data sets (28 used for the model and 28 used for evaluation) and obtain a fully automatic segmentation of patella cartilage volume with an overlap ratio of 0.84 with a sensitivity and specificity of 94.06% and 99.92%, respectively.
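The evaluation figures quoted above (overlap ratio, sensitivity, specificity) are all computable from a voxelwise confusion matrix. A minimal sketch on flattened binary masks; the overlap ratio here is taken as the Jaccard index, which is an assumption since the abstract does not spell out its formula.

```python
# Voxelwise segmentation metrics from binary masks (flattened 0/1 lists).
def seg_metrics(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    overlap = tp / (tp + fp + fn)     # Jaccard-style overlap ratio (assumed)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return overlap, sensitivity, specificity
```

Note that with large background volumes specificity is dominated by true negatives, which is why it sits near 99.9% while the overlap ratio is the more discriminating number.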
Generalised Transfer Functions of Neural Networks
NASA Astrophysics Data System (ADS)
Fung, C. F.; Billings, S. A.; Zhang, H.
1997-11-01
When artificial neural networks are used to model non-linear dynamical systems, the system structure which can be extremely useful for analysis and design, is buried within the network architecture. In this paper, explicit expressions for the frequency response or generalised transfer functions of both feedforward and recurrent neural networks are derived in terms of the network weights. The derivation of the algorithm is established on the basis of the Taylor series expansion of the activation functions used in a particular neural network. This leads to a representation which is equivalent to the non-linear recursive polynomial model and enables the derivation of the transfer functions to be based on the harmonic expansion method. By mapping the neural network into the frequency domain information about the structure of the underlying non-linear system can be recovered. Numerical examples are included to demonstrate the application of the new algorithm. These examples show that the frequency response functions appear to be highly sensitive to the network topology and training, and that the time domain properties fail to reveal deficiencies in the trained network structure.
A complete graphical criterion for the adjustment formula in mediation analysis.
Shpitser, Ilya; VanderWeele, Tyler J
2011-03-04
Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
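For reference, the effect decomposition at stake admits the standard counterfactual form (standard notation, not specific to this paper), where Y(a, m) denotes the outcome under exposure a and mediator level m, and M(a) the mediator under exposure a:

```latex
\begin{aligned}
\mathrm{TE} &= E\left[Y(1) - Y(0)\right] \\
            &= \underbrace{E\left[Y(1, M(0)) - Y(0, M(0))\right]}_{\text{natural direct effect}}
             + \underbrace{E\left[Y(1, M(1)) - Y(1, M(0))\right]}_{\text{natural indirect effect}} .
\end{aligned}
```

The decomposition holds by composition, Y(a) = Y(a, M(a)), and remains valid in the presence of interactions or non-linear models, which is what makes natural effects the objects of interest.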
Dosimetric Comparison in Breast Radiotherapy of 4 MV and 6 MV on Physical Chest Simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donato da Silva, Sabrina; Passos Ribeiro Campos, Tarcisio; Batista Nogueira, Luciana
2015-07-01
According to the World Health Organization (2014), breast cancer is the main cause of death by cancer in women worldwide. The biggest challenge of radiotherapy in the treatment of cancer is to deposit the entire prescribed dose homogeneously in the breast while sparing the surrounding tissue. In this context, this paper aimed at evaluating and comparing internal dose distributions in the mammary gland based on experimental procedures submitted to two distinct energy spectra produced in breast cancer radiotherapy. The methodology consisted of reproducing opposite parallel fields used in the treatment of breast tumors in a chest phantom. This simulator with synthetic breast, composed of equivalent tissue material (TE), was previously developed by the NRI Research Group (UFMG). A computed tomography (CT) scan of the simulator was obtained beforehand. The radiotherapy treatment planning systems (TPS) for the chest phantom were the ECLIPSE system from Varian Medical Systems and the CAT 3D system from MEVIS. The irradiations were reproduced on a Varian linear accelerator, model SL-20 Precise, at 6 MV, and on a Varian Clinac 6x linear accelerator, model SN11, at 4 MV. Calibrations of absorbed dose versus optical density of radiochromic films were generated in order to obtain experimental dosimetric distributions for the films positioned within the glandular and skin equivalent tissues of the chest phantom. The spatial dose distribution showed equivalence with the TPS for measurements performed in the 6 MV spectrum. The average dose found in radiochromic films placed on the skin ranged from 49 to 79% of the prescribed dose, and from 39 to 49% in the mammary areola. Dosimetric comparisons between the 4 and 6 MV spectra, keeping the field geometry constant in the same phantom, are presented, showing their equivalence in breast radiotherapy, and the variations are discussed.
To sum up, the dose distribution reached the expected breast dose of 180 cGy over a wide range of the film in the glandular TE for both spectra. (authors)
NASA Technical Reports Server (NTRS)
Sloss, J. M.; Kranzler, S. K.
1972-01-01
The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Wilson, T. G.
1982-01-01
The present investigation is concerned with an important class of power conditioning networks, taking into account self-oscillating dc-to-square-wave transistor inverters. The considered circuits are widely used both as the principal power converting and processing means in many systems and as low-power analog-to-discrete-time converters for controlling the switching of the output-stage semiconductors in a variety of power conditioning systems. Aspects of piecewise-linear modeling are discussed, taking into consideration component models, and an equivalent-circuit model. Questions of singular point analysis and state plane representation are also investigated, giving attention to limit cycles, starting circuits, the region of attraction, a hard oscillator, and a soft oscillator.
Guevara, V R
2004-02-01
A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. Optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function income over feed cost in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
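The core of the nonlinear formulation above is that margin over feed cost is a concave quadratic in energy density, so the optimum is interior rather than at a bound. A sketch with illustrative coefficients (not the fitted broiler data), solving the quadratic in closed form instead of with a solver:

```python
# Maximize a concave quadratic profit response f(E) = a*E**2 + b*E + c
# in dietary energy density E; the vertex E* = -b / (2a) is the optimum.
def optimal_energy_density(a, b, c):
    assert a < 0, "profit response must be concave for an interior optimum"
    e_star = -b / (2.0 * a)
    return e_star, a * e_star ** 2 + b * e_star + c

# Illustrative coefficients: margin peaks at an intermediate energy density.
e_opt, margin = optimal_energy_density(-0.02, 128.0, -200000.0)
```

A linear program, by contrast, can only push energy density to a constraint boundary, which is why it cannot capture the performance response described in the abstract.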
2011-09-01
…with the bilinear plasticity relation. We used the bilinear relation, which allowed a full range of hardening from isotropic to kinematic. Table 12: Verification of the Weight Function Method for Single Corner Crack at a Hole in an Infinite … To determine the Young's modulus, i.e. the slope of the linear region of the curve, the experimental data are curve fit.
A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence
NASA Technical Reports Server (NTRS)
Sidwell, K.
1976-01-01
An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.
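The equivalent linearization step can be illustrated with a minimal sketch. For a zero-mean Gaussian response x, the equivalent stiffness minimizing the mean-square error E[(f(x) - k_eq x)^2] is k_eq = E[x f(x)] / E[x^2]. The bilinear spring parameters and response level below are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

k1, k2, x_y = 1.0, 0.3, 1.0   # initial stiffness, post-yield stiffness, yield displacement (assumed)

def bilinear_force(x):
    """Symmetric bilinear spring: stiffness k1 for |x| <= x_y, k2 beyond."""
    core = np.clip(x, -x_y, x_y)
    return k1 * core + k2 * (x - core)

# Statistical linearization: sample the assumed Gaussian response and form
# k_eq = E[x f(x)] / E[x^2] by Monte Carlo.
sigma = 2.0                    # assumed RMS response level
x = rng.normal(0.0, sigma, 200_000)
k_eq = np.mean(x * bilinear_force(x)) / np.mean(x * x)
print(f"equivalent linear stiffness: {k_eq:.3f}")
```

As expected, the equivalent stiffness falls strictly between the post-yield and initial stiffnesses, approaching k1 when the response stays inside the yield displacement and k2 when it is mostly beyond it.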
Estimates of internal-dose equivalent from inhalation and ingestion of selected radionuclides
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunning, D.E.
1982-01-01
This report presents internal radiation dose conversion factors for radionuclides of interest in environmental assessments of nuclear fuel cycles. This volume provides an updated summary of estimates of committed dose equivalent for radionuclides considered in three previous Oak Ridge National Laboratory (ORNL) reports. Intakes by inhalation and ingestion are considered. The International Commission on Radiological Protection (ICRP) Task Group Lung Model has been used to simulate the deposition and retention of particulate matter in the respiratory tract. Results corresponding to activity median aerodynamic diameters (AMAD) of 0.3, 1.0, and 5.0 μm are given. The gastrointestinal (GI) tract has been represented by a four-segment catenary model with exponential transfer of radioactivity from one segment to the next. Retention of radionuclides in systemic organs is characterized by linear combinations of decaying exponential functions, as recommended in ICRP Publication 30. The first-year annual dose rate, maximum annual dose rate, and fifty-year dose commitment per microcurie intake of each radionuclide are given for selected target organs and for the effective dose equivalent. These estimates include contributions from specified source organs plus the systemic activity residing in the rest of the body; cross irradiation due to penetrating radiations has been incorporated into these estimates. 15 references.
Testing the Equivalence Principle and Lorentz Invariance with PeV Neutrinos from Blazar Flares.
Wang, Zi-Yi; Liu, Ruo-Yu; Wang, Xiang-Yu
2016-04-15
It was recently proposed that a giant flare of the blazar PKS B1424-418 at redshift z=1.522 is in association with a PeV-energy neutrino event detected by IceCube. Based on this association, we here suggest that the flight time difference between the PeV neutrino and gamma-ray photons from blazar flares can be used to constrain violations of the equivalence principle and of Lorentz invariance for neutrinos. From the calculated Shapiro delay due to clusters or superclusters in the nearby universe, we find that violation of the equivalence principle for neutrinos and photons is constrained to an accuracy of at least 10^{-5}, which is 2 orders of magnitude tighter than the constraint placed by MeV neutrinos from supernova 1987A. Lorentz invariance violation (LIV) arises in various quantum-gravity theories, which predict an energy-dependent velocity of propagation in vacuum for particles. We find that the association of the PeV neutrino with the gamma-ray outburst sets limits on the energy scale of possible LIV to >0.01E_{pl} for linear LIV models and >6×10^{-8}E_{pl} for quadratic-order LIV models, where E_{pl} is the Planck energy scale. These are the most stringent constraints on neutrino LIV for subluminal neutrinos.
NASA Astrophysics Data System (ADS)
Sapilewski, Glen Alan
The Satellite Test of the Equivalence Principle (STEP) is a modern version of Galileo's experiment of dropping two objects from the leaning tower of Pisa. The Equivalence Principle states that all objects fall with the same acceleration, independent of their composition. The primary scientific objective of STEP is to measure a possible violation of the Equivalence Principle one million times better than the best ground based tests. This extraordinary sensitivity is made possible by using cryogenic differential accelerometers in the space environment. Critical to the STEP experiment is a sound fundamental understanding of the behavior of the superconducting magnetic linear bearings used in the accelerometers. We have developed a theoretical bearing model and a precision measuring system with which to validate the model. The accelerometers contain two concentric hollow cylindrical test masses, of different materials, each levitated and constrained to axial motion by a superconducting magnetic bearing. Ensuring that the bearings satisfy the stringent mission specifications requires developing new testing apparatus and methods. The bearing is tested using an actively-controlled table which tips it relative to gravity. This balances the magnetic forces from the bearing against a component of gravity. The magnetic force profile of the bearing can be mapped by measuring the tilt necessary to position the test mass at various locations. An operational bearing has been built and is being used to verify the theoretical levitation models. The experimental results obtained from the bearing test apparatus were inconsistent with the previous models used for STEP bearings. This led to the development of a new bearing model that includes the influence of surface current variations in the bearing wires and the effect of the superconducting transformer. 
The new model, which has been experimentally verified, significantly improves the prediction of levitation current, accurately estimates the relationship between tilting and translational modes, and predicts the dependence of radial mode frequencies on the bearing current. In addition, we developed a new model for the forces produced by trapped magnetic fluxons, a potential source of imperfections in the bearing. This model estimates the forces between magnetic fluxons trapped in separate superconducting objects.
NASA Astrophysics Data System (ADS)
Cho, Inhee; Huh, Keon; Kwak, Rhokyun; Lee, Hyomin; Kim, Sung Jae
2016-11-01
We report the first direct chronopotentiometric measurement that distinguishes the potential difference across the extended space charge (ESC) layer, which forms together with the electrical double layer (EDL) near a perm-selective membrane. From this measurement, a linear relationship was obtained between the ESC resistance and the applied current density. Furthermore, we observed step-wise distributions of relaxation time in the limiting current regime, confirming the existence of an ESC capacitance distinct from that of the EDL. In addition, we propose an equivalent electrokinetic circuit model of the ion concentration polarization (ICP) layer under rigorous consideration of the EDL, the ESC, and electro-convection (EC). To elucidate the voltage configuration in the chronopotentiometric measurement, the EC component is treated as a dependent voltage source connected in series with the ESC layer. This model successfully describes the charging behavior of the ESC layer with or without EC, each case exhibiting its own relaxation time. Finally, we quantitatively verified the model parameters using the Poisson-Nernst-Planck equations. This unified circuit model thus provides key insight into ICP systems and potential energy-efficient applications.
Performance assessment of a compressive sensing single-pixel imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2017-04-01
Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene and then using a companion process to recover the image. CS has the potential to acquire imagery with information content equivalent to a large-format array while using smaller, cheaper, and lower-bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a two-handheld-object target set was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.
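As a generic illustration of the recovery step in CS (not the camera's actual reconstruction algorithm), the sketch below recovers a sparse signal from random linear measurements using orthogonal matching pursuit; the signal size, sparsity, and Gaussian measurement matrix are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                      # signal length, measurements, sparsity (assumed)

# k-sparse test signal with entries bounded away from zero.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1, 1], size=k)

# Random Gaussian measurement matrix and m << n compressive measurements.
A = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit on the chosen support by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {err:.2e}")
```

With far fewer measurements than pixels, the sparse scene is still recovered, which is the core trade the abstract describes between measurement count and reconstruction accuracy.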
Equivalent equations of motion for gravity and entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel
We demonstrate an equivalence between the wave equation obeyed by the entanglement entropy of CFT subregions and the linearized bulk Einstein equation in Anti-de Sitter space. In doing so, we make use of the formalism of kinematic space and fields on this space. We show that the gravitational dynamics are equivalent to a gauge invariant wave-equation on kinematic space and that this equation arises in natural correspondence to the conformal Casimir equation in the CFT.
Time-delay control of a magnetic levitated linear positioning system
NASA Technical Reports Server (NTRS)
Tarn, J. H.; Juang, K. Y.; Lin, C. E.
1994-01-01
In this paper, a high-accuracy linear positioning system with a linear force actuator and magnetic levitation is proposed. By locating a permanently magnetized rod inside a current-carrying solenoid, an axial force arises from the boundary effect of the magnet poles and is used to power the linear motion, while the levitation force is governed by Ampere's law and supplied by the same solenoid. With levitation in the radial direction, there is hardly any friction between the rod and the solenoid, so high-speed motion can be achieved. Moreover, the axial force acting on the rod is a smooth function of rod position, so the system can provide nanometer-resolution linear positioning, down to molecular scale. Because the force-position relation is highly nonlinear and the mathematical model rests on simplifying assumptions, such as representing the permanently magnetized rod by an equivalent solenoid, unknown dynamics exist in practical application; robustness is therefore an important issue in controller design. Meanwhile, the load reacts directly on the servo system without transmission elements, so disturbance rejection is also required. With these considerations, a time-delay control scheme is chosen and applied. By comparing the input-output relation with the mathematical model, the time-delay controller estimates the unmodeled dynamics and disturbances and then composes the desired compensation into the system. The effectiveness of the linear positioning system and control scheme is illustrated with simulation results.
Cold-air performance of a tip turbine designed to drive a lift fan
NASA Technical Reports Server (NTRS)
Haas, J. E.; Kofskey, M. G.; Hotz, G. M.
1978-01-01
Performance was obtained over a range of speeds and pressure ratios for a 0.4 linear scale version of the LF460 lift fan turbine with the rotor radial tip clearance reduced to about 2.5 percent of the rotor blade height. These tests covered a range of speeds from 60 to 140 percent of design equivalent speed and a range of scroll inlet total to diffuser exit static pressure ratios from 2.6 to 4.2. Results are presented in terms of equivalent mass flow, equivalent torque, equivalent specific work, and efficiency.
Progress Toward Improving Jet Noise Predictions in Hot Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Kenzakowski, Donald C.
2007-01-01
An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.
NASA Astrophysics Data System (ADS)
Imamura, N.; Schultz, A.
2015-12-01
Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber and the ability to operate in areas of high source-signal spatial complexity and non-stationarity. This goal would not be attainable with a finite-difference time-domain (FDTD) approach to the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth: the time step must be fine enough to resolve the highest frequency, while the total number of time steps must span the lowest frequency. This leads to a linear system that is computationally burdensome to solve. Our implementation addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that a previous-generation CPU/GPU implementation speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
He, Jiangnan; Lu, Lina; He, Xiangui; Xu, Xian; Du, Xuan; Zhang, Bo; Zhao, Huijuan; Sha, Jida; Zhu, Jianfeng; Zou, Haidong; Xu, Xun
2017-01-01
To report calculated crystalline lens power and describe the distribution of ocular biometry and its association with refractive error in older Chinese adults. Random cluster sampling was used to identify adults aged 50 years and above in the Xuhui and Baoshan districts of Shanghai. Refraction was determined by subjective refraction that achieved the best corrected vision based on monocular measurement. Ocular biometry was measured by IOL Master. The crystalline lens power of right eyes was calculated using the modified Bennett-Rabbetts formula. We analyzed 6099 normal phakic right eyes. The mean crystalline lens power was 20.34 ± 2.24 D (range: 13.40-36.08). Lens power, spherical equivalent, and anterior chamber depth changed linearly with age; however, axial length, corneal power, and the AL/CR ratio did not vary with age. The overall prevalence of hyperopia, myopia, and high myopia was 48.48% (95% CI: 47.23%-49.74%), 22.82% (95% CI: 21.77%-23.88%), and 4.57% (95% CI: 4.05%-5.10%), respectively. The prevalence of hyperopia increased linearly with age, while lens power decreased with age. In multivariate models, refractive error was strongly correlated with axial length, lens power, corneal power, and anterior chamber depth, and slightly correlated with best corrected visual acuity, age, and sex. Lens power, hyperopia, and spherical equivalent changed linearly with age; moreover, the continuous loss of lens power produced hyperopic shifts in refraction in subjects aged more than 50 years.
A Block Iterative Finite Element Model for Nonlinear Leaky Aquifer Systems
NASA Astrophysics Data System (ADS)
Gambolati, Giuseppe; Teatini, Pietro
1996-01-01
A new quasi three-dimensional finite element model of groundwater flow is developed for highly compressible multiaquifer systems where aquitard permeability and elastic storage are dependent on hydraulic drawdown. The model is solved by a block iterative strategy, which is naturally suggested by the geological structure of the porous medium and can be shown to be mathematically equivalent to a block Gauss-Seidel procedure. As such it can be generalized into a block overrelaxation procedure and greatly accelerated by the use of the optimum overrelaxation factor. Results for both linear and nonlinear multiaquifer systems emphasize the excellent computational performance of the model and indicate that convergence in leaky systems can be improved up to as much as one order of magnitude.
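The block iterative strategy can be sketched on a generic two-block linear system (the matrix below is a random stand-in, not the finite element system of the paper): each diagonal block is solved exactly in turn, which is block Gauss-Seidel for omega = 1, and an overrelaxation factor 1 < omega < 2 accelerates convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assembled symmetric positive definite system standing in for a
# two-aquifer problem, split into two diagonal blocks (illustrative).
n = 20
M = rng.normal(0, 1, (2 * n, 2 * n))
A = M @ M.T + 2 * n * np.eye(2 * n)      # SPD, so (S)OR converges for 0 < omega < 2
b = rng.normal(0, 1, 2 * n)
blocks = [slice(0, n), slice(n, 2 * n)]

def block_sor(A, b, blocks, omega=1.0, iters=300):
    """Block Gauss-Seidel (omega=1) / block overrelaxation (1<omega<2)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for blk in blocks:
            r = b[blk] - A[blk, :] @ x            # residual of this block's equations
            dx = np.linalg.solve(A[blk, blk], r)  # exact solve on the diagonal block
            x[blk] += omega * dx
    return x

x_ref = np.linalg.solve(A, b)
x_gs  = block_sor(A, b, blocks, omega=1.0)
x_sor = block_sor(A, b, blocks, omega=1.2)
print(np.linalg.norm(x_gs - x_ref), np.linalg.norm(x_sor - x_ref))
```

In the paper's setting the blocks correspond to individual aquifers, so the partition is dictated by the geology rather than chosen arbitrarily as here.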
Thakore, Vaibhav; Molnar, Peter; Hickman, James J.
2014-01-01
Extracellular neuroelectronic interfacing is an emerging field with important applications in neural prosthetics, biological computation, and biosensors. Traditionally, neuron-electrode interfaces have been modeled as linear point- or area-contact equivalent circuits, but it is now increasingly realized that such models cannot explain the shapes and magnitudes of the observed extracellular signals. Here, results were compared and contrasted from an unprecedented optimization-based study of the point contact models for an extracellular ‘on-cell’ neuron-patch electrode and a planar neuron-microelectrode interface. Concurrent electrophysiological recordings from a single neuron simultaneously interfaced to three distinct electrodes (intracellular, ‘on-cell’ patch, and planar microelectrode) allowed novel insights into the mechanism of signal transduction at the neuron-electrode interface. After a systematic isolation of the nonlinear neuronal contribution to the extracellular signal, a consistent underestimation of the simulated supra-threshold extracellular signals compared to the experimentally recorded signals was observed. This conclusively demonstrated that the dynamics of the interfacial medium contribute nonlinearly to the process of signal transduction at the neuron-electrode interface. Further, an examination of the optimized model parameters for the experimental extracellular recordings from sub- and supra-threshold stimulations of the neuron-electrode junctions revealed that ionic transport at the ‘on-cell’ neuron-patch electrode is dominated by diffusion, whereas at the neuron-microelectrode interface the electric double layer (EDL) effects dominate. Based on this study, the limitations of the equivalent circuit models, in their failure to account for the nonlinear EDL and ionic electrodiffusion effects occurring during signal transduction at neuron-electrode interfaces, are discussed. PMID:22695342
Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array
NASA Astrophysics Data System (ADS)
Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann
2017-04-01
An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
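The first approach above, recovering monopole source strengths with a Moore-Penrose pseudo-inverse, can be sketched as follows; the array geometry, frequency, and source strengths are invented for illustration and are not taken from the paper.

```python
import numpy as np

c, f = 343.0, 2000.0                  # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c                 # acoustic wavenumber

def green(src, mic):
    """Free-field monopole Green's function between a source and a microphone."""
    r = np.linalg.norm(mic - src)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Candidate source grid in the flow region and a linear probe array (assumed geometry).
sources = np.array([[x, 0.0, 0.0] for x in np.linspace(-0.2, 0.2, 5)])
mics = np.array([[x, 0.0, 1.0] for x in np.linspace(-0.5, 0.5, 16)])

# 16 x 5 transfer matrix: pressures p at the array relate to strengths q via p = G q.
G = np.array([[green(s, m) for s in sources] for m in mics])

# Synthetic pressures from two known monopoles, then recovery of all
# candidate strengths with the Moore-Penrose pseudo-inverse.
q_true = np.zeros(5, dtype=complex)
q_true[1], q_true[3] = 1.0 + 0.5j, -0.7j
p = G @ q_true
q_hat = np.linalg.pinv(G) @ p
print(np.round(np.abs(q_hat), 3))
```

With more probes than candidate sources and a well-conditioned transfer matrix, the complex strengths are recovered exactly; in the paper's setting the measured cross-spectra and deconvolution handle the realistic case where these assumptions fail.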
Revisiting the extended spring indices using gridded weather data and machine learning
NASA Astrophysics Data System (ADS)
Mehdipoor, Hamed; Izquierdo-Verdiguier, Emma; Zurita-Milla, Raul
2016-04-01
The extended spring indices, or SI-x [1], have been successfully used to predict the timing of spring onset at continental scales. The SI-x models were created by combining lilac and honeysuckle volunteered phenological observations, temperature data (from weather stations), and latitudinal information. More precisely, these models use a linear regression to predict the day of year of first leaf and first bloom for these two indicator species. In this contribution we revisit both the data and the method used to calibrate the SI-x models, to check whether the addition of new input data or the use of non-linear regression methods could lead to improvements in the model outputs. In particular, we use a recently published dataset [2] of volunteered observations on cloned and common lilac over a longer period of time (1980-2014), and we replace the weather station data by 54 features derived from Daymet [3], which provides 1 by 1 km gridded estimates of daily weather parameters (maximum and minimum temperatures, precipitation, water vapor pressure, solar radiation, day length, snow water equivalent) for North America. These features consist of daily weather values, their long- and short-term accumulations, and elevation. We also replace the original linear regression by a non-linear method: specifically, we use random forests both to identify the most important features and to predict the day of year of first leaf of cloned and common lilacs. Preliminary results confirm the importance of the SI-x features (maximum and minimum temperatures and day length). However, our results show that snow water equivalent and water vapor pressure are also necessary to properly model leaf onset. Regarding the predictions, our results indicate that random forests yield results comparable to those produced by the SI-x models in terms of root mean square error (RMSE).
For cloned and common lilac, the models predict the day of year of leafing with 16 and 15 days of accuracy respectively. Further research should focus on extensively comparing the features used by both modelling approaches and on analyzing spring onset patterns over continental United States. References 1. Schwartz, M.D., T.R. Ault, and J.L. Betancourt, Spring onset variations and trends in the continental United States: past and regional assessment using temperature-based indices. International Journal of Climatology, 2013. 33(13): p. 2917-2922. 2. Rosemartin, A.H., et al., Lilac and honeysuckle phenology data 1956-2014. Scientific Data, 2015. 2: p. 150038. 3. Thornton, P.E., et al. Daymet: Daily Surface Weather Data on a 1-km Grid for North America, Version 2. 2014.
A Tutorial on Multilevel Survival Analysis: Methods, Models and Applications
Austin, Peter C.
2017-01-01
Data that have a multilevel structure occur frequently across a range of disciplines, including epidemiology, health services research, public health, education and sociology. We describe three families of regression models for the analysis of multilevel survival data. First, Cox proportional hazards models with mixed effects incorporate cluster-specific random effects that modify the baseline hazard function. Second, piecewise exponential survival models partition the duration of follow-up into mutually exclusive intervals and fit a model that assumes that the hazard function is constant within each interval. This is equivalent to a Poisson regression model that incorporates the duration of exposure within each interval. By incorporating cluster-specific random effects, generalised linear mixed models can be used to analyse these data. Third, after partitioning the duration of follow-up into mutually exclusive intervals, one can use discrete time survival models that use a complementary log–log generalised linear model to model the occurrence of the outcome of interest within each interval. Random effects can be incorporated to account for within-cluster homogeneity in outcomes. We illustrate the application of these methods using data consisting of patients hospitalised with a heart attack, and using three statistical programming languages (R, SAS and Stata). PMID:29307954
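The stated equivalence between a piecewise exponential survival model and a Poisson regression with an exposure offset can be checked numerically in the simplest, single-interval case: the Poisson maximum-likelihood hazard equals events divided by person-time. The toy data below are hypothetical, and a grid search stands in for a GLM fitter.

```python
import math

# Toy survival data: follow-up time and event indicator per subject (hypothetical).
times  = [2.0, 5.0, 3.5, 7.0, 1.0, 6.0, 4.0, 8.0]
events = [1,   0,   1,   1,   0,   1,   0,   1  ]

# Exponential (single-interval piecewise exponential) hazard MLE: events / person-time.
d = sum(events)
T = sum(times)
hazard_mle = d / T

# Equivalent Poisson formulation: each subject contributes a Poisson count
# (the event indicator) with mean lam * t, i.e. log-mean = log(lam) + log(t),
# where log(t) is the exposure offset. Maximize the log-likelihood over lam.
def pois_loglik(lam):
    return sum(e * math.log(lam * t) - lam * t for e, t in zip(events, times))

grid = [i / 10000 for i in range(1, 20000)]
lam_hat = max(grid, key=pois_loglik)

print(hazard_mle, lam_hat)
```

With several intervals, the same argument applies interval by interval, which is why the piecewise exponential model can be fit with standard Poisson GLM software and extended with random effects.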
Mesh Deformation Based on Fully Stressed Design: The Method and Two-Dimensional Examples
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan
2007-01-01
Mesh deformation in response to redefined boundary geometry is a frequently encountered task in shape optimization and analysis of fluid-structure interaction. We propose a simple and concise method for deforming meshes defined with three-node triangular or four-node tetrahedral elements. The mesh deformation method is suitable for large boundary movement. The approach requires two consecutive linear elastic finite-element analyses of an isotropic continuum using a prescribed displacement at the mesh boundaries. The first analysis is performed with a homogeneous elastic property and the second with an inhomogeneous elastic property. The fully stressed design is employed with a vanishing Poisson's ratio and a proposed form of equivalent strain (modified Tresca equivalent strain) to calculate, from the strain result of the first analysis, the element-specific Young's modulus for the second analysis. The theoretical aspects of the proposed method, its convenient numerical implementation using a typical linear elastic finite-element code in conjunction with very minor extra coding for data processing, and results for examples of large deformation of two-dimensional meshes are presented in this paper. KEY WORDS: Mesh deformation, shape optimization, fluid-structure interaction, fully stressed design, finite-element analysis, linear elasticity, strain failure, equivalent strain, Tresca failure criterion
Electro-thermal battery model identification for automotive applications
NASA Astrophysics Data System (ADS)
Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.
This paper describes a procedure for identifying an electro-thermal model of lithium-ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on state-of-charge, temperature, and current direction. Linear spline functions are used as the functional form for the parametric dependence. The model identified in this way is valid over a large range of temperatures and states-of-charge, so that the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multiple-step genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium iron-phosphate battery.
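A minimal sketch of such an equivalent-circuit model, with parameter values and breakpoints that are illustrative only (not those identified in the paper): the open-circuit voltage and ohmic resistance are scheduled on state-of-charge via linear splines, in series with one RC polarization pair.

```python
import numpy as np

# First-order equivalent-circuit cell: OCV(soc) in series with R0(soc) and one RC pair.
# Breakpoints and values below are assumed for illustration.
soc_pts = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
ocv_pts = np.array([3.00, 3.20, 3.30, 3.35, 3.45])       # open-circuit voltage (V)
r0_pts  = np.array([0.012, 0.010, 0.009, 0.009, 0.010])  # ohmic resistance (ohm)

def sched(soc, pts):
    """Linear-spline parameter scheduling on state-of-charge."""
    return np.interp(soc, soc_pts, pts)

def simulate(i_amps, dt=1.0, soc0=0.9, q_ah=2.3, r1=0.015, c1=2000.0):
    soc, v1, vt = soc0, 0.0, []
    for i in i_amps:                        # positive current = discharge
        soc -= i * dt / (q_ah * 3600.0)     # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))  # RC-pair polarization state (explicit Euler)
        vt.append(sched(soc, ocv_pts) - sched(soc, r0_pts) * i - v1)
    return np.array(vt), soc

v, soc_end = simulate(np.full(600, 2.3))    # 10 minutes of 1C discharge
print(f"terminal voltage: {v[0]:.3f} V -> {v[-1]:.3f} V, SOC {soc_end:.3f}")
```

Identification then amounts to fitting the spline values at the breakpoints (here assumed) so that the simulated terminal voltage matches measured charge/discharge data, which is the role of the genetic-algorithm step in the paper.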
Incorporating inductances in tissue-scale models of cardiac electrophysiology
NASA Astrophysics Data System (ADS)
Rossi, Simone; Griffith, Boyce E.
2017-09-01
In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
Nick, H M; Paluszny, A; Blunt, M J; Matthai, S K
2011-11-01
A second-order accurate in space implicit scheme for time-dependent advection-dispersion equations and a discrete fracture propagation model are employed to model solute transport in porous media. We study the impact of the fractures on mass transport and dispersion. To model flow and transport, pressure and transport equations are integrated using a finite-element, node-centered finite-volume approach. Fracture geometries are incrementally developed from random distributions of material flaws using an adaptive geomechanical finite-element model that also produces fracture aperture distributions. This quasistatic propagation assumes a linear elastic rock matrix, and crack propagation is governed by a subcritical crack growth failure criterion. Fracture propagation, intersection, and closure are handled geometrically. The flow and transport simulations are conducted separately for a range of fracture densities generated by the geomechanical finite-element model. These computations show that the most influential parameters for solute transport in fractured porous media are the fracture density and the fracture-matrix flux ratio, the latter being influenced by matrix permeability. Using an equivalent fracture aperture size, computed on the basis of the equivalent permeability of the system, we also obtain an acceptable prediction of the macrodispersion of poorly interconnected fracture networks. The results hold for fractures at relatively low density.
A thermo-elastoplastic model for soft rocks considering structure
NASA Astrophysics Data System (ADS)
He, Zuoyue; Zhang, Sheng; Teng, Jidong; Xiong, Yonglin
2017-11-01
In the fields of nuclear waste geological disposal, geothermy and deep mining, the effects of temperature on the mechanical behaviors of soft rocks cannot be neglected. Experimental data in the literature also show that the structure of soft rocks cannot be ignored. Based on the superloading yield surface and the concept of temperature-deduced equivalent stress, a thermo-elastoplastic model for soft rocks is proposed that accounts for structure. Compared to the superloading yield surface, only one parameter is added, i.e. the linear thermal expansion coefficient. The predicted results and the comparisons with experimental data in the literature show that the proposed model is capable of simultaneously describing heat increase and heat decrease of soft rocks. A stronger initial structure leads to a greater strength of the soft rocks. Heat increase and heat decrease can be converted into each other through a change of the initial structure of soft rocks. Furthermore, regardless of heat increase or heat decrease, a larger linear thermal expansion coefficient or a higher temperature always leads to a much more rapid degradation of the structure. The degradation trend is more pronounced when a larger linear thermal expansion coefficient and a higher temperature are combined. Lastly, compared to heat decrease, the structure degrades more easily in the case of heat increase.
NASA Technical Reports Server (NTRS)
Tulintseff, A. N.
1993-01-01
Printed dipole elements and their complement, linear slots, are elementary radiators that have found use in low-profile antenna arrays. Low-profile antenna arrays, in addition to their small size and low weight characteristics, offer the potential advantage of low-cost, high-volume production with easy integration with active integrated circuit components. The design of such arrays requires that the radiation and impedance characteristics of the radiating elements be known. The FDTD (Finite-Difference Time-Domain) method is a general, straightforward implementation of Maxwell's equations and offers a relatively simple way of analyzing both printed dipole and slot elements. Investigated in this work is the application of the FDTD method to the analysis of printed dipole and slot elements transversely coupled to an infinite transmission line in a multilayered configuration. Such dipole and slot elements may be used in dipole and slot series-fed-type linear arrays, where element offsets and interelement line lengths are used to obtain the desired amplitude distribution and beam direction, respectively. The design of such arrays is achieved using transmission line theory with equivalent circuit models for the radiating elements. In an equivalent circuit model, the dipole represents a shunt impedance to the transmission line, where the impedance is a function of dipole offset, length, and width. Similarly, the slot represents a series impedance to the transmission line. The FDTD method is applied to single dipole and slot elements transversely coupled to an infinite microstrip line using a fixed rectangular grid with Mur's second order absorbing boundary conditions. Frequency-dependent circuit and scattering parameters are obtained by saving desired time-domain quantities and using the Fourier transform.
A Gaussian pulse excitation is applied to the microstrip transmission line, where the resulting reflected signal due to the presence of the radiating element is used to determine the equivalent element impedance.
Magnetically tunable graphene-based reflector under linear polarized incidence at room temperature
NASA Astrophysics Data System (ADS)
Yang, Liang; Tian, Jing; Giddens, Henry; Poumirol, Jean-Marie; Wu, JingBo; Kuzmenko, Alexey B.; Hao, Yang
2018-04-01
In the terahertz spectrum, the 2D material graphene exhibits diagonal and Hall conductivities in the presence of a magnetic field. These peculiar properties provide graphene-based structures with a magnetically tunable response to electromagnetic waves. In this work, the absolute reflection intensity was measured for a graphene-based reflector illuminated by linearly polarized incident waves at room temperature, demonstrating an intensity modulation depth (IMD) of up to 15% under different magnetostatic biases. Experimental data were fitted and analyzed by a modified equivalent circuit model. In addition, as an important phenomenon of the graphene gyrotropic response, Kerr rotation is discussed according to results achieved from full-wave simulations. It is concluded that the IMD is reduced for the best Kerr rotation in the proposed graphene-based reflector.
Jones, Bleddyn; Cominos, Matilda; Dale, Roger G
2003-03-01
To investigate the potential for mathematical modeling in the assessment of symptom relief in palliative radiotherapy and cytotoxic chemotherapy. The linear quadratic model of radiation effect, incorporating the overall treatment time and the daily dose equivalent of repopulation, is modified to include the regrowth time after completion of therapy. The predicted times to restore the original tumor volumes after treatment depend on the biological effective dose (BED) delivered and the repopulation parameter (K); it is also possible to estimate K values from analysis of palliative treatment response durations. Hypofractionated radiotherapy given at a low total dose may produce long symptom relief in slow-growing tumors because of their low alpha/beta ratios (which confer high fraction sensitivity) and their slow regrowth rates. Cancers that have high alpha/beta ratios (which confer low fraction sensitivity), and that are expected to repopulate rapidly during therapy, are predicted to have short durations of symptom control. The BED concept can be used to estimate the equivalent dose of radiotherapy that will achieve the same duration of symptom relief as palliative chemotherapy. Relatively simple radiobiologic modeling can be used to guide decision-making regarding the choice of the most appropriate palliative schedules and has important implications in the design of radiotherapy or chemotherapy clinical trials. The methods described provide a rationalization for treatment selection in a wide variety of tumors.
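The BED-with-repopulation bookkeeping described above can be sketched in a few lines. This is a hedged illustration of the standard LQ form only (the schedule, alpha/beta ratio and K value below are invented for demonstration and are not the paper's):

```python
# Standard LQ biologically effective dose with a repopulation correction:
#   BED = n*d*(1 + d/(alpha/beta)) - K*T
# K is the daily dose equivalent of repopulation (Gy/day), T the overall
# treatment time in days. All numbers here are illustrative.

def bed(n_fractions, dose_per_fraction, alpha_beta, k_per_day=0.0, t_days=0.0):
    lq_term = n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)
    return lq_term - k_per_day * t_days

# Example: a 10 x 3 Gy palliative schedule, alpha/beta = 10 Gy, no repopulation
print(bed(10, 3.0, 10.0))  # 39.0
```

With a nonzero K, the same schedule delivered over a longer overall time yields a lower BED, which is the mechanism behind the short symptom-control durations predicted for rapidly repopulating tumors.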
Optimum Damping in a Non-Linear Base Isolation System
NASA Astrophysics Data System (ADS)
Jangid, R. S.
1996-02-01
Optimum isolation damping for minimum acceleration of a base-isolated structure subjected to earthquake ground excitation is investigated. The stochastic model of the El Centro 1940 earthquake, which preserves the non-stationary evolution of amplitude and frequency content of ground motion, is used as the earthquake excitation. The base-isolated structure consists of a linear flexible shear-type multi-storey building supported on a base isolation system. The resilient-friction base isolator (R-FBI) is considered as the isolation system. The non-stationary stochastic response of the system is obtained by the time-dependent equivalent linearization technique, as the force-deformation behaviour of the R-FBI system is non-linear. The optimum damping of the R-FBI system is obtained under variations of the important system parameters, i.e., the coefficient of friction of the R-FBI system, the period and damping of the superstructure, and the effective period of the base isolation. The criterion selected for optimality is the minimization of the top floor root mean square (r.m.s.) acceleration. It is shown that the above parameters have significant effects on optimum isolation damping.
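The equivalent linearization step can be illustrated for the frictional part of the isolator force. A minimal sketch under assumed values (Gaussian statistical linearization of a Coulomb friction term; not the paper's full non-stationary procedure): the force mu*N*sgn(v) is replaced by an equivalent viscous force c_eq*v with c_eq = mu*N*E[|v|]/E[v^2] = mu*N*sqrt(2/pi)/sigma_v for a zero-mean Gaussian velocity.

```python
import numpy as np

# Monte Carlo check of the Gaussian statistical-linearization coefficient for
# Coulomb friction. mu_n and sigma_v are arbitrary illustrative values.
rng = np.random.default_rng(0)
mu_n, sigma_v = 3.0, 0.8
v = rng.normal(0.0, sigma_v, 1_000_000)       # Gaussian velocity samples

c_mc = mu_n * np.mean(np.abs(v)) / np.mean(v**2)   # Monte Carlo estimate
c_th = mu_n * np.sqrt(2 / np.pi) / sigma_v         # closed-form value
print(c_mc, c_th)  # both ≈ 2.99
```

Because c_eq depends on the response standard deviation sigma_v, which itself evolves under non-stationary excitation, the linearization must be updated in time, which is what makes the technique "time dependent" in the abstract.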
Singular optimal control and the identically non-regular problem in the calculus of variations
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1985-01-01
A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
A Complete Multimode Equivalent-Circuit Theory for Electrical Design
Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.
1997-01-01
This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports. PMID:27805153
Section Preequating under the Equivalent Groups Design without IRT
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam
2014-01-01
In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to a 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation, which involve actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which do not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus.
While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each response.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohri, Nitin; Dicker, Adam P.; Lawrence, Yaacov Richard, E-mail: yaacovla@gmail.com
2012-05-01
Purpose: Hypofractionated radiotherapy (hRT) is being explored for a number of malignancies. The potential benefit of giving concurrent chemotherapy with hRT is not known. We sought to predict the effects of combined modality treatments by using mathematical models derived from laboratory data. Methods and Materials: Data from 26 published clonogenic survival assays for cancer cell lines with and without the use of radiosensitizing chemotherapy were collected. The first three data points of the RT arm of each assay were used to derive parameters for the linear quadratic (LQ) model, the multitarget (MT) model, and the generalized linear quadratic (gLQ) model. For each assay and model, the difference between the predicted and observed surviving fractions at the highest tested RT dose was calculated. The gLQ model was fitted to all the data from each RT cell survival assay, and the biologically equivalent doses in 2-Gy fractions (EQD2s) of clinically relevant hRT regimens were calculated. The increase in cell kill conferred by the addition of chemotherapy was used to estimate the EQD2 of hRT along with a radiosensitizing agent. For comparison, this was repeated using conventionally fractionated RT regimens. Results: At a mean RT dose of 8.0 Gy, the average errors for the LQ, MT, and gLQ models were 1.63, 0.83, and 0.56 log units, respectively, favoring the gLQ model (p < 0.05). Radiosensitizing chemotherapy increased the EQD2 of hRT schedules by an average of 28% to 82%, depending on disease site. This increase was similar to the gains predicted for the addition of chemotherapy to conventionally fractionated RT. Conclusions: Based on published in vitro assays, the gLQ equation is superior to the LQ and MT models in predicting cell kill at high doses of RT. Modeling exercises demonstrate that significant increases in biologically equivalent dose may be achieved with the addition of radiosensitizing agents to hRT. Clinical study of this approach is warranted.
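The EQD2 conversion used to compare hypofractionated schedules against 2-Gy fractionation can be written compactly. A hedged sketch of the standard LQ form (the schedule and alpha/beta value are illustrative, not taken from the assays in the paper):

```python
# Equivalent dose in 2-Gy fractions under the LQ model:
#   EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)
# D is total dose, d the dose per fraction. Numbers below are illustrative.

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# A 5 x 8 Gy hypofractionated course (40 Gy total) with alpha/beta = 10 Gy
print(eqd2(40.0, 8.0, 10.0))  # 60.0
```

Note that the paper's central point is that at such large fraction sizes the LQ form itself overestimates cell kill, which is why the gLQ fit was preferred for the high-dose extrapolation.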
Energetic consequences of mechanical loads.
Loiselle, D S; Crampin, E J; Niederer, S A; Smith, N P; Barclay, C J
2008-01-01
In this brief review, we have focussed largely on the well-established, but essentially phenomenological, linear relationship between the energy expenditure of the heart (commonly assessed as the oxygen consumed per beat, oxygen consumption (VO2)) and the pressure-volume-area (PVA, the sum of pressure-volume work and a specified 'potential energy' term). We raise concerns regarding the propriety of ignoring work done during 'passive' ventricular enlargement during diastole as well as the work done against series elasticity during systole. We question the common assumption that the rate of basal metabolism is independent of ventricular volume, given the equally well-established Feng- or stretch-effect. Admittedly, each of these issues is more of conceptual than of quantitative import. We point out that the linearity of the enthalpy-PVA relation is now so well established that observed deviations from linearity are often ignored. Given that a one-dimensional equivalent of the linear VO2-PVA relation exists in papillary muscles, it seems clear that the phenomenon arises at the cellular level, rather than being a property of the intact heart. This leads us to discussion of the classes of crossbridge models that can be applied to the study of cardiac energetics. An admittedly superficial examination of the historical role played by Hooke's Law in theories of muscle contraction foreshadows deeper consideration of the thermodynamic constraints that must, in our opinion, guide the development of any mathematical model. We conclude that a satisfying understanding of the origin of the enthalpy-PVA relation awaits the development of such a model.
Observed Score Linear Equating with Covariates
ERIC Educational Resources Information Center
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Estimation of stature using anthropometry of feet and footprints in a Western Australian population.
Hemy, Naomi; Flavel, Ambika; Ishak, Nur-Intaniah; Franklin, Daniel
2013-07-01
The aim of the study is to develop accurate stature estimation models for a contemporary Western Australian population from measurements of the feet and footprints. The sample comprises 200 adults (90 males, 110 females). A stature measurement, three linear measurements from each foot and bilateral footprints were collected from each subject. Seven linear measurements were then extracted from each print. Prior to data collection, a precision test was conducted to determine the repeatability of measurement acquisition. The primary data were then analysed using a range of parametric statistical tests. Results show that all foot and footprint measurements were significantly (P < 0.01-0.001) correlated with stature and estimation models were formulated with a prediction accuracy of ± 4.673 cm to ± 6.926 cm. Left foot length was the most accurate single variable in the simple linear regressions (males: ± 5.065 cm; females: ± 4.777 cm). This study provides viable alternatives for estimating stature in a Western Australian population that are equivalent to established standards developed from foot bones. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
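The simple-linear-regression form used for the single-variable stature models can be sketched as follows. The data below are synthetic and invented for demonstration; they are NOT the Western Australian sample, so the fitted coefficients carry no forensic meaning:

```python
import numpy as np

# Illustrative stature-from-foot-length regression on synthetic data.
foot_length_cm = np.array([24.1, 25.3, 26.0, 26.8, 27.5, 28.2])
stature_cm     = np.array([158.0, 164.5, 168.0, 172.5, 176.0, 180.5])

slope, intercept = np.polyfit(foot_length_cm, stature_cm, 1)
predict = lambda x: slope * x + intercept

print(round(predict(26.0), 1))  # close to the observed 168.0
```

In practice such a model is reported with its standard error of estimate (the ± 4.777 cm quoted for female left foot length), which bounds the stature interval attached to any single prediction.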
NASA Astrophysics Data System (ADS)
Yaya, Kamel; Bechir, Hocine
2018-05-01
We propose a new hyper-elastic model that is based on the standard invariants of Green-Cauchy. Experimental data reported by Treloar (Trans. Faraday Soc. 40:59, 1944) are used to identify the model parameters. To this end, the data of uni-axial tension and equi-bi-axial tension are used simultaneously. The new model has four material parameters, their identification leads to a linear optimisation problem, and the model is able to predict the multi-axial behaviour of rubber-like materials. We show that the response quality of the new model is equivalent to that of the well-known six-parameter Ogden model. Thereafter, the new model is implemented in an FE code. Then, we investigate the inflation of a rubber balloon with the new model and the Ogden model. We compare both the analytic and numerical solutions derived from these models.
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
Effects of different representations of transport in the new EMAC-SWIFT chemistry climate model
NASA Astrophysics Data System (ADS)
Scheffler, Janice; Langematz, Ulrike; Wohltmann, Ingo; Kreyling, Daniel; Rex, Markus
2017-04-01
It is well known that the representation of atmospheric ozone chemistry in weather and climate models is essential for a realistic simulation of the atmospheric state. Interactively coupled chemistry climate models (CCMs) provide a means to realistically simulate the interaction between atmospheric chemistry and dynamics. The calculation of chemistry in CCMs, however, is computationally expensive, which renders complex chemistry models unsuitable for ensemble simulations or simulations with multiple climate change scenarios. In these simulations ozone is therefore usually prescribed as a climatological field or included by incorporating a fast linear ozone scheme into the model. While prescribed climatological ozone fields are often not aligned with the modelled dynamics, a linear ozone scheme may not be applicable across a wide range of climatological conditions. An alternative approach to represent atmospheric chemistry in climate models, which can cope with non-linearities in ozone chemistry and is applicable to a wide range of climatic states, is the Semi-empirical Weighted Iterative Fit Technique (SWIFT), which is driven by reanalysis data and has been validated against observational satellite data and runs of a full Chemistry and Transport Model. SWIFT has been implemented into the ECHAM/MESSy (EMAC) chemistry climate model, which uses a modular approach to climate modelling where individual model components can be switched on and off. When using SWIFT in EMAC, there are several possibilities to represent the effect of transport inside the polar vortex: the semi-Lagrangian transport scheme of EMAC, and a transport parameterisation that can be useful when using SWIFT in models that lack a transport scheme of their own. Here, we present results of equivalent simulations with different handling of transport, compare with EMAC simulations with full interactive chemistry, and evaluate the results with observations.
NASA Astrophysics Data System (ADS)
Molotch, N. P.; Painter, T. H.; Bales, R. C.; Dozier, J.
2003-04-01
In this study, an accumulated net radiation / accumulated degree-day index snowmelt model was coupled with remotely sensed snow covered area (SCA) data to simulate snow cover depletion and reconstruct maximum snow water equivalent (SWE) in the 19.1 km² Tokopah Basin of the Sierra Nevada, California. Simple net radiation snowmelt models are attractive for operational snowmelt runoff forecasts as they are computationally inexpensive and have low input requirements relative to physically based energy balance models. The objective of this research was to assess the accuracy of a simple net radiation snowmelt model in a topographically heterogeneous alpine environment. Previous applications of net radiation / temperature index snowmelt models have not been evaluated in alpine terrain with intensive field observations of SWE. Solar radiation data from two meteorological stations were distributed using the topographic radiation model TOPORAD. Relative humidity and temperature data were distributed based on the lapse rate calculated between three meteorological stations within the basin. Fractional SCA data from the Landsat Enhanced Thematic Mapper (5 acquisitions) and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) (2 acquisitions) were used to derive daily SCA using a linear regression between acquisition dates. Grain size data from AVIRIS (4 acquisitions) were used to infer snow surface albedo and interpolated linearly with time to derive daily albedo values. Modeled daily snowmelt rates for each 30-m pixel were scaled by the SCA and integrated over the snowmelt season to obtain estimates of maximum SWE accumulation. Snow surveys consisting of an average of 335 depth measurements and 53 density measurements during April, May and June, 1997 were interpolated using a regression tree / co-kriging model, with independent variables of average incoming solar radiation, elevation, slope and maximum upwind slope.
The basin was clustered into 7 elevation / average-solar-radiation zones for SWE accuracy assessment. Model simulations did a poor job at estimating the spatial distribution of SWE. Basin clusters where the solar radiative flux dominated the melt flux were simulated more accurately than those dominated by the turbulent fluxes or the longwave radiative flux.
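The SWE-reconstruction bookkeeping can be sketched per pixel. This is a simplified, assumed form of the index-model idea (the melt coefficients, units and all input numbers are invented for illustration): daily potential melt from a radiation/degree-day index is realised only over the snow-covered fraction, and the season-long sum of realised melt recovers the maximum SWE.

```python
import numpy as np

def reconstruct_swe(temp_c, net_rad, sca, a_r=0.0005, a_t=1.5):
    """Daily melt (mm) = a_r * net_radiation + a_t * max(T, 0), scaled by the
    fractional snow-covered area and summed over the melt season."""
    melt = a_r * net_rad + a_t * np.maximum(temp_c, 0.0)
    return float(np.sum(melt * sca))

temp = np.array([2.0, 4.0, 1.0, 5.0, 3.0])      # daily mean air temp, degC
rad  = np.array([8e3, 9e3, 7e3, 1e4, 8.5e3])    # net radiation index (illustrative)
sca  = np.array([1.0, 1.0, 0.9, 0.7, 0.5])      # fractional snow cover
print(reconstruct_swe(temp, rad, sca))
```

The SCA weighting is what couples the melt model to the remote-sensing record: as the depletion curve falls, later melt contributes less to the reconstructed maximum.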
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
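The regression setting can be sketched with the same three predictors. The data and coefficients below are synthetic and invented; an ordinary least-squares fit stands in here for the Generalized Linear Model baseline (the paper's best-performing model was a Random Forest, which would simply replace this fit step):

```python
import numpy as np

# Synthetic Leq data: hourly equivalent sound level from traffic volume Q,
# heavy-vehicle percentage P and average speed V (all values invented).
rng = np.random.default_rng(1)
n = 200
q = rng.uniform(200, 2000, n)        # vehicles/hour
p = rng.uniform(2, 25, n)            # % heavy vehicles
v = rng.uniform(20, 60, n)           # km/h
leq = 50 + 3.5 * np.log10(q) + 0.15 * p + 0.05 * v + rng.normal(0, 0.5, n)

# Least-squares fit of the assumed functional form
x = np.column_stack([np.ones(n), np.log10(q), p, v])
coef, *_ = np.linalg.lstsq(x, leq, rcond=None)
print(coef.round(2))  # recovers roughly [50, 3.5, 0.15, 0.05]
```

The logarithmic volume term reflects the usual decibel-scale dependence of traffic noise on flow; a tree-based model needs no such functional assumption, which is one reason the Random Forest performed best on field data.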
On a comparison of two schemes in sequential data assimilation
NASA Astrophysics Data System (ADS)
Grishina, Anastasiia A.; Penenko, Alexey V.
2017-11-01
This paper is focused on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires a sequence of connected inverse problems with different sets of observational data to be solved. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given on the basis of the non-linear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce synthetic measurements and to solve the data assimilation problem.
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem where the controlled system is described by a stochastic differential delayed equation, with the state constrained to a convex set at the terminal time. We first introduce an equivalent backward delayed system described by a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.
Chen, Rui; Hyrien, Ollivier
2011-01-01
This article deals with quasi- and pseudo-likelihood estimation in a class of continuous-time multi-type Markov branching processes observed at discrete points in time. "Conventional" and conditional estimation are discussed for both approaches. We compare their properties and identify situations where they lead to asymptotically equivalent estimators. Both approaches possess robustness properties, and coincide with maximum likelihood estimation in some cases. Quasi-likelihood functions involving only linear combinations of the data may be unable to estimate all model parameters. Remedial measures exist, including the resort either to non-linear functions of the data or to conditioning the moments on appropriate sigma-algebras. The method of pseudo-likelihood may also resolve this issue. We investigate the properties of these approaches in three examples: the pure birth process, the linear birth-and-death process, and a two-type process that generalizes the previous two examples. Simulation studies are conducted to evaluate performance in finite samples. PMID:21552356
Linear strain sensor made of multi-walled carbon nanotube/epoxy composite
NASA Astrophysics Data System (ADS)
Tong, Shuying; Yuan, Weifeng; Liu, Haidong; Alamusi; Hu, Ning; Zhao, Chaoyang; Zhao, Yangzhou
2017-11-01
In this study, a fabrication process was developed to make multi-walled carbon nanotube/epoxy (MWCNT/EP) composite films. The electrical response of the films to strain was tested in both direct and alternating current circuits. It is found that the direct current resistance and the dielectric loss tangent of the MWCNT/EP composite films depend on the strain and the weight fraction of the carbon nanotubes. In an alternating current circuit, the test frequency affects the impedance and the phase angle of the composite film, but it does not affect the change ratio of the dielectric loss tangent of the film in tension. This phenomenon can be interpreted by a proposed equivalent circuit model. Experimental results show that the change rate of the dielectric loss tangent of the MWCNT/EP sensor is linearly proportional to the strain. The findings obtained in the present study provide a promising method to develop ultrasensitive linear strain gauges.
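A linear sensor response of this kind is characterised by a single sensitivity, read off as the slope of relative change versus strain, analogous to the gauge factor of a resistive strain gauge. The measurement pairs below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Synthetic linear sensor characterisation: relative change of the measured
# quantity (e.g. a loss-tangent change ratio) versus applied strain.
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008])
resp   = np.array([0.000, 0.011, 0.019, 0.031, 0.040])   # relative change

s = np.polyfit(strain, resp, 1)[0]   # least-squares sensitivity (slope)
print(round(s, 1))  # 5.0
```

A strictly linear calibration like this is what makes the film usable as a strain gauge: one fitted slope converts any later reading directly into strain.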
Smith, Andrea D; Crippa, Alessio; Woodcock, James; Brage, Søren
2016-12-01
Inverse associations between physical activity (PA) and type 2 diabetes mellitus are well known. However, the shape of the dose-response relationship is still uncertain. This review synthesises results from longitudinal studies in general populations and uses non-linear models of the association between PA and incident type 2 diabetes. A systematic literature search identified 28 prospective studies on leisure-time PA (LTPA) or total PA and risk of type 2 diabetes. PA exposures were converted into metabolic equivalent of task (MET) h/week and marginal MET (MMET) h/week, a measure only considering energy expended above resting metabolic rate. Restricted cubic splines were used to model the exposure-disease relationship. Our results suggest an overall non-linear relationship; using the cubic spline model we found a risk reduction of 26% (95% CI 20%, 31%) for type 2 diabetes among those who achieved 11.25 MET h/week (equivalent to 150 min/week of moderate activity) relative to inactive individuals. Achieving twice this amount of PA was associated with a risk reduction of 36% (95% CI 27%, 46%), with further reductions at higher doses (60 MET h/week, risk reduction of 53%). Results for the MMET h/week dose-response curve were similar for moderate intensity PA, but benefits were greater for higher intensity PA and smaller for lower intensity activity. Higher levels of LTPA were associated with substantially lower incidence of type 2 diabetes in the general population. The relationship between LTPA and type 2 diabetes was curvilinear; the greatest relative benefits are achieved at low levels of activity, but additional benefits can be realised at exposures considerably higher than those prescribed by public health recommendations.
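The exposure harmonisation behind the dose-response axis is simple arithmetic. A sketch under one stated assumption (moderate activity taken as 4.5 MET, which makes 150 min/week correspond to the 11.25 MET h/week quoted above; marginal MET subtracts the 1-MET resting rate):

```python
# Convert a weekly activity prescription into MET h/week and MMET h/week.

def met_hours_per_week(minutes_per_week, met_intensity):
    return minutes_per_week / 60.0 * met_intensity

def marginal_met_hours_per_week(minutes_per_week, met_intensity):
    # Marginal MET counts only energy expended above resting (1 MET).
    return minutes_per_week / 60.0 * (met_intensity - 1.0)

print(met_hours_per_week(150, 4.5))           # 11.25
print(marginal_met_hours_per_week(150, 4.5))  # 8.75
```

The MMET scale is why the review's intensity comparison comes out differently: at higher intensities a larger share of each hour is energy above rest, so the same MET h/week total represents a larger marginal dose.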
Hasan, Mehedi; Hall, Trevor
2015-11-01
A photonic integrated circuit architecture for implementing frequency upconversion is proposed. The circuit consists of a 1×2 splitter and a 2×1 combiner interconnected by two stages of differentially driven phase modulators, with a 2×2 multimode interference coupler between the stages. A transfer matrix approach is used to model the operation of the architecture. The predictions of the model are validated by simulations performed using an industry standard software tool. The intrinsic conversion efficiency of the proposed design is improved by 6 dB over the alternative functionally equivalent circuit based on dual parallel Mach-Zehnder modulators known in the prior art. A two-tone analysis is presented to study the linearity of the proposed circuit, and a comparison is provided over the alternative. The proposed circuit is suitable for integration in any platform that offers linear electro-optic phase modulation, such as LiNbO3, silicon, III-V, or hybrid technology.
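The transfer-matrix modelling style can be sketched for a simplified circuit of the same topology (this is an assumed illustration, not the paper's exact network or drive scheme): split, differential phase stage, 2×2 MMI coupler, second phase stage, combine, with each element a small matrix acting on the two field amplitudes.

```python
import numpy as np

def phase_stage(phi):                    # differentially driven modulator pair
    return np.diag([np.exp(1j * phi / 2), np.exp(-1j * phi / 2)])

mmi   = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])  # 2x2 MMI coupler
split = (1 / np.sqrt(2)) * np.array([1, 1])              # 1x2 splitter
comb  = (1 / np.sqrt(2)) * np.array([1, 1])              # 2x1 combiner

def output_field(phi1, phi2, e_in=1.0):
    # Cascade the element matrices from input to output.
    return comb @ phase_stage(phi2) @ mmi @ phase_stage(phi1) @ (split * e_in)

print(np.allclose(mmi.conj().T @ mmi, np.eye(2)))  # True: the MMI is unitary
print(abs(output_field(0.0, 0.0))**2)              # output power at zero drive
```

Driving phi1 and phi2 with RF tones and expanding the output in Bessel harmonics is then what yields the conversion-efficiency and two-tone linearity figures quoted in the abstract.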
Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye
Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael
2017-01-01
Purpose To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. Accuracy of each model was examined through comparison of predicted 50-year-old equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized model revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition combined with core GCL thickness data provide a framework to analyze GCL loss in ocular disease. PMID:28632847
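The segmented-linear-regression step can be sketched as a breakpoint grid search. The age-thickness data below are synthetic (the plateau value, decline slope, and breakpoint are invented for illustration), not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic GCL thickness vs. age: flat plateau, then linear decline after a
# breakpoint (all values illustrative, not the paper's).
ages = np.arange(20, 86, dtype=float)
true_break = 45.0
thickness = 35.0 - 0.25 * np.maximum(ages - true_break, 0.0)
thickness += rng.normal(0.0, 0.3, size=ages.size)

def fit_segmented(x, y, candidates):
    """Grid-search the breakpoint: at each candidate b, fit y ~ 1 + x + (x-b)+
    by least squares and keep the breakpoint with the smallest SSE."""
    best = None
    for b in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - b, 0.0)])
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, b, beta)
    return best[1], best[2]

break_hat, coef = fit_segmented(ages, thickness, np.arange(30.0, 70.0, 0.5))
post_break_slope = coef[1] + coef[2]   # slope after the fitted breakpoint
```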
Three-Dimensional Transgenic Cell Models to Quantify Space Genotoxic Effects
NASA Technical Reports Server (NTRS)
Gonda, S.; Wu, H.; Pingerelli, P.; Glickman, B.
2000-01-01
In this paper we describe a three-dimensional, multicellular tissue-equivalent model, produced in NASA-designed rotating wall bioreactors using mammalian cells engineered for genomic containment of multiple copies of defined target genes for genotoxic assessment. The Rat 2(lambda) fibroblasts (Stratagene, Inc.) were genetically engineered to contain high-density target genes for mutagenesis. Stable three-dimensional, multicellular spheroids were formed when human mammary epithelial cells and Rat 2(lambda) fibroblasts were cocultured on Cytodex 3 beads in a rotating wall bioreactor. The utility of this spheroidal model for genotoxic assessment was indicated by a linear dose-response curve and by results of gene sequence analysis of mutant clones from 400-micron diameter spheroids following low-dose, high-energy neon radiation exposure.
Enhancement of Electrical Conductivity in Multicomponent Nanocomposites.
NASA Astrophysics Data System (ADS)
Ni, Xiaojuan; Hui, Chao; Su, Ninghai; Liu, Feng
To date, very limited theoretical or numerical analyses have been carried out to understand the electrical percolation properties in multicomponent nanocomposite systems. In this work, a disk-stick percolation model was developed to investigate the electrical percolation behavior of an electrically insulating matrix reinforced with one-dimensional (1D) and two-dimensional (2D) conductors via Monte Carlo simulation. The effective electrical conductivity was evaluated through Kirchhoff's current law by transforming it into an equivalent resistor network. The percolation threshold, equivalent resistance and conductivity were obtained from the distribution of nodal voltages by solving a system of linear equations with Gaussian elimination method. The effects of size, aspect ratio, relative concentration and contact patterns of 1D/2D inclusions on conductivity performance were examined. Our model is able to predict the electrical percolation threshold and evaluate the conductivity for hybrid systems with multiple components. The results suggest that carbon-based nanocomposites can have a high potential for applications where favorable electrical properties and low specific weight are required. We acknowledge the financial support from DOE-BES (No. DE-FG02-04ER46148).
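The nodal-voltage step described above can be sketched directly: assemble the conductance (Laplacian) matrix from Kirchhoff's current law, ground one terminal, inject a unit current at the other, and solve the linear system. The networks below are small hand-checkable examples rather than a Monte Carlo disk-stick geometry.

```python
import numpy as np

def equivalent_resistance(n_nodes, edges, source, sink):
    """Nodal analysis via Kirchhoff's current law: inject 1 A at `source`,
    ground `sink`, solve the reduced Laplacian, and read R_eq = V[source].
    `edges` is a list of (i, j, conductance) tuples."""
    L = np.zeros((n_nodes, n_nodes))
    for i, j, g in edges:
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    keep = [k for k in range(n_nodes) if k != sink]   # ground the sink node
    rhs = np.zeros(n_nodes)
    rhs[source] = 1.0                                 # 1 A injected current
    v = np.linalg.solve(L[np.ix_(keep, keep)], rhs[keep])
    voltages = np.zeros(n_nodes)
    voltages[keep] = v
    return voltages[source]                           # V = I R with I = 1 A

# Two unit resistors in series: R_eq = 2 ohm.
r_series = equivalent_resistance(3, [(0, 1, 1.0), (1, 2, 1.0)], 0, 2)
# Unit-resistor square, opposite corners: two parallel 2-ohm paths -> 1 ohm.
r_square = equivalent_resistance(4, [(0, 1, 1.0), (1, 2, 1.0),
                                     (0, 3, 1.0), (3, 2, 1.0)], 0, 2)
```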
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on newly introduced nonlocal torques, which replace the traditional local torque operators in the well-known torque-correlation formula and which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA-vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on the random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L1₀ atomic long-range order.
A preliminary investigation of finite-element modeling for composite rotor blades
NASA Technical Reports Server (NTRS)
Lake, Renee C.; Nixon, Mark W.
1988-01-01
The results from an initial phase of an in-house study aimed at improving the dynamic and aerodynamic characteristics of composite rotor blades through the use of elastic couplings are presented. Large degree of freedom shell finite element models of an extension twist coupled composite tube were developed and analyzed using MSC/NASTRAN. An analysis employing a simplified beam finite element representation of the specimen with the equivalent engineering stiffness was additionally performed. Results from the shell finite element normal modes and frequency analysis were compared to those obtained experimentally, showing an agreement within 13 percent. There was appreciable degradation in the frequency prediction for the torsional mode, which is elastically coupled. This was due to the absence of off-diagonal coupling terms in the formulation of the equivalent engineering stiffness. Parametric studies of frequency variation due to small changes in ply orientation angle and ply thickness were also performed. Results showed linear frequency variations less than 2 percent per 1 degree variation in the ply orientation angle, and 1 percent per 0.0001 inch variation in the ply thickness.
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper contains a study of two methods for use in a generic nonlinear simulation tool that could be used to determine achievable control dynamics and control power requirements while performing perfect tracking maneuvers over the entire flight envelope. The two methods are NDI (nonlinear dynamic inversion) and the SOFFT (Stochastic Optimal Feedforward and Feedback Technology) feedforward control structure. Equivalent discrete and continuous SOFFT feedforward controllers have been developed. These equivalent forms clearly show that the closed-loop plant model loop is a plant inversion and is the same as the NDI formulation. The main difference is that the NDI formulation has a closed-loop controller structure whereas SOFFT uses an open-loop command model. Continuous, discrete, and hybrid controller structures have been developed and integrated into the formulation. Linear simulation results show that seven different configurations all give essentially the same response, with the NDI hybrid being slightly different. The SOFFT controller gave better tracking performance compared to the NDI controller when a nonlinear saturation element was added. Future plans include evaluation using a nonlinear simulation.
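The plant-inversion idea common to both formulations can be sketched on a scalar example. The plant, gains, and command trajectory below are invented for illustration; this is the generic NDI law, not the paper's simulation tool or aircraft model.

```python
import numpy as np

# Illustrative scalar plant (not the paper's): xdot = f(x) + g(x) u.
f = lambda x: -x ** 3
g = lambda x: 1.0 + 0.5 * x ** 2

K, dt, T = 5.0, 1e-3, 5.0
t = np.arange(0.0, T, dt)
x_cmd = np.sin(t)            # command trajectory to track
xdot_cmd = np.cos(t)

x = 1.0                      # start off the command (initial error = 1)
for k in range(t.size):
    v = xdot_cmd[k] + K * (x_cmd[k] - x)    # desired linear error dynamics
    u = (v - f(x)) / g(x)                   # dynamic inversion control law
    x = x + dt * (f(x) + g(x) * u)          # Euler step of the true plant
final_error = abs(x - np.sin(T))
```

With exact inversion the closed loop reduces to linear error dynamics, so the initial unit error decays to near zero by the end of the run.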
Neutron star merger GW170817 strongly constrains doubly coupled bigravity
NASA Astrophysics Data System (ADS)
Akrami, Yashar; Brax, Philippe; Davis, Anne-Christine; Vardanyan, Valeri
2018-06-01
We study the implications of the recent detection of gravitational waves emitted by a pair of merging neutron stars and their electromagnetic counterpart, events GW170817 and GRB170817A, on the viability of the doubly coupled bimetric models of cosmic evolution, where the two metrics couple directly to matter through a composite, effective metric. We demonstrate that the bounds on the speed of gravitational waves place strong constraints on the doubly coupled models, forcing either the two metrics to be proportional at the background level or the models to become singly coupled. Proportional backgrounds are particularly interesting as they provide stable cosmological solutions with phenomenologies equivalent to that of ΛCDM at the background level as well as for linear perturbations, while nonlinearities are expected to show deviations from the standard model.
Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation
NASA Astrophysics Data System (ADS)
Durlofsky, L. J.; He, J.; Jin, L. Z.
2014-12-01
A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
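The POD projection at the core of the method can be sketched on a toy linear system: collect training snapshots, extract a basis by SVD, and time-step the reduced operator. The trajectory-piecewise-linearization bookkeeping is omitted here (the toy model is already linear, so no linearization around saved training states is needed), and all sizes are illustrative.

```python
import numpy as np

n = 20
# Full-order model: 1D diffusion-like dynamics xdot = A x (tridiagonal Laplacian).
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# Initial state built from two slow sine eigenmodes of the discrete Laplacian,
# so the trajectory lives in a 2D invariant subspace that POD can capture exactly.
j = np.arange(1, n + 1)
mode = lambda k: np.sin(np.pi * k * j / (n + 1))
x0 = mode(1) + 0.5 * mode(2)

dt, steps = 1e-3, 2000
# "Training" run: collect snapshots of the full-order model.
X = np.empty((n, steps))
x = x0.copy()
for s in range(steps):
    x = x + dt * (A @ x)          # explicit Euler step of the full model
    X[:, s] = x

# POD: dominant left singular vectors of the snapshot matrix.
U, S, _ = np.linalg.svd(X, full_matrices=False)
Phi = U[:, :2]                    # rank-2 POD basis
Ar = Phi.T @ A @ Phi              # reduced 2x2 operator

# Reduced-order "test" run from the same initial condition.
z = Phi.T @ x0
for s in range(steps):
    z = z + dt * (Ar @ z)
rom_error = np.linalg.norm(Phi @ z - x) / np.linalg.norm(x)
```

Because the snapshots span only two directions, the third singular value is negligible and the two-variable reduced model reproduces the 20-variable trajectory.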
Koda, Shin-ichi
2015-05-28
Existing studies have shown that, in special cases, certain linear dynamical systems defined on a dendritic network are equivalent to systems defined on a set of one-dimensional networks; this transformation to the simpler picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding the properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. These results make it easier to utilize the LC decomposition in various cases, which may lead to a further understanding of the relation between the structure and functions of dendrimers in future studies.
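A minimal instance of the LC decomposition: for a star graph (a core with n equivalent branches, standing in here for a general dendrimer), the symmetry-adapted basis reduces the adjacency operator to a two-site chain with coupling sqrt(n) plus decoupled sites, and the spectra match exactly.

```python
import numpy as np

n_branches = 4
# Adjacency matrix of a star: node 0 is the core, nodes 1..n are the branches.
A = np.zeros((n_branches + 1, n_branches + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0

# Symmetry-adapted picture: the totally symmetric combination of branch sites
# couples to the core with strength sqrt(n); the remaining n-1 antisymmetric
# combinations decouple completely (zero eigenvalues).
chain = np.array([[0.0, np.sqrt(n_branches)],
                  [np.sqrt(n_branches), 0.0]])     # the 2-site "linear chain"

full_spectrum = np.sort(np.linalg.eigvalsh(A))
decomposed = np.sort(np.concatenate([np.linalg.eigvalsh(chain),
                                     np.zeros(n_branches - 1)]))
```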
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2002-01-01
Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
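The force-error-minimization idea can be sketched for a single cubic spring: minimizing E[(F(x) - k_eq x)^2] over k_eq gives k_eq = E[F(x) x] / E[x^2], which for a zero-mean Gaussian response equals k + 3 eps sigma^2. The parameters below are illustrative, and this scalar case stands in for the paper's multi-degree-of-freedom finite element setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cubic (Duffing-type) spring: F(x) = k x + eps x^3.
k, eps, sigma = 1.0, 0.5, 0.8

# Force-error minimization for x ~ N(0, sigma^2): the optimal equivalent
# stiffness is the projection k_eq = E[F(x) x] / E[x^2].
x = rng.normal(0.0, sigma, size=200_000)
F = k * x + eps * x ** 3
k_eq_mc = np.mean(F * x) / np.mean(x ** 2)

# Closed form for the Gaussian case (uses E[x^4] = 3 sigma^4).
k_eq_analytic = k + 3.0 * eps * sigma ** 2
```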
NASA Astrophysics Data System (ADS)
Fujimura, Toshio; Takeshita, Kunimasa; Suzuki, Ryosuke O.
2018-04-01
An analytical approximate solution to non-linear solute- and heat-transfer equations in the unsteady-state mushy zone of Fe-C plain steel has been obtained, assuming a linear relationship between the solid fraction and the temperature of the mushy zone. The heat-transfer equations for both the solid and liquid zones, along with the boundary conditions, have been linked with these equations to solve the whole system. The model predictions (e.g., the solidification constants and the effective partition ratio) agree with the generally accepted values and with a separately performed numerical analysis. The solidus temperature predicted by the model is in the intermediate range of the reported formulas. The model and Neumann's solution are consistent in the low carbon range. A conventional numerical heat analysis (i.e., an equivalent specific heat method using the solidus temperature predicted by the model) is consistent with the model predictions for Fe-C plain steels. The model presented herein simplifies the computations to solve the solute- and heat-transfer simultaneous equations while searching for a solidus temperature as part of the solution. Thus, this model can reduce the complexity of analyses considering the heat- and solute-transfer phenomena in the mushy zone.
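The equivalent-specific-heat step can be sketched directly: under the assumed linear solid fraction-temperature relation, the latent heat releases uniformly across the mushy range, so the mushy-zone specific heat is simply augmented by L/(T_liq - T_sol). The material values below are rough illustrative numbers, not the paper's.

```python
import numpy as np

# Linear solid-fraction relation assumed in the model:
#   fs(T) = (T_liq - T) / (T_liq - T_sol)  for T_sol <= T <= T_liq.
cp, L = 700.0, 2.7e5           # J/(kg K) and J/kg (illustrative values)
T_sol, T_liq = 1720.0, 1790.0  # K (illustrative)

def cp_eff(T):
    """Equivalent specific heat: cp plus uniform latent-heat release in the mush."""
    in_mush = (T >= T_sol) & (T <= T_liq)
    return cp + np.where(in_mush, L / (T_liq - T_sol), 0.0)

# Sanity check: the enthalpy change across the mushy zone must equal cp*dT + L.
T = np.linspace(T_sol, T_liq, 100_001)
y = cp_eff(T)
dH = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(T))   # trapezoidal integration
```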
NASA Astrophysics Data System (ADS)
Satoh, Katsuhiko
2013-08-01
The thermodynamic scaling of molecular dynamic properties of rotation and thermodynamic parameters in a nematic phase was investigated by a molecular dynamic simulation using the Gay-Berne potential. A master curve for the relaxation time of flip-flop motion was obtained using thermodynamic scaling, and the dynamic property could be solely expressed as a function of TV^{γ_τ}, where T and V are the temperature and volume, respectively. The scaling parameter γ_τ was in excellent agreement with the thermodynamic parameter Γ, which is the logarithm of the slope of a line plotted for the temperature and volume at constant P2. This line was fairly linear, and as good as the line for p-azoxyanisole or the highly ordered small cluster model. The equivalence relation between Γ and γ_τ was compared with results obtained from the highly ordered small cluster model. The possibility of adapting the molecular model for the thermodynamic scaling of other dynamic rotational properties was also explored. The rotational diffusion constant and rotational viscosity coefficients, which were calculated using established theoretical and experimental expressions, were rescaled onto master curves with the same scaling parameters. The simulation illustrates the universal nature of the equivalence relation for liquid crystals.
NASA Astrophysics Data System (ADS)
Brekke, Stewart
2010-11-01
Originally Einstein proposed the mass-energy equivalence at low speeds as E = mc^2 + 1/2 mv^2. However, a mass may also be rotating and vibrating as well as moving linearly. Although small, these kinetic energies must be included in formulating a true mathematical statement of the mass-energy equivalence. Also, gravitational, electromagnetic, and magnetic potential energies must be included in the mass-energy equivalence statement. While the kinetic energy terms may differ in each physical situation, such as the types of vibrations and rotations, the basic equation for the mass-energy equivalence is therefore E = m0c^2 + 1/2 m0v^2 + 1/2 Iω^2 + 1/2 kx^2 + W_G + W_E + W_M.
Quality factor and dose equivalent investigations aboard the Soviet Space Station Mir
NASA Astrophysics Data System (ADS)
Bouisset, P.; Nguyen, V. D.; Parmentier, N.; Akatov, Ia. A.; Arkhangel'Skii, V. V.; Vorozhtsov, A. S.; Petrov, V. M.; Kovalev, E. E.; Siegrist, M.
1992-07-01
Since Dec 1988, the date of the French-Soviet joint space mission 'ARAGATZ', the CIRCE device has recorded dose equivalent and quality factor values inside the Mir station (380-410 km, 51.5 deg). After the initial gas filling two years ago, the low-pressure tissue equivalent proportional counter is still in good working condition. Some results from three periods are presented. The average dose equivalent rates measured are 0.6, 0.8, and 0.6 mSv/day, respectively, with a quality factor equal to 1.9. Some detailed measurements show the increase of the dose equivalent rates through the SAA and near the polar horns. The real-time determination of the quality factors makes it possible to identify high linear energy transfer events, with quality factors in the range 10-20.
He, Jiangnan; Lu, Lina; He, Xiangui; Xu, Xian; Du, Xuan; Zhang, Bo; Zhao, Huijuan; Sha, Jida; Zhu, Jianfeng; Zou, Haidong; Xu, Xun
2017-01-01
Purpose To report calculated crystalline lens power and describe the distribution of ocular biometry and its association with refractive error in older Chinese adults. Methods Random clustering sampling was used to identify adults aged 50 years and above in Xuhui and Baoshan districts of Shanghai. Refraction was determined by subjective refraction that achieved the best corrected vision based on monocular measurement. Ocular biometry was measured by IOL Master. The crystalline lens power of right eyes was calculated using the modified Bennett-Rabbetts formula. Results We analyzed 6099 normal phakic right eyes. The mean crystalline lens power was 20.34 ± 2.24D (range: 13.40–36.08). Lens power, spherical equivalent, and anterior chamber depth changed linearly with age; however, axial length, corneal power, and AL/CR ratio did not vary with age. The overall prevalence of hyperopia, myopia, and high myopia was 48.48% (95% CI: 47.23%–49.74%), 22.82% (95% CI: 21.77%–23.88%), and 4.57% (95% CI: 4.05%–5.10%), respectively. The prevalence of hyperopia increased linearly with age while lens power decreased with age. In multivariate models, refractive error was strongly correlated with axial length, lens power, corneal power, and anterior chamber depth; refractive error was slightly correlated with best corrected visual acuity, age, and sex. Conclusion Lens power, hyperopia, and spherical equivalent changed linearly with age; moreover, the continuous loss of lens power produced hyperopic shifts in refraction in subjects aged more than 50 years. PMID:28114313
Ellingson, Laura D; Hibbing, Paul R; Kim, Youngwon; Frey-Law, Laura A; Saint-Maurice, Pedro F; Welk, Gregory J
2017-06-01
The wrist is increasingly being used as the preferred site for objectively assessing physical activity but the relative accuracy of processing methods for wrist data has not been determined. This study evaluates the validity of four processing methods for wrist-worn ActiGraph (AG) data against energy expenditure (EE) measured using a portable metabolic analyzer (OM; Oxycon mobile) and the Compendium of physical activity. Fifty-one adults (ages 18-40) completed 15 activities ranging from sedentary to vigorous in a laboratory setting while wearing an AG and the OM. Estimates of EE and categorization of activity intensity were obtained from the AG using a linear method based on Hildebrand cutpoints (HLM), a non-linear modification of this method (HNLM), and two methods developed by Staudenmayer based on a Linear Model (SLM) and using random forest (SRF). Estimated EE and classification accuracy were compared to the OM and Compendium using Bland-Altman plots, equivalence testing, mean absolute percent error (MAPE), and Kappa statistics. Overall, classification agreement with the Compendium was similar across methods ranging from a Kappa of 0.46 (HLM) to 0.54 (HNLM). However, specificity and sensitivity varied by method and intensity, ranging from a sensitivity of 0% (HLM for sedentary) to a specificity of ~99% for all methods for vigorous. None of the methods was significantly equivalent to the OM (p > 0.05). Across activities, none of the methods evaluated had a high level of agreement with criterion measures. Additional research is needed to further refine the accuracy of processing wrist-worn accelerometer data.
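The agreement statistics used above can be sketched with a small Cohen's kappa implementation. The intensity labels below are hypothetical, not the study's data, and the four-level coding (sedentary through vigorous) is only illustrative.

```python
import numpy as np

def cohen_kappa(a, b):
    """Chance-corrected agreement between two label sequences."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                          # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)   # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical intensity classifications (0=sedentary, 1=light, 2=moderate,
# 3=vigorous) from a criterion measure and from wrist-accelerometer cutpoints.
criterion = [0, 0, 1, 1, 2, 2, 2, 3, 3, 3]
estimate = [0, 1, 1, 1, 2, 2, 3, 3, 3, 3]
kappa = cohen_kappa(criterion, estimate)
```

For this toy sequence the observed agreement is 0.8 and the chance agreement is 0.26, so kappa = (0.8 - 0.26) / 0.74.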
Gasser, T C; Nchimi, A; Swedenborg, J; Roy, J; Sakalihasan, N; Böckler, D; Hyhlik-Dürr, A
2014-03-01
To translate the individual abdominal aortic aneurysm (AAA) patient's biomechanical rupture risk profile to risk-equivalent diameters, and to retrospectively test their predictability in ruptured and non-ruptured aneurysms. Biomechanical parameters of ruptured and non-ruptured AAAs were retrospectively evaluated in a multicenter study. General patient data and high resolution computer tomography angiography (CTA) images from 203 non-ruptured and 40 ruptured aneurysmal infrarenal aortas. Three-dimensional AAA geometries were semi-automatically derived from CTA images. Finite element (FE) models were used to predict peak wall stress (PWS) and peak wall rupture index (PWRI) according to the individual anatomy, gender, blood pressure, intra-luminal thrombus (ILT) morphology, and relative aneurysm expansion. Average PWS diameter and PWRI diameter responses were evaluated, which allowed for the PWS equivalent and PWRI equivalent diameters for any individual aneurysm to be defined. PWS increased linearly and PWRI exponentially with respect to maximum AAA diameter. A size-adjusted analysis showed that PWS equivalent and PWRI equivalent diameters were increased by 7.5 mm (p = .013) and 14.0 mm (p < .001) in ruptured cases when compared to non-ruptured controls, respectively. In non-ruptured cases the PWRI equivalent diameters were increased by 13.2 mm (p < .001) in females when compared with males. Biomechanical parameters like PWS and PWRI allow for a highly individualized analysis by integrating factors that influence the risk of AAA rupture like geometry (degree of asymmetry, ILT morphology, etc.) and patient characteristics (gender, family history, blood pressure, etc.). PWRI and the reported annual risk of rupture increase similarly with the diameter. PWRI equivalent diameter expresses the PWRI through the diameter of the average AAA that has the same PWRI, i.e. is at the same biomechanical risk of rupture. 
Consequently, PWRI equivalent diameter facilitates a straightforward interpretation of biomechanical analysis and connects to diameter-based guidelines for AAA repair indication. PWRI equivalent diameter reflects an additional diagnostic parameter that may provide more accurate clinical data for AAA repair indication.
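The equivalent-diameter idea can be sketched as inverting the fitted average PWRI-diameter response. The abstract reports that PWRI grows exponentially with maximum diameter; the exponential coefficients below are invented placeholders, not the study's fit.

```python
import numpy as np

# Hypothetical average response: PWRI_avg(d) = a * exp(b * d), d in mm.
a, b = 0.05, 0.045   # illustrative coefficients (b in 1/mm), not fitted values

def pwri_average(d_mm):
    return a * np.exp(b * d_mm)

def pwri_equivalent_diameter(pwri):
    """Diameter of the average AAA carrying the same biomechanical rupture risk."""
    return np.log(pwri / a) / b

# Round trip: an aneurysm sitting exactly on the average curve at 55 mm
# maps back to a 55 mm equivalent diameter.
d_eq = pwri_equivalent_diameter(pwri_average(55.0))
# A patient whose PWRI is double the 55 mm average gets a larger equivalent
# diameter, shifted by ln(2)/b.
d_high = pwri_equivalent_diameter(2.0 * pwri_average(55.0))
```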
Improving linear accelerator service response with a real-time electronic event reporting system.
Hoisak, Jeremy D P; Pawlicki, Todd; Kim, Gwe-Ya; Fletcher, Richard; Moore, Kevin L
2014-09-08
To track linear accelerator performance issues, an online event recording system was developed in-house for use by therapists and physicists to log the details of technical problems arising on our institution's four linear accelerators. In use since October 2010, the system was designed so that all clinical physicists would receive email notification when an event was logged. Starting in October 2012, we initiated a pilot project in collaboration with our linear accelerator vendor to explore a new model of service and support, in which event notifications were also sent electronically directly to dedicated engineers at the vendor's technical help desk, who then initiated a response to technical issues. Previously, technical issues were reported by telephone to the vendor's call center, which then disseminated information and coordinated a response with the Technical Support help desk and local service engineers. The purpose of this work was to investigate the improvements to clinical operations resulting from this new service model. The new and old service models were quantitatively compared by reviewing event logs and the oncology information system database in the nine months prior to and after initiation of the project. Here, we focus on events that resulted in an inoperative linear accelerator ("down" machine). Machine downtime, vendor response time, treatment cancellations, and event resolution were evaluated and compared over two equivalent time periods. In 389 clinical days, there were 119 machine-down events: 59 events before and 60 after introduction of the new model. In the new model, median time to service response decreased from 45 to 8 min, service engineer dispatch time decreased 44%, downtime per event decreased from 45 to 20 min, and treatment cancellations decreased 68%. The decreased vendor response time and reduced number of on-site visits by a service engineer resulted in decreased downtime and decreased patient treatment cancellations.
NASA Astrophysics Data System (ADS)
Yang, B. D.; Chu, M. L.; Menq, C. H.
1998-03-01
Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transition between stick, slip, and separation of the interface. The established analytical transition criteria are particularly important to the proposed friction contact model because the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane, and the variable normal load perpendicular to the contact plane, can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness estimates, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.
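The hysteresis-loop construction can be sketched with the classic one-dimensional Jenkins (elastic Coulomb) element, the simplest special case of this kind of friction contact: constant normal load and no separation, so only the stick-slip transition remains. All parameter values are illustrative.

```python
import numpy as np

# Jenkins element: a tangential spring k_t in series with a Coulomb slider
# that saturates at the friction limit mu*N.
k_t, muN = 100.0, 1.0
u0 = 0.05   # imposed displacement amplitude (> muN/k_t, so the interface slips)

theta = np.linspace(0.0, 4.0 * np.pi, 40_001)   # two cycles; the 2nd is steady
u = u0 * np.sin(theta)
f = np.zeros_like(u)
for i in range(1, u.size):
    trial = f[i - 1] + k_t * (u[i] - u[i - 1])  # stick prediction
    f[i] = np.clip(trial, -muN, muN)            # slip once the limit is reached

# Energy dissipated over the steady-state cycle = area of the hysteresis loop.
half = u.size // 2
seg_u, seg_f = u[half:], f[half:]
W = np.sum(0.5 * (seg_f[1:] + seg_f[:-1]) * np.diff(seg_u))
W_analytic = 4.0 * muN * (u0 - muN / k_t)       # classic Jenkins-loop area
```

The loop area fixes the equivalent damping at that amplitude, which is the quantity the harmonic balance step consumes.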
Observations on personnel dosimetry for radiotherapy personnel operating high-energy LINACs.
Glasgow, G P; Eichling, J; Yoder, R C
1986-06-01
A series of measurements were conducted to determine the cause of a sudden increase in personnel radiation exposures. One objective of the measurements was to determine if the increases were related to changing from film dosimeters exchanged monthly to TLD-100 dosimeters exchanged quarterly. While small increases were observed in the dose equivalents of most employees, the dose equivalents of personnel operating medical electron linear accelerators with energies greater than 20 MV doubled coincidentally with the change in the personnel dosimeter program. The measurements indicated a small thermal neutron radiation component around the accelerators operated by these personnel. This component caused the doses measured with the TLD-100 dosimeters to be overstated. Therefore, the increase in these personnel dose equivalents was not due to changes in work habits or radiation environments. Either film or TLD-700 dosimeters would be suitable for personnel monitoring around high-energy linear accelerators. The final choice would depend on economics and personal preference.
Slope stability analysis using limit equilibrium method in nonlinear criterion.
Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu
2014-01-01
In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combined the Hoek-Brown criterion and the limit equilibrium method and proposed an equation for calculating the safety factor of a slope with the limit equilibrium method in the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals that there is a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σ_ci, and the intact rock parameter m_i. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σ_ci is relatively small, the relation between F and σ_ci is nonlinear, but when σ_ci is relatively large, the relation is linear; with the increase of m_i, F first decreases and then increases.
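A sketch of the equivalent cohesive strength and friction angle computation, transcribed from the commonly cited 2002 Hoek-Carranza-Torres-Corkum formulation; the transcription, and the treatment of the normalized confining stress sig3n as a direct input, should be checked against the original before any real use. The rock-mass values below are illustrative.

```python
import math

def hoek_brown_equivalent_mc(GSI, mi, sigci, D, sig3n):
    """Equivalent Mohr-Coulomb cohesion (MPa) and friction angle (deg) from
    Hoek-Brown parameters, following the 2002 formulation (transcription
    should be verified). sig3n is the normalized confining stress sig3max/sigci."""
    mb = mi * math.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = math.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (math.exp(-GSI / 15.0) - math.exp(-20.0 / 3.0)) / 6.0
    t = (s + mb * sig3n) ** (a - 1.0)
    phi = math.degrees(math.asin(
        6.0 * a * mb * t / (2.0 * (1.0 + a) * (2.0 + a) + 6.0 * a * mb * t)))
    c = (sigci * ((1.0 + 2.0 * a) * s + (1.0 - a) * mb * sig3n) * t
         / ((1.0 + a) * (2.0 + a)
            * math.sqrt(1.0 + 6.0 * a * mb * t / ((1.0 + a) * (2.0 + a)))))
    return c, phi

# Illustrative rock mass: GSI = 45, mi = 10, sigci = 50 MPa, sig3n = 0.25.
c_intact, phi_intact = hoek_brown_equivalent_mc(45.0, 10.0, 50.0, 0.0, 0.25)
c_disturbed, phi_disturbed = hoek_brown_equivalent_mc(45.0, 10.0, 50.0, 1.0, 0.25)
```

Consistent with the abstract, increasing the disturbance factor D lowers the equivalent cohesive strength.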
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Goetz, Alexander F. H.
1992-01-01
Over the last decade, technological advances in airborne imaging spectrometers, with spectral resolution comparable to laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the spectral channel or channels in the AIS data that best correlated with lignin contents measured by chemical methods. The regression technique takes advantage neither of the spectral shape of the lignin reflectance feature as a diagnostic tool nor of the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 micron region consists of the spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
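The linear least squares spectral matching described above can be sketched with synthetic spectra. The endmember shapes below are hypothetical placeholders, not AVIRIS or laboratory data; the point is that abundances of overlapping spectral components fall out of a single least-squares solve:

```python
import numpy as np

# Toy endmember spectra over a 1.0-2.5 micrometer grid (hypothetical shapes).
wavelengths = np.linspace(1.0, 2.5, 50)
dry_leaf = 0.4 + 0.1 * np.sin(3.0 * wavelengths)         # stand-in dry-leaf spectrum
water = np.exp(-2.0 * (wavelengths - 1.9) ** 2)          # stand-in liquid-water feature
A = np.column_stack([dry_leaf, water])                   # endmembers as columns

# Synthesize a "fresh leaf" spectrum as a known mixture plus noise,
# then recover the component abundances by linear least squares.
true_coeffs = np.array([0.7, 0.3])
rng = np.random.default_rng(0)
measured = A @ true_coeffs + 0.001 * rng.standard_normal(len(wavelengths))

coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

Because the model is linear in the abundances, this retrieval needs no iterative search, which is the efficiency argument for the linear variant of the matching technique.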
Connes' embedding problem and winning strategies for quantum XOR games
NASA Astrophysics Data System (ADS)
Harris, Samuel J.
2017-12-01
We consider quantum XOR games, defined in the work of Regev and Vidick [ACM Trans. Comput. Theory 7, 43 (2015)], from the perspective of unitary correlations defined in the work of Harris and Paulsen [Integr. Equations Oper. Theory 89, 125 (2017)]. We show that the winning bias of a quantum XOR game in the tensor product model (respectively, the commuting model) is equal to the norm of its associated linear functional on the unitary correlation set from the appropriate model. We show that Connes' embedding problem has a positive answer if and only if every quantum XOR game has entanglement bias equal to the commuting bias. In particular, the embedding problem is equivalent to determining whether every quantum XOR game G with a winning strategy in the commuting model also has a winning strategy in the approximate finite-dimensional model.
Acoustic Treatment Design Scaling Methods. Volume 3; Test Plans, Hardware, Results, and Evaluation
NASA Technical Reports Server (NTRS)
Yu, J.; Kwan, H. W.; Echternach, D. K.; Kraft, R. E.; Syed, A. A.
1999-01-01
The ability to design, build, and test miniaturized acoustic treatment panels on scale-model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. To be able to use scale-model treatment as a full-scale design tool, it is necessary that the designer be able to reliably translate the scale-model design and performance to an equivalent full-scale design. The primary objective of the study presented in this volume of the final report was to conduct laboratory tests to evaluate liner acoustic properties and validate advanced treatment impedance models. These laboratory tests include DC flow resistance measurements, normal incidence impedance measurements, DC flow and impedance measurements in the presence of grazing flow, and in-duct liner attenuation as well as modal measurements. Test panels were fabricated at three different scale factors (i.e., full-scale, half-scale, and one-fifth scale) to support laboratory acoustic testing. The panel configurations include single-degree-of-freedom (SDOF) perforated sandwich panels, SDOF linear (wire mesh) liners, and double-degree-of-freedom (DDOF) linear acoustic panels.
Gyrofluid turbulence models with kinetic effects
NASA Astrophysics Data System (ADS)
Dorland, W.; Hammett, G. W.
1993-03-01
Nonlinear gyrofluid equations are derived by taking moments of the nonlinear, electrostatic gyrokinetic equation. The principal model presented includes evolution equations for the guiding center n, u∥, T∥, and T⊥ along with an equation expressing the quasineutrality constraint. Additional evolution equations for higher moments are derived that may be used if greater accuracy is desired. The moment hierarchy is closed with a Landau damping model [G. W. Hammett and F. W. Perkins, Phys. Rev. Lett. 64, 3019 (1990)], which is equivalent to a multipole approximation to the plasma dispersion function, extended to include finite Larmor radius (FLR) effects. In particular, new dissipative, nonlinear terms are found that model the perpendicular phase mixing of the distribution function along contours of constant electrostatic potential. These ``FLR phase-mixing'' terms introduce a hyperviscosity-like damping ∝ k⊥²|Φ_k||k×k′|, which should provide a physics-based damping mechanism at high k⊥ρ that is potentially as important as the usual polarization drift nonlinearity. The moments are taken in guiding center space to pick up the correct nonlinear FLR terms and the gyroaveraging of the shear. The equations are solved with a nonlinear, three-dimensional initial value code. Linear results are presented, showing excellent agreement with linear gyrokinetic theory.
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Konradi, A.; Atwell, W.; Golightly, M. J.; Cucinotta, F. A.; Wilson, J. W.; Petrov, V. M.; Tchernykh, I. V.; Shurshakov, V. A.; Lobakov, A. P.
1996-01-01
A tissue-equivalent proportional counter designed to measure linear energy transfer (LET) spectra in the range 0.2-1250 keV/micrometer was flown in the Kvant module on the Mir orbital station during September 1994. The spacecraft was in a 51.65 degrees inclination, elliptical (390 x 402 km) orbit. This is nearly the lower limit of its flight altitude. The total absorbed dose rate measured was 411.3 +/- 4.41 microGy/day with an average quality factor of 2.44. The galactic cosmic radiation (GCR) dose rate was 133.6 microGy/day with a quality factor of 3.35. The trapped radiation belt dose rate was 277.7 microGy/day with an average quality factor of 1.94. The peak rate through the South Atlantic Anomaly was approximately 12 microGy/min and nearly constant from one pass to another. A detailed comparison of the measured LET spectra has been made with radiation transport models. The GCR results are in good agreement with model calculations; however, this is not the case for radiation belt particles and again points to the need for improving the AP8 omni-directional trapped proton models.
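Converting a measured LET spectrum into an average quality factor and dose equivalent is a standard LET-weighted sum. A minimal sketch using the ICRP 60 Q(L) relationship; the flight analysis may have used a different Q definition, so the numbers here are illustrative only:

```python
import numpy as np

def icrp60_quality_factor(let):
    """ICRP 60 quality factor Q(L); let is in keV/micrometer."""
    let = np.asarray(let, dtype=float)
    q = np.ones_like(let)                      # Q = 1 below 10 keV/um
    mid = (let >= 10) & (let <= 100)
    q[mid] = 0.32 * let[mid] - 2.2             # linear rise in the mid range
    high = let > 100
    q[high] = 300.0 / np.sqrt(let[high])       # roll-off at very high LET
    return q

def dose_equivalent(let_bins, dose_per_bin):
    """Dose equivalent from a binned absorbed-dose-vs-LET distribution."""
    return float(np.sum(icrp60_quality_factor(let_bins) * np.asarray(dose_per_bin)))
```

An average quality factor like the 2.44 quoted above would then be the dose equivalent divided by the total absorbed dose over the same LET bins.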
A spatial operator algebra for manipulator modeling and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Jain, A.; Kreutz-Delgado, K.
1991-01-01
A recently developed spatial operator algebra for manipulator modeling, control, and trajectory design is discussed. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of a manipulator. Inversion of operators can be efficiently obtained via techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator and for control and trajectory design algorithms. The interpretation of expressions within the algebraic framework leads to enhanced conceptual and physical understanding of manipulator dynamics and kinematics.
Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Boskovic, Jovan D.
2008-01-01
This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.
Rigatos, Gerasimos G; Rigatou, Efthymia G; Djida, Jean Daniel
2015-10-01
A method for early diagnosis of parametric changes in intracellular protein synthesis models (e.g. the p53 protein - mdm2 inhibitor model) is developed with the use of a nonlinear Kalman Filtering approach (Derivative-free nonlinear Kalman Filter) and of statistical change detection methods. The intracellular protein synthesis dynamic model is described by a set of coupled nonlinear differential equations. It is shown that such a dynamical system satisfies differential flatness properties, and this allows it to be transformed, through a change of variables (diffeomorphism), to the so-called linear canonical form. For the linearized equivalent of the dynamical system, state estimation can be performed using the Kalman Filter recursion. Moreover, by applying an inverse transformation based on the previous diffeomorphism it becomes possible to obtain estimates of the state variables of the initial nonlinear model. By comparing the output of the Kalman Filter (which is assumed to correspond to the undistorted dynamical model) with measurements obtained from the monitored protein synthesis system, a sequence of differences (residuals) is obtained. The statistical processing of the residuals with the use of χ² change detection tests can provide an indication, within specific confidence intervals, about parametric changes in the considered biological system and consequently about the appearance of specific diseases (e.g. malignancies).
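The residual-based χ² change detection step can be sketched independently of the Kalman filter itself. The residual sequences, their assumed standard deviation, and the 95% χ² threshold for 20 degrees of freedom below are all illustrative values, not taken from the paper:

```python
import numpy as np

CHI2_95_DOF20 = 31.41  # 95% quantile of the chi-square distribution, 20 dof

def chi2_change_test(residuals, sigma, threshold=CHI2_95_DOF20):
    """Normalized sum of squared filter residuals; under the no-change
    hypothesis the statistic is approximately chi-square distributed
    with len(residuals) degrees of freedom."""
    stat = float(np.sum((np.asarray(residuals) / sigma) ** 2))
    return stat, stat > threshold

rng = np.random.default_rng(1)
healthy = rng.standard_normal(20)        # residuals consistent with the model
drifted = rng.standard_normal(20) + 2.0  # a parametric change biases the residuals
stat_h, alarm_h = chi2_change_test(healthy, 1.0)
stat_d, alarm_d = chi2_change_test(drifted, 1.0)
```

Exceeding the threshold flags a statistically significant mismatch between the nominal model and the monitored system, which is the diagnosis signal described above.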
Yang, Qing; Fan, Liu-Yin; Huang, Shan-Sheng; Zhang, Wei; Cao, Cheng-Xi
2011-04-01
In this paper, we developed a novel method of acid-base titration, viz. the electromigration acid-base titration (EABT), via a moving neutralization boundary (MNB). With HCl and NaOH as the model strong acid and base, respectively, we conducted experiments on the EABT via the method of the moving neutralization boundary for the first time. The experiments revealed that (i) the concentration of agarose gel, the voltage used and the content of background electrolyte (KCl) had an evident influence on the boundary movement; (ii) the movement length was a function of the running time under constant acid and base concentrations; and (iii) there was good linearity between the length and the natural logarithm of the HCl concentration under the optimized conditions, and this linearity could be used to determine the acid concentration. The experiments further manifested that (i) the RSD values of intra-day and inter-day runs were less than 1.59 and 3.76%, respectively, indicating precision and stability similar to capillary electrophoresis or HPLC; (ii) indicators with different pK(a) values had no obvious effect on the EABT, in contrast to their strong influence on the judgment of the equivalence point in classic titration; and (iii) a constant equivalence-point titration always existed in the EABT, unlike in classic volumetric analysis. Additionally, the EABT could be put to good use for the determination of actual acid concentrations. The experimental results achieved herein offer new general guidance for the development of classic volumetric analysis and element (e.g. nitrogen) content analysis in protein chemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Papagiannopoulou, Christina; Decubber, Stijn; Miralles, Diego; Demuzere, Matthias; Dorigo, Wouter; Verhoest, Niko; Waegeman, Willem
2017-04-01
Satellite data provide an abundance of information about crucial climatic and environmental variables. These data - consisting of global records, spanning up to 35 years and having the form of multivariate time series with different spatial and temporal resolutions - enable the study of key climate-vegetation interactions. Although methods based on correlations and linear models are typically used for this purpose, their assumption of linearity in the climate-vegetation relationships is too simplistic. Therefore, we adopt a recently proposed non-linear Granger causality analysis [1], in which we incorporate spatial information, concatenating data from neighboring pixels and training a joint model on the combined data. Experimental results based on global data sets show that considering non-linear relationships leads to a higher explained variance of past vegetation dynamics, compared to simple linear models. Our approach consists of several steps. First, we compile an extensive database [1], which includes multiple data sets for land surface temperature, near-surface air temperature, surface radiation, precipitation, snow water equivalents and surface soil moisture. Based on this database, high-level features are constructed and considered as predictors in our machine-learning framework. These high-level features include (de-trended) seasonal anomalies, lagged variables, past cumulative variables, and extreme indices, all calculated from the raw climatic data. Second, we apply a spatiotemporal non-linear Granger causality framework - in which the linear predictive model is replaced by a non-linear machine-learning algorithm - in order to assess which of these predictor variables Granger-cause vegetation dynamics at each 1° pixel. We use the de-trended anomalies of the Normalized Difference Vegetation Index (NDVI) to characterize vegetation, this being the target variable of our framework.
Experimental results indicate that climate strongly (Granger-)causes vegetation dynamics in most regions globally. More specifically, water availability is the dominant vegetation driver in 54% of the vegetated surface. Furthermore, our results show that precipitation and soil moisture have prolonged impacts on vegetation in semiarid regions, with up to 10% of additional explained variance in vegetation dynamics occurring three months later. Finally, hydro-climatic extremes seem to have a remarkable impact on vegetation, since they also explain up to 10% of additional variance of vegetation in certain regions despite their infrequent occurrence. References [1] Papagiannopoulou, C., Miralles, D. G., Verhoest, N. E. C., Dorigo, W. A., and Waegeman, W.: A non-linear Granger causality framework to investigate climate-vegetation dynamics, Geosci. Model Dev. Discuss., doi:10.5194/gmd-2016-266, in review, 2016.
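The core comparison above, that a non-linear predictive model explains more out-of-sample variance of vegetation than a linear one, can be illustrated on toy data. The saturating response, single lag, and polynomial feature set below are assumptions for the sketch; the actual framework uses a machine-learning regressor and many climate features:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
precip = rng.standard_normal(n)                  # toy climate predictor
lag1 = np.roll(precip, 1)
lag1[0] = 0.0
# Hypothetical vegetation anomaly: a saturating response to lagged precipitation.
ndvi = np.tanh(2.0 * lag1) + 0.1 * rng.standard_normal(n)

def oos_r2(features, y, split=300):
    """Out-of-sample explained variance of an ordinary least-squares fit:
    train on the first `split` samples, score on the rest."""
    X = np.column_stack([np.ones(len(y))] + features)
    beta, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
    resid = y[split:] - X[split:] @ beta
    return 1.0 - resid.var() / y[split:].var()

r2_linear = oos_r2([lag1], ndvi)                      # linear lagged model
r2_nonlin = oos_r2([lag1, lag1**2, lag1**3], ndvi)    # non-linear feature model
```

Scoring on held-out samples, as the Granger framework does, keeps the comparison honest: the larger model wins only if the non-linearity is real, not merely because it has more parameters.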
Cosmic ray LET spectra and doses on board Cosmos-2044 biosatellite
NASA Technical Reports Server (NTRS)
Watts, J. W., Jr.; Parnell, T. A.; Dudkin, V. E.; Kovalev, E. E.; Potapov, Yu. V.; Benton, E. V.; Frank, A. L.; Benton, E. R.; Beaujean, R.; Heilmann, C.
1995-01-01
Results of the experiments on board Cosmos-2044 (Biosatellite 9) are presented. Various nuclear track detectors (NTD) (dielectric, AgCl-based, nuclear emulsions) were used to obtain the Linear Energy Transfer (LET) spectra inside and outside the satellite. The spectra from the different NTDs have proved to be in general agreement. The results of LET spectra calculations using two different models are also presented. The resultant LET distributions are used to calculate the absorbed and equivalent doses and the orbit-averaged quality factors (QF) of the cosmic rays (CR). Absorbed dose rates inside (approximately 20 g cm^-2 shielding) and outside (1 g cm^-2) the spacecraft, omitting electrons, were found to be 4.8 and 8.6 mrad d^-1, respectively, while the corresponding equivalent doses were 8.8 and 19.7 mrem d^-1. The effects of the flight parameters on the total fluence of, and on the dose from, the CR particles are analyzed. Integral dose distributions of the detected particles are also determined. The LET values which separate absorbed and equivalent doses into 50% intervals are estimated. The CR-39 dielectric NTD is shown to detect 20-30% of the absorbed dose and 60-70% of the equivalent dose in the Cosmos-2044 orbit. The influence of solar activity phase on the magnitude of CR flux is discussed.
Ma, Lijun; Lee, Letitia; Barani, Igor; Hwang, Andrew; Fogh, Shannon; Nakamura, Jean; McDermott, Michael; Sneed, Penny; Larson, David A; Sahgal, Arjun
2011-11-21
Rapid delivery of multiple shots or isocenters is one of the hallmarks of Gamma Knife radiosurgery. In this study, we investigated whether the temporal order of shots delivered with the Gamma Knife Perfexion would significantly influence the biologically equivalent dose for complex multi-isocenter treatments. Twenty single-target cases were selected for analysis. For each case, 3D dose matrices of individual shots were extracted and single-fraction equivalent uniform dose (sEUD) values were determined for all possible shot delivery sequences, corresponding to different patterns of temporal dose delivery within the target. We found significant variations in the sEUD values among these sequences, exceeding 15% for certain cases. However, the sequences for the actual treatment delivery were found to agree (<3%) and to correlate (R² = 0.98) excellently with the sequences yielding the maximum sEUD values for all studied cases. This result is applicable to both fast and slow growing tumors with α/β values of 2 to 20 according to the linear-quadratic model. In conclusion, despite large potential variations among different shot sequences for multi-isocenter Gamma Knife treatments, current clinical delivery sequences exhibited consistent biological target dosing that approached the maximum achievable for all studied cases.
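The static part of the sEUD calculation (before any shot-sequencing and repair-kinetics effects, which need a time-dependent model not sketched here) reduces to averaging LQ cell survival over the dose matrix and inverting the survival curve. The α and α/β values below are illustrative:

```python
import numpy as np

def seud(dose, alpha=0.2, alpha_beta=10.0):
    """Single-fraction equivalent uniform dose under the LQ model:
    average the per-voxel survival fractions, then invert the LQ
    survival curve for the uniform dose giving the same mean survival."""
    beta = alpha / alpha_beta
    surv = np.exp(-(alpha * dose + beta * dose ** 2))
    mean_surv = surv.mean()
    # Solve beta*d^2 + alpha*d + ln(mean_surv) = 0 for d >= 0.
    return (-alpha + np.sqrt(alpha ** 2 - 4.0 * beta * np.log(mean_surv))) / (2.0 * beta)
```

Because mean survival is dominated by the coldest voxels, a non-uniform dose distribution yields an sEUD below its arithmetic mean dose, which is why sEUD is a useful single-number target metric here.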
Power Supply Fault Tolerant Reliability Study
1991-04-01
Fragmentary scanned text from this report (McDonnell Douglas Electronics Systems Company) preserves partial design guidelines: base circuitry should be designed to drive the transistor (text truncated); turn-off/turn-on logic should be sequenced in an orderly and controllable manner; and certain drive circuitry is described as easier to design than that for equivalent bipolar transistors. Cited references include SWITCHING REGULATORS (Ref. 28) and SWITCHING AND LINEAR POWER SUPPLY DESIGN (Ref. 25).
A Model for Temperature Fluctuations in a Buoyant Plume
NASA Astrophysics Data System (ADS)
Bisignano, A.; Devenish, B. J.
2015-11-01
We present a hybrid Lagrangian stochastic model for buoyant plume rise from an isolated source that includes the effects of temperature fluctuations. The model is based on that of Webster and Thomson (Atmos Environ 36:5031-5042, 2002) in that it is a coupling of a classical plume model in a crossflow with stochastic differential equations for the vertical velocity and temperature (which are themselves coupled). The novelty lies in the addition of the latter stochastic differential equation. Parametrizations of the plume turbulence are presented that are used as inputs to the model. The root-mean-square temperature is assumed to be proportional to the difference between the centreline temperature of the plume and the ambient temperature. The constant of proportionality is tuned by comparison with equivalent statistics from large-eddy simulations (LES) of buoyant plumes in a uniform crossflow and linear stratification. We compare plume trajectories for a wide range of crossflow velocities and find that the model generally compares well with the equivalent LES results particularly when added mass is included in the model. The exception occurs when the crossflow velocity component becomes very small. Comparison of the scalar concentration, both in terms of the height of the maximum concentration and its vertical spread, shows similar behaviour. The model is extended to allow for realistic profiles of ambient wind and temperature and the results are compared with LES of the plume that emanated from the explosion and fire at the Buncefield oil depot in 2005.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem that is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving linear relaxation programming problems. Global convergence has been proved, and results on sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
Zavgorodni, S
2004-12-07
Inter-fraction dose fluctuations, which appear as a result of setup errors, organ motion and treatment machine output variations, may influence the radiobiological effect of the treatment even when the total delivered physical dose remains constant. The effect of these inter-fraction dose fluctuations on the biological effective dose (BED) has been investigated. Analytical expressions for the BED accounting for the dose fluctuations have been derived. The concept of biological effective constant dose (BECD) has been introduced. The equivalent constant dose (ECD), representing the constant physical dose that provides the same cell survival fraction as the fluctuating dose, has also been introduced. The dose fluctuations with Gaussian as well as exponential probability density functions were investigated. The values of BECD and ECD calculated analytically were compared with those derived from Monte Carlo modelling. The agreement between Monte Carlo modelled and analytical values was excellent (within 1%) for a range of dose standard deviations (0-100% of the dose) and the number of fractions (2 to 37) used in the comparison. The ECDs have also been calculated for conventional radiotherapy fields. The analytical expression for the BECD shows that BECD increases linearly with the variance of the dose. The effect is relatively small, and in the flat regions of the field it results in less than 1% increase of ECD. In the penumbra region of the 6 MV single radiotherapy beam the ECD exceeded the physical dose by up to 35%, when the standard deviation of combined patient setup/organ motion uncertainty was 5 mm. Equivalently, the ECD field was approximately 2 mm wider than the physical dose field. The difference between ECD and the physical dose is greater for normal tissues than for tumours.
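A Monte Carlo sketch of the equivalent constant dose under Gaussian per-fraction fluctuations. The paper's analytical BECD/ECD expressions are not reproduced in the abstract, so this sketch makes its own simplifying assumption: the ECD is defined here by matching the mean LQ biological effective dose, BED = Σ d_i(1 + d_i/(α/β)); the α/β ratio and fractionation scheme are illustrative:

```python
import numpy as np

def ecd_from_fluctuations(d_mean, sigma, n_frac, ab_ratio=3.0, n_runs=50000):
    """Total equivalent constant dose: the constant per-fraction dose whose
    LQ biological effective dose matches the Monte Carlo mean BED under
    Gaussian per-fraction fluctuations N(d_mean, sigma)."""
    rng = np.random.default_rng(3)
    d = rng.normal(d_mean, sigma, size=(n_runs, n_frac))
    mean_bed = (d * (1.0 + d / ab_ratio)).sum(axis=1).mean()
    # Solve n_frac * (x + x^2/ab_ratio) = mean_bed for the per-fraction dose x >= 0.
    x = (-1.0 + np.sqrt(1.0 + 4.0 * mean_bed / (n_frac * ab_ratio))) / (2.0 / ab_ratio)
    return n_frac * x

ecd_total = ecd_from_fluctuations(2.0, 0.5, 30)   # 30 x 2 Gy with 0.5 Gy jitter
```

Since E[d²] = d̄² + σ², the mean BED (and hence this ECD) grows with the dose variance, which matches the abstract's statement that the effect increases linearly with the variance of the dose.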
Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J
2018-05-01
To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast enhanced (DCE) MRI data and apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply Variable Projection (VP) to convert the fitting problem from a multi-dimensional to a one-dimensional line search to improve computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate in that the medians closely matched assumed values across typical signal to noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in the ideal 100%. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being more reliably (100%) convergent and computationally about 3× (standard Tofts) and 2× (extended Tofts) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
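The Variable Projection idea, that for each trial kep the optimal Ktrans is a closed-form linear fit so the non-linear search collapses to one dimension, can be sketched for the standard Tofts model. The bolus input and the grid search below are illustrative simplifications (a real solver would use a proper 1D line search rather than a fixed grid):

```python
import numpy as np

def tofts_conv(cp, t, kep):
    """Discrete approximation of the Tofts kernel (cp * exp(-kep t))(t)
    on a uniform time grid; Ktrans is deliberately factored out."""
    dt = t[1] - t[0]
    return np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

def fit_tofts_vp(ct, cp, t, kep_grid):
    """Variable projection: for each trial kep the optimal Ktrans is a
    closed-form linear least-squares solution, so only kep is searched."""
    best = (0.0, kep_grid[0], np.inf)
    for kep in kep_grid:
        basis = tofts_conv(cp, t, kep)
        ktrans = float(basis @ ct / (basis @ basis))
        rss = float(np.sum((ct - ktrans * basis) ** 2))
        if rss < best[2]:
            best = (ktrans, kep, rss)
    return best[0], best[1]

# Synthetic check with a hypothetical bolus input and known parameters.
t = np.linspace(0.0, 5.0, 200)
cp = t * np.exp(-t)                       # illustrative plasma input function
ct = 0.25 * tofts_conv(cp, t, 0.6)        # Ktrans = 0.25, kep = 0.6
ktrans_hat, kep_hat = fit_tofts_vp(ct, cp, t, np.linspace(0.1, 1.5, 141))
```

Eliminating the linear parameter is what buys the robustness reported above: a 1D search cannot wander in a multi-dimensional parameter space the way an unconstrained LM iteration can.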
NASA Technical Reports Server (NTRS)
Tyson, R. W.; Muraca, R. J.
1975-01-01
The local linearization method for axisymmetric flow is combined with the transonic equivalence rule to calculate pressure distribution on slender bodies at free-stream Mach numbers from .8 to 1.2. This is an approximate solution to the transonic flow problem which yields results applicable during the preliminary design stages of a configuration development. The method can be used to determine the aerodynamic loads on parabolic arc bodies having either circular or elliptical cross sections. It is particularly useful in predicting pressure distributions and normal force distributions along the body at small angles of attack. The equations discussed may be extended to include wing-body combinations.
Simple taper: Taper equations for the field forester
David R. Larsen
2017-01-01
"Simple taper" is a set of linear equations based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...
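The equations themselves are not reproduced in the abstract; a minimal sketch of the idea, assuming a single linear taper rate computed from two outside-bark diameter measurements (the heights and diameters below are hypothetical):

```python
def taper_rate(d1, h1, d2, h2):
    """Taper rate (diameter units per unit height) from two outside-bark
    diameter measurements taken at heights h1 < h2 on the stem."""
    return (d1 - d2) / (h2 - h1)

def diameter_at(h, d1, h1, rate):
    """Linear taper prediction of outside-bark diameter at height h."""
    return d1 - rate * (h - h1)

# Hypothetical tree: 30 cm diameter at breast height (1.3 m), 24 cm at 7.3 m.
rate = taper_rate(30.0, 1.3, 24.0, 7.3)   # 1.0 cm of diameter per metre
```

The field appeal is exactly this simplicity: two diameter readings give the rate, and every other stem diameter follows from one subtraction and one multiplication.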
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. 
This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins (PCMs) is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes applying the simulation models to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
NASA Astrophysics Data System (ADS)
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is represented using a fractional order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm so as to ensure convergence to the physical parameters. An initialization method is proposed in this paper by taking into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.
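A sketch of the Randles equivalent-circuit impedance with a fractional-order diffusion element standing in for the Warburg-like behavior. This is the model structure only, not the paper's identification or initialization algorithm, and the component values and exponent below are illustrative:

```python
import numpy as np

def randles_impedance(freq, r_s, r_ct, c_dl, a_w, alpha=0.5):
    """Randles equivalent-circuit impedance at frequency freq (Hz):
    series resistance r_s, then double-layer capacitance c_dl in
    parallel with charge-transfer resistance r_ct plus a fractional
    order diffusion element a_w / (jw)^alpha."""
    jw = 1j * 2.0 * np.pi * np.asarray(freq)
    z_diff = a_w / jw ** alpha          # Warburg-like term; alpha = 0.5 is classic
    z_branch = r_ct + z_diff
    return r_s + 1.0 / (jw * c_dl + 1.0 / z_branch)
```

At high frequency the capacitor shorts the faradaic branch and the impedance tends to r_s; at low frequency the fractional diffusion term dominates, which is the behavior the fractional-order transfer function captures for identification.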
NASA Astrophysics Data System (ADS)
Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk
2008-11-01
Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
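For reference, a standard form of the LQ-Poisson TCP criterion discussed here, for $n$ fractions of dose $d$ (total dose $D = nd$) and initial clonogen number $N_0$; the paper's exact parameterization (e.g. with inter-patient heterogeneity in radiation sensitivity) may differ:

```latex
\mathrm{TCP}
  = \exp\!\left[-N_0\, e^{-n(\alpha d + \beta d^2)}\right]
  = \exp\!\left[-N_0\, e^{-\alpha D \left(1 + d/(\alpha/\beta)\right)}\right],
\qquad D = n d .
```

The double exponential is the source of the nonconvexity the paper addresses: TCP itself is not concave in the dose distribution, which is why suitable increasing transformations are sought.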
Identification of aerodynamic models for maneuvering aircraft
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Hu, C. C.
1992-01-01
A Fourier analysis method was developed to analyze harmonic forced-oscillation data at high angles of attack as functions of the angle of attack and its time rate of change. The resulting aerodynamic responses at different frequencies are used to build up the aerodynamic models involving time integrals of the indicial type. An efficient numerical method was also developed to evaluate these time integrals for arbitrary motions based on a concept of equivalent harmonic motion. The method was verified by first using results from two-dimensional and three-dimensional linear theories. The developed models for C_L, C_D, and C_M based on high-alpha data for a 70 deg delta wing in harmonic motions showed accurate results in reproducing hysteresis. The aerodynamic models are further verified by comparing with test data using ramp-type motions.
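The core of such a Fourier analysis, projecting a measured forced-oscillation response onto its in-phase and out-of-phase components at the forcing frequency, can be sketched as follows. This is a generic first-harmonic extraction, not the authors' full indicial-model identification, and the signal is synthetic:

```python
import numpy as np

def harmonic_components(y, t, omega):
    """First-harmonic cosine (in-phase) and sine (out-of-phase) coefficients of y(t),
    sampled uniformly over an integer number of periods 2*pi/omega."""
    n = t.size
    a = 2.0 / n * np.sum(y * np.cos(omega * t))
    b = 2.0 / n * np.sum(y * np.sin(omega * t))
    return a, b

omega = 2.0
t = np.linspace(0.0, 2.0 * np.pi / omega, 256, endpoint=False)  # exactly one period
y = 3.0 * np.cos(omega * t) + 4.0 * np.sin(omega * t)           # synthetic response
a1, b1 = harmonic_components(y, t, omega)                       # recovers 3 and 4
```

Repeating this at several oscillation frequencies yields the frequency-dependent responses from which the indicial-type models are assembled.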
Extending the ΛCDM model through shear-free anisotropies
NASA Astrophysics Data System (ADS)
Pereira, Thiago S.; Pabon, Davincy T.
2016-07-01
If the spacetime metric has anisotropic spatial curvature, one can still expand the universe as if it were isotropic, provided that the energy-momentum tensor satisfies a certain constraint. This leads to the so-called shear-free (SF) metrics, which have the interesting property of violating the cosmological principle while still preserving the isotropy of the cosmic microwave background (CMB) radiation. In this work, we show that SF cosmologies correspond to an attractor solution in the space of models with anisotropic spatial curvature. Through a rigorous definition of linear perturbation theory in these spacetimes, we show that SF models represent a viable alternative to explain the large-scale evolution of the universe, leading, in particular, to a kinematically equivalent Sachs-Wolfe (SW) effect. Alternatively, we discuss some specific signatures that SF models would imprint on the temperature spectrum of CMB.
Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining
NASA Astrophysics Data System (ADS)
Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing
2018-04-01
Recently, the big data mining method has been applied in dynamic wind farm equivalent modeling. In this paper, its recent development in both domestic and overseas research is reviewed. Firstly, studies of wind speed prediction, equivalence and distribution within the wind farm are summarized. Secondly, two typical approaches used in the big data mining method are introduced. For single wind turbine equivalent modeling, the focus is on how to choose and identify equivalent parameters. For multiple wind turbine equivalent modeling, three aspects are considered: aggregation of wind turbines into clusters, identification of parameters within the same cluster, and equivalence of the collector system. Thirdly, an outlook on the future development of dynamic wind farm equivalent models is discussed.
NASA Astrophysics Data System (ADS)
Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat
2018-07-01
In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of the steady-state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. The stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by the responses of these linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes, which are defined as linear combinations of the modal vectors of the limiting linear systems, is proposed to determine the periodic response of nonlinear systems. In this method, the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This decreases the number of modes that must be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, the computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using the describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The method developed is applied to two different systems: a lumped parameter model and a finite element model. Several case studies are performed, and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method, which utilizes the mode shapes of the underlying linear system.
Asymptotic Stability of Interconnected Passive Non-Linear Systems
NASA Technical Reports Server (NTRS)
Isidori, A.; Joshi, S. M.; Kelkar, A. G.
1999-01-01
This paper addresses the problem of stabilization of a class of internally passive non-linear time-invariant dynamic systems. A class of non-linear marginally strictly passive (MSP) systems is defined, which is less restrictive than input-strictly passive systems. It is shown that the interconnection of a non-linear passive system and a non-linear MSP system is globally asymptotically stable. The result generalizes and weakens the conditions of the passivity theorem, which requires one of the systems to be input-strictly passive. In the case of linear time-invariant systems, it is shown that the MSP property is equivalent to the marginally strictly positive real (MSPR) property, which is much simpler to check.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte-Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
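The modified t-test that the dummy-coded regression (and, by extension, the LMM) reproduces can be sketched directly; the formula below is the standard Crawford-Howell form, with the case score compared against the control mean using an inflated standard error:

```python
import numpy as np
from scipy import stats

def modified_t_test(case_score, controls):
    """Crawford-Howell modified t-test of one case against a small control sample.
    t = (x* - mean_c) / (sd_c * sqrt((n + 1) / n)), df = n - 1."""
    controls = np.asarray(controls, dtype=float)
    n = controls.size
    se = controls.std(ddof=1) * np.sqrt((n + 1) / n)
    t = (case_score - controls.mean()) / se
    p_two_sided = 2.0 * stats.t.sf(abs(t), df=n - 1)
    return t, p_two_sided

# Case score equal to the control mean: t = 0, p = 1
t_stat, p_val = modified_t_test(2.0, [0.0, 1.0, 2.0, 3.0, 4.0])
```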
NASA Astrophysics Data System (ADS)
Lipatnikov, Andrei Nikolaevich; Nishiki, Shinnosuke; Hasegawa, Tatsuya
2015-05-01
The linear relation between the mean rate of product creation and the mean scalar dissipation rate, derived in the seminal paper by K.N.C. Bray ['The interaction between turbulence and combustion', Proceedings of the Combustion Institute, Vol. 17 (1979), pp. 223-233], is the cornerstone for models of premixed turbulent combustion that deal with the dissipation rate in order to close the reaction rate. In the present work, this linear relation is straightforwardly validated by analysing data computed earlier in the 3D Direct Numerical Simulation (DNS) of three statistically stationary, 1D, planar turbulent flames associated with the flamelet regime of premixed combustion. Although the linear relation does not hold at the leading and trailing edges of the mean flame brush, such a result is expected within the framework of Bray's theory. However, the present DNS yields substantially larger (smaller) values of an input parameter c_m (or K_2 = 1/(2c_m - 1)), involved in the studied linear relation, when compared to the commonly used value of c_m = 0.7 (or K_2 = 2.5). To gain further insight into the issue and into the eventual dependence of c_m on mixture composition, the DNS data are combined with the results of numerical simulations of stationary, 1D, planar laminar methane-air flames with complex chemistry, with the results being reported in terms of differently defined combustion progress variables c, i.e. the normalised temperature, density, or mole fraction of CH4, O2, CO2 or H2O. Such a study indicates the dependence of c_m both on the definition of c and on the equivalence ratio. Nevertheless, K_2 and c_m can be estimated by processing the results of simulations of counterpart laminar premixed flames. Similar conclusions were also drawn by setting aside the DNS data and invoking a presumed beta probability density function in order to evaluate c_m for the differently defined c's and various equivalence ratios.
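The presumed-PDF evaluation of cm mentioned at the end can be sketched numerically: with a reaction-rate profile W(c) and a presumed beta PDF P(c) for the progress variable, c_m is commonly taken as the W-weighted mean of c. The toy rate profile below is invented for illustration and is not a real flame solution:

```python
import numpy as np
from scipy.stats import beta as beta_dist

def c_m(w, a, b, n=2000):
    """c_m = int(c * W(c) * P(c) dc) / int(W(c) * P(c) dc),
    with a presumed beta PDF P(c; a, b) for the combustion progress variable c."""
    c = np.linspace(1e-6, 1.0 - 1e-6, n)   # avoid the PDF endpoints
    p = beta_dist.pdf(c, a, b)
    num = np.sum(c * w(c) * p)             # grid spacing cancels in the ratio
    den = np.sum(w(c) * p)
    return num / den

# Toy reaction-rate profile peaked near c ~ 0.75 (illustrative only)
w_toy = lambda c: c**3 * (1.0 - c)
cm = c_m(w_toy, a=2.0, b=2.0)              # weighted toward the high-c side
```

Because W(c) is peaked at high c, the weighted mean exceeds 0.5, mirroring the abstract's point that c_m depends on both the rate profile (hence the definition of c) and the presumed PDF shape.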
Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.
Davidson, Shaun; Pretty, Chris; Pironet, Antoine; Desaive, Thomas; Janssen, Nathalie; Lambermont, Bernard; Morimont, Philippe; Chase, J Geoffrey
2017-01-01
This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd), an important, but difficult to measure physiological parameter. Vd represents a subject and condition dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume) and its end-systolic equivalent (stroke volume vs end-systolic volume), developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0) to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 on average for pre- and post-endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd) for each subject (no more than 7.8% variation). Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment.
Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to allow for estimation of Vd in a clinical environment.
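Under one plausible reading of the extrapolation described above, with ventricular volume regressed on stroke volume so that the intercept at SV = 0 is Vd, the estimate reduces to a linear fit. The numbers below are synthetic, constructed so both curves share the same intercept:

```python
import numpy as np

def dead_space_volume(stroke_vol, volume):
    """Intercept of a volume-vs-stroke-volume regression at SV = 0.
    Assumes ventricular volume is regressed on stroke volume, so the
    intercept is the volume not participating in ejection (Vd)."""
    slope, intercept = np.polyfit(stroke_vol, volume, 1)
    return intercept

sv = np.array([20.0, 30.0, 40.0, 50.0])   # illustrative stroke volumes [ml]
vd_true = 50.0
edv = vd_true + 1.8 * sv                   # synthetic Frank-Starling line
esv = edv - sv                             # its end-systolic equivalent
vd_from_edv = dead_space_volume(sv, edv)
vd_from_esv = dead_space_volume(sv, esv)   # agreement between the two intercepts
```

The agreement check between `vd_from_edv` and `vd_from_esv` mirrors the paper's internal-consistency validation (no more than 7.8% variation per subject).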
van Leeuwen, C M; Oei, A L; Crezee, J; Bel, A; Franken, N A P; Stalpers, L J A; Kok, H P
2018-05-16
Prediction of radiobiological response is a major challenge in radiotherapy. Of several radiobiological models, the linear-quadratic (LQ) model has been best validated by experimental and clinical data. Clinically, the LQ model is mainly used to estimate equivalent radiotherapy schedules (e.g. to calculate the equivalent dose in 2 Gy fractions, EQD2), but increasingly also to predict tumour control probability (TCP) and normal tissue complication probability (NTCP) using logistic models. The selection of accurate LQ parameters α, β and α/β is pivotal for a reliable estimate of radiation response. The aim of this review is to provide an overview of published values for the LQ parameters of human tumours as a guideline for radiation oncologists and radiation researchers to select appropriate radiobiological parameter values for LQ modelling in clinical radiotherapy. We performed a systematic literature search and found sixty-four clinical studies reporting α, β and α/β for tumours. Tumour site, histology, stage, number of patients, type of LQ model, radiation type, TCP model, clinical endpoint and radiobiological parameter estimates were extracted. Next, we stratified by tumour site and by tumour histology. Study heterogeneity was expressed by the I² statistic, i.e. the percentage of variance in reported values not explained by chance. A large heterogeneity in LQ parameters was found within and between studies (I² > 75%). For the same tumour site, differences in histology partially explain differences in the LQ parameters: epithelial tumours have higher α/β values than adenocarcinomas. For tumour sites with different histologies, such as in oesophageal cancer, the α/β estimates correlate well with histology. However, many other factors contribute to the study heterogeneity of LQ parameters, e.g. tumour stage, type of LQ model, TCP model and clinical endpoint (i.e. survival, tumour control and biochemical control). 
The value of the LQ parameters for tumours as published in clinical radiotherapy studies depends on many clinical and methodological factors. Therefore, for clinical use of the LQ model, LQ parameters for tumours should be selected carefully, based on tumour site, histology and the applied LQ model. To account for uncertainties in LQ parameter estimates, exploring a range of values is recommended.
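The EQD2 conversion referred to above follows directly from the LQ model; a minimal sketch, with an illustrative α/β = 10 Gy:

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions under the LQ model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 20 fractions of 3 Gy with alpha/beta = 10 Gy (illustrative tumour value)
dose_2gy_equivalent = eqd2(60.0, 3.0, 10.0)
```

Note that the result is sensitive to the chosen α/β, which is exactly why the parameter heterogeneity surveyed in this review matters clinically.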
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tseng, VFG; Xie, HK
2014-07-01
This paper presents the fabrication and characterization of a high-density multilayer stacked metal-insulator-metal (MIM) capacitor based on a novel process of depositing the MIM multilayer on pillars followed by polishing and selective etching steps to form a stacked capacitor with merely three photolithography steps. In this paper, the pillars were made of glass to prevent substrate loss, whereas an oxide-nitride-oxide dielectric was employed for lower leakage, better voltage/frequency linearity, and better stress compensation. MIM capacitors with six dielectric layers were successfully fabricated, yielding a capacitance density of 3.8 fF/μm², a maximum capacitance of 2.47 nF, and linear and quadratic voltage coefficients of capacitance below 21.2 ppm/V and 2.31 ppm/V². The impedance was measured from 40 Hz to 3 GHz, and characterized by an analytically derived equivalent circuit model to verify the radio frequency applicability. The multilayer stacking-induced plate resistance mismatch and its effect on the equivalent series resistance (ESR) and effective capacitance was also investigated, which can be counteracted by a corrected metal thickness design. A low ESR of 800 mΩ was achieved, whereas the self-resonance frequency was >760 MHz, successfully demonstrating the feasibility of this method to scale up capacitance densities for high-quality-factor, high-frequency, and large-value MIM capacitors.
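The reported self-resonance behaviour can be illustrated with a generic series-RLC capacitor equivalent (not the paper's analytically derived circuit); the inductance value below is chosen purely to place the SRF near the reported >760 MHz:

```python
import numpy as np

def series_rlc_impedance(f, r, l, c):
    """Impedance magnitude of a generic series-RLC capacitor equivalent circuit."""
    w = 2.0 * np.pi * f
    return np.abs(r + 1j * (w * l - 1.0 / (w * c)))

def self_resonance(l, c):
    """SRF where the inductive and capacitive reactances cancel."""
    return 1.0 / (2.0 * np.pi * np.sqrt(l * c))

# Illustrative values: C from the paper, ESR as reported, ESL invented for the sketch
c_mim, esr, esl = 2.47e-9, 0.8, 1.78e-11
f_srf = self_resonance(esl, c_mim)
```

At the SRF the reactances cancel and the impedance magnitude bottoms out at the ESR, which is why a low ESR and a high SRF together indicate a high-quality RF capacitor.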
Schmidt, Kerstin; Schmidtke, Jörg; Mast, Yvonne; Waldvogel, Eva; Wohlleben, Wolfgang; Klemke, Friederike; Lockau, Wolfgang; Hausmann, Tina; Hühns, Maja; Broer, Inge
2017-08-01
Potatoes are a promising system for industrial production of the biopolymer cyanophycin as a second compound in addition to starch. To assess the efficiency in the field, we analysed the stability of the system, specifically its sensitivity to environmental factors. Field and greenhouse trials with transgenic potatoes (two independent events) were carried out for three years. The influence of environmental factors was measured and target compounds in the transgenic plants (cyanophycin, amino acids) were analysed for differences to control plants. Furthermore, non-target parameters (starch content, number, weight and size of tubers) were analysed for equivalence with control plants. The huge amount of data received was handled using modern statistical approaches to model the correlation between influencing environmental factors (year of cultivation, nitrogen fertilization, origin of plants, greenhouse or field cultivation) and key components (starch, amino acids, cyanophycin) and agronomic characteristics. General linear models were used for modelling, and standard effect sizes were applied to compare conventional and genetically modified plants. Altogether, the field trials prove that significant cyanophycin production is possible without reduction of starch content. Non-target compound composition seems to be equivalent under varying environmental conditions. Additionally, a quick test to measure cyanophycin content gives similar results compared to the extensive enzymatic test. This work facilitates the commercial cultivation of cyanophycin potatoes.
NASA Astrophysics Data System (ADS)
Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.
2018-03-01
Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.
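The MLR benchmark that the ANN is compared against can be sketched as a plain least-squares combination of the gridded products (the ANN itself is omitted here); all data below are synthetic:

```python
import numpy as np

def fit_mlr(products, swe_obs):
    """Least-squares weights (plus a bias term) combining gridded SWE products."""
    X = np.column_stack([products, np.ones(products.shape[0])])
    coef, *_ = np.linalg.lstsq(X, swe_obs, rcond=None)
    return coef

def predict_mlr(products, coef):
    X = np.column_stack([products, np.ones(products.shape[0])])
    return X @ coef

rng = np.random.default_rng(2)
prods = rng.uniform(50.0, 500.0, size=(120, 3))   # three SWE products [mm], synthetic
obs = 0.5 * prods[:, 0] + 0.3 * prods[:, 1] + 0.2 * prods[:, 2]
coef = fit_mlr(prods, obs)
mae = np.mean(np.abs(predict_mlr(prods, coef) - obs))
```

The study's finding is that the ANN (ANN3) beats exactly this kind of linear blend in most regions, i.e. the product-observation relationship is not purely linear.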
Modelling, analyses and design of switching converters
NASA Technical Reports Server (NTRS)
Cuk, S. M.; Middlebrook, R. D.
1978-01-01
A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
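As a minimal instance of the state-space averaging method, consider an ideal buck converter in continuous conduction: the switch only toggles the input term, so the averaged model keeps the fixed A matrix with the input vector weighted by the duty ratio d, and the steady state recovers the familiar Vout = d·Vg. Component values are illustrative:

```python
import numpy as np

def buck_average_steady_state(d, vg, L=100e-6, C=100e-6, R=10.0):
    """Steady state of the state-space-averaged ideal buck converter.
    States x = [iL, vC]; for the buck, A1 = A2 = A and B_avg = d * B_on."""
    A = np.array([[0.0,     -1.0 / L],
                  [1.0 / C, -1.0 / (R * C)]])
    b_avg = np.array([d / L, 0.0]) * vg     # duty-ratio-weighted input term
    return np.linalg.solve(A, -b_avg)       # solve A x + B_avg = 0

i_l, v_out = buck_average_steady_state(d=0.4, vg=12.0)   # expect vC = 0.4 * 12 = 4.8 V
```

For the boost and buck-boost stages the switch also toggles A, so the averaged matrix becomes d·A1 + (1-d)·A2 and the dc gains differ, which is precisely what the canonical circuit model in the paper tabulates.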
Attempting to bridge the gap between laboratory and seismic estimates of fracture energy
McGarr, A.; Fletcher, Joe B.; Beeler, N.M.
2004-01-01
To investigate the behavior of the fracture energy associated with expanding the rupture zone of an earthquake, we have used the results of a large-scale, biaxial stick-slip friction experiment to set the parameters of an equivalent dynamic rupture model. This model is determined by matching the fault slip, the static stress drop and the apparent stress. After confirming that the fracture energy associated with this model earthquake is in reasonable agreement with corresponding laboratory values, we can use it to determine fracture energies for earthquakes as functions of stress drop, rupture velocity and fault slip. If we take account of the state of stress at seismogenic depths, the model extrapolation to larger fault slips yields fracture energies that agree with independent estimates by others based on dynamic rupture models for large earthquakes. For fixed stress drop and rupture speed, the fracture energy scales linearly with fault slip.
NASA Astrophysics Data System (ADS)
Onn, Shing-Chung; Chiang, Hau-Jei; Hwang, Hang-Che; Wei, Jen-Ko; Cherng, Dao-Lien
1993-06-01
The dynamic behavior of a 2D turbulent mixing and combustion process has been studied numerically in the main combustion chamber of a solid-propellant ducted rocket (SDR). The mathematical model is based on the Favre-averaged conservation equations developed by Cherng (1990). Combustion efficiency, rather than the specific impulse used in earlier studies, is applied successfully to optimize the effects of two parameters through a multiple linear regression model. Specifically, the fuel-air equivalence ratio of the operating conditions and the air inlet location of configurations for the SDR combustor have been studied. For an equivalence ratio near the stoichiometric condition, specific impulse and combustion efficiency show a similar trend in characterizing the reacting flow field in the combustor. For overall fuel-lean operating conditions, combustion efficiency is much more sensitive to changes in air inlet location than specific impulse is, suggesting that combustion efficiency is a better property than specific impulse for representing conditions approaching the flammability limits. In addition, the air inlet for maximum efficiency generally appears to be located downstream of that for highest specific impulse. The optimal case for the effects of the two parameters occurs at a fuel-lean condition, which shows a larger recirculation zone in front, deeper penetration of ram air into the combustor, and a much larger high-temperature zone near the centerline of the combustor exit than the optimal case for an overall equivalence ratio close to stoichiometric.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rotsch, David A.; Brossard, Tom; Roussin, Ethan
Molybdenum-99, the mother of Tc-99m, can be produced from fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex to radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7-31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU Modified-Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at a dose equivalent of 6.2 kCi of Mo-99, the sample lost ~20% of its Mo-99. This 20% loss at such a low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.
Fabrication and kinetics study of nano-Al/NiO thermite film by electrophoretic deposition.
Zhang, Daixiong; Li, Xueming
2015-05-21
Nano-Al/NiO thermites were successfully prepared as films by electrophoretic deposition (EPD). Addressing the key issue for this EPD, a mixed ethanol-acetylacetone solvent (1:1 by volume) containing 0.00025 M nitric acid proved to be a suitable dispersion system. The kinetics of electrophoretic deposition for both nano-Al and nano-NiO were investigated; a linear relation between deposition weight and deposition time at short times and a parabolic relation at longer times were observed in both EPDs. The critical transition times between linear and parabolic deposition kinetics for nano-Al and nano-NiO were 20 and 10 min, respectively. Theoretical calculation of the deposition kinetics revealed that the equivalence ratio of the nano-Al/NiO thermite film is affected by the electrophoretic deposition behavior of nano-Al and nano-NiO. The equivalence ratio remained steady while linear deposition kinetics dominated for both nano-Al and nano-NiO, but changed with deposition time once the deposition kinetics for nano-NiO became parabolic after 10 min. This rule is suggested to be applicable to the EPD of other bicomposites. We also studied the thermodynamic properties of the electrophoretically deposited nano-Al/NiO thermite film as well as its combustion performance.
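The reported linear-to-parabolic transition and its effect on the film equivalence ratio can be sketched with piecewise deposition laws; the rate constants are invented, and w ∝ √t is assumed as the parabolic form, made continuous at the transition time:

```python
import numpy as np

def deposit_weight(t, k, t_c):
    """Deposited weight: linear (w = k*t) up to the transition time t_c,
    parabolic (w ~ sqrt(t), continuous at t_c) afterwards."""
    t = np.asarray(t, dtype=float)
    w_lin = k * t
    w_par = k * t_c * np.sqrt(t / t_c)      # equals k*t_c at t = t_c
    return np.where(t <= t_c, w_lin, w_par)

t = np.array([5.0, 10.0, 20.0])             # minutes
w_al = deposit_weight(t, k=1.0, t_c=20.0)   # Al: linear up to ~20 min
w_nio = deposit_weight(t, k=0.5, t_c=10.0)  # NiO: parabolic after ~10 min
phi = w_al / w_nio                          # proxy for the film equivalence ratio
```

While both deposits are in their linear regimes the ratio is constant; once the NiO kinetics turn parabolic (after 10 min) the ratio drifts with time, matching the trend described in the abstract.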
NASA Technical Reports Server (NTRS)
Bainum, P. M.; James, P. K.
1977-01-01
The dynamics of a spinning symmetrical spacecraft system during the deployment (or retraction) of flexible boom-type appendages were investigated. The effect of flexibility during boom deployment is treated by modelling the deployable members as compound spherical pendula of varying length (according to a control law). The orientation of the flexible booms with respect to the hub is described by a sequence of two Euler angles. The boom members possess a flexural stiffness, which can be related to an assumed effective linear restoring spring constant, and structural damping, which affects the entire system. Linearized equations of motion for this system, when the boom length is constant, involve periodic coefficients with the frequency of the hub spin. A bounded transformation is found which converts this system into a kinematically equivalent one involving only constant coefficients.
Should Pruning be a Pre-Processor of any Linear System?
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
There are many real-world problems whose mathematical models turn out to be linear systems Ax = b, where A is an m × n matrix. Each equation of the linear system is a piece of information. A piece of information in a physical problem, such as "4 mangoes, 6 bananas, and 5 oranges cost $10", is mathematically modeled as 4x_1 + 6x_2 + 5x_3 = 10, where x_1, x_2, x_3 are the cost of one mango, one banana, and one orange, respectively. All the information put together in a specified context constitutes the physical problem and need not be all distinct. Some of it could be redundant, which cannot be readily identified by inspection. The resulting mathematical model will thus have equations corresponding to this redundant information; these are linearly dependent and thus superfluous. Consequently, once identified, these equations should be pruned in the process of solving the system. The benefits are (i) less computation and hence less error, and consequently a better quality of solution, and (ii) reduced storage requirements. In the literature, the pruning concept is not in vogue so far, although it is most desirable. In a numerical linear system, the system could be slightly inconsistent or inconsistent to a varying degree. If the system is too inconsistent, then we should fall back on the physical problem (PP), check the correctness of the PP derived from the material universe, modify it if necessary, and then check the corresponding mathematical model (MM) and correct it. In nature/the material universe, inconsistency is completely nonexistent. If the MM becomes inconsistent, it could be due to error introduced by the concerned measuring device and/or due to assumptions made on the PP to obtain an MM which is relatively easily solvable, or simply due to human error. No measuring device can usually measure a quantity with an accuracy greater than 0.005% or, equivalently, with a relative error less than 0.005%. 
Hence measurement error is unavoidable in a numerical linear system when the quantities are continuous (or even discrete, with extremely large numbers). Assumptions, though not desirable, are usually made when we find the problem sufficiently difficult to solve within the available means/tools/resources, and hence distort the PP and the corresponding MM. The error thus introduced in the system could (though not always necessarily) make the system somewhat inconsistent. If the inconsistency (contradiction) is too great, then one should definitely not proceed to solve the system in terms of getting a least-squares solution, a minimum-norm solution, or the minimum-norm least-squares solution. All these solutions will invariably be of no real-world use. If, on the other hand, the inconsistency is reasonably low, i.e. the system is near-consistent or, equivalently, has near-linearly-dependent rows, then the foregoing solutions are useful. Pruning in such a near-consistent system should be performed based on the desired accuracy and on the definition of near-linear dependence. In this article, we discuss pruning over various kinds of linear systems and strongly suggest its use as a pre-processor or as a part of an algorithm. Ideally pruning should (i) be a part of the solution process (algorithm) of the system, (ii) reduce both the computational error and the complexity of the process, and (iii) take into account the numerical zero defined in the context. These are precisely what we achieve through our proposed O(mn²) algorithm, presented in Matlab, which uses a subprogram for solving a single linear equation and has the pruning embedded in it.
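A numpy sketch of the pruning idea (not the authors' O(mn²) Matlab algorithm): rows of the augmented matrix [A | b] are kept only if they raise the numerical rank, with the rank tolerance playing the role of the contextual "numerical zero". The example reuses the mango-banana-orange system with a redundant third equation:

```python
import numpy as np

def prune_system(A, b, tol=1e-8):
    """Drop rows of [A | b] that are (near-)linearly dependent on the rows kept so far."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    aug = np.column_stack([A, b])
    kept = []
    for i in range(aug.shape[0]):
        # keep row i only if it increases the rank of the kept set
        if np.linalg.matrix_rank(aug[kept + [i]], tol=tol) > len(kept):
            kept.append(i)
    return A[kept], b[kept], kept

A = [[4.0, 6.0, 5.0],
     [1.0, 1.0, 1.0],
     [5.0, 7.0, 6.0]]       # row 3 = row 1 + row 2: redundant information
b = [10.0, 3.0, 13.0]       # consistent right-hand side (10 + 3 = 13)
A_p, b_p, kept = prune_system(A, b)
```

Testing the augmented rows rather than A alone also flags inconsistency: if the third right-hand side were 14 instead of 13, the row would raise the rank and survive, signalling a contradiction rather than mere redundancy.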
Should Pruning be a Pre-Processor of any Linear System?
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Ramakrishnan, Suja; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2011-01-01
There are many real-world problems whose mathematical models turn out to be linear systems Ax = b, where A is an m x n matrix. Each equation of the linear system encodes one piece of information. A piece of information in a physical problem, such as "4 mangoes, 6 bananas, and 5 oranges cost $10," is mathematically modeled as the equation 4x(sub 1) + 6x(sub 2) + 5x(sub 3) = 10, where x(sub 1), x(sub 2), and x(sub 3) are the costs of one mango, one banana, and one orange, respectively. All the information put together in a specified context constitutes the physical problem, and the pieces need not all be distinct. Some could be redundant, and such redundancy cannot be readily identified by inspection. The resulting mathematical model will thus contain equations corresponding to this redundant information; these equations are linearly dependent and hence superfluous. Consequently, once identified, they are better pruned in the process of solving the system. The benefits are (i) less computation, hence less error and consequently a better-quality solution, and (ii) reduced storage requirements. In the literature, the pruning concept is not yet in vogue, although it is most desirable. It is assumed that at least one piece of information, i.e. one equation, is known to be correct; this will be our first equation. In a numerical linear system, the system could be slightly inconsistent or inconsistent to varying degrees. If the system is too inconsistent, then we should fall back on the physical problem (PP), check the correctness of the PP derived from the material universe, modify it if necessary, and then check the corresponding mathematical model (MM) and correct it. In nature/the material universe, inconsistency is completely nonexistent. If the MM becomes inconsistent, it could be due to error introduced by the measuring device concerned, due to assumptions made on the PP to obtain an MM that is relatively easily solvable, or simply due to human error.
No measuring device can usually measure a quantity with an accuracy greater than 0.005% or, equivalently, with a relative error less than 0.005%. Hence measurement error is unavoidable in a numerical linear system when the quantities are continuous (or even discrete with an extremely large number of values). Assumptions, though not desirable, are usually made when we find the problem too difficult to solve within the available means/tools/resources; they distort the PP and the corresponding MM. The error thus introduced in the system could (though not always) make the system somewhat inconsistent. If the inconsistency (contradiction) is too great, then one should definitely not proceed to solve the system in terms of a least-squares solution or the minimum-norm least-squares solution; all such solutions will invariably be of no real-world use. If, on the other hand, the inconsistency is reasonably low, i.e. the system is near-consistent or, equivalently, has near-linearly-dependent rows, then the foregoing solutions are useful. Pruning in such a near-consistent system should be performed based on the desired accuracy and on the definition of near-linear dependence. In this article, we discuss pruning over various kinds of linear systems and strongly suggest its use as a pre-processor or as part of an algorithm. Ideally, pruning should (i) be part of the solution process (algorithm) of the system, (ii) reduce both the computational error and the complexity of the process, and (iii) take into account the numerical zero defined in the context. These are precisely what we achieve through our proposed O(mn²) algorithm, presented in MATLAB, which uses a subprogram for solving a single linear equation and has the pruning embedded in it.
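The pruning idea described above can be sketched numerically. The following is a minimal illustration only, not the authors' O(mn²) MATLAB algorithm: a row of the augmented matrix [A | b] is kept when its component orthogonal to the span of previously kept rows exceeds a tolerance playing the role of the contextual "numerical zero"; the mango/banana/orange values are the toy data from the abstract plus a hypothetical redundant row.

```python
import numpy as np

def prune_rows(A, b, tol=1e-8):
    """Return indices of rows of [A | b] that carry new information.

    Sketch of the pruning concept: a row is kept when its residual after
    projecting out previously kept rows exceeds `tol` (the numerical zero).
    """
    kept, basis = [], []
    for i, row in enumerate(np.hstack([A, b.reshape(-1, 1)])):
        r = row.astype(float)
        for q in basis:                      # project out the kept directions
            r = r - (q @ r) * q
        if np.linalg.norm(r) > tol:          # genuinely new information
            basis.append(r / np.linalg.norm(r))
            kept.append(i)
    return kept

A = np.array([[4., 6., 5.],
              [8., 12., 10.],    # 2x row 0: redundant information
              [1., 0., 1.]])
b = np.array([10., 20., 3.])
print(prune_rows(A, b))          # [0, 2] -- the redundant row is pruned
```

Because the right-hand side b is included in the test, a near-duplicate row with a contradictory b (a near-consistent system) would still be kept, consistent with the discussion of near-linear dependence above.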
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Zhao, Hongjie; Yu, Bing-Kun
1989-01-01
The optical depolarizing properties of simulated stratospheric aerosols were studied in laboratory laser (0.633 micrometer) backscattering experiments for application to polarization lidar observations. Clouds composed of sulfuric acid solution droplets, some treated with ammonia gas, were observed during evaporation. The results indicate that the formation of minute ammonium sulfate particles from the evaporation of acid droplets produces linear depolarization ratios of beta equivalent to 0.02, but beta equivalent to 0.10 to 0.15 are generated from aged acid cloud aerosols and acid droplet crystallization effects following the introduction of ammonia gas into the chamber. It is concluded that partially crystallized sulfuric acid droplets are a likely candidate for explaining the lidar beta equivalent to 0.10 values that have been observed in the lower stratosphere in the absence of the relatively strong backscattering from homogeneous sulfuric acid droplet (beta equivalent to 0) or ice crystal (beta equivalent to 0.5) clouds.
NASA Technical Reports Server (NTRS)
Sassen, Kenneth; Zhao, Hongjie; Yu, Bing-Kun
1988-01-01
The optical depolarizing properties of simulated stratospheric aerosols were studied in laboratory laser (0.633 micrometer) backscattering experiments for application to polarization lidar observations. Clouds composed of sulfuric acid solution droplets, some treated with ammonia gas, were observed during evaporation. The results indicate that the formation of minute ammonium sulfate particles from the evaporation of acid droplets produces linear depolarization ratios of beta equivalent to 0.02, but beta equivalent to 0.10 to 0.15 are generated from aged acid cloud aerosols and acid droplet crystallization effects following the introduction of ammonia gas into the chamber. It is concluded that partially crystallized sulfuric acid droplets are a likely candidate for explaining the lidar beta equivalent to 0.10 values that have been observed in the lower stratosphere in the absence of the relatively strong backscattering from homogeneous sulfuric acid droplet (beta equivalent to 0) or ice crystal (beta equivalent to 0.5) clouds.
Monte Carlo study of neutron-ambient dose equivalent to patient in treatment room.
Mohammadi, A; Afarideh, H; Abbasi Davani, F; Ghergherehchi, M; Arbabi, A
2016-12-01
This paper presents an analytical method for calculating the neutron ambient dose equivalent H*(10) to the patient, considering the different concrete types used in the surrounding walls of the treatment room. The work is based on a detailed simulation of the Varian 2300C/D linear accelerator head operated at 18 MV, with a silver activation counter as the neutron detector, using the Monte Carlo MCNPX 2.6 code both with and without the treatment room walls. The results show that, compared to the neutrons that leak from the LINAC, the scattered and thermal neutrons are the major contributors to the out-of-field neutron dose. The scattering factors for the limonite-steel, magnetite-steel, and ordinary concretes were calculated as 0.91±0.09, 1.08±0.10, and 0.371±0.01, respectively, while the corresponding thermal factors are 34.22±3.84, 23.44±1.62, and 52.28±1.99, respectively (both the scattering and thermal factors are for the isocenter region). Moreover, when the treatment room is composed of magnetite-steel or limonite-steel concrete, the neutron doses to the patient are, respectively, 1.79 and 1.62 times greater than with an ordinary concrete composition. The results also confirm that the scattering and thermal factors do not depend on the details of the chosen linear accelerator head model. It is anticipated that the results of the present work will be of great interest to the manufacturers of medical linear accelerators. Copyright © 2016. Published by Elsevier Ltd.
NASA Technical Reports Server (NTRS)
Graf, Wiley E.
1991-01-01
A mixed formulation is chosen to overcome deficiencies of the standard displacement-based shell model. Element development is traced from the incremental variational principle on through to the final set of equilibrium equations. Particular attention is paid to developing specific guidelines for selecting the optimal set of strain parameters. A discussion of constraint index concepts and their predictive capability related to locking is included. Performance characteristics of the elements are assessed in a wide variety of linear and nonlinear plate/shell problems. Despite limiting the study to geometric nonlinear analysis, a substantial amount of additional insight concerning the finite element modeling of thin plate/shell structures is provided. For example, in nonlinear analysis, given the same mesh and load step size, mixed elements converge in fewer iterations than equivalent displacement-based models. It is also demonstrated that, in mixed formulations, lower order elements are preferred. Additionally, meshes used to obtain accurate linear solutions do not necessarily converge to the correct nonlinear solution. Finally, a new form of locking was identified associated with employing elements designed for biaxial bending in uniaxial bending applications.
A rational model of function learning.
Lucas, Christopher G; Griffiths, Thomas L; Williams, Joseph J; Kalish, Michael L
2015-10-01
Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: one, that people are estimating explicit functions; or two, that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
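The equivalence the analysis rests on can be checked numerically. The sketch below (with arbitrary toy data and hyperparameters) shows that one-dimensional Bayesian linear regression with weight prior N(0, alpha) and noise variance sigma² yields the same predictive mean as a Gaussian process with the linear kernel k(x, x') = alpha·x·x', i.e. the "rule" view and the "similarity" view agree.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)               # training inputs (toy data)
y = 2.0 * x + 0.1 * rng.normal(size=8)
xs = np.array([0.5, -1.2])           # test inputs
alpha, sigma2 = 1.0, 0.01            # prior variance, noise variance

# Bayesian linear regression: posterior mean of the weight, then prediction
w_var = 1.0 / (x @ x / sigma2 + 1.0 / alpha)
w_mean = w_var * (x @ y) / sigma2
mu_blr = w_mean * xs

# Gaussian process regression with the linear kernel k(x, x') = alpha*x*x'
K = alpha * np.outer(x, x)
Ks = alpha * np.outer(xs, x)
mu_gp = Ks @ np.linalg.solve(K + sigma2 * np.eye(len(x)), y)

print(np.allclose(mu_blr, mu_gp))    # True: the two views give one solution
```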
Equivalent model and power flow model for electric railway traction network
NASA Astrophysics Data System (ADS)
Wang, Feng
2018-05-01
An equivalent model of the Cable Traction Network (CTN), considering the distributed capacitance effect of the cable system, is proposed. The model comes in two forms: a 110 kV-side model and a 27.5 kV-side model. The 110 kV-side equivalent model can be used to calculate the power supply capacity of the CTN, while the 27.5 kV-side equivalent model can be used to solve for the catenary voltage. Based on the equivalent simplified model of the CTN, the power flow model of the CTN, which involves the reactive power compensation coefficient and the interaction of voltage and current, is derived.
Unifying dynamical and structural stability of equilibria
NASA Astrophysics Data System (ADS)
Arnoldi, Jean-François; Haegeman, Bart
2016-09-01
We exhibit a fundamental relationship between measures of dynamical and structural stability of linear dynamical systems, e.g. linearized models in the vicinity of equilibria. We show that dynamical stability, quantified via the response to external perturbations (i.e. perturbation of dynamical variables), coincides with the minimal internal perturbation (i.e. perturbations of interactions between variables) able to render the system unstable. First, by reformulating a result of control theory, we explain that harmonic external perturbations reflect the spectral sensitivity of the Jacobian matrix at the equilibrium, with respect to constant changes of its coefficients. However, for this equivalence to hold, imaginary changes of the Jacobian's coefficients have to be allowed. The connection with dynamical stability is thus lost for real dynamical systems. We show that this issue can be avoided, thus recovering the fundamental link between dynamical and structural stability, by considering stochastic noise as external and internal perturbations. More precisely, we demonstrate that a linear system's response to white-noise perturbations directly reflects the intensity of internal white-noise disturbance that it can accommodate before becoming stochastically unstable.
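The white-noise response of a stable linear system can be computed from a Lyapunov equation. The sketch below (toy 2x2 Jacobians, not the paper's full duality result) shows that the stationary variance of dx = A x dt + dW, obtained by solving A P + P Aᵀ + I = 0, blows up as the equilibrium approaches instability, which is the dynamical-stability measure the abstract refers to.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def whitenoise_response(A):
    """Stationary variance (trace of covariance) of dx = A x dt + dW.

    Solves the Lyapunov equation A P + P A^T = -I for a stable A; a larger
    trace means a larger response to white-noise external perturbation.
    """
    P = solve_continuous_lyapunov(A, -np.eye(A.shape[0]))
    return np.trace(P)

A_far  = np.array([[-2.0, 1.0], [0.0, -2.0]])   # well inside stability
A_near = np.array([[-0.1, 1.0], [0.0, -0.1]])   # close to instability
print(whitenoise_response(A_far) < whitenoise_response(A_near))  # True
```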
Jendza, J A; Dilger, R N; Sands, J S; Adeola, O
2006-12-01
Two studies were conducted to determine the efficacy of an Escherichia coli-derived phytase (ECP) and its equivalency relative to inorganic phosphorus (iP) from monosodium phosphate (MSP). In Exp. 1, one thousand two hundred 1-d-old male broilers were used in a 42-d trial to assess the effect of ECP and iP supplementation on growth performance and nutrient digestibility. Dietary treatments were based on corn-soybean meal basal diets (BD) containing 239 and 221 g of CP, 8.2 and 6.6 g of Ca, and 2.4 and 1.5 g of nonphytate P (nPP) per kg for the starter and grower phases, respectively. Treatments consisted of the BD; the BD + 0.6, 1.2, or 1.8 g of iP from MSP per kg; and the BD + 250, 500, 750, or 1,000 phytase units (FTU) of ECP per kg. Increasing levels of MSP improved gain, gain:feed, and tibia ash (linear, P < 0.01). Increasing levels of ECP improved gain, gain:feed, tibia ash (linear, P < 0.01), apparent ileal digestibility of P, N, Arg, His, Phe, and Trp at d 21 (linear, P < 0.05), and apparent retention of P at d 21 (linear, P < 0.05). Increasing levels of ECP decreased apparent retention of energy (linear, P < 0.01). Five hundred FTU of ECP per kg was determined to be equivalent to the addition of 0.72, 0.78, and 1.19 g of iP from MSP per kg in broiler diets based on gain, feed intake, and bone ash, respectively. In Exp. 2, forty-eight 10-kg pigs were used in a 28-d trial to assess the effect of ECP and iP supplementation on growth performance and nutrient digestibility. Dietary treatments consisted of a positive control containing 6.1 and 3.5 g of Ca and nPP, respectively, per kg; a negative control (NC) containing 4.8 and 1.7 g of Ca and nPP, respectively, per kg; the NC diet plus 0.4, 0.8, or 1.2 g of iP from MSP per kg; and the NC diet plus 500, 750, or 1,000 FTU of ECP per kg. Daily gain improved (linear, P < 0.05) with ECP addition, as did apparent digestibility of Ca and P (linear, P < 0.01). 
Five hundred FTU of ECP per kg was determined to be equivalent to the addition of 0.49 and 1.00 g of iP from MSP per kg in starter pig diets, based on ADG and bone ash, respectively.
Oracle estimation of parametric models under boundary constraints.
Wong, Kin Yau; Goldberg, Yair; Fine, Jason P
2016-12-01
In many classical estimation problems, the parameter space has a boundary. In most cases, the standard asymptotic properties of the estimator do not hold when some of the underlying true parameters lie on the boundary. However, without knowledge of the true parameter values, confidence intervals constructed assuming that the parameters lie in the interior are generally over-conservative. A penalized estimation method is proposed in this article to address this issue. An adaptive lasso procedure is employed to shrink the parameters to the boundary, yielding oracle inference that adapts to whether or not the true parameters are on the boundary. When the true parameters are on the boundary, the inference is equivalent to that which would be achieved with a priori knowledge of the boundary, while if the converse is true, the inference is equivalent to that which is obtained in the interior of the parameter space. The method is demonstrated under two practical scenarios, namely the frailty survival model and linear regression with order-restricted parameters. Simulation studies and real data analyses show that the method performs well with realistic sample sizes and exhibits certain advantages over standard methods. © 2016, The International Biometric Society.
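The shrink-to-boundary mechanism can be illustrated in one dimension. The following is a hypothetical sketch, not the paper's procedure: for a parameter constrained to [0, ∞), an adaptive-lasso penalty with weight 1/|initial estimate| soft-thresholds the unpenalized estimate, landing exactly on the boundary when the truth is 0 while leaving clearly interior estimates nearly untouched (the values 0.05, 2.0, and the tuning constant 0.01 are arbitrary).

```python
def adaptive_shrink(theta_hat, lam):
    """One-sided adaptive-lasso soft-threshold onto the boundary at 0.

    Sketch only: the adaptive weight 1/|theta_hat| makes the effective
    penalty large for small initial estimates (shrunk exactly to 0) and
    negligible for large ones (essentially unshrunk).
    """
    w = 1.0 / abs(theta_hat)                  # adaptive weight
    return max(theta_hat - lam * w, 0.0)      # respect the boundary at 0

near_boundary = adaptive_shrink(0.05, 0.01)   # noisy estimate of a true 0
interior      = adaptive_shrink(2.00, 0.01)   # clearly interior parameter
print(near_boundary, round(interior, 3))      # 0.0 1.995
```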
Design of HIFU Transducers for Generating Specified Nonlinear Ultrasound Fields.
Rosnitskiy, Pavel B; Yuldashev, Petr V; Sapozhnikov, Oleg A; Maxwell, Adam D; Kreider, Wayne; Bailey, Michael R; Khokhlova, Vera A
2017-02-01
Various clinical applications of high-intensity focused ultrasound have different requirements for the pressure levels and degree of nonlinear waveform distortion at the focus. The goal of this paper is to determine transducer design parameters that produce either a specified shock amplitude in the focal waveform or specified peak pressures while still maintaining quasi-linear conditions at the focus. Multiparametric nonlinear modeling based on the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation with an equivalent source boundary condition was employed. Peak pressures, shock amplitudes at the focus, and corresponding source outputs were determined for different transducer geometries and levels of nonlinear distortion. The results are presented in terms of the parameters of an equivalent single-element spherically shaped transducer. The accuracy of the method and its applicability to cases of strongly focused transducers were validated by comparing the KZK modeling data with measurements and nonlinear full diffraction simulations for a single-element source and arrays with 7 and 256 elements. The results provide look-up data for evaluating nonlinear distortions at the focus of existing therapeutic systems as well as for guiding the design of new transducers that generate specified nonlinear fields.
Theoretical aspects of the equivalence principle
NASA Astrophysics Data System (ADS)
Damour, Thibault
2012-09-01
We review several theoretical aspects of the equivalence principle (EP). We emphasize the unsatisfactory fact that the EP maintains the absolute character of the coupling constants of physics, while general relativity and its generalizations (Kaluza-Klein, …, string theory) suggest that all absolute structures should be replaced by dynamical entities. We discuss the EP-violation phenomenology of dilaton-like models, which is likely to be dominated by the linear superposition of two effects: a signal proportional to the nuclear Coulomb energy, related to the variation of the fine-structure constant, and a signal proportional to the surface nuclear binding energy, related to the variation of the light quark masses. We recall various theoretical arguments (including a recently proposed anthropic argument) suggesting that the EP be violated at a small, but not unmeasurably small level. This motivates the need for improved tests of the EP. These tests are probing new territories in physics that are related to deep, and mysterious, issues in fundamental physics.
Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems
NASA Technical Reports Server (NTRS)
Murthy, V. R.
1985-01-01
The bearingless rotorcraft offers reduced weight, less complexity and superior flying qualities. Almost all current industrial structural dynamics programs for conventional rotors, whose blades have a single load path, employ the transfer matrix method to determine natural vibration characteristics, because this method is ideally suited for one-dimensional chain-like structures. This method is extended here to multiple load path rotor blades without resorting to an equivalent single load path approximation. Unlike for conventional blades, it is necessary to introduce the axial degree of freedom into the solution process to account for the differential axial displacements in the different load paths. With the present extension, the current rotor dynamic programs can be modified with relative ease to account for the multiple load paths without resorting to the equivalent single load path modeling. The results obtained by the transfer matrix method are validated by comparing with the finite element solutions. A differential stiffness matrix due to blade rotation is derived to facilitate the finite element solutions.
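The transfer matrix idea, and its validation against an eigenvalue solution, can be sketched on a much simpler chain-like structure. The example below is a hedged illustration only (a fixed-free chain of two equal masses and springs, not a multiple load path blade): the state [displacement, force] is propagated through per-cell transfer matrices, natural frequencies are the roots of the free-end force, and they match the eigenvalues of the assembled stiffness/mass problem.

```python
import numpy as np
import scipy.optimize as opt

k = m = 1.0          # toy spring stiffness and mass

def end_force(w):
    """Propagate [u, F] from the fixed end; a natural frequency zeroes F."""
    spring = np.array([[1.0, 1.0 / k], [0.0, 1.0]])   # u gains F/k across a spring
    state = np.array([0.0, 1.0])                      # fixed end: u = 0, unit force
    for _ in range(2):                                # two spring-mass cells
        state = spring @ state
        mass = np.array([[1.0, 0.0], [-w**2 * m, 1.0]])  # F drops by w^2 m u
        state = mass @ state
    return state[1]                                   # free-end force residual

# Bracket the sign changes of the residual and root-find
freqs = [opt.brentq(end_force, a, b) for a, b in [(0.1, 1.0), (1.0, 2.0)]]

# Reference ("finite element" style) solution: eigenvalues of K/m
K = np.array([[2 * k, -k], [-k, k]])
w_ref = np.sort(np.sqrt(np.linalg.eigvalsh(K / m)))
print(np.allclose(freqs, w_ref))   # True: transfer matrix matches eigen-solution
```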
Flux quench in a system of interacting spinless fermions in one dimension
NASA Astrophysics Data System (ADS)
Nakagawa, Yuya O.; Misguich, Grégoire; Oshikawa, Masaki
2016-05-01
We study a quantum quench in a one-dimensional spinless fermion model (equivalent to the XXZ spin chain), where a magnetic flux is suddenly switched off. This quench is equivalent to imposing a pulse of electric field and therefore generates an initial particle current. This current is not a conserved quantity in the presence of a lattice and interactions, and we investigate numerically its time evolution after the quench, using the infinite time-evolving block decimation method. For repulsive interactions or large initial flux, we find oscillations that are governed by excitations deep inside the Fermi sea. At long times we observe that the current remains nonvanishing in the gapless cases, whereas it decays to zero in the gapped cases. Although the linear response theory (valid for a weak flux) predicts the same long-time limit of the current for repulsive and attractive interactions (relation with the zero-temperature Drude weight), larger nonlinearities are observed for repulsive interactions than for attractive ones.
Equivalence of quantum Boltzmann equation and Kubo formula for dc conductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Su, Z.B.; Chen, L.Y.
1990-02-01
This paper presents a derivation of the quantum Boltzmann equation for linear dc transport, with a correction term to Mahan-Hansch's equations, and derives a formal solution to it. Based on this formal solution, the authors find that the electric conductivity can be expressed as the retarded current-current correlation. They thus explicitly demonstrate the equivalence of the two most important theoretical methods: the quantum Boltzmann equation and the Kubo formula.
Malachowski, George C; Clegg, Robert M; Redford, Glen I
2007-12-01
A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
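The core reason such fits can be noniterative and single-pass can be shown in a few lines. The sketch below is a simplification, not the paper's discrete Chebyshev transform (which also recovers the rates): here the decay rate and harmonic frequency are assumed known, so the amplitudes of an exponential-plus-harmonic signal follow from one linear least-squares call with no iteration.

```python
import numpy as np

# Synthetic signal: decaying exponential plus one harmonic (toy parameters)
t = np.linspace(0.0, 5.0, 200)
kdec, w = 1.3, 2 * np.pi * 0.8               # assumed-known rate and frequency
signal = 2.0 * np.exp(-kdec * t) + 0.5 * np.cos(w * t) - 0.3 * np.sin(w * t)

# Design matrix of the three basis functions; one lstsq call, no iteration
B = np.column_stack([np.exp(-kdec * t), np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(B, signal, rcond=None)
print(coef)   # recovers the amplitudes 2.0, 0.5, -0.3
```

Because the fit is a single linear solve per pixel, this kind of routine scales to whole images, which is what makes real-time lifetime imaging analysis feasible.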
Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin
NASA Astrophysics Data System (ADS)
Otiefy, R. A. H.; Negm, H. M.
2010-12-01
The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three main different control strategies, a linear quadratic Gaussian (LQG) which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE), an optimal static output feedback (SOF), and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and norm of Kalman filter estimator gains (NKFEG) respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.
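The LQR piece of the LQG strategy above can be sketched on a generic system. The example below uses an arbitrary unstable two-state toy model, not the TSD/equivalent-plate aeroelastic model: the gain from the continuous algebraic Riccati equation, K = R⁻¹BᵀP, moves all closed-loop eigenvalues into the left half-plane, which is the sense in which feedback suppresses a flutter-like instability.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy open-loop system with one unstable eigenvalue (hypothetical values)
A = np.array([[0.0, 1.0], [2.0, -0.1]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])          # state and control weights

P = solve_continuous_are(A, B, Q, R)         # Riccati solution
K = np.linalg.solve(R, B.T @ P)              # LQR state-feedback gain
closed = A - B @ K                           # closed-loop dynamics

print(np.all(np.linalg.eigvals(closed).real < 0))   # True: stabilized
```

In the full LQG scheme the state fed to this gain would come from a Kalman filter estimator rather than direct measurement.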
Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model
NASA Astrophysics Data System (ADS)
Vila, J.; Fernández-Sáez, J.; Zaera, R.
2018-04-01
In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and the axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both linear and nonlinear regimes, reproducing the behavior of the 1D nonlinear discrete model adequately.
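The scale effect mentioned above can be made concrete in the linear regime. The sketch below (standard textbook dispersion relations with arbitrary unit parameters, not the authors' inertia gradient model) compares a discrete mass-spring chain, ω = 2√(k/m)|sin(qa/2)|, with the classical non-dispersive continuum, ω = a√(k/m)·q: the two agree for long waves but diverge as the wavelength approaches the inter-particle distance a, which is the regime a gradient-enriched continuum is designed to capture.

```python
import numpy as np

k, m, a = 1.0, 1.0, 1.0                      # toy stiffness, mass, spacing
q = np.array([0.1, 1.0, np.pi])              # long wave ... zone-boundary wave

w_discrete = 2.0 * np.sqrt(k / m) * np.abs(np.sin(q * a / 2.0))
w_classical = np.sqrt(k / m) * a * q         # classical continuum: no dispersion

rel_err = (w_classical - w_discrete) / w_discrete
print(np.round(rel_err, 3))   # grows from ~0 to ~57% as the wavelength shrinks
```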
Currency arbitrage detection using a binary integer programming model
NASA Astrophysics Data System (ADS)
Soon, Wanmei; Ye, Heng-Qing
2011-04-01
In this article, we examine the use of a new binary integer programming (BIP) model to detect arbitrage opportunities in currency exchanges. This model showcases an excellent application of mathematics to the real world. The concepts involved are easily accessible to undergraduate students with basic knowledge in Operations Research. Through this work, students can learn to link several types of basic optimization models, namely linear programming, integer programming and network models, and apply the well-known sensitivity analysis procedure to accommodate realistic changes in the exchange rates. Beginning with a BIP model, we discuss how it can be reduced to an equivalent but considerably simpler model, where an efficient algorithm can be applied to find the arbitrages and incorporate the sensitivity analysis procedure. A simple comparison is then made with a different arbitrage detection model. This exercise helps students learn to apply basic Operations Research concepts to a practical real-life example, and provides insights into the processes involved in Operations Research model formulations.
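The network view that the BIP model reduces to can be sketched directly. The example below is not the article's BIP formulation but the classic equivalent it links to: an arbitrage is a cycle of exchanges whose rates multiply to more than 1, i.e. a negative cycle under edge weights -log(rate), detectable by Bellman-Ford (the currency symbols and rates are hypothetical).

```python
import math

rates = {                        # toy exchange rates (hypothetical)
    ("USD", "EUR"): 0.9, ("EUR", "GBP"): 0.9, ("GBP", "USD"): 1.3,
    ("USD", "GBP"): 0.8,
}

def has_arbitrage(rates):
    """True iff some exchange cycle multiplies to > 1 (a negative -log cycle)."""
    nodes = {c for pair in rates for c in pair}
    dist = {c: 0.0 for c in nodes}           # virtual source to every currency
    edges = [(u, v, -math.log(r)) for (u, v), r in rates.items()]
    for _ in range(len(nodes) - 1):          # standard Bellman-Ford relaxation
        for u, v, w in edges:
            dist[v] = min(dist[v], dist[u] + w)
    # one more pass: any further relaxation reveals a negative cycle
    return any(dist[u] + w < dist[v] - 1e-12 for u, v, w in edges)

print(has_arbitrage(rates))      # True: USD->EUR->GBP->USD multiplies to 1.053
```

Students can compare this against the BIP formulation: both encode the same no-cycle-with-product-above-one condition, one as an optimization model and one as a graph algorithm.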
NASA Astrophysics Data System (ADS)
Goyal, Deepak
Textile composites have a wide variety of applications in the aerospace, sports, automobile, marine and medical industries. Due to the availability of a variety of textile architectures and the numerous parameters associated with each, optimal design through extensive experimental testing is not practical. Predictive tools are needed to perform virtual experiments on various options. The focus of this research is to develop a better understanding of linear elastic response, plasticity and material damage induced nonlinear behavior, and the mechanics of load flow in textile composites. Textile composites exhibit multiple scales of complexity. The various textile behaviors are analyzed using two-scale finite element modeling. A framework to allow use of a wide variety of damage initiation and growth models is proposed. Plasticity-induced non-linear behavior of 2x2 braided composites is investigated using a modeling approach based on Hill's yield function for orthotropic materials. The mechanics of load flow in textile composites is demonstrated using special non-standard postprocessing techniques that not only highlight the important details, but also transform the extensive amount of output data into comprehensible modes of behavior. The investigations show that the damage models differ from each other in terms of the amount of degradation as well as the properties to be degraded under a particular failure mode. When compared with experimental data, the predictions of some models match well for glass/epoxy composites whereas others match well for carbon/epoxy composites. However, all the models predicted very similar responses when the damage factors were made similar, which shows that the magnitudes of the damage factors are very important. Full 3D as well as equivalent tape laminate predictions lie within the range of the experimental data for a wide variety of braided composites with different material systems, which validated the plasticity analysis.
Conclusions about the effect of fiber type on the degree of plasticity induced non-linearity in a +/-25° braid depend on the measure of non-linearity. Investigations about the mechanics of load flow in textile composites bring new insights about the textile behavior. For example, the reasons for existence of transverse shear stress under uni-axial loading and occurrence of stress concentrations at certain locations were explained.
Molenaar, Peter C M
2017-01-01
Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of a new type of hybrid vector autoregressive model. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
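The rotation indeterminacy of factor models is easy to verify numerically. The sketch below (random toy loadings, not the paper's time-series setting) shows the static core of the argument: rotating the loadings by any orthogonal R while counter-rotating the factors leaves the implied observed covariance ΛΛᵀ + Ψ unchanged, so the two solutions are observationally equivalent.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(5, 2))                 # loadings: 5 series, 2 factors
Psi = np.diag(rng.uniform(0.5, 1.0, 5))     # unique (idiosyncratic) variances

theta = 0.7                                 # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

Sigma  = L @ L.T + Psi                      # implied covariance, factors ~ N(0, I)
Sigma2 = (L @ R) @ (L @ R).T + Psi          # rotated solution
print(np.allclose(Sigma, Sigma2))           # True: infinitely many equivalent fits
```

In the dynamic case the same indeterminacy applies at every lag, which is the source of the infinite set of equivalent solutions noted above.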
Guillaume, Bryan; Wang, Changqing; Poh, Joann; Shen, Mo Jun; Ong, Mei Lyn; Tan, Pei Fang; Karnani, Neerja; Meaney, Michael; Qiu, Anqi
2018-06-01
Statistical inference on neuroimaging data is often conducted using a mass-univariate model, equivalent to fitting a linear model at every voxel with a known set of covariates. Due to the large number of linear models, it is challenging to check if the selection of covariates is appropriate and to modify this selection adequately. The use of standard diagnostics, such as residual plotting, is clearly not practical for neuroimaging data. However, the selection of covariates is crucial for linear regression to ensure valid statistical inference. In particular, the mean model of regression needs to be reasonably well specified. Unfortunately, this issue is often overlooked in the field of neuroimaging. This study aims to adopt the existing Confounder Adjusted Testing and Estimation (CATE) approach and to extend it for use with neuroimaging data. We propose a modification of CATE that can yield valid statistical inferences using Principal Component Analysis (PCA) estimators instead of Maximum Likelihood (ML) estimators. We then propose a non-parametric hypothesis testing procedure that can improve upon parametric testing. Monte Carlo simulations show that the modification of CATE allows for more accurate modelling of neuroimaging data and can in turn yield a better control of False Positive Rate (FPR) and Family-Wise Error Rate (FWER). We demonstrate its application to an Epigenome-Wide Association Study (EWAS) on neonatal brain imaging and umbilical cord DNA methylation data obtained as part of a longitudinal cohort study. Software for this CATE study is freely available at http://www.bioeng.nus.edu.sg/cfa/Imaging_Genetics2.html. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
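The mass-univariate setup described above can be sketched in a few lines. This is an illustration of the baseline model only (synthetic data, not the CATE extension or its PCA estimators): one known design matrix X is fit at every voxel simultaneously in a single least-squares call, which is also why per-voxel residual diagnostics are impractical at neuroimaging scale.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_vox = 50, 1000
# Design matrix: intercept plus two covariates, shared by all voxels
X = np.column_stack([np.ones(n_subj), rng.normal(size=(n_subj, 2))])
beta_true = rng.normal(size=(3, n_vox))                    # per-voxel effects
Y = X @ beta_true + 0.1 * rng.normal(size=(n_subj, n_vox)) # voxel-wise data

# One call fits all 1000 linear models at once
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta_hat.shape)                                      # (3, 1000)
```

Any unmodeled confounder enters every one of these regressions at once, which is why a misspecified mean model inflates false positives across the whole image.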
Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K
2010-06-01
Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when a single circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. 
The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.
Simulation Study of Near-Surface Coupling of Nuclear Devices vs. Equivalent High-Explosive Charges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, Kevin B; Walton, Otis R; Benjamin, Russ
2014-09-29
A computational study was performed to examine the differences in near-surface ground-waves and air-blast waves generated by high-explosive energy sources and those generated by much higher energy-density, low-yield nuclear sources. The study examined the effect of explosive-source emplacement (i.e., height-of-burst, HOB, or depth-of-burial, DOB) over a range from depths of -35 m to heights of 20 m, for explosions with an explosive yield of 1 kt. The chemical explosive was modeled by a JWL equation-of-state model for a ~14 m diameter sphere of ANFO (~1,200,000 kg, a 1 kt equivalent yield), and the high-energy-density source was modeled as a one tonne (1000 kg) plasma of 'iron gas' (utilizing LLNL's tabular equation-of-state database, LEOS) in a 2 m diameter sphere, with a total internal-energy content equivalent to 1 kt. A consistent equivalent-yield coupling-factor approach was developed to compare the behavior of the two sources. The results indicate that the equivalent-yield coupling-factor for air-blasts from 1 kt ANFO explosions varies monotonically and continuously from a nearly perfect reflected wave off of the ground surface for a HOB ≈ 20 m, to a coupling factor of nearly zero at DOB ≈ -25 m. The nuclear air-blast coupling curve, on the other hand, remained nearly equal to a perfectly reflected wave all the way down to HOBs very near zero, and then quickly dropped to a value near zero for explosions with a DOB ≈ -10 m. The near-surface ground-wave traveling horizontally out from the explosive source region to distances of hundreds of meters exhibited equivalent-yield coupling-factors that varied nearly linearly with HOB/DOB for the simulated ANFO explosive source, going from a value near zero at HOB ≈ 5 m to nearly one at DOB ≈ -25 m.
The nuclear-source-generated near-surface ground-wave coupling-factor remained near zero for almost all HOBs greater than zero, and then appeared to vary nearly linearly with depth-of-burial until it reached a value of one at a DOB between 15 m and 20 m. These simulations confirm the expected result that coupling to the ground, or the air, changes much more rapidly with emplacement location for a high-energy-density (i.e., nuclear-like) explosive source than it does for relatively low-energy-density chemical explosive sources. The Energy Partitioning, Energy Coupling (EPEC) platform at LLNL utilizes laser energy from one quad (i.e., 4 laser beams) of the 192-beam NIF laser bank to deliver ~10 kJ of energy to 1 mg of silver in a hohlraum, creating an effective small-explosive 'source' with an energy density comparable to those in low-yield nuclear devices. Such experiments have the potential to provide direct experimental confirmation of the simulation results obtained in this study, at a physical scale (and time scale) a factor of 1000 smaller than the spatial or temporal scales typically encountered when dealing with nuclear explosions.
Model-based meta-analysis for comparing Vitamin D2 and D3 parent-metabolite pharmacokinetics.
Ocampo-Pelland, Alanna S; Gastonguay, Marc R; Riggs, Matthew M
2017-08-01
Association of Vitamin D (D3 & D2) and its 25OHD metabolite (25OHD3 & 25OHD2) exposures with various diseases is an active research area. D3 and D2 dose-equivalency and each form's ability to raise 25OHD concentrations are not well-defined. The current work describes a population pharmacokinetic (PK) model for D2 and 25OHD2 and the use of a previously developed D3-25OHD3 PK model [1] for comparing D3 and D2-related exposures. Public-source D2 and 25OHD2 PK data in healthy or osteoporotic populations, including 17 studies representing 278 individuals (15 individual-level and 18 arm-level units), were selected using search criteria in PUBMED. Data included oral, single and multiple D2 doses (400-100,000 IU/d). Nonlinear mixed effects models were developed simultaneously for D2 and 25OHD2 PK (NONMEM v7.2) by considering 1- and 2-compartment models with linear or nonlinear clearance. Unit-level random effects and residual errors were weighted by arm sample size. Model simulations compared 25OHD exposures, following repeated D2 and D3 oral administration across typical dosing and baseline ranges. D2 parent and metabolite were each described by 2-compartment models with numerous parameter estimates shared with the D3-25OHD3 model [1]. Notably, parent D2 was eliminated (converted to 25OHD) through a first-order clearance whereas the previously published D3 model [1] included a saturable non-linear clearance. Similar to 25OHD3 PK model results [1], 25OHD2 was eliminated by a first-order clearance, which was almost twice as fast as that of 25OHD3. Simulations at lower baselines, following lower equivalent doses, indicated that D3 was more effective than D2 at raising 25OHD concentrations. Due to saturation of D3 clearance, however, at higher doses or baselines, the probability of D2 surpassing D3's ability to raise 25OHD concentrations increased substantially.
Since 25OHD concentrations generally surpassed 75 nmol/L at these higher baselines by 3 months, there would be no expected clinical difference in the two forms.
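The contrast the authors draw between first-order (D2-like) and saturable (D3-like) parent clearance can be sketched with a one-compartment toy model. All rate constants below are invented, not the fitted NONMEM estimates; the known analytic AUCs (C0/k for first-order, (Km·C0 + C0²/2)/Vmax for Michaelis-Menten) anchor the check.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def auc(c0, rhs, t_end=600.0):
    """Area under the concentration-time curve after a bolus-like start at c0."""
    sol = solve_ivp(rhs, (0.0, t_end), [c0], dense_output=True, rtol=1e-8, atol=1e-10)
    t = np.linspace(0.0, t_end, 6000)
    return trapezoid(sol.sol(t)[0], t)

k = 0.1                                        # 1/h, first-order elimination (illustrative)
vmax, km = 1.0, 5.0                            # Michaelis-Menten parameters (illustrative)
linear = lambda t, c: [-k * c[0]]
saturable = lambda t, c: [-vmax * c[0] / (km + c[0])]

# Dose-normalized exposure: constant for first-order clearance, but rising with
# dose once the saturable pathway approaches Vmax - the behavior that erodes a
# D3-like form's advantage at higher doses and baselines.
for c0 in (5.0, 50.0):
    print(c0, auc(c0, linear) / c0, auc(c0, saturable) / c0)
```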
Bateli, Maria; Ben Rahal, Ghada; Christmann, Marin; Vach, Kirstin; Kohal, Ralf-Joachim
2018-01-01
Objective To test whether or not the modified design of the test implant (intended to increase primary stability) has an equivalent effect on MBL compared to the control. Methods Forty patients were randomly assigned to receive test or control implants to be installed in identically dimensioned bony beds. Implants were radiographically monitored at installation, at prosthetic delivery, and after one year. Treatments were considered equivalent if the 90% confidence interval (CI) for the mean difference (MD) in MBL lay between −0.25 and 0.25 mm. Additionally, several soft tissue parameters and patient-reported outcome measures (PROMs) were evaluated. Linear mixed models were fitted for each patient to assess time effects on response variables. Results Thirty-three patients (21 males, 12 females; 58.2 ± 15.2 years old) with 81 implants (47 test, 34 control) were available for analysis after a mean observation period of 13.9 ± 4.5 months (3 dropouts, 3 missed appointments, and 1 missing file). The adjusted MD in MBL after one year was −0.13 mm (90% CI: −0.46 to 0.19; test group: −0.49; control group: −0.36; p = 0.507). Conclusion Both implant systems can be considered successful after one year of observation. Concerning MBL in the presented setup, equivalence of the treatments cannot be concluded. Registration This trial is registered with the German Clinical Trials Register (ID: DRKS00007877). PMID:29610765
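The equivalence criterion used in the trial reduces to one check: the whole CI for the mean difference must fall inside the pre-specified ±0.25 mm margin. The sketch below applies that rule to the interval reported in the abstract.

```python
def equivalent(ci_low, ci_high, margin=0.25):
    """Declare equivalence only if the entire confidence interval for the
    mean difference lies strictly inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

# Reported adjusted mean difference in MBL: -0.13 mm, 90% CI (-0.46, 0.19)
print(equivalent(-0.46, 0.19))   # False: the lower CI bound crosses -0.25 mm
```

This is why the trial concludes that equivalence "cannot be concluded" even though both systems performed well: the point estimate is small, but the interval is too wide for the chosen margin.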
NASA Astrophysics Data System (ADS)
Tang, Jiayu; Kayo, Issha; Takada, Masahiro
2011-09-01
We develop a maximum-likelihood-based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction can be done assuming that the density and velocity power spectra enter the redshift-space power spectrum with different angular modulations μ^(2n) (n = 0, 1, 2), and that the model FoG effect is given as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the spectrum of μ^0, or equivalently the density power spectrum Pδδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k ≃ 0.3 h Mpc^-1 for both dark matter and haloes. For the power spectrum of μ^2, which is equivalent to the density-velocity power spectrum Pδθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k ≃ 0.2 h Mpc^-1 at both redshifts z = 0 and 1, provided an adequate FoG model is marginalized over. However, for the halo spectrum, which is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum Pδθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤ k ≤ 0.3 h Mpc^-1. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations.
Using perturbation theory and assuming Einstein gravity as in the simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ^2 and increases the μ^2 power spectrum amplitudes more significantly at larger k, at lower redshifts and for more massive haloes. We find that adding the non-linearity correction term to the simulation Pδθ(k) fairly well reproduces the reconstructed Pδθ(k) for haloes up to k ≃ 0.2 h Mpc^-1.
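The μ^(2n) decomposition underlying the band-power reconstruction can be illustrated with a plain least-squares fit at a single k-bin. The actual method is a likelihood analysis with FoG marginalization; the band-power values below are made-up numbers and no FoG damping is included.

```python
import numpy as np

mu = np.linspace(0.0, 1.0, 25)                 # cosine of the angle to the line of sight
p_true = np.array([1000.0, 400.0, 80.0])       # hypothetical mu^0, mu^2, mu^4 band powers

# Synthetic redshift-space power at one wavenumber bin
pk_mu = p_true[0] + p_true[1] * mu**2 + p_true[2] * mu**4

# Recover the three band powers from the angular modulations mu^(2n), n = 0, 1, 2
design = np.stack([mu**0, mu**2, mu**4], axis=1)
p_fit, *_ = np.linalg.lstsq(design, pk_mu, rcond=None)
print(p_fit)
```

In the linear regime the recovered μ^0 and μ^2 coefficients map onto Pδδ(k) and Pδθ(k), which is exactly the correspondence the abstract exploits.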
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
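The leave-one-out comparison of a kernel SVM against logistic regression can be sketched with scikit-learn on synthetic data. The two features stand in for GTV volume and V75, the labels and effect sizes are invented, and no conclusion about the real cohort follows from this toy run.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 56                                     # cohort size quoted in the abstract
X = rng.normal(size=(n, 2))                # stand-ins for GTV volume and V75
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

loo = LeaveOneOut()
scores = {}
for name, model in [("svm_rbf", SVC(kernel="rbf", probability=True, random_state=0)),
                    ("logistic", LogisticRegression())]:
    # Out-of-sample probability for each patient from the 55 others
    prob = cross_val_predict(model, X, y, cv=loo, method="predict_proba")[:, 1]
    scores[name] = spearmanr(prob, y)[0]   # Spearman rs, as in the paper
print(scores)
```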
NASA Astrophysics Data System (ADS)
Wang, Hailong; Ho, Derek Y. H.; Lawton, Wayne; Wang, Jiao; Gong, Jiangbin
2013-11-01
Recent studies have established that, in addition to the well-known kicked-Harper model (KHM), an on-resonance double-kicked rotor (ORDKR) model also has Hofstadter's butterfly Floquet spectrum, with strong resemblance to the standard Hofstadter spectrum that is a paradigm in studies of the integer quantum Hall effect. Earlier it was shown that the quasienergy spectra of these two dynamical models (i) can exactly overlap with each other if an effective Planck constant takes irrational multiples of 2π and (ii) will be different if the same parameter takes rational multiples of 2π. This work makes detailed comparisons between these two models, with an effective Planck constant given by 2πM/N, where M and N are coprime and odd integers. It is found that the ORDKR spectrum (with two periodic kicking sequences having the same kick strength) has one flat band and N-1 nonflat bands with the largest bandwidth decaying in a power law as ~K^(N+2), where K is a kick strength parameter. The existence of a flat band is strictly proven and the power-law scaling, numerically checked for a number of cases, is also analytically proven for a three-band case. By contrast, the KHM does not have any flat band and its bandwidths scale linearly with K. This is shown to result in dramatic differences in dynamical behavior, such as transient (but extremely long) dynamical localization in ORDKR, which is absent in the KHM. Finally, we show that despite these differences, there exist simple extensions of the KHM and ORDKR model (upon introducing an additional periodic phase parameter) such that the resulting extended KHM and ORDKR model are actually topologically equivalent, i.e., they yield exactly the same Floquet-band Chern numbers and display topological phase transitions at the same kick strengths. A theoretical derivation of this topological equivalence is provided. 
These results are also of interest to our current understanding of quantum-classical correspondence considering that the KHM and ORDKR model have exactly the same classical limit after a simple canonical transformation.
On Structural Equation Model Equivalence.
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
1999-01-01
Presents a necessary and sufficient condition for the equivalence of structural-equation models that is applicable to models with parameter restrictions and models that may or may not fulfill assumptions of the rules. Illustrates the application of the approach for studying model equivalence. (SLD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.
Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least square regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%; whereas, the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratio > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
Radiation dosimetry and biophysical models of space radiation effects
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wu, Honglu; Shavers, Mark R.; George, Kerry
2003-01-01
Estimating the biological risks from space radiation remains a difficult problem because of the many radiation types including protons, heavy ions, and secondary neutrons, and the absence of epidemiology data for these radiation types. Developing useful biophysical parameters or models that relate energy deposition by space particles to the probabilities of biological outcomes is a complex problem. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra. In contrast to conventional dosimetric methods, models of radiation track structure provide descriptions of energy deposition events in biomolecules, cells, or tissues, which can be used to develop biophysical models of radiation risks. In this paper, we address the biophysical description of heavy particle tracks in the context of the interpretation of both space radiation dosimetry and radiobiology data, which may provide insights into new approaches to these problems.
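The dose-equivalent bookkeeping mentioned here can be made concrete with the ICRP-60 quality factor Q(L), which weights absorbed dose by unrestricted LET. The piecewise form of Q(L) is the published ICRP-60 relation; the LET spectrum in the example is invented for illustration.

```python
def q_factor(let):
    """ICRP-60 quality factor Q(L), L = unrestricted LET in keV/um of water."""
    if let < 10.0:
        return 1.0
    if let < 100.0:
        return 0.32 * let - 2.2
    return 300.0 / let ** 0.5

def dose_equivalent(let_bins, doses_gy):
    """H = sum_i Q(L_i) * D_i : absorbed doses (Gy) per LET bin weighted into Sv."""
    return sum(q_factor(l) * d for l, d in zip(let_bins, doses_gy))

# Illustrative mixed space-radiation field: low-LET protons plus a small
# high-LET heavy-ion component that dominates the dose equivalent
print(dose_equivalent([0.5, 30.0, 150.0], [0.010, 0.002, 0.001]))
```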
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency performance of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach exploits the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration after applying only one trial perturbation to the deformable mirror (one correction per perturbation); the correction is reconstructed via singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO correction under various random and dynamic aberrations were implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which requires N trial perturbations of the deformable mirror for each aberration correction.
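The relation the method rests on can be probed numerically: as the amplitude of a phase aberration grows, both the mean-square phase gradient over the pupil and the second moment of the far-field intensity grow together. The pupil geometry, the single astigmatism-like mode, and the sampling below are assumptions for illustration, not the paper's setup.

```python
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0
mode = X**2 - Y**2                         # a single astigmatism-like aberration mode

def far_field_second_moment(phase):
    """Second moment about the optical axis of |FFT(pupil * exp(i*phase))|^2."""
    field = np.zeros((4 * n, 4 * n), dtype=complex)    # zero-pad for finer sampling
    field[:n, :n] = pupil * np.exp(1j * phase)
    inten = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    m = inten.shape[0]
    u = np.arange(m) - m // 2
    U, V = np.meshgrid(u, u)
    return float((inten * (U**2 + V**2)).sum() / inten.sum())

def mean_square_gradient(phase):
    gy, gx = np.gradient(np.where(pupil, phase, 0.0), x, x)
    return float((gx[pupil] ** 2 + gy[pupil] ** 2).mean())

amps = [0.0, 0.5, 1.0, 1.5]
ms = [mean_square_gradient(a * mode) for a in amps]
sm = [far_field_second_moment(a * mode) for a in amps]
```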
Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.
Griffith, Daniel A; Peres-Neto, Pedro R
2006-10-01
Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed in order to consider explicitly spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
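The topology-based (CB) construction can be sketched in a few lines: the spatial predictors are the eigenvectors of the doubly centred connectivity matrix, and their eigenvalues are proportional to each eigenvector's Moran's I. The chain-graph connectivity below is a toy stand-in for a real study area.

```python
import numpy as np

def moran_eigenvector_maps(W):
    """Eigenvectors of H W H (H = centring operator), usable directly as
    spatial predictors in ordinary or generalized linear models."""
    n = W.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(H @ W @ H)
    order = np.argsort(vals)[::-1]        # large positive eigenvalues first:
    return vals[order], vecs[:, order]    # broad-scale, positively autocorrelated maps

# Toy connectivity: rook neighbours on a chain of 10 sites
n = 10
W = np.zeros((n, n))
i = np.arange(n - 1)
W[i, i + 1] = W[i + 1, i] = 1.0
vals, E = moran_eigenvector_maps(W)
# Columns of E now enter a regression like any other covariate, which is the
# flexibility advantage the abstract emphasizes.
```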
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
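A minimal simulation of the variant described, with N fixed initial steps, is straightforward; in the classical elephant random walk the memory parameter p controls the transition, with superdiffusion for p > 3/4. The parameter values below are illustrative.

```python
import numpy as np

def elephant_walk(p, n_init, t_max, seed=0):
    """Elephant random walk with n_init fixed initial +1 steps.
    At each later time a past step is chosen uniformly at random and
    repeated with probability p, reversed with probability 1 - p."""
    rng = np.random.default_rng(seed)
    steps = [1] * n_init
    for _ in range(t_max):
        past = steps[rng.integers(len(steps))]
        steps.append(past if rng.random() < p else -past)
    return int(np.sum(steps))

# Full memory (p = 1) with an all-(+1) initial state is deterministic:
# every remembered step is +1, so the walker simply marches right.
print(elephant_walk(1.0, 5, 20))
```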
An equivalent viscoelastic model for rock mass with parallel joints
NASA Astrophysics Data System (ADS)
Li, Jianchun; Ma, Guowei; Zhao, Jian
2010-03-01
An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
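The single-joint building block of the equivalent model can be written down directly: for a normally incident P-wave crossing one linearly deformable joint, the displacement discontinuity method gives |T(ω)| = [1 + (ωz/2k)²]^(-1/2), with z = ρc the seismic impedance and k the joint specific stiffness. The material values below are illustrative, not from the paper.

```python
import numpy as np

rho, c = 2650.0, 5000.0          # rock density (kg/m^3) and P-wave speed (m/s), illustrative
k = 5e9                          # joint normal specific stiffness (Pa/m), illustrative
z = rho * c                      # seismic impedance

def transmission(freq_hz):
    """Magnitude of the single-joint transmission coefficient, displacement
    discontinuity model, normal incidence."""
    w = 2.0 * np.pi * freq_hz
    return 1.0 / np.sqrt(1.0 + (w * z / (2.0 * k)) ** 2)

# Long wavelengths pass almost unattenuated; high frequencies are filtered,
# which is why the equivalent medium is frequency dependent.
for f in (1.0, 50.0, 500.0):
    print(f, transmission(f))
```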
Fall with Linear Drag and Wien's Displacement Law: Approximate Solution and Lambert Function
ERIC Educational Resources Information Center
Vial, Alexandre
2012-01-01
We present an approximate solution for the downward time of travel in the case of a mass falling with a linear drag force. We show how a quasi-analytical solution implying the Lambert function can be found. We also show that solving the previous problem is equivalent to the search for Wien's displacement law. These results can be of interest for…
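The Lambert-function solution mentioned here can be reproduced explicitly. With terminal velocity vt = gτ, the fall distance y(T) = vt·T − vt·τ(1 − e^(−T/τ)) rearranges to x·e^(−x) = e^(−A) with A = h/(vt·τ) + 1 and x = A − T/τ, giving T in closed form via the principal branch W0. The g and τ values are illustrative; the root-finding cross-check validates the derivation.

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

g, tau = 9.81, 2.0               # gravity (m/s^2) and velocity relaxation time (s), illustrative
vt = g * tau                     # terminal velocity for linear drag

def fall_time_lambert(h):
    """Time to fall height h from rest with linear drag, closed form:
    T = tau * (A + W0(-exp(-A))), A = h/(vt*tau) + 1."""
    A = h / (vt * tau) + 1.0
    return tau * (A + lambertw(-np.exp(-A)).real)

def fall_time_numeric(h):
    """Same time by direct root finding on y(T) - h."""
    y = lambda T: vt * T - vt * tau * (1.0 - np.exp(-T / tau)) - h
    return brentq(y, 1e-9, 1e3)

print(fall_time_lambert(1.0))    # slightly above the drag-free 0.4515 s, as expected
```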
Influence of refractive condition on retinal vasculature complexity in younger subjects.
Azemin, Mohd Zulfaezal Che; Daud, Norsyazwani Mohamad; Ab Hamid, Fadilah; Zahari, Ilyanoon; Sapuan, Abdul Halim
2014-01-01
The aim of this study was to compare the retinal vasculature complexity between emmetropia, and myopia in younger subjects. A total of 82 patients (24.12 ± 1.25 years) with two types of refractive conditions, myopia and emmetropia were enrolled in this study. Refraction data were converted to spherical equivalent refraction. These retinal images (right eyes) were obtained from NAVIS Lite Image Filing System and the vasculature complexity was measured by fractal dimension (Df), quantified using a computer software following a standardized protocol. There was a significant difference (P < 0.05) in the value of Df between emmetropic (1.5666 ± 0.0160) and myopic (1.5588 ± 0.0142) groups. A positive correlation (rho = 0.260, P < 0.05) between Df and the spherical equivalent refraction was detected in this study. Using a linear model, it was estimated that 6.7% of the variation in Df could be explained by spherical equivalent refraction. This study provides valuable findings about the effect of moderate to high myopia on the fractal dimension of the retinal vasculature network. These results show that myopic refraction in younger subjects was associated with a decrease in Df, suggesting a loss of retinal vessel density with moderate to high myopia.
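The fractal dimension measure used here is typically a box-counting estimate: regress log(box count) on log(1/box size). The sketch below is a generic box-counting implementation, not the study's software; the sanity checks use shapes with exactly known dimension.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image by counting
    occupied s x s boxes at several scales and fitting a log-log slope."""
    counts = []
    for s in sizes:
        h = (mask.shape[0] // s) * s          # crop so the grid tiles exactly
        blocks = mask[:h, :h].reshape(h // s, s, h // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square has dimension 2, a straight line dimension 1
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[0, :] = True
print(box_counting_dimension(square), box_counting_dimension(line))
```

A retinal vessel segmentation mask fed to the same function yields the Df values (around 1.55) the study compares between groups.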
NASA Astrophysics Data System (ADS)
Ma, Lijun; Lee, Letitia; Barani, Igor; Hwang, Andrew; Fogh, Shannon; Nakamura, Jean; McDermott, Michael; Sneed, Penny; Larson, David A.; Sahgal, Arjun
2011-11-01
Rapid delivery of multiple shots or isocenters is one of the hallmarks of Gamma Knife radiosurgery. In this study, we investigated whether the temporal order of shots delivered with Gamma Knife Perfexion would significantly influence the biological equivalent dose for complex multi-isocenter treatments. Twenty single-target cases were selected for analysis. For each case, 3D dose matrices of individual shots were extracted and single-fraction equivalent uniform dose (sEUD) values were determined for all possible shot delivery sequences, corresponding to different patterns of temporal dose delivery within the target. We found significant variations in the sEUD values among these sequences exceeding 15% for certain cases. However, the sequences for the actual treatment delivery were found to agree (<3%) and to correlate (R2 = 0.98) excellently with the sequences yielding the maximum sEUD values for all studied cases. This result is applicable for both fast and slow growing tumors with α/β values of 2 to 20 according to the linear-quadratic model. In conclusion, despite large potential variations in different shot sequences for multi-isocenter Gamma Knife treatments, current clinical delivery sequences exhibited consistent biological target dosing that approached that maximally achievable for all studied cases.
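One common LQ-based definition of single-fraction equivalent uniform dose is the uniform dose that yields the same mean cell survival as the actual dose distribution; the abstract's exact formulation may differ, and the α, β values below (α/β = 10, within the 2-20 range discussed) are illustrative.

```python
import numpy as np

def seud(doses, alpha=0.35, beta=0.035):
    """Single-fraction EUD under the linear-quadratic model: solve
    alpha*D + beta*D^2 = -ln( mean_i exp(-alpha*d_i - beta*d_i^2) )
    for the uniform dose D giving the same mean survival."""
    sf = np.mean(np.exp(-alpha * doses - beta * doses ** 2))
    loss = -np.log(sf)
    return (-alpha + np.sqrt(alpha ** 2 + 4.0 * beta * loss)) / (2.0 * beta)

# A uniform distribution maps to itself; a mean-preserving cold spot drags
# sEUD well below the mean dose, which is why delivery order can matter
# once time-dependent repair is folded in.
print(seud(np.full(10, 18.0)), seud(np.array([10.0, 26.0])))
```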
van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W
2016-07-01
Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The current equivalence (intermeasurement procedure CV 28.6%) between the methods was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.
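The harmonization mechanism, recalibrating every method against one commutable calibrator, can be sketched with a toy in which each method differs only by a proportional calibration bias. Real methods also differ in noise and non-proportional bias, so this is the best-case picture the abstract's 7.7% residual CV qualifies.

```python
import numpy as np

rng = np.random.default_rng(2)
true = rng.uniform(1.0, 20.0, size=30)              # "true" hepcidin, arbitrary units
slopes = np.array([0.6, 0.9, 1.3, 1.8])             # method-specific calibration biases (hypothetical)
measured = slopes[:, None] * true[None, :]          # methods x samples

def inter_method_cv(vals):
    """Mean across samples of the between-method coefficient of variation."""
    return float(np.mean(vals.std(axis=0, ddof=1) / vals.mean(axis=0)))

# Harmonization: rescale each method so the common calibrator reads the same value
calibrator_true = 10.0
calib_readings = slopes * calibrator_true
harmonized = measured * (calibrator_true / calib_readings)[:, None]

print(inter_method_cv(measured), inter_method_cv(harmonized))
```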
NASA Technical Reports Server (NTRS)
Castles, Walter, Jr.; Gray, Robin B.
1951-01-01
The empirical relation between the induced velocity, thrust, and rate of vertical descent of a helicopter rotor was calculated from wind tunnel force tests on four model rotors by the application of blade-element theory to the measured values of the thrust, torque, blade angle, and equivalent free-stream rate of descent. The model tests covered the useful range of C(sub t)/sigma(sub e) (where C(sub t) is the thrust coefficient and sigma(sub e) is the effective solidity) and the range of vertical descent from hovering to descent velocities slightly greater than those for autorotation. The three bladed models, each of which had an effective solidity of 0.05 and NACA 0015 blade airfoil sections, were as follows: (1) constant-chord, untwisted blades of 3-ft radius; (2) untwisted blades of 3-ft radius having a 3/1 taper; (3) constant-chord blades of 3-ft radius having a linear twist of 12 degrees (washout) from axis of rotation to tip; and (4) constant-chord, untwisted blades of 2-ft radius. Because of the incorporation of a correction for blade dynamic twist and the use of a method of measuring the approximate equivalent free-stream velocity, it is believed that the data obtained from this program are more applicable to free-flight calculations than the data from previous model tests.
Gritti, Fabrice
2016-11-18
A new class of gradient liquid chromatography (GLC) is proposed and its performance is analyzed from a theoretical viewpoint. During the course of such gradients, both the solvent strength and the column temperature are changed simultaneously in time and space. The solvent and temperature gradients propagate along the chromatographic column at their own, independent linear velocities. This class of gradient is called combined solvent- and temperature-programmed gradient liquid chromatography (CST-GLC). General expressions for the retention time, retention factor, and temporal peak width of the analytes at elution in CST-GLC are derived for linear solvent strength (LSS) retention models, modified van't Hoff retention behavior, linear and non-distorted solvent gradients, and linear temperature gradients. Under these conditions, the theory predicts that CST-GLC is equivalent to a unique, apparent dynamic solvent gradient. The apparent solvent gradient steepness is the sum of the solvent and temperature steepnesses. The apparent solvent linear velocity is the reciprocal of the steepness-averaged sum of the reciprocals of the actual solvent and temperature linear velocities. The advantage of CST-GLC over conventional GLC is demonstrated for the resolution of protein digests (peptide mapping) when applying smooth, retained, linear acetonitrile gradients in combination with a linear temperature gradient (from 20°C to 90°C) using 300 μm × 150 mm capillary columns packed with sub-2 μm particles. The benefit of CST-GLC is demonstrated when the temperature gradient propagates at the same velocity as the chromatographic speed. The experimental proof-of-concept for the realization of temperature ramps propagating at a finite and constant linear velocity is also briefly described. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, W.
High-resolution satellite data provide detailed, quantitative descriptions of land surface characteristics over large areas so that objective scale linkage becomes feasible. With the aid of satellite data, Sellers et al. and Wood and Lakshmi examined the linearity of processes scaled up from 30 m to 15 km. If the phenomenon is scale invariant, then the aggregated value of a function or flux is equivalent to the function computed from aggregated values of controlling variables. The linear relation may be realistic for limited land areas having no large surface contrasts to cause significant horizontal exchange. However, for areas with sharp surface contrasts, horizontal exchange and different dynamics in the atmospheric boundary may induce nonlinear interactions, such as at interfaces of land-water, forest-farm land, and irrigated crops-desert steppe. The linear approach, however, represents the simplest scenario, and is useful for developing an effective scheme for incorporating subgrid land surface processes into large-scale models. Our studies focus on coupling satellite data and ground measurements with a satellite-data-driven land surface model to parameterize surface fluxes for large-scale climate models. In this case study, we used surface spectral reflectance data from satellite remote sensing to characterize spatial and temporal changes in vegetation and associated surface parameters in an area of about 350 × 400 km covering the southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site of the US Department of Energy's Atmospheric Radiation Measurement (ARM) Program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devic, Slobodan; Tomic, Nada; Aldelaijan, Saad
Purpose: Despite the numerous advantages of the radiochromic film dosimeter (high spatial resolution, near tissue equivalence, low energy dependence), to measure a relative dose distribution with film one needs to first measure an absolute dose (following a previously established reference dosimetry protocol) and then convert the measured absolute dose values into relative doses. In this work, we present the result of our efforts to obtain a functional form that linearizes the inherently nonlinear dose-response curve of the radiochromic film dosimetry system. Methods: The functional form [ζ = −netOD^(2/3)/ln(netOD)] was derived from calibration curves of various previously established radiochromic film dosimetry systems. In order to test the invariance of the proposed functional form with respect to the film model used, we tested it with three different GAFCHROMIC™ film models (EBT, EBT2, and EBT3) irradiated to various doses and scanned on the same scanner. For one of the film models (EBT2), we tested the invariance of the functional form to the scanner model used by scanning irradiated film pieces with three different flatbed scanner models (Epson V700, 1680, and 10000XL). To test our hypothesis that the proposed functional argument linearizes the response of the radiochromic film dosimetry system, verification tests were performed in clinical applications: percent depth dose measurements, IMRT quality assurance (QA), and brachytherapy QA. Results: The obtained R² values indicate that the choice of the functional form of the new argument appropriately linearizes the dose response of the radiochromic film dosimetry system we used. The linear behavior was insensitive to both the film model and the flatbed scanner model used. Measured PDD values using the green channel response of the GAFCHROMIC™ EBT3 film model are well within a ±2% window of the local relative dose value when compared to the tabulated Cobalt-60 data.
It was also found that 95% of gamma function points pass the criteria of 3%/3 mm for an IMRT QA plan and 3%/2 mm for a brachytherapy QA plan. Conclusions: In this paper, we demonstrate the use of a functional argument to linearize the inherently nonlinear response of a radiochromic film-based reference dosimetry system. In this way, relative dosimetry can be conveniently performed with a radiochromic film dosimetry system without the need to establish a calibration curve.
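A minimal sketch of the linearizing argument quoted above, ζ = −netOD^(2/3)/ln(netOD). The pixel values and the netOD formula for a transmission scan are illustrative assumptions, not the paper's calibration data.

```python
import math

def net_od(pv_unexposed, pv_exposed):
    """Net optical density from scanner pixel values (hypothetical 16-bit
    transmission-scan readings; netOD = log10(PV_unexposed / PV_exposed))."""
    return math.log10(pv_unexposed / pv_exposed)

def zeta(net_od_value):
    """Linearizing argument zeta = -netOD^(2/3) / ln(netOD), as quoted in the
    abstract. Positive for 0 < netOD < 1, where ln(netOD) < 0."""
    return -net_od_value ** (2.0 / 3.0) / math.log(net_od_value)

# Example: a film piece darkening from pixel value 52000 to 30000 (assumed)
od = net_od(52000, 30000)
z = zeta(od)
```

A linear fit of dose against ζ, rather than against netOD, is what makes the single-argument relative dosimetry described in the conclusions possible.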
Analytical and numerical construction of equivalent cables.
Lindsay, K A; Rosenberg, J R; Tucker, G
2003-08-01
The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite, called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix of the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
NASA Astrophysics Data System (ADS)
Li, Chong; Yuan, Juyun; Yu, Haitao; Yuan, Yong
2018-01-01
Discrete models such as the lumped parameter model and the finite element model are widely used in solving soil amplification of earthquakes. However, neither model accurately estimates the natural frequencies of a soil deposit, nor can it simulate frequency-independent damping. This research develops a new discrete model for one-dimensional viscoelastic response analysis of a layered soil deposit based on the mode equivalence method. The new discrete model is a one-dimensional equivalent multi-degree-of-freedom (MDOF) system characterized by a series of concentrated masses, springs and dashpots with a special configuration. The dynamic response of the equivalent MDOF system is analytically derived and the physical parameters are formulated in terms of modal properties. The equivalent MDOF system is verified through a comparison of amplification functions with the available theoretical solutions. The appropriate number of degrees of freedom (DOFs) in the equivalent MDOF system is estimated. A comparative study of the equivalent MDOF system with the existing discrete models is performed. It is shown that the proposed equivalent MDOF system exactly reproduces the natural frequencies and the hysteretic damping of soil deposits and provides more accurate results with fewer DOFs.
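The natural frequencies such a discrete model must reproduce have a textbook closed form for the simplest case, a uniform elastic layer over rigid bedrock: f_n = (2n − 1)·Vs/(4H). A sketch under that assumption (the layer thickness and shear-wave velocity below are hypothetical values, not from the paper):

```python
def natural_frequencies(vs, h, n_modes):
    """Natural frequencies (Hz) of a uniform soil layer of thickness h (m) and
    shear-wave velocity vs (m/s) over rigid bedrock:
        f_n = (2n - 1) * vs / (4 * h),  n = 1, 2, ...
    These are the target values an equivalent MDOF discretization should match."""
    return [(2 * n - 1) * vs / (4.0 * h) for n in range(1, n_modes + 1)]

# Example (assumed): 30 m deposit with Vs = 200 m/s; fundamental mode ~1.67 Hz
freqs = natural_frequencies(200.0, 30.0, 3)
```

Checking a candidate mass-spring-dashpot configuration against this sequence is one simple way to expose the frequency errors that the abstract attributes to conventional lumped parameter and finite element discretizations.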
Optimisation of an idealised primitive equation ocean model using stochastic parameterization
NASA Astrophysics Data System (ADS)
Cooper, Fenwick C.
2017-05-01
Using a simple parameterization, an idealised low resolution (biharmonic viscosity coefficient of 5 × 10^12 m^4 s^-1, 128 × 128 grid) primitive equation baroclinic ocean gyre model is optimised to have a much more accurate climatological mean, variance and response to forcing, in all model variables, with respect to a high resolution (biharmonic viscosity coefficient of 8 × 10^10 m^4 s^-1, 512 × 512 grid) equivalent. For example, the change in the climatological mean due to a small change in the boundary conditions is more accurate in the model with parameterization. Both the low resolution and high resolution models are strongly chaotic. We also find that long timescales in the model temperature auto-correlation at depth are controlled by the vertical temperature diffusion parameter and time mean vertical advection and are caused by short timescale random forcing near the surface. This paper extends earlier work that considered a shallow water barotropic gyre. Here the analysis is extended to a more turbulent multi-layer primitive equation model that includes temperature as a prognostic variable. The parameterization consists of a constant forcing, applied to the velocity and temperature equations at each grid point, which is optimised to obtain a model with an accurate climatological mean, and a linear stochastic forcing, that is optimised to also obtain an accurate climatological variance and 5-day lag auto-covariance. A linear relaxation (nudging) is not used. Conservation of energy and momentum is discussed in an appendix.
The risk equivalent of an exposure to, versus a dose of, radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, V.P.
The long-term potential carcinogenic effects of low-level exposure (LLE) are addressed. The principal point discussed is the linear, no-threshold dose-response curve. That the linear no-threshold, or proportional, relationship is widely used is seen in the way in which the values for cancer risk coefficients are expressed: in terms of new cases, per million persons exposed, per year, per unit exposure or dose. This implies that the underlying relationship is proportional, i.e., ''linear, without threshold''. 12 refs., 9 figs., 1 tab.
Downscaling Smooth Tomographic Models: Separating Intrinsic and Apparent Anisotropy
NASA Astrophysics Data System (ADS)
Bodin, Thomas; Capdeville, Yann; Romanowicz, Barbara
2016-04-01
In recent years, a number of tomographic models based on full waveform inversion have been published. Due to computational constraints, the fitted waveforms are low-pass filtered, which results in an inability to map features smaller than half the shortest wavelength. However, these tomographic images are not a simple spatial average of the true model, but rather an effective, apparent, or equivalent model that provides a similar 'long-wave' data fit. For example, it can be shown that a series of horizontal isotropic layers will be seen by a 'long wave' as a smooth anisotropic medium. In this way, the observed anisotropy in tomographic models is a combination of intrinsic anisotropy produced by lattice-preferred orientation (LPO) of minerals, and apparent anisotropy resulting from the inability to map discontinuities. Interpretation of observed anisotropy (e.g. in terms of mantle flow) therefore requires the separation of its intrinsic and apparent components. The "up-scaling" relations that link elastic properties of a rapidly varying medium to elastic properties of the effective medium as seen by long waves are strongly non-linear, and their inverse is highly non-unique. That is, a smooth homogenized effective model is equivalent to a large number of models with discontinuities. In the 1D case, Capdeville et al (GJI, 2013) recently showed that a tomographic model which results from the inversion of low-pass filtered waveforms is a homogenized model, i.e. the same as the model computed by upscaling the true model. Here we propose a stochastic method to sample the ensemble of layered models equivalent to a given tomographic profile. We use a transdimensional formulation where the number of layers is variable. Furthermore, each layer may be either isotropic (1 parameter) or intrinsically anisotropic (2 parameters). The parsimonious character of the Bayesian inversion gives preference to models with the least number of parameters (i.e.
least number of layers, and maximum number of isotropic layers). The non-uniqueness of the problem can be addressed by adding high frequency data such as receiver functions, able to map first order discontinuities. We show with synthetic tests that this method enables us to distinguish between intrinsic and apparent anisotropy in tomographic models, as layers with intrinsic anisotropy are only present when required by the data. A real data example is presented based on the latest global model produced at Berkeley.
Sreedevi, Gudapati; Prasad, Yenumula Gerard; Prabhakar, Mathyam; Rao, Gubbala Ramachandra; Vennila, Sengottaiyan; Venkateswarlu, Bandi
2013-01-01
Temperature-driven development and survival rates of the mealybug, Phenacoccus solenopsis Tinsley (Hemiptera: Pseudococcidae), were examined at nine constant temperatures (15, 20, 25, 27, 30, 32, 35 and 40°C) on hibiscus (Hibiscus rosa-sinensis L.). Crawlers successfully completed development to the adult stage between 15 and 35°C, although their survival was affected at low temperatures. Two linear and four nonlinear models were fitted to describe developmental rates of P. solenopsis as a function of temperature, and to estimate thermal constants and bioclimatic thresholds (lower, optimum and upper temperature thresholds for development: Tmin, Topt and Tmax, respectively). Estimated thresholds between the two linear models were statistically similar. Ikemoto and Takai's linear model permitted testing the equivalence of lower developmental thresholds for life stages of P. solenopsis reared on two hosts, hibiscus and cotton. Thermal constants required for completion of cumulative development of female and male nymphs and for the whole generation were significantly lower on hibiscus (222.2, 237.0, 308.6 degree-days, respectively) compared to cotton. Three nonlinear models performed better in describing the developmental rate for immature instars and cumulative life stages of female and male and for generation based on goodness-of-fit criteria. The simplified β type distribution function estimated Topt values closer to the observed maximum rates. The thermodynamic SSI model indicated no significant differences in the intrinsic optimum temperature estimates for different geographical populations of P. solenopsis. The estimated bioclimatic thresholds and the observed survival rates of P. solenopsis indicate the species to be high-temperature adaptive, and explain the field abundance of P. solenopsis on its host plants. PMID:24086597
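In the linear (degree-day) models referred to above, development rate is fitted as r(T) = a + b·T, from which the lower threshold Tmin = −a/b and the thermal constant K = 1/b (degree-days) follow. A sketch with hypothetical rate data, not the paper's measurements:

```python
def fit_linear_rate(temps, rates):
    """Ordinary least-squares fit of development rate r = a + b*T.
    Returns (t_min, k): lower developmental threshold t_min = -a/b (deg C)
    and thermal constant k = 1/b (degree-days)."""
    n = len(temps)
    mean_t = sum(temps) / n
    mean_r = sum(rates) / n
    b = (sum((t - mean_t) * (r - mean_r) for t, r in zip(temps, rates))
         / sum((t - mean_t) ** 2 for t in temps))
    a = mean_r - b * mean_t
    return -a / b, 1.0 / b

# Hypothetical rates (1/days) over the linear part of the thermal response
temps = [15.0, 20.0, 25.0, 30.0]
rates = [0.02, 0.04, 0.06, 0.08]
t_min, k = fit_linear_rate(temps, rates)   # t_min = 10.0 C, k = 250 degree-days
```

The nonlinear models mentioned in the abstract add curvature near Topt and Tmax, which a straight line cannot capture; the linear fit is only valid over the mid-range of temperatures.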
Towards a Comprehensive Model of Jet Noise Using an Acoustic Analogy and Steady RANS Solutions
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2013-01-01
An acoustic analogy is developed to predict the noise from jet flows. It contains two source models that independently predict the noise from turbulence and from shock wave shear layer interactions. The acoustic analogy is based on the Euler equations and separates the sources from propagation. Propagation effects are taken into account by calculating the vector Green's function of the linearized Euler equations. The sources are modeled following the work of Tam and Auriault, Morris and Boluriaan, and Morris and Miller. A statistical model of the two-point cross-correlation of the velocity fluctuations is used to describe the turbulence. The acoustic analogy attempts to take into account the correct scaling of the sources for a wide range of nozzle pressure and temperature ratios. It does not make assumptions regarding fine- or large-scale turbulent noise sources, self- or shear-noise, or convective amplification. The acoustic analogy is partially informed by three-dimensional steady Reynolds-Averaged Navier-Stokes solutions that include the nozzle geometry. The predictions are compared with experiments on jets operating from subsonic through supersonic conditions, both unheated and heated. Predictions generally capture the scaling of both mixing noise and BBSAN for the conditions examined, but some discrepancies remain that are due to the accuracy of the steady RANS turbulence model closure, the equivalent sources, and the use of a simplified vector Green's function solver of the linearized Euler equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzmina, L.K.
The research deals with different aspects of mathematical modelling and the analysis of complex dynamic non-linear systems arising from applied problems in mechanics (in particular gyrosystems, stabilization and orientation systems, and control systems of movable objects, including aviation and aerospace systems). The non-linearity, multi-connectedness and high dimensionality of the dynamical problems, which arise in the initial full statement, lead to the need to narrow the problem and decompose the full model while preserving its main properties and qualitative equivalence. The elaboration of regular methods for modelling problems in dynamics and the generalization of the reduction principle are the main aims of the investigations. Here, a uniform methodology based on Lyapunov's methods, as developed by N. G. Chetayev, is pursued. The systems under investigation are treated as belonging to the singularly perturbed class, i.e. as systems with singular parametric perturbations. This is a natural extension of the results of N. G. Chetayev and P. A. Kuzmin on parametric stability. The paper develops systematic procedures for constructing correct simplified (comparison) models, determines the validity conditions of the transition, derives the corresponding estimates, and obtains regular engineering-level algorithms. As applied to stabilization and orientation systems with gyroscopic controlling subsystems, these methods make it possible to build a hierarchical sequence of admissible simplified models and to determine the conditions of their correctness.
On the climate impacts from the volcanic and solar forcings
NASA Astrophysics Data System (ADS)
Varotsos, Costas A.; Lovejoy, Shaun
2016-04-01
The observed and the modelled estimations show that the main forcings on the atmosphere are of volcanic and solar origin, which, however, act in opposite ways: the former can be very strong but decreases at short time scales, whereas the latter increases with time scale. Consistent with this, the observed fluctuations in temperature increase at long scales (e.g. centennial and millennial), as the solar forcings do. The common practice is to reduce forcings to radiative equivalents assuming that their combination is linear. In order to clarify the validity of the linearity assumption and determine its range of validity, we systematically compare the statistical properties of solar-only, volcanic-only and combined solar and volcanic forcings over the range of time scales from one to 1000 years. Additionally, we attempt to investigate plausible reasons for the discrepancies observed between the measured and modeled anomalies of tropospheric temperatures in the tropics. For this purpose, we analyse tropospheric temperature anomalies for both the measured and modeled time series. The results obtained show that the measured temperature fluctuations reveal white noise behavior, while the modeled ones exhibit long-range power law correlations. We suggest that the persistent signal should be removed from the modeled values in order to achieve better agreement with observations. Keywords: Scaling, Nonlinear variability, Climate system, Solar radiation
Adiabatic dynamics of one-dimensional classical Hamiltonian dissipative systems
NASA Astrophysics Data System (ADS)
Pritula, G. M.; Petrenko, E. V.; Usatenko, O. V.
2018-02-01
A linearized plane pendulum with slowly varying mass and string length, whose suspension point moves at a slowly varying speed, is presented as an example of a simple 1D mechanical system described by the generalized harmonic oscillator equation, which is a basic model in the discussion of adiabatic dynamics and geometric phase. The expression for the pendulum geometric phase is obtained by three different methods. The pendulum is shown to be canonically equivalent to the damped harmonic oscillator. This supports the mathematical conclusion, not widely accepted in the physics community, of no difference between dissipative and Hamiltonian 1D systems.
Meulenbroek, Bernard; Ebert, Ute; Schäfer, Lothar
2005-11-04
The dynamics of ionization fronts that generate a conducting body are in the simplest approximation equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive exact uniformly propagating solutions of this problem in 2D and construct a single partial differential equation governing small perturbations of these solutions. For some parameter value, this equation can be solved analytically, which shows rigorously that the uniformly propagating solution is linearly convectively stable and that the asymptotic relaxation is universal and exponential in time.
Wind turbine sound pressure level calculations at dwellings.
Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits
2016-03-01
This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer supplied and measured wind turbine sound power levels were used to calculate outdoor SPL at 1238 dwellings using ISO 9613-2 [ISO (1996), Acoustics] and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
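A minimal sketch of the dominant terms in an ISO 9613-2-style point-source calculation: geometric divergence A_div = 20·log10(d/d0) + 11 (d0 = 1 m) plus atmospheric absorption. The flat absorption coefficient and the example levels are illustrative assumptions; the standard's full method adds ground effect, barriers, and band-by-band absorption.

```python
import math

def spl_at_distance(lw, d, alpha_atm=0.005):
    """Sound pressure level (dB) at distance d (m) from a point source with
    sound power level lw (dB re 1 pW), keeping only the divergence term
    A_div = 20*log10(d) + 11 and a flat atmospheric absorption alpha_atm
    in dB/m (assumed value)."""
    a_div = 20.0 * math.log10(d) + 11.0
    a_atm = alpha_atm * d
    return lw - a_div - a_atm

# Example (assumed): 105 dB turbine sound power level, dwelling at 500 m
spl = spl_at_distance(105.0, 500.0)
```

Doubling the distance costs 6 dB in the divergence term alone, which is why calculated SPL at dwellings is so sensitive to the turbine-receptor distance.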
Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Auriault, Laurent
1996-01-01
It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies or POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
A computational approach for coupled 1D and 2D/3D CFD modelling of pulse Tube cryocoolers
NASA Astrophysics Data System (ADS)
Fang, T.; Spoor, P. S.; Ghiaasiaan, S. M.
2017-12-01
The physics behind Stirling-type cryocoolers is complicated. One-dimensional (1D) simulation tools offer limited detail and accuracy, in particular for cryocoolers that have non-linear configurations. Multi-dimensional Computational Fluid Dynamic (CFD) methods are useful but are computationally expensive when simulating cryocooler systems in their entirety. In view of the fact that some components of a cryocooler, e.g., inertance tubes and compliance tanks, can be modelled as 1D components with little loss of critical information, a 1D-2D/3D coupled model was developed. Accordingly, one-dimensional-like components are represented by specifically developed routines. These routines can be coupled to CFD codes and provide boundary conditions for 2D/3D CFD simulations. The developed coupled model, while preserving sufficient flow field detail, is two orders of magnitude faster than equivalent 2D/3D CFD models. The predictions show good agreement with experimental data and the 2D/3D CFD model.
Application of neural models as controllers in mobile robot velocity control loop
NASA Astrophysics Data System (ADS)
Cerkala, Jakub; Jadlovska, Anna
2017-01-01
This paper presents the application of inverse neural models as controllers, compared with classical PI controllers, for the velocity tracking control task in a two-wheel, differentially driven mobile robot. The PI controller synthesis is based on a linear approximation of the actuators with equivalent load. In order to obtain relevant datasets for training the feed-forward multi-layer perceptron based neural network used as the neural model, a mathematical model of the mobile robot is used that combines its kinematic and dynamic properties such as chassis dimensions, center of gravity offset, friction and actuator parameters. Neural models are trained off-line to act as the inverse dynamics of the DC motors with their particular load, using data collected in a simulation experiment of motor input voltage step changes within a bounded operating area. The performances of PI controllers versus inverse neural models in the mobile robot's internal velocity control loops are demonstrated and compared in a simulation experiment of a navigation control task for line segment motion in the plane.
Bernard, A M; Burgot, J L
1981-12-01
The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.
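The tangent method discussed above locates the equivalence point as the intersection of straight lines fitted to the two branches of the thermogram; the systematic error the authors quantify arises because reaction reversibility curves the thermogram near that intersection. A sketch of the intersection step only, with hypothetical branch coefficients:

```python
def intersect_tangents(a1, b1, a2, b2):
    """Equivalence volume from two tangent lines T = a + b*V fitted to the
    pre- and post-equivalence branches of a thermometric titration curve:
    the intersection of a1 + b1*V and a2 + b2*V."""
    if b1 == b2:
        raise ValueError("parallel tangents have no intersection")
    return (a2 - a1) / (b1 - b2)

# Hypothetical branches: exothermic rise, then a flat excess-titrant branch
v_eq = intersect_tangents(25.0, 0.10, 26.0, 0.0)   # -> 10.0 mL
```

With an incomplete (reversible) reaction, the true inflection shifts away from this geometric intersection, which is the error the proposed equilibrium-constant relationship corrects for.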
Herskind, Carsten; Griebel, Jürgen; Kraus-Tiefenbacher, Uta; Wenz, Frederik
2008-12-01
Accelerated partial breast radiotherapy with low-energy photons from a miniature X-ray machine is undergoing a randomized clinical trial (Targeted Intra-operative Radiation Therapy [TARGIT]) in a selected subgroup of patients treated with breast-conserving surgery. The steep radial dose gradient implies reduced tumor cell control with increasing depth in the tumor bed. The purpose was to compare the expected risk of local recurrence in this nonuniform radiation field with that after conventional external beam radiotherapy. The relative biologic effectiveness of low-energy photons was modeled using the linear-quadratic formalism including repair of sublethal lesions during protracted irradiation. Doses of 50-kV X-rays (Intrabeam) were converted to equivalent fractionated doses, EQD2, as function of depth in the tumor bed. The probability of local control was estimated using a logistic dose-response relationship fitted to clinical data from fractionated radiotherapy. The model calculations show that, for a cohort of patients, the increase in local control in the high-dose region near the applicator partly compensates the reduction of local control at greater distances. Thus a "sphere of equivalence" exists within which the risk of recurrence is equal to that after external fractionated radiotherapy. The spatial distribution of recurrences inside this sphere will be different from that after conventional radiotherapy. A novel target volume concept is presented here. The incidence of recurrences arising in the tumor bed around the excised tumor will test the validity of this concept and the efficacy of the treatment. Recurrences elsewhere will have implications for the rationale of TARGIT.
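The depth-dependent EQD2 conversion described above rests on the standard linear-quadratic identity EQD2 = D·(d + α/β)/(2 + α/β). The sketch below omits the repair-of-sublethal-lesions term the abstract includes for protracted irradiation, and the α/β value and example dose are assumptions, not the paper's parameters.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta=10.0):
    """Equivalent dose in 2-Gy fractions from the linear-quadratic model:
        EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)
    with D the total dose (Gy) and d the dose per fraction (Gy).
    alpha/beta = 10 Gy is an assumed tumor value; repair during protracted
    low-dose-rate delivery (used in the abstract's model) is ignored here."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Example (assumed): a single 20 Gy intraoperative fraction near the applicator
d = eqd2(20.0, 20.0)   # -> 50.0 Gy EQD2
```

Because the single-fraction dose falls steeply with radial distance, EQD2 drops even faster than physical dose, which is what creates the "sphere of equivalence" trade-off the abstract describes.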
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is based on the longitudinal strength analysis, with head wave equivalent loads according to the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on 3D-hull offset line non-linearities, and involves three interlinked iterative cycles on floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the ship girder wave-induced loads are obtained. As a numerical case study, we consider a large LPG (liquefied petroleum gas) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
Electric field computation and measurements in the electroporation of inhomogeneous samples
NASA Astrophysics Data System (ADS)
Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta
2017-12-01
In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is helped by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, which is called electroporation, exploits the conductivity of the tissues: however, the tumor tissue could be characterized by inhomogeneous areas, eventually causing a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations considering a non-linear conductivity-field relationship are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model in view of identifying the equivalent resistance between pairs of electrodes.
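A toy illustration of the kind of non-linear conductivity-field relationship such finite-element models use: conductivity steps up smoothly once the local field exceeds the electroporation threshold, so current density rises super-linearly. The sigmoid form and every constant below are illustrative assumptions, not the authors' fitted law.

```python
import math

def sigma(e_field, s0=0.2, s1=0.6, e_th=40000.0, width=5000.0):
    """Field-dependent tissue conductivity (S/m): a smooth step from the
    baseline s0 to the electroporated value s1 around a threshold field
    e_th (V/m). Shape and constants are illustrative assumptions."""
    return s0 + (s1 - s0) / (1.0 + math.exp(-(e_field - e_th) / width))

# Current density J = sigma(E) * E grows super-linearly past the threshold:
j_low = sigma(10000.0) * 10000.0    # well below threshold
j_high = sigma(80000.0) * 80000.0   # well above threshold
```

In an inhomogeneous sample this feedback between field and conductivity redistributes current toward already-permeabilized regions, which is why the equivalent resistance between electrode pairs is a useful experimental check on the model.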
A spatial operator algebra for manipulator modeling and control
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Kreutz, K.; Jain, A.
1989-01-01
A spatial operator algebra for modeling the control and trajectory design of manipulators is discussed, with emphasis on its analytical formulation and implementation in the Ada programming language. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of the manipulator. Inversion is obtained using techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator as well as control and trajectory design algorithms. Implementable recursive algorithms can be immediately derived from the abstract operator expressions by inspection, thus greatly simplifying the transition from an abstract problem formulation and solution to the detailed mechanization of a specific algorithm.
Can we use the equivalent sphere model to approximate organ doses in space radiation environments?
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
For space radiation protection one often calculates the dose or dose equivalent in blood-forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose. One study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reason behind these observations and extend earlier studies by examining whether the BFO, the eyes, or the skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with water or aluminum shielding from 0 to 20 g/cm2. We then compare these exact doses with results from the equivalent sphere model and determine in which cases, and at what radius parameters, the equivalent sphere model is a reasonable approximation. Furthermore, we propose a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and that the model works marginally for the BFO but is unacceptable for the eyes or the skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eyes or the skin, but is unacceptable for the dose of the eyes or the skin.
The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eyes, and 3.5 to 5.6 cm for the skin, while the radius parameters for the BFO dose are between 10 and 13 cm. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than in the model with a single radius parameter. Our results thus show that the equivalent sphere model works better in galactic cosmic ray environments than in solar particle events. The model works well or marginally well for the BFO but usually does not work for the eyes or the skin. A modified model with two radius parameters works much better in approximating the dose and dose equivalent in the eyes or the skin.
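The equivalent-sphere idea above reduces an organ's shielding-thickness distribution to a single depth. A minimal numerical sketch in Python, using a made-up exponential dose-depth curve and Gaussian thickness weights rather than transport-code output:

```python
import numpy as np

def organ_dose(depths, weights, dose_depth):
    """Exact organ dose: the dose-depth curve averaged over the organ's
    shielding-thickness distribution (as taken from the CAM model)."""
    return np.sum(weights * dose_depth(depths)) / np.sum(weights)

def best_sphere_radius(depths, weights, dose_depth, radii):
    """Radius whose single-depth dose best matches the exact organ dose."""
    target = organ_dose(depths, weights, dose_depth)
    return radii[np.argmin(np.abs(dose_depth(radii) - target))]

dose = lambda t: np.exp(-t / 5.0)            # made-up SPE-like dose-depth curve
t = np.linspace(0.5, 20.0, 50)               # tissue thickness samples, cm
w = np.exp(-0.5 * ((t - 9.0) / 4.0) ** 2)    # hypothetical thickness distribution
r = best_sphere_radius(t, w, dose, np.linspace(0.5, 20.0, 2000))
```

The study's finding that the best radius depends on the shape of the dose-depth curve (SPE vs. GCR) corresponds here to `r` shifting whenever `dose` changes.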
Virasoro constraints and polynomial recursion for the linear Hodge integrals
NASA Astrophysics Data System (ADS)
Guo, Shuai; Wang, Gehao
2017-04-01
The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.
Multiphase model for transformation induced plasticity. Extended Leblond's model
NASA Astrophysics Data System (ADS)
Weisz-Patrault, Daniel
2017-09-01
Transformation induced plasticity (TRIP) classically refers to plastic strains observed during phase transitions that occur under mechanical loads (which can be lower than the yield stress). A theoretical approach based on homogenization is proposed to deal with multiphase changes and to extend the validity of the well-known and widely used model proposed by Leblond (1989). The approach is similar, but several product phases are considered instead of one, and several assumptions have been relaxed. Thus, besides the generalization to several phases, one can mention three main improvements in the calculation of the local equivalent plastic strain: the deviatoric part of the phase transformation is taken into account, both parent and product phases are elastic-plastic with linear isotropic hardening, and the applied stress is considered. Results show that the classical singularities arising in Leblond's model (corrected by ad hoc numerical functions or thresholding) are resolved in this contribution, except when the applied equivalent stress reaches the yield stress. Indeed, in this situation the parent phase is entirely plastic as soon as the phase transformation begins, and the same singularity as in Leblond's model arises. A physical explanation of the cutoff function is introduced in order to regularize the singularity. Furthermore, experiments extracted from the literature dealing with multiphase transitions and multiaxial loads are compared with the original Leblond model and the proposed extended version. For the extended version, very good agreement is observed without any fitting procedure (i.e., material parameters are extracted from other dedicated experiments), whereas for the original version the results are more qualitative.
Wissmann, F; Reginatto, M; Möller, T
2010-09-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
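The functional form described above (exponential in cut-off rigidity, linear in altitude and solar activity) can be sketched with a classical fit; the paper's Bayesian Monte Carlo treatment additionally yields full posterior distributions for the coefficients. All coefficient values below are synthetic:

```python
import numpy as np

def fit_dose_model(h, rc, s, y, b_grid):
    """Fit H = (a0 + a1*h + a2*s) * exp(-b*rc). For each trial b the model
    is linear in (a0, a1, a2), so we scan b on a grid and keep the best
    least-squares fit -- a cheap classical stand-in for the paper's
    Bayesian Monte Carlo treatment."""
    X0 = np.column_stack([np.ones_like(h), h, s])
    best = None
    for b in b_grid:
        X = X0 * np.exp(-b * rc)[:, None]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ a - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

# Synthetic, noise-free data with made-up coefficients
rng = np.random.default_rng(0)
h = rng.uniform(8.0, 12.0, 400)    # barometric altitude, km
rc = rng.uniform(0.0, 15.0, 400)   # vertical cut-off rigidity, GV
s = rng.uniform(0.0, 1.0, 400)     # solar-activity phase (normalised)
y = (4.0 + 0.5 * h - 1.0 * s) * np.exp(-0.12 * rc)
a_hat, b_hat = fit_dose_model(h, rc, s, y, np.linspace(0.05, 0.20, 151))
```

Unlike this point estimate, the Bayesian approach in the paper propagates the coefficient posteriors into reliable uncertainties on the evaluated dose rate itself.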
NASA Astrophysics Data System (ADS)
German, Brian Joseph
This research develops a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map the given subsonic flow into a canonical Laplacian flow with the same boundary conditions. The effect of the transformation is to distort both the immersed profile shape and the domain interior nonuniformly as a function of local flow properties. The method represents the full nonlinear generalization of the classical methods of Prandtl-Glauert and Karman-Tsien. Unlike the classical methods which are "corrections," this method gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by an observed analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the d'Alembert wave equation under Lorentz transformations. This analogy is well known in an operational sense, being leveraged widely in linear unsteady aerodynamics and acoustics, stemming largely from the work of Kussner. Whereas elements of the special theory can be invoked for compressibility effects that are linear and global in nature, the question posed in this work was whether other mathematical techniques from the realm of relativity theory could be used to similar advantage for effects that are nonlinear and local. This line of thought led to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. A gauge transformation is used to geometrize compressibility through the metric tensor of the underlying space to produce an equivalent incompressible flow that lives not on a plane but on a curved surface. 
In this sense, forces owing to compressibility can be ascribed to the geometry of space in much the same way that general relativity ascribes gravitational forces to the curvature of space-time. Although the analogy with general relativity is fruitful, it is important not to overstate the similarities between compressibility and the physics of gravity, as the interest for this thesis is primarily in the mathematical framework and not physical phenomenology or epistemology. The thesis presents the philosophy and theory for the transformation method followed by a numerical method for practical solutions of equivalent incompressible flows over arbitrary closed profiles. The numerical method employs an iterative approach involving the solution of the equivalent incompressible flow with a panel method, the calculation of the metric tensor for the gauge transformation, and the solution of the curvilinear coordinate mapping to the canonical flow with a finite difference approach for the elliptic boundary value problem. This method is demonstrated for non-circulatory flow over a circular cylinder and both symmetric and lifting flows over a NACA 0012 profile. Results are validated with accepted subcritical full potential test cases available in the literature. For chord-preserving mapping boundary conditions, the results indicate that the equivalent incompressible profiles thicken with Mach number and develop a leading edge droop with increased angle of attack. Two promising areas of potential applicability of the method have been identified. The first is in airfoil inverse design methods leveraging incompressible flow knowledge including heuristics and empirical data for the potential field effects on viscous phenomena such as boundary layer transition and separation. The second is in aerodynamic testing using distorted similarity-scaled models.
Performance assessment of a single-pixel compressive sensing imaging system
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Preece, Bradley L.
2016-05-01
Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array satisfying the Nyquist criterion had been used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld object target set was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolutions, and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift-invariant model. Finally, the limitations of CS modeling techniques are discussed.
Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A
2013-11-01
Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated by joint maximum a posteriori (MAP) estimation. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem: the joint MAP estimation becomes a minimization problem with a nonquadratic cost function. To solve it, a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and to the choice of materials. Accurate spectrum information about the source-detector system is also necessary; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.
Impact of Fractionation and Dose in a Multivariate Model for Radiation-Induced Chest Wall Pain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Din, Shaun U.; Williams, Eric L.; Jackson, Andrew
Purpose: To determine the role of patient/tumor characteristics, radiation dose, and fractionation using the linear-quadratic (LQ) model to predict stereotactic body radiation therapy-induced grade ≥2 chest wall pain (CWP2) in a larger series, and to develop clinically useful constraints for patients treated with different fraction numbers. Methods and Materials: A total of 316 lung tumors in 295 patients were treated with stereotactic body radiation therapy in 3 to 5 fractions to 39 to 60 Gy. Absolute dose-absolute volume chest wall (CW) histograms were acquired. The raw dose-volume histograms (α/β = ∞ Gy) were converted via the LQ model to equivalent doses in 2-Gy fractions (normalized total dose, NTD) with α/β from 0 to 25 Gy in 0.1-Gy steps. The Cox proportional hazards (CPH) model was used in univariate and multivariate models to identify and assess CWP2 risk for a given physical dose and NTD. Results: The median follow-up was 15.4 months, and the median time to development of CWP2 was 7.4 months. On a univariate CPH model, prescription dose, prescription dose per fraction, number of fractions, D83cc, distance of tumor to CW, and body mass index were all statistically significant for the development of CWP2. The linear-quadratic correction improved the CPH model significance over the physical dose. The best-fit α/β was 2.1 Gy, and the physical dose (α/β = ∞ Gy) was outside the upper 95% confidence limit. With α/β = 2.1 Gy, V_NTD99Gy was most significant, with median V_NTD99Gy = 31.5 cm³ (hazard ratio 3.87, P<.001). Conclusion: There were several predictive factors for the development of CWP2. The LQ-adjusted dose using the best-fit α/β = 2.1 Gy is a better predictor of CWP2 than the physical dose. To aid dosimetrists, we have calculated the physical dose equivalent corresponding to V_NTD99Gy = 31.5 cm³ for the 3- to 5-fraction groups.
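The LQ conversion of a physical dose to normalized total dose (equivalent dose in 2-Gy fractions) used above follows the standard formula NTD = D(d + α/β)/(2 + α/β), with d the dose per fraction. A quick sketch (the function name is illustrative):

```python
def eqd2(total_dose, n_fractions, alpha_beta):
    """Equivalent dose in 2-Gy fractions (normalized total dose) for a
    physical dose delivered in n_fractions, via the LQ model:
    NTD = D * (d + alpha/beta) / (2 + alpha/beta)."""
    d = total_dose / n_fractions         # dose per fraction, Gy
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# 50 Gy in 5 fractions with the study's best-fit alpha/beta = 2.1 Gy
ntd = eqd2(50.0, 5, 2.1)
# A conventionally fractionated course (2 Gy/fraction) is its own NTD
same = eqd2(60.0, 30, 2.1)
```

Sweeping `alpha_beta` over 0 to 25 Gy in 0.1-Gy steps, as the study does, and refitting the CPH model at each value is how the best-fit α/β = 2.1 Gy was selected.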
NASA Technical Reports Server (NTRS)
Inbody, Michael Andrew
1993-01-01
The testing and development of existing global and detailed chemical kinetic models for soot formation requires measurements of soot and radical concentrations in flames. A clearer understanding of soot particle inception relies upon the evaluation and refinement of these models in comparison with such measurements. We present measurements of soot formation and hydroxyl (OH) concentration in sequences of flat premixed atmospheric-pressure C2H4/O2/N2 flames and 80-torr C2H4/O2 flames for a unique range of equivalence ratios bracketing the critical equivalence ratio (phi_c) and extending to more heavily sooting conditions. Soot volume fraction and number density profiles are measured using a laser scattering-extinction apparatus capable of resolving 0.1 percent absorption. Hydroxyl number density profiles are measured using laser-induced fluorescence (LIF) with broadband detection. Temperature profiles are obtained from Rayleigh scattering measurements. The relative volume fraction and number density profiles of the richer sooting flames exhibit the expected trends in soot formation. In visibly sooting flames near phi_c, particle scattering and extinction are not detected, but an LIF signal due to polycyclic aromatic hydrocarbons (PAH's) can be detected upon excitation with an argon-ion laser. A linear correlation between the argon-ion LIF and the soot volume fraction implies a common mechanistic source for the growth of PAH's and soot particles. The peak OH number density in both the atmospheric and 80-torr flames declines with increasing equivalence ratio, but the profile shape remains unchanged in the transition to sooting, implying that the primary reaction pathways for OH remain unchanged over this transition. Chemical kinetic modeling is demonstrated by comparing predictions using two current reaction mechanisms with the atmospheric flame data. The measured and predicted OH number density profiles show good agreement.
The predicted benzene number density profiles correlate with the measured trends in soot formation, although anomalies in the benzene profiles for the richer and cooler sooting flames suggest a need for the inclusion of benzene oxidation reactions.
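The equivalence ratio that organizes the flame sequence above is the fuel/oxidizer ratio normalized by its stoichiometric value. A minimal sketch for ethylene/oxygen flames (mole-based, neglecting the N2 diluent):

```python
def equivalence_ratio(n_fuel, n_o2, o2_stoich=3.0):
    """Equivalence ratio phi = (fuel/O2) / (fuel/O2)_stoichiometric.
    For ethylene, C2H4 + 3 O2 -> 2 CO2 + 2 H2O, so 3 mol O2 per mol fuel."""
    return (n_fuel / n_o2) * o2_stoich

phi = equivalence_ratio(1.0, 2.0)   # fuel-rich mixture
```

Flames with phi above the critical value phi_c are the ones in which soot particles incept, which is why the sequence is chosen to bracket phi_c.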
An adaptive control scheme for a flexible manipulator
NASA Technical Reports Server (NTRS)
Yang, T. C.; Yang, J. C. S.; Kudva, P.
1987-01-01
The problem of controlling a single link flexible manipulator is considered. A self-tuning adaptive control scheme is proposed which consists of a least squares on-line parameter identification of an equivalent linear model followed by a tuning of the gains of a pole placement controller using the parameter estimates. Since the initial parameter values for this model are assumed unknown, the use of arbitrarily chosen initial parameter estimates in the adaptive controller would result in undesirable transient effects. Hence, the initial stage control is carried out with a PID controller. Once the identified parameters have converged, control is transferred to the adaptive controller. Naturally, the relevant issues in this scheme are tests for parameter convergence and minimization of overshoots during control switch-over. To demonstrate the effectiveness of the proposed scheme, simulation results are presented with an analytical nonlinear dynamic model of a single link flexible manipulator.
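The on-line least-squares identification step can be sketched with a standard recursive-least-squares (RLS) update on a synthetic second-order linear model; the model order, forgetting factor, and parameter values here are illustrative, not the paper's:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive-least-squares step for on-line identification of an
    equivalent linear model y[k] = phi[k] . theta (forgetting factor lam)."""
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)
    theta = theta + gain * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(gain, Pphi)) / lam       # covariance update
    return theta, P

# Identify a second-order ARX model y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]
rng = np.random.default_rng(2)
true_theta = np.array([1.5, -0.7, 0.5])        # stable, made-up dynamics
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = true_theta @ np.array([y[k-1], y[k-2], u[k-1]])

theta, P = np.zeros(3), 1e3 * np.eye(3)
for k in range(2, 500):
    theta, P = rls_update(theta, P, np.array([y[k-1], y[k-2], u[k-1]]), y[k])
```

In the scheme described above, control would be handed from the PID loop to the pole-placement controller only after `theta` has converged, which avoids the transient effects of arbitrary initial estimates.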
Application of thermal model for pan evaporation to the hydrology of a defined medium, the sponge
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Artley, J. A. (Principal Investigator)
1981-01-01
A technique is presented which estimates pan evaporation from the commonly observed values of daily maximum and minimum air temperatures. These two variables are transformed to saturation vapor pressure equivalents, which are used in a simple linear regression model. The model provides reasonably accurate estimates of pan evaporation rates over a large geographic area. The derived evaporation algorithm is combined with precipitation to obtain a simple moisture variable. A hypothetical medium with a capacity of 8 inches of water is initialized at 4 inches. The medium behaves like a sponge: it absorbs all incident precipitation, with runoff or drainage occurring only after it is saturated. Water is lost from this simple system through evaporation just as from a Class A pan, but at a rate proportional to its degree of saturation. The content of the sponge is a moisture index calculated from only the maximum and minimum temperatures and precipitation.
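A compact sketch of the sponge water balance in Python. The Tetens formula stands in for the paper's saturation-vapor-pressure transform, and the regression coefficients are placeholders, not the fitted values:

```python
import math

def svp(t_celsius):
    """Saturation vapour pressure (kPa), Tetens formula."""
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

def pan_evap(t_max, t_min, b0=-0.05, b1=0.04):
    """Daily pan evaporation (inches) from a linear regression on the
    max/min saturation-vapour-pressure equivalents; b0, b1 are placeholders."""
    return max(0.0, b0 + b1 * (svp(t_max) + svp(t_min)))

def sponge_step(content, precip, t_max, t_min, capacity=8.0):
    """One day of the sponge balance: absorb rain (excess runs off once
    saturated), then evaporate in proportion to the degree of saturation."""
    content = min(capacity, content + precip)
    content -= pan_evap(t_max, t_min) * (content / capacity)
    return max(0.0, content)

content = 4.0    # initialise at half capacity, as in the paper
for precip, tmx, tmn in [(0.0, 32.0, 18.0), (1.2, 25.0, 15.0), (0.0, 30.0, 17.0)]:
    content = sponge_step(content, precip, tmx, tmn)
```

The running `content` is the moisture index: it requires only daily maximum and minimum temperatures and precipitation as inputs.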
Fractal ladder models and power law wave equations
Kelly, James F.; McGough, Robert J.
2009-01-01
The ultrasonic attenuation coefficient in mammalian tissue is approximated by a frequency-dependent power law for frequencies less than 100 MHz. To describe this power law behavior in soft tissue, a hierarchical fractal network model is proposed. The viscoelastic and self-similar properties of tissue are captured by a constitutive equation based on a lumped parameter infinite-ladder topology involving alternating springs and dashpots. In the low-frequency limit, this ladder network yields a stress-strain constitutive equation with a time-fractional derivative. By combining this constitutive equation with linearized conservation principles and an adiabatic equation of state, a fractional partial differential equation that describes power law attenuation is derived. The resulting attenuation coefficient is a power law with exponent ranging between 1 and 2, while the phase velocity is in agreement with the Kramers–Kronig relations. The fractal ladder model is compared to published attenuation coefficient data, thus providing equivalent lumped parameters. PMID:19813816
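The power-law attenuation that the ladder model reproduces can be written directly; the coefficient below is a typical soft-tissue magnitude used only for illustration:

```python
import numpy as np

# Power-law attenuation alpha(f) = a0 * f**y with exponent 1 < y < 2, as
# produced by the fractal ladder model. The coefficient is a typical
# soft-tissue value (0.5 dB/(cm MHz)) converted to nepers; both numbers
# are illustrative, not fits from the paper.
a0 = 0.5 / 8.686          # Np / (cm * MHz**y)
y = 1.1

def amplitude(f_mhz, depth_cm, p0=1.0):
    """Plane-wave amplitude after depth_cm of tissue at frequency f_mhz."""
    return p0 * np.exp(-a0 * f_mhz ** y * depth_cm)
```

Because y is not an even integer, no finite-order ordinary wave equation produces this decay, which is why the constitutive equation acquires a time-fractional derivative.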
Cardenas, Carlos E; Nitsch, Paige L; Kudchadker, Rajat J; Howell, Rebecca M; Kry, Stephen F
2016-07-08
Out-of-field doses from radiotherapy can cause harmful side effects or eventually lead to secondary cancers. Scattered doses outside the applicator field, neutron source strength values, and neutron dose equivalents have not been broadly investigated for high-energy electron beams. To better understand the extent of these exposures, we measured out-of-field dose characteristics of electron applicators for high-energy electron beams on two Varian 21iXs, a Varian TrueBeam, and an Elekta Versa HD operating at various energy levels. Out-of-field dose profiles and percent depth-dose curves were measured in a Wellhofer water phantom using a Farmer ion chamber. Neutron dose was assessed using a combination of moderator buckets and gold activation foils placed on the treatment couch at various locations in the patient plane on both the Varian 21iX and Elekta Versa HD linear accelerators. Our findings showed that out-of-field electron doses were highest for the highest electron energies. These doses typically decreased with increasing distance from the field edge but showed substantial increases over some distance ranges. The Elekta linear accelerator had higher electron out-of-field doses than the Varian units examined, and the Elekta dose profiles exhibited a second dose peak about 20 to 30 cm from central-axis, which was found to be higher than typical out-of-field doses from photon beams. Electron doses decreased sharply with depth before becoming nearly constant; the dose was found to decrease to a depth of approximately E(MeV)/4 in cm. With respect to neutron dosimetry, Q values and neutron dose equivalents increased with electron beam energy. Neutron contamination from electron beams was found to be much lower than that from photon beams. Even though the neutron dose equivalent for electron beams represented a small portion of neutron doses observed under photon beams, neutron doses from electron beams may need to be considered for special cases.
Sutherland, John C.
2017-04-15
Linear dichroism provides information on the orientation of chromophores that are part of, or bound to, an orientable molecule such as DNA. For molecular alignment induced by hydrodynamic shear, the principal axes orthogonal to the direction of alignment are not equivalent. Thus, the magnitude of the flow-induced change in absorption for light polarized parallel to the direction of flow can be more than a factor of two greater than the corresponding change for light polarized perpendicular to both that direction and the shear axis. The ratio of the two flow-induced changes in absorption, the dichroic increment ratio, is characterized using the orthogonal orientation model, which assumes that each absorbing unit is aligned parallel to one of the principal axes of the apparatus. The absorption of the alignable molecules is characterized by components parallel and perpendicular to the orientable axis of the molecule. The dichroic increment ratio indicates that for the alignment of DNA in rectangular flow cells, average alignment is not uniaxial, but for higher shear, as produced in a Couette cell, it can be. The results from the simple model are identical to those of tensor models for typical experimental configurations. Approaches for measuring the dichroic increment ratio with modern dichrometers are discussed.
Dust in a compact, cold, high-velocity cloud: A new approach to removing foreground emission
NASA Astrophysics Data System (ADS)
Lenz, D.; Flöer, L.; Kerp, J.
2016-02-01
Context. Because isolated high-velocity clouds (HVCs) are found at great distances from the Galactic radiation field and because they have subsolar metallicities, there have been no detections of dust in these structures. A key problem in this search is the removal of foreground dust emission. Aims: Using the Effelsberg-Bonn H I Survey and the Planck far-infrared data, we investigate a bright, cold, and clumpy HVC. This cloud apparently undergoes an interaction with the ambient medium and thus has great potential to form dust. Methods: To remove the local foreground dust emission we used a regularised, generalised linear model and we show the advantages of this approach with respect to other methods. To estimate the dust emissivity of the HVC, we set up a simple Bayesian model with mildly informative priors to perform the line fit instead of an ordinary linear least-squares approach. Results: We find that the foreground can be modelled accurately and robustly with our approach and is limited mostly by the cosmic infrared background. Despite this improvement, we did not detect any significant dust emission from this promising HVC. The 3σ-equivalent upper limit to the dust emissivity is an order of magnitude below the typical values for the Galactic interstellar medium.
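Foreground removal via a regularised generalised linear model amounts to regressing the far-infrared map on local-gas template maps with a penalty term; a toy ridge-regression version on synthetic data (template shapes and emissivity values are hypothetical):

```python
import numpy as np

def ridge_foreground(templates, fir_map, alpha=0.1):
    """Fit the FIR map as a regularised linear combination of local-gas
    template maps (columns of `templates`); return the emissivity
    coefficients and the foreground-subtracted residual map in which
    any HVC dust emission would appear."""
    n = templates.shape[1]
    coef = np.linalg.solve(templates.T @ templates + alpha * np.eye(n),
                           templates.T @ fir_map)
    return coef, fir_map - templates @ coef

rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 4))              # 4 hypothetical HI channel maps
true_coef = np.array([0.5, 0.2, 0.0, 0.1])      # dust emissivity per channel
y = X @ true_coef + 0.01 * rng.standard_normal(1000)  # CIB-like noise floor
coef, resid = ridge_foreground(X, y)
```

As in the paper, the scatter of `resid` sets the sensitivity floor: an upper limit on the HVC dust emissivity follows from regressing the residual against the HVC column density.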
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.; Ochs, Harry T., III
1988-01-01
The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They are all improved by significant factors of tens to hundreds relative to existing limits. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.
Geometric model of pseudo-distance measurement in satellite location systems
NASA Astrophysics Data System (ADS)
Panchuk, K. L.; Lyashkov, A. A.; Lyubchinov, E. V.
2018-04-01
The existing mathematical model of pseudo-distance measurement in satellite location systems does not provide a precise solution of the problem, but rather an approximate one. This inaccuracy, together with bias in the measurement of the distance from satellite to receiver, results in position errors of several meters; hence the relevance of refining the current mathematical model. The solution of the system of quadratic equations used in the current mathematical model is based on linearization. The objective of the paper is refinement of the current mathematical model and derivation of an analytical solution of the system of equations on its basis. In order to attain the objective, a geometric analysis is performed and a geometric interpretation of the equations is given. As a result, an equivalent system of equations, which admits an analytical solution, is derived. An example of the analytical solution's implementation is presented. Application of the analytical solution algorithm to the problem of pseudo-distance measurement in satellite location systems makes it possible to improve the accuracy of such measurements.
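The linearized approach the paper refines can be sketched as an iterative (Gauss-Newton) least-squares solution of the pseudorange equations. The satellite coordinates, units, and receiver state below are hypothetical and purely illustrative; the paper's analytical solution is not reproduced here.

```python
import numpy as np

def solve_pseudoranges(sats, rho, x0=None, iters=10):
    """Linearized (Gauss-Newton) solution of the pseudorange equations
    rho_i = |x - s_i| + b for receiver position x and clock bias b.
    This is the conventional approximate approach the paper refines."""
    x = np.zeros(4) if x0 is None else np.asarray(x0, float)  # [x, y, z, b]
    for _ in range(iters):
        d = np.linalg.norm(sats - x[:3], axis=1)           # geometric ranges
        H = np.hstack([(x[:3] - sats) / d[:, None],         # unit LOS vectors
                       np.ones((len(sats), 1))])            # clock-bias column
        dx, *_ = np.linalg.lstsq(H, rho - (d + x[3]), rcond=None)
        x += dx
    return x

# Hypothetical satellite geometry (units arbitrary) and a known receiver state.
sats = np.array([[15600., 7540., 20140.],
                 [18760., 2750., 18610.],
                 [17610., 14630., 13480.],
                 [19170., 610., 18390.]])
true_x = np.array([1000., 2000., 3000., 10.])
rho = np.linalg.norm(sats - true_x[:3], axis=1) + true_x[3]
est = solve_pseudoranges(sats, rho)
```

On noiseless synthetic pseudoranges the iteration recovers the assumed receiver state; the bias and residual linearization error on real data are what motivate the refined model.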
Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems of solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
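The "equivalent real system" mentioned in the closing remarks can be illustrated directly: splitting Ax = b into real and imaginary parts yields a real system of twice the dimension. This is a minimal NumPy sketch; the matrix is a random complex symmetric example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# A complex *symmetric* (A = A^T, not Hermitian) matrix, as in the abstract.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M + M.T
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Equivalent real formulation: (Ar + i*Ai)(xr + i*xi) = br + i*bi
# becomes a 2n-by-2n real block system for [xr; xi].
Ar, Ai = A.real, A.imag
R = np.block([[Ar, -Ai],
              [Ai,  Ar]])
rhs = np.concatenate([b.real, b.imag])
y = np.linalg.solve(R, rhs)
x_real_form = y[:n] + 1j * y[n:]

x_direct = np.linalg.solve(A, b)
```

Both routes give the same x; the paper's point is that the doubled-dimension real system is generally less attractive than iterating directly on the complex symmetric matrix.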
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load, and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
Thomas, Megan L.A.; Fitzpatrick, Denis; McCreery, Ryan; Janky, Kristen L.
2017-01-01
Background Cervical and ocular Vestibular Evoked Myogenic Potentials (VEMPs) have become common clinical vestibular assessments. However, VEMP testing requires high intensity stimuli, raising concerns regarding safety with children, where sound pressure levels may be higher due to their smaller ear canal volumes. Purpose The purpose of this study was to estimate the range of peak-to-peak equivalent sound pressure levels (peSPLs) in child and adult ears in response to high intensity stimuli (i.e., 100 dB normal hearing level (nHL)) commonly used for VEMP testing and make a determination of whether acoustic stimuli levels with VEMP testing are safe for use in children. Research Design Prospective Experimental. Study Sample Ten children (4–6 years) and ten young adults (24 – 35 years) with normal hearing sensitivity and middle ear function participated in the study. Data Collection and Analysis Probe microphone peSPL measurements of clicks and 500 Hz tonebursts (TBs) were recorded in tubes of small, medium, and large diameter, and in a Brüel & Kjær Ear Simulator Type 4157 to assess for linearity of the stimulus at high levels. The different diameter tubes were used to approximate the range of cross-sectional areas in infant, child, and adult ears, respectively. Equivalent ear canal volume and peSPL measurements were then recorded in child and adult ears. Lower intensity levels were used in the participant’s ears to limit exposure to high intensity sound. The peSPL measurements in participant ears were extrapolated using predictions from linear mixed models to determine if equivalent ear canal volume significantly contributed to overall peSPL and to estimate the mean and 95% confidence intervals of peSPLs in child and adult ears when high intensity stimulus levels (100 dB nHL) are used for VEMP testing without exposing subjects to high-intensity stimuli. 
Results Measurements from the coupler and tubes suggested: 1) each stimulus was linear, 2) there were no distortions or non-linearities at high levels, and 3) peSPL increased with decreased tube diameter. Measurements in participant ears suggested: 1) peSPL was approximately 3 dB larger in child compared to adult ears, and 2) peSPL was larger in response to clicks compared to 500 Hz TBs. The model predicted the following 95% confidence interval for a 100 dB nHL click: 127–136.5 dB peSPL in adult ears and 128.7–138.2 dB peSPL in child ears. The model predicted the following 95% confidence interval for a 100 dB nHL 500 Hz TB stimulus: 122.2–128.2 dB peSPL in adult ears and 124.8–130.8 dB peSPL in child ears. Conclusions Our findings suggest that 1) when completing VEMP testing, the stimulus is approximately 3 dB higher in a child’s ear, 2) a 500 Hz TB is recommended over a click as it has lower peSPL compared to the click, and 3) both duration and intensity should be considered when choosing VEMP stimuli. Calculating the total sound energy exposure for the chosen stimulus is recommended, as it accounts for both duration and intensity. When using this calculation for children, consider adding 3 dB to the stimulus level. PMID:28534730
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with folded concave penalties. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
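For reference, the SCAD penalty of Fan and Li (2001) named in the abstract has the standard closed form sketched below; λ and the conventional a = 3.7 are illustrative. The MIPGO reformulation itself is not reproduced here.

```python
import numpy as np

def scad(t, lam, a=3.7):
    """SCAD penalty: linear near zero, concave quadratic transition,
    then constant (a folded concave penalty). Standard closed form;
    a = 3.7 is the conventional default, not a value from the paper."""
    t = np.abs(np.asarray(t, float))
    return np.where(
        t <= lam,
        lam * t,                                            # linear region
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),  # concave region
            lam**2 * (a + 1) / 2,                           # flat cap
        ),
    )
```

The flat cap is what makes the penalty nonconvex (and the resulting estimator nearly unbiased for large coefficients), and is exactly the feature that defeats convex solvers and motivates the mixed integer reformulation.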
Testing the Einstein's equivalence principle with polarized gamma-ray bursts
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Zhang, Yue-Yang; Liao, Bin; Lei, Wei-Hua
2017-07-01
Einstein's equivalence principle can be tested by using parametrized post-Newtonian parameters, of which the parameter γ has been constrained by comparing the arrival times of photons with different energies. It has been constrained by a variety of astronomical transient events, such as gamma-ray bursts (GRBs), fast radio bursts, and pulses of pulsars, with the most stringent constraint being Δγ ≲ 10^-15. In this Letter, we consider the arrival times of light with different circular polarizations. A linearly polarized signal is the combination of two circularly polarized components. If the arrival time difference between the two circularly polarized components is too large, their combination may lose its linear polarization. We constrain the value of Δγp < 1.6 × 10^-27 by the measurement of the polarization of GRB 110721A, which is the most stringent constraint ever achieved.
A canonical form of the equation of motion of linear dynamical systems
NASA Astrophysics Data System (ADS)
Kawano, Daniel T.; Salsa, Rubens Goncalves; Ma, Fai; Morzfeld, Matthias
2018-03-01
The equation of motion of a discrete linear system has the form of a second-order ordinary differential equation with three real and square coefficient matrices. It is shown that, for almost all linear systems, such an equation can always be converted by an invertible transformation into a canonical form specified by two diagonal coefficient matrices associated with the generalized acceleration and displacement. This canonical form of the equation of motion is unique up to an equivalence class for non-defective systems. As an important by-product, a damped linear system that possesses three symmetric and positive definite coefficients can always be recast as an undamped and decoupled system.
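The by-product stated at the end, that a system with symmetric positive definite coefficients can be decoupled, can be illustrated for the undamped case by a congruence transform built from a Cholesky factor and a symmetric eigendecomposition. The 3-DOF mass and stiffness matrices below are hypothetical.

```python
import numpy as np

# Hypothetical symmetric positive definite mass/stiffness pair (3-DOF chain).
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4., -2.,  0.],
              [-2.,  4., -2.],
              [ 0., -2.,  2.]])

# Decouple M q'' + K q = 0: with L = chol(M), the symmetric eigenproblem for
# L^{-1} K L^{-T} yields a transform T that diagonalizes both coefficients.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
w2, V = np.linalg.eigh(Linv @ K @ Linv.T)  # squared natural frequencies, modes
T = Linv.T @ V                             # coordinate change q = T p

M_diag = T.T @ M @ T                       # -> identity
K_diag = T.T @ K @ T                       # -> diag(w2)
```

This is the classical special case; the paper's contribution is the extension to damped systems with three (generally non-commuting) coefficient matrices, which requires an equivalence-class argument rather than a single congruence.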
NASA Astrophysics Data System (ADS)
Wu, Y.; Xu, Z.; Li, Z. H.; Tang, C. X.
2012-07-01
In the intermediate cavities of a relativistic klystron amplifier (RKA) driven by an intense relativistic electron beam, the equivalent circuit model, which is widely adopted to investigate the interaction between the bunched beam and the intermediate cavity in conventional klystron design, is invalid due to the high gap voltage and the nonlinear beam loading in an RKA. According to the Maxwell equations and the Lorentz equation, self-consistent equations for beam-wave interaction in the intermediate cavity are introduced to study the nonlinear interaction between the bunched beam and the intermediate cavity in an RKA. Based on these equations, the effects of the modulation depth and modulation frequency of the beam on the gap voltage amplitude and its phase are obtained. It is shown that the gap voltage is significantly lower than that estimated by the equivalent circuit model when the beam modulation is high, and the bandwidth becomes wider as the beam modulation depth increases. An S-band high-gain relativistic klystron amplifier was designed based on this result, and the corresponding experiment was carried out on the linear transformer driver accelerator. The peak output power reached 1.2 GW with an efficiency of 28.6% and a gain of 46 dB.
Design and Analysis of Tubular Permanent Magnet Linear Wave Generator
Si, Jikai; Feng, Haichao; Su, Peng; Zhang, Lufeng
2014-01-01
Due to the lack of a mature design program for the tubular permanent magnet linear wave generator (TPMLWG) and the poor sinusoidal characteristics of the air gap flux density of the traditional surface-mounted TPMLWG, a design method and a new secondary structure for the TPMLWG are proposed. An equivalent mathematical model of the TPMLWG is established that adopts the transformation relationship between the linear velocity of a permanent magnet rotary generator and the operating speed of the TPMLWG to determine the structure parameters of the TPMLWG. The new secondary structure of the TPMLWG contains surface-mounted permanent magnets and interior permanent magnets, which form a series-parallel hybrid magnetic circuit, and their structure parameters are designed to obtain the optimum pole-arc coefficient. The electromagnetic field and temperature field of the TPMLWG are analyzed using the finite element method. It can be concluded that the sinusoidal characteristics of the air gap flux density of the new secondary structure TPMLWG are improved, the cogging force and mechanical vibration during operation are reduced, and the steady-state temperature rise of the generator meets the design requirements when the new secondary structure is adopted. PMID:25050388
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation of the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and the ? attenuation, linear matrix inequality conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input for the linear case is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
Experimental Validation of a Theory for a Variable Resonant Frequency Wave Energy Converter (VRFWEC)
NASA Astrophysics Data System (ADS)
Park, Minok; Virey, Louis; Chen, Zhongfei; Mäkiharju, Simo
2016-11-01
A point absorber wave energy converter designed to adapt to changes in wave frequency and to be highly resilient to harsh conditions was tested in a wave tank for wave periods from 0.8 s to 2.5 s. The VRFWEC consists of a closed cylindrical floater containing an internal mass that moves vertically and is connected to the floater through a spring system. The internal mass and equivalent spring constant are adjustable and make it possible to match the resonance frequency of the device to the exciting wave frequency, hence optimizing the performance. In a full-scale device, a permanent magnet linear generator (PMLG) will convert the relative motion between the internal mass and the floater into electricity. For a PMLG as described in Yeung et al. (OMAE2012), the electromagnetic force was shown to produce predominantly linear damping. Thus, for the present preliminary study it was possible to replace the generator with a linear damper. While the full-scale device with a 2.2 m diameter is expected to generate O(50 kW), the prototype could generate O(1 W). For the initial experiments the prototype was restricted to heave motion and the data were compared to predictions from a newly developed theoretical model (Chen, 2016).
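The tuning idea, matching the internal oscillator's resonance to the incident wave, reduces in an undamped sketch to choosing the spring constant k = m·ω² with ω = 2π/T. The mass and wave period below are illustrative, not the prototype's values.

```python
import math

def spring_constant_for_period(mass_kg, wave_period_s):
    """Equivalent spring constant that tunes an internal mass-spring
    oscillator to the incident wave period (undamped estimate):
    resonance when k = m * (2*pi/T)**2."""
    omega = 2.0 * math.pi / wave_period_s   # wave angular frequency, rad/s
    return mass_kg * omega**2

# Hypothetical 100 kg internal mass tuned to 2 s waves.
k = spring_constant_for_period(100.0, 2.0)
```

In practice the added mass and the (dominantly linear) generator damping shift the optimum away from this undamped value, which is what the theoretical model of Chen (2016) accounts for.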
A survey of the state of the art and focused research in range systems, task 2
NASA Technical Reports Server (NTRS)
Yao, K.
1986-01-01
Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Such optimized subsystems require full-precision multiplications in the TDL. In order to reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions, which are equivalent to multiplications by powers of two. Since, in general, the obvious operation of rounding the infinite-precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered. Detailed explanations of the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions are given. A specific demonstration of this methodology in the design of a linear data equalizer and its implementation in assembly language on an 8080 microprocessor with a 12-bit A/D converter is reported. This simple microprocessor implementation with optimized TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is fully applicable to many other microprocessor-controlled information processing systems.
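The "obvious operation" the abstract warns against, rounding each tap coefficient to the nearest power of two, can be sketched as follows. As the abstract notes, this naive per-coefficient rounding is generally suboptimal; finding the optimum powers-of-two tap set requires a joint search such as branch-and-bound, which is not reproduced here.

```python
import math

def nearest_power_of_two(c):
    """Round a tap coefficient to the nearest signed power of two
    (arithmetically nearest; ties go to the smaller magnitude), so the
    multiply becomes a single shift. Zero maps to zero."""
    if c == 0.0:
        return 0.0
    sign = 1.0 if c > 0 else -1.0
    lo = 2.0 ** math.floor(math.log2(abs(c)))   # power of two just below |c|
    hi = 2.0 * lo                               # power of two just above |c|
    mag = lo if abs(c) - lo <= hi - abs(c) else hi
    return sign * mag
```

Applied tap-by-tap to an equalizer, this quantization perturbs the frequency response unevenly, which is why the paper optimizes all exponents jointly instead.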
Characterization of heat transfer in nutrient materials, part 2
NASA Technical Reports Server (NTRS)
Cox, J. E.; Bannerot, R. B.; Chen, C. K.; Witte, L. C.
1973-01-01
A thermal model is analyzed that takes into account phase changes in the nutrient material. The behavior of fluids in low gravity environments is discussed along with low gravity heat transfer. Thermal contact resistance in the Skylab food heater is analyzed. The original model is modified to include: equivalent conductance due to radiation, radial equivalent conductance, wall equivalent conductance, and equivalent heat capacity. A constant wall-temperature model is presented.
The generation of gravitational waves. 2: The post-linear formalism revisited
NASA Technical Reports Server (NTRS)
Crowley, R. J.; Thorne, K. S.
1975-01-01
Two different versions of the Green's function for the scalar wave equation in weakly curved spacetime (one due to DeWitt and DeWitt, the other to Thorne and Kovacs) are compared and contrasted; and their mathematical equivalence is demonstrated. The DeWitt-DeWitt Green's function is used to construct several alternative versions of the Thorne-Kovacs post-linear formalism for gravitational-wave generation. Finally it is shown that, in calculations of gravitational bremsstrahlung radiation, some of our versions of the post-linear formalism allow one to treat the interacting bodies as point masses, while others do not.
Consistent searches for SMEFT effects in non-resonant dijet events
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alte, Stefan; Konig, Matthias; Shepherd, William
Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order $1/\Lambda^2$ and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.
2013-01-01
Background In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests, which raises a multiple testing problem and requires correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding corresponds to the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the generalized linear model, Liquet and Commenges (Stat Probab Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. The methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented in R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
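The resampling idea can be sketched as a permutation estimate of the significance of the best coding: the observed maximum statistic over candidate codings is referred to the permutation distribution of that maximum, which corrects for having tried many codings. This is an illustrative two-sample sketch with hypothetical dichotomization cutpoints, not the CPMCGLM implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_test_pvalue(x, y, cutpoints, n_perm=2000):
    """Permutation-adjusted p-value for the best dichotomous coding of x.
    For each cutpoint, a two-sample t-like statistic on y is computed; the
    max over cutpoints is compared with its permutation distribution."""
    def max_abs_t(xv, yv):
        stats = []
        for c in cutpoints:
            g = xv > c
            if g.sum() in (0, len(xv)):
                continue          # skip degenerate codings
            a, b = yv[g], yv[~g]
            se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
            stats.append(abs(a.mean() - b.mean()) / se)
        return max(stats)
    obs = max_abs_t(x, y)
    null = [max_abs_t(x, rng.permutation(y)) for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

# Hypothetical data with a real threshold effect at x = 0.5.
x = rng.normal(size=200)
y = 1.0 * (x > 0.5) + rng.normal(size=200)
p = max_test_pvalue(x, y, cutpoints=[-0.5, 0.0, 0.5, 1.0])
```

Because the permutation distribution is that of the maximum over all codings, the adjusted p-value remains valid however many codings are screened, which is the point of the paper's resampling correction.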
NASA Astrophysics Data System (ADS)
Waubke, Holger; Kasess, Christian H.
2016-11-01
Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation for a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists; therefore, a wide range of approximate solutions, notably statistical linearization, were developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Furthermore, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.
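The hysteresis law referred to is commonly written in Bouc-Wen form; a minimal forward-Euler simulation under a harmonic displacement shows the bounded hysteretic variable. Parameter values are illustrative, and the paper's stochastic (white-noise, Gaussian closure) analysis is not reproduced.

```python
import math

def bouc_wen_z(ts, xs, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Forward-Euler integration of the Bouc(-Wen) hysteretic variable:
    z' = A*x' - beta*|x'|*|z|**(n-1)*z - gamma*x'*|z|**n (standard form;
    A, beta, gamma, n here are illustrative, not values from the paper)."""
    z, out = 0.0, [0.0]
    for k in range(1, len(ts)):
        dt = ts[k] - ts[k - 1]
        dx = (xs[k] - xs[k - 1]) / dt
        dz = A * dx - beta * abs(dx) * abs(z)**(n - 1) * z - gamma * dx * abs(z)**n
        z += dz * dt
        out.append(z)
    return out

# Harmonic displacement input; z then traces a hysteresis loop versus x.
ts = [i * 0.001 for i in range(5001)]
xs = [0.5 * math.sin(2 * math.pi * t) for t in ts]
zs = bouc_wen_z(ts, xs)
```

With these parameters (n = 1, A = 1, beta + gamma = 1) the hysteretic variable stays within (-1, 1); plotting z against x would show the loop whose area is the energy dissipated per cycle.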
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zwahlen, Daniel R.; Department of Radiation Oncology, University Hospital Zurich, Zurich; Ruben, Jeremy D.
2009-06-01
Purpose: To estimate and compare intensity-modulated radiotherapy (IMRT) with three-dimensional conformal radiotherapy (3DCRT) in terms of second cancer risk (SCR) for postoperative treatment of endometrial and cervical cancer. Methods and Materials: To estimate SCR, the organ equivalent dose concept with a linear-exponential, a plateau, and a linear dose-response model was applied to dose distributions calculated in a planning computed tomography scan of a 68-year-old woman. Three plans were computed: four-field 18-MV 3DCRT and nine-field IMRT with 6- and 18-MV photons. SCR was estimated as a function of target dose (50.4 Gy/28 fractions) in organs of interest according to the International Commission on Radiological Protection. Results: Cumulative SCR relative to 3DCRT was +6% (3% for a plateau model, -4% for a linear model) for 6-MV IMRT and +26% (25%, 4%) for the 18-MV IMRT plan. For an organ within the primary beam, SCR was +12% (0%, -12%) for 6-MV and +5% (-2%, -7%) for 18-MV IMRT. 18-MV IMRT increased SCR 6-7 times for organs away from the primary beam relative to 3DCRT and 6-MV IMRT. Skin SCR increased by 22-37% for 6-MV and 50-69% for 18-MV IMRT inasmuch as a larger volume of skin was exposed. Conclusion: Cancer risk after IMRT for cervical and endometrial cancer is dependent on treatment energy. 6-MV pelvic IMRT represents a safe alternative with respect to SCR relative to 3DCRT, independently of the dose-response model. 18-MV IMRT produces secondary neutrons that modestly increase the SCR.
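The three dose-response variants named in Methods correspond to the common forms of the organ equivalent dose (OED) concept: risk proportional to mean dose (linear), saturating (plateau), or turning over at high dose due to cell kill (linear-exponential). The sketch below states those standard forms; the parameter values are illustrative, not those used in the study.

```python
import numpy as np

def oed(dose_voxels, model="linear", alpha=0.018, delta=0.018):
    """Organ equivalent dose for a voxel dose distribution, in the common
    forms of the OED concept (alpha/delta per Gy are illustrative)."""
    d = np.asarray(dose_voxels, float)
    if model == "linear":
        return d.mean()                               # risk proportional to dose
    if model == "linear-exponential":
        return (d * np.exp(-alpha * d)).mean()        # turnover from cell kill
    if model == "plateau":
        return ((1.0 - np.exp(-delta * d)) / delta).mean()  # saturating risk
    raise ValueError(model)
```

At low doses the three models agree, while at therapeutic doses the linear model gives the largest OED and the linear-exponential the smallest, which is why the reported SCR ratios bracket a range across the three models.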
Can the Equivalent Sphere Model Approximate Organ Doses in Space Radiation Environments?
NASA Technical Reports Server (NTRS)
Zi-Wei, Lin
2007-01-01
In space radiation calculations it is often useful to calculate the dose or dose equivalent in the blood-forming organs (BFO), the skin, or the eye. It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have shown that a 5 cm sphere gives conservative dose values for the BFO. In this study we use a deterministic radiation transport code with the Computerized Anatomical Man model to investigate whether the equivalent sphere model can approximate organ doses in space radiation environments. We find that for galactic cosmic ray environments the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent and marginally well for the BFO dose and the dose equivalent of the eye or the skin. For solar particle events the radius parameters for the organ dose equivalent increase with the shielding thickness, and the model works marginally for the BFO but is unacceptable for the eye or the skin. The ranges of the radius parameters are also shown, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases.
Singh, Jai
2013-01-01
The objective of this study was a thorough reconsideration, within the framework of Newtonian mechanics and work-energy relationships, of the empirically interpreted relationships employed within the CRASH3 damage analysis algorithm with regard to linearity between barrier equivalent velocity (BEV) or peak collision force magnitude and residual damage depth. The CRASH3 damage analysis algorithm was considered, first in terms of the cases of collisions that produced no residual damage, in order to properly explain the damage onset speed and crush resistance terms. Under the modeling constraints of the collision partners representing a closed system and the a priori assumption of linearity between BEV or peak collision force magnitude and residual damage depth, the equations for the sole realistic model were derived. Evaluation of the work-energy relationships for collisions at or below the elastic limit revealed that the BEV or peak collision force magnitude relationships are bifurcated based upon the residual damage depth. Rather than being additive terms from the linear curve fits employed in the CRASH3 damage analysis algorithm, the Campbell b0 and CRASH3 A·L terms represent the maximum values that can be ascribed to the BEV or peak collision force magnitude, respectively, for collisions that produce zero residual damage. Collisions resulting in the production of non-zero residual damage depth already account for the surpassing of the elastic limit during closure, and therefore the secondary addition of the elastic limit terms represents a double accounting of the same effect. This evaluation shows that the current energy absorbed formulation utilized in the CRASH3 damage analysis algorithm extraneously includes terms associated with the A and G stiffness coefficients. This sole realistic model, however, is limited in that it reduces the coefficient of restitution to a constant value for all cases in which the residual damage depth is nonzero.
Linearity between BEV or peak collision force magnitude and residual damage depth may be applicable for particular ranges of residual damage depth for any given region of any given vehicle. Within the modeling construct employed by the CRASH3 damage algorithm, the case of uniform and ubiquitous linearity cannot be supported. Considerations regarding the inclusion of internal work recovered and restitution for modeling the separation-phase change in velocity magnitude should account not only for the effects present during the evaluation of a vehicle-to-vehicle collision of interest but also for the approach taken for modeling the force-deflection response for each collision partner.
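The linear constructs under discussion can be written down directly. A minimal sketch of Campbell's speed-crush line and the standard CRASH3 absorbed-energy expression for uniform crush (variable names and numbers are illustrative; the abstract's argument is that the G term is extraneous once residual crush is nonzero):

```python
def campbell_bev(b0, b1, C):
    """Campbell's linear model: barrier equivalent velocity as a function
    of residual crush depth C.  Per the abstract, b0 is better read as the
    *maximum* speed producing zero residual damage, not an additive offset."""
    return b0 + b1 * C

def crash3_energy_uniform(A, B, C, L):
    """Standard CRASH3 absorbed energy for uniform residual crush C over
    damage width L, including the elastic term G = A**2 / (2*B) that the
    abstract identifies as double-counted when C > 0."""
    G = A**2 / (2.0 * B)
    return L * (A * C + 0.5 * B * C**2 + G)
```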
Jet Surface Interaction-Scrubbing Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
2013-01-01
Generation of sound due to scrubbing of a jet flow past a nearby solid surface is investigated within the framework of the generalized acoustic analogy theory. The analysis applies to the boundary layer noise generated at and near a wall, and excludes the scattered noise component that is produced at the leading or the trailing edge. While compressibility effects are relatively unimportant at very low Mach numbers, frictional heat generation and thermal gradient normal to the surface could play important roles in generation and propagation of sound in high speed jets of practical interest. A general expression is given for the spectral density of the far-field sound as governed by the variable density Pridmore-Brown equation. The propagation Green's function should be solved numerically starting with the boundary conditions on the surface and subject to specified mean velocity and temperature profiles between the surface and the observer. The equivalent sources of aerodynamic sound are associated with non-linear momentum flux and enthalpy flux terms that appear in the linearized Navier-Stokes equations. These multi-pole sources should be modeled and evaluated with input from a Reynolds-Averaged Navier-Stokes (RANS) solver with an appropriate turbulence model.
Influence of smooth temperature variation on hotspot ignition
NASA Astrophysics Data System (ADS)
Reinbacher, Fynn; Regele, Jonathan David
2018-01-01
Autoignition in thermally stratified reactive mixtures originates in localised hotspots. The ignition behaviour is often characterised using linear temperature gradients and, more recently, constant temperature plateaus combined with temperature gradients. Acoustic timescale characterisation of plateau regions has been successfully used to characterise the type of mechanical disturbance that will be created from a plateau core ignition. This work combines linear temperature gradients with superelliptic cores in order to more accurately account for a local temperature maximum of finite size and the smooth temperature variation contained inside realistic hotspot centres. A one-step Arrhenius reaction is used to model a H2-air reactive mixture. Using the superelliptic approach, a range of behaviours for temperature distributions is investigated by varying the temperature profile between the gradient-only and plateau-and-gradient bounding cases. Each superelliptic case is compared to a respective plateau-and-gradient case where simple acoustic timescale characterisation may be performed. It is shown that hotspots with excitation-to-acoustic timescale ratios sufficiently greater than unity exhibit behaviour very similar to a simple plateau-gradient model. However, for larger hotspots with timescale ratios sufficiently less than unity, the reaction behaviour is highly dependent on the smooth temperature profile contained within the core region.
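Both bounding cases described above (pure linear gradient versus plateau plus sharp drop) can be recovered from a single superelliptic profile by varying one exponent. A sketch of one such parameterization (an illustrative form only, not necessarily the paper's exact profile):

```python
import numpy as np

def hotspot_excess(r, dT, R, n):
    """Superelliptic temperature excess above ambient:
       dT * (1 - (r/R)**n)**(1/n) for r < R, and 0 beyond R.
    n = 1 recovers a pure linear gradient; large n approaches the
    plateau-plus-sharp-drop bounding case.  Illustrative sketch."""
    r = np.asarray(r, float)
    x = np.clip(r / R, 0.0, 1.0)
    return dT * (1.0 - x**n) ** (1.0 / n)
```

Sweeping n between 1 and large values interpolates smoothly between the two characterisations the abstract compares.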
Constraining Solar Wind Heating Processes by Kinetic Properties of Heavy Ions
NASA Astrophysics Data System (ADS)
Tracy, Patrick J.; Kasper, Justin C.; Raines, Jim M.; Shearer, Paul; Gilbert, Jason A.; Zurbuchen, Thomas H.
2016-06-01
We analyze the heavy ion components (A > 4 amu) in collisionally young solar wind plasma and show that there is a clear, stable dependence of temperature on mass, probably reflecting the conditions in the solar corona. We consider both linear and power law forms for the dependence and find that a simple linear fit of the form T_i/T_p = (1.35 ± 0.02) m_i/m_p describes the observations twice as well as the equivalent best fit power law of the form T_i/T_p = (m_i/m_p)^(1.07 ± 0.01). Most importantly, we find that current model predictions based on turbulent transport and kinetic dissipation are in agreement with observed nonthermal heating in intermediate collisional age plasma for m/q < 3.5, but are not in quantitative or qualitative agreement with the lowest collisional age results. These dependencies provide new constraints on the physics of ion heating in multispecies plasmas, along with predictions to be tested by the upcoming Solar Probe Plus and Solar Orbiter missions to the near-Sun environment.
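The two functional forms compared above can both be fitted by ordinary least squares, the power law after a log-log transform. A sketch on synthetic data that merely follows the reported linear trend (the ion list, noise level, and seed are illustrative assumptions):

```python
import numpy as np

# Illustrative heavy-ion mass ratios m_i/m_p (C, N, O, Ne, Si, Fe) and
# synthetic temperature ratios following the reported T_i/T_p ~ 1.35 m_i/m_p.
m = np.array([12.0, 14.0, 16.0, 20.0, 28.0, 56.0])
rng = np.random.default_rng(0)
t = 1.35 * m * (1.0 + 0.02 * rng.standard_normal(m.size))

# Linear form T_i/T_p = a * (m_i/m_p): least squares through the origin.
a = float(m @ t / (m @ m))

# Power-law form T_i/T_p = (m_i/m_p)**k: least squares in log-log space.
k = float(np.log(m) @ np.log(t) / (np.log(m) @ np.log(m)))
```

Comparing residuals of the two fits on real data is what supports the paper's "twice as well" statement; this sketch only shows the fitting mechanics.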
Randomly Sampled-Data Control Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model, for example, the stochastic information exchange among decentralized controllers. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. sampling assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the method of Lagrange multipliers. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
Tsuda, Shuichi; Sato, Tatsuhiko; Watanabe, Ritsuko; Takada, Masashi
2015-01-01
Using a wall-less tissue-equivalent proportional counter for a 0.72-μm site in tissue, we measured the radial dependence of the lineal energy distribution, yf(y), of 290-MeV/u carbon ion and 500-MeV/u iron ion beams. The measured yf(y) distributions and the dose-mean lineal energy, ȳ_D, were compared with calculations performed with the track structure simulation code TRACION and the microdosimetric function of the Particle and Heavy Ion Transport code System (PHITS). The values of the measured ȳ_D were consistent with calculated results within an error of 2%, but differences in the shape of yf(y) were observed for iron ion irradiation. This result indicates that further improvement of the calculation model for the yf(y) distribution in PHITS is needed for the analytical function that describes energy deposition by delta rays, particularly for primary ions having linear energy transfer in excess of a few hundred keV μm⁻¹. PMID:25210053
Constitutive relations describing creep deformation for multi-axial time-dependent stress states
NASA Astrophysics Data System (ADS)
McCartney, L. N.
1981-02-01
A theory of primary and secondary creep deformation in metals is presented, which is based upon the concept of tensor internal state variables and the principles of continuum mechanics and thermodynamics. The theory is able to account for both multi-axial and time-dependent stress and strain states. The well-known concepts of elastic, anelastic and plastic strains follow naturally from the theory. Homogeneous stress states are considered in detail and a simplified theory is derived by linearizing with respect to the internal state variables. It is demonstrated that the model can be developed in such a way that multi-axial constant-stress creep data can be presented as a single relationship between an equivalent stress and an equivalent strain. It is shown how the theory may be used to describe the multi-axial deformation of metals which are subjected to constant stress states. The multi-axial strain response to a general cyclic stress state is calculated. For uni-axial stress states, square-wave loading and a thermal fatigue stress cycle are analysed.
Biomethane potential of the POME generated in the palm oil industry in Ghana from 2002 to 2009.
Arthur, Richard; Glover, Kwasi
2012-05-01
The palm oil industry experienced significant improvement in its production level from 2002 to 2009 across the established companies, medium-scale mills (MSM), and small-scale and other private holdings (SS and OPH) groups. However, the same cannot be said for treatment of the palm oil mill effluent (POME) produced. The quantity of crude palm oil (CPO) produced in Ghana from 2002 to 2009 and the IPCC Guidelines for National Greenhouse Gas Inventories, specifically on industrial wastewater, were used in this study. During this period about 10 million cubic metres of POME was produced, translating into a biomethane potential of 38.5 million m³ with an energy equivalent of 388.29 GWh. A linear growth model developed to predict the equivalent carbon dioxide (CO₂) emissions indicates that if the biomethane is not harnessed, then by 2015 the untreated POME could produce 0.58 million tCO₂-eq, expected to increase to 0.70 million tCO₂-eq by 2020. Copyright © 2012 Elsevier Ltd. All rights reserved.
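The reported figures are mutually consistent, as a quick check shows: methane's lower heating value is roughly 10 kWh per cubic metre, and the CO₂-equivalent projection is a straight line through the two quoted points (the heating-value constant is an approximation of ours, not a value from the paper):

```python
# Energy check: 38.5 million m^3 of methane at ~10 kWh/m^3 (approximate
# lower heating value) should land near the reported 388.29 GWh.
ch4_m3 = 38.5e6
energy_gwh = ch4_m3 * 10.0 / 1e6   # kWh -> GWh, ~385 GWh

# Linear growth model through the two reported points (year, MtCO2-eq).
slope = (0.70 - 0.58) / (2020 - 2015)   # 0.024 MtCO2-eq per year

def co2_eq(year):
    """Projected untreated-POME emissions in million tCO2-eq."""
    return 0.58 + slope * (year - 2015)
```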
NASA Astrophysics Data System (ADS)
Carnal, Fabrice; Stoll, Serge
2011-01-01
Monte Carlo simulations have been used to study two different models of a weak linear polyelectrolyte surrounded by explicit counterions and salt particles: (i) a rigid rod and (ii) a flexible chain. We focused on the influence of the pH, chain stiffness, salt concentration, and valency on the polyelectrolyte titration process and conformational properties. It is shown that chain acid-base properties and conformational properties are strongly modified when the multivalent salt concentration varies below the charge equivalence. Increasing chain stiffness minimizes intramolecular electrostatic monomer interactions, hence improving the deprotonation process. The presence of di- and trivalent salt cations clearly promotes the chain degree of ionization but has only a limited effect at very low salt concentrations. Moreover, folded structures of fully charged chains are only observed when multivalent salt at a concentration equal to or above the charge equivalence is considered. The long-range electrostatic potential is found to influence the distribution of charges along and around the polyelectrolyte backbones, resulting in a higher degree of ionization and a lower attraction of counterions and salt particles at the chain extremities.
Radiative transfer calculated from a Markov chain formalism
NASA Technical Reports Server (NTRS)
Esposito, L. W.; House, L. L.
1978-01-01
The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
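The equivalence between diffuse reflection/transmission and a system of linear equations is the standard absorbing-Markov-chain result: with Q the transition probabilities among scattering states and R the exits to absorbing fates, the fate probabilities are B = (I − Q)⁻¹ R. A toy two-layer illustration (all probabilities invented for the example):

```python
import numpy as np

# Transient states: photon scattering in layer 1 or layer 2.
# Absorbing states: escaped up (reflected), escaped down (transmitted),
# absorbed in the medium.  Each row of [Q | R] sums to 1.
Q = np.array([[0.1, 0.3],
              [0.3, 0.1]])
R = np.array([[0.4, 0.1, 0.1],   # layer -> (reflect, transmit, absorb)
              [0.1, 0.4, 0.1]])

# Fundamental matrix N = (I - Q)^(-1) gives expected visits per state;
# B[i, j] = P(photon starting in transient state i ends in fate j).
N = np.linalg.inv(np.eye(2) - Q)
B = N @ R
```

Solving one small linear system replaces averaging over many Monte Carlo photon histories, which is the speed advantage the abstract reports.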
Ergodic properties of spiking neuronal networks with delayed interactions
NASA Astrophysics Data System (ADS)
Palmigiano, Agostina; Wolf, Fred
The dynamical stability of neuronal networks, and the possibility of chaotic dynamics in the brain, pose profound questions about the mechanisms underlying perception. Here we advance the tractability of large neuronal networks of exactly solvable neuronal models with delayed pulse-coupled interactions. Pulse-coupled delayed systems with an infinite-dimensional phase space can be studied in equivalent systems of fixed and finite degrees of freedom by introducing a delayer variable for each neuron. A Jacobian of the equivalent system can be analytically obtained and numerically evaluated. We find that, depending on the action potential onset rapidness and the level of heterogeneities, the asynchronous irregular regime characteristic of balanced state networks loses stability with increasing delays to either a slow synchronous irregular or a fast synchronous irregular state. In networks of neurons with slow action potential onset, the transition to collective oscillations leads to an increase of the exponential rate of divergence of nearby trajectories and of the entropy production rate of the chaotic dynamics. The attractor dimension, instead of increasing linearly with increasing delay as reported in many other studies, decreases until eventually the network reaches full synchrony.
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. A total of 350 working schemes comprising 1385 responses were collected using a marking engine prototype developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
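The tokenization-plus-multiset idea can be sketched in a few lines: split each equation into tokens, then compare the two token multisets, for which Counter intersection gives the overlap. (This Dice-style score is an illustrative stand-in for the SCCS scoring, not the paper's actual formula.)

```python
import re
from collections import Counter

def tokenize(eq):
    """Split an equation string into variable, number and operator tokens."""
    return re.findall(r"[A-Za-z]+|\d+(?:\.\d+)?|[=+\-*/^()]", eq)

def multiset_similarity(eq1, eq2):
    """Dice-style overlap of the two token multisets, in [0, 1]."""
    a, b = Counter(tokenize(eq1)), Counter(tokenize(eq2))
    overlap = sum((a & b).values())
    return 2.0 * overlap / (sum(a.values()) + sum(b.values()))
```

Note the trade-off this illustrates: a pure multiset score treats reordered but token-identical equations as equivalent, so a practical scheme adds structural checks on top.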
Impedimetric method for measuring ultra-low E. coli concentrations in human urine.
Settu, Kalpana; Chen, Ching-Jung; Liu, Jen-Tsai; Chen, Chien-Lung; Tsai, Jang-Zern
2015-04-15
In this study, we developed an interdigitated gold microelectrode-based impedance sensor to detect Escherichia coli (E. coli) in human urine samples for urinary tract infection (UTI) diagnosis. E. coli growth in human urine samples was successfully monitored during a 12-h culture, and the results showed that the maximum relative changes could be measured at 10 Hz. An equivalent electrical circuit model was used for evaluating the variations in impedance characteristics of bacterial growth. The equivalent circuit analysis indicated that the change in impedance values at low frequencies was caused by double-layer capacitance due to bacterial attachment and formation of biofilm on the electrode surface in urine. A linear relationship between the impedance change and initial E. coli concentration was obtained with a coefficient of determination R² > 0.90 at various growth times of 1, 3, 5, 7, 9 and 12 h in urine. Thus, our sensor is capable of detecting a wide range of E. coli concentrations, 7×10⁰ to 7×10⁸ cells/ml, in urine samples with high sensitivity. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhou, Yufeng; Zhong, Pei
2006-06-01
A theoretical model for the propagation of shock wave from an axisymmetric reflector was developed by modifying the initial conditions for the conventional solution of a nonlinear parabolic wave equation (i.e., the Khokhlov-Zabolotskaya-Kuznestsov equation). The ellipsoidal reflector of an HM-3 lithotripter is modeled equivalently as a self-focusing spherically distributed pressure source. The pressure wave form generated by the spark discharge of the HM-3 electrode was measured by a fiber optic probe hydrophone and used as source conditions in the numerical calculation. The simulated pressure wave forms, accounting for the effects of diffraction, nonlinearity, and thermoviscous absorption in wave propagation and focusing, were compared with the measured results and a reasonably good agreement was found. Furthermore, the primary characteristics in the pressure wave forms produced by different reflector geometries, such as that produced by a reflector insert, can also be predicted by this model. It is interesting to note that when the interpulse delay time calculated by the linear geometric model is less than about 1.5 μs, two pulses from the reflector insert and the uncovered bottom of the original HM-3 reflector will merge together. Coupling the simulated pressure wave form with the Gilmore model was carried out to evaluate the effect of reflector geometry on resultant bubble dynamics in a lithotripter field. Altogether, the equivalent reflector model was found to provide a useful tool for the prediction of pressure wave form generated in a lithotripter field. This model may be used to guide the design optimization of reflector geometries for improving the performance and safety of clinical lithotripters.
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations are divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
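For the Poisson case referenced above, both equivalent formulations lead to the same Horvitz-Thompson population-size estimator N̂ = n/(1 − e^(−λ)). A minimal sketch (in practice λ would itself be estimated from the zero-truncated counts):

```python
import math

def zt_poisson_pmf(k, lam):
    """Zero-truncated Poisson: P(K = k | K > 0), for k = 1, 2, ..."""
    p0 = math.exp(-lam)
    return (lam**k * math.exp(-lam) / math.factorial(k)) / (1.0 - p0)

def horvitz_thompson_N(n_observed, lam):
    """Population size estimate N = n / (1 - P(zero count)); per the
    abstract, the truncated-mixture and mixture-of-truncated fits give
    the same estimator."""
    return n_observed / (1.0 - math.exp(-lam))
```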
Design and analysis of an unconventional permanent magnet linear machine for energy harvesting
NASA Astrophysics Data System (ADS)
Zeng, Peng
This Ph.D. dissertation proposes an unconventional high power density linear electromagnetic kinetic energy harvester, and high-performance two-stage interface power electronics to maintain maximum power abstraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat-type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide and an adjustable spring bearing. The unconventional aspect of the design is that the NdFeB magnet bars in the mover are placed with magnetic fields in the horizontal direction instead of the vertical direction, with like magnetic poles facing each other. The derived magnetic equivalent circuit model shows the average air-gap flux density of the novel topology is as high as 0.73 T, a 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine. Subsequently, improved output voltage and power are achieved. The dynamic model of the linear generator is also developed, and analytical equations for the maximum output power are derived for driving vibrations with amplitude equal to, smaller than, and larger than the relative displacement between the mover and the stator of the machine. Furthermore, a finite element analysis (FEA) model has been simulated to confirm the derived analytical results and the improved power generation capability. Also, an optimization framework is explored to extend the analysis to multi-degree-of-freedom (n-DOF) vibration-based linear energy harvesting devices. Moreover, a boost-buck cascaded switch-mode converter with a current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with trickle current. Meanwhile, a maximum power point tracking (MPPT) algorithm is proposed and optimized for low-frequency driving vibrations.
Finally, a proof-of-concept unconventional permanent magnet (PM) linear generator is prototyped and tested to verify the simulation results of the FEA model. For coil windings of 33, 66 and 165 turns, the output power of the machine is measured to be 65.6 mW, 189.1 mW, and 497.7 mW respectively, with a maximum power density of 2.486 mW/cm³.
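The quoted peak power and power density together imply the prototype's active volume, a quick consistency check:

```python
# Implied active volume from the reported 165-turn prototype figures.
power_mw = 497.7            # peak output power, mW
density_mw_per_cm3 = 2.486  # reported maximum power density, mW/cm^3
volume_cm3 = power_mw / density_mw_per_cm3   # roughly 200 cm^3
```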
NASA Astrophysics Data System (ADS)
Rodriguez Marco, Albert
Battery management systems (BMS) require computationally simple but highly accurate models of the battery cells they are monitoring and controlling. Historically, empirical equivalent-circuit models have been used, but increasingly researchers are focusing their attention on physics-based models due to their greater predictive capabilities. These models are of high intrinsic computational complexity and so must undergo some kind of order-reduction process to make their use by a BMS feasible: we favor methods based on a transfer-function approach to battery cell dynamics. In prior works, transfer functions have been found from full-order PDE models via two simplifying assumptions: (1) a linearization assumption--which is a fundamental necessity in order to make transfer functions--and (2) an assumption made out of expedience that decouples the electrolyte-potential and electrolyte-concentration PDEs in order to make it possible to solve for the transfer functions from the PDEs. This dissertation improves the fidelity of physics-based models by eliminating the need for the second assumption and by linearizing nonlinear dynamics around different constant currents. Electrochemical transfer functions are infinite-order and cannot be expressed as a ratio of polynomials in the Laplace variable s. Thus, for practical use, these systems need to be approximated using reduced-order models that capture the most significant dynamics. This dissertation improves the generation of physics-based reduced-order models by introducing different realization algorithms, which produce a low-order model from the infinite-order electrochemical transfer functions. Physics-based reduced-order models are linear and describe cell dynamics if operated near the setpoint at which they have been generated. Hence, multiple physics-based reduced-order models need to be generated at different setpoints (i.e., state-of-charge, temperature and C-rate) in order to extend the cell operating range.
This dissertation improves the implementation of physics-based reduced-order models by introducing different blending approaches that combine the pre-computed models generated (offline) at different setpoints in order to produce good electrochemical estimates (online) along the cell state-of-charge, temperature and C-rate range.
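One standard route from transfer-function (impulse-response) data to a low-order state-space model is the eigensystem realization algorithm, shown here on a scalar first-order example; this is a generic realization method for illustration, not necessarily one of the dissertation's own algorithms.

```python
import numpy as np

def era(h, order):
    """Eigensystem Realization Algorithm (SISO sketch): recover a
    discrete-time (A, B, C) of the requested order from Markov
    parameters h[k] = C A^k B via an SVD of the Hankel matrix."""
    n = (len(h) - 1) // 2
    H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    Sr = np.diag(np.sqrt(s[:order]))
    Ur, Vr = U[:, :order], Vt[:order, :]
    Si = np.linalg.inv(Sr)
    A = Si @ Ur.T @ H1 @ Vr.T @ Si   # reduced state matrix
    B = Sr @ Vr[:, :1]               # input map
    C = Ur[:1, :] @ Sr               # output map
    return A, B, C

# Markov parameters of a known first-order system, h[k] = 0.5**k.
h = [0.5**k for k in range(9)]
A, B, C = era(h, 1)
```

For this rank-one data the algorithm recovers the pole at 0.5 exactly; for infinite-order electrochemical transfer functions the singular values instead guide the choice of truncation order.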
Wang, Y; Lin, D; Fu, T
1997-03-01
The morphology of inorganic material powders before and after ultrafine crushing was observed by transmission electron microscopy, and the length and diameter of the granules were measured. The inorganic material powders, before and after ultrafine crushing, were blended with polymers to prepare radiological equivalent materials. The blending compatibility of the inorganic materials with the polymer materials was observed by scanning electron microscopy. CT values of the tissue equivalent materials were measured by X-ray CT, and the distribution of the inorganic materials was examined. The compactness of the materials was determined by the water absorption method. The elastic modulus of the materials was measured by the laser speckle interferometry method. The results showed that the inorganic material powders treated by ultrafine crushing blended well with the polymer and that the distribution of these powders in the polymer was homogeneous. The equivalent errors of the linear attenuation coefficients and CT values of the equivalent materials were small. Their elastic moduli increased by one order of magnitude, from 6.028 x 10(2) kg/cm2 to 9.753 x 10(3) kg/cm2. In addition, the inorganic material powders with rod-shaped granules blended easily with the polymer. The present study provides theoretical guidance and an experimental basis for the design and synthesis of radiological equivalent materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klink, W.H.; Wickramasekara, S., E-mail: wickrama@grinnell.edu; Department of Physics, Grinnell College, Grinnell, IA 50112
2014-01-15
In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless, the classical mechanics analogues of these cocycle representations all respect the equivalence principle. -- Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected.
NASA Technical Reports Server (NTRS)
Summers, Geoffrey P.; Burke, Edward A.; Shapiro, Philip; Statler, Richard; Messenger, Scott R.; Walters, Robert J.
1994-01-01
It has been found useful in the past to use the concept of 'equivalent fluence' to compare the radiation response of different solar cell technologies. Results are usually given in terms of an equivalent 1 MeV electron or an equivalent 10 MeV proton fluence. To specify cell response in a complex space-radiation environment in terms of an equivalent fluence, it is necessary to measure damage coefficients for a number of representative electron and proton energies. However, at the last Photovoltaic Specialists Conference we showed that nonionizing energy loss (NIEL) could be used to correlate damage coefficients for protons, using measurements for GaAs as an example. This correlation means that damage coefficients for all proton energies except near threshold can be predicted from a measurement made at one particular energy. NIEL is, for displacement damage, the exact analogue of what linear energy transfer (LET) is for ionizing energy loss. The use of NIEL in this way leads naturally to the concept of 10 MeV equivalent proton fluence. The situation for electron damage is more complex, however. It is shown that the concept of 'displacement damage dose' gives a more general way of unifying damage coefficients. It follows that 1 MeV electron equivalent fluence is a special case of a more general quantity for unifying electron damage coefficients which we call the 'effective 1 MeV electron equivalent dose'.
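The NIEL correlation described above amounts to preserving the displacement damage dose, i.e. fluence times NIEL. A minimal sketch of the resulting 10 MeV equivalent proton fluence conversion; the NIEL values below are illustrative placeholders, not numbers taken from the paper:

```python
def equivalent_fluence_10mev(fluence, niel_at_E, niel_at_10mev):
    """10 MeV equivalent proton fluence depositing the same
    displacement damage dose (dose = fluence * NIEL)."""
    return fluence * niel_at_E / niel_at_10mev

# Hypothetical NIEL values (MeV cm^2/g); real tabulated values for
# GaAs exist in the literature, but these are for illustration only.
niel = {1.0: 2.2e-2, 10.0: 3.7e-3}
phi_1mev = 1.0e11   # measured 1 MeV proton fluence (cm^-2)

phi_eq = equivalent_fluence_10mev(phi_1mev, niel[1.0], niel[10.0])

# The displacement damage dose is identical by construction:
dose_1mev = phi_1mev * niel[1.0]
dose_eq = phi_eq * niel[10.0]
```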
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, Robert Edward; Coleman, Justin Leigh
Currently the Department of Energy (DOE) and the nuclear industry perform seismic soil-structure interaction (SSI) analysis using equivalent linear numerical analysis tools. For lower levels of ground motion, these tools should produce reasonable in-structure response values for evaluation of existing and new facilities. For larger levels of ground motion these tools likely overestimate the in-structure response (and therefore structural demand) since they do not consider geometric nonlinearities (such as gapping and sliding between the soil and structure) and are limited in the ability to model nonlinear soil behavior. The current equivalent linear SSI (SASSI) analysis approach either joins the soil and structure together in both tension and compression or releases the soil from the structure for both tension and compression. It also makes linear approximations for material nonlinearities and generalizes energy absorption with viscous damping. This produces the potential for inaccurately establishing where the structural concerns exist and/or inaccurately establishing the amplitude of the in-structure responses. Seismic hazard curves at nuclear facilities have continued to increase over the years as more information has been developed on seismic sources (i.e., faults), additional information has been gathered on seismic events, and additional research has been performed to determine local site effects. Seismic hazard curves are used to develop design basis earthquakes (DBEs) that are used to evaluate nuclear facility response. As the seismic hazard curves increase, the input ground motions (DBEs) used to numerically evaluate nuclear facility response increase, causing larger in-structure response. As ground motions increase, so does the importance of including nonlinear effects in numerical SSI models. To include material nonlinearity in the soil and geometric nonlinearity using contact (gapping and sliding), it is necessary to develop a nonlinear time domain methodology.
This methodology will be known as NonLinear Soil-Structure Interaction (NLSSI). In general, NLSSI analysis should provide a more accurate representation of the seismic demands on nuclear facilities, their systems, and components. INL, in collaboration with a Nuclear Power Plant Vendor (NPP-V), will develop a generic Nuclear Power Plant (NPP) structural design to be used in development of the methodology and for comparison with SASSI. This generic NPP design has been evaluated for the INL soil site because of the ease of access and the quality of the site-specific data. It is now being evaluated for a second site at Vogtle, which is located approximately 15 miles east-northeast of Waynesboro, Georgia, adjacent to the Savannah River. The Vogtle site consists of many soil layers extending down to a depth of 1058 feet. Two soil sites were chosen in order to demonstrate the methodology across multiple soil sites. The project will drive the models (soil and structure) using acceleration time histories with successively increasing amplitudes. The models will be run in time-domain codes such as ABAQUS, LS-DYNA, and/or ESSI and compared with the same models run in SASSI. The project is focused on developing and documenting a method for performing time-domain, non-linear seismic soil-structure interaction (SSI) analysis. Development of this method will provide the Department of Energy (DOE) and industry with another tool to perform seismic SSI analysis.
Modeling spin magnetization transport in a spatially varying magnetic field
NASA Astrophysics Data System (ADS)
Picone, Rico A. R.; Garbini, Joseph L.; Sidles, John A.
2015-01-01
We present a framework for modeling the transport of any number of globally conserved quantities in any spatial configuration and apply it to obtain a model of magnetization transport for spin-systems that is valid in new regimes (including high-polarization). The framework allows an entropy function to define a model that explicitly respects the laws of thermodynamics. Three facets of the model are explored. First, it is expressed as nonlinear partial differential equations that are valid for the new regime of high dipole-energy and polarization. Second, the nonlinear model is explored in the limit of low dipole-energy (semi-linear), from which is derived a physical parameter characterizing separative magnetization transport (SMT). It is shown that the necessary and sufficient condition for SMT to occur is that the parameter is spatially inhomogeneous. Third, the high spin-temperature (linear) limit is shown to be equivalent to the model of nuclear spin transport of Genack and Redfield (1975) [1]. Differences among the three forms of the model are illustrated by numerical solution with parameters corresponding to a magnetic resonance force microscopy (MRFM) experiment (Degen et al., 2009 [2]; Kuehn et al., 2008 [3]; Sidles et al., 2003 [4]; Dougherty et al., 2000 [5]). A family of analytic, steady-state solutions to the nonlinear equation is derived and shown to be the spin-temperature analog of the Langevin paramagnetic equation and Curie's law. Finally, we analyze the separative quality of magnetization transport, and a steady-state solution for the magnetization is shown to be compatible with Fenske's separative mass transport equation (Fenske, 1932 [6]).
Oak, Sameer R; O'Rourke, Colin; Strnad, Greg; Andrish, Jack T; Parker, Richard D; Saluan, Paul; Jones, Morgan H; Stegmeier, Nicole A; Spindler, Kurt P
2015-09-01
The International Knee Documentation Committee (IKDC) Subjective Knee Evaluation Form is a patient-reported outcome with adult (1998) and pediatric (2011) versions validated at different ages. Prior longitudinal studies of patients aged 13 to 17 years who tore their anterior cruciate ligament (ACL) have used the only available adult IKDC, whereas currently the pediatric IKDC is the accepted form of choice. This study compared the adult and pediatric IKDC forms and tested whether the differences were clinically significant. The hypothesis was that the pediatric and adult IKDC questionnaires would show no clinically significant differences in score when completed by patients aged 13 to 17 years. Cohort study (diagnosis); Level of evidence, 2. A total of 100 participants aged 13 to 17 years with knee injuries were split into 2 groups by use of simple randomization. One group answered the adult IKDC form first and then the pediatric form. The second group answered the pediatric IKDC form first and then the adult form. A 10-minute break was given between form administrations to prevent rote repetition of answers. Study design was based on established methods to compare 2 forms of patient-reported outcomes. A 5-point threshold for clinical significance was set below previously published minimum clinically important differences for the adult IKDC. Paired t tests were used to test both differences and equivalence between scores. By ordinary least-squares models, scores were modeled to predict adult scores given certain pediatric scores and vice versa. Comparison between adult and pediatric IKDC scores showed a statistically significant difference of 1.5 points; however, the 95% CI (0.3-2.6) fell below the threshold of 5 points set for clinical significance. Further equivalence testing showed the 95% CI (0.5-2.4) between adult and pediatric scores being within the defined 5-point equivalence region. The scores were highly correlated, with a linear relationship (R(2) = 92%). 
There was no clinically significant difference between the pediatric and adult IKDC form scores in adolescents aged 13 to 17 years. This result allows use of whichever form is most practical for long-term tracking of patients. A simple linear equation can convert one form into the other. If the adult questionnaire is used at this age, it can be consistently used during follow-up. © 2015 The Author(s).
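The "simple linear equation" converting one form's score into the other can be obtained by ordinary least squares, as in the study's analysis. The paired scores below are synthetic (the study's data are not reproduced), so the fitted slope and intercept are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired IKDC scores: adult scores sit roughly 1.5 points
# below pediatric scores, mimicking the reported mean difference.
pediatric = rng.uniform(40.0, 100.0, size=100)
adult = pediatric - 1.5 + rng.normal(0.0, 2.0, size=100)

# Ordinary least squares fit: adult = slope * pediatric + intercept
slope, intercept = np.polyfit(pediatric, adult, deg=1)

def pediatric_to_adult(score):
    """Convert a pediatric IKDC score to a predicted adult score."""
    return slope * score + intercept
```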
Huang, Yihua; Huang, Wenjin; Wang, Qinglei; Su, Xujian
2013-07-01
The equivalent circuit model of a piezoelectric transformer is useful in designing and optimizing the related driving circuits. Based on previous work, an equivalent circuit model for a circular flexural-vibration-mode piezoelectric transformer with moderate thickness is proposed and validated by finite element analysis. The input impedance, voltage gain, and efficiency of the transformer are determined through computation. The basic behaviors of the transformer are shown by numerical results.
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. 
The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication. Copyright © 2013 Elsevier Inc. All rights reserved.
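The 2 Gy/fraction conversion applied to the thyroid DVHs follows the standard linear-quadratic relation EQD2 = D(d + α/β)/(2 + α/β), where D is total dose and d is dose per fraction. A short sketch with α/β = 3 Gy as used in the abstract:

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=3.0):
    """2 Gy/fraction equivalent dose from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose_gy * (dose_per_fraction_gy + alpha_beta_gy) \
           / (2.0 + alpha_beta_gy)

# e.g. 60 Gy delivered in 3 Gy fractions, alpha/beta = 3 Gy:
d2 = eqd2(60.0, 3.0)
```

By construction the formula is the identity when d = 2 Gy, so DVH bins already at 2 Gy/fraction are unchanged.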
Remarks on non-maximal integral elements of the Cartan plane in jet spaces
NASA Astrophysics Data System (ADS)
Bächtold, M.; Moreno, G.
2014-11-01
There is a natural filtration on the space of degree-k homogeneous polynomials in n independent variables with coefficients in the algebra of smooth functions on the Grassmannian Gr (n,s), determined by the tautological bundle. In this paper we show that the space of s-dimensional integral elements of a Cartan plane on J(E,n), with dimE=n+m, has an affine bundle structure modeled by the so-obtained bundles over Gr (n,s), and we study a natural distribution associated with it. As an example, we show that a third-order nonlinear PDE of Monge-Ampère type is not contact-equivalent to a quasi-linear one.
Xu, J; Bhattacharya, P; Váró, G
2004-03-15
The light-sensitive protein, bacteriorhodopsin (BR), is monolithically integrated with an InP-based amplifier circuit to realize a novel opto-electronic integrated circuit (OEIC) which performs as a high-speed photoreceiver. The circuit is realized by epitaxial growth of the field-effect transistors, currently used semiconductor device and circuit fabrication techniques, and selective area BR electro-deposition. The integrated photoreceiver has a responsivity of 175 V/W and linear photoresponse, with a dynamic range of 16 dB, with 594 nm photoexcitation. The dynamics of the photochemical cycle of BR has also been modeled and a proposed equivalent circuit simulates the measured BR photoresponse with good agreement.
Ionizing radiation measurements on LDEF: A0015 Free flyer biostack experiment
NASA Technical Reports Server (NTRS)
Benton, E. V.; Frank, A. L.; Benton, E. R.; Csige, I.; Frigo, L. A.
1995-01-01
This report covers the analysis of passive radiation detectors flown as part of the A0015 Free Flyer Biostack on LDEF (Long Duration Exposure Facility). LET (linear energy transfer) spectra and track density measurements were made with CR-39 and polycarbonate plastic nuclear track detectors. Measurements of total absorbed dose were carried out using thermoluminescent detectors. Thermal and resonance neutron dose equivalents were measured with LiF/CR-39 detectors. High-energy neutron and proton dose equivalents were measured with fission foil/CR-39 detectors.
Determination of precipitation profiles from airborne passive microwave radiometric measurements
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Hakkarinen, Ida M.; Pierce, Harold F.; Weinman, James A.
1991-01-01
This study presents the first quantitative retrievals of vertical profiles of precipitation derived from multispectral passive microwave radiometry. Measurements of microwave brightness temperature (Tb) obtained by a NASA high-altitude research aircraft are related to profiles of rainfall rate through a multichannel piecewise-linear statistical regression procedure. Statistics for Tb are obtained from a set of cloud radiative models representing a wide variety of convective, stratiform, and anvil structures. The retrieval scheme itself determines which cloud model best fits the observed meteorological conditions. Retrieved rainfall rate profiles are converted to equivalent radar reflectivity for comparison with observed reflectivities from a ground-based research radar. Results for two case studies, a stratiform rain situation and an intense convective thunderstorm, show that the radiometrically derived profiles capture the major features of the observed vertical structure of hydrometeor density.
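Converting a retrieved rain rate to equivalent radar reflectivity is typically done through a Z-R power law, Z = a R^b. The abstract does not state which coefficients were used, so the classic Marshall-Palmer values (a = 200, b = 1.6) below are an assumption for illustration:

```python
import math

def rain_rate_to_dbz(rain_rate_mm_h, a=200.0, b=1.6):
    """Equivalent reflectivity (dBZ) from rain rate R (mm/h) via a
    Z-R power law Z = a * R**b; Marshall-Palmer coefficients are used
    here as one common, assumed choice."""
    z = a * rain_rate_mm_h ** b        # reflectivity factor, mm^6/m^3
    return 10.0 * math.log10(z)        # convert to decibels (dBZ)
```

With these coefficients, a 1 mm/h drizzle maps to about 23 dBZ, and heavier rain rates map monotonically to larger reflectivities.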
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
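The "obvious approach" mentioned above, solving an equivalent real system for the real and imaginary parts of x, can be sketched directly. The dense direct solve below only illustrates the block reformulation; the thesis's point is that structure-exploiting Krylov methods applied to the original complex system are usually preferable to iterating on this doubled real system.

```python
import numpy as np

def solve_via_real_equivalent(A, b):
    """Solve the complex system A x = b through the equivalent real
    2n x 2n block system
        [ Re(A) -Im(A) ] [ Re(x) ]   [ Re(b) ]
        [ Im(A)  Re(A) ] [ Im(x) ] = [ Im(b) ]."""
    n = A.shape[0]
    Ar, Ai = A.real, A.imag
    M = np.block([[Ar, -Ai], [Ai, Ar]])
    rhs = np.concatenate([b.real, b.imag])
    xr = np.linalg.solve(M, rhs)
    return xr[:n] + 1j * xr[n:]

# Small complex symmetric example, the structure that arises from the
# complex Helmholtz equation (matrix entries are illustrative).
A = np.array([[2.0 + 1.0j, 0.5j],
              [0.5j, 1.0 + 2.0j]])
b = np.array([1.0 + 0.0j, 0.0 + 1.0j])
x = solve_via_real_equivalent(A, b)
```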
Lin, Meng-Yin; Chang, David C K; Shen, Yun-Dun; Lin, Yen-Kuang; Lin, Chang-Ping; Wang, I-Jong
2016-01-01
The aim of this study is to describe factors that influence the measured intraocular pressure (IOP) change and to develop a predictive model after myopic laser in situ keratomileusis (LASIK) with a femtosecond (FS) laser or a microkeratome (MK). We retrospectively reviewed preoperative, intraoperative, and 12-month postoperative medical records in 2485 eyes of 1309 patients who underwent LASIK with an FS laser or an MK for myopia and myopic astigmatism. Data were extracted, such as preoperative age, sex, IOP, manifest spherical equivalent (MSE), central corneal keratometry (CCK), central corneal thickness (CCT), and intended flap thickness and postoperative IOP (postIOP) at 1, 6 and 12 months. Linear mixed model (LMM) and multivariate linear regression (MLR) method were used for data analysis. In both models, the preoperative CCT and ablation depth had significant effects on predicting IOP changes in the FS and MK groups. The intended flap thickness was a significant predictor only in the FS laser group (P < .0001 in both models). In the FS group, LMM and MLR could respectively explain 47.00% and 18.91% of the variation of postoperative IOP underestimation (R(2) = 0.47 and R(2) = 0.1891). In the MK group, LMM and MLR could explain 37.79% and 19.13% of the variation of IOP underestimation (R(2) = 0.3779 and R(2) = 0.1913, respectively). The best-fit model for prediction of IOP changes was the LMM in LASIK with an FS laser.
Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann
2011-07-01
There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
On effective temperature in network models of collective behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porfiri, Maurizio, E-mail: mporfiri@nyu.edu; Ariel, Gil, E-mail: arielg@math.biu.ac.il
Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system—ordered or disordered. By establishing a fluctuation–dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order–disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena.
Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.
Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli
2017-03-01
In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty lies in that it provides a systematic design method for the controller structure by the direct usage of I/O data, rather than using a first-principles model or an offline identified plant model. The controller structure is determined by the equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned by the pseudogradient information extracted from the I/O data of the plant, which can deal with the unknown nonlinear system. The stability of the closed-loop control system and the stability of the training process for the RBFNNs are guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and the applicability of the proposed method are further demonstrated by a numerical example and an Aspen HYSYS simulation of a distillation column in a crude styrene production process.
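The equivalent-dynamic-linearization controller that MFAC builds on can be sketched with the classical compact-form update: a pseudo-gradient estimate driven purely by I/O increments, plus a tracking-error control law. This is a minimal sketch of that baseline scheme, not the paper's dual-RBFNN method, and the plant below is a hypothetical stand-in for an unknown nonlinear system; gains are arbitrary illustrative choices.

```python
def mfac_cfdl(y_ref, y0, plant, steps=200,
              eta=1.0, mu=1.0, rho=0.6, lam=1.0):
    """Compact-form dynamic-linearization MFAC (baseline scheme the
    paper builds on). plant(y, u) returns the next output of the
    otherwise-unknown system; only I/O data are used by the controller."""
    y, y_prev = y0, y0
    u, u_prev = 0.0, 0.0
    phi = 1.0                       # pseudo-gradient estimate
    for _ in range(steps):
        du = u - u_prev
        dy = y - y_prev
        # pseudo-gradient update from I/O increments only
        phi += eta * du * (dy - phi * du) / (mu + du * du)
        # control update driven by the tracking error
        u_new = u + rho * phi * (y_ref - y) / (lam + phi * phi)
        u_prev, u = u, u_new
        y_prev, y = y, plant(y, u)
    return y

# Hypothetical unknown nonlinear plant, for illustration only.
def plant(y, u):
    return 0.6 * y + u / (1.0 + y * y)

y_final = mfac_cfdl(y_ref=1.0, y0=0.0, plant=plant)
```

Because the control law integrates the tracking error, a constant reference is tracked with vanishing steady-state error provided the loop remains stable.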
Explaining electric conductivity using the particle-in-a-box model: quantum superposition is the key
NASA Astrophysics Data System (ADS)
Sivanesan, Umaseh; Tsang, Kin; Izmaylov, Artur F.
2017-12-01
Most of the textbooks explaining electric conductivity in the context of quantum mechanics provide either incomplete or semi-classical explanations that are not connected with the elementary concepts of quantum mechanics. We illustrate the conduction phenomena using the simplest model system in quantum dynamics, a particle in a box (PIB). To induce the particle dynamics, a linear potential tilting the bottom of the box is introduced, which is equivalent to imposing a constant electric field for a charged particle. Although the PIB model represents a closed system that cannot have a flow of electrons through the system, we consider the oscillatory dynamics of the particle probability density as the analogue of the electric current. Relating the amplitude and other parameters of the particle oscillatory dynamics with the gap between the ground and excited states of the PIB model allows us to demonstrate one of the most basic dependencies of electric conductivity on the valence-conduction band gap of the material.
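The tilted-box setup above is easy to reproduce numerically: discretize the Hamiltonian with hard-wall boundary conditions, diagonalize, and read off the ground-to-first-excited gap that sets the oscillation period of the probability density of a two-state superposition. A minimal finite-difference sketch (hbar = m = 1; the grid size and field strength are arbitrary illustrative choices):

```python
import numpy as np

def pib_tilted_gap(n=400, L=1.0, F=5.0):
    """Energy gap E1 - E0 of a particle in a hard-wall box of length L
    whose floor is tilted by a linear potential V(x) = F * x,
    equivalent to a constant field acting on a charged particle."""
    x = np.linspace(0.0, L, n + 2)[1:-1]   # interior grid points
    h = x[1] - x[0]
    # kinetic part: -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix
    kinetic = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2.0 * h * h)
    H = kinetic + np.diag(F * x)           # add the linear "field" term
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

gap = pib_tilted_gap()
period = 2.0 * np.pi / gap   # oscillation period of a two-state superposition
```

In the F = 0 limit the gap reduces to the textbook value 3*pi^2/2 (for L = 1), which makes a convenient check on the discretization.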
Teschke, Kay; Spierings, Judith; Marion, Stephen A; Demers, Paul A; Davies, Hugh W; Kennedy, Susan M
2004-12-01
In a study of wood dust exposure and lung function, we tested the effect on the exposure-response relationship of six different exposure metrics using the mean measured exposure of each subject versus the mean exposure based on various methods of grouping subjects, including job-based groups and groups based on an empirical model of the determinants of exposure. Multiple linear regression was used to examine the association between wood dust concentration and forced expiratory volume in 1s (FEV(1)), adjusting for age, sex, height, race, pediatric asthma, and smoking. Stronger point estimates of the exposure-response relationships were observed when exposures were based on increasing levels of aggregation, allowing the relationships to be found statistically significant in four of the six metrics. The strongest point estimates were found when exposures were based on the determinants of exposure model. Determinants of exposure modeling offers the potential for improvement in risk estimation equivalent to or beyond that from job-based exposure grouping.
Constraining dark sector perturbations I: cosmic shear and CMB lensing
NASA Astrophysics Data System (ADS)
Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.
2015-04-01
We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism-invariant ℒ(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz-violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remain compatible with w = -1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.