Sample records for equivalent linearization technique

  1. Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2000-01-01

    A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
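
    The core of the equivalent linearization iteration can be illustrated on a single-mode model. Below is a minimal sketch, assuming a Duffing-type oscillator under white-noise excitation with Gaussian closure; the parameter values are illustrative and not taken from the paper.

    ```python
    import numpy as np

    # Equivalent linearization sketch for m*x'' + c*x' + k*x + mu*x**3 = w(t),
    # with w(t) white noise of constant two-sided PSD S0.
    m, c, k, mu, S0 = 1.0, 0.05, 1.0, 0.5, 0.01

    sigma2 = np.pi * S0 / (c * k)                 # linear-system variance as starting guess
    for _ in range(100):
        k_eq = k + 3.0 * mu * sigma2              # Gaussian closure: E[d(mu*x^3)/dx] = 3*mu*E[x^2]
        sigma2_new = np.pi * S0 / (c * k_eq)      # variance of the linearized system
        if abs(sigma2_new - sigma2) < 1e-12:
            break
        sigma2 = sigma2_new

    print(f"equivalent stiffness {k_eq:.4f}, rms displacement {np.sqrt(sigma2):.4f}")
    ```

    The load-range study in the paper amounts to sweeping S0 upward and checking where a linearized rms estimate of this kind departs from numerical simulation.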

  2. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming and L1 estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L1 estimation are solved using SLP and as equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and as equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.
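
    The L1 estimation application mentioned above makes the comparison concrete: the semilinear form prices a residual differently on each side of zero, while the equivalent standard LP doubles the variable count by splitting residuals. A minimal sketch of the standard-LP side using scipy on made-up data (this is not the thesis' SLP/PLP code):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # L1 regression min sum|b - A@x| as a standard LP: split each residual
    # r = rp - rn with rp, rn >= 0, so |r| = rp + rn at the optimum.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 2))
    b = A @ np.array([1.5, -0.7]) + rng.laplace(scale=0.1, size=30)

    m, n = A.shape
    c = np.r_[np.zeros(n), np.ones(m), np.ones(m)]       # cost on [x, rp, rn]
    A_eq = np.hstack([A, np.eye(m), -np.eye(m)])         # A@x + rp - rn = b
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)  # x free, residual parts >= 0
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=bounds)
    print("L1 fit:", res.x[:n])
    ```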

  3. Equivalent reduced model technique development for nonlinear system dynamic response

    NASA Astrophysics Data System (ADS)

    Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2013-04-01

    The dynamic response of structural systems commonly involves nonlinear effects. Oftentimes, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the system equivalent reduction expansion process (SEREP). Extremely reduced order models that provide the dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
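
    The SEREP reduction at the heart of ERMT fits in a few lines of linear algebra. A minimal sketch on a toy spring-mass chain (the chain, the retained mode count, and the choice of active DOF are illustrative assumptions, not the paper's models):

    ```python
    import numpy as np

    # SEREP: reduce an n-DOF linear component to the active DOF where the
    # nonlinear connections attach, preserving the retained modes exactly.
    n = 6
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # unit spring chain
    M = np.eye(n)                                         # unit masses

    w2, Phi = np.linalg.eigh(K)          # eigenpairs (M is the identity here)
    modes = Phi[:, :2]                   # retain the two lowest modes
    active = [0, n - 1]                  # DOF kept in the reduced model

    T = modes @ np.linalg.pinv(modes[active, :])          # SEREP transformation
    M_r, K_r = T.T @ M @ T, T.T @ K @ T
    eigs = np.sort(np.linalg.eigvals(np.linalg.solve(M_r, K_r)).real)
    print(eigs, w2[:2])                  # reduced model reproduces retained eigenvalues
    ```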

  4. A linear spectral matching technique for retrieving equivalent water thickness and biochemical constituents of green vegetation

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Goetz, Alexander F. H.

    1992-01-01

    Over the last decade, technological advances in airborne imaging spectrometers, having spectral resolution comparable with laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the single spectral channel or channels in the AIS data that best correlated with lignin content as measured by chemical methods. The regression technique takes advantage neither of the spectral shape of the lignin reflectance feature as a diagnostic tool nor of the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 micron region consists of the spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
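
    The linear matching step is an ordinary nonnegative unmixing problem. A minimal sketch with synthetic spectra (the endmember shapes below are invented stand-ins, not AVIRIS or laboratory data):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Model a fresh-leaf spectrum as a nonnegative mix of a dry-leaf spectrum
    # and a liquid-water absorption component, then read off the abundances.
    wav = np.linspace(1.0, 2.5, 200)                # wavelength, microns
    dry = 0.4 + 0.1 * np.sin(3 * wav)               # hypothetical dry-leaf reflectance
    water = np.exp(-((wav - 1.9) / 0.15) ** 2)      # hypothetical water absorption shape
    rng = np.random.default_rng(1)
    leaf = 0.7 * dry + 0.25 * water + 0.01 * rng.normal(size=wav.size)

    E = np.column_stack([dry, water])               # endmember matrix
    coef, resid = nnls(E, leaf)                     # nonnegative least squares
    print("dry and water abundances:", coef, "residual norm:", resid)
    ```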

  5. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.

  6. Estimation of hysteretic damping of structures by stochastic subspace identification

    NASA Astrophysics Data System (ADS)

    Bajrić, Anela; Høgsberg, Jan

    2018-05-01

    Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data representing the random vibrations of systems with hysteresis are used to validate the parameters estimated by the presented identification method at low and high levels of excitation amplitude.
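
    The Bouc-Wen restoring force that the method linearizes is easy to simulate directly, which is how synthetic validation data of this kind are typically produced. A minimal sketch (illustrative parameters; the paper's equivalent linear relaxation model is not reproduced here):

    ```python
    import numpy as np

    # Bouc-Wen oscillator under white noise:
    #   m x'' + c x' + a*k*x + (1-a)*k*z = w(t)
    #   z'  = A x' - beta*|x'|*|z|**(nexp-1)*z - gamma*x'*|z|**nexp
    m, c, k, a = 1.0, 0.1, 1.0, 0.5
    A, beta, gamma, nexp = 1.0, 0.5, 0.5, 1.0
    dt = 0.01
    rng = np.random.default_rng(2)

    x = v = z = 0.0
    xs = []
    for _ in range(200_000):                  # explicit Euler: adequate for a sketch
        w = 0.1 * rng.normal() / np.sqrt(dt)  # discretized white-noise sample
        acc = (w - c * v - a * k * x - (1.0 - a) * k * z) / m
        zdot = A * v - beta * abs(v) * abs(z) ** (nexp - 1.0) * z - gamma * v * abs(z) ** nexp
        x, v, z = x + dt * v, v + dt * acc, z + dt * zdot
        xs.append(x)
    print("rms displacement:", np.std(xs))
    ```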

  7. Advanced analysis technique for the evaluation of linear alternators and linear motors

    NASA Technical Reports Server (NTRS)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  8. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  9. Control Law Design in a Computational Aeroelasticity Environment

    NASA Technical Reports Server (NTRS)

    Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.

    2003-01-01

    A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.

  10. A New Stochastic Equivalent Linearization Implementation for Prediction of Geometrically Nonlinear Vibrations

    NASA Technical Reports Server (NTRS)

    Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.

    1999-01-01

    In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
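
    The stiffness-determination idea can be shown for a single mode: prescribe a few static displacement amplitudes, evaluate the nonlinear restoring force, and solve a small least squares problem for the unknown coefficients. In the sketch below a toy cubic function stands in for the finite element static solve; this illustrates the idea and is not the paper's implementation.

    ```python
    import numpy as np

    def restoring_force(q):              # stand-in for an FE static solve f(q)
        return 1.0 * q + 0.3 * q**2 + 2.0 * q**3

    q = np.array([0.1, -0.1, 0.2])       # prescribed modal amplitudes
    f = restoring_force(q) - 1.0 * q     # subtract the known linear stiffness part
    A = np.column_stack([q**2, q**3])    # unknowns: quadratic and cubic coefficients
    a2, a3 = np.linalg.lstsq(A, f, rcond=None)[0]
    print("quadratic, cubic coefficients:", a2, a3)   # recovers 0.3 and 2.0
    ```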

  11. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence facilitates the development of mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126

  12. Measurements of the neutron dose equivalent for various radiation qualities, treatment machines and delivery techniques in radiation therapy

    NASA Astrophysics Data System (ADS)

    Hälg, R. A.; Besserer, J.; Boschung, M.; Mayer, S.; Lomax, A. J.; Schneider, U.

    2014-05-01

    In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume when the long term health of cancer patients is considered. Because of the high biological effectiveness of neutrons with regard to cancer induction, even small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general, and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.

  13. Measurements of the neutron dose equivalent for various radiation qualities, treatment machines and delivery techniques in radiation therapy.

    PubMed

    Hälg, R A; Besserer, J; Boschung, M; Mayer, S; Lomax, A J; Schneider, U

    2014-05-21

    In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution, which can be considerable for tissues outside of the target volume when the long term health of cancer patients is considered. Because of the high biological effectiveness of neutrons with regard to cancer induction, even small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general, and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.

  14. An equivalent frequency approach for determining non-linear effects on pre-tensioned-cable cross-braced structures

    NASA Astrophysics Data System (ADS)

    Giaccu, Gian Felice

    2018-05-01

    Pre-tensioned cable braces are widely used as bracing systems in various structural typologies. This technology is fundamentally utilized for stiffening purposes in steel and timber structures. The pre-stressing force imparted to the braces provides the system with a remarkable increase in stiffness. On the other hand, the pre-tensioning force in the braces must be properly calibrated in order to satisfactorily meet both serviceability and ultimate limit states. The dynamic properties of these systems are, however, affected by non-linear behavior due to potential slackening of the pre-tensioned braces. In recent years the author has been working on a similar problem regarding the non-linear response of cables in cable-stayed bridges and braced structures. In the present paper a displacement-based approach is used to examine the non-linear behavior of a building system. The methodology operates through linearization and yields an equivalent linearized frequency that approximately characterizes, mode by mode, the dynamic behavior of the system. The equivalent frequency depends on the mechanical characteristics of the system, the pre-tensioning level assigned to the braces and a characteristic vibration amplitude. The proposed approach can be used as a simplified technique capable of linearizing the response of structural systems characterized by non-linearity induced by the slackening of pre-tensioned braces.

  15. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem that is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a series of linear relaxation programming problems. Global convergence has been proved, and results on some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.

  16. Spacecraft nonlinear control

    NASA Technical Reports Server (NTRS)

    Sheen, Jyh-Jong; Bishop, Robert H.

    1992-01-01

    The feedback linearization technique is applied to the problem of spacecraft attitude control and momentum management with control moment gyros (CMGs). The feedback linearization consists of a coordinate transformation, which transforms the system to a companion form, and a nonlinear feedback control law to cancel the nonlinear dynamics resulting in a linear equivalent model. Pole placement techniques are then used to place the closed-loop poles. The coordinate transformation proposed here evolves from three output functions of relative degree four, three, and two, respectively. The nonlinear feedback control law is presented. Stability in a neighborhood of a controllable torque equilibrium attitude (TEA) is guaranteed and this fact is demonstrated by the simulation results. An investigation of the nonlinear control law shows that singularities exist in the state space outside the neighborhood of the controllable TEA. The nonlinear control law is simplified by a standard linearization technique and it is shown that the linearized nonlinear controller provides a natural way to select control gains for the multiple-input, multiple-output system. Simulation results using the linearized nonlinear controller show good performance relative to the nonlinear controller in the neighborhood of the TEA.
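
    The two ingredients named here, a feedback that cancels the nonlinear dynamics and pole placement on the resulting linear equivalent, can be shown on a toy system. A minimal sketch on a pendulum standing in for the spacecraft/CMG dynamics (model and gains are illustrative):

    ```python
    import numpy as np

    # Pendulum theta'' = u - (g/l)*sin(theta). The feedback u = v + (g/l)*sin(theta)
    # cancels the nonlinearity, leaving theta'' = v, a double integrator whose
    # poles are then placed by the linear law v = -k1*theta - k2*theta'.
    g_l = 9.81              # g/l for a unit-length pendulum
    k1, k2 = 4.0, 4.0       # places both closed-loop poles at s = -2

    def control(theta, omega):
        v = -k1 * theta - k2 * omega
        return v + g_l * np.sin(theta)

    theta, omega, dt = 1.0, 0.0, 0.001
    for _ in range(5000):
        u = control(theta, omega)
        alpha = u - g_l * np.sin(theta)   # exact cancellation: alpha equals v
        theta, omega = theta + dt * omega, omega + dt * alpha
    print("theta after 5 s:", theta)      # decays toward the equilibrium
    ```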

  17. A Comparison of Multivariable Control Design Techniques for a Turbofan Engine Control

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Watts, Stephen R.

    1995-01-01

    This paper compares two previously published design procedures for two different multivariable control design techniques for application to a linear model of a jet engine. The two multivariable control design techniques compared were the Linear Quadratic Gaussian with Loop Transfer Recovery (LQG/LTR) and H-Infinity synthesis. The two control design techniques were used with specific previously published design procedures to synthesize controls which would provide equivalent closed loop frequency response for the primary control loops while assuring adequate loop decoupling. The resulting controllers were then reduced in order to minimize the programming and data storage requirements for a typical implementation. The reduced order linear controllers designed by each method were combined with the linear model of an advanced turbofan engine and the system performance was evaluated for the continuous linear system. Included in the performance analysis are the resulting frequency and transient responses as well as actuator usage and rate capability for each design method. The controls were also analyzed for robustness with respect to structured uncertainties in the unmodeled system dynamics. The two controls were then compared for performance capability and hardware implementation issues.

  18. Efficient techniques for forced response involving linear modal components interconnected by discrete nonlinear connection elements

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; O'Callahan, John

    2009-01-01

    Generally, response analysis of systems containing discrete nonlinear connection elements such as typical mounting connections requires the physical finite element system matrices to be used in a direct integration algorithm to compute the nonlinear response analysis solution. Due to the large size of these physical matrices, forced nonlinear response analysis requires significant computational resources. Usually, the individual components of the system are analyzed and tested as separate components, and their individual behavior may essentially be linear when compared to the total assembled system. However, the joining of these linear subsystems using highly nonlinear connection elements causes the entire system to become nonlinear. It would be advantageous if these linear modal subsystems could be utilized in the forced nonlinear response analysis, since much effort has usually been expended in fine tuning and adjusting the analytical models to reflect the tested subsystem configuration. Several more efficient techniques have been developed to address this class of problem. Three of these techniques, the equivalent reduced model technique (ERMT), the modal modification response technique (MMRT), and the component element method (CEM), are presented in this paper and compared to traditional methods.

  19. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions to linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which was developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
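
    The multiset flavor of the similarity evaluation can be sketched with character-level tokens. The SCCS tokenizer and scoring weights are not spelled out in this abstract, so the code below is only an assumed, simplified stand-in:

    ```python
    from collections import Counter

    # Bag-of-tokens similarity between two equation steps: multiset
    # intersection over multiset union of their characters.
    def tokens(eq):
        return Counter(eq.replace(" ", ""))

    def similarity(a, b):
        ta, tb = tokens(a), tokens(b)
        return sum((ta & tb).values()) / sum((ta | tb).values())

    print(similarity("2x+4=10", "2x=10-4"))   # structurally close steps score high
    ```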

  20. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
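
    The equivalent-source step is a linear least squares fit to a kernel matrix. A minimal sketch on a flat patch with a 1/r monopole kernel (illustrative geometry standing in for the spherical-earth formulation):

    ```python
    import numpy as np

    # Fit point-source strengths so their summed fields reproduce observed
    # anomalies; the same sources then predict the field anywhere (continuation,
    # derivatives, component transformations, ...).
    rng = np.random.default_rng(3)
    obs = rng.uniform(0, 10, size=(50, 2))        # observation points at z = 0
    src = np.array([[x, y] for x in range(1, 10, 2) for y in range(1, 10, 2)])

    def kernel(points, sources, depth):           # 1/r monopole kernel
        d = points[:, None, :] - sources[None, :, :]
        return 1.0 / np.sqrt((d ** 2).sum(-1) + depth ** 2)

    anomaly = kernel(obs, np.array([[5.0, 5.0]]), 3.0)[:, 0]   # synthetic data
    G = kernel(obs, src, 2.0)                     # sources buried at depth 2
    strengths = np.linalg.lstsq(G, anomaly, rcond=None)[0]
    print("fit residual:", np.linalg.norm(G @ strengths - anomaly))
    ```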

  1. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  2. On the Use of Equivalent Linearization for High-Cycle Fatigue Analysis of Geometrically Nonlinear Structures

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.

    2003-01-01

    The use of stress predictions from equivalent linearization analyses in the computation of high-cycle fatigue life is examined. Stresses so obtained differ in behavior from the fully nonlinear analysis in both spectral shape and amplitude. Consequently, fatigue life predictions made using this data will be affected. Comparisons of fatigue life predictions based upon the stress response obtained from equivalent linear and numerical simulation analyses are made to determine the range over which the equivalent linear analysis is applicable.

  3. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  4. Calibrating Nonlinear Soil Material Properties for Seismic Analysis Using Soil Material Properties Intended for Linear Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Seismic analysis of nuclear structures is routinely performed using guidance provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998).” This document, which is currently under revision, provides detailed guidance on linear seismic soil-structure-interaction (SSI) analysis of nuclear structures. To accommodate the linear analysis, soil material properties are typically developed as shear modulus and damping ratio versus cyclic shear strain amplitude. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain SSI analysis. To accommodate the nonlinear analysis, a more appropriate form of the soil material properties includes shear stress and energy absorbed per cycle versus shear strain. Ideally, nonlinear soil model material properties would be established with soil testing appropriate for the nonlinear constitutive model being used. However, much of the soil testing done for SSI analysis is performed for use with linear analysis techniques. Consequently, a method is described in this paper that uses soil test data intended for linear analysis to develop nonlinear soil material properties. To produce nonlinear material properties that are equivalent to the linear material properties, the linear and nonlinear model hysteresis loops are considered. For equivalent material properties, the shear stress at peak shear strain and energy absorbed per cycle should match when comparing the linear and nonlinear model hysteresis loops. Consequently, nonlinear material properties are selected based on these criteria.
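
    The two matching criteria reduce to simple closed forms: for an equivalent linear soil with secant shear modulus G and damping ratio D at cyclic strain amplitude gamma, the nonlinear loop must reproduce the peak stress and the dissipated energy per cycle. A minimal sketch using the standard viscous-loop formulas (SI units assumed):

    ```python
    import numpy as np

    def matching_targets(G, D, gamma):
        """Targets a nonlinear soil model must hit at strain amplitude gamma."""
        tau_peak = G * gamma                               # stress at peak strain
        energy_per_cycle = 2.0 * np.pi * D * G * gamma**2  # loop area, from D = dW/(4*pi*Ws)
        return tau_peak, energy_per_cycle

    print(matching_targets(G=50e6, D=0.05, gamma=1e-3))
    ```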

  5. Equivalent linearization for fatigue life estimates of a nonlinear structure

    NASA Technical Reports Server (NTRS)

    Miles, R. N.

    1989-01-01

    An analysis is presented of the suitability of the method of equivalent linearization for estimating the fatigue life of a nonlinear structure. Comparisons are made of the fatigue life of a nonlinear plate as predicted using conventional equivalent linearization and three other more accurate methods. The excitation of the plate is assumed to be Gaussian white noise and the plate response is modeled using a single resonant mode. The methods used for comparison consist of numerical simulation, a probabilistic formulation, and a modification of equivalent linearization which avoids the usual assumption that the response process is Gaussian. Remarkably close agreement is obtained between all four methods, even for cases where the response is significantly nonlinear.

  6. Geometric foundations of the theory of feedback equivalence

    NASA Technical Reports Server (NTRS)

    Hermann, R.

    1987-01-01

    A description of feedback control is presented within the context of differential equations, differential geometry, and Lie theory. Work related to the integration of differential geometry with the control techniques of feedback linearization is summarized. Particular attention is given to the application of the theory of vector field systems. Feedback invariants for control systems in state space form are also addressed.

  7. Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2001-01-01

    This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.

  8. Nonlinear random response prediction using MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.

    1993-01-01

    An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstraction Program (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.

  9. Capelli bitableaux and Z-forms of general linear Lie superalgebras.

    PubMed Central

    Brini, A; Teolis, A G

    1990-01-01

    The combinatorics of the enveloping algebra UQ(pl(L)) of the general linear Lie superalgebra of a finite dimensional Z2-graded Q-vector space is studied. Three non-equivalent Z-forms of UQ(pl(L)) are introduced: one of these Z-forms is a version of the Kostant Z-form and the others are Lie algebra analogs of Rota and Stein's straightening formulae for the supersymmetric algebra Super[L ⊕ P] and for its dual Super[L* ⊕ P*]. The method is based on an extension of Capelli's technique of variabili ausiliarie (auxiliary variables) to algebras containing positively and negatively signed elements. PMID:11607048

  10. A general dual-bolus approach for quantitative DCE-MRI.

    PubMed

    Kershaw, Lucy E; Cheng, Hai-Ling Margaret

    2011-02-01

    To present a dual-bolus technique for quantitative dynamic contrast-enhanced MRI (DCE-MRI) and show that it can give an arterial input function (AIF) measurement equivalent to that from a single-bolus protocol. Five rabbits were imaged using a dual-bolus technique applicable for high-resolution DCE-MRI, incorporating a time resolved imaging of contrast kinetics (TRICKS) sequence for rapid temporal sampling. AIFs were measured from both the low-dose prebolus and the high-dose main bolus in the abdominal aorta. In one animal, TRICKS and fast spoiled gradient echo (FSPGR) acquisitions were compared. The scaled prebolus AIF was shown to match the main bolus AIF, with 95% confidence intervals overlapping for fits of gamma-variate functions to the first pass and linear fits to the washout phase, with the exception of one case. The AIFs measured using TRICKS and FSPGR were shown to be equivalent in one animal. The proposed technique can capture even the rapid circulation kinetics in the rabbit aorta, and the scaled prebolus AIF is equivalent to the AIF from a high-dose injection. This allows separate measurements of the AIF and tissue uptake curves, meaning that each curve can then be acquired using a protocol tailored to its specific requirements.
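
    The first-pass fitting mentioned in the results can be sketched with a gamma-variate model. A minimal sketch on synthetic data (the functional form is the standard gamma-variate; the 10:1 dose ratio and all numbers are assumptions, not values from the study):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, a, b):
        dt = np.clip(t - t0, 1e-9, None)          # essentially zero before bolus arrival
        return A * dt**a * np.exp(-dt / b)

    t = np.linspace(0, 60, 200)                   # seconds
    rng = np.random.default_rng(4)
    meas = gamma_variate(t, 5.0, 5.0, 2.0, 4.0) + 0.05 * rng.normal(size=t.size)

    popt, _ = curve_fit(gamma_variate, t, meas, p0=[1.0, 4.0, 1.5, 3.0])
    prebolus_scaled = 10.0 * gamma_variate(t, *popt)   # scale by the assumed dose ratio
    print("fitted parameters:", np.round(popt, 2))
    ```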

  11. The application of Green's theorem to the solution of boundary-value problems in linearized supersonic wing theory

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Lomax, Harvard

    1950-01-01

    Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.

  12. Detection and Estimation of Multi-Pulse LFMCW Radar Signals

    DTIC Science & Technology

    2010-01-01

    The Wigner-Ville Hough transform (WVHT), the Hough transform (HT) of the Wigner-Ville distribution (WVD), has been shown to be equivalent to the generalized likelihood ratio test (GLRT). One of the most prominent techniques studied in the literature, the WVHT has been applied to detect and estimate the parameters of linear frequency-modulated continuous-wave (LFMCW) radar signals [8], [9].

  13. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second order cone programming problem. A neural network model is then constructed for solving the obtained convex second order cone problem. Employing Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and it is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique.

  14. Biological equivalence between LDR and PDR in cervical cancer: multifactor analysis using the linear-quadratic model.

    PubMed

    Couto, José Guilherme; Bravo, Isabel; Pirraco, Rui

    2011-09-01

    The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) in cervical cancer regarding the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. In the first phase we studied the influence of the pulse dose and the pulse time in the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD(2)) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). In the analyses of the influence of PDR parameters in the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with 1 hour interval between pulses, the pulse time did not influence significantly the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2) we concluded that they were not equivalent, because in the PDR group the total EQD(2) for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose.
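
    The EQD(2) quantity used throughout follows directly from the linear-quadratic model. A minimal sketch of the fractionated form (continuous LDR/PDR delivery additionally requires a repair-dependent dose-rate factor, omitted here):

    ```python
    def eqd2(total_dose, dose_per_fraction, alpha_beta):
        """Equivalent dose in 2 Gy fractions under the linear-quadratic model."""
        return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

    # e.g. 30 Gy given in 5 Gy fractions, tumor alpha/beta = 10 Gy -> 37.5 Gy
    print(eqd2(30.0, 5.0, 10.0))
    ```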

  15. Biological equivalence between LDR and PDR in cervical cancer: multifactor analysis using the linear-quadratic model

    PubMed Central

    Bravo, Isabel; Pirraco, Rui

    2011-01-01

    Purpose The purpose of this work was the biological comparison between Low Dose Rate (LDR) and Pulsed Dose Rate (PDR) in cervical cancer regarding the discontinuation of the afterloading system used for the LDR treatments at our Institution since December 2009. Material and methods In the first phase we studied the influence of the pulse dose and the pulse time in the biological equivalence between LDR and PDR treatments using the Linear Quadratic Model (LQM). In the second phase, the equivalent dose in 2 Gy/fraction (EQD2) for the tumor, rectum and bladder in treatments performed with both techniques was evaluated and statistically compared. All evaluated patients had stage IIB cervical cancer and were treated with External Beam Radiotherapy (EBRT) plus two Brachytherapy (BT) applications. Data were collected from 48 patients (26 patients treated with LDR and 22 patients with PDR). Results In the analyses of the influence of PDR parameters in the biological equivalence between LDR and PDR treatments (Phase 1), it was calculated that if the pulse dose in PDR was kept equal to the LDR dose rate, a small therapeutic loss was expected. If the pulse dose was decreased, the therapeutic window became larger, but a correction in the prescribed dose was necessary. In PDR schemes with 1 hour interval between pulses, the pulse time did not influence significantly the equivalent dose. In the comparison between the groups treated with LDR and PDR (Phase 2) we concluded that they were not equivalent, because in the PDR group the total EQD2 for the tumor, rectum and bladder was smaller than in the LDR group; the LQM estimated that a correction in the prescribed dose of 6% to 10% was necessary to avoid therapeutic loss. Conclusions A correction in the prescribed dose was necessary; this correction should be achieved by calculating the PDR dose equivalent to the desired LDR total dose. PMID:23346123

  16. Multigrid methods in structural mechanics

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Bigelow, C. A.; Taasan, S.; Hussaini, M. Y.

    1986-01-01

    Although the application of multigrid methods to the equations of elasticity has been suggested, few such applications have been reported in the literature. In the present work, multigrid techniques are applied to the finite element analysis of a simply supported Bernoulli-Euler beam, and various aspects of the multigrid algorithm are studied and explained in detail. In this study, six grid levels were used to model half the beam. With linear prolongation and sequential ordering, the multigrid algorithm yielded results which were of machine accuracy with work equivalent to 200 standard Gauss-Seidel iterations on the fine grid. Also with linear prolongation and sequential ordering, the V(1,n) cycle with n greater than 2 yielded better convergence rates than the V(n,1) cycle. The restriction and prolongation operators were derived based on energy principles. Conserving energy during the inter-grid transfers required that the prolongation operator be the transpose of the restriction operator, and led to improved convergence rates. With energy-conserving prolongation and sequential ordering, the multigrid algorithm yielded results of machine accuracy with a work equivalent to 45 Gauss-Seidel iterations on the fine grid. The red-black ordering of relaxations yielded solutions of machine accuracy in a single V(1,1) cycle, which required work equivalent to about 4 iterations on the finest grid level.

  17. Calculative techniques for transonic flows about certain classes of wing body combinations

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1972-01-01

    Procedures based on the method of local linearization and transonic equivalence rule were developed for predicting properties of transonic flows about certain classes of wing-body combinations. The procedures are applicable to transonic flows with free stream Mach number in the ranges near one, below the lower critical and above the upper critical. Theoretical results are presented for surface and flow field pressure distributions for both lifting and nonlifting situations.

  18. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  19. Wave propagation in equivalent continuums representing truss lattice materials

    DOE PAGES

    Messner, Mark C.; Barham, Matthew I.; Kumar, Mukul; ...

    2015-07-29

    Stiffness scales linearly with density in stretch-dominated lattice meta-materials, offering the possibility of very light yet very stiff structures. Current additive manufacturing techniques can assemble structures from lattice materials, but the design of such structures will require accurate, efficient simulation methods. Equivalent continuum models have several advantages over discrete truss models of stretch dominated lattices, including computational efficiency and ease of model construction. However, the development of an equivalent model suitable for representing the dynamic response of a periodic truss in the small deformation regime is complicated by microinertial effects. This study derives a dynamic equivalent continuum model for periodic truss structures suitable for representing long-wavelength wave propagation and verifies it against the full Bloch wave theory and detailed finite element simulations. The model must incorporate microinertial effects to accurately reproduce long wavelength characteristics of the response such as anisotropic elastic soundspeeds. Finally, the formulation presented here improves upon previous work by preserving equilibrium at truss joints for simple lattices and by improving numerical stability through the elimination of vertices in the effective yield surface.

  20. Impacts analysis of car following models considering variable vehicular gap policies

    NASA Astrophysics Data System (ADS)

    Xin, Qi; Yang, Nan; Fu, Rui; Yu, Shaowei; Shi, Zhongke

    2018-07-01

    Because of the important role they play in vehicles' adaptive cruise control systems, variable vehicular gap policies were incorporated into the full velocity difference model (FVDM) to investigate traffic flow properties. In this paper, two new car following models are put forward by taking the constant time headway (CTH) policy and the variable time headway (VTH) policy into the optimal velocity function, respectively. Through steady state analysis of the new models, an equivalent optimal velocity function is defined. To determine the linear stability conditions of the new models, we introduce equivalent expressions of the safe vehicular gap and then apply small amplitude perturbation analysis and long-wave expansion techniques. Additionally, first order approximate solutions of the new models are derived in the stable region by transforming the models into typical Burgers partial differential equations with the reductive perturbation method. The FVDM based numerical simulations indicate that variable vehicular gap policies with proper parameters directly contribute to improving the stability of traffic flow and avoiding unstable traffic phenomena.
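
    The CTH policy can be dropped into a full velocity difference model in a few lines. A minimal sketch with illustrative parameters (not the paper's calibration): the optimal velocity saturates once the gap exceeds the constant-time-headway target s0 + T*v.

    ```python
    import numpy as np

    # FVDM with a CTH optimal velocity: a_i = kappa*(V(s_i) - v_i) + lam*dv_i.
    vmax, s0, T, kappa, lam, dt = 30.0, 2.0, 1.5, 0.6, 0.5, 0.1

    def V(s):                                    # CTH optimal velocity function
        return vmax * np.clip((s - s0) / (vmax * T), 0.0, 1.0)

    n = 10
    x = np.arange(n)[::-1] * 25.0                # positions, leader first
    v = np.full(n, 20.0)
    for _ in range(1000):
        gap = np.r_[np.inf, x[:-1] - x[1:]]      # leader sees open road
        dv = np.r_[0.0, v[:-1] - v[1:]]          # leader speed minus own speed
        a = kappa * (V(gap) - v) + lam * dv
        v = np.maximum(v + dt * a, 0.0)
        x = x + dt * v
    print("speeds after 100 s:", np.round(v, 2))
    ```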

  1. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    PubMed

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

    In the current research, the muscle equivalent linear damping coefficient, which is introduced through the force-velocity relation in a muscle model, and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, one using a linear force-stiffness relationship (Hill-type model) and one using a nonlinear relationship, have been implemented. The OpenSim platform was used for verification of the model. Isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.

  2. Linear phase compressive filter

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.

  3. Steady-state global optimization of metabolic non-linear dynamic models through recasting into power-law canonical models

    PubMed Central

    2011-01-01

    Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
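
    The recasting step can be made concrete on a Michaelis-Menten rate (an example of mine, not one from the paper): an auxiliary variable absorbs the saturable denominator so that every rate becomes a product of power laws, as the GMA form requires.

    ```latex
    % Recasting v = Vmax*S/(Km + S) into GMA power-law form: set z = Km + S.
    \begin{align*}
      v      &= V_{\max}\, S\, (K_m + S)^{-1}
                \;\longrightarrow\;
                v = V_{\max}\, S\, z^{-1}, \\
      \dot z &= \dot S, \qquad z(0) = K_m + S(0),
    \end{align*}
    % an exactly equivalent system in which every rate is a power-law product.
    ```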

  4. A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1976-01-01

    An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.

  5. A Riemannian geometric mapping technique for identifying incompressible equivalents to subsonic potential flows

    NASA Astrophysics Data System (ADS)

    German, Brian Joseph

    This research develops a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map the given subsonic flow into a canonical Laplacian flow with the same boundary conditions. The effect of the transformation is to distort both the immersed profile shape and the domain interior nonuniformly as a function of local flow properties. The method represents the full nonlinear generalization of the classical methods of Prandtl-Glauert and Karman-Tsien. Unlike the classical methods which are "corrections," this method gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by an observed analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the d'Alembert wave equation under Lorentz transformations. This analogy is well known in an operational sense, being leveraged widely in linear unsteady aerodynamics and acoustics, stemming largely from the work of Kussner. Whereas elements of the special theory can be invoked for compressibility effects that are linear and global in nature, the question posed in this work was whether other mathematical techniques from the realm of relativity theory could be used to similar advantage for effects that are nonlinear and local. This line of thought led to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. A gauge transformation is used to geometrize compressibility through the metric tensor of the underlying space to produce an equivalent incompressible flow that lives not on a plane but on a curved surface. In this sense, forces owing to compressibility can be ascribed to the geometry of space in much the same way that general relativity ascribes gravitational forces to the curvature of space-time. Although the analogy with general relativity is fruitful, it is important not to overstate the similarities between compressibility and the physics of gravity, as the interest for this thesis is primarily in the mathematical framework and not physical phenomenology or epistemology. The thesis presents the philosophy and theory for the transformation method followed by a numerical method for practical solutions of equivalent incompressible flows over arbitrary closed profiles. The numerical method employs an iterative approach involving the solution of the equivalent incompressible flow with a panel method, the calculation of the metric tensor for the gauge transformation, and the solution of the curvilinear coordinate mapping to the canonical flow with a finite difference approach for the elliptic boundary value problem. This method is demonstrated for non-circulatory flow over a circular cylinder and both symmetric and lifting flows over a NACA 0012 profile. Results are validated with accepted subcritical full potential test cases available in the literature. For chord-preserving mapping boundary conditions, the results indicate that the equivalent incompressible profiles thicken with Mach number and develop a leading edge droop with increased angle of attack. Two promising areas of potential applicability of the method have been identified. 
The first is in airfoil inverse design methods leveraging incompressible flow knowledge including heuristics and empirical data for the potential field effects on viscous phenomena such as boundary layer transition and separation. The second is in aerodynamic testing using distorted similarity-scaled models.

  6. A feedback linearization approach to spacecraft control using momentum exchange devices. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Dzielski, John Edward

    1988-01-01

    Recent developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces can be used with nonlinear feedback to transform certain nonlinear ordinary differential equations into equivalent linear equations. These feedback linearization techniques are applied to resolve two problems arising in the control of spacecraft equipped with control moment gyroscopes (CMGs). The first application involves the computation of rate commands for the gimbals that rotate the individual gyroscopes to produce commanded torques on the spacecraft. The second application is to the long-term management of stored momentum in the system of control moment gyroscopes using environmental torques acting on the vehicle. An approach to distributing control effort among a group of redundant actuators is described that uses feedback linearization techniques to parameterize sets of controls which influence a specified subsystem in a desired way. The approach is adapted for use in spacecraft control with double-gimballed gyroscopes to produce an algorithm that avoids problematic gimbal configurations by approximating sets of gimbal rates that drive CMG rotors into desirable configurations. The momentum management problem is stated as a trajectory optimization problem with a nonlinear dynamical constraint. Feedback linearization and collocation are used to transform this problem into an unconstrained nonlinear program. The approach to trajectory optimization is fast and robust. A number of examples are presented showing applications to the proposed NASA space station.
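
    To make the core idea concrete, here is a minimal feedback-linearization sketch for a pendulum rather than the thesis's CMG problem; the plant, gains, and parameter values are all hypothetical. Cancelling the known nonlinearity leaves an equivalent linear double integrator, which an ordinary linear law then stabilizes.

    ```python
    import math

    # Pendulum: th'' = -(g/l)*sin(th) + u/(m*l^2). The feedback law
    #   u = m*l^2 * (v + (g/l)*sin(th))
    # cancels the gravity nonlinearity, leaving the equivalent linear system
    # th'' = v, with v chosen by a PD law. All values are illustrative.
    G, L_ROD, M_BOB = 9.81, 1.0, 1.0
    KP, KD = 4.0, 4.0

    def control(th, om, th_ref=0.0):
        v = -KP * (th - th_ref) - KD * om      # linear design in new coordinates
        return M_BOB * L_ROD**2 * (v + (G / L_ROD) * math.sin(th))

    th, om, dt = 1.0, 0.0, 1e-3                # start 1 rad from the target
    for _ in range(10_000):                    # 10 s of forward-Euler simulation
        u = control(th, om)
        acc = -(G / L_ROD) * math.sin(th) + u / (M_BOB * L_ROD**2)
        th, om = th + dt * om, om + dt * acc
    print(f"angle after 10 s: {th:.5f} rad")   # decays toward 0
    ```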

  7. Cardinal Equivalence of Small Number in Young Children.

    ERIC Educational Resources Information Center

    Kingma, J.; Roelinga, U.

    1982-01-01

    Children completed three types of equivalent cardination tasks which assessed the influence of different stimulus configurations (linear, linear-nonlinear, and nonlinear), and density of object spacing. Prior results reported by Siegel, Brainerd, and Gelman and Gallistel were not replicated. Implications for understanding cardination concept…

  8. Incomplete data based parameter identification of nonlinear and time-variant oscillators with fractional derivative elements

    NASA Astrophysics Data System (ADS)

    Kougioumtzoglou, Ioannis A.; dos Santos, Ketson R. M.; Comerford, Liam

    2017-09-01

    Various system identification techniques exist in the literature that can handle non-stationary measured time-histories, or cases of incomplete data, or address systems following a fractional calculus modeling. However, there are not many (if any) techniques that can address all three aforementioned challenges simultaneously in a consistent manner. In this paper, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet based frequency response functions and related oscillator parameters. Several linear and nonlinear time-variant systems with fractional derivative elements are used as numerical examples to demonstrate the reliability of the technique even in cases of noise corrupted and incomplete data.

  9. Linear phase compressive filter

    DOEpatents

    McEwan, T.E.

    1995-06-06

    A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters, each low pass filter having a series-coupled inductance (L) and, to ground, a reverse-biased, voltage-dependent varactor diode which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line, and capacitance is mapped from the linear case using a large-signal equivalent of a nonlinear transmission line. 2 figs.

  10. Reduction of a linear complex model for respiratory system during Airflow Interruption.

    PubMed

    Jablonski, Ireneusz; Mroczka, Janusz

    2010-01-01

    The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - a reduced analog for the interrupter technique - is especially worthy of notice, as it fills a major gap in occlusional measurements, which typically use simple one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial, and elastic properties, can be perceived as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamic behavior of the respiratory system in response to a quasi-step excitation by valve closure.

  11. A Monolithic CMOS Magnetic Hall Sensor with High Sensitivity and Linearity Characteristics

    PubMed Central

    Huang, Haiyun; Wang, Dejun; Xu, Yue

    2015-01-01

    This paper presents a fully integrated linear Hall sensor by means of 0.8 μm high voltage complementary metal-oxide semiconductor (CMOS) technology. This monolithic Hall sensor chip features a highly sensitive horizontal switched Hall plate and an efficient signal conditioner using a dynamic offset cancellation technique. An improved cross-like Hall plate achieves high magnetic sensitivity and low offset. A new spinning current modulator stabilizes the quiescent output voltage and improves the reliability of the signal conditioner. The tested results show that at a 5 V supply voltage, the maximum Hall output voltage of the monolithic Hall sensor microsystem is up to ±2.1 V, and the linearity of the Hall output voltage is higher than 99% in the magnetic flux density range from ±5 mT to ±175 mT. The output equivalent residual offset is 0.48 mT and the static power consumption is 20 mW. PMID:26516864

  12. A Monolithic CMOS Magnetic Hall Sensor with High Sensitivity and Linearity Characteristics.

    PubMed

    Huang, Haiyun; Wang, Dejun; Xu, Yue

    2015-10-27

    This paper presents a fully integrated linear Hall sensor by means of 0.8 μm high voltage complementary metal-oxide semiconductor (CMOS) technology. This monolithic Hall sensor chip features a highly sensitive horizontal switched Hall plate and an efficient signal conditioner using a dynamic offset cancellation technique. An improved cross-like Hall plate achieves high magnetic sensitivity and low offset. A new spinning current modulator stabilizes the quiescent output voltage and improves the reliability of the signal conditioner. The tested results show that at a 5 V supply voltage, the maximum Hall output voltage of the monolithic Hall sensor microsystem is up to ±2.1 V, and the linearity of the Hall output voltage is higher than 99% in the magnetic flux density range from ±5 mT to ±175 mT. The output equivalent residual offset is 0.48 mT and the static power consumption is 20 mW.

  13. Experimental demonstration of deep frequency modulation interferometry.

    PubMed

    Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán

    2016-01-25

    Experiments for space and ground-based gravitational wave detectors often require a large dynamic range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precisions are available, but they require complex optical set-ups, limiting their scalability for multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal arm length interferometers with a non-linear fit algorithm. We have tested the technique in a Michelson and a Mach-Zehnder interferometer topology, respectively, demonstrated continuous phase tracking of a moving mirror, and achieved a performance equivalent to a displacement sensitivity of 250 pm/√Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time series fitting of the extracted interference signals, we measured that the linearity of the laser frequency modulation is on the order of 2% for the laser source used.

  14. Use of AMMI and linear regression models to analyze genotype-environment interaction in durum wheat.

    PubMed

    Nachit, M M; Nachit, G; Ketata, H; Gauch, H G; Zobel, R W

    1992-03-01

    The joint durum wheat (Triticum turgidum L var 'durum') breeding program of the International Maize and Wheat Improvement Center (CIMMYT) and the International Center for Agricultural Research in the Dry Areas (ICARDA) for the Mediterranean region employs extensive multilocation testing. Multilocation testing produces significant genotype-environment (GE) interaction that reduces the accuracy of estimating yield and selecting appropriate germ plasm. The sum of squares (SS) of the GE interaction was partitioned by linear regression techniques into joint, genotypic, and environmental regressions, and by the Additive Main effects and Multiplicative Interactions (AMMI) model into five significant Interaction Principal Component Axes (IPCA). The AMMI model was more effective in partitioning the interaction SS than the linear regression technique. The SS contained in the AMMI model was 6 times higher than the SS for all three regressions. Postdictive assessment recommended the use of the first five IPCA axes, while predictive assessment recommended AMMI1 (main effects plus IPCA1). After elimination of random variation, AMMI1 estimates for genotypic yields within sites were more precise than unadjusted means. This increased precision was equivalent to increasing the number of replications by a factor of 3.7.

  15. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    NASA Astrophysics Data System (ADS)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise-torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.
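
    For readers unfamiliar with the damage equivalent load metric used above, the sketch below evaluates the usual definition from rainflow-counted cycles; the cycle table, S-N slope, and reference cycle count are invented for illustration and are not Bladed outputs.

    ```python
    import numpy as np

    # Damage equivalent load from rainflow-counted load cycles:
    #   DEL = (sum_i n_i * S_i^m / N_ref)^(1/m),
    # where S_i are cycle ranges, n_i their counts, and m the S-N slope.
    def damage_equivalent_load(ranges, counts, m=10.0, n_ref=1e7):
        r, n = np.asarray(ranges), np.asarray(counts)
        return (np.sum(n * r**m) / n_ref) ** (1.0 / m)

    ranges = [1200.0, 800.0, 400.0]   # load ranges, e.g. kNm (made up)
    counts = [2e4, 2e5, 2e6]          # cycles counted at each range (made up)
    print(f"DEL = {damage_equivalent_load(ranges, counts):.0f} kNm")
    ```

    In practice the ranges and counts come from rainflow counting the simulated load time series, and m is chosen for the material (values around 10 are common for composite blades).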

  16. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  17. Cotton-type and joint invariants for linear elliptic systems.

    PubMed

    Aslam, A; Mahomed, F M

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that the Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results.

  18. Cotton-Type and Joint Invariants for Linear Elliptic Systems

    PubMed Central

    Aslam, A.; Mahomed, F. M.

    2013-01-01

    Cotton-type invariants for a subclass of a system of two linear elliptic equations, obtainable from a complex base linear elliptic equation, are derived both by splitting the corresponding complex Cotton invariants of the base complex equation and from the Laplace-type invariants of the system of linear hyperbolic equations equivalent to the system of linear elliptic equations via linear complex transformations of the independent variables. It is shown that the Cotton-type invariants derived from these two approaches are identical. Furthermore, Cotton-type and joint invariants for a general system of two linear elliptic equations are also obtained from the Laplace-type and joint invariants for a system of two linear hyperbolic equations equivalent to the system of linear elliptic equations by complex changes of the independent variables. Examples are presented to illustrate the results. PMID:24453871

  19. Two-port transmission line technique for dielectric property characterization of polymer electrolyte membranes.

    PubMed

    Lu, Zijie; Lanagan, Michael; Manias, Evangelos; Macdonald, Digby D

    2009-10-15

    Performance improvements of perfluorosulfonic acid membranes, such as Nafion and Flemion, underline a need for dielectric characterization of these materials toward a quantitative understanding of the dynamics of water molecules and protons within the membranes. In this article, a two-port transmission line technique for measuring the complex permittivity spectra of polymeric electrolytes in the microwave region is described, and the algorithms for permittivity determination are presented. The technique is experimentally validated with liquid water and polytetrafluoroethylene film, whose dielectric properties are well known. Further, the permittivity spectra of dry and hydrated Flemion SH150 membranes are measured and compared to those of Nafion 117. Two water relaxation modes are observed in the microwave region (0.045-26 GHz) at 25 °C. The higher-frequency process observed is identified as the cooperative relaxation of bulk-like water, whose amount was found to increase linearly with water content in the polymer. The lower-frequency process, characterized by longer relaxation times in the range of 20-70 ps, is attributed to water molecules that are loosely bound to sulfonate groups. The loosely bound water amount was found to increase with hydration level at low water content and to level off at higher water contents. Flemion SH150, which has an equivalent weight of 909 g/equiv, displays higher dielectric strengths for both of these water modes as compared to Nafion 117 (equivalent weight of 1100 g/equiv), which probably reflects the effect of equivalent weight on the polymers' hydrated structure, and in particular its effect on the extended ionic cluster domains.

  20. A single-degree-of-freedom model for non-linear soil amplification

    USGS Publications Warehouse

    Erdik, Mustafa Ozder

    1979-01-01

    For proper understanding of soil behavior during earthquakes and assessment of a realistic surface motion, studies of the large-strain dynamic response of non-linear hysteretic soil systems are indispensable. Most of the presently available studies are based on the assumption that the response of a soil deposit is mainly due to the upward propagation of horizontally polarized shear waves from the underlying bedrock. Equivalent-linear procedures, currently in common use in non-linear soil response analysis, provide a simple approach and have been favorably compared with the actual recorded motions in some particular cases. Strain compatibility in these equivalent-linear approaches is maintained by selecting values of shear moduli and damping ratios in accordance with the average soil strains, in an iterative manner. Truly non-linear constitutive models with complete strain compatibility have also been employed. The equivalent-linear approaches often raise some doubt as to the reliability of their results concerning the system response in high frequency regions, where the equivalent-linear methods may underestimate the surface motion by as much as a factor of two or more. Although such studies are complete in their methods of analysis, they inevitably provide applications pertaining only to a few specific soil systems and do not lead to general conclusions about soil behavior. This report attempts to provide a general picture of soil response through the use of a single-degree-of-freedom non-linear hysteretic model. Although the investigation is based on a specific type of nonlinearity and a set of dynamic soil properties, the method described does not limit itself to these assumptions and is equally applicable to other types of nonlinearity and soil parameters.
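
    The iterative, strain-compatible selection of modulus and damping described above can be sketched as follows. The hyperbolic modulus-reduction and damping curves and the response model are placeholders (a real analysis derives the strain demand from wave propagation through the deposit), so this shows only the structure of the equivalent-linear loop, not any particular code.

    ```python
    # Equivalent-linear iteration sketch (SHAKE-style): pick G and damping,
    # compute the strain they imply, update G and damping from the curves,
    # repeat until strain-compatible. All parameter values are illustrative.
    GAMMA_REF = 1e-3        # reference strain of the hyperbolic model
    G_MAX = 60e6            # small-strain shear modulus, Pa

    def g_over_gmax(gamma):
        return 1.0 / (1.0 + gamma / GAMMA_REF)

    def damping(gamma):
        return 0.02 + 0.18 * (gamma / GAMMA_REF) / (1.0 + gamma / GAMMA_REF)

    def effective_strain(G, xi):
        # Placeholder response model: strain demand grows as the soil softens.
        return 2e-3 * (G_MAX / G) ** 0.5 / (1.0 + 10.0 * xi)

    G, xi = G_MAX, 0.02
    for it in range(50):
        gamma = 0.65 * effective_strain(G, xi)   # 0.65 = usual effective-strain ratio
        G_new, xi_new = G_MAX * g_over_gmax(gamma), damping(gamma)
        if abs(G_new - G) / G < 1e-6:
            break
        G, xi = G_new, xi_new
    print(f"iteration {it}: G/Gmax = {G / G_MAX:.3f}, damping = {xi:.3f}")
    ```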

  1. Analytical Validation of Accelerator Mass Spectrometry for Pharmaceutical Development: the Measurement of Carbon-14 Isotope Ratio.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keck, B D; Ognibene, T; Vogel, J S

    2010-02-05

    Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of ¹⁴C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS are constantly being explored and expanded, this work underpins many areas of pharmaceutical development, including drug metabolism as well as absorption, distribution, and excretion of pharmaceutical compounds, as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the ¹⁴C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the ¹⁴C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude, with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision, expressed as coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents), which for a typical small molecule labeled at 10% ¹⁴C incorporation corresponds to 30 fg equivalents. AMS provides a sensitive, accurate, and precise method of measuring drug compounds in biological matrices.

  2. Experimental demonstration of four-photon entanglement and high-fidelity teleportation.

    PubMed

    Pan, J W; Daniell, M; Gasparoni, S; Weihs, G; Zeilinger, A

    2001-05-14

    We experimentally demonstrate observation of highly pure four-photon GHZ entanglement produced by parametric down-conversion and a projective measurement. At the same time this also demonstrates teleportation of entanglement with very high purity. Not only does the achieved high visibility enable various novel tests of quantum nonlocality, but it also opens the possibility of experimentally investigating various quantum computation and communication schemes with linear optics. Our technique can, in principle, be used to produce entanglement of arbitrarily high order or, equivalently, teleportation and entanglement swapping over multiple stages.

  3. On the classification of elliptic foliations induced by real quadratic fields with center

    NASA Astrophysics Data System (ADS)

    Puchuri, Liliana; Bueno, Orestes

    2016-12-01

    Related to the study of the infinitesimal Hilbert problem is the problem of determining the existence and estimating the number of limit cycles of linear perturbations of Hamiltonian fields. A classification of the elliptic foliations in the projective plane induced by quadratic fields with center has already been studied by several authors. In this work, we devise a unified proof of the classification of elliptic foliations induced by quadratic fields with center. This technique involves using a formula due to Cerveau & Lins Neto to calculate the genus of the generic fiber of a first integral of foliations of these kinds. Furthermore, we show that these foliations induce several examples of linear families of foliations which are not bimeromorphically equivalent to certain remarkable examples given by Lins Neto.

  4. A Comparison of Measurement Equivalence Methods Based on Confirmatory Factor Analysis and Item Response Theory.

    ERIC Educational Resources Information Center

    Flowers, Claudia P.; Raju, Nambury S.; Oshima, T. C.

    Current interest in the assessment of measurement equivalence emphasizes two methods of analysis: linear and nonlinear procedures. This study simulated data using the graded response model to examine the performance of linear (confirmatory factor analysis or CFA) and nonlinear (item-response-theory-based differential item function or IRT-Based…

  5. Measurements of neutron dose equivalent for a proton therapy center using uniform scanning proton beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Yuanshui; Liu Yaxi; Zeidan, Omar

    Purpose: Neutron exposure is of concern in proton therapy, and varies with beam delivery technique, nozzle design, and treatment conditions. Uniform scanning is an emerging treatment technique in proton therapy, but neutron exposure for this technique has not been fully studied. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, under various treatment conditions for uniform scanning beams employed at our proton therapy center. Methods: Using a wide energy neutron dose equivalent detector (SWENDI-II, ThermoScientific, MA), the authors measured H/D at 50 cm lateral to the isocenter as a function of proton range, modulation width, beam scanning area, collimated field size, and snout position. They also studied the influence of other factors on neutron dose equivalent, such as aperture material, the presence of a compensator, and measurement locations. They measured H/D for various treatment sites using patient-specific treatment parameters. Finally, they compared H/D values for various beam delivery techniques at various facilities under similar conditions. Results: H/D increased rapidly with proton range and modulation width, varying from about 0.2 mSv/Gy for a 5 cm range and 2 cm modulation width beam to 2.7 mSv/Gy for a 30 cm range and 30 cm modulation width beam when 18 × 18 cm² uniform scanning beams were used. H/D increased linearly with the beam scanning area, and decreased slowly with aperture size and snout retraction. The presence of a compensator reduced the H/D slightly compared with that without a compensator present. Aperture material and compensator material also have an influence on neutron dose equivalent, but the influence is relatively small. H/D varied from about 0.5 mSv/Gy for a brain tumor treatment to about 3.5 mSv/Gy for a pelvic case. Conclusions: This study presents H/D as a function of various treatment parameters for uniform scanning proton beams. For similar treatment conditions, the H/D value per uncollimated beam size for uniform scanning beams was slightly lower than that from a passive scattering beam and higher than that from a pencil beam scanning beam, within a factor of 2. Minimizing beam scanning area could effectively reduce neutron dose equivalent for uniform scanning beams, down to the level close to pencil beam scanning.

  6. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

    Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decompositions in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete - they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound.
In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction and decomposition based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are: - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure. - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution. - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6]. - An empirical evaluation on a set of C artifacts showing that the use of impact summaries can reduce the cost of regression verification.

  7. Optimum Damping in a Non-Linear Base Isolation System

    NASA Astrophysics Data System (ADS)

    Jangid, R. S.

    1996-02-01

    Optimum isolation damping for minimum acceleration of a base-isolated structure subjected to earthquake ground excitation is investigated. The stochastic model of the El Centro 1940 earthquake, which preserves the non-stationary evolution of amplitude and frequency content of ground motion, is used as the earthquake excitation. The base-isolated structure consists of a linear flexible shear-type multi-storey building supported on a base isolation system. The resilient-friction base isolator (R-FBI) is considered as the isolation system. The non-stationary stochastic response of the system is obtained by the time-dependent equivalent linearization technique, as the force-deformation behaviour of the R-FBI system is non-linear. The optimum damping of the R-FBI system is obtained under important parametric variations, i.e., the coefficient of friction of the R-FBI system, the period and damping of the superstructure, and the effective period of base isolation. The criterion selected for optimality is the minimization of the top floor root mean square (r.m.s.) acceleration. It is shown that the above parameters have significant effects on optimum isolation damping.

  8. Effects of joints in truss structures

    NASA Technical Reports Server (NTRS)

    Ikegami, R.

    1988-01-01

    The response of truss-type structures for future space applications, such as the Large Deployable Reflector (LDR), will be directly affected by joint performance. Some of the objectives of research at BAC were to characterize structural joints, establish analytical approaches that incorporate joint characteristics, and experimentally establish the validity of the analytical approaches. The test approach to characterize joints for both erectable and deployable-type structures was based upon a Force State Mapping Technique. The approach pictorially shows how the nonlinear joint results can be used for equivalent linear analysis. Testing of the Space Station joints developed at LaRC (a hinged joint at 2 Hz and a clevis joint at 2 Hz) successfully revealed the nonlinear characteristics of the joints. The Space Station joints were effectively linear when loaded to plus or minus 500 pounds with a corresponding displacement of about plus or minus 0.0015 inch. It was indicated that good linear joints exist which are compatible with erected structures, but that difficulty may be encountered if nonlinear-type joints are incorporated in the structure.

  9. Baryon Acoustic Oscillations reconstruction with pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele

    2017-09-01

    Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak together with a damping and broadening of its shape that bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real- and redshift-space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.

  10. Periodic solutions of second-order nonlinear difference equations containing a small parameter. II - Equivalent linearization

    NASA Technical Reports Server (NTRS)

    Mickens, R. E.

    1985-01-01

    The classical method of equivalent linearization is extended to a particular class of nonlinear difference equations. It is shown that the method can be used to obtain an approximation of the periodic solutions of these equations. In particular, the parameters of the limit cycle and the limit points can be determined. Three examples illustrating the method are presented.
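
    As a sketch of the general recipe (not the paper's specific worked examples), the method replaces the nonlinearity in a weakly nonlinear difference equation by its first harmonic:

    ```latex
    % Equivalent linearization of x_{k+1} - 2x_k\cos\theta + x_{k-1} = \epsilon\, g(x_k):
    % assume x_k \approx A\cos(k\theta + \phi) and keep the first harmonic of g,
    \[
      g(A\cos\psi) \;\approx\; \beta(A)\, A\cos\psi,
      \qquad
      \beta(A) \;=\; \frac{1}{\pi A}\int_{0}^{2\pi} g(A\cos\psi)\cos\psi \,\mathrm{d}\psi,
    \]
    % so the limit-cycle amplitude and frequency shift follow from the
    % equivalent linear equation
    %   x_{k+1} - 2x_k\cos\theta + x_{k-1} = \epsilon\,\beta(A)\,x_k.
    % For example, g(x) = x^3 gives \beta(A) = 3A^2/4.
    ```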

  11. Corrosive effect of soil type on the most commonly used grounding systems (copper and stainless steel), for local soil samples from the city of Tunja (Colombia), by means of electrochemical techniques

    NASA Astrophysics Data System (ADS)

    Guerrero, L.; Salas, Y.; Blanco, J.

    2016-02-01

    In this work, electrochemical techniques were used to determine the corrosion behaviour of copper and stainless steel grounding electrodes as a function of the soil type with which they react. Slight but significant changes were observed in the corrosion rate, the linear polarization resistance, and the equivalent-circuit parameters obtained by the electrochemical impedance spectroscopy technique. The soil electrolytes differed slightly between laboratory studies, but an influence of the water retention capacity, due mainly to clays, was noted, affecting ion mobility and therefore measures such as the corrosion rate. A lower corrosion potential was noted for copper; however, regardless of soil type, the corrosion rate was much higher for the copper-based electrodes, by several orders of magnitude.

  12. Gauge invariance of excitonic linear and nonlinear optical response

    NASA Astrophysics Data System (ADS)

    Taghizadeh, Alireza; Pedersen, T. G.

    2018-05-01

    We study the equivalence of four different approaches to calculate the excitonic linear and nonlinear optical response of multiband semiconductors. These four methods derive from two choices of gauge, i.e., length and velocity gauges, and two ways of computing the current density, i.e., direct evaluation and evaluation via the time-derivative of the polarization density. The linear and quadratic response functions are obtained for all methods by employing a perturbative density-matrix approach within the mean-field approximation. The equivalence of all four methods is shown rigorously, when a correct interaction Hamiltonian is employed for the velocity gauge approaches. The correct interaction is written as a series of commutators containing the unperturbed Hamiltonian and position operators, which becomes equivalent to the conventional velocity gauge interaction in the limit of infinite Coulomb screening and infinitely many bands. As a case study, the theory is applied to hexagonal boron nitride monolayers, and the linear and nonlinear optical response found in different approaches are compared.

  13. Digital Architecture for a Trace Gas Sensor Platform

    NASA Technical Reports Server (NTRS)

    Gonzales, Paula; Casias, Miguel; Vakhtin, Andrei; Pilgrim, Jeffrey

    2012-01-01

    A digital architecture has been implemented for a trace gas sensor platform, as a companion to standard analog control electronics, which accommodates optical absorption whose fractional absorbance equivalent would result in excess error if assumed to be linear. In cases where the absorption (1-transmission) is not equivalent to the fractional absorbance within a few percent error, it is necessary to accommodate the actual measured absorption while reporting the measured concentration of a target analyte with reasonable accuracy. This requires incorporation of programmable intelligence into the sensor platform so that flexible interpretation of the acquired data may be accomplished. Several different digital component architectures were tested and implemented. Commercial off-the-shelf digital electronics including data acquisition cards (DAQs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), and microcontrollers have been used to achieve the desired outcome. The most completely integrated architecture achieved during the project used the CPLD along with a microcontroller. The CPLD provides the initial digital demodulation of the raw sensor signal, and then communicates over a parallel communications interface with a microcontroller. The microcontroller analyzes the digital signal from the CPLD, and applies a non-linear correction obtained through extensive data analysis at the various relevant EVA operating pressures. The microcontroller then presents the quantitatively accurate carbon dioxide partial pressure regardless of optical density. This technique could extend the linear dynamic range of typical absorption spectrometers, particularly those whose low end noise equivalent absorbance is below one-part-in-100,000. In the EVA application, it allows introduction of a path-length-enhancing architecture whose optical interference effects are well understood and quantified without sacrificing the dynamic range that allows quantitative detection at the higher carbon dioxide partial pressures. The digital components are compact and allow reasonably complete integration with separately developed analog control electronics without sacrificing size, mass, or power draw.
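
    The nonlinearity being corrected here is the gap between measured absorption and fractional absorbance. Assuming simple Beer-Lambert behaviour, a sketch of the inversion looks like the following; the calibration constant is invented, and the flight correction described above was an empirically fitted function of EVA operating pressure rather than this idealized form.

    ```python
    import math

    # For Beer-Lambert absorption, the measured fraction absorbed is
    # 1 - exp(-alpha), which only approximates the fractional absorbance
    # alpha when alpha << 1. Inverting the exponential recovers alpha
    # (and hence concentration) at high optical density.
    ALPHA_PER_TORR = 0.05   # hypothetical absorbance per Torr of CO2 partial pressure

    def partial_pressure(absorption):
        linear = absorption / ALPHA_PER_TORR                    # small-signal estimate
        corrected = -math.log1p(-absorption) / ALPHA_PER_TORR   # invert 1 - exp(-alpha)
        return linear, corrected

    for a in (0.01, 0.20, 0.50):
        lin, cor = partial_pressure(a)
        print(f"absorption {a:.2f}: linear {lin:.2f} Torr, corrected {cor:.2f} Torr")
    ```

    At 1% absorption the two estimates agree to within a fraction of a percent; at 50% absorption the linear reading underestimates the true partial pressure by roughly 30%, which is the regime the digital correction addresses.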

  14. Feedback-Equivalence of Nonlinear Systems with Applications to Power System Equations.

    NASA Astrophysics Data System (ADS)

    Marino, Riccardo

    The key concept of the dissertation is feedback equivalence among systems affine in control. Feedback equivalence to linear systems in Brunovsky canonical form and the construction of the corresponding feedback transformation are used to: (i) design a nonlinear regulator for a detailed nonlinear model of a synchronous generator connected to an infinite bus; (ii) establish which power system network structures enjoy the feedback linearizability property and design a stabilizing control law for these networks with a constraint on the control space which comes from the use of d.c. lines. It is also shown that the feedback linearizability property allows the use of state feedback to construct a linear controllable system with a positive definite linear Hamiltonian structure for the uncontrolled part if the state space is even-dimensional; a stabilizing control law is derived for such systems. The feedback linearizability property is characterized by the involutivity of certain nested distributions for strongly accessible analytic systems; if the system is defined on a manifold M diffeomorphic to Euclidean space, it is established that the set where the property holds is a submanifold open and dense in M. If an analytic output map is defined, a set of nested involutive distributions can always be defined, which allows the introduction of an observability property that is the dual concept, in some sense, to feedback linearizability: the goal is to investigate when a nonlinear system affine in control with an analytic output map is feedback equivalent to a linear controllable and observable system. Finally, a nested involutive structure of distributions is shown to guarantee the existence of a state feedback that takes a nonlinear system affine in control to a single-input one, both feedback equivalent to linear controllable systems, preserving one controlled vector field.

  15. Analysis of high aspect ratio jet flap wings of arbitrary geometry.

    NASA Technical Reports Server (NTRS)

    Lissaman, P. B. S.

    1973-01-01

    Paper presents a design technique for rapidly computing lift, induced drag, and spanwise loading of unswept jet flap wings of arbitrary thickness, chord, twist, blowing, and jet angle, including discontinuities. Linear theory is used, extending Spence's method for elliptically loaded jet flap wings. Curves for uniformly blown rectangular wings are presented for direct performance estimation. Arbitrary planforms require a simple computer program. Method of reducing wing to equivalent stretched, twisted, unblown planform for hand calculation is also given. Results correlate with limited existing data, and show lifting line theory is reasonable down to aspect ratios of 5.

  16. Beamforming strategy of ULA and UCA sensor configuration in multistatic passive radar

    NASA Astrophysics Data System (ADS)

    Hossa, Robert

    2009-06-01

    A Beamforming Network (BN) concept for Uniform Linear Array (ULA) and Uniform Circular Array (UCA) dipole configurations designed for multistatic passive radar is considered in detail. In the case of the UCA configuration, a computationally efficient beamspace transformation from the UCA to a virtual ULA configuration with omnidirectional coverage is utilized. In effect, the idea of the proposed solution is equivalent to the antenna array factor shaping techniques dedicated to the ULA structure. Finally, exemplary results from computer software simulations of the elaborated spatial filtering solutions for the reference and surveillance channels are provided and discussed.

  17. Using Strassen's algorithm to accelerate the solution of linear systems

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Lee, King; Simon, Horst D.

    1990-01-01

    Strassen's algorithm for fast matrix-matrix multiplication has been implemented for matrices of arbitrary shapes on the CRAY-2 and CRAY Y-MP supercomputers. Several techniques have been used to reduce the scratch space requirement for this algorithm while simultaneously preserving a high level of performance. When the resulting Strassen-based matrix multiply routine is combined with some routines from the new LAPACK library, LU decomposition can be performed with rates significantly higher than those achieved by conventional means. We succeeded in factoring a 2048 x 2048 matrix on the CRAY Y-MP at a rate equivalent to 325 MFLOPS.
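
    The recursion at the heart of the algorithm is compact enough to sketch. This is the textbook power-of-two formulation with a crossover to conventional multiplication at small sizes (production codes do the same), not the arbitrary-shape, scratch-space-optimized CRAY implementation described above.

    ```python
    import numpy as np

    # Strassen multiply for n x n matrices, n a power of two: seven recursive
    # half-size products replace the eight of the classical block algorithm.
    def strassen(a, b, cutoff=64):
        n = a.shape[0]
        if n <= cutoff:
            return a @ b                      # conventional multiply at small sizes
        h = n // 2
        a11, a12, a21, a22 = a[:h, :h], a[:h, h:], a[h:, :h], a[h:, h:]
        b11, b12, b21, b22 = b[:h, :h], b[:h, h:], b[h:, :h], b[h:, h:]
        m1 = strassen(a11 + a22, b11 + b22, cutoff)
        m2 = strassen(a21 + a22, b11, cutoff)
        m3 = strassen(a11, b12 - b22, cutoff)
        m4 = strassen(a22, b21 - b11, cutoff)
        m5 = strassen(a11 + a12, b22, cutoff)
        m6 = strassen(a21 - a11, b11 + b12, cutoff)
        m7 = strassen(a12 - a22, b21 + b22, cutoff)
        c = np.empty_like(a)
        c[:h, :h] = m1 + m4 - m5 + m7
        c[:h, h:] = m3 + m5
        c[h:, :h] = m2 + m4
        c[h:, h:] = m1 - m2 + m3 + m6
        return c

    rng = np.random.default_rng(0)
    x, y = rng.standard_normal((256, 256)), rng.standard_normal((256, 256))
    print(np.allclose(strassen(x, y), x @ y))  # True
    ```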

  18. Biological effects and equivalent doses in radiotherapy: A software solution

    PubMed Central

    Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline

    2013-01-01

    Background The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim We, therefore, propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions The results are obtained from an algorithm that minimizes an ad-hoc cost function and are then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
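
    The basic conversions such a calculator implements are short. The sketch below covers only the plain linear-quadratic BED/EQD2 relations, not the linear-quadratic-linear, repopulation, or Lyman NTCP extensions the article builds in; the alpha/beta values are common textbook choices, not recommendations.

    ```python
    # BED = n*d*(1 + d/(alpha/beta)); EQD2 rescales BED to 2 Gy fractions.
    def bed(n, d, alpha_beta):
        return n * d * (1.0 + d / alpha_beta)

    def eqd2(n, d, alpha_beta):
        return bed(n, d, alpha_beta) / (1.0 + 2.0 / alpha_beta)

    # Example: hypofractionated 20 x 2.75 Gy against conventional 30 x 2 Gy
    # for a late-responding tissue (alpha/beta = 3 Gy).
    print(f"BED  20 x 2.75 Gy: {bed(20, 2.75, 3.0):.1f} Gy")   # 105.4 Gy
    print(f"EQD2 20 x 2.75 Gy: {eqd2(20, 2.75, 3.0):.1f} Gy")  # 63.3 Gy
    print(f"BED  30 x 2.00 Gy: {bed(30, 2.00, 3.0):.1f} Gy")   # 100.0 Gy
    ```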

  19. An approach to checking case-crossover analyses based on equivalence with time-series methods.

    PubMed

    Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L

    2008-03-01

    The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model - be it time-series or case-crossover - fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.

  20. Implementation of dual-energy technique for virtual monochromatic and linearly mixed CBCTs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Hao; Giles, William; Ren Lei

    Purpose: To implement a dual-energy imaging technique for virtual monochromatic (VM) and linearly mixed (LM) cone beam CTs (CBCTs) and to demonstrate their potential applications in metal artifact reduction and contrast enhancement in image-guided radiation therapy (IGRT). Methods: A bench-top CBCT system was used to acquire 80 kVp and 150 kVp projections, with an additional 0.8 mm tin filtration. To implement the VM technique, these projections were first decomposed into acrylic and aluminum basis material projections to synthesize VM projections, which were then used to reconstruct VM CBCTs. The effect of VM CBCT on the metal artifact reduction was evaluated with an in-house titanium-BB phantom. The optimal VM energy to maximize contrast-to-noise ratio (CNR) for iodine contrast and minimize beam hardening in VM CBCT was determined using a water phantom containing two iodine concentrations. The LM technique was implemented by linearly combining the low-energy (80 kVp) and high-energy (150 kVp) CBCTs. The dose partitioning between low-energy and high-energy CBCTs was varied (20%, 40%, 60%, and 80% for low-energy) while keeping total dose approximately equal to single-energy CBCTs, measured using an ion chamber. Noise levels and CNRs for four tissue types were investigated for dual-energy LM CBCTs in comparison with single-energy CBCTs at 80, 100, 125, and 150 kVp. Results: The VM technique showed substantial reduction of metal artifacts at 100 keV, with a 40% reduction in the background standard deviation compared to a 125 kVp single-energy scan of equal dose. The VM energy to maximize CNR for both iodine concentrations and minimize beam hardening in the metal-free object was 50 keV and 60 keV, respectively. The difference in average noise levels measured in the phantom background was 1.2% between dual-energy LM CBCTs and equivalent-dose single-energy CBCTs. CNR values in the LM CBCTs of any dose partitioning are better than those of 150 kVp single-energy CBCTs. The average CNR for four tissue types with an 80% dose fraction at low energy showed 9.0% and 4.1% improvement relative to 100 kVp and 125 kVp single-energy CBCTs, respectively. CNRs for low-contrast objects improved as dose partitioning was more heavily weighted toward low energy (80 kVp) for LM CBCTs. Conclusions: Dual-energy CBCT imaging techniques were implemented to synthesize VM and LM CBCTs. VM CBCT was effective at achieving metal artifact reduction. Depending on the dose-partitioning scheme, LM CBCT demonstrated the potential to improve CNR for low-contrast objects compared to single-energy CBCT acquired with equivalent dose.
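
    The LM combination itself is a voxelwise weighted sum of the two registered reconstructions, with the weight playing the role of the low-energy dose fraction; the sketch below uses placeholder arrays and a placeholder weight, not the study's data.

    ```python
    import numpy as np

    def linear_mix(cbct_low, cbct_high, w_low=0.6):
        # Voxelwise weighted sum of registered low/high-energy CBCT volumes;
        # w_low corresponds to the fraction of dose given to the low-energy scan.
        return w_low * cbct_low + (1.0 - w_low) * cbct_high

    low = np.full((4, 4), 40.0)    # placeholder 80 kVp slice, HU-like values
    high = np.full((4, 4), 20.0)   # placeholder 150 kVp slice
    print(linear_mix(low, high)[0, 0])  # 32.0
    ```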

  1. Part II: Biomechanical assessment for a footprint-restoring transosseous-equivalent rotator cuff repair technique compared with a double-row repair technique.

    PubMed

    Park, Maxwell C; Tibone, James E; ElAttrache, Neal S; Ahmad, Christopher S; Jun, Bong-Jae; Lee, Thay Q

    2007-01-01

    We hypothesized that a transosseous-equivalent repair would demonstrate improved tensile strength and gap formation between the tendon and tuberosity when compared with a double-row technique. In 6 fresh-frozen human shoulders, a transosseous-equivalent rotator cuff repair was performed: a suture limb from each of two medial anchors was bridged over the tendon and fixed laterally with an interference screw. In 6 contralateral matched-pair specimens, a double-row repair was performed. For all repairs, a materials testing machine was used to load each repair cyclically from 10 N to 180 N for 30 cycles; each repair underwent tensile testing to measure failure loads at a deformation rate of 1 mm/sec. Gap formation between the tendon edge and insertion was measured with a video digitizing system. The mean ultimate load to failure was significantly greater for the transosseous-equivalent technique (443.0 +/- 87.8 N) compared with the double-row technique (299.2 +/- 52.5 N) (P = .043). Gap formation during cyclic loading was not significantly different between the transosseous-equivalent and double-row techniques, with mean values of 3.74 +/- 1.51 mm and 3.79 +/- 0.68 mm, respectively (P = .95). Stiffness for all cycles was not statistically different between the two constructs (P > .40). The transosseous-equivalent rotator cuff repair technique improves ultimate failure loads when compared with a double-row technique. Gap formation is similar for both techniques. A transosseous-equivalent repair helps restore footprint dimensions and provides a stronger repair than the double-row technique, which may help optimize healing biology.

  2. Stochastic Stability of Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.

  3. Radiobiological equivalent of low/high dose rate brachytherapy and evaluation of tumor and normal responses to the dose.

    PubMed

    Manimaran, S

    2007-06-01

    The aim of this study was to compare the biological equivalent of low-dose-rate (LDR) and high-dose-rate (HDR) brachytherapy in terms of the more recent linear quadratic (LQ) model, which leads to theoretical estimation of biological equivalence. One of the key features of the LQ model is that it allows a more systematic radiobiological comparison between different types of treatment because the main parameters alpha/beta and mu are tissue-specific. Such comparisons also allow assessment of the likely change in the therapeutic ratio when switching between LDR and HDR treatments. The main application of LQ methodology, spurred by the increasing availability of remote afterloading units, has been to design fractionated HDR treatments that can replace existing LDR techniques. In this study, LDR treatments (39 Gy in 48 h) were found to be equivalent to 11 fractions of HDR irradiation. At the experimental level, there are increasing reports of reproducible animal models that may be used to investigate the biological basis of brachytherapy and to help confirm theoretical predictions. This is a timely development owing to the nonavailability of sufficient retrospective patient data for analysis. It appears that HDR brachytherapy is likely to be a viable alternative to LDR only if it is delivered without a prohibitively large number of fractions (e.g., fewer than 11). With increased scientific understanding and technological capability, the prospect of a dose equivalent for HDR brachytherapy will allow greater utilization of the concepts discussed in this article.
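
    As a worked illustration of this kind of LQ comparison, the sketch below computes the biologically effective dose (BED) of the 39 Gy in 48 h LDR schedule using Dale's continuous-irradiation formula with mono-exponential repair, then searches for the HDR dose per fraction at which 11 fractions match it. The alpha/beta and repair-rate values are assumed for illustration only and are not the paper's parameters.

      import numpy as np

      def bed_hdr(n_fractions, dose_per_fx, alpha_beta):
          """BED for fractionated HDR under the LQ model: n*d*(1 + d/(alpha/beta))."""
          return n_fractions * dose_per_fx * (1.0 + dose_per_fx / alpha_beta)

      def bed_ldr(total_dose, duration_h, alpha_beta, mu):
          """BED for continuous LDR (Dale-type formulation, mono-exponential repair)."""
          rate = total_dose / duration_h
          g = 1.0 - (1.0 - np.exp(-mu * duration_h)) / (mu * duration_h)
          return total_dose * (1.0 + (2.0 * rate / (mu * alpha_beta)) * g)

      # LDR schedule from the abstract: 39 Gy in 48 h; parameters below are assumed
      alpha_beta, mu = 10.0, 0.46          # Gy (tumor-like), 1/h (repair half-time ~1.5 h)
      target = bed_ldr(39.0, 48.0, alpha_beta, mu)
      # find the HDR dose per fraction at which 11 fractions match the LDR BED
      for d in np.arange(2.0, 6.01, 0.05):
          if bed_hdr(11, d, alpha_beta) >= target:
              print(f"LDR BED = {target:.1f} Gy; ~{d:.2f} Gy x 11 HDR fractions matches")
              break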

  4. Radiation-induced second primary cancer risks from modern external beam radiotherapy for early prostate cancer: impact of stereotactic ablative radiotherapy (SABR), volumetric modulated arc therapy (VMAT) and flattening filter free (FFF) radiotherapy

    NASA Astrophysics Data System (ADS)

    Murray, Louise J.; Thompson, Christopher M.; Lilley, John; Cosgrove, Vivian; Franks, Kevin; Sebag-Montefiore, David; Henry, Ann M.

    2015-02-01

    Risks of radiation-induced second primary cancer following prostate radiotherapy using 3D-conformal radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT), flattening filter free (FFF) and stereotactic ablative radiotherapy (SABR) were evaluated. Prostate plans were created using 10 MV 3D-CRT (78 Gy in 39 fractions) and 6 MV 5-field IMRT (78 Gy in 39 fractions), VMAT (78 Gy in 39 fractions, with standard flattened and energy-matched FFF beams) and SABR (42.7 Gy in 7 fractions with standard flattened and energy-matched FFF beams). Dose-volume histograms from pelvic planning CT scans of three prostate patients, each planned using all 6 techniques, were used to calculate organ equivalent doses (OED) and excess absolute risks (EAR) of second rectal and bladder cancers, and pelvic bone and soft tissue sarcomas, using mechanistic, bell-shaped and plateau models. For organs distant to the treatment field, chamber measurements recorded in an anthropomorphic phantom were used to calculate OEDs and EARs using a linear model. Ratios of OED give relative radiation-induced second cancer risks. SABR resulted in lower second cancer risks at all sites relative to 3D-CRT. FFF resulted in lower second cancer risks in out-of-field tissues relative to equivalent flattened techniques, with increasing impact in organs at greater distances from the field. For example, FFF reduced second cancer risk by up to 20% in the stomach and up to 56% in the brain, relative to the equivalent flattened technique. Relative to 10 MV 3D-CRT, 6 MV IMRT or VMAT with flattening filter increased second cancer risks in several out-of-field organs, by up to 26% and 55%, respectively. For all techniques, EARs were consistently low. Although the observed relative differences between techniques were large, in absolute terms the differences were very low, highlighting the importance of considering absolute risks alongside the corresponding relative risks: when absolute risks are very low, large relative risks become less meaningful. A relative radiation-induced second cancer risk benefit from SABR and FFF techniques was theoretically predicted, although absolute radiation-induced second cancer risks were low for all techniques, and absolute differences between techniques were small.
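
    The OED calculation that underlies such risk ratios reduces to weighting a differential DVH with a dose-response model. The sketch below implements linear, bell-shaped, and plateau forms in the style of the organ-equivalent-dose concept used here; the DVH bins and the parameter values are invented for illustration and are not the study's data.

      import numpy as np

      def oed_linear(dose_bins, vol_fracs):
          """Linear model: OED is simply the mean organ dose."""
          return np.sum(vol_fracs * dose_bins)

      def oed_bell(dose_bins, vol_fracs, alpha):
          """Bell-shaped model: cell kill suppresses induction at high dose."""
          return np.sum(vol_fracs * dose_bins * np.exp(-alpha * dose_bins))

      def oed_plateau(dose_bins, vol_fracs, delta):
          """Plateau model: risk saturates at high dose."""
          return np.sum(vol_fracs * (1.0 - np.exp(-delta * dose_bins)) / delta)

      # toy differential DVH: dose bin centres (Gy) and volume fraction per bin
      dose = np.array([1.0, 5.0, 15.0, 30.0, 60.0])
      vol = np.array([0.40, 0.25, 0.20, 0.10, 0.05])   # sums to 1
      for name, oed in [("linear", oed_linear(dose, vol)),
                        ("bell", oed_bell(dose, vol, alpha=0.065)),
                        ("plateau", oed_plateau(dose, vol, delta=0.139))]:
          print(f"{name:8s} OED = {oed:6.2f} Gy")
      # the ratio of OEDs between two plans gives the relative second-cancer risk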

  5. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
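
    The estimation core described above is a weighted least-squares objective minimized by a modified Gauss-Newton iteration, with sensitivities from forward-difference perturbation when the process model cannot supply them. The sketch below shows that loop on a toy two-parameter model; it is a bare-bones stand-in under stated assumptions, not UCODE_2005 itself, and all function names are invented.

      import numpy as np

      def forward_diff_jacobian(model, params, eps=1e-6):
          """Sensitivities by forward-difference perturbation (the more general
          but less accurate fallback described in the report)."""
          base = model(params)
          J = np.empty((base.size, params.size))
          for j in range(params.size):
              h = eps * max(1.0, abs(params[j]))
              p = params.copy(); p[j] += h
              J[:, j] = (model(p) - base) / h
          return J

      def gauss_newton(model, obs, weights, params, n_iter=25, tol=1e-10):
          """Minimize the weighted least-squares objective sum w*(obs - sim)^2."""
          W = np.diag(weights)
          for _ in range(n_iter):
              r = obs - model(params)
              J = forward_diff_jacobian(model, params)
              step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
              params = params + step
              if np.linalg.norm(step) < tol:
                  break
          return params

      # toy "process model": exponential decay with two estimable parameters
      t = np.linspace(0.0, 10.0, 30)
      model = lambda p: p[0] * np.exp(-p[1] * t)
      rng = np.random.default_rng(1)
      obs = model(np.array([5.0, 0.7])) + rng.normal(0.0, 0.05, t.size)
      est = gauss_newton(model, obs, np.full(t.size, 1 / 0.05**2), np.array([4.0, 0.5]))
      print("estimated parameters:", est)   # close to the true [5.0, 0.7]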

  6. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  7. Nonlinear effects of stretch on the flame front propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.

    2010-10-15

    In all experimental configurations, the flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrect. The present study aims at evaluating the error in the laminar burning speed evaluation induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperatures were 300 K and 400 K for methane and iso-octane, respectively. Both methodologies (linear and nonlinear) are applied and results in terms of laminar speed and burned gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions in their Markstein length when the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the use of the linear methodology starts to induce substantial errors above an equivalence ratio of 1.1 for methane/air mixtures and below an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology for these critical cases consists in reducing the number of points used in the linear methodology by increasing the initial flame radius used. (author)
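
    The difference between the two extraction methods can be reproduced in a few lines: generate stretched flame speeds from a nonlinear (Kelley-Law type) speed-stretch relation, then recover the unstretched speed and Markstein length with both a linear fit and a direct fit of the implicit nonlinear relation. All parameter values below are arbitrary illustrations, not the paper's measurements.

      import numpy as np
      from scipy.optimize import brentq, least_squares

      S0_true, Lb_true = 2.5, 1.0e-3   # unstretched flame speed (m/s), Markstein length (m)

      def s_nonlinear(kappa):
          """Flame speed from the nonlinear relation
          (S/S0)^2 * ln((S/S0)^2) = -2*Lb*kappa/S0, solved numerically."""
          f = lambda s: (s / S0_true)**2 * np.log((s / S0_true)**2) \
                        + 2.0 * Lb_true * kappa / S0_true
          return brentq(f, np.exp(-0.5) * S0_true, S0_true * (1.0 - 1e-12))

      kappa = np.linspace(50.0, 400.0, 15)          # stretch rates (1/s)
      S = np.array([s_nonlinear(k) for k in kappa])

      # linear methodology: fit S = S0 - Lb*kappa by least squares
      coef = np.polyfit(kappa, S, 1)
      S0_lin, Lb_lin = coef[1], -coef[0]

      # nonlinear methodology: fit the implicit relation directly
      def resid(p):
          s0, lb = p
          return (S / s0)**2 * np.log((S / s0)**2) + 2.0 * lb * kappa / s0
      S0_nl, Lb_nl = least_squares(resid, x0=[2.0, 5e-4]).x

      print(f"linear:    S0 = {S0_lin:.3f} m/s, Lb = {Lb_lin*1e3:.2f} mm")
      print(f"nonlinear: S0 = {S0_nl:.3f} m/s, Lb = {Lb_nl*1e3:.2f} mm (truth 2.500, 1.00)")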

  8. On the equivalence of Gaussian elimination and Gauss-Jordan reduction in solving linear equations

    NASA Technical Reports Server (NTRS)

    Tsao, Nai-Kuan

    1989-01-01

    A novel general approach to round-off error analysis using the error complexity concepts is described. This is applied to the analysis of the Gaussian Elimination and Gauss-Jordan scheme for solving linear equations. The results show that the two algorithms are equivalent in terms of our error complexity measures. Thus the inherently parallel Gauss-Jordan scheme can be implemented with confidence if parallel computers are available.
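
    For readers who want to see the two schemes side by side, here is a minimal sketch of both solvers with partial pivoting; on well-conditioned systems they return the same solution, differing only in operation structure, which is the equivalence the paper quantifies with error complexity measures.

      import numpy as np

      def gaussian_elimination(A, b):
          """Forward elimination to upper-triangular form, then back substitution."""
          A = A.astype(float).copy(); b = b.astype(float).copy()
          n = len(b)
          for k in range(n - 1):
              p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting
              A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
              for i in range(k + 1, n):
                  m = A[i, k] / A[k, k]
                  A[i, k:] -= m * A[k, k:]
                  b[i] -= m * b[k]
          x = np.zeros(n)
          for i in range(n - 1, -1, -1):
              x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
          return x

      def gauss_jordan(A, b):
          """Eliminate above and below each pivot; no back substitution needed,
          and each row update is independent (the inherently parallel variant)."""
          M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
          n = len(b)
          for k in range(n):
              p = k + np.argmax(np.abs(M[k:, k]))
              M[[k, p]] = M[[p, k]]
              M[k] /= M[k, k]
              for i in range(n):
                  if i != k:
                      M[i] -= M[i, k] * M[k]
          return M[:, -1]

      rng = np.random.default_rng(2)
      A = rng.normal(size=(5, 5)); b = rng.normal(size=5)
      print(np.allclose(gaussian_elimination(A, b), gauss_jordan(A, b)))  # True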

  9. Assessing Measurement Equivalence in Ordered-Categorical Data

    ERIC Educational Resources Information Center

    Elosua, Paula

    2011-01-01

    Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…

  10. Comparative study of the antioxidant capacity and polyphenol content of Douro wines by chemical and electrochemical methods.

    PubMed

    Rebelo, M J; Rego, R; Ferreira, M; Oliveira, M C

    2013-11-01

    A comparative study of the antioxidant capacity and polyphenol content of Douro wines by chemical (ABTS and Folin-Ciocalteau) and electrochemical methods (cyclic voltammetry and differential pulse voltammetry) was performed. A non-linear correlation between cyclic voltammetric results and ABTS or Folin-Ciocalteau data was obtained when all types of wines (white, muscatel, ruby, tawny and red wines) were grouped together in the same correlation plot. In contrast, a very good linear correlation was observed between the electrochemical antioxidant capacity determined by differential pulse voltammetry and the radical scavenging activity of ABTS. It was also found that the antioxidant capacity of wines evaluated by the electrochemical methods (expressed as gallic acid equivalents) depends on the background electrolyte of the gallic acid standards, the type of electrochemical signal (current or charge), and the electrochemical technique. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Method for extracting long-equivalent wavelength interferometric information

    NASA Technical Reports Server (NTRS)

    Hochberg, Eric B. (Inventor)

    1991-01-01

    A process for extracting long-equivalent wavelength interferometric information from a two-wavelength polychromatic or achromatic interferometer. The process comprises the steps of simultaneously recording a non-linear sum of two different frequency visible light interferograms on a high resolution film and then placing the developed film in an optical train for Fourier transformation, low pass spatial filtering and inverse transformation of the film image to produce low spatial frequency fringes corresponding to a long-equivalent wavelength interferogram. The recorded non-linear sum irradiance derived from the two-wavelength interferometer is obtained by controlling the exposure so that the average interferogram irradiance is set at either the noise level threshold or the saturation level threshold of the film.
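
    In signal terms, the optical train described here is a recording nonlinearity followed by a Fourier-domain low-pass filter. The toy sketch below emulates it in 1D: a saturating "film" response applied to the sum of two fringe patterns creates a difference-frequency component, which survives the low-pass filter as the long-equivalent-wavelength fringe. All numbers are arbitrary stand-ins for the patented optical process.

      import numpy as np

      x = np.linspace(0.0, 1.0, 4096)
      f1, f2 = 200.0, 185.0                      # fringe frequencies from the two wavelengths
      irradiance = 2.0 + np.cos(2*np.pi*f1*x) + np.cos(2*np.pi*f2*x)

      # a clipped, squared response is a simple stand-in for the film nonlinearity
      film = np.clip(irradiance, 1.0, 3.0) ** 2

      # Fourier transform, keep only spatial frequencies near the difference f1 - f2
      F = np.fft.rfft(film)
      freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
      F[freqs > 40.0] = 0.0                      # low-pass: pass the 15-cycle beat, block the carriers
      F[0] = 0.0                                 # drop the DC term for display
      beat = np.fft.irfft(F, n=x.size)           # the long-equivalent-wavelength fringe
      print(freqs[np.argmax(np.abs(F))])         # dominant surviving frequency: ~15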

  12. Program for narrow-band analysis of aircraft flyover noise using ensemble averaging techniques

    NASA Technical Reports Server (NTRS)

    Gridley, D.

    1982-01-01

    A package of computer programs was developed for analyzing acoustic data from an aircraft flyover. The package assumes the aircraft is flying at constant altitude and constant velocity in a fixed attitude over a linear array of ground microphones. Aircraft position is provided by radar and an option exists for including the effects of the aircraft's rigid-body attitude relative to the flight path. Time synchronization between radar and acoustic recording stations permits ensemble averaging techniques to be applied to the acoustic data thereby increasing the statistical accuracy of the acoustic results. Measured layered meteorological data obtained during the flyovers are used to compute propagation effects through the atmosphere. Final results are narrow-band spectra and directivities corrected for the flight environment to an equivalent static condition at a specified radius.

  13. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    PubMed

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of data, and derive the length necessary for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. A Pilot Investigation of the Relationship between Climate Variability and Milk Compounds under the Bootstrap Technique

    PubMed Central

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2015-01-01

    This study analyzes the linear relationship between climate variables and milk components in Iran, applying the bootstrap to include and assess uncertainty. The climate parameters, Temperature Humidity Index (THI) and Equivalent Temperature Index (ETI), are computed from the NASA Modern-Era Retrospective Analysis for Research and Applications (NASA-MERRA) reanalysis (2002–2010). Milk data for fat, protein (measured on a fresh-matter basis), and milk yield are taken from 936,227 milk records for the same period, using cows fed on natural pasture from April to September. Confidence intervals for the regression model are calculated using the bootstrap technique. This method is applied to the original time series, generating statistically equivalent surrogate samples. As a result, despite the short data record and the related uncertainties, an interesting behavior of the relationships between milk compounds and the climate parameters is visible. During spring only a weak dependency of milk yield on climate variations is apparent, while fat and protein concentrations show reasonable correlations. In summer, milk yield shows a similar level of relationship with ETI, but not with temperature and THI. We suggest this methodology for studies of the impacts of climate change on agriculture, environment, and food when only short-term data are available. PMID:28231215
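
    The bootstrap step amounts to refitting the regression on resampled data and reading confidence limits from the distribution of refits. A minimal pairwise-bootstrap sketch follows; the THI and milk-fat numbers are synthetic stand-ins, not the study's records.

      import numpy as np

      rng = np.random.default_rng(3)
      # synthetic stand-ins: monthly THI values and mean milk fat (%) per month
      thi = rng.uniform(55.0, 75.0, 54)
      fat = 4.2 - 0.015 * thi + rng.normal(0.0, 0.10, thi.size)

      def slope(x, y):
          """Slope of an ordinary least-squares straight-line fit."""
          return np.polyfit(x, y, 1)[0]

      # pairwise bootstrap: resample (x, y) pairs with replacement and refit
      boot = []
      for _ in range(2000):
          idx = rng.integers(0, thi.size, thi.size)
          boot.append(slope(thi[idx], fat[idx]))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"slope = {slope(thi, fat):.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")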

  15. Multi-temperature state-dependent equivalent circuit discharge model for lithium-sulfur batteries

    NASA Astrophysics Data System (ADS)

    Propp, Karsten; Marinescu, Monica; Auger, Daniel J.; O'Neill, Laura; Fotouhi, Abbas; Somasundaram, Karthik; Offer, Gregory J.; Minton, Geraint; Longo, Stefano; Wild, Mark; Knap, Vaclav

    2016-10-01

    Lithium-sulfur (Li-S) batteries are described extensively in the literature, but existing computational models aimed at scientific understanding are too complex for use in applications such as battery management. Computationally simple models are vital for exploitation. This paper proposes a non-linear state-of-charge dependent Li-S equivalent circuit network (ECN) model for a Li-S cell under discharge. Li-S batteries are fundamentally different to Li-ion batteries, and require chemistry-specific models. A new Li-S model is obtained using a 'behavioural' interpretation of the ECN model; as Li-S exhibits a 'steep' open-circuit voltage (OCV) profile at high states-of-charge, identification methods are designed to take into account OCV changes during current pulses. The prediction-error minimization technique is used. The model is parameterized from laboratory experiments using a mixed-size current pulse profile at four temperatures from 10 °C to 50 °C, giving linearized ECN parameters for a range of states-of-charge, currents and temperatures. These are used to create a nonlinear polynomial-based battery model suitable for use in a battery management system. When the model is used to predict the behaviour of a validation data set representing an automotive NEDC driving cycle, the terminal voltage predictions are judged accurate with a root mean square error of 32 mV.
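
    The flavor of such an ECN model can be conveyed with a one-RC Thevenin-style discharge loop whose open-circuit voltage and resistances depend on state of charge. Everything below (the logistic OCV shape with a steep break at high SOC, the parameter curves, the cell capacity, the function names) is an illustrative assumption, not the paper's fitted model.

      import numpy as np

      def simulate_ecn(i_load, dt, soc0, capacity_ah, ocv, r0, r1, c1):
          """One-RC equivalent-circuit-network discharge with SOC-dependent parameters.

          ocv, r0, r1 : callables of SOC (the paper fits such dependencies per
                        SOC, current and temperature; these are placeholders)
          """
          soc, v1, out = soc0, 0.0, []
          for i in i_load:
              soc -= i * dt / (3600.0 * capacity_ah)            # coulomb counting
              v1 += dt * (-v1 / (r1(soc) * c1) + i / c1)         # RC branch dynamics
              out.append(ocv(soc) - i * r0(soc) - v1)            # terminal voltage
          return np.array(out)

      # illustrative Li-S-like shapes: steep OCV plateau break near high SOC
      ocv = lambda s: 2.15 + 0.25 / (1.0 + np.exp(-40.0 * (s - 0.85)))
      r0  = lambda s: 0.08 + 0.10 * (1.0 - s) ** 2               # ohms, rises as cell empties
      r1  = lambda s: 0.05 + 0.02 * s
      current = np.full(3600, 3.4)                               # 1C discharge of a 3.4 Ah cell
      v = simulate_ecn(current, dt=1.0, soc0=1.0, capacity_ah=3.4,
                       ocv=ocv, r0=r0, r1=r1, c1=500.0)
      print(f"terminal voltage: start {v[0]:.3f} V, end {v[-1]:.3f} V")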

  16. Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.

    PubMed

    Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph

    2018-06-01

    There are numerous configurations of double-row fixation for rotator cuff tears; however, there is as yet no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compare in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the new double-row equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeral sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double-row equivalent method demonstrated increased survival as well as greater ultimate strength, at 415 N, compared with the other test groups, together with contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation which should be further evaluated in the clinical setting. Basic science biomechanical study.

  17. New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2008-01-01

    The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…

  18. Localized surface plasmon resonances in nanostructures to enhance nonlinear vibrational spectroscopies: towards an astonishing molecular sensitivity

    PubMed Central

    2014-01-01

    Vibrational transitions contain some of the richest fingerprints of molecules and materials, providing considerable physicochemical information. Vibrational transitions can be characterized by different spectroscopies, and alternatively by several imaging techniques that reach sub-microscopic spatial resolution. In the quest to push the detection limit ever further and to lower the number of vibrational oscillators needed to obtain a reliable signal or imaging contrast, surface plasmon resonances (SPR) are extensively used to increase the local field close to the oscillators. Another approach is based on maximizing the collective response of the excited vibrational oscillators through molecular coherence. Both features are often naturally combined in vibrational nonlinear optical techniques. In this context, this paper reviews the main achievements of the two most common vibrational nonlinear optical spectroscopies, namely surface-enhanced sum-frequency generation (SE-SFG) and surface-enhanced coherent anti-Stokes Raman scattering (SE-CARS). They can be considered the nonlinear counterpart and/or combination of the linear surface-enhanced infrared absorption (SEIRA) and surface-enhanced Raman scattering (SERS) techniques, respectively, which are themselves a branching of conventional IR and spontaneous Raman spectroscopies. Compared with their linear equivalents, these nonlinear vibrational spectroscopies have proved to reach higher sensitivity, down to the single-molecule level, opening the way to astonishing perspectives for molecular analysis. PMID:25551056

  19. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    NASA Astrophysics Data System (ADS)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now, no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists. Therefore a wide range of approximate solutions, especially the statistical linearization, were developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively, with a small number of iterations and without instabilities for specific parameter sets.
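
    A Monte Carlo reference for such closure results is easy to set up: integrate the hysteretic oscillator under non-zero-mean white noise and estimate response moments from an ensemble. The sketch below uses an Euler-Maruyama scheme on a Bouc-type single-degree-of-freedom system; all parameter values are arbitrary, and this is a brute-force check, not the paper's analytic solution.

      import numpy as np

      rng = np.random.default_rng(4)
      m, c, k, alpha = 1.0, 0.2, 1.0, 0.5          # SDOF oscillator with hysteretic restoring force
      A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0       # Bouc hysteresis parameters
      mean_f, intensity = 0.2, 0.1                 # non-zero-mean white noise excitation
      dt, steps, paths = 1e-3, 20000, 200

      x = np.zeros(paths); v = np.zeros(paths); z = np.zeros(paths)
      for _ in range(steps):
          dW = rng.normal(0.0, np.sqrt(dt), paths)
          dz = A * v - beta * np.abs(v) * np.abs(z)**(n - 1) * z - gamma * v * np.abs(z)**n
          a = (mean_f - c * v - k * (alpha * x + (1 - alpha) * z)) / m
          x += v * dt
          v += a * dt + np.sqrt(intensity) / m * dW
          z += dz * dt
      print(f"E[x] = {x.mean():.3f}, Var[x] = {x.var():.4f}  (Monte Carlo reference for the closure)")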

  20. Singular value description of a digital radiographic detector: Theory and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyprianou, Iacovos S.; Badano, Aldo; Gallas, Brandon D.

    The H operator represents the deterministic performance of any imaging system. For a linear, digital imaging system, this system operator can be written in terms of a matrix, H, that describes the deterministic response of the system to a set of point objects. A singular value decomposition of this matrix results in a set of orthogonal functions (singular vectors) that form the system basis. A linear combination of these vectors completely describes the transfer of objects through the linear system, where the respective singular values associated with each singular vector describe the magnitude with which that contribution to the object is transferred through the system. This paper is focused on the measurement, analysis, and interpretation of the H matrix for digital x-ray detectors. A key ingredient in the measurement of the H matrix is the detector response to a single x ray (or infinitesimal x-ray beam). The authors have developed a method to estimate the 2D detector shift-variant, asymmetric ray response function (RRF) from multiple measured line response functions (LRFs) using a modified edge technique. The RRF measurements cover a range of x-ray incident angles from 0 deg. (equivalent location at the detector center) to 30 deg. (equivalent location at the detector edge) for a standard radiographic or cone-beam CT geometric setup. To demonstrate the method, three beam qualities were tested using the inherent, Lu/Er, and Yb beam filtration. The authors show that measures using the LRF, derived from an edge measurement, underestimate the system's performance when compared with the H matrix derived using the RRF. Furthermore, the authors show that edge measurements must be performed at multiple directions in order to capture rotational asymmetries of the RRF. The authors interpret the results of the H matrix SVD and provide correlations with the familiar MTF methodology. Discussion is made about the benefits of the H matrix technique with regards to signal detection theory, and the characterization of shift-variant imaging systems.
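
    The decomposition itself is a one-liner once H is assembled column by column from point responses. The toy sketch below builds a shift-variant 1D blur matrix (blur widening toward the edges, loosely mimicking an angle-dependent response) and verifies that the SVD factors reproduce the system's action on an object; it illustrates the mathematics, not the authors' measured detector.

      import numpy as np

      # toy shift-variant blur: each column of H is the detector response to one
      # point object, with the blur width growing toward the detector edge
      n = 128
      pos = np.arange(n)
      H = np.empty((n, n))
      for j in range(n):
          sigma = 1.0 + 3.0 * abs(j - n // 2) / (n // 2)     # wider response off-center
          col = np.exp(-0.5 * ((pos - j) / sigma) ** 2)
          H[:, j] = col / col.sum()

      U, s, Vt = np.linalg.svd(H)
      print("largest/smallest singular values:", s[0], s[-1])

      # any object maps through the system as a weighted sum of singular vectors
      obj = np.zeros(n); obj[n // 3] = 1.0; obj[2 * n // 3] = 1.0
      img = U @ np.diag(s) @ Vt @ obj                        # identical to H @ obj
      print("max reconstruction difference:", np.abs(img - H @ obj).max())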

  1. Response of a tissue equivalent proportional counter to neutrons

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Robbins, D. E.; Gibbons, F.; Braby, L. A.

    2002-01-01

    The absorbed dose as a function of lineal energy was measured at the CERN-EC Reference-field Facility (CERF) using a 512-channel tissue equivalent proportional counter (TEPC), and the neutron dose equivalent response was evaluated. Although there are some differences, the measured dose equivalent is in agreement with that measured by the 16-channel HANDI tissue equivalent counter. A comparison of TEPC measurements with those made by a silicon solid-state detector for low linear energy transfer particles produced by the same beam is presented. The measurements show that about 4% of the dose equivalent is delivered by particles heavier than protons generated in the conducting tissue equivalent plastic. © 2002 Elsevier Science Ltd. All rights reserved.

  2. A high-fidelity method to analyze perturbation evolution in turbulent flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Unnikrishnan, S., E-mail: sasidharannair.1@osu.edu; Gaitonde, Datta V., E-mail: gaitonde.3@osu.edu

    2016-04-01

    Small perturbation propagation in fluid flows is usually examined by linearizing the governing equations about a steady basic state. It is often useful, however, to study perturbation evolution in the unsteady evolving turbulent environment. Such analyses can elucidate the role of perturbations in the generation of coherent structures or the production of noise from jet turbulence. The appropriate equations are still the linearized Navier–Stokes equations, except that the linearization must be performed about the instantaneous evolving turbulent state, which forms the coefficients of the linearized equations. This is a far more difficult problem since in addition to the turbulent state, its rate of change and the perturbation field are all required at each instant. In this paper, we develop and use a novel technique for this problem by using a pair (denoted “baseline” and “twin”) of simultaneous synchronized Large-Eddy Simulations (LES). At each time-step, small disturbances whose propagation characteristics are to be studied, are introduced into the twin through a forcing term. At subsequent time steps, the difference between the two simulations is shown to be equivalent to solving the forced Navier–Stokes equations, linearized about the instantaneous turbulent state. The technique does not put constraints on the forcing, which could be arbitrary, e.g., white noise or other stochastic variants. We consider, however, “native” forcing having properties of disturbances that exist naturally in the turbulent environment. The method then isolates the effect of turbulence in a particular region on the rest of the field, which is useful in the study of noise source localization. The synchronized technique is relatively simple to implement into existing codes. In addition to minimizing the storage and retrieval of large time-varying datasets, it avoids the need to explicitly linearize the governing equations, which can be a very complicated task for viscous terms or turbulence closures. The method is illustrated by application to a well-validated Mach 1.3 jet. Specifically, the effects of turbulence on the jet lipline and core collapse regions on the near-acoustic field are isolated. The properties of the method, including linearity and effect of initial transients, are discussed. The results provide insight into how turbulence from different parts of the jet contribute to the observed dominance of low and high frequency content at shallow and sideline angles, respectively.

  3. A high-fidelity method to analyze perturbation evolution in turbulent flows

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, S.; Gaitonde, Datta V.

    2016-04-01

    Small perturbation propagation in fluid flows is usually examined by linearizing the governing equations about a steady basic state. It is often useful, however, to study perturbation evolution in the unsteady evolving turbulent environment. Such analyses can elucidate the role of perturbations in the generation of coherent structures or the production of noise from jet turbulence. The appropriate equations are still the linearized Navier-Stokes equations, except that the linearization must be performed about the instantaneous evolving turbulent state, which forms the coefficients of the linearized equations. This is a far more difficult problem since in addition to the turbulent state, its rate of change and the perturbation field are all required at each instant. In this paper, we develop and use a novel technique for this problem by using a pair (denoted "baseline" and "twin") of simultaneous synchronized Large-Eddy Simulations (LES). At each time-step, small disturbances whose propagation characteristics are to be studied, are introduced into the twin through a forcing term. At subsequent time steps, the difference between the two simulations is shown to be equivalent to solving the forced Navier-Stokes equations, linearized about the instantaneous turbulent state. The technique does not put constraints on the forcing, which could be arbitrary, e.g., white noise or other stochastic variants. We consider, however, "native" forcing having properties of disturbances that exist naturally in the turbulent environment. The method then isolates the effect of turbulence in a particular region on the rest of the field, which is useful in the study of noise source localization. The synchronized technique is relatively simple to implement into existing codes. In addition to minimizing the storage and retrieval of large time-varying datasets, it avoids the need to explicitly linearize the governing equations, which can be a very complicated task for viscous terms or turbulence closures. The method is illustrated by application to a well-validated Mach 1.3 jet. Specifically, the effects of turbulence on the jet lipline and core collapse regions on the near-acoustic field are isolated. The properties of the method, including linearity and effect of initial transients, are discussed. The results provide insight into how turbulence from different parts of the jet contribute to the observed dominance of low and high frequency content at shallow and sideline angles, respectively.
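
    The baseline/twin idea is independent of the solver and can be demonstrated on any nonlinear system. In the sketch below, a pair of synchronized Lorenz-63 integrations plays the roles of baseline and twin: the twin receives a tiny forcing each step, and the scaled difference of the two states evolves according to the equations linearized about the instantaneous baseline trajectory, without the Jacobian ever being formed. The system and forcing are stand-ins for the LES pair, chosen only to keep the sketch self-contained.

      import numpy as np

      def rhs(state, forcing=0.0):
          """Lorenz-63 as a stand-in for a chaotic 'turbulent' solver; the forcing
          term plays the role of the seeded disturbance in the twin run."""
          x, y, z = state
          return np.array([10.0 * (y - x) + forcing,
                           x * (28.0 - z) - y,
                           x * y - (8.0 / 3.0) * z])

      dt, steps, eps = 1e-3, 5000, 1e-8
      base = np.array([1.0, 1.0, 1.0])
      twin = base.copy()
      rng = np.random.default_rng(5)
      for _ in range(steps):
          f = eps * rng.normal()                 # small "native"-like forcing, twin only
          base = base + dt * rhs(base)
          twin = twin + dt * rhs(twin, forcing=f)
          # (twin - base)/eps evolves per the equations linearized about the
          # instantaneous baseline state at every step
      print("perturbation magnitude:", np.linalg.norm(twin - base))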

  4. Key-Generation Algorithms for Linear Piece In Hand Matrix Method

    NASA Astrophysics Data System (ADS)

    Tadaki, Kohtaro; Tsujii, Shigeo

    The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate them. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.

  5. The Effects of Different Training Structures in the Establishment of Conditional Discriminations and Subsequent Performance on Tests for Stimulus Equivalence

    ERIC Educational Resources Information Center

    Arntzen, Erik; Grondahl, Terje; Eilifsen, Christoffer

    2010-01-01

    Previous studies comparing groups of subjects have indicated differential probabilities of stimulus equivalence outcome as a function of training structures. One-to-Many (OTM) and Many-to-One (MTO) training structures seem to produce positive outcomes on tests for stimulus equivalence more often than a Linear Series (LS) training structure does.…

  6. Enhanced performance for the analysis of prostaglandins and thromboxanes by liquid chromatography-tandem mass spectrometry using a new atmospheric pressure ionization source.

    PubMed

    Lubin, Arnaud; Geerinckx, Suzy; Bajic, Steve; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip; Vreeken, Rob J

    2016-04-01

    Eicosanoids, including prostaglandins and thromboxanes, are lipid mediators synthesized from polyunsaturated fatty acids. They play an important role in cell signaling and are often reported as inflammatory markers. LC-MS/MS is the technique of choice for the analysis of these compounds, often in combination with advanced sample preparation techniques. Here we report a head-to-head comparison between an electrospray ionization source (ESI) and a new atmospheric pressure ionization source (UniSpray). The performance of both interfaces was evaluated in various matrices such as human plasma, pig colon and mouse colon. The UniSpray source shows an increase in method sensitivity of up to a factor of 5. Equivalent-to-better linearity and repeatability in various matrices, as well as an increase in signal intensity, were observed in comparison with ESI. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. On the use of Lineal Energy Measurements to Estimate Linear Energy Transfer Spectra

    NASA Technical Reports Server (NTRS)

    Adams, David A.; Howell, Leonard W., Jr.; Adam, James H., Jr.

    2007-01-01

    This paper examines the error resulting from using a lineal energy spectrum to represent a linear energy transfer spectrum for applications in the space radiation environment. Lineal energy and linear energy transfer spectra are compared in three diverse but typical space radiation environments. Different detector geometries are also studied to determine how they affect the error. LET spectra are typically used to compute dose equivalent for radiation hazard estimation and to compute single event effect rates for estimating radiation effects on electronics. The errors in the estimations of dose equivalent and single event rates that result from substituting lineal energy spectra for linear energy transfer spectra are examined. It is found that this substitution has little effect on dose equivalent estimates in the quiet-time interplanetary environment regardless of detector shape. The substitution has more of an effect when the environment is dominated by solar energetic particles or trapped radiation, but even then the errors are minor, especially if a spherical detector is used. For single event estimation, the effect of the substitution can be large if the threshold for the single event effect is near where the linear energy spectrum drops suddenly. It is judged that single event rate estimates made from lineal energy spectra are unreliable and the use of lineal energy spectra for single event rate estimation should be avoided.
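
    The dose-equivalent half of this comparison is a single weighted sum: fold the dose distribution in lineal energy (treated as if it were LET, which is exactly the substitution under study) with the ICRP 60 quality factor. The piecewise Q(L) form below is the ICRP 60 definition; the spectrum is invented for illustration.

      import numpy as np

      def q_icrp60(L):
          """ICRP 60 quality factor as a function of LET (keV/um)."""
          L = np.asarray(L, dtype=float)
          return np.where(L < 10.0, 1.0,
                 np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

      # toy dose distribution d(y): fraction of dose deposited per lineal-energy bin
      y = np.logspace(-1, 3, 50)                                    # keV/um
      d = y * np.exp(-0.5 * np.log(y / 20.0) ** 2); d /= d.sum()    # made-up spectrum

      # the substitution under study: treat lineal energy y as if it were LET
      dose = 1.0                                                    # absorbed dose, mGy
      dose_equiv = dose * np.sum(q_icrp60(y) * d)
      print(f"mean quality factor = {dose_equiv / dose:.2f}")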

  8. A Profilometry-Based Dentifrice Abrasion Method for V8 Brushing Machines Part III: Multi-Laboratory Validation Testing of RDA-PE.

    PubMed

    Schneiderman, Eva; Colón, Ellen L; White, Donald J; Schemehorn, Bruce; Ganovsky, Tara; Haider, Amir; Garcia-Godoy, Franklin; Morrow, Brian R; Srimaneepong, Viritpon; Chumprasert, Sujin

    2017-09-01

    We have previously reported on progress toward the refinement of profilometry-based abrasivity testing of dentifrices using a V8 brushing machine and tactile or optical measurement of dentin wear. The general application of this technique may be advanced by demonstration of successful inter-laboratory confirmation of the method. The objective of this study was to explore the capability of different laboratories in the assessment of dentifrice abrasivity using a profilometry-based evaluation technique developed in our Mason laboratories. In addition, we wanted to assess the interchangeability of human and bovine specimens. Participating laboratories were instructed in methods associated with Radioactive Dentin Abrasivity-Profilometry Equivalent (RDA-PE) evaluation, including site visits to discuss critical elements of specimen preparation, masking, profilometry scanning, and procedures. Laboratories were likewise instructed on the requirement for demonstration of proportional linearity as a key condition for validation of the technique. Laboratories were provided with four test dentifrices, blinded for testing, with a broad range of abrasivity. In each laboratory, a calibration curve was developed for varying V8 brushing strokes (0, 4,000, and 10,000 strokes) with the ISO abrasive standard. Proportional linearity was determined as the ratio of standard abrasion mean depths created with 4,000 and 10,000 strokes (a 2.5-fold difference). The criterion for successful calibration within the method (established in our Mason laboratory) was set at proportional linearity = 2.5 ± 0.3. RDA-PE was compared to Radiotracer RDA for the four test dentifrices, with the latter obtained by averages from three independent Radiotracer RDA sites. Individual laboratories and their results were compared by 1) proportional linearity and 2) acquired RDA-PE values for test pastes. Five sites participated in the study. One site did not pass proportional linearity objectives. Data for this site are not reported at the request of the researchers. Three of the remaining four sites reported herein tested human dentin and all three met proportional linearity objectives for human dentin. Three of four sites participated in testing bovine dentin and all three met the proportional linearity objectives for bovine dentin. RDA-PE values for test dentifrices were similar between sites. All four sites that met the proportional linearity requirement successfully identified the dentifrice formulated above the industry standard 250 RDA (as RDA-PE). The profilometry method showed at least as good reproducibility and differentiation as Radiotracer assessments. It was demonstrated that human and bovine specimens could be used interchangeably. The standardized RDA-PE method was reproduced in multiple laboratories in this inter-laboratory study. Evidence supports that this method is a suitable technique for ISO method 11609 Annex B.

  9. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms

    PubMed Central

    Baldwin, Alex S.; Baker, Daniel H.; Hess, Robert F.

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system’s input has on its output one can estimate the variance of this internal noise. By applying this simple “linear amplifier” model to the human visual system, one can entirely explain an observer’s detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system’s internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies. PMID:26953796

  10. What Do Contrast Threshold Equivalent Noise Studies Actually Measure? Noise vs. Nonlinearity in Different Masking Paradigms.

    PubMed

    Baldwin, Alex S; Baker, Daniel H; Hess, Robert F

    2016-01-01

    The internal noise present in a linear system can be quantified by the equivalent noise method. By measuring the effect that applying external noise to the system's input has on its output one can estimate the variance of this internal noise. By applying this simple "linear amplifier" model to the human visual system, one can entirely explain an observer's detection performance by a combination of the internal noise variance and their efficiency relative to an ideal observer. Studies using this method rely on two crucial factors: firstly that the external noise in their stimuli behaves like the visual system's internal noise in the dimension of interest, and secondly that the assumptions underlying their model are correct (e.g. linearity). Here we explore the effects of these two factors while applying the equivalent noise method to investigate the contrast sensitivity function (CSF). We compare the results at 0.5 and 6 c/deg from the equivalent noise method against those we would expect based on pedestal masking data collected from the same observers. We find that the loss of sensitivity with increasing spatial frequency results from changes in the saturation constant of the gain control nonlinearity, and that this only masquerades as a change in internal noise under the equivalent noise method. Part of the effect we find can be attributed to the optical transfer function of the eye. The remainder can be explained by either changes in effective input gain, divisive suppression, or a combination of the two. Given these effects the efficiency of our observers approaches the ideal level. We show the importance of considering these factors in equivalent noise studies.
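
    At its core, the equivalent noise method fits a line to squared thresholds versus external noise variance: the intercept-to-slope ratio gives the equivalent internal noise, and the slope reflects efficiency. The sketch below recovers the equivalent noise from synthetic thresholds under the linear amplifier model; all values are invented, and the closing comment records the paper's caveat about nonlinearities.

      import numpy as np

      # linear amplifier model: squared threshold rises linearly with external
      # noise variance, c_t^2 = (c_0^2 / sigma_eq^2) * (sigma_eq^2 + sigma_ext^2)
      sigma_eq_true, c0_true = 0.05, 0.02          # equivalent input noise, noise-free threshold
      sigma_ext = np.array([0.0, 0.02, 0.04, 0.08, 0.16, 0.32])
      rng = np.random.default_rng(6)
      c_t2 = c0_true**2 / sigma_eq_true**2 * (sigma_eq_true**2 + sigma_ext**2)
      c_t2 *= rng.normal(1.0, 0.05, c_t2.size)     # measurement scatter

      slope, intercept = np.polyfit(sigma_ext**2, c_t2, 1)
      sigma_eq_est = np.sqrt(intercept / slope)    # intercept/slope = sigma_eq^2
      print(f"estimated equivalent noise sd = {sigma_eq_est:.4f} (truth {sigma_eq_true})")
      # the paper's caveat: a change in a gain-control nonlinearity can shift this
      # estimate and masquerade as a change in internal noise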

  11. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    PubMed Central

    Bednarz, Bryan; Hancox, Cindy; Xu, X George

    2012-01-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU values for IMRT treatments can be two to three times greater than 3D CRT. Therefore, the total equivalent dose in most organs would be higher from the IMRT treatment compared to the box treatment and comparable to the organ doses from the box treatment plus the 6-field boost. This is the first time that organ dose data for an adult male patient of the ICRP reference anatomy have been calculated and documented. The tools presented in this paper can be used to estimate the second cancer risk to patients undergoing radiation treatment. PMID:19671968

  12. Calculated organ doses from selected prostate treatment plans using Monte Carlo simulations and an anatomically realistic computational phantom

    NASA Astrophysics Data System (ADS)

    Bednarz, Bryan; Hancox, Cindy; Xu, X. George

    2009-09-01

    There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU values for IMRT treatments can be two to three times greater than 3D CRT. Therefore, the total equivalent dose in most organs would be higher from the IMRT treatment compared to the box treatment and comparable to the organ doses from the box treatment plus the 6-field boost. This is the first time that organ dose data for an adult male patient of the ICRP reference anatomy have been calculated and documented. The tools presented in this paper can be used to estimate the second cancer risk to patients undergoing radiation treatment.

  13. A Profilometry-Based Dentifrice Abrasion Method for V8 Brushing Machines Part II: Comparison of RDA-PE and Radiotracer RDA Measures.

    PubMed

    Schneiderman, Eva; Colón, Ellen; White, Donald J; St John, Samuel

    2015-01-01

    The purpose of this study was to compare the abrasivity of commercial dentifrices by two techniques: the conventional gold standard radiotracer-based Radioactive Dentin Abrasivity (RDA) method; and a newly validated technique based on V8 brushing that included a profilometry-based evaluation of dentin wear. This profilometry-based method is referred to as RDA-Profilometry Equivalent, or RDA-PE. A total of 36 dentifrices were sourced from four global dentifrice markets (Asia Pacific [including China], Europe, Latin America, and North America) and tested blindly using both the standard radiotracer (RDA) method and the new profilometry method (RDA-PE), taking care to follow specific details related to specimen preparation and treatment. Commercial dentifrices tested exhibited a wide range of abrasivity, with virtually all falling well under the industry accepted upper limit of 250; that is, 2.5 times the level of abrasion measured using an ISO 11609 abrasivity reference calcium pyrophosphate as the reference control. RDA and RDA-PE comparisons were linear across the entire range of abrasivity (r2 = 0.7102) and both measures exhibited similar reproducibility with replicate assessments. RDA-PE assessments were not just linearly correlated, but were also proportional to conventional RDA measures. The linearity and proportionality of the results of the current study support that both methods (RDA or RDA-PE) provide similar results and justify a rationale for making the upper abrasivity limit of 250 apply to both RDA and RDA-PE.

  14. Antipsychotic dose equivalents and dose-years: a standardized method for comparing exposure to different drugs.

    PubMed

    Andreasen, Nancy C; Pressler, Marcus; Nopoulos, Peg; Miller, Del; Ho, Beng-Choon

    2010-02-01

    A standardized quantitative method for comparing dosages of different drugs is a useful tool for designing clinical trials and for examining the effects of long-term medication side effects such as tardive dyskinesia. Such a method requires establishing dose equivalents. An expert consensus group has published charts of equivalent doses for various first- and second-generation antipsychotic medications, and these charts were used in this study. Regression was used to compare each drug in the experts' charts to chlorpromazine and haloperidol and to create formulas for each relationship. The formulas were solved for chlorpromazine 100 mg and haloperidol 2 mg to derive new chlorpromazine and haloperidol equivalents. The formulas were incorporated into our definition of dose-years such that 100 mg/day of chlorpromazine equivalent or 2 mg/day of haloperidol equivalent taken for 1 year is equal to one dose-year. All comparisons to chlorpromazine and haloperidol were highly linear, with R² values greater than 0.9. A power transformation further improved linearity. By deriving a unique formula that converts doses to chlorpromazine or haloperidol equivalents, we can compare otherwise dissimilar drugs. These equivalents can be multiplied by the time an individual has been on a given dose to derive a cumulative value measured in dose-years in the form of (chlorpromazine equivalent in mg) × (time on dose measured in years). After each dose has been converted to dose-years, the results can be summed to provide a cumulative quantitative measure of lifetime exposure. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
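
    The dose-year arithmetic described above is compact enough to sketch in code. A minimal Python illustration follows; the conversion table is a hypothetical placeholder (only the haloperidol entry is grounded in the 2 mg haloperidol = 100 mg chlorpromazine equivalence stated above), whereas the published method derives equivalents from regression formulas:

      # Hypothetical sketch of dose-year accounting; conversion factors are
      # illustrative stand-ins for the regression-derived equivalents.
      CPZ_EQUIV_PER_MG = {
          "haloperidol": 50.0,   # 2 mg haloperidol ~ 100 mg chlorpromazine
      }

      def dose_years(drug: str, dose_mg_per_day: float, years: float) -> float:
          """One dose-year = 100 mg/day chlorpromazine equivalent for one year."""
          cpz_equiv = dose_mg_per_day * CPZ_EQUIV_PER_MG[drug]
          return (cpz_equiv / 100.0) * years

      # 4 mg/day haloperidol for 2.5 years -> (200 / 100) * 2.5 = 5 dose-years
      total_exposure = dose_years("haloperidol", 4.0, 2.5)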

  15. Designing perfect linear polarization converters using perfect electric and magnetic conducting surfaces

    PubMed Central

    Zhou, Gaochao; Tao, Xudong; Shen, Ze; Zhu, Guanghao; Jin, Biaobing; Kang, Lin; Xu, Weiwei; Chen, Jian; Wu, Peiheng

    2016-01-01

    We propose a general framework for the design of a perfect linear polarization converter that works in transmission mode. Using an intuitive picture based on the method of bi-directional polarization mode decomposition, it is shown that when the device under consideration simultaneously possesses two complementary symmetry planes, with one being equivalent to a perfect electric conducting surface and the other being equivalent to a perfect magnetic conducting surface, linear polarization conversion can occur with an efficiency of 100% in the absence of absorptive losses. The proposed framework is validated by two design examples that operate near 10 GHz, where the numerical, experimental and analytic results are in good agreement. PMID:27958313

  16. 40 CFR 53.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...

  17. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
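
    As an illustration of the first (correlated-source) approach, the following Python sketch builds a free-field monopole propagation matrix between candidate source points and numerical array probes and inverts it with a Moore-Penrose pseudo-inverse; the geometry, frequency and data are synthetic stand-ins, not the paper's configuration:

      import numpy as np

      c = 343.0                      # speed of sound (m/s)
      f = 10e3                       # analysis frequency (Hz)
      k = 2 * np.pi * f / c          # acoustic wavenumber

      def monopole_matrix(src_xyz, mic_xyz):
          """G[m, s]: free-field monopole Green's function, source s -> probe m."""
          r = np.linalg.norm(mic_xyz[:, None, :] - src_xyz[None, :, :], axis=2)
          return np.exp(-1j * k * r) / (4 * np.pi * r)

      rng = np.random.default_rng(0)
      sources = rng.uniform(-0.1, 0.1, size=(5, 3))    # candidate source points
      mics = rng.uniform(-1.0, 1.0, size=(32, 3))      # numerical array probes
      mics[:, 2] += 2.0                                # offset array from sources

      G = monopole_matrix(sources, mics)
      q_true = rng.standard_normal(5) + 1j * rng.standard_normal(5)
      p = G @ q_true                                   # synthetic probe pressures

      q = np.linalg.pinv(G) @ p     # equivalent source strengths (correlated model)
      S = np.outer(q, q.conj())     # cross- and auto-power spectral matrix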

  18. Monolithically integrated bacteriorhodopsin/semiconductor opto-electronic integrated circuit for a bio-photoreceiver.

    PubMed

    Xu, J; Bhattacharya, P; Váró, G

    2004-03-15

    The light-sensitive protein bacteriorhodopsin (BR) is monolithically integrated with an InP-based amplifier circuit to realize a novel opto-electronic integrated circuit (OEIC) that performs as a high-speed photoreceiver. The circuit is realized by epitaxial growth of the field-effect transistors, currently used semiconductor device and circuit fabrication techniques, and selective-area BR electro-deposition. The integrated photoreceiver has a responsivity of 175 V/W and a linear photoresponse with a dynamic range of 16 dB under 594 nm photoexcitation. The dynamics of the photochemical cycle of BR have also been modeled, and a proposed equivalent circuit simulates the measured BR photoresponse with good agreement.

  19. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Jain, A.; Kreutz-Delgado, K.

    1991-01-01

    A recently developed spatial operator algebra for manipulator modeling, control, and trajectory design is discussed. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of a manipulator. Inversion of operators can be efficiently obtained via techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator and for control and trajectory design algorithms. The interpretation of expressions within the algebraic framework leads to enhanced conceptual and physical understanding of manipulator dynamics and kinematics.

  20. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Y; Waldron, T; Pennington, E

    Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single-fraction equivalent doses (SFED) of the tumor equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organs-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from the tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The percentage doses of the OARs relative to the tumor dose were assumed to be the same as for a single-fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than for I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy using the USC and L-Q models, respectively, which can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger than those from the USC model (p < 0.0274) because of the large single fraction size (> 14 Gy). Conclusion: The estimated single-fraction doses were feasible to deliver within 1 hour using a high-dose-rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than with the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules should follow.
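
    The L-Q conversions used above follow standard closed forms, sketched below in Python (the USC branch, which replaces the L-Q curve beyond a transition dose for large fractions, is omitted here):

      def bed_lq(n_fractions: float, dose_per_fraction: float, alpha_beta: float) -> float:
          """Biologically effective dose from the linear-quadratic model."""
          return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

      def eqd2(bed: float, alpha_beta: float) -> float:
          """Equivalent dose in 2-Gy fractions for a given BED."""
          return bed / (1.0 + 2.0 / alpha_beta)

      # Example: a single 25 Gy fraction to tissue with alpha/beta = 2.58 Gy
      bed = bed_lq(1, 25.0, 2.58)    # ~267 Gy
      print(eqd2(bed, 2.58))         # ~150 Gy EQD2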

  1. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments appears essential for achieving accurate environmental-noise prediction in cities. This paper addresses the problem with a procedure based on feature-selection techniques and machine-learning regression methods. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq): (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which makes LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are (i) correlation-based feature-subset selection (CFS) and (ii) wrapper for feature-subset selection (WFS), and the data-reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
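
    As a rough sketch of one such scheme (data reduction followed by a robust regressor), the snippet below wires PCA into a Gaussian-process regression pipeline with scikit-learn; the data, component count and noise level are illustrative placeholders, not the study's configuration:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      X = rng.standard_normal((200, 25))          # stand-in urban/traffic features
      y = 55 + 3 * X[:, 0] - X[:, 1] ** 2 + rng.normal(0, 1.1, 200)  # stand-in LAeq

      model = make_pipeline(StandardScaler(),
                            PCA(n_components=8),
                            GaussianProcessRegressor(alpha=1.0))
      r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
      print(r2_scores.mean())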

  2. Robust and efficient pharmacokinetic parameter non-linear least squares estimation for dynamic contrast enhanced MRI of the prostate.

    PubMed

    Kargar, Soudabeh; Borisch, Eric A; Froemming, Adam T; Kawashima, Akira; Mynderse, Lance A; Stinson, Eric G; Trzasko, Joshua D; Riederer, Stephen J

    2018-05-01

    To describe an efficient numerical optimization technique using non-linear least squares to estimate perfusion parameters for the Tofts and extended Tofts models from dynamic contrast-enhanced (DCE) MRI data, and to apply the technique to prostate cancer. Parameters were estimated by fitting the two Tofts-based perfusion models to the acquired data via non-linear least squares. We apply variable projection (VP) to convert the fitting problem from a multi-dimensional search to a one-dimensional line search, improving computational efficiency and robustness. Using simulation and DCE-MRI studies in twenty patients with suspected prostate cancer, the VP-based solver was compared against the traditional Levenberg-Marquardt (LM) strategy for accuracy, noise amplification, robustness of convergence, and computation time. The simulation demonstrated that VP and LM were both accurate, in that the medians closely matched the assumed values across typical signal-to-noise ratio (SNR) levels for both Tofts models. VP and LM showed similar noise sensitivity. Studies using the patient data showed that the VP method reliably converged and matched results from LM, with approximately 3× and 2× reductions in computation time for the standard (two-parameter) and extended (three-parameter) Tofts models. While LM failed to converge in 14% of the patient data, VP converged in 100% of cases. The VP-based method for non-linear least squares estimation of perfusion parameters for prostate MRI is equivalent in accuracy and robustness to noise, while being reliably (100%) convergent and computationally about 3× (standard Tofts) and 2× (extended Tofts) faster than the LM-based method. Copyright © 2017 Elsevier Inc. All rights reserved.
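
    The core of the variable projection idea, eliminating the linearly entering Ktrans so that only kep remains to be searched, can be sketched for the standard Tofts model as follows; the arterial input function, sampling and bounds are illustrative assumptions:

      import numpy as np
      from scipy.optimize import minimize_scalar

      # Standard Tofts model: Ct(t) = Ktrans * conv(Cp, exp(-kep * t)).
      # For fixed kep, Ktrans enters linearly and has a closed-form solution,
      # leaving a 1-D bounded search over kep.

      def tofts_basis(kep, t, cp, dt):
          return np.convolve(cp, np.exp(-kep * t))[: len(t)] * dt

      def vp_residual(kep, t, cp, ct, dt):
          b = tofts_basis(kep, t, cp, dt)
          ktrans = (b @ ct) / (b @ b)          # closed-form linear solve
          return np.sum((ct - ktrans * b) ** 2)

      t = np.arange(0.0, 300.0, 2.0)           # s; illustrative sampling
      dt = t[1] - t[0]
      cp = (t / 60.0) * np.exp(-t / 60.0)      # stand-in arterial input function
      noise = np.random.default_rng(2).normal(0, 1e-4, t.size)
      ct = 0.2 * tofts_basis(0.01, t, cp, dt) + noise

      res = minimize_scalar(vp_residual, bounds=(1e-4, 0.1), method="bounded",
                            args=(t, cp, ct, dt))
      kep_hat = res.x
      b = tofts_basis(kep_hat, t, cp, dt)
      ktrans_hat = (b @ ct) / (b @ b)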

  3. Evaluation of optimum room entry times for radiation therapists after high energy whole pelvic photon treatments.

    PubMed

    Ho, Lavine; White, Peter; Chan, Edward; Chan, Kim; Ng, Janet; Tam, Timothy

    2012-01-01

    Linear accelerators operating at or above 10 MV produce neutrons by photonuclear reactions and induce activation in machine components, which are a source of potential exposure for radiation therapists. This study estimated gamma dose contributions to radiation therapists during high-energy, whole pelvic, photon beam treatments and determined the optimum room entry times in terms of the safety of radiation therapists. Two types of technique (anterior-posterior opposing and 3-field) were studied. An Elekta Precise treatment system, operating up to 18 MV, was investigated. Measurements with an area monitoring device (a Mini 900R radiation monitor) were performed to calculate gamma dose rates around the radiotherapy facility. Measurements inside the treatment room were performed while the linear accelerator was in use. The doses received by radiation therapists were estimated, and optimum room entry times were determined. The highest gamma dose rates were approximately 7 μSv/h inside the treatment room, while the doses in the control room were close to background (~0 μSv/h) for all techniques. The highest personal dose received by radiation therapists was estimated at 5 mSv/yr. To optimize protection, radiation therapists should wait up to 11 min after beam-off before entering the room. The potential risks to radiation therapists under standard safety procedures were well below internationally recommended values, but risks could be further decreased by delaying room entry. Depending on the technique used, optimum entry times ranged from 7 to 11 min. A balance between moderate treatment times and the reduction in measured equivalent doses should be considered.

  4. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are expensive in both computation and memory. These techniques therefore face many challenges regarding their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost at mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that address these three challenges and enable the use of wave-based sound propagation in interactive applications. First, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Second, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical-harmonic representation of source directivity can support analytical, data-driven, rotating, or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all components of the wave simulator to the parallel processing capabilities of graphics processors, significant improvements in performance can be achieved compared to CPU-based simulators while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.

  5. Elucidating the Relations Between Monotonic and Fatigue Properties of Laser Powder Bed Fusion Stainless Steel 316L

    NASA Astrophysics Data System (ADS)

    Zhang, Meng; Sun, Chen-Nan; Zhang, Xiang; Goh, Phoi Chin; Wei, Jun; Li, Hua; Hardacre, David

    2018-03-01

    The laser powder bed fusion (L-PBF) technique builds parts with higher static strength than conventional manufacturing processes through the formation of ultrafine grains. However, its fatigue endurance strength σ_f does not match the increased monotonic tensile strength σ_b. This work examines the monotonic and fatigue properties of as-built and heat-treated L-PBF stainless steel 316L. It was found that the general linear relation σ_f = m·σ_b describing conventional ferrous materials is not applicable to L-PBF parts because of the influence of porosity. Instead, the ductility parameter correlated linearly with fatigue strength and was proposed as the new fatigue assessment criterion for porous L-PBF parts. Annealed parts conformed to the strength-ductility trade-off. Fatigue resistance was reduced at short lives, but the effect was partially offset by the higher ductility, such that compared with an as-built part of equivalent monotonic strength, the heat-treated parts were more fatigue resistant.

  6. Investigation on a mechanical vibration absorber with tunable piecewise-linear stiffness

    NASA Astrophysics Data System (ADS)

    Shui, Xin; Wang, Shimin

    2018-02-01

    The design and characterization of a mechanical vibration absorber are addressed. A distinctive feature of the absorber is its tunable piecewise-linear stiffness, which is realized by means of a slider with two stop-blocks that constrain the bilateral deflections of the elastic support. A new analytical approach, named the equivalent stiffness technique (EST), is introduced and then employed to obtain the analytical relations of the frequency, amplitude and phase, so as to give a more comprehensive characterization of the absorber (see the sketch below for one common way of defining an amplitude-dependent equivalent stiffness). Experiments are conducted to demonstrate the feasibility of the design. The experimental data show good agreement with the analytical results. The final results indicate that the tunable stiffness absorber (TSA) possesses a typical nonlinear characteristic at each given position of the slider, and its stiffness can be tuned in real time over a wide range by adjusting the slider position. Hence the TSA has a large optimum vibration-absorption range together with a wide suppression band around each optimal position, which contributes to its excellent capacity for vibration absorption.
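
    For readers who want a concrete handle on the amplitude-dependent stiffness of a piecewise-linear spring, the sketch below uses first-harmonic (describing-function) averaging, which is one common way to define an equivalent stiffness; it is not the paper's EST derivation:

      import numpy as np
      from scipy.integrate import quad

      def restoring_force(x, k1=1.0, k2=5.0, gap=0.5):
          """Stiffness k1 inside |x| <= gap, stiffer k2 beyond the stop-blocks."""
          if abs(x) <= gap:
              return k1 * x
          return k1 * np.sign(x) * gap + k2 * (x - np.sign(x) * gap)

      def k_equivalent(A, **kw):
          """k_eq(A) = (1 / (pi A)) * integral_0^{2pi} f(A sin t) sin t dt."""
          integrand = lambda t: restoring_force(A * np.sin(t), **kw) * np.sin(t)
          val, _ = quad(integrand, 0.0, 2.0 * np.pi)
          return val / (np.pi * A)

      for A in (0.25, 0.5, 1.0, 2.0):
          print(A, k_equivalent(A))   # grows from k1 toward k2 with amplitude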

  7. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of the proposed algorithm under low-SNR conditions. Simulation results show the superior performance of our proposed methods.
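
    The closed-form solution of a linearly constrained minimum-output-variance problem, minimizing w^H R w subject to c^H w = 1, underlies this family of equalizers and is easy to sketch; the constraint vector below is an illustrative choice, not the specific constraint proposed in the paper:

      import numpy as np

      rng = np.random.default_rng(3)
      h = np.array([1.0, 0.5, 0.2])                  # unknown channel (illustrative)
      s = rng.choice([-1.0, 1.0], size=5000)         # white source for this demo
      x = np.convolve(s, h)[: s.size]                # received signal

      L = 8                                          # equalizer length
      X = np.lib.stride_tricks.sliding_window_view(x, L)   # regressor windows
      R = X.T @ X / X.shape[0]                       # output covariance estimate

      c = np.zeros(L); c[0] = 1.0                    # anchor the leading tap
      w = np.linalg.solve(R, c)                      # w  =  R^{-1} c / (c^H R^{-1} c)
      w /= c @ w                                     # enforce the constraint c^H w = 1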

  8. Dose equivalent near the bone-soft tissue interface from nuclear fragments produced by high-energy protons

    NASA Technical Reports Server (NTRS)

    Shavers, M. R.; Poston, J. W.; Cucinotta, F. A.; Wilson, J. W.

    1996-01-01

    During manned space missions, high-energy nucleons of cosmic and solar origin collide with atomic nuclei of the human body and produce a broad linear energy transfer (LET) spectrum of secondary particles, called target fragments. These nuclear fragments are often more biologically harmful than the direct ionization of the incident nucleon. That these secondary particles increase tissue absorbed dose in regions adjacent to the bone-soft tissue interface was demonstrated in a previous publication. To assess radiological risks to tissue near the bone-soft tissue interface, a computer transport model for nuclear fragments produced by high-energy nucleons was used in this study to calculate integral linear energy transfer spectra and dose equivalents resulting from nuclear collisions of 1-GeV protons traversing bone and red bone marrow. In terms of dose equivalent averaged over trabecular bone marrow, target fragments emitted from interactions in both tissues are predicted to be at least as important as the direct ionization of the primary protons, and twice as important if recently recommended radiation weighting factors and "worst-case" geometry are used. The use of conventional dosimetry (absorbed dose weighted by a linear energy transfer-dependent quality factor) as an appropriate framework for predicting risk from low fluences of high-LET target fragments is discussed.
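
    Conventional dosimetry as described here reduces to weighting the dose-LET spectrum by a quality factor. The sketch below assumes the ICRP 60 Q(L) relation (L in keV/μm); the spectrum values are illustrative:

      import numpy as np

      def q_icrp60(L):
          """ICRP 60 quality factor as a function of unrestricted LET (keV/um)."""
          L = np.asarray(L, dtype=float)
          return np.where(L < 10, 1.0,
                 np.where(L <= 100, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

      def dose_equivalent(L_bins, dose_per_bin):
          """H = sum_i Q(L_i) * D_i over the differential dose-LET spectrum."""
          return float(np.sum(q_icrp60(L_bins) * dose_per_bin))

      L_bins = np.array([5.0, 20.0, 80.0, 200.0])   # keV/um (illustrative bins)
      D = np.array([8e-3, 1e-3, 2e-4, 5e-5])        # Gy per bin (illustrative)
      print(dose_equivalent(L_bins, D), "Sv")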

  9. Comparison of four commercial devices for RapidArc and sliding window IMRT QA

    PubMed Central

    Chandraraj, Varatharaj; Manickam, Ravikumar; Esquivel, Carlos; Supe, Sanjay S.; Papanikolaou, Nikos

    2011-01-01

    For intensity-modulated radiation therapy, evaluation of the measured dose against the dose calculated by the treatment planning system is essential in the context of patient-specific quality assurance. The complexity of volumetric arc radiotherapy delivery, attributable to its dynamic and synchronized nature, requires new methods and potentially new tools for the quality assurance of such techniques. In the present study, we evaluated and compared the dosimetric performance of EDR2 film and three other commercially available quality assurance devices: the IBA I'MatriXX array, the PTW Seven29 array and the Delta 4 array. The evaluation of these dosimetric systems was performed for RapidArc and IMRT deliveries using a Varian NovalisTX linear accelerator. The plans were generated using the Varian Eclipse treatment planning system. Our results showed that all four QA techniques yield equivalent results. All patient QAs passed our institutional clinical criteria of a gamma index based on a 3% dose difference and 3 mm distance to agreement. In addition, Bland-Altman analysis showed that the calculated gamma values of all three QA devices were within 5% of those of the film. The results showed that the four QA systems used in this patient-specific IMRT QA analysis are equivalent. We concluded that the dosimetric systems under investigation can be used interchangeably for routine patient-specific QA. PACS numbers: 87.55.Qr, 87.56.Fc
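
    The 3%/3 mm gamma criterion used for these comparisons can be sketched in one dimension as follows; clinical tools operate on 2-D/3-D dose grids with global/local dose options, so this is only a minimal illustration:

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
          """Gamma per reference point; a point passes when gamma <= 1."""
          d_crit = dd * d_ref.max()            # global dose-difference criterion
          gam = np.empty(x_ref.size)
          for i in range(x_ref.size):
              g2 = ((x_eval - x_ref[i]) / dta) ** 2 \
                 + ((d_eval - d_ref[i]) / d_crit) ** 2
              gam[i] = np.sqrt(g2.min())       # best match over the search space
          return gam

      x = np.linspace(0.0, 100.0, 201)                   # position (mm)
      measured = np.exp(-((x - 50.0) / 18.0) ** 2)       # stand-in profile
      calculated = 1.01 * np.exp(-((x - 51.0) / 18.0) ** 2)
      pass_rate = np.mean(gamma_1d(x, measured, x, calculated) <= 1.0)
      print(f"{100 * pass_rate:.1f}% of points pass 3%/3 mm")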

  10. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of the choice of regularization.
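
    The forward (ORMP-like) half of such a greedy scheme is easy to sketch; the backward-elimination pass and the partitioned-inverse speed-ups described above are omitted:

      import numpy as np

      def forward_sls(A, y, k):
          """Greedily pick k columns of A to minimize ||y - A_S x_S||^2."""
          selected, residual = [], y.copy()
          for _ in range(k):
              scores = np.abs(A.T @ residual) / np.linalg.norm(A, axis=0)
              scores[selected] = -np.inf                  # do not repick columns
              selected.append(int(np.argmax(scores)))
              As = A[:, selected]
              x, *_ = np.linalg.lstsq(As, y, rcond=None)  # re-fit on selected set
              residual = y - As @ x
          return selected, x

      rng = np.random.default_rng(4)
      A = rng.standard_normal((100, 40))
      y = A[:, [3, 17, 29]] @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.01, 100)
      print(forward_sls(A, y, 3)[0])    # expected to recover columns {3, 17, 29}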

  11. [Comparison between rapid detection method of enzyme substrate technique and multiple-tube fermentation technique in water coliform bacteria detection].

    PubMed

    Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong

    2006-07-01

    To compare a rapid detection method based on the enzyme substrate technique with the multiple-tube fermentation technique for detecting coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. These findings suggest that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.

  12. The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krmar, M.; Kuzmanović, A.; Nikolić, D.

    2013-08-15

    Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of the paraffin screen, which covered only 27.5% of the cross-sectional area of the maze. The neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered: the first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons in an isotropic manner; in the second model, the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen placed inside the maze. It was also determined that the contributions to the dose from areas that were not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. It was also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.

  13. Out-of-field neutron and leakage photon exposures and the associated risk of second cancers in high-energy photon radiotherapy: current status.

    PubMed

    Takam, R; Bezak, E; Marcu, L G; Yeoh, E

    2011-10-01

    Determination and understanding of out-of-field neutron and photon doses in accelerator-based radiotherapy is an important issue since linear accelerators operating at high energies (>10 MV) produce secondary radiations that irradiate parts of the patient's anatomy distal to the target region, potentially resulting in detrimental health effects. This paper provides a compilation of data (technical and clinical) reported in the literature on the measurement and Monte Carlo simulations of peripheral neutron and photon doses produced from high-energy medical linear accelerators and the reported risk and/or incidence of second primary cancer of tissues distal to the target volume. Information in the tables facilitates easier identification of (1) the various methods and measurement techniques used to determine the out-of-field neutron and photon radiations, (2) reported linac-dependent out-of-field doses, and (3) the risk/incidence of second cancers after radiotherapy due to classic and modern treatment methods. Regardless of the measurement technique and type of accelerator, the neutron dose equivalent per unit photon dose ranges from as low as 0.1 mSv/Gy to as high as 20.4 mSv/Gy. This radiation dose potentially contributes to the induction of second primary cancer in normal tissues outside the treated area.

  14. Correntropy-based partial directed coherence for testing multivariate Granger causality in nonlinear processes

    NASA Astrophysics Data System (ADS)

    Kannan, Rohit; Tangirala, Arun K.

    2014-06-01

    Identification of directional influences in multivariate systems is of prime importance in several applications of engineering and the sciences, such as plant topology reconstruction, fault detection and diagnosis, and neuroscience. A spectrum of related directionality measures, ranging from linear measures such as partial directed coherence (PDC) to nonlinear measures such as transfer entropy, has emerged over the past two decades. The PDC-based technique is simple and effective, but being a linear directionality measure it has limited applicability. On the other hand, transfer entropy, despite being a robust nonlinear measure, is computationally intensive and practically implementable only for bivariate processes. The objective of this work is to develop a nonlinear directionality measure, termed KPDC, that possesses the simplicity of PDC but is still applicable to nonlinear processes. The technique is founded on a nonlinear measure called correntropy, a recently proposed generalized correlation measure. The proposed method is equivalent to constructing PDC in a kernel space, where the PDC is estimated using a vector autoregressive model built on correntropy. A consistent estimator of the KPDC is developed and important theoretical results are established. A permutation scheme combined with the sequential Bonferroni procedure is proposed for testing the hypothesis of absence of causality. It is demonstrated through several case studies that the proposed methodology effectively detects Granger causality in nonlinear processes.
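
    A sample estimate of correntropy with a Gaussian kernel, on which such kernelized measures are built, can be sketched as follows; the kernel width and centering convention are assumptions, not the paper's exact estimator:

      import numpy as np

      def correntropy(x, y, sigma=1.0):
          """V(X, Y) = E[ k_sigma(X - Y) ], estimated from paired samples."""
          return np.mean(np.exp(-((x - y) ** 2) / (2 * sigma ** 2)))

      def centered_correntropy(x, y, sigma=1.0):
          """Subtract the mean kernel value over all sample pairs."""
          cross = np.exp(-((x[:, None] - y[None, :]) ** 2) / (2 * sigma ** 2))
          return correntropy(x, y, sigma) - cross.mean()

      rng = np.random.default_rng(5)
      x = rng.standard_normal(1000)
      y = np.tanh(x) + 0.1 * rng.standard_normal(1000)   # nonlinear coupling
      print(centered_correntropy(x, y))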

  15. On structural identifiability analysis of the cascaded linear dynamic systems in isotopically non-stationary 13C labelling experiments.

    PubMed

    Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang

    2018-06-01

    Isotopically non-stationary 13C labelling experiments, as an emerging experimental technique, can estimate the intracellular fluxes of a cell culture during an isotopic transient period. However, to the best of our knowledge, the issue of structural identifiability analysis of non-stationary isotope experiments is not well addressed in the literature. In this work, local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that only a single time point is necessary to achieve the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in this work. Optimal measurement sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method. Copyright © 2018 Elsevier Inc. All rights reserved.
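
    The rank criterion itself is straightforward to sketch: form the Jacobian of the stacked measurement time-derivatives with respect to the free parameters and inspect its numerical rank. The model function below is a stand-in, not a cumomer system:

      import numpy as np

      def numerical_jacobian(g, theta, eps=1e-6):
          """Forward-difference Jacobian of g at theta."""
          g0 = g(theta)
          J = np.empty((g0.size, theta.size))
          for j in range(theta.size):
              tp = theta.copy(); tp[j] += eps
              J[:, j] = (g(tp) - g0) / eps
          return J

      def g(theta):
          """Stand-in for stacked measurement derivatives at one time point."""
          a, b = theta
          return np.array([a + b, a * b, a - 2 * b, np.exp(-a) * b])

      J = numerical_jacobian(g, np.array([0.5, 1.5]))
      print(np.linalg.matrix_rank(J))  # full column rank -> locally identifiable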

  16. Localization of the eigenvalues of linear integral equations with applications to linear ordinary differential equations.

    NASA Technical Reports Server (NTRS)

    Sloss, J. M.; Kranzler, S. K.

    1972-01-01

    The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.

  17. Left-sided breast cancer and risks of secondary lung cancer and ischemic heart disease : Effects of modern radiotherapy techniques.

    PubMed

    Corradini, Stefanie; Ballhausen, Hendrik; Weingandt, Helmut; Freislederer, Philipp; Schönecker, Stephan; Niyazi, Maximilian; Simonetto, Cristoforo; Eidemüller, Markus; Ganswindt, Ute; Belka, Claus

    2018-03-01

    Modern breast cancer radiotherapy techniques, such as respiratory-gated radiotherapy in deep-inspiration breath-hold (DIBH) or volumetric-modulated arc radiotherapy (VMAT) have been shown to reduce the high dose exposure of the heart in left-sided breast cancer. The aim of the present study was to comparatively estimate the excess relative and absolute risks of radiation-induced secondary lung cancer and ischemic heart disease for different modern radiotherapy techniques. Four different treatment plans were generated for ten computed tomography data sets of patients with left-sided breast cancer, using either three-dimensional conformal radiotherapy (3D-CRT) or VMAT, in free-breathing (FB) or DIBH. Dose-volume histograms were used for organ equivalent dose (OED) calculations using linear, linear-exponential, and plateau models for the lung. A linear model was applied to estimate the long-term risk of ischemic heart disease as motivated by epidemiologic data. Excess relative risk (ERR) and 10-year excess absolute risk (EAR) for radiation-induced secondary lung cancer and ischemic heart disease were estimated for different representative baseline risks. The DIBH maneuver resulted in a significant reduction of the ERR and estimated 10-year excess absolute risk for major coronary events compared to FB in 3D-CRT plans (p = 0.04). In VMAT plans, the mean predicted risk reduction through DIBH was less pronounced and not statistically significant (p = 0.44). The risk of radiation-induced secondary lung cancer was mainly influenced by the radiotherapy technique, with no beneficial effect through DIBH. VMAT plans correlated with an increase in 10-year EAR for radiation-induced lung cancer as compared to 3D-CRT plans (DIBH p = 0.007; FB p = 0.005, respectively). However, the EARs were affected more strongly by nonradiation-associated risk factors, such as smoking, as compared to the choice of treatment technique. The results indicate that 3D-CRT plans in DIBH pose the lowest risk for both major coronary events and secondary lung cancer.
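
    The OED calculation from a differential dose-volume histogram can be sketched for the three dose-response forms named above; the functional forms follow the commonly used organ-equivalent-dose literature, and the parameter values below are placeholders rather than the study's organ-specific fits:

      import numpy as np

      def oed_linear(v, d):
          """Volume-weighted mean dose."""
          return np.sum(v * d) / np.sum(v)

      def oed_linear_exponential(v, d, alpha=0.044):
          """Linear rise damped by exponential cell kill; alpha is a placeholder."""
          return np.sum(v * d * np.exp(-alpha * d)) / np.sum(v)

      def oed_plateau(v, d, delta=0.139):
          """Response saturating at high dose; delta is a placeholder."""
          return np.sum(v * (1.0 - np.exp(-delta * d)) / delta) / np.sum(v)

      d = np.linspace(0.25, 50.0, 100)     # DVH bin-center doses (Gy)
      v = np.exp(-d / 10.0)                # stand-in relative volumes per bin
      print(oed_linear(v, d), oed_linear_exponential(v, d), oed_plateau(v, d))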

  18. Compact lumped circuit model of discharges in DC accelerator using partial element equivalent circuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.

    2014-07-01

    DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of a DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in it. The electrical discharges prevailing in the accelerator and their properties can be evaluated with this equivalent model. A parallel-coupled voltage multiplier structure is simulated in small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared to finite element techniques, this technique gives a circuital representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared to that of the FEM technique over the frequency range 0-200 MHz. (author)

  19. Equivalent equations of motion for gravity and entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel

    We demonstrate an equivalence between the wave equation obeyed by the entanglement entropy of CFT subregions and the linearized bulk Einstein equation in Anti-de Sitter space. In doing so, we make use of the formalism of kinematic space and fields on this space. We show that the gravitational dynamics are equivalent to a gauge invariant wave-equation on kinematic space and that this equation arises in natural correspondence to the conformal Casimir equation in the CFT.

  1. Effects of Optical Blur Reduction on Equivalent Intrinsic Blur

    PubMed Central

    Valeshabad, Ali Kord; Wanek, Justin; McAnany, J. Jason; Shahidi, Mahnaz

    2015-01-01

    Purpose: To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Methods: Twelve visually normal individuals (age: 31 ± 12 years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) due to high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. Results: σopt and σint were significantly reduced and visual acuity (VA) was significantly improved after AO correction (P ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, P ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although the difference was marginally significant (P = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, P < 0.001), and the two parameters were related linearly with a slope of 0.46. Conclusions: Reduction in equivalent intrinsic blur was greater than the reduction in optical blur due to AO correction of wavefront error. This finding implies that VA in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone. PMID:25785538

  2. Effects of optical blur reduction on equivalent intrinsic blur.

    PubMed

    Kord Valeshabad, Ali; Wanek, Justin; McAnany, J Jason; Shahidi, Mahnaz

    2015-04-01

    To determine the effect of optical blur reduction on equivalent intrinsic blur, an estimate of the blur within the visual system, by comparing optical and equivalent intrinsic blur before and after adaptive optics (AO) correction of wavefront error. Twelve visually normal subjects (mean [±SD] age, 31 [±12] years) participated in this study. Equivalent intrinsic blur (σint) was derived using a previously described model. Optical blur (σopt) caused by high-order aberrations was quantified by Shack-Hartmann aberrometry and minimized using AO correction of wavefront error. σopt and σint were significantly reduced and visual acuity was significantly improved after AO correction (p ≤ 0.004). Reductions in σopt and σint were linearly dependent on the values before AO correction (r ≥ 0.94, p ≤ 0.002). The reduction in σint was greater than the reduction in σopt, although it was marginally significant (p = 0.05). σint after AO correlated significantly with σint before AO (r = 0.92, p < 0.001), and the two parameters were related linearly with a slope of 0.46. Reduction in equivalent intrinsic blur was greater than the reduction in optical blur after AO correction of wavefront error. This finding implies that visual acuity in subjects with high equivalent intrinsic blur can be improved beyond that expected from the reduction in optical blur alone.

  3. Estimation of Biochemical Constituents From Fresh, Green Leaves By Spectrum Matching Techniques

    NASA Technical Reports Server (NTRS)

    Goetz, A. F. H.; Gao, B. C.; Wessman, C. A.; Bowman, W. D.

    1990-01-01

    Estimation of biochemical constituents in vegetation such as lignin, cellulose, starch, sugar and protein by remote sensing methods is an important goal in ecological research. The spectral reflectances of dried leaves exhibit diagnostic absorption features which can be used to estimate the abundance of important constituents. Lignin and nitrogen concentrations have been obtained from canopies by use of imaging spectrometry and multiple linear regression techniques. The difficulty in identifying individual spectra of leaf constituents in the region beyond 1 micrometer is that liquid water contained in the leaf dominates the spectral reflectance of leaves in this region. By use of spectrum matching techniques, originally used to quantify whole column water abundance in the atmosphere and equivalent liquid water thickness in leaves, we have been able to remove the liquid water contribution to the spectrum. The residual spectra resemble spectra for cellulose in the 1.1 micrometer region, lignin in the 1.7 micrometer region, and starch in the 2.0-2.3 micrometer region. In the entire 1.0-2.3 micrometer region each of the major constituents contributes to the spectrum. Quantitative estimates will require using unmixing techniques on the residual spectra.

  4. Cold-air performance of a tip turbine designed to drive a lift fan

    NASA Technical Reports Server (NTRS)

    Haas, J. E.; Kofskey, M. G.; Hotz, G. M.

    1978-01-01

    Performance was obtained over a range of speeds and pressure ratios for a 0.4 linear scale version of the LF460 lift fan turbine with the rotor radial tip clearance reduced to about 2.5 percent of the rotor blade height. These tests covered a range of speeds from 60 to 140 percent of design equivalent speed and a range of scroll inlet total to diffuser exit static pressure ratios from 2.6 to 4.2. Results are presented in terms of equivalent mass flow, equivalent torque, equivalent specific work, and efficiency.

  5. Determination of the optical properties of melanin-pigmented human skin equivalents using terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Lipscomb, Dawn; Echchgadda, Ibtissam; Peralta, Xomalin G.; Wilmink, Gerald J.

    2013-02-01

    Terahertz time-domain spectroscopy (THz-TDS) methods have been utilized in previous studies to characterize the optical properties of skin and its primary constituents (i.e., water, collagen, and keratin). However, similar experiments have not yet been performed to investigate whether melanocytes, and the melanin pigment that they synthesize, contribute to skin's optical properties. In this study, we used THz-TDS in transmission geometry to measure the optical properties of in vitro human skin equivalents with or without normal human melanocytes. Skin equivalents were cultured for three weeks to promote gradual melanogenesis, and THz time-domain data were collected at various time intervals. Frequency-domain analysis techniques were used to determine the index of refraction (n) and absorption coefficient (μa) for each skin sample over the frequency range of 0.1-2.0 THz. We found that, for all samples, n decreased exponentially and μa increased linearly with increasing frequency. Additionally, we observed that skin samples with higher levels of melanin exhibited greater n and μa values than the non-pigmented samples. Our results indicate that melanocytes and the degree of melanin pigmentation contribute in an appreciable manner to the skin's optical properties. Future studies will be performed to examine whether these contributions are observed in human skin in vivo.
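
    The paper does not spell out its extraction algorithm, but the standard thick-sample transmission analysis for THz-TDS is sketched below as an assumed reading; sign conventions for the phase vary between setups:

      import numpy as np

      # Assumed thick-sample transmission relations:
      #   n(w)    = 1 + c * dphi(w) / (w * d)
      #   mu_a(w) = -(2 / d) * ln( |T(w)| * (n + 1)^2 / (4 n) )
      # with T(w) the sample/reference field ratio and d the sample thickness.

      c0 = 2.998e8   # speed of light (m/s)

      def extract_optical_properties(freq_hz, T_complex, d_m):
          w = 2 * np.pi * freq_hz
          dphi = -np.unwrap(np.angle(T_complex))   # sign convention assumed
          n = 1.0 + c0 * dphi / (w * d_m)
          mu_a = -(2.0 / d_m) * np.log(np.abs(T_complex) * (n + 1) ** 2 / (4 * n))
          return n, mu_a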

  6. Nature of size effects in compact models of field effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torkhov, N. A.

    Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of the so-called size effects—dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established for the linear model parameters of the equivalent circuit elements of internal transistors, with the fractal geometry of the heteroepitaxial structure manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that the size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved in designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models, not only of field effect transistors but of any semiconductor devices with nonlinear instrumental characteristics.

  7. Detailing the equivalence between real equiangular tight frames and certain strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Fickus, Matthew; Watson, Cody E.

    2015-08-01

    An equiangular tight frame (ETF) is a set of unit vectors whose coherence achieves the Welch bound, and so is as incoherent as possible. They arise in numerous applications. It is well known that real ETFs are equivalent to a certain subclass of strongly regular graphs. In this note, we give some alternative techniques for understanding this equivalence. In a later document, we will use these techniques to further generalize this theory.

  8. MEMS 3-DoF gyroscope design, modeling and simulation through equivalent circuit lumped parameter model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mian, Muhammad Umer; Khir, M. H. Md.; Tang, T. B.

    Pre-fabrication behavioural and performance analysis with computer-aided design (CAD) tools is a common and cost-effective practice. In light of this, we present a simulation methodology for a dual-mass-oscillator-based 3-degree-of-freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped-parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components that are counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, the mathematical modeling, and the simulation are presented in this paper. The behavior of the equivalent lumped models derived for the proposed device design is simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. The drive-mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz, respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be time-efficient for the analysis of complex MEMS devices like 3-DoF gyroscopes, and an alternative to the complex and time-consuming coupled-field finite element analysis (FEA) previously used.

  9. Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads

    NASA Astrophysics Data System (ADS)

    Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank

    2017-09-01

    In this paper, the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling, the general strategy of characterizing the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads, which are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. The main result is that full-wave simulation results can be reproduced in this way. Furthermore, it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insight into the coupling between the HPEM field excitation and the nonlinearly loaded loop antenna.

  10. Verification of intensity modulated radiation therapy beams using a tissue equivalent plastic scintillator dosimetry system

    NASA Astrophysics Data System (ADS)

    Petric, Martin Peter

    This thesis describes the development and implementation of a novel method for the dosimetric verification of intensity modulated radiation therapy (IMRT) fields with several advantages over current techniques. Through the use of a tissue equivalent plastic scintillator sheet viewed by a charge-coupled device (CCD) camera, this method provides a truly tissue equivalent dosimetry system capable of efficiently and accurately performing field-by-field verification of IMRT plans. This work was motivated by an initial study comparing two IMRT treatment planning systems. The clinical functionality of BrainLAB's BrainSCAN and Varian's Helios IMRT treatment planning systems were compared in terms of implementation and commissioning, dose optimization, and plan assessment. Implementation and commissioning revealed differences in the beam data required to characterize the beam prior to use with the BrainSCAN system requiring higher resolution data compared to Helios. This difference was found to impact on the ability of the systems to accurately calculate dose for highly modulated fields, with BrainSCAN being more successful than Helios. The dose optimization and plan assessment comparisons revealed that while both systems use considerably different optimization algorithms and user-control interfaces, they are both capable of producing substantially equivalent dose plans. The extensive use of dosimetric verification techniques in the IMRT treatment planning comparison study motivated the development and implementation of a novel IMRT dosimetric verification system. The system consists of a water-filled phantom with a tissue equivalent plastic scintillator sheet built into the top surface. Scintillation light is reflected by a plastic mirror within the phantom towards a viewing window where it is captured using a CCD camera. Optical photon spread is removed using a micro-louvre optical collimator and by deconvolving a glare kernel from the raw images. Characterization of this new dosimetric verification system indicates excellent dose response and spatial linearity, high spatial resolution, and good signal uniformity and reproducibility. Dosimetric results from square fields, dynamic wedged fields, and a 7-field head and neck IMRT treatment plan indicate good agreement with film dosimetry distributions. Efficiency analysis of the system reveals a 50% reduction in time requirements for field-by-field verification of a 7-field IMRT treatment plan compared to film dosimetry.

  11. Radio-frequency low-coherence interferometry.

    PubMed

    Fernández-Pousa, Carlos R; Mora, José; Maestre, Haroldo; Corral, Pablo

    2014-06-15

    A method for retrieving low-coherence interferograms, based on the use of a microwave photonics filter, is proposed and demonstrated. The method is equivalent to the double-interferometer technique, with the scanning interferometer replaced by an analog fiber-optic link and the visibility recorded as the amplitude of its radio-frequency (RF) response. As a low-coherence interferometry system, it shows a decrease of resolution induced by the fiber's third-order dispersion (β3). As a displacement sensor, it provides highly linear and slope-scalable readouts of the interferometer's optical path difference in terms of RF, even in the presence of third-order dispersion. In a proof-of-concept experiment, we demonstrate 20-μm displacement readouts using C-band EDFA sources and standard single-mode fiber.

  12. Mesh Deformation Based on Fully Stressed Design: The Method and Two-Dimensional Examples

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan

    2007-01-01

    Mesh deformation in response to redefined boundary geometry is a frequently encountered task in shape optimization and analysis of fluid-structure interaction. We propose a simple and concise method for deforming meshes defined with three-node triangular or four-node tetrahedral elements. The mesh deformation method is suitable for large boundary movement. The approach requires two consecutive linear elastic finite-element analyses of an isotropic continuum using a prescribed displacement at the mesh boundaries. The first analysis is performed with a homogeneous elastic property and the second with an inhomogeneous elastic property. The fully stressed design is employed with a vanishing Poisson's ratio and a proposed form of equivalent strain (modified Tresca equivalent strain) to calculate, from the strain result of the first analysis, the element-specific Young's modulus for the second analysis. The theoretical aspect of the proposed method, its convenient numerical implementation using a typical linear elastic finite-element code in conjunction with very minor extra coding for data processing, and results for examples of large deformation of two-dimensional meshes are presented in this paper. Key words: mesh deformation, shape optimization, fluid-structure interaction, fully stressed design, finite-element analysis, linear elasticity, strain failure, equivalent strain, Tresca failure criterion
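    The modulus-update step can be sketched compactly. Under the assumption that the fully stressed design scales each element's Young's modulus in proportion to its first-pass equivalent strain (the exponent and reference strain here are illustrative, not the paper's exact prescription), the second-analysis stiffness field follows directly:

```python
# Hedged sketch of the fully-stressed-design update: after the first
# homogeneous analysis, each element's Young's modulus is rescaled by its
# equivalent strain so highly strained elements stiffen in the second analysis.
# The exponent p and reference strain are illustrative assumptions.
import numpy as np

def fsd_modulus_update(eps_eq, E0=1.0, p=1.0):
    """Element-wise modulus for the second analysis from first-pass strains."""
    eps_ref = eps_eq.mean()                  # normalize to the mean strain
    return E0 * (eps_eq / eps_ref) ** p      # fully-stressed-design scaling

eps_eq = np.array([1e-4, 5e-4, 2e-3, 8e-3])  # modified-Tresca equivalent strains
print(fsd_modulus_update(eps_eq))
```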

  13. Regularized linearization for quantum nonlinear optical cavities: application to degenerate optical parametric oscillators.

    PubMed

    Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao

    2014-10-06

    Nonlinear optical cavities are crucial both in classical and quantum optics; in particular, nowadays optical parametric oscillators are one of the most versatile and tunable sources of coherent light, as well as the sources of the highest quality quantum-correlated light in the continuous variable regime. Being nonlinear systems, they can be driven through critical points in which a solution ceases to exist in favour of a new one, and it is close to these points where quantum correlations are the strongest. The simplest description of such systems consists in writing the quantum fields as the classical part plus some quantum fluctuations, linearizing then the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it provides unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated especially regarding the evaluation of two-time correlators, which are of major importance to compute observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection to the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems which have access to non-Gaussian states.

  14. Research of the impact of coupling between unit cells on performance of linear-to-circular polarization conversion metamaterial with half transmission and half reflection

    NASA Astrophysics Data System (ADS)

    Guo, Mengchao; Zhou, Kan; Wang, Xiaokun; Zhuang, Haiyan; Tang, Dongming; Zhang, Baoshan; Yang, Yi

    2018-04-01

    In this paper, the impact of coupling between unit cells on the performance of a linear-to-circular polarization conversion metamaterial with half transmission and half reflection is analyzed by changing the distance between the unit cells. An equivalent electrical circuit model is then built to explain these results. The simulated results show that, when the distance between the unit cells is 23 mm, this metamaterial converts half of the incident linearly-polarized wave into a reflected left-hand circularly-polarized wave and converts the other half into a transmitted left-hand circularly-polarized wave at 4.4 GHz; when the distance is 28 mm, this metamaterial reflects all of the incident linearly-polarized wave at 4.4 GHz; and when the distance is 32 mm, this metamaterial converts half of the incident linearly-polarized wave into a reflected right-hand circularly-polarized wave and converts the other half into a transmitted right-hand circularly-polarized wave at 4.4 GHz. Tunability is thus realized successfully. The analysis shows that changes in the coupling between unit cells lead to changes in the performance of this metamaterial. The coupling between the unit cells is therefore taken into account when building the equivalent electrical circuit model. The resulting equivalent electrical circuit model explains the simulated results well, which confirms its validity, and it can also aid the design of tunable polarization conversion metamaterials.

  15. Singular optimal control and the identically non-regular problem in the calculus of variations

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1985-01-01

    A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.

  16. A Complete Multimode Equivalent-Circuit Theory for Electrical Design

    PubMed Central

    Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.

    1997-01-01

    This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports. PMID:27805153

  17. Section Preequating under the Equivalent Groups Design without IRT

    ERIC Educational Resources Information Center

    Guo, Hongwen; Puhan, Gautam

    2014-01-01

    In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…

  18. Investigation of oscillating cascade aerodynamics by an experimental influence coefficient technique

    NASA Technical Reports Server (NTRS)

    Buffum, Daniel H.; Fleeter, Sanford

    1988-01-01

    Fundamental experiments are performed in the NASA Lewis Transonic Oscillating Cascade Facility to investigate the torsion mode unsteady aerodynamics of a biconvex airfoil cascade at realistic values of the reduced frequency for all interblade phase angles at a specified mean flow condition. In particular, an unsteady aerodynamic influence coefficient technique is developed and utilized in which only one airfoil in the cascade is oscillated at a time and the resulting airfoil surface unsteady pressure distribution is measured on one dynamically instrumented airfoil. The unsteady aerodynamics of an equivalent cascade with all airfoils oscillating at a specified interblade phase angle are then determined through a vector summation of these data. These influence-coefficient-determined oscillating cascade data are correlated with data obtained in this cascade with all airfoils oscillating at several interblade phase angle values. The influence coefficients are then utilized to determine the unsteady aerodynamics of the cascade for all interblade phase angles, with these unique data subsequently correlated with predictions from a linearized unsteady cascade model.
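    The vector summation at the heart of the influence coefficient technique can be sketched in a few lines. Here the complex coefficients C_n for blade offsets n are invented values; in the experiment they come from the measured unsteady pressures with a single airfoil oscillating.

```python
# Hedged sketch of the influence-coefficient summation: unsteady pressures
# measured with one airfoil oscillating (complex coefficients C_n for blade
# offsets n) are combined to synthesize a cascade with all airfoils moving at
# interblade phase angle sigma. Coefficient values are illustrative only.
import numpy as np

def cascade_pressure(C, sigma):
    """p(sigma) = sum_n C_n * exp(i * n * sigma) over measured blade offsets n."""
    n = np.arange(-(len(C) // 2), len(C) // 2 + 1)   # e.g. offsets -2..+2
    return np.sum(C * np.exp(1j * n * sigma))

C = np.array([0.02 - 0.01j, 0.10 + 0.05j, 1.00 + 0.30j, 0.12 - 0.04j, 0.03 + 0.01j])
for sigma_deg in (0.0, 90.0, 180.0):
    p = cascade_pressure(C, np.deg2rad(sigma_deg))
    print(f"sigma = {sigma_deg:5.1f} deg: |p| = {abs(p):.3f}")
```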

  19. Label-Free Aptasensor for Lysozyme Detection Using Electrochemical Impedance Spectroscopy.

    PubMed

    Ortiz-Aguayo, Dionisia; Del Valle, Manel

    2018-01-26

    This research develops a label-free aptamer biosensor (aptasensor) based on graphite-epoxy composite electrodes (GECs) for the detection of lysozyme protein using the Electrochemical Impedance Spectroscopy (EIS) technique. The chosen immobilization technique was based on covalent bonding using carbodiimide chemistry; for this purpose, carboxylic moieties were first generated on the graphite by electrochemical grafting. The detection was performed using [Fe(CN)₆]³⁻/[Fe(CN)₆]⁴⁻ as a redox probe. After recording the frequency response, values were fitted to an electric model using the principle of equivalent circuits. The aptasensor showed a linear response up to 5 µM for lysozyme and a limit of detection of 1.67 µM. The sensitivity of the established method was 0.090 µM⁻¹ in relative charge transfer resistance values. The interference response of main proteins, such as bovine serum albumin and cytochrome c, has also been characterized. To verify the performance of the developed aptasensor, it was finally applied to wine analysis.
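    The equivalent-circuit fitting step can be illustrated with a simple Randles circuit (series solution resistance, charge-transfer resistance in parallel with a double-layer capacitance); the actual circuit used in the paper may contain additional elements, and all parameter values below are synthetic.

```python
# Hedged sketch: fitting an EIS spectrum to a Randles equivalent circuit
# Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl). Parameter values are invented for
# illustration; the paper's circuit model may differ.
import numpy as np
from scipy.optimize import least_squares

def randles_z(params, omega):
    rs, rct, cdl = params
    return rs + rct / (1.0 + 1j * omega * rct * cdl)

def residuals(params, omega, z_meas):
    z = randles_z(params, omega)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

omega = 2 * np.pi * np.logspace(-1, 5, 60)
z_true = randles_z([120.0, 3500.0, 2e-6], omega)        # synthetic spectrum
fit = least_squares(residuals, x0=[50.0, 1000.0, 1e-6],
                    args=(omega, z_true), bounds=(0, np.inf))
print("Rs, Rct, Cdl =", fit.x)
```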

  20. Label-Free Aptasensor for Lysozyme Detection Using Electrochemical Impedance Spectroscopy

    PubMed Central

    2018-01-01

    This research develops a label-free aptamer biosensor (aptasensor) based on graphite-epoxy composite electrodes (GECs) for the detection of lysozyme protein using the Electrochemical Impedance Spectroscopy (EIS) technique. The chosen immobilization technique was based on covalent bonding using carbodiimide chemistry; for this purpose, carboxylic moieties were first generated on the graphite by electrochemical grafting. The detection was performed using [Fe(CN)₆]³⁻/[Fe(CN)₆]⁴⁻ as a redox probe. After recording the frequency response, values were fitted to an electric model using the principle of equivalent circuits. The aptasensor showed a linear response up to 5 µM for lysozyme and a limit of detection of 1.67 µM. The sensitivity of the established method was 0.090 µM⁻¹ in relative charge transfer resistance values. The interference response of main proteins, such as bovine serum albumin and cytochrome c, has also been characterized. To verify the performance of the developed aptasensor, it was finally applied to wine analysis. PMID:29373502

  1. Translational Vestibulo-Ocular Reflex and Motion Perception During Interaural Linear Acceleration: Comparison of Different Motion Paradigms

    NASA Technical Reports Server (NTRS)

    Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.

    2011-01-01

    The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation, which involve actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which do not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more nearly in phase with the acceleration stimulus. While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each response.

  2. Design and experimental verification of an equivalent forebody to produce disturbances equivalent to those of a forebody with flowing inlets

    NASA Technical Reports Server (NTRS)

    Haynes, Davy A.; Miller, David S.; Klein, John R.; Louie, Check M.

    1988-01-01

    A method by which a simple equivalent faired body can be designed to replace a more complex body with flowing inlets has been demonstrated for supersonic flow. An analytically defined, geometrically simple faired inlet forebody has been designed using a linear potential code to generate flow perturbations equivalent to those produced by a much more complex forebody with inlets. An equivalent forebody wind-tunnel model was fabricated and a test was conducted in NASA Langley Research Center's Unitary Plan Wind Tunnel. The test Mach number range was 1.60 to 2.16 for angles of attack of -4 to 16 deg. Test results indicate that, for the purposes considered here, the equivalent forebody simulates the original flowfield disturbances to an acceptable degree of accuracy.

  3. Observed Score Linear Equating with Covariates

    ERIC Educational Resources Information Center

    Branberg, Kenny; Wiberg, Marie

    2011-01-01

    This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…

  4. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
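    The distinction between the two projections can be made concrete on a toy linear ODE: a Galerkin ROM projects the time-continuous residual onto the basis and then discretizes, while a discrete-optimal (LSPG-type) ROM minimizes the backward-Euler residual directly. The system, basis, and sizes below are arbitrary test assumptions.

```python
# Hedged sketch contrasting the two projections on dx/dt = A x with backward
# Euler: Galerkin projects the time-continuous residual onto the basis V;
# the discrete-optimal ROM minimizes the time-discrete residual instead.
import numpy as np

rng = np.random.default_rng(0)
n, k, dt, steps = 50, 5, 0.01, 200
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable toy operator
V, _ = np.linalg.qr(rng.standard_normal((n, k)))     # orthonormal reduced basis

x_gal = V.T @ np.ones(n)                # reduced initial condition (Galerkin)
x_lspg = V.T @ np.ones(n)               # reduced initial condition (discrete-optimal)
Ar = V.T @ A @ V                        # Galerkin reduced operator
M = (np.eye(n) - dt * A) @ V            # discrete residual operator
for _ in range(steps):
    # Galerkin: backward Euler on the reduced ODE dxr/dt = Ar xr
    x_gal = np.linalg.solve(np.eye(k) - dt * Ar, x_gal)
    # Discrete-optimal: xr minimizing ||M xr - V x_prev|| (least squares)
    x_lspg = np.linalg.lstsq(M, V @ x_lspg, rcond=None)[0]

print("Galerkin vs discrete-optimal ROM difference:",
      np.linalg.norm(V @ x_gal - V @ x_lspg))
```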

  5. Bisimulation equivalence of differential-algebraic systems

    NASA Astrophysics Data System (ADS)

    Megawati, Noorma Yulia; Schaft, Arjan van der

    2018-01-01

    In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.

  6. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

    In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.

  7. Flatness-based control and Kalman filtering for a continuous-time macroeconomic model

    NASA Astrophysics Data System (ADS)

    Rigatos, G.; Siano, P.; Ghosh, T.; Busawon, K.; Binns, R.

    2017-11-01

    The article proposes flatness-based control for a nonlinear macro-economic model of the UK economy. The differential flatness properties of the model are proven. This makes it possible to introduce a transformation (diffeomorphism) of the system's state variables and to express the state-space description of the model in the linear canonical (Brunovsky) form, in which both the feedback control and the state estimation problem can be solved. For the linearized equivalent model of the macroeconomic system, stabilizing feedback control can be achieved using pole placement methods. Moreover, to implement stabilizing feedback control of the system by measuring only a subset of its state vector elements, the Derivative-free nonlinear Kalman Filter is used. This consists of the Kalman Filter recursion applied to the linearized equivalent model of the financial system and of an inverse transformation that is based again on differential flatness theory. The asymptotic stability properties of the control scheme are confirmed.

  8. Robust synthetic biology design: stochastic game theory approach.

    PubMed

    Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching

    2009-07-15

    Synthetic biology aims to engineer artificial biological systems to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of extra-cellular environments on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic of synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi-Sugeno (T-S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.

  9. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely light weight, and subsequently, very flexible structures. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photo-voltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads, during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components, and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photo-voltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, the Craig-Bampton superelement approach, and modal ordering schemes were utilized. The grounding problems associated with the geometric stiffness are emphasized.

  10. Dynamic analysis of space-related linear and non-linear structures

    NASA Technical Reports Server (NTRS)

    Bosela, Paul A.; Shaker, Francis J.; Fertis, Demeter G.

    1990-01-01

    In order to be cost effective, space structures must be extremely light weight, and subsequently, very flexible structures. The power system for Space Station Freedom is such a structure. Each array consists of a deployable truss mast and a split blanket of photovoltaic solar collectors. The solar arrays are deployed in orbit, and the blanket is stretched into position as the mast is extended. Geometric stiffness due to the preload makes this an interesting non-linear problem. The space station will be subjected to various dynamic loads, during shuttle docking, solar tracking, attitude adjustment, etc. Accurate prediction of the natural frequencies and mode shapes of the space station components, including the solar arrays, is critical for determining the structural adequacy of the components, and for designing a dynamic control system. The process used in developing and verifying the finite element dynamic model of the photo-voltaic arrays is documented. Various problems were identified, such as grounding effects due to geometric stiffness, large displacement effects, and pseudo-stiffness (grounding) due to lack of required rigid body modes. Analysis techniques, such as development of rigorous solutions using continuum mechanics, finite element solution sequence altering, equivalent systems using a curvature basis, the Craig-Bampton superelement approach, and modal ordering schemes were utilized. The grounding problems associated with the geometric stiffness are emphasized.

  11. Analytic solutions to modelling exponential and harmonic functions using Chebyshev polynomials: fitting frequency-domain lifetime images with photobleaching.

    PubMed

    Malachowski, George C; Clegg, Robert M; Redford, Glen I

    2007-12-01

    A novel approach is introduced for modelling linear dynamic systems composed of exponentials and harmonics. The method improves the speed of current numerical techniques up to 1000-fold for problems that have solutions of multiple exponentials plus harmonics and decaying components. Such signals are common in fluorescence microscopy experiments. Selective constraints of the parameters being fitted are allowed. This method, using discrete Chebyshev transforms, will correctly fit large volumes of data using a noniterative, single-pass routine that is fast enough to analyse images in real time. The method is applied to fluorescence lifetime imaging data in the frequency domain with varying degrees of photobleaching over the time of total data acquisition. The accuracy of the Chebyshev method is compared to a simple rapid discrete Fourier transform (equivalent to least-squares fitting) that does not take the photobleaching into account. The method can be extended to other linear systems composed of different functions. Simulations are performed and applications are described showing the utility of the method, in particular in the area of fluorescence microscopy.
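    A minimal stand-in for the Chebyshev-basis fitting idea, using numpy's Chebyshev routines rather than the authors' discrete Chebyshev transform: a decaying harmonic is fitted by a single-pass linear least-squares solve in the Chebyshev basis. Degree and noise level are illustrative.

```python
# Hedged sketch: noniterative, single-pass linear least-squares fitting in a
# Chebyshev basis, the kind of operation the abstract describes. numpy's
# chebfit stands in for the authors' discrete Chebyshev transform.
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 500)                          # normalized time axis
signal = np.exp(-1.5 * (t + 1)) * np.cos(8 * np.pi * t)  # decaying harmonic
signal += 0.01 * np.random.default_rng(1).standard_normal(t.size)

coef = C.chebfit(t, signal, deg=40)       # single-pass linear fit
recon = C.chebval(t, coef)
print("max reconstruction error:", np.abs(recon - signal).max())
```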

  12. Application of Fast Multipole Methods to the NASA Fast Scattering Code

    NASA Technical Reports Server (NTRS)

    Dunn, Mark H.; Tinetti, Ana F.

    2008-01-01

    The NASA Fast Scattering Code (FSC) is a versatile noise prediction program designed to conduct aeroacoustic noise reduction studies. The equivalent source method is used to solve an exterior Helmholtz boundary value problem with an impedance type boundary condition. The solution process in FSC v2.0 requires direct manipulation of a large, dense system of linear equations, limiting the applicability of the code to small scales and/or moderate excitation frequencies. Recent advances in the use of Fast Multipole Methods (FMM) for solving scattering problems, coupled with sparse linear algebra techniques, suggest that a substantial reduction in computer resource utilization over conventional solution approaches can be obtained. Implementation of the single level FMM (SLFMM) and a variant of the Conjugate Gradient Method (CGM) into the FSC is discussed in this paper. The culmination of this effort, FSC v3.0, was used to generate solutions for three configurations of interest. Benchmarking against previously obtained simulations indicate that a twenty-fold reduction in computational memory and up to a four-fold reduction in computer time have been achieved on a single processor.
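    For reference, a bare-bones Conjugate Gradient iteration of the kind paired with fast multipole matrix-vector products is sketched below on a dense symmetric positive-definite test system; the FSC's actual system and its CGM variant differ in detail.

```python
# Hedged sketch of a Conjugate Gradient solver; in an FMM-accelerated code the
# matvec callback would be the fast multipole product rather than a dense one.
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
B = rng.standard_normal((100, 100))
A = B @ B.T + 100 * np.eye(100)          # SPD test matrix
b = rng.standard_normal(100)
x = conjugate_gradient(lambda v: A @ v, b)
print("residual:", np.linalg.norm(A @ x - b))
```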

  13. Wing box transonic-flutter suppression using piezoelectric self-sensing actuators attached to skin

    NASA Astrophysics Data System (ADS)

    Otiefy, R. A. H.; Negm, H. M.

    2010-12-01

    The main objective of this research is to study the capability of piezoelectric (PZT) self-sensing actuators to suppress the transonic wing box flutter, which is a flow-structure interaction phenomenon. The unsteady general frequency modified transonic small disturbance (TSD) equation is used to model the transonic flow about the wing. The wing box structure and piezoelectric actuators are modeled using the equivalent plate method, which is based on the first order shear deformation plate theory (FSDPT). The piezoelectric actuators are bonded to the skin. The optimal electromechanical coupling conditions between the piezoelectric actuators and the wing are collected from previous work. Three main different control strategies, a linear quadratic Gaussian (LQG) which combines the linear quadratic regulator (LQR) with the Kalman filter estimator (KFE), an optimal static output feedback (SOF), and a classic feedback controller (CFC), are studied and compared. The optimum actuator and sensor locations are determined using the norm of feedback control gains (NFCG) and norm of Kalman filter estimator gains (NKFEG) respectively. A genetic algorithm (GA) optimization technique is used to calculate the controller and estimator parameters to achieve a target response.

  14. Equivalence between a generalized dendritic network and a set of one-dimensional networks as a ground of linear dynamics.

    PubMed

    Koda, Shin-ichi

    2015-05-28

    It has been shown by some existing studies that some linear dynamical systems defined on a dendritic network are equivalent to those defined on a set of one-dimensional networks in special cases and this transformation to the simple picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable system with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In the generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. The achievement of this paper makes it easier to utilize the LC decomposition in various cases. This may lead to a further understanding of the relation between structure and functions of dendrimers in future studies.

  15. Equivalent Linearization Analysis of Geometrically Nonlinear Random Vibrations Using Commercial Finite Element Codes

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2002-01-01

    Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
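    The flavor of force-error-minimizing equivalent linearization can be conveyed with the classical single-degree-of-freedom Duffing oscillator under white noise, where the Gaussian-closure equivalent stiffness is iterated against the linear-response variance. Parameter values are illustrative; the paper's multiple degree-of-freedom finite element formulation is far more general.

```python
# Hedged sketch of force-error-minimizing equivalent linearization for a
# Duffing oscillator m x'' + c x' + k x + eps x^3 = f(t) under white noise of
# two-sided PSD S0: the equivalent stiffness k_eq = k + 3*eps*E[x^2] (Gaussian
# closure) is iterated with the linear stationary variance E[x^2] = pi*S0/(c*k_eq).
import numpy as np

m, c, k, eps, S0 = 1.0, 0.05, 1.0, 0.5, 1e-3   # illustrative parameters

k_eq = k
for _ in range(100):
    var_x = np.pi * S0 / (c * k_eq)     # stationary variance of the linear system
    k_new = k + 3.0 * eps * var_x       # E[(k x + eps x^3) x] / E[x^2], Gaussian x
    if abs(k_new - k_eq) < 1e-12:
        break
    k_eq = k_new

print(f"equivalent stiffness k_eq = {k_eq:.6f}, "
      f"rms displacement = {np.sqrt(var_x):.4f}")
```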

  16. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  17. Can academic radiology departments become more efficient and cost less?

    PubMed

    Seltzer, S E; Saini, S; Bramson, R T; Kelly, P; Levine, L; Chiango, B F; Jordan, P; Seth, A; Elton, J; Elrick, J; Rosenthal, D; Holman, B L; Thrall, J H

    1998-11-01

    To determine how successful two large academic radiology departments have been in responding to market-driven pressures to reduce costs and improve productivity by downsizing their technical and support staffs while maintaining or increasing volume. A longitudinal study was performed in which benchmarking techniques were used to assess the changes in cost and productivity of the two departments for 5 years (fiscal years 1992-1996). Cost per relative value unit and relative value units per full-time equivalent employee were tracked. Substantial cost reduction and productivity enhancement were realized as linear improvements in two key metrics, namely, cost per relative value unit (decline of 19.0% [decline of $7.60 on a base year cost of $40.00] to 28.8% [$12.18 of $42.21]; P ≤ .001) and relative value units per full-time equivalent employee (increase of 46.0% [increase of 759.55 units over a base year productivity of 1,651.45 units] to 55.8% [968.28 of 1,733.97 units]; P < .001), during the 5 years of study. Academic radiology departments have proved that they can "do more with less" over a sustained period.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allow the utilization of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss-Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained.

  19. Performance assessment of a compressive sensing single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2017-04-01

    Conventional sensors measure the light incident at each pixel in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process to recover the image. CS has the potential to acquire imagery with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations, reconstruction accuracy, and reconstruction speed to meet operational requirements. Performance modeling of CS imagers is challenging due to the complexity and nonlinearity of the system and reconstruction algorithm. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of a target set of two handheld objects was collected using a shortwave infrared single-pixel CS camera for various ranges and numbers of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques are discussed.
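    A minimal sketch of single-pixel compressive sensing recovery, assuming random binary measurement patterns and an iterative soft-thresholding (ISTA) reconstruction; the paper does not specify its measurement ensemble or solver here, so all sizes and parameters below are toy assumptions.

```python
# Hedged sketch of compressive sensing recovery: y = Phi @ x with random +/-1
# measurement patterns Phi, solved for a sparse x by iterative soft
# thresholding (ISTA). Sizes, sparsity, and lambda are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 96                          # scene pixels, compressive measurements
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x_true

L = np.linalg.norm(Phi, 2) ** 2         # Lipschitz constant of the gradient
lam, x = 0.01, np.zeros(n)
for _ in range(500):                    # ISTA: gradient step + soft threshold
    g = x - (Phi.T @ (Phi @ x - y)) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```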

  20. The Capacity Gain of Orbital Angular Momentum Based Multiple-Input-Multiple-Output System

    PubMed Central

    Zhang, Zhuofan; Zheng, Shilie; Chen, Yiling; Jin, Xiaofeng; Chi, Hao; Zhang, Xianmin

    2016-01-01

    Wireless communication using electromagnetic waves carrying orbital angular momentum (OAM) has attracted increasing interest in recent years, and its potential to increase channel capacity has been explored widely. In this paper, we compare the technique of using a uniform linear array consisting of circular traveling-wave OAM antennas for multiplexing with the conventional multiple-input-multiple-output (MIMO) communication method, and numerical results show that the OAM based MIMO system can increase channel capacity when the communication distance is long enough. An equivalent model is proposed to illustrate that the OAM multiplexing system is equivalent to a conventional MIMO system with a larger element spacing, which means OAM waves can decrease the spatial correlation of the MIMO channel. In addition, the effects of some system parameters, such as OAM state interval and element spacing, on the capacity advantage of OAM based MIMO are also investigated. Our results reveal that OAM waves are complementary to the MIMO method. OAM wave multiplexing is suitable for long-distance line-of-sight (LoS) communications or communications in open areas where the multi-path effect is weak, and can be used in massive MIMO systems as well. PMID:27146453
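    The capacity comparison rests on the standard equal-power MIMO formula, sketched below for an i.i.d. Rayleigh channel as a simplification; the paper's OAM line-of-sight channel matrix has a specific deterministic structure not modeled here.

```python
# Hedged sketch of the standard equal-power MIMO capacity formula
# C = log2 det(I + (SNR/Nt) H H^H); the Rayleigh channel is an illustrative
# stand-in for the paper's structured OAM line-of-sight channel.
import numpy as np

def mimo_capacity(H, snr):
    """Capacity in bit/s/Hz for linear SNR and equal power allocation."""
    nt = H.shape[1]
    G = np.eye(H.shape[0]) + (snr / nt) * (H @ H.conj().T)
    _, logdet = np.linalg.slogdet(G)    # G is Hermitian positive definite
    return logdet / np.log(2)

rng = np.random.default_rng(4)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
print(f"capacity at 10 dB SNR: {mimo_capacity(H, 10.0):.2f} bit/s/Hz")
```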

  1. Mass Energy Equivalence Formula Must Include Rotational and Vibrational Kinetic Energies as Well As Potential Energies

    NASA Astrophysics Data System (ADS)

    Brekke, Stewart

    2010-11-01

    Originally Einstein proposed the mass-energy equivalence at low speeds as E = mc² + ½mv². However, a mass may also be rotating and vibrating as well as moving linearly. Although small, these kinetic energies must be included in formulating a true mathematical statement of the mass-energy equivalence. Also, gravitational, electric, and magnetic potential energies must be included in the mass-energy equivalence statement. While the kinetic energy terms may differ in each physical situation, such as the types of vibrations and rotations, the basic equation for the mass-energy equivalence is therefore E = m₀c² + ½m₀v² + ½Iω² + ½kx² + W_G + W_E + W_M.

  2. Quality factor and dose equivalent investigations aboard the Soviet Space Station Mir

    NASA Astrophysics Data System (ADS)

    Bouisset, P.; Nguyen, V. D.; Parmentier, N.; Akatov, Ia. A.; Arkhangel'Skii, V. V.; Vorozhtsov, A. S.; Petrov, V. M.; Kovalev, E. E.; Siegrist, M.

    1992-07-01

    Since December 1988, the date of the French-Soviet joint space mission 'ARAGATZ', the CIRCE device has recorded dose equivalent and quality factor values inside the Mir station (380-410 km altitude, 51.5° inclination). After the initial gas filling two years ago, the low pressure tissue equivalent proportional counter is still in good working condition. Results from three periods are presented. The average dose equivalent rates measured are respectively 0.6, 0.8 and 0.6 mSv/day with a quality factor equal to 1.9. Detailed measurements show the increase of the dose equivalent rates through the South Atlantic Anomaly (SAA) and near the polar horns. The real-time determination of the quality factors makes it possible to identify high linear energy transfer events, with quality factors in the range 10-20.

  3. Electric field computation and measurements in the electroporation of inhomogeneous samples

    NASA Astrophysics Data System (ADS)

    Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta

    2017-12-01

    In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is enhanced by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, which is called electroporation, exploits the conductivity of the tissues; however, the tumor tissue can contain inhomogeneous areas that cause a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations considering a non-linear conductivity-field relationship are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model in view of identifying the equivalent resistance between pairs of electrodes.

  4. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, K.; Jain, A.

    1989-01-01

    A spatial operator algebra for manipulator modeling, control, and trajectory design is discussed, with emphasis on its analytical formulation and implementation in the Ada programming language. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of the manipulator. Inversion is obtained using techniques of recursive filtering and smoothing. The operator algebra provides a high-level framework for describing the dynamic and kinematic behavior of a manipulator as well as control and trajectory design algorithms. Implementable recursive algorithms can be immediately derived from the abstract operator expressions by inspection, thus greatly simplifying the transition from an abstract problem formulation and solution to the detailed mechanization of a specific algorithm.

  5. Minimally modified theories of gravity: a playground for testing the uniqueness of general relativity

    NASA Astrophysics Data System (ADS)

    Carballo-Rubio, Raúl; Di Filippo, Francesco; Liberati, Stefano

    2018-06-01

    In a recent paper [1], a new class of gravitational theories with two local degrees of freedom was introduced. The existence of these theories apparently challenges the distinctive role of general relativity as the unique non-linear theory of massless spin-2 particles. Here we perform a comprehensive analysis of these theories with the aim of (i) understanding whether or not they are actually equivalent to general relativity, and (ii) finding the root of the difference in case they are not. We have found that a broad set of seemingly different theories actually pass all the possible tests of equivalence to general relativity (in vacuum) that we were able to devise, including the analysis of scattering amplitudes using on-shell techniques. These results are complemented with the observation that the only examples which are manifestly not equivalent to general relativity either do not contain gravitons in their spectrum, or are not guaranteed to include only two local degrees of freedom once radiative corrections are taken into account. Coupling to matter is also considered: we show that coupling these theories to matter in a consistent way is not as straightforward as one might expect. Minimal coupling, as well as the most straightforward non-minimal couplings, cannot be used. Therefore, before being able to address any issues in the presence of matter, it would be necessary to find a consistent (and in any case rather peculiar) coupling scheme.

  6. Observations on personnel dosimetry for radiotherapy personnel operating high-energy LINACs.

    PubMed

    Glasgow, G P; Eichling, J; Yoder, R C

    1986-06-01

    A series of measurements was conducted to determine the cause of a sudden increase in personnel radiation exposures. One objective of the measurements was to determine if the increases were related to changing from film dosimeters exchanged monthly to TLD-100 dosimeters exchanged quarterly. While small increases were observed in the dose equivalents of most employees, the dose equivalents of personnel operating medical electron linear accelerators with energies greater than 20 MV doubled coincident with the change in the personnel dosimeter program. The measurements indicated a small thermal neutron radiation component around the accelerators operated by these personnel. This component caused the doses measured with the TLD-100 dosimeters to be overstated. Therefore, the increase in these personnel dose equivalents was not due to changes in work habits or radiation environments. Either film or TLD-700 dosimeters would be suitable for personnel monitoring around high-energy linear accelerators. The final choice would depend on economics and personal preference.

  7. On the equivalence of case-crossover and time series methods in environmental epidemiology.

    PubMed

    Lu, Yun; Zeger, Scott L

    2007-04-01

    The case-crossover design was introduced in epidemiology 15 years ago as a method for studying the effects of a risk factor on a health event using only cases. The idea is to compare a case's exposure immediately prior to or during the case-defining event with that same person's exposure at otherwise similar "reference" times. An alternative approach to the analysis of daily exposure and case-only data is time series analysis. Here, log-linear regression models express the expected total number of events on each day as a function of the exposure level and potential confounding variables. In time series analyses of air pollution, smooth functions of time and weather are the main confounders. Time series and case-crossover methods are often viewed as competing methods. In this paper, we show that case-crossover analysis using conditional logistic regression is a special case of time series analysis when there is a common exposure, as in air pollution studies. This equivalence provides computational convenience for case-crossover analyses and a better understanding of time series models. Time series log-linear regression accounts for overdispersion of the Poisson variance, while case-crossover analyses typically do not. This equivalence also permits model checking for case-crossover data using standard log-linear model diagnostics.
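    The time-series side of this equivalence can be sketched with a log-linear Poisson regression of simulated daily counts on a shared exposure plus smooth seasonal terms; the model form follows the abstract's description, while the data and coefficients are invented.

```python
# Hedged sketch of the time-series approach: a log-linear Poisson regression of
# daily event counts on a common exposure with seasonal confounder terms.
# Data are simulated; statsmodels' GLM is used as a generic fitter.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
days = 365
t = np.arange(days)
exposure = 10 + 3 * np.sin(2 * np.pi * t / 365) + rng.standard_normal(days)
log_mu = 1.0 + 0.03 * exposure + 0.2 * np.cos(2 * np.pi * t / 365)  # seasonality
counts = rng.poisson(np.exp(log_mu))

X = sm.add_constant(np.column_stack([exposure,
                                     np.sin(2 * np.pi * t / 365),
                                     np.cos(2 * np.pi * t / 365)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print("estimated log-relative-rate per unit exposure:", fit.params[1])
```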

  8. Slope stability analysis using limit equilibrium method in nonlinear criterion.

    PubMed

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted to the description of rock mass. To overcome its shortcomings, this paper combined the Hoek-Brown criterion and the limit equilibrium method and proposed an equation for calculating the safety factor of slope with the limit equilibrium method in the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of slope, which reveals that there is a linear relation between equivalent cohesive strength and weakening factor D. However, there are nonlinear relations between equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact rock parameter mi. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
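    For readers wanting to reproduce the equivalent-parameter step, the sketch below follows the widely cited Hoek-Brown (2002) fitting equations for equivalent cohesion and friction angle; the exact expressions and parameter ranges should be checked against the paper, and the input values are illustrative.

```python
# Hedged sketch: equivalent Mohr-Coulomb cohesion and friction angle from
# Hoek-Brown parameters, following the Hoek-Carranza-Torres-Corkum (2002)
# fitting equations. Input values are illustrative, not from the paper.
import numpy as np

def hoek_brown_equivalent_mc(gsi, mi, sigci, D, sig3max):
    mb = mi * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    s3n = sig3max / sigci
    q = 6.0 * a * mb * (s + mb * s3n) ** (a - 1.0)
    phi = np.arcsin(q / (2.0 * (1.0 + a) * (2.0 + a) + q))
    c = (sigci * ((1.0 + 2.0 * a) * s + (1.0 - a) * mb * s3n)
         * (s + mb * s3n) ** (a - 1.0)) / (
         (1.0 + a) * (2.0 + a) * np.sqrt(1.0 + q / ((1.0 + a) * (2.0 + a))))
    return c, np.degrees(phi)

c, phi = hoek_brown_equivalent_mc(gsi=45.0, mi=10.0, sigci=50.0, D=0.3, sig3max=5.0)
print(f"equivalent cohesion = {c:.3f} MPa, friction angle = {phi:.2f} deg")
```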

  9. Slope Stability Analysis Using Limit Equilibrium Method in Nonlinear Criterion

    PubMed Central

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted to the description of rock mass. To overcome its shortcomings, this paper combined the Hoek-Brown criterion and the limit equilibrium method and proposed an equation for calculating the safety factor of slope with the limit equilibrium method in the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of slope, which reveals that there is a linear relation between equivalent cohesive strength and weakening factor D. However, there are nonlinear relations between equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact rock parameter mi. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases. PMID:25147838

  10. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    NASA Astrophysics Data System (ADS)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation and demonstrates that volcanic eruption tremor can exhibit other scalings even during the same eruption.

  11. Bounded Linear Stability Margin Analysis of Nonlinear Hybrid Adaptive Control

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Boskovic, Jovan D.

    2008-01-01

    This paper presents a bounded linear stability analysis for a hybrid adaptive control that blends both direct and indirect adaptive control. Stability and convergence of nonlinear adaptive control are analyzed using an approximate linear equivalent system. A stability margin analysis shows that a large adaptive gain can lead to a reduced phase margin. This method can enable metrics-driven adaptive control whereby the adaptive gain is adjusted to meet stability margin requirements.

  12. A Low Power Digital Accumulation Technique for Digital-Domain CMOS TDI Image Sensor.

    PubMed

    Yu, Changwei; Nie, Kaiming; Xu, Jiangtao; Gao, Jing

    2016-09-23

    In this paper, an accumulation technique suitable for digital-domain CMOS time delay integration (TDI) image sensors is proposed to reduce power consumption without degrading the imaging rate. Because the quantization codes obtained from different pixel exposures of the same object vary only slightly, the pixel array is divided into two groups: one for coarse quantization of the high bits only, and the other for fine quantization of the low bits. The complete quantization codes are then composed of the results of both the coarse and the fine quantization. This equivalent operation comparably reduces the total number of bits required for quantization. In the 0.18 µm CMOS process, two versions of 16-stage digital-domain CMOS TDI image sensor chains based on a 10-bit successive approximation register (SAR) analog-to-digital converter (ADC), with and without the proposed technique, are designed. The simulation results show that the average power consumptions of slices of the two versions are 6.47 × 10⁻⁸ J/line and 7.4 × 10⁻⁸ J/line, respectively. Meanwhile, the linearities of the two versions are 99.74% and 99.99%, respectively.
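    The composition of complete codes from the two pixel groups reduces to simple bit arithmetic, sketched below with an assumed 4/6 split of the 10-bit code (the actual split used on the chip is not stated here).

```python
# Hedged sketch of composing complete quantization codes from the two pixel
# groups described above: one group supplies the high (coarse) bits, the other
# the low (fine) bits. The 4/6 bit split is an illustrative assumption.
COARSE_BITS, FINE_BITS = 4, 6            # assumed split of a 10-bit code

def compose_code(coarse, fine):
    """Merge a coarse high-bit result with a fine low-bit result."""
    assert 0 <= coarse < (1 << COARSE_BITS) and 0 <= fine < (1 << FINE_BITS)
    return (coarse << FINE_BITS) | fine

full = compose_code(coarse=0b1011, fine=0b010111)
print(f"complete 10-bit code: {full:010b} ({full})")
```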

  13. Power Supply Fault Tolerant Reliability Study

    DTIC Science & Technology

    1991-04-01

    …easier to design than for equivalent bipolar transistors. … Base circuitry should be designed to drive the transistor into … Sequence the turn-off/turn-on logic in an orderly and controllable manner. Cited references: SWITCHING REGULATORS (Ref. 28); SWITCHING AND LINEAR POWER SUPPLY DESIGN (Ref. 25).

  14. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
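    The SVD compression idea can be sketched as follows: since the influence matrix is numerically rank-deficient, the weight fit is carried out in a truncated-SVD subspace and the beam weights reconstructed afterwards. A nonnegative least-squares fit stands in for the paper's full linear program with hard and soft constraints; all sizes are synthetic.

```python
# Hedged sketch of the SVD acceleration idea: the influence matrix A
# (dose = A @ w) is rank-deficient, so the fit is done against a truncated-SVD
# compression of A. nnls stands in for the paper's full LP formulation.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
n_vox, n_beams, rank = 400, 120, 25
A = rng.standard_normal((n_vox, rank)) @ rng.standard_normal((rank, n_beams))
d_target = np.abs(A @ rng.random(n_beams))        # synthetic prescription

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(S > 1e-8 * S[0]))                  # numerical rank
# Compress: ||A w - d|| reduces to ||S_k Vt_k w - U_k^T d|| in k dimensions,
# up to a constant, because U has orthonormal columns. Solve with w >= 0.
w, _ = nnls(np.diag(S[:k]) @ Vt[:k], U[:, :k].T @ d_target)
print(f"compressed from {n_vox} to {k} rows; residual:",
      np.linalg.norm(A @ w - d_target))
```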

  15. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on the dosimetric distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the ℓ1-norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weight and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a plan quality similar to that of the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve a steeper dose gradient, better conformity, and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  16. A Technique of Teaching the Principle of Equivalence at Ground Level

    ERIC Educational Resources Information Center

    Lubrica, Joel V.

    2016-01-01

    This paper presents one way of demonstrating the Principle of Equivalence in the classroom. Teaching the Principle of Equivalence involves someone experiencing acceleration through empty space, juxtaposed with the daily encounter with gravity. This classroom activity is demonstrated with a water-filled bottle containing glass marbles and…

  17. Application of local linearization and the transonic equivalence rule to the flow about slender analytic bodies at Mach numbers near 1.0

    NASA Technical Reports Server (NTRS)

    Tyson, R. W.; Muraca, R. J.

    1975-01-01

    The local linearization method for axisymmetric flow is combined with the transonic equivalence rule to calculate pressure distribution on slender bodies at free-stream Mach numbers from 0.8 to 1.2. This is an approximate solution to the transonic flow problem which yields results applicable during the preliminary design stages of a configuration development. The method can be used to determine the aerodynamic loads on parabolic arc bodies having either circular or elliptical cross sections. It is particularly useful in predicting pressure distributions and normal force distributions along the body at small angles of attack. The equations discussed may be extended to include wing-body combinations.

  18. Variable structure control of nonlinear systems through simplified uncertain models

    NASA Technical Reports Server (NTRS)

    Sira-Ramirez, Hebertt

    1986-01-01

    A variable structure control approach is presented for the robust stabilization of feedback equivalent nonlinear systems whose proposed model lies in the same structural orbit of a linear system in Brunovsky's canonical form. An attempt to linearize exactly the nonlinear plant on the basis of the feedback control law derived for the available model results in a nonlinearly perturbed canonical system for the expanded class of possible equivalent control functions. Conservatism tends to grow as modeling errors become larger. In order to preserve the internal controllability structure of the plant, it is proposed that model simplification be carried out on the open-loop-transformed system. As an example, a controller is developed for a single link manipulator with an elastic joint.

  19. No differences in subjective knee function between surgical techniques of anterior cruciate ligament reconstruction at 2-year follow-up: a cohort study from the Swedish National Knee Ligament Register.

    PubMed

    Hamrin Senorski, Eric; Sundemo, David; Murawski, Christopher D; Alentorn-Geli, Eduard; Musahl, Volker; Fu, Freddie; Desai, Neel; Stålman, Anders; Samuelsson, Kristian

    2017-12-01

    The purpose of this study was to investigate how different techniques of single-bundle anterior cruciate ligament (ACL) reconstruction affect subjective knee function via the Knee injury and Osteoarthritis Outcome Score (KOOS) evaluation 2 years after surgery. It was hypothesized that the surgical techniques of single-bundle ACL reconstruction would yield equivalent results with respect to subjective knee function 2 years after surgery. This cohort study was based on data from the Swedish National Knee Ligament Register during the 10-year period of 1 January 2005 through 31 December 2014. Patients who underwent primary single-bundle ACL reconstruction with hamstring tendon autograft were included. Details on surgical technique were collected using a web-based questionnaire comprising essential AARSC items, including utilization of accessory medial portal drilling, anatomic tunnel placement, and visualization of insertion sites and landmarks. A repeated measures ANOVA and an additional linear mixed model analysis were used to investigate the effect of surgical technique on the KOOS 4 from the pre-operative period to 2-year follow-up. A total of 13,636 patients who had undergone single-bundle ACL reconstruction comprised the study group for this analysis. A repeated measures ANOVA determined that mean subjective knee function differed between the pre-operative period and 2-year follow-up (p < 0.001). No differences were found with respect to the interaction between KOOS 4 and surgical technique or gender. Additionally, the linear mixed model adjusted for age at reconstruction, gender, and concomitant injuries showed no difference between surgical techniques in KOOS 4 improvement from baseline to 2-year follow-up. However, KOOS 4 improved significantly in patients for all surgical techniques of single-bundle ACL reconstruction (p < 0.001); the largest improvement was seen between the pre-operative period and 1-year follow-up. Surgical techniques of primary single-bundle ACL reconstruction did not demonstrate differences in the improvement in baseline subjective knee function as measured with the KOOS 4 during the first 2 years after surgery. However, subjective knee function improved from pre-operative baseline to 2-year follow-up independently of surgical technique.

  20. Simple taper: Taper equations for the field forester

    Treesearch

    David R. Larsen

    2017-01-01

    "Simple taper" is set of linear equations that are based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...

  1. Influential Nonegligible Parameters under the Search Linear Model.

    DTIC Science & Technology

    1986-04-25

    lack of fit as SSLOF(i) ... (12) and the sum of squares due to pure error as SSPE ... (13). For i = 1, 2, ..., we define F(i) = ... / SSE(i). Noting that the numerator on the RHS of the above expression does not depend on i, we get the equivalence of (a) and (b). Again, SSE(i) = SSPE + SSLOF(i), and SSPE does not depend on i. Therefore (a) and (c) are equivalent. From (14), the equivalence of (c) and (d) is clear. From (3), (6

  2. Asymptotic Stability of Interconnected Passive Non-Linear Systems

    NASA Technical Reports Server (NTRS)

    Isidori, A.; Joshi, S. M.; Kelkar, A. G.

    1999-01-01

    This paper addresses the problem of stabilization of a class of internally passive non-linear time-invariant dynamic systems. A class of non-linear marginally strictly passive (MSP) systems is defined, which is less restrictive than input-strictly passive systems. It is shown that the interconnection of a non-linear passive system and a non-linear MSP system is globally asymptotically stable. The result generalizes and weakens the conditions of the passivity theorem, which requires one of the systems to be input-strictly passive. In the case of linear time-invariant systems, it is shown that the MSP property is equivalent to the marginally strictly positive real (MSPR) property, which is much simpler to check.

  3. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE PAGES

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    2016-10-20

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
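
    A toy sketch contrasting the two projections for a backward-Euler step: the Galerkin ROM zeroes the projected residual, while the LSPG ROM minimizes the full-order discrete residual. The model, basis, and sizes are illustrative assumptions, not the paper's test problem:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy full-order model dx/dt = f(x) and an orthonormal reduced basis V.
rng = np.random.default_rng(1)
N, k = 50, 4
M = -np.eye(N) + 0.1 * rng.standard_normal((N, N))
f = lambda x: M @ x + 0.01 * x**2                    # mildly nonlinear dynamics
V, _ = np.linalg.qr(rng.standard_normal((N, k)))     # toy reduced basis
q_g = q_l = V.T @ rng.standard_normal(N)
dt = 0.01

for _ in range(100):
    # Galerkin ROM: drive the V-projected backward-Euler residual to zero.
    x_prev = V @ q_g
    gal = lambda q: q - V.T @ x_prev - dt * (V.T @ f(V @ q))
    q_g = least_squares(gal, q_g).x

    # LSPG ROM: minimize the full-order backward-Euler residual itself.
    x_prev_l = V @ q_l
    lspg = lambda q: V @ q - x_prev_l - dt * f(V @ q)
    q_l = least_squares(lspg, q_l).x

print(np.linalg.norm(V @ (q_g - q_l)))   # the two projections generally differ
```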

  4. Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir

    Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.

  5. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent Surface Lambertian-Equivalent Reflectivity Calculations

    NASA Technical Reports Server (NTRS)

    Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert

    2017-01-01

    Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that impact the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for retrieval of trace gases. Geometry Dependent LER (GLER) captures these effects with its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets. Modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets as they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations, but complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
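
    A minimal sketch of the surrogate idea, training a small neural network to replace an expensive pixel-by-pixel forward calculation; the toy forward model, inputs, and network settings are assumptions, not the operational GLER code:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for an expensive radiative transfer calculation: maps viewing
# geometry + surface reflectivity to a sun-normalized radiance (toy function).
def forward_model(x):
    sza, vza, raa, refl = x.T
    return refl * np.cos(np.radians(sza)) * (1 + 0.1 * np.cos(np.radians(raa))) \
           / (1 + 0.2 * np.sin(np.radians(vza)))

rng = np.random.default_rng(2)
X = rng.uniform([0, 0, 0, 0], [80, 70, 180, 1], size=(20000, 4))
y = forward_model(X)

# Train a small MLP as a fast surrogate, then predict pixel-by-pixel
# radiances at a tiny fraction of the radiative-transfer cost.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
X_test = rng.uniform([0, 0, 0, 0], [80, 70, 180, 1], size=(1000, 4))
print(np.max(np.abs(net.predict(X_test) - forward_model(X_test))))
```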

  6. Equivalent square formula for determining the surface dose of rectangular field from 6 MV therapeutic photon beam.

    PubMed

    Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn

    2013-09-06

    The purpose of the study was to investigate the use of the equivalent square formula for determining the surface dose from a rectangular photon beam. A 6 MV therapeutic photon beam delivered from a Varian Clinac 23EX medical linear accelerator was modeled using the EGS4nrc Monte Carlo simulation package. It was then used to calculate the dose in the build-up region from both square and rectangular fields. The field patterns were defined by various settings of the X- and Y-collimator jaws ranging from 5 to 20 cm. Dose measurements were performed using a thermoluminescence dosimeter and a Markus parallel-plate ionization chamber on the four square fields (5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm²). The surface dose was acquired by extrapolating the build-up doses to the surface. An equivalent square for a rectangular field was determined using the area-to-perimeter formula, and the surface dose of the equivalent square was estimated using the square-field data. The surface dose of the square fields increased linearly from approximately 10% to 28% as the side of the square field increased from 5 to 20 cm. The influence of collimator exchange on the surface dose was found to be insignificant. The difference in the percentage surface dose of the rectangular field compared to that of the relevant equivalent square was insignificant and can be clinically neglected. The use of the area-to-perimeter formula for an equivalent square field can provide a clinically acceptable surface dose estimation for a rectangular field from a 6 MV therapy photon beam.
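
    A short sketch of the area-to-perimeter rule and the linear square-field trend quoted above; the intermediate dose values are interpolated under an assumed linear relationship, so the numbers are illustrative:

```python
import numpy as np

def equivalent_square(a, b):
    """Side of the equivalent square, s = 4*Area/Perimeter = 2ab/(a+b)."""
    return 2 * a * b / (a + b)

# Square-field surface doses assumed to rise linearly from ~10% at 5 cm
# to ~28% at 20 cm, per the abstract's reported trend.
sides = np.array([5.0, 10.0, 15.0, 20.0])            # square field side (cm)
surface_dose = np.array([10.0, 16.0, 22.0, 28.0])    # assumed % values

def rect_surface_dose(a, b):
    """Estimate a rectangular field's surface dose from square-field data."""
    return np.interp(equivalent_square(a, b), sides, surface_dose)

print(equivalent_square(5, 20))   # 8.0 cm equivalent square
print(rect_surface_dose(5, 20))   # ~13.6 % under the assumed linear trend
```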

  7. Response of Silicon-Based Linear Energy Transfer Spectrometers: Implication for Radiation Risk Assessment in Space Flights

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; O'Neill, P. M.

    2001-01-01

    There is considerable interest in developing silicon-based telescopes because of their compactness and low power requirements. Three such telescopes have been flown on board the Space Shuttle to measure the linear energy transfer spectra of trapped, galactic cosmic ray, and solar energetic particles. Dosimeters based on single silicon detectors have also been flown on the Mir orbital station. A comparison of the absorbed dose and radiation quality factors calculated from these telescopes with that estimated from measurements made with a tissue equivalent proportional counter show differences which need to be fully understood if these telescopes are to be used for astronaut radiation risk assessments. Instrument performance is complicated by a variety of factors. A Monte Carlo-based technique was developed to model the behavior of both single element detectors in a proton beam, and the performance of a two-element, wide-angle telescope, in the trapped belt proton field inside the Space Shuttle. The technique is based on: (1) radiation transport intranuclear-evaporation model that takes into account the charge and angular distribution of target fragments, (2) Landau-Vavilov distribution of energy deposition allowing for electron escape, (3) true detector geometry of the telescope, (4) coincidence and discriminator settings, (5) spacecraft shielding geometry, and (6) the external space radiation environment, including albedo protons. The value of such detailed modeling and its implications in astronaut risk assessment is addressed. © 2001 Elsevier Science B.V. All rights reserved.

  8. Response of silicon-based linear energy transfer spectrometers: implication for radiation risk assessment in space flights.

    PubMed

    Badhwar, G D; O'Neill, P M

    2001-07-11

    There is considerable interest in developing silicon-based telescopes because of their compactness and low power requirements. Three such telescopes have been flown on board the Space Shuttle to measure the linear energy transfer spectra of trapped, galactic cosmic ray, and solar energetic particles. Dosimeters based on single silicon detectors have also been flown on the Mir orbital station. A comparison of the absorbed dose and radiation quality factors calculated from these telescopes with that estimated from measurements made with a tissue equivalent proportional counter show differences which need to be fully understood if these telescopes are to be used for astronaut radiation risk assessments. Instrument performance is complicated by a variety of factors. A Monte Carlo-based technique was developed to model the behavior of both single element detectors in a proton beam, and the performance of a two-element, wide-angle telescope, in the trapped belt proton field inside the Space Shuttle. The technique is based on: (1) radiation transport intranuclear-evaporation model that takes into account the charge and angular distribution of target fragments, (2) Landau-Vavilov distribution of energy deposition allowing for electron escape, (3) true detector geometry of the telescope, (4) coincidence and discriminator settings, (5) spacecraft shielding geometry, and (6) the external space radiation environment, including albedo protons. The value of such detailed modeling and its implications in astronaut risk assessment is addressed. © 2001 Elsevier Science B.V. All rights reserved.

  9. Scalability of the LEU-Modified Cintichem Process: 3-MeV Van de Graaff and 35-MeV Electron Linear Accelerator Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rotsch, David A.; Brossard, Tom; Roussin, Ethan

    Molybdenum-99, the parent of Tc-99m, can be produced from fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex to radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7–31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU Modified-Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at a dose equivalent of 6.2 kCi of Mo-99, the sample lost ~20% of the Mo-99. The 20% loss of Mo-99 at this low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.

  10. Fabrication and kinetics study of nano-Al/NiO thermite film by electrophoretic deposition.

    PubMed

    Zhang, Daixiong; Li, Xueming

    2015-05-21

    Nano-Al/NiO thermites were successfully prepared as films by electrophoretic deposition (EPD). For the key issue of this EPD, a mixed ethanol-acetylacetone solvent (1:1 by volume) containing 0.00025 M nitric acid proved to be a suitable dispersion system. The kinetics of electrophoretic deposition for both nano-Al and nano-NiO were investigated; a linear relation between deposition weight and deposition time at short times and a parabolic relation at longer times were observed in both cases. The critical transition times between linear and parabolic deposition kinetics for nano-Al and nano-NiO were 20 and 10 min, respectively. Theoretical calculation of the deposition kinetics revealed that the equivalence ratio of the nano-Al/NiO thermite film is affected by the deposition behavior of both nano-Al and nano-NiO. The equivalence ratio remained steady while linear deposition kinetics dominated for both nano-Al and nano-NiO, but changed with deposition time once the nano-NiO deposition became parabolic after 10 min. This rule is suggested to apply to the EPD of other bicomposites. We also studied the thermodynamic properties and combustion performance of the electrophoretically deposited nano-Al/NiO thermite film.

  11. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance, in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
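
    A sketch of the crisp (non-fuzzy) linear form, using mean absolute deviation (MAD) as the risk measure since it linearizes exactly with slack variables; the return data and risk cap are toy assumptions, and the paper's fuzzy version would replace these crisp parameters with fuzzy numbers:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
T, n = 250, 5
R = 0.001 + 0.02 * rng.standard_normal((T, n))   # toy historical returns
mu = R.mean(axis=0)
risk_cap = 0.01                                  # max allowed MAD (assumed)

# Variables: weights w (n) and deviations d_t (T), with
# -d_t <= (R_t - mu) @ w <= d_t  and  mean(d) <= risk_cap.
D = R - mu
c = np.concatenate([-mu, np.zeros(T)])           # maximize mu @ w
A_ub = np.vstack([np.hstack([ D, -np.eye(T)]),
                  np.hstack([-D, -np.eye(T)]),
                  np.concatenate([np.zeros(n), np.ones(T) / T])[None, :]])
b_ub = np.concatenate([np.zeros(2 * T), [risk_cap]])
A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]   # fully invested
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * n + [(0, None)] * T)
print("weights:", np.round(res.x[:n], 3))
```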

  12. Design of bent waveguide semiconductor lasers using nonlinear equivalent chirp

    NASA Astrophysics Data System (ADS)

    Li, Lianyan; Shi, Yuechun; Zhang, Yunshan; Chen, Xiangfei

    2018-01-01

    The reconstruction equivalent chirp (REC) technique is widely used in the design and fabrication of semiconductor laser arrays and tunable lasers with low cost and high wavelength accuracy. A bent waveguide is a promising method to suppress the zeroth-order resonance, an intrinsic problem in the REC technique. However, it may introduce basic grating chirp and deteriorate the single longitudinal mode (SLM) property of the laser. A nonlinear equivalent chirp pattern is proposed in this paper to compensate for the grating chirp and improve the SLM property. It will benefit the realization of low-cost distributed feedback (DFB) semiconductor laser arrays with accurate lasing wavelengths.

  13. Application of closed-form solutions to a mesh point field in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1985-01-01

    A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared to other techniques. This new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and the closed-form solutions obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics; these results show that bandgap narrowing is more pronounced and temperature-dependent, in contrast to results based on Boltzmann statistics.

  14. Robust high-precision attitude control for flexible spacecraft with improved mixed H2/H∞ control strategy under poles assignment constraint

    NASA Astrophysics Data System (ADS)

    Liu, Chuang; Ye, Dong; Shi, Keke; Sun, Zhaowei

    2017-07-01

    A novel improved mixed H2/H∞ control technique combined with pole assignment theory is presented in this paper to achieve attitude stabilization and vibration suppression simultaneously for flexible spacecraft. The flexible spacecraft dynamics are described and transformed into the corresponding state-space form. Based on the linear matrix inequality (LMI) scheme and pole assignment theory, the improved mixed H2/H∞ controller does not restrict the two Lyapunov variables involved in the H2 and H∞ performance to be equal, which reduces conservatism compared with the traditional mixed H2/H∞ controller. Moreover, it eliminates the coupling of Lyapunov matrix variables and system matrices by introducing a slack variable that provides an additional degree of freedom. Several simulations are performed to demonstrate the effectiveness and feasibility of the proposed method.
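
    A minimal LMI feasibility sketch, far simpler than the paper's mixed H2/H∞ synthesis but showing the basic machinery: find P > 0 with AᵀP + PA < 0 via cvxpy. The system matrix and tolerance are illustrative assumptions:

```python
import numpy as np
import cvxpy as cp

# Certify asymptotic stability of dx/dt = A x by solving the Lyapunov LMI,
# the basic building block of LMI-based control design.
A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                    # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]     # decrease condition
prob = cp.Problem(cp.Minimize(0), constraints)          # pure feasibility
prob.solve()
print(prob.status)
print(P.value)
```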

  15. Backscatter laser depolarization studies of simulated stratospheric aerosols - Crystallized sulfuric acid droplets

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Zhao, Hongjie; Yu, Bing-Kun

    1989-01-01

    The optical depolarizing properties of simulated stratospheric aerosols were studied in laboratory laser (0.633 micrometer) backscattering experiments for application to polarization lidar observations. Clouds composed of sulfuric acid solution droplets, some treated with ammonia gas, were observed during evaporation. The results indicate that the formation of minute ammonium sulfate particles from the evaporation of acid droplets produces linear depolarization ratios of β ≈ 0.02, but β ≈ 0.10 to 0.15 are generated from aged acid cloud aerosols and acid droplet crystallization effects following the introduction of ammonia gas into the chamber. It is concluded that partially crystallized sulfuric acid droplets are a likely candidate for explaining the lidar β ≈ 0.10 values that have been observed in the lower stratosphere in the absence of the relatively strong backscattering from homogeneous sulfuric acid droplet (β ≈ 0) or ice crystal (β ≈ 0.5) clouds.

  16. Backscatter laser depolarization studies of simulated stratospheric aerosols: Crystallized sulfuric acid droplets

    NASA Technical Reports Server (NTRS)

    Sassen, Kenneth; Zhao, Hongjie; Yu, Bing-Kun

    1988-01-01

    The optical depolarizing properties of simulated stratospheric aerosols were studied in laboratory laser (0.633 micrometer) backscattering experiments for application to polarization lidar observations. Clouds composed of sulfuric acid solution droplets, some treated with ammonia gas, were observed during evaporation. The results indicate that the formation of minute ammonium sulfate particles from the evaporation of acid droplets produces linear depolarization ratios of β ≈ 0.02, but β ≈ 0.10 to 0.15 are generated from aged acid cloud aerosols and acid droplet crystallization effects following the introduction of ammonia gas into the chamber. It is concluded that partially crystallized sulfuric acid droplets are a likely candidate for explaining the lidar β ≈ 0.10 values that have been observed in the lower stratosphere in the absence of the relatively strong backscattering from homogeneous sulfuric acid droplet (β ≈ 0) or ice crystal (β ≈ 0.5) clouds.

  17. Stochastic noise characteristics in matrix inversion tomosynthesis (MITS).

    PubMed

    Godfrey, Devon J; McAdams, H P; Dobbins, James T Third

    2009-05-01

    Matrix inversion tomosynthesis (MITS) uses known imaging geometry and linear systems theory to deterministically separate in-plane detail from residual tomographic blur in a set of conventional tomosynthesis ("shift-and-add") planes. A previous investigation explored the effect of scan angle (ANG), number of projections (N), and number of reconstructed planes (NP) on the MITS impulse response and modulation transfer function characteristics, and concluded that ANG = 20 degrees, N = 71, and NP = 69 is the optimal MITS imaging technique for chest imaging on our prototype tomosynthesis system. This article examines the effect of ANG, N, and NP on the MITS exposure-normalized noise power spectra (ENNPS) and seeks to confirm that the imaging parameters selected previously by an analysis of the MITS impulse response also yield reasonable stochastic properties in MITS reconstructed planes. ENNPS curves were generated for experimentally acquired mean-subtracted projection images, conventional tomosynthesis planes, and MITS planes with varying combinations of the parameters ANG, N, and NP. Image data were collected using a prototype tomosynthesis system, with 11.4 cm acrylic placed near the image receptor to produce lung-equivalent beam hardening and scattered radiation. Ten identically acquired tomosynthesis data sets (realizations) were collected for each selected technique and used to generate ensemble mean images that were subtracted from individual image realizations prior to noise power spectra (NPS) estimation. NPS curves were normalized to account for differences in entrance exposure (as measured with an ion chamber), yielding estimates of the ENNPS for each technique. Results suggest that mid- and high-frequency noise in MITS planes is fairly equivalent in magnitude to noise in conventional tomosynthesis planes, but low-frequency noise is amplified in the most anterior and posterior reconstruction planes. Selecting the largest available number of projections (N = 71) does not incur any appreciable additive electronic noise penalty compared to using fewer projections for roughly equivalent cumulative exposure. Stochastic noise is minimized by maximizing N and NP but increases with increasing ANG. The noise trend results for NP and ANG are contrary to what would be predicted by simply considering the MITS matrix conditioning and likely result from the interplay between noise correlation and the polarity of the MITS filters. From this study, the authors conclude that the previously determined optimal MITS imaging strategy based on impulse response considerations produces somewhat suboptimal stochastic noise characteristics, but is probably still the best technique for MITS imaging of the chest.
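
    A rough sketch of NPS estimation from repeated acquisitions as described above (ensemble-mean subtraction, then averaged squared FFT magnitudes); the normalization conventions and data are simplified assumptions:

```python
import numpy as np

def ennps(realizations, pixel_pitch, exposure):
    """Exposure-normalized noise power spectrum estimate.

    realizations: (n, rows, cols) array of identically acquired images.
    """
    mean_img = realizations.mean(axis=0)
    noise = realizations - mean_img                # mean-subtracted images
    n, rows, cols = noise.shape
    # NPS(u, v) ~ (dx*dy / (Nx*Ny)) * <|FFT2(noise)|^2>, averaged over
    # realizations; then divide by entrance exposure to normalize.
    ps = np.abs(np.fft.fft2(noise, axes=(1, 2)))**2
    nps = ps.mean(axis=0) * pixel_pitch**2 / (rows * cols)
    return nps / exposure

imgs = np.random.default_rng(4).normal(100, 5, (10, 128, 128))  # toy stack
print(ennps(imgs, pixel_pitch=0.1, exposure=2.0).mean())
```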

  18. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
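
    A hypothetical sketch of a frequency-domain output-error fit of a low-order equivalent system, here a second-order-plus-delay form; the model structure, synthetic data, and starting values are illustrative, not the report's actual setup:

```python
import numpy as np
from scipy.optimize import least_squares

w = np.logspace(-1, 1.3, 40)                      # rad/s evaluation grid
s = 1j * w

def loes(p, s):
    """Low-order equivalent system: gain, damping, natural freq, delay."""
    K, zeta, wn, tau = p
    return K * np.exp(-tau * s) / (s**2 + 2 * zeta * wn * s + wn**2)

# Stand-in for frequency responses extracted from pilot-input /
# aircraft-response flight data, with a little multiplicative noise.
truth = loes([4.0, 0.6, 2.5, 0.08], s)
H_meas = truth * (1 + 0.02 * np.random.default_rng(5).standard_normal(len(s)))

def resid(p):
    e = loes(p, s) - H_meas
    return np.concatenate([e.real, e.imag])       # stack complex error

fit = least_squares(resid, x0=[1.0, 0.5, 1.0, 0.05],
                    bounds=([0, 0, 0, 0], [np.inf, 2, 20, 0.5]))
print("K, zeta, wn, tau =", np.round(fit.x, 3))
```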

  19. Efficacy and equivalency of an Escherichia coli-derived phytase for replacing inorganic phosphorus in the diets of broiler chickens and young pigs.

    PubMed

    Jendza, J A; Dilger, R N; Sands, J S; Adeola, O

    2006-12-01

    Two studies were conducted to determine the efficacy of an Escherichia coli-derived phytase (ECP) and its equivalency relative to inorganic phosphorus (iP) from monosodium phosphate (MSP). In Exp. 1, one thousand two hundred 1-d-old male broilers were used in a 42-d trial to assess the effect of ECP and iP supplementation on growth performance and nutrient digestibility. Dietary treatments were based on corn-soybean meal basal diets (BD) containing 239 and 221 g of CP, 8.2 and 6.6 g of Ca, and 2.4 and 1.5 g of nonphytate P (nPP) per kg for the starter and grower phases, respectively. Treatments consisted of the BD; the BD + 0.6, 1.2, or 1.8 g of iP from MSP per kg; and the BD + 250, 500, 750, or 1,000 phytase units (FTU) of ECP per kg. Increasing levels of MSP improved gain, gain:feed, and tibia ash (linear, P < 0.01). Increasing levels of ECP improved gain, gain:feed, tibia ash (linear, P < 0.01), apparent ileal digestibility of P, N, Arg, His, Phe, and Trp at d 21 (linear, P < 0.05), and apparent retention of P at d 21 (linear, P < 0.05). Increasing levels of ECP decreased apparent retention of energy (linear, P < 0.01). Five hundred FTU of ECP per kg was determined to be equivalent to the addition of 0.72, 0.78, and 1.19 g of iP from MSP per kg in broiler diets based on gain, feed intake, and bone ash, respectively. In Exp. 2, forty-eight 10-kg pigs were used in a 28-d trial to assess the effect of ECP and iP supplementation on growth performance and nutrient digestibility. Dietary treatments consisted of a positive control containing 6.1 and 3.5 g of Ca and nPP, respectively, per kg; a negative control (NC) containing 4.8 and 1.7 g of Ca and nPP, respectively, per kg; the NC diet plus 0.4, 0.8, or 1.2 g of iP from MSP per kg; and the NC diet plus 500, 750, or 1,000 FTU of ECP per kg. Daily gain improved (linear, P < 0.05) with ECP addition, as did apparent digestibility of Ca and P (linear, P < 0.01). Five hundred FTU of ECP per kg was determined to be equivalent to the addition of 0.49 and 1.00 g of iP from MSP per kg in starter pig diets, based on ADG and bone ash, respectively.

  20. A numerical approach for assessing effects of shear on equivalent permeability and nonlinear flow characteristics of 2-D fracture networks

    NASA Astrophysics Data System (ADS)

    Liu, Richeng; Li, Bo; Jiang, Yujing; Yu, Liyuan

    2018-01-01

    Hydro-mechanical properties of rock fractures are core issues for many geoscience and geo-engineering practices. Previous experimental and numerical studies have revealed that shear processes could greatly enhance the permeability of single rock fractures, yet the shear effects on hydraulic properties of fractured rock masses have received little attention. In most previous fracture network models, single fractures are typically presumed to be formed by parallel plates and flow is presumed to obey the cubic law. However, related studies have suggested that the parallel plate model cannot realistically represent the surface characteristics of natural rock fractures, and the relationship between flow rate and pressure drop will no longer be linear at sufficiently large Reynolds numbers. In the present study, a numerical approach was established to assess the effects of shear on the hydraulic properties of 2-D discrete fracture networks (DFNs) in both linear and nonlinear regimes. DFNs considering fracture surface roughness and variation of aperture in space were generated using an originally developed code, DFNGEN. Numerical simulations solving the Navier-Stokes equations were performed to simulate the fluid flow through these DFNs. A fracture that cuts through each model was sheared, and by varying the shear and normal displacements, the effects of shear on equivalent permeability and nonlinear flow characteristics of DFNs were estimated. The results show that the critical condition quantifying the transition from a linear flow regime to a nonlinear flow regime is 10⁻⁴ < J < 10⁻³, where J is the hydraulic gradient. When the fluid flow is in a linear regime (i.e., J < 10⁻⁴), the relative deviation of equivalent permeability induced by shear, δ2, is linearly correlated with J with small variations, while for fluid flow in the nonlinear regime (J > 10⁻³), δ2 is nonlinearly correlated with J. A shear process would reduce the equivalent permeability significantly in the orientation perpendicular to the sheared fracture, by as much as 53.86% when J = 1, shear displacement Ds = 7 mm, and normal displacement Dn = 1 mm. By fitting the calculated results, the mathematical expression for δ2 is established to help choose proper governing equations when solving fluid flow problems in fracture networks.

  1. Linear energy transfer in water phantom within SHIELD-HIT transport code

    NASA Astrophysics Data System (ADS)

    Ergun, A.; Sobolevsky, N.; Botvina, A. S.; Buyukcizmeci, N.; Latysheva, L.; Ogul, R.

    2017-02-01

    The effect of irradiation in tissue is important in hadron therapy for dose measurement and treatment planning. This biological effect is characterized by an equivalent dose H, which depends on the linear energy transfer (LET). Usually, H can be expressed in terms of the absorbed dose D and the quality factor K of the radiation under consideration. In the literature, various transport codes have been used to model and simulate the interaction of proton and heavier-ion beams with tissue-equivalent materials. In this presentation, we use the SHIELD-HIT code to simulate the decomposition of the absorbed dose by LET in water for 16O beams. A more detailed description of the capabilities of the SHIELD-HIT code can be found in the literature.
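
    A small sketch of the H = Q·D relation mentioned above, using the ICRP 60 piecewise quality factor Q(L) as an assumed concrete form; the numbers are illustrative:

```python
# Convert absorbed dose to dose equivalent with a quality factor that
# depends on unrestricted LET L (keV/um). The piecewise Q(L) below
# follows the ICRP 60 convention, taken here as an assumption.
def quality_factor(L):
    if L < 10:
        return 1.0
    if L <= 100:
        return 0.32 * L - 2.2
    return 300.0 / L**0.5

def dose_equivalent(D_gray, L):
    """H (Sv) = Q(L) * D (Gy) for a beam of roughly constant LET."""
    return quality_factor(L) * D_gray

print(dose_equivalent(0.5, 25))   # 0.5 Gy at 25 keV/um -> 2.9 Sv
```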

  2. Equivalence of quantum Boltzmann equation and Kubo formula for dc conductivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Z.B.; Chen, L.Y.

    1990-02-01

    This paper presents a derivation of the quantum Boltzmann equation for linear dc transport, with a correction term to Mahan-Hansch's equations, and derives a formal solution to it. Based on this formal solution, the authors find that the electric conductivity can be expressed as the retarded current-current correlation. Therefore, the authors explicitly demonstrate the equivalence of the two most important theoretical methods: the quantum Boltzmann equation and the Kubo formula.

  3. Influence of beam efficiency through the patient-specific collimator on secondary neutron dose equivalent in double scattering and uniform scanning modes of proton therapy.

    PubMed

    Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K

    2010-06-01

    Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for the rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when a single circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam. The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce the secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.

  4. Exoatmospheric intercepts using zero effort miss steering for midcourse guidance

    NASA Astrophysics Data System (ADS)

    Newman, Brett

    The suitability of proportional navigation, or an equivalent zero effort miss formulation, for exoatmospheric intercepts during midcourse guidance, followed by a ballistic coast to the endgame, is addressed. The problem is formulated in terms of relative motion in a general, three dimensional framework. The proposed guidance law for the commanded thrust vector orientation consists of the sum of two terms: (1) along the line of sight unit direction and (2) along the zero effort miss component perpendicular to the line of sight and proportional to the miss itself and a guidance gain. If the guidance law is to be suitable for longer range targeting applications with significant ballistic coasting after burnout, determination of the zero effort miss must account for the different gravitational accelerations experienced by each vehicle. The proposed miss determination techniques employ approximations for the true differential gravity effect and thus, are less accurate than a direct numerical propagation of the governing equations, but more accurate than a baseline determination, which assumes equal accelerations for both vehicles. Approximations considered are constant, linear, quadratic, and linearized inverse square models. Theoretical results are applied to a numerical engagement scenario and the resulting performance is evaluated in terms of the miss distances determined from nonlinear simulation.
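
    A sketch of the proposed two-term thrust-direction law with a constant differential-gravity zero effort miss (ZEM) approximation; the gain, state values, and function names are illustrative assumptions:

```python
import numpy as np

def zem_guidance(r_rel, v_rel, g_diff, t_go, k_miss=3.0):
    """Unit thrust direction: line-of-sight term plus a term along the
    ZEM component perpendicular to the line of sight."""
    # ZEM under a constant differential-gravity model over time-to-go.
    zem = r_rel + v_rel * t_go + 0.5 * g_diff * t_go**2
    los = r_rel / np.linalg.norm(r_rel)
    zem_perp = zem - np.dot(zem, los) * los        # miss component off the LOS
    cmd = los + k_miss * zem_perp / max(t_go**2, 1e-6)
    return cmd / np.linalg.norm(cmd)

r = np.array([80e3, 10e3, -5e3])     # relative position (m), illustrative
v = np.array([-2500.0, 30.0, 10.0])  # relative velocity (m/s)
g = np.array([0.0, -0.05, 0.0])      # differential gravity (m/s^2), constant model
print(zem_guidance(r, v, g, t_go=32.0))
```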

  5. Porosity Defect Remodeling and Tensile Analysis of Cast Steel

    PubMed Central

    Sun, Linfeng; Liao, Ridong; Lu, Wei; Fu, Sibo

    2016-01-01

    Tensile properties of ASTM A216 WCB cast steel with centerline porosity defects were studied with radiographic mapping and a finite element remodeling technique. Non-linear elastic and plastic behaviors dependent on porosity were mathematically described by relevant equation sets. According to the ASTM E8 tensile test standard, matrix and defect specimens were machined into two categories by two types of height. After radiographic inspection, defect morphologies were mapped to the mid-sections of the finite element models, and porosity fraction fields were generated by interpolation. ABAQUS input parameters were confirmed by trial simulations of the matrix specimens and comparison with experimental outcomes. Good agreement between the simulated and experimental curves was observed, and the predicted positions of the tensile fracture were found to be in accordance with the tests. The chord modulus was used to obtain the equivalent elastic stiffness because of the non-linear features. The results showed that elongation was the property most affected by the defects, compared with elastic stiffness and yield stress. Additional visual explanations of the tensile fracture caused by void propagation were given by result contours at different loading stages, including distributions of Mises stress and plastic strain. PMID:28787919

  6. VRT (verbal reasoning test): a new test for assessment of verbal reasoning. Test realization and Italian normative data from a multicentric study.

    PubMed

    Basagni, Benedetta; Luzzatti, Claudio; Navarrete, Eduardo; Caputo, Marina; Scrocco, Gessica; Damora, Alessio; Giunchi, Laura; Gemignani, Paola; Caiazzo, Annarita; Gambini, Maria Grazia; Avesani, Renato; Mancuso, Mauro; Trojano, Luigi; De Tanti, Antonio

    2017-04-01

    Verbal reasoning is a complex, multicomponent function, which involves activation of functional processes and neural circuits distributed in both brain hemispheres. Thus, this ability is often impaired after brain injury. The aim of the present study is to describe the construction of a new verbal reasoning test (VRT) for patients with brain injury and to provide normative values in a sample of healthy Italian participants. Three hundred and eighty healthy Italian subjects (193 women and 187 men) of different ages (range 16-75 years) and educational level (primary school to postgraduate degree) underwent the VRT. VRT is composed of seven subtests, investigating seven different domains. Multiple linear regression analysis revealed a significant effect of age and education on the participants' performance in terms of both VRT total score and all seven subtest scores. No gender effect was found. A correction grid for raw scores was built from the linear equation derived from the scores. Inferential cut-off scores were estimated using a non-parametric technique, and equivalent scores were computed. We also provided a grid for the correction of results by z scores.

  7. Reconstructions of Soil Moisture for the Upper Colorado River Basin Using Tree-Ring Chronologies

    NASA Astrophysics Data System (ADS)

    Tootle, G.; Anderson, S.; Grissino-Mayer, H.

    2012-12-01

    Soil moisture is an important factor in the global hydrologic cycle, but existing reconstructions of historic soil moisture are limited. Tree-ring chronologies (TRCs) were used to reconstruct annual soil moisture in the Upper Colorado River Basin (UCRB). Gridded soil moisture data were spatially regionalized using principal components analysis and k-nearest neighbor techniques. Moisture sensitive tree-ring chronologies in and adjacent to the UCRB were correlated with regional soil moisture and tested for temporal stability. TRCs that were positively correlated and stable for the calibration period were retained. Stepwise linear regression was applied to identify the best predictor combinations for each soil moisture region. The regressions explained 42-78% of the variability in soil moisture data. We performed reconstructions for individual soil moisture grid cells to enhance understanding of the disparity in reconstructive skill across the regions. Reconstructions that used chronologies based on ponderosa pines (Pinus ponderosa) and pinyon pines (Pinus edulis) explained increased variance in the datasets. Reconstructed soil moisture was standardized and compared with standardized reconstructed streamflow and snow water equivalent from the same region. Soil moisture reconstructions were highly correlated with streamflow and snow water equivalent reconstructions, indicating reconstructions of soil moisture in the UCRB using TRCs successfully represent hydrologic trends, including the identification of periods of prolonged drought.
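
    A compact sketch of forward stepwise predictor selection of the kind used to pick chronologies for each regression; the data, stopping rule, and threshold are illustrative assumptions:

```python
import numpy as np

def forward_stepwise(X, y, min_gain=0.01):
    """Greedily add the predictor that most improves calibration R^2."""
    chosen, best_r2 = [], 0.0
    while len(chosen) < X.shape[1]:
        scores = {}
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            A = np.column_stack([X[:, cols], np.ones(len(y))])  # with intercept
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            scores[j] = 1 - resid.var() / y.var()
        j_best = max(scores, key=scores.get)
        if scores[j_best] - best_r2 < min_gain:   # stop on marginal gain
            break
        chosen.append(j_best)
        best_r2 = scores[j_best]
    return chosen, best_r2

rng = np.random.default_rng(6)
X = rng.standard_normal((60, 8))                  # 8 candidate chronologies
y = 0.8 * X[:, 2] - 0.5 * X[:, 5] + 0.3 * rng.standard_normal(60)
print(forward_stepwise(X, y))                     # should pick columns 2 and 5
```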

  8. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.

  9. Menstrual blood loss measurement: validation of the alkaline hematin technique for feminine hygiene products containing superabsorbent polymers.

    PubMed

    Magnay, Julia L; Nevatte, Tracy M; Dhingra, Vandana; O'Brien, Shaughn

    2010-12-01

    To validate the alkaline hematin technique for measurement of menstrual blood loss using ultra-thin sanitary towels that contain superabsorbent polymer granules as the absorptive agent. Laboratory study using simulated menstrual fluid (SMF) and Always Ultra Normal, Long, and Night "with wings" sanitary towels. Keele Menstrual Disorders Laboratory. None. None. Recovery of blood, linearity, and interassay variation over a range of SMF volumes applied to towels. Because of the variable percentage of blood in menstrual fluid, blood recovery was assessed from SMF constituted as 10%, 25%, 50%, and 100% blood. The lower limit of reliable detection and the effect of storing soiled towels for up to 4 weeks at 15°C-20°C, 4°C, and -20°C before analysis were determined. Ninety percent recovery was reproducibly achieved up to 30 mL applied volume at all tested SMF compositions, except at low volume or high dilution equivalent to <2 mL whole blood. Samples could be stored for 3 weeks at all tested temperatures without loss of recovery. The technique was suitable for processing towels individually or in batches. The alkaline hematin technique is a suitable and validated method for measuring menstrual blood loss from Always Ultra sanitary towels that contain superabsorbent polymers. Copyright © 2010 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  10. Cavity-enhanced resonant photoacoustic spectroscopy with optical feedback cw diode lasers: A novel technique for ultratrace gas analysis and high-resolution spectroscopy.

    PubMed

    Hippler, Michael; Mohr, Christian; Keen, Katherine A; McNaghten, Edward D

    2010-07-28

    Cavity-enhanced resonant photoacoustic spectroscopy with optical feedback cw diode lasers (OF-CERPAS) is introduced as a novel technique for ultratrace gas analysis and high-resolution spectroscopy. In the scheme, a single-mode cw diode laser (3 mW, 635 nm) is coupled into a high-finesse linear cavity and stabilized to the cavity by optical feedback. Inside the cavity, a build-up of laser power to at least 2.5 W occurs. Absorbing gas phase species inside the cavity are detected with high sensitivity by the photoacoustic effect using a microphone embedded in the cavity. To increase sensitivity further, coupling into the cavity is modulated at a frequency corresponding to a longitudinal resonance of an organ pipe acoustic resonator (f = 1.35 kHz and Q ≈ 10). The technique has been characterized by measuring very weak water overtone transitions near 635 nm. Normalized noise-equivalent absorption coefficients are determined as α ≈ 4.4 × 10⁻⁹ cm⁻¹ s^(1/2) (1 s integration time) and 2.6 × 10⁻¹¹ cm⁻¹ s^(1/2) W (1 s integration time and 1 W laser power). These sensitivities compare favorably with existing state-of-the-art techniques. As an advantage, OF-CERPAS is a "zero-background" method which increases selectivity and sensitivity, and its sensitivity scales with laser power.

  11. Modification of the USLE K factor for soil erodibility assessment on calcareous soils in Iran

    NASA Astrophysics Data System (ADS)

    Ostovari, Yaser; Ghorbani-Dashtaki, Shoja; Bahrami, Hossein-Ali; Naderi, Mehdi; Dematte, Jose Alexandre M.; Kerry, Ruth

    2016-11-01

    The measurement of soil erodibility (K) in the field is tedious, time-consuming and expensive; therefore, its prediction through pedotransfer functions (PTFs) could be far less costly and time-consuming. The aim of this study was to develop new PTFs to estimate the K factor using multiple linear regression, Mamdani fuzzy inference systems, and artificial neural networks. For this purpose, K was measured in 40 erosion plots with natural rainfall. Various soil properties including the soil particle size distribution, calcium carbonate equivalent, organic matter, permeability, and wet-aggregate stability were measured. The results showed that the mean measured K was 0.014 t h MJ⁻¹ mm⁻¹ and 2.08 times less than the estimated mean K (0.030 t h MJ⁻¹ mm⁻¹) using the USLE model. Permeability, wet-aggregate stability, very fine sand, and calcium carbonate were selected as independent variables by forward stepwise regression in order to assess the ability of multiple linear regression, Mamdani fuzzy inference systems and artificial neural networks to predict K. The calcium carbonate equivalent, which is not accounted for in the USLE model, had a significant impact on K in multiple linear regression due to its strong influence on the stability of aggregates and soil permeability. Statistical indices in validation and calibration datasets determined that the artificial neural networks method with the highest R², lowest RMSE, and lowest ME was the best model for estimating the K factor. A strong correlation (R² = 0.81, n = 40, p < 0.05) between the estimated K from multiple linear regression and measured K indicates that the use of calcium carbonate equivalent as a predictor variable gives a better estimation of K in areas with calcareous soils.

  12. Stochastic stability properties of jump linear systems

    NASA Technical Reports Server (NTRS)

    Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.

    1992-01-01

    Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
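
    The "testable necessary and sufficient condition for second moment stability" mentioned above is, in the standard Markov jump linear system literature, a spectral-radius test on an augmented second-moment operator. The sketch below implements that classical test for a discrete-time system x_{k+1} = A_{θ_k} x_k with transition matrix P; it illustrates the style of condition rather than transcribing the paper's derivation:

      # Mean-square (second moment) stability test for a discrete-time jump
      # linear system: stable iff the spectral radius of the augmented
      # second-moment operator (P^T kron I) * diag(A_i kron A_i) is < 1.
      import numpy as np

      def mean_square_stable(A_list, P):
          n, N = A_list[0].shape[0], len(A_list)
          D = np.zeros((N * n * n, N * n * n))
          for i, A in enumerate(A_list):
              D[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(A, A)  # per-mode moment dynamics
          big = np.kron(P.T, np.eye(n * n)) @ D                    # augmented moment operator
          return max(abs(np.linalg.eigvals(big))) < 1.0

      A1 = np.array([[0.5, 1.0], [0.0, 0.5]])   # stable mode
      A2 = np.array([[1.2, 0.0], [0.0, 0.3]])   # unstable mode
      P = np.array([[0.9, 0.1], [0.1, 0.9]])    # Markov transition matrix
      print(mean_square_stable([A1, A2], P))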

  13. Fast simulation techniques for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.

  14. Dose estimation and dating of pottery from Turkey

    NASA Astrophysics Data System (ADS)

    Altay Atlıhan, M.; Şahiner, Eren; Soykal Alanyalı, Feriştah

    2012-06-01

    The luminescence method is a widely used technique for environmental dosimetry and for dating archaeological and geological materials. In this study, the equivalent dose (ED) and annual dose rate (AD) of an archaeological sample were measured. The age of the material was calculated as the equivalent dose divided by the annual dose rate. The archaeological sample was taken from Antalya, Turkey. Samples were prepared by the fine grain technique, and the equivalent dose was found using the multiple-aliquot-additive-dose (MAAD) and single aliquot regeneration (SAR) techniques. Short shine normalization-MAAD and long shine normalization-MAAD were also applied, and the results of the methods were compared with each other. The optimal preheat temperature was found to be 200 °C for 10 min. The annual dose rate was determined from the concentrations of the major radioactive isotopes, measured using a high-purity germanium detector and a low-level alpha counter. The age of the sample was found to be 510±40 years.
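
    The age equation used here is simple enough for a worked example. The ED and AD values below are hypothetical (the abstract does not quote them); they are chosen only to show how a result of order 510 ± 40 years arises:

      # Luminescence dating: age = equivalent dose (ED) / annual dose rate (AD).
      ED = 1.53       # Gy, hypothetical equivalent dose
      AD = 3.0e-3     # Gy/year, hypothetical annual dose rate
      age = ED / AD
      print(f"age = {age:.0f} years")   # -> 510 years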

  15. Fall with Linear Drag and Wien's Displacement Law: Approximate Solution and Lambert Function

    ERIC Educational Resources Information Center

    Vial, Alexandre

    2012-01-01

    We present an approximate solution for the downward time of travel in the case of a mass falling with a linear drag force. We show how a quasi-analytical solution involving the Lambert function can be found. We also show that solving the previous problem is equivalent to the search for Wien's displacement law. These results can be of interest for…
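
    A sketch of the kind of Lambert-function solution the paper discusses, under stated assumptions (fall from rest, drag force -k*v, so m*dv/dt = m*g - k*v): writing tau = m/k, v_T = g*tau and h = H/(v_T*tau), the time to fall a height H works out to T = tau*(1 + h + W0(-exp(-(1 + h)))). This is an illustrative derivation, not necessarily the paper's exact expression:

      # Fall time with linear drag via the Lambert W function, with a
      # numerical cross-check of the implicit equation y(T) = H.
      import numpy as np
      from scipy.special import lambertw

      g, tau, H = 9.81, 2.0, 100.0      # SI units; tau = m/k is the drag time scale
      v_T = g * tau                     # terminal velocity
      h = H / (v_T * tau)
      T = tau * (1.0 + h + lambertw(-np.exp(-(1.0 + h))).real)
      print(f"fall time T = {T:.3f} s")

      y = v_T * T - v_T * tau * (1.0 - np.exp(-T / tau))   # position reached at T
      print(f"y(T) = {y:.3f} m (should equal H = {H})")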

  16. Utilization of thermoluminescent dosimetry in total skin electron beam radiotherapy of mycosis fungoides.

    PubMed

    Antolak, J A; Cundiff, J H; Ha, C S

    1998-01-01

    The purpose of this report is to discuss the utilization of thermoluminescent dosimetry (TLD) in total skin electron beam (TSEB) radiotherapy to: (a) compare patient dose distributions for similar techniques on different machines, (b) confirm beam calibration and monitor unit calculations, (c) provide data for making clinical decisions, and (d) study reasons for variations in individual dose readings. We report dosimetric results for 72 cases of mycosis fungoides, using similar irradiation techniques on two different linear accelerators. All patients were treated using a modified Stanford 6-field technique. In vivo TLD was done on all patients, and the data for all patients treated on both machines were collected into a database for analysis. Means and standard deviations (SDs) were computed for all locations. Scatter plots of doses vs. height, weight, and obesity index were generated, and correlation coefficients with these variables were computed. The TLD results show that our current TSEB implementation is dosimetrically equivalent to the previous implementation, and that our beam calibration technique and monitor unit calculation are accurate. Correlations with obesity index were significant at several sites. Individual TLD results allow us to customize the boost treatment for each patient, in addition to revealing patient positioning problems and/or systematic variations in dose caused by patient variability. The data agree well with previously published TLD results for similar TSEB techniques. TLD is an important part of the treatment planning and quality assurance programs for TSEB, and routine use of TLD measurements for TSEB is recommended.

  17. Toward Worldwide Hepcidin Assay Harmonization: Identification of a Commutable Secondary Reference Material.

    PubMed

    van der Vorm, Lisa N; Hendriks, Jan C M; Laarakkers, Coby M; Klaver, Siem; Armitage, Andrew E; Bamberg, Alison; Geurts-Moespot, Anneke J; Girelli, Domenico; Herkert, Matthias; Itkonen, Outi; Konrad, Robert J; Tomosugi, Naohisa; Westerman, Mark; Bansal, Sukhvinder S; Campostrini, Natascia; Drakesmith, Hal; Fillet, Marianne; Olbina, Gordana; Pasricha, Sant-Rayn; Pitts, Kelly R; Sloan, John H; Tagliaro, Franco; Weykamp, Cas W; Swinkels, Dorine W

    2016-07-01

    Absolute plasma hepcidin concentrations measured by various procedures differ substantially, complicating interpretation of results and rendering reference intervals method dependent. We investigated the degree of equivalence achievable by harmonization and the identification of a commutable secondary reference material to accomplish this goal. We applied technical procedures to achieve harmonization developed by the Consortium for Harmonization of Clinical Laboratory Results. Eleven plasma hepcidin measurement procedures (5 mass spectrometry based and 6 immunochemical based) quantified native individual plasma samples (n = 32) and native plasma pools (n = 8) to assess analytical performance and current and achievable equivalence. In addition, 8 types of candidate reference materials (3 concentrations each, n = 24) were assessed for their suitability, most notably in terms of commutability, to serve as secondary reference material. Absolute hepcidin values and reproducibility (intrameasurement procedure CVs 2.9%-8.7%) differed substantially between measurement procedures, but all were linear and correlated well. The current equivalence (intermeasurement procedure CV 28.6%) between the methods was mainly attributable to differences in calibration and could thus be improved by harmonization with a common calibrator. Linear regression analysis and standardized residuals showed that a candidate reference material consisting of native lyophilized plasma with cryolyoprotectant was commutable for all measurement procedures. Mathematically simulated harmonization with this calibrator resulted in a maximum achievable equivalence of 7.7%. The secondary reference material identified in this study has the potential to substantially improve equivalence between hepcidin measurement procedures and contributes to the establishment of a traceability chain that will ultimately allow standardization of hepcidin measurement results. © 2016 American Association for Clinical Chemistry.

  18. USEPA PATHOGEN EQUIVALENCY COMMITTEE RETREAT

    EPA Science Inventory

    The Pathogen Equivalency Committee held its retreat from September 20-21, 2005 at Hueston Woods State Park in College Corner, Ohio. This presentation will update the PEC’s membership on emerging pathogens, analytical methods, disinfection techniques, risk analysis, preparat...

  19. The risk equivalent of an exposure to-, versus a dose of radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, V.P.

    The long-term potential carcinogenic effects of low-level exposure (LLE) are addressed. The principal point discussed is the linear, no-threshold dose-response curve. That the linear no-threshold, or proportional, relationship is widely used is seen in the way in which the values for cancer risk coefficients are expressed: in terms of new cases, per million persons exposed, per year, per unit exposure or dose. This implies that the underlying relationship is proportional, i.e., "linear, without threshold". 12 refs., 9 figs., 1 tab.

  20. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

    Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scaled equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.

  1. Characterizing hydrochemical properties of springs in Taiwan based on their geological origins.

    PubMed

    Jang, Cheng-Shin; Chen, Jui-Sheng; Lin, Yun-Bin; Liu, Chen-Wuing

    2012-01-01

    This study was performed to characterize hydrochemical properties of springs based on their geological origins in Taiwan. Stepwise discriminant analysis (DA) was used to establish a linear classification model of springs using hydrochemical parameters. Two hydrochemical datasets, ion concentrations and relative proportions of equivalents per liter of major ions, were included to perform prediction of the geological origins of springs. Analyzed results reveal that DA using relative proportions of equivalents per liter of major ions yields a 95.6% right assignation, which is superior to DA using ion concentrations. This result indicates that relative proportions of equivalents of major hydrochemical parameters in spring water are more highly associated with the geological origins than ion concentrations are. Low percentages of Na⁺ equivalents are common properties of springs emerging from acid-sulfate and neutral-sulfate igneous rock. Springs emerging from metamorphic rock show low percentages of Cl⁻ equivalents and high percentages of HCO₃⁻ equivalents, and springs emerging from sedimentary rock exhibit high Cl⁻/SO₄²⁻ ratios.
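
    A minimal sketch of the classification step: linear discriminant analysis on relative ion proportions. The synthetic Dirichlet-distributed compositions below only mimic the qualitative patterns described in the abstract; the 95.6% assignation rate refers to the study's own field data:

      # Linear discriminant analysis of spring geology from ion proportions.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      # columns: equivalent fractions of Na+, Cl-, HCO3-, SO4^2- (rows sum to 1)
      X_igneous     = rng.dirichlet([1, 3, 2, 6], 30)   # low Na+, sulfate-rich
      X_metamorphic = rng.dirichlet([4, 1, 8, 2], 30)   # low Cl-, high HCO3-
      X_sedimentary = rng.dirichlet([4, 6, 3, 1], 30)   # high Cl-/SO4 ratio
      X = np.vstack([X_igneous, X_metamorphic, X_sedimentary])
      y = np.repeat(["igneous", "metamorphic", "sedimentary"], 30)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print("training assignation rate:", lda.score(X, y))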

  2. Experimental study of turbulent flame kernel propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansour, Mohy; Peters, Norbert; Schrader, Lars-Uve

    2008-07-15

    Flame kernels in spark ignited combustion systems dominate the flame propagation and combustion stability and performance. They are likely controlled by the spark energy, flow field and mixing field. The aim of the present work is to experimentally investigate the structure and propagation of the flame kernel in turbulent premixed methane flow using advanced laser-based techniques. The spark is generated using a pulsed Nd:YAG laser with 20 mJ pulse energy in order to avoid the effect of the electrodes on the flame kernel structure and the shot-to-shot variation of spark energy. Four flames have been investigated at equivalence ratios, φ_j, of 0.8 and 1.0 and jet velocities, U_j, of 6 and 12 m/s. A combined two-dimensional Rayleigh and LIPF-OH technique has been applied. The flame kernel structure has been collected at several time intervals from the laser ignition between 10 μs and 2 ms. The data show that the flame kernel structure starts with a spherical shape and changes gradually to peanut-like, then to mushroom-like, and is finally disturbed by the turbulence. The mushroom-like structure lasts longer in the stoichiometric and slower jet velocity cases. The growth rate of the average flame kernel radius is divided into two linear relations; the first one, during the first 100 μs, is almost three times faster than that at the later stage between 100 and 2000 μs. The flame propagation is slightly faster in leaner flames. The trends of the flame propagation, flame radius, flame cross-sectional area and mean flame temperature are related to the jet velocity and equivalence ratio. The relations obtained in the present work allow the prediction of any of these parameters at different conditions.

  3. On the Importance of Electronic Symmetry for Triplet State Delocalization

    DOE PAGES

    Richert, Sabine; Bullard, George; Rawson, Jeff; ...

    2017-03-29

    The influence of electronic symmetry on triplet state delocalization in linear zinc porphyrin oligomers is explored by electron paramagnetic resonance techniques. Using a combination of transient continuous wave and pulse electron nuclear double resonance spectroscopies, it is demonstrated experimentally that complete triplet state delocalization requires the chemical equivalence of all porphyrin units. These results are supported by density functional theory calculations, showing uneven delocalization in a porphyrin dimer in which a terminal ethynyl group renders the two porphyrin units inequivalent. When the conjugation length of the molecule is further increased upon addition of a second terminal ethynyl group that restores the symmetry of the system, the triplet state is again found to be completely delocalized. Finally, the observations suggest that electronic symmetry is of greater importance for triplet state delocalization than other frequently invoked factors such as conformational rigidity or fundamental length-scale limitations.

  4. Further studies of propellant sloshing under low-gravity conditions

    NASA Technical Reports Server (NTRS)

    Dodge, F. T.

    1971-01-01

    A variational integral is formulated from Hamilton's Principle and is proved to be equivalent to the usual differential equations of low-gravity sloshing in ellipsoidal tanks. It is shown that for a zero-degree contact angle the contact line boundary condition corresponds to the stuck condition, a result that is due to the linearization of the equations and the ambiguity in the definition of the wave height at the wall. The variational integral is solved by a Rayleigh-Ritz technique. Results for slosh frequency when the free surface is not bent-over compare well with previous numerical solutions. When the free surface is bent over, however, the results for slosh frequency are considerably larger than those predicted by previous finite-difference, numerical approaches: the difference may be caused by the use of a zero degree contact angle in the present theory in contrast to the nonzero contact angle used in the numerical approaches.

  5. HgCdTe Avalanche Photodiode Detectors for Airborne and Spaceborne Lidar at Infrared Wavelengths

    NASA Technical Reports Server (NTRS)

    Sun, Xiaoli; Abshire, James B.; Beck, Jeffrey D.; Mitra, Pradip; Reiff, Kirk; Yang, Guangning

    2017-01-01

    We report results from characterizing the HgCdTe avalanche photodiode (APD) sensor-chip assemblies (SCAs) developed for lidar at infrared wavelengths using the high density vertically integrated photodiodes (HDVIP) technique. These devices demonstrated high quantum efficiency (typically greater than 90% between 0.8 micrometers and the cut-off wavelength), APD gain greater than 600, near unity excess noise factor, 6-10 MHz electrical bandwidth and less than 0.5 fW/Hz^(1/2) noise equivalent power (NEP). The detectors provide linear analog output with a dynamic range of 2-3 orders of magnitude at a fixed APD gain without averaging, and over 5 orders of magnitude by adjusting the APD and preamplifier gain settings. They have been successfully used in airborne CO2 and CH4 integrated path differential absorption (IPDA) lidar as a precursor for space lidar applications.
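
    A short worked example of what the quoted NEP implies: assuming white noise, the minimum detectable optical power scales with the square root of the electrical bandwidth, so the 6-10 MHz bandwidth range quoted above translates to roughly femtowatt-level sensitivity:

      # Minimum detectable optical power from noise-equivalent power (NEP),
      # assuming white noise: P_min = NEP * sqrt(bandwidth).
      NEP = 0.5e-15            # W / Hz^(1/2), upper bound quoted for the SCAs
      for bw_MHz in (6.0, 10.0):
          P_min = NEP * (bw_MHz * 1e6) ** 0.5
          print(f"bandwidth {bw_MHz:4.1f} MHz -> P_min ~ {P_min:.2e} W")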

  6. Nonlinear analysis of a rotor-bearing system using describing functions

    NASA Astrophysics Data System (ADS)

    Maraini, Daniel; Nataraj, C.

    2018-04-01

    This paper presents a technique for modelling the nonlinear behavior of a rotor-bearing system with Hertzian contact, clearance, and rotating unbalance. The rotor-bearing system is separated into linear and nonlinear components, and the nonlinear bearing force is replaced with an equivalent describing function gain. The describing function captures the relationship between the amplitude of the fundamental input to the nonlinearity and the fundamental output. The frequency response is constructed for various values of the clearance parameter, and the results show the presence of a jump resonance in bearings with both clearance and preload. Nonlinear hardening type behavior is observed in the case with clearance and softening behavior is observed for the case with preload. Numerical integration is also carried out on the nonlinear equations of motion showing strong agreement with the approximate solution. This work could easily be extended to include additional nonlinearities that arise from defects, providing a powerful diagnostic tool.
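
    To make the describing-function step concrete, the sketch below evaluates the classical equivalent gain of a dead-zone spring, a textbook stand-in for a bearing clearance (force zero inside the clearance delta, k*(x - delta*sign(x)) outside). For a sinusoidal input of amplitude A > delta the standard result is N(A) = k*(1 - (2/pi)*(asin(d) + d*sqrt(1 - d^2))) with d = delta/A. This illustrates the method's flavor; the paper's bearing model also includes Hertzian contact and is not reproduced here:

      # Describing function (equivalent gain) of a dead-zone spring.
      import numpy as np

      def deadzone_df(A, k=1.0e7, delta=20e-6):
          d = np.clip(delta / A, 0.0, 1.0)      # N(A) = 0 for A <= delta
          return k * (1.0 - (2.0 / np.pi) * (np.arcsin(d) + d * np.sqrt(1.0 - d * d)))

      for A in (25e-6, 50e-6, 200e-6):          # response amplitudes in meters
          print(f"A = {A*1e6:5.0f} um -> N(A) = {deadzone_df(A):.3e} N/m")

    The gain rises from zero toward the full stiffness k as the amplitude grows, which is the hardening behavior the abstract describes for bearings with clearance.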

  7. Application of thermal model for pan evaporation to the hydrology of a defined medium, the sponge

    NASA Technical Reports Server (NTRS)

    Trenchard, M. H.; Artley, J. A. (Principal Investigator)

    1981-01-01

    A technique is presented which estimates pan evaporation from the commonly observed values of daily maximum and minimum air temperatures. These two variables are transformed to saturation vapor pressure equivalents which are used in a simple linear regression model. The model provides reasonably accurate estimates of pan evaporation rates over a large geographic area. The derived evaporation algorithm is combined with precipitation to obtain a simple moisture variable. A hypothetical medium with a capacity of 8 inches of water is initialized at 4 inches. The medium behaves like a sponge: it absorbs all incident precipitation, with runoff or drainage occurring only after it is saturated. Water is lost from this simple system through evaporation just as from a Class A pan, but at a rate proportional to its degree of saturation. The content of the sponge is a moisture index calculated from only the maximum and minimum temperatures and precipitation.
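
    A minimal sketch of the sponge bookkeeping described above: an 8-inch store initialized at 4 inches absorbs all precipitation (overflow runs off) and loses pan evaporation scaled by its degree of saturation. The daily inputs are hypothetical:

      # "Sponge" moisture index: absorb rain up to capacity, evaporate in
      # proportion to the current degree of saturation.
      CAPACITY = 8.0                 # inches
      content = 4.0                  # initial storage, inches

      daily = [                      # (precipitation, pan evaporation) in inches
          (0.0, 0.25), (1.2, 0.10), (0.0, 0.30), (0.4, 0.20), (0.0, 0.35),
      ]
      for rain, pan_evap in daily:
          content = min(CAPACITY, content + rain)       # excess rain runs off
          content -= pan_evap * (content / CAPACITY)    # saturation-scaled evaporation
          print(f"moisture index = {content:.2f} in")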

  8. [Not Available].

    PubMed

    Bernard, A M; Burgot, J L

    1981-12-01

    The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.

  9. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship design stage, the first step of the hull structural assessment is based on the longitudinal strength analysis, with head wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large liquefied petroleum gas (LPG) carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.

  10. Solvent-based and solvent-free characterization of low solubility and low molecular weight polyamides by mass spectrometry: a complementary approach.

    PubMed

    Barrère, Caroline; Hubert-Roux, Marie; Lange, Catherine M; Rejaibi, Majed; Kebir, Nasreddine; Désilles, Nicolas; Lecamp, Laurence; Burel, Fabrice; Loutelier-Bourhis, Corinne

    2012-06-15

    Polyamides (PA) are among the most widely used classes of polymers because of their attractive chemical and mechanical properties. In order to monitor original PA design, it is essential to develop analytical methods for the characterization of these compounds, which are mostly insoluble in usual solvents. A low molecular weight polyamide (PA11), synthesized with a chain limiter, has been used as a model compound and characterized by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). In the solvent-based approach, specific solvents for PA, i.e. trifluoroacetic acid (TFA) and hexafluoroisopropanol (HFIP), were tested. Solvent-based sample preparation methods, dried-droplet and thin layer, were optimized through the choice of matrix and salt. Solvent-based (thin layer) and solvent-free methods were then compared for this low solubility polymer. Ultra-high-performance liquid chromatography/electrospray ionization (UHPLC/ESI)-TOF-MS analyses were then used to confirm elemental compositions through accurate mass measurement. Sodium iodide (NaI) and 2,5-dihydroxybenzoic acid (2,5-DHB) are, respectively, the best cationizing agent and matrix. The dried-droplet sample preparation method led to inhomogeneous deposits, but the thin-layer method could overcome this problem. Moreover, the solvent-free approach was the easiest and safest sample preparation method, giving equivalent results to solvent-based methods. Linear as well as cyclic oligomers were observed. Although the PA molecular weights obtained by MALDI-TOF-MS were lower than those obtained by ¹H NMR and acid-base titration, this technique allowed us to determine the presence of cyclic and linear species, not differentiated by the other techniques. TFA was shown to induce modification of linear oligomers that permitted cyclic and linear oligomers to be clearly highlighted in spectra. Optimal sample preparation conditions were determined for the MALDI-TOF-MS analysis of PA11, a model of polyamide analogues. The advantages of the solvent-free and solvent-based approaches were shown. Molecular weight determination using MALDI was discussed. Copyright © 2012 John Wiley & Sons, Ltd.

  11. Linear accelerator stereotactic radiosurgery for trigeminal neuralgia.

    PubMed

    Varela-Lema, Leonor; Lopez-Garcia, Marisa; Maceira-Rozas, Maria; Munoz-Garzon, Victor

    2015-01-01

    Stereotactic radiosurgery is accepted as an alternative for patients with refractory trigeminal neuralgia, but existing evidence is fundamentally based on the Gamma Knife, which is a specific device for intracranial neurosurgery, available in few facilities. Over the last decade it has been shown that the use of linear accelerators can achieve similar diagnostic accuracy and equivalent dose distribution. To assess the effectiveness and safety of linear-accelerator stereotactic radiosurgery for the treatment of patients with refractory trigeminal neuralgia, we carried out a systematic search of the literature in the main electronic databases (PubMed, Embase, ISI Web of Knowledge, Cochrane, Biomed Central, IBECS, IME, CRD) and reviewed grey literature. All original studies on the subject published in Spanish, French, English, and Portuguese were eligible for inclusion. The selection and critical assessment were carried out by 2 independent reviewers based on pre-defined criteria. In view of the impossibility of carrying out a pooled analysis, data were analyzed in a qualitative way. Eleven case series were included. In these, satisfactory pain relief (BNI I-IIIb or reduction in pain ≥ 50%) was achieved in 75% to 95.7% of the patients treated. The mean time to relief from pain ranged from 8.5 days to 3.8 months. The percentage of patients who presented with recurrences after one year of follow-up ranged from 5% to 28.8%. Facial swelling or hypoesthesia, mostly of a mild-moderate grade, appeared in 7.5% - 51.9% of the patients. Complete anaesthesia dolorosa was registered in only one study (5.3%). Isolated cases of hearing loss (2.5%), brainstem edema (5.8%), and neurotrophic keratopathy (3.5%) were also registered. The results suggest that stereotactic radiosurgery with linear accelerators could constitute an effective and safe therapeutic alternative for drug-resistant trigeminal neuralgia. However, existing studies leave important doubts as to the optimal treatment dose and therapeutic target, as well as long-term recurrence, and do not help identify which subgroups of patients could most benefit from this technique; the literature remains scarce and lacks clarity regarding the clinical utilization of this technique.

  12. The Short Form 36 English and Chinese versions were equivalent in a multiethnic Asian population.

    PubMed

    Tan, Maudrene L S; Wee, Hwee-Lin; Lee, Jeannette; Ma, Stefan; Heng, Derrick; Tai, E-Shyong; Thumboo, Julian

    2013-07-01

    The primary aim of this article was to evaluate measurement equivalence of the English and Chinese versions of the Short Form 36 version 2 (SF-36v2) and Short Form 6D (SF-6D). In this cross-sectional study, health-related quality of life (HRQoL) was measured from 4,973 ethnic Chinese subjects using the SF-36v2 questionnaire. Measurement equivalence of domain and utility scores for the English- and Chinese-language SF-36v2 and SF-6D was assessed by examining the score differences between the two languages using linear regression models, with and without adjustment for known determinants of HRQoL. Equivalence was achieved if the 90% confidence interval (CI) of the differences in scores, due to language, fell within a predefined equivalence margin. Compared with English-speaking Chinese, Chinese-speaking Chinese were significantly older (47.6 vs. 55.5 years). All SF-36v2 domains were equivalent after adjusting for known determinants of HRQoL. For the SF-6D utility score and items, the 90% CI either fully or partially overlapped the predefined equivalence margin. The English- and Chinese-language versions of the SF-36v2 and SF-6D demonstrated equivalence. Copyright © 2013 Elsevier Inc. All rights reserved.
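
    The equivalence criterion above reduces to a simple interval check: the two versions are declared equivalent when the 90% CI of the language effect lies entirely inside the equivalence margin. A minimal sketch, with an illustrative estimate and margin (the study's actual margins are not quoted in the abstract):

      # CI-within-margin equivalence check (normal approximation, 90% CI).
      def equivalent(diff, se, margin, z90=1.645):
          lo, hi = diff - z90 * se, diff + z90 * se   # 90% CI of adjusted difference
          return -margin <= lo and hi <= margin

      # e.g. an adjusted English-Chinese difference of 1.2 points (SE 0.8)
      # against a +/-5-point margin on a 0-100 SF-36v2 domain scale:
      print(equivalent(1.2, 0.8, 5.0))     # True -> equivalence demonstrated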

  13. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields.

    PubMed

    Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-01-01

    Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physical-based method to calculate the equivalent square field size was used as the basis of this study. The table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables by the BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares for a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference of the PDD measurements for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by the American Association of Medical Dosimetrists. All rights reserved.
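
    The abstract does not spell out the paper's exact physical method, but the classic physical rule for this problem is the area-to-perimeter ("4A/P") relation, shown here purely as an illustration of the concept. For a rectangular a x b field it gives a square of side s = 4*(a*b)/(2*(a+b)) = 2ab/(a+b):

      # Equivalent square of a rectangular photon field via the 4A/P rule.
      def equivalent_square(a_cm: float, b_cm: float) -> float:
          return 2.0 * a_cm * b_cm / (a_cm + b_cm)

      print(equivalent_square(10, 10))   # 10.0 cm: a square maps to itself
      print(equivalent_square(5, 20))    # 8.0 cm side of the equivalent square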

  14. A Note on Equivalence Among Various Scalar Field Models of Dark Energies

    NASA Astrophysics Data System (ADS)

    Mandal, Jyotirmay Das; Debnath, Ujjal

    2017-08-01

    In this work, we have tried to find similarities between the various available models of scalar field dark energies (e.g., quintessence, k-essence, tachyon, phantom, quintom, dilatonic dark energy, etc.). We have defined an equivalence relation from elementary set theory between scalar field models of dark energies and used fundamental ideas from linear algebra to set up our model. Consequently, we have obtained mutually disjoint subsets of scalar field dark energies with similar properties and discussed our observation.

  15. Virasoro constraints and polynomial recursion for the linear Hodge integrals

    NASA Astrophysics Data System (ADS)

    Guo, Shuai; Wang, Gehao

    2017-04-01

    The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.

  16. Fabricating fiber Bragg gratings with two phase masks based on reconstruction-equivalent-chirp technique.

    PubMed

    Gao, Liang; Chen, Xiangfei; Xiong, Jintian; Liu, Shengchun; Pu, Tao

    2012-01-30

    Based on the reconstruction-equivalent-chirp (REC) technique, a novel solution for fabricating low-cost long fiber Bragg gratings (FBGs) with desired properties is proposed and initially studied. A proof-of-concept experiment is successfully demonstrated with two conventional uniform phase masks and a submicron-precision translation stage. It is shown that the original phase shift (OPS) caused by the phase mismatch of the two phase masks can be compensated by the equivalent phase shift (EPS) at the ±1st channels of the sampled FBGs, separately. Furthermore, as an example, a π phase-shifted FBG of about 90 mm is fabricated by using these two 50-mm-long uniform phase masks based on the presented method.

  17. Electrochemical degradation, kinetics & performance studies of solid oxide fuel cells

    NASA Astrophysics Data System (ADS)

    Das, Debanjan

    Linear and non-linear electrochemical characterization techniques and equivalent circuit modelling were carried out on miniature and sub-commercial Solid Oxide Fuel Cell (SOFC) stacks as an in-situ diagnostic approach to evaluate and analyze their performance under simulated alternative fuel conditions. The main focus of the study was to track the change in cell behavior and response live, as the cell was generating power. Electrochemical Impedance Spectroscopy (EIS) was the most important linear AC technique used for the study. The distinct effects of inorganic components usually present in hydrocarbon fuel reformates on SOFC behavior have been determined, allowing identification of possible "fingerprint" impedance behavior corresponding to specific fuel conditions and reaction mechanisms. Critical electrochemical processes and degradation mechanisms which might affect cell performance were identified and quantified. Sulfur and siloxane cause the most prominent degradation, and the associated electrochemical cell parameters, such as Gerischer and Warburg elements, are applied respectively for better understanding of the degradation processes. Electrochemical Frequency Modulation (EFM) was applied for kinetic studies in SOFCs for the very first time, for estimating the exchange current density and transfer coefficients. EFM is a non-linear in-situ electrochemical technique conceptually different from EIS; it is used extensively in corrosion work but has rarely been used on fuel cells until now. EFM is based on exploring information obtained from non-linear higher harmonic contributions from potential perturbations of electrochemical systems, otherwise not obtained by EIS. The baseline fuel used was 3% humidified hydrogen with a 5-cell sub-commercial planar SOFC stack. Traditional methods such as EIS and Tafel analysis were carried out at similar operating conditions to verify and correlate with the EFM data and ensure the validity of the obtained information. The obtained values closely range from around 11 to 16 mA cm⁻², with reasonable repeatability and excellent accuracy. The potential advantages of EFM compared to traditional methods were realized, and this first demonstration of the technique on an SOFC system can act as a starting point for future research efforts in this area. Finally, an approach based on in-situ state-of-health tests by EIS was formulated and investigated to understand the most efficient fuel conditions for suitable long term operation of a solid oxide fuel cell stack under power generation conditions. The procedure helped to reflect the individual effects of the three most important fuel characteristics, CO/H2 volumetric ratio, S/C ratio and fuel utilization, under the presence of a simulated alternative fuel at 0.4 A cm⁻². Variation tests helped to identify corresponding electrochemical/chemical processes and narrow down the most optimum operating regimes considering practical behavior of reformer-SOFC system arrangements. At the end, 8 different combinations of the optimized parameters were tested long term with the stack, and the most efficient blend was determined.
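
    As an illustration of the equivalent-circuit modelling step, the sketch below evaluates the impedance of a simple Randles-type circuit (ohmic resistance in series with a parallel charge-transfer resistance and double-layer capacitance). Real SOFC fits add elements such as the Warburg and Gerischer impedances mentioned above; the parameter values here are placeholders:

      # Impedance of a Randles-type equivalent circuit versus frequency.
      import numpy as np

      def randles_z(f_hz, R_ohm=0.1, R_ct=0.4, C_dl=1e-2):
          w = 2.0 * np.pi * f_hz
          return R_ohm + R_ct / (1.0 + 1j * w * R_ct * C_dl)   # series + RC parallel

      freqs = np.logspace(-1, 4, 6)            # 0.1 Hz to 10 kHz
      for f, Z in zip(freqs, randles_z(freqs)):
          print(f"{f:9.2f} Hz  Z = {Z.real:.4f} {Z.imag:+.4f}j ohm")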

  18. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru

    2015-10-28

    An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  19. An Experimental Study in Determining Energy Expenditure from Treadmill Walking using Hip-Worn Inertial Sensors

    PubMed Central

    Vathsangam, Harshvardhan; Emken, Adar; Schroeder, E. Todd; Spruijt-Metz, Donna; Sukhatme, Gaurav S.

    2011-01-01

    This paper describes an experimental study in estimating energy expenditure from treadmill walking using a single hip-mounted triaxial inertial sensor comprising a triaxial accelerometer and a triaxial gyroscope. Typical physical activity characterization using accelerometer generated counts suffers from two drawbacks - imprecision (due to proprietary counts) and incompleteness (due to incomplete movement description). We address these problems in the context of steady state walking by directly estimating energy expenditure with data from a hip-mounted inertial sensor. We represent the cyclic nature of walking with a Fourier transform of sensor streams and show how one can map this representation to energy expenditure (as measured by VO2 consumption, mL/min) using three regression techniques - Least Squares Regression (LSR), Bayesian Linear Regression (BLR) and Gaussian Process Regression (GPR). We perform a comparative analysis of the accuracy of sensor streams in predicting energy expenditure (measured by RMS prediction accuracy). Triaxial information is more accurate than uniaxial information. LSR based approaches are prone to outlier sensitivity and overfitting. Gyroscopic information showed equivalent if not better prediction accuracy as compared to accelerometers. Combining accelerometer and gyroscopic information provided better accuracy than using either sensor alone. We also analyze the best algorithmic approach among linear and nonlinear methods as measured by RMS prediction accuracy and run time. Nonlinear regression methods showed better prediction accuracy but required an order of magnitude more run time. This paper emphasizes the role of probabilistic techniques in conjunction with joint modeling of triaxial accelerations and rotational rates to improve energy expenditure prediction for steady-state treadmill walking. PMID:21690001
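
    A minimal sketch of the pipeline described above: represent a window of inertial data by low-order Fourier magnitudes and regress energy expenditure on those features. The signals and VO2 targets below are synthetic placeholders; the study used hip-worn accelerometer and gyroscope streams and measured VO2:

      # Fourier-feature regression of energy expenditure from inertial windows.
      import numpy as np
      from sklearn.linear_model import BayesianRidge

      rng = np.random.default_rng(2)
      fs, n_win = 50, 256                            # 50 Hz sampling, ~5 s windows

      def fourier_features(window, n_harmonics=5):
          spec = np.abs(np.fft.rfft(window, axis=0))
          return spec[1:1 + n_harmonics].ravel()     # drop DC, keep first harmonics

      X, y = [], []
      for _ in range(120):
          speed = rng.uniform(0.8, 2.0)              # walking-speed proxy
          t = np.arange(n_win) / fs
          accel = np.sin(2 * np.pi * speed * t)[:, None] * speed \
                  + 0.05 * rng.standard_normal((n_win, 3))
          X.append(fourier_features(accel))
          y.append(300.0 + 400.0 * speed + rng.normal(0, 20))   # VO2 in mL/min

      model = BayesianRidge().fit(np.array(X), np.array(y))     # BLR-style regressor
      print("training R^2:", round(model.score(np.array(X), np.array(y)), 3))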

  20. Methods for removal of unwanted signals from gravity time-series: Comparison using linear techniques complemented with analysis of system dynamics

    NASA Astrophysics Data System (ADS)

    Valencio, Arthur; Grebogi, Celso; Baptista, Murilo S.

    2017-10-01

    The presence of undesirable dominating signals in geophysical experimental data is a challenge in many subfields. One remarkable example is surface gravimetry, where frequencies from Earth tides correspond to time-series fluctuations up to a thousand times larger than the phenomena of major interest, such as hydrological gravity effects or co-seismic gravity changes. This work discusses general methods for the removal of unwanted dominating signals by applying them to 8 long-period gravity time-series of the International Geodynamics and Earth Tides Service, equivalent to the acquisition from 8 instruments in 5 locations representative of the network. We compare three different conceptual approaches for tide removal: frequency filtering, physical modelling, and data-based modelling. Each approach reveals a different limitation to be considered depending on the intended application. Vestiges of tides remain in the residues for the modelling procedures, whereas the signal was distorted in different ways by the filtering and data-based procedures. The linear techniques employed were power spectral density, spectrogram, cross-correlation, and classical harmonics decomposition, while the system dynamics was analysed by state-space reconstruction and estimation of the largest Lyapunov exponent. Although the tides could not be completely eliminated, they were sufficiently reduced to allow observation of geophysical events of interest above the 10 nm s⁻² level, exemplified by a hydrology-related event of 60 nm s⁻². The implementations adopted for each conceptual approach are general, so that their principles could be applied to other kinds of data affected by undesired signals composed mainly of periodic or quasi-periodic components.
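
    A minimal sketch of the frequency-filtering approach (one of the three compared above): suppress the dominant diurnal and semidiurnal tidal bands in a 1-minute gravity series with band-stop IIR filters. The series is synthetic and the band edges are indicative of the K1 (~1 cycle/day) and M2 (~1.93 cycles/day) bands:

      # Band-stop filtering of tidal frequencies in a synthetic gravity series.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      fs_cpd = 1440.0                          # samples per day for 1-minute data
      t = np.arange(0, 30, 1.0 / fs_cpd)       # 30 days, time in days
      tides = 800 * np.sin(2 * np.pi * 1.00 * t) \
            + 600 * np.sin(2 * np.pi * 1.93 * t)          # nm/s^2, tidal lines
      event = 60 * np.exp(-((t - 15) / 0.5) ** 2)         # hydrology-like 60 nm/s^2 signal
      x = tides + event

      for lo, hi in [(0.8, 1.2), (1.7, 2.2)]:             # stop bands in cycles/day
          sos = butter(4, [lo, hi], btype="bandstop", fs=fs_cpd, output="sos")
          x = sosfiltfilt(sos, x)                         # zero-phase filtering
      print(f"peak of filtered series near day 15: {x.max():.0f} nm/s^2")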

  1. Evaluating ecological equivalence of created marshes: comparing structural indicators with stable isotope indicators of blue crab trophic support

    USGS Publications Warehouse

    Llewellyn, Chris; LaPeyre, Megan K.

    2010-01-01

    This study sought to examine ecological equivalence of created marshes of different ages using traditional structural measures of equivalence, and tested a relatively novel approach using stable isotopes as a measure of functional equivalence. We compared soil properties, vegetation, nekton communities, and δ¹³C and δ¹⁵N isotope values of blue crab muscle and hepatopancreas tissue and primary producers at created (5-24 years old) and paired reference marshes in SW Louisiana. Paired contrasts indicated that created and reference marshes supported equivalent plant and nekton communities, but differed in soil characteristics. Stable isotope indicators examining blue crab food web support found that the older marshes (8+ years) were characterized by trophic diversity and breadth comparable to those of their reference marshes. Interpretation of results for the youngest site was confounded by the fact that the paired reference, which represented the desired end goal of restoration, contained a greater diversity of basal resources. Stable isotope techniques may give coastal managers an additional tool to assess functional equivalency of created marshes, as measured by trophic support, but may be limited to comparisons of marshes with similar vegetative communities and basal resources, or require the development of robust standardization techniques.

  2. Depth dependence of absorbed dose, dose equivalent and linear energy transfer spectra of galactic and trapped particles in polyethylene and comparison with calculations of models

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Cucinotta, F. A.; Wilson, J. W. (Principal Investigator)

    1998-01-01

    A matched set of five tissue-equivalent proportional counters (TEPCs), embedded at the centers of 0 (bare), 3, 5, 8 and 12-inch-diameter polyethylene spheres, were flown on the Shuttle flight STS-81 (inclination 51.65 degrees, altitude approximately 400 km). The data obtained were separated into contributions from trapped protons and galactic cosmic radiation (GCR). From the measured linear energy transfer (LET) spectra, the absorbed dose and dose-equivalent rates were calculated. The results were compared to calculations made with the radiation transport model HZETRN/NUCFRG2, using the GCR free-space spectra, orbit-averaged geomagnetic transmission function and Shuttle shielding distributions. The comparison shows that the model fits the dose rates to a root mean square (rms) error of 5%, and dose-equivalent rates to an rms error of 10%. Fairly good agreement between the LET spectra was found; however, differences are seen at both low and high LET. These differences can be understood as due to the combined effects of chord-length variation and detector response function. These results rule out a number of radiation transport/nuclear fragmentation models. Similar comparisons of trapped-proton dose rates were made between calculations made with the proton transport model BRYNTRN using the AP-8 MIN trapped-proton model and Shuttle shielding distributions. The predictions of absorbed dose and dose-equivalent rates are fairly good. However, the prediction of the LET spectra below approximately 30 keV/μm shows the need to improve the AP-8 model. These results have strong implications for shielding requirements for an interplanetary manned mission.
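
    Converting a measured LET spectrum into dose equivalent, as done above, amounts to weighting the absorbed dose in each LET bin by a quality factor, H = Σ Q(L)·D(L). The sketch below uses the standard ICRP 60 quality factor (Q = 1 for L < 10, 0.32L - 2.2 for 10 ≤ L ≤ 100, 300/√L for L > 100, with L in keV/μm) and a toy three-bin spectrum:

      # Dose equivalent from an LET spectrum with the ICRP 60 quality factor.
      import numpy as np

      def q_icrp60(L):
          return np.where(L < 10, 1.0,
                 np.where(L <= 100, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

      L_bins = np.array([2.0, 20.0, 150.0])     # keV/um
      dose   = np.array([80.0, 15.0, 5.0])      # uGy absorbed per bin (hypothetical)
      H = np.sum(q_icrp60(L_bins) * dose)       # uSv
      print(f"dose = {dose.sum():.0f} uGy, dose equivalent = {H:.0f} uSv")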

  3. Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation

    NASA Astrophysics Data System (ADS)

    Durlofsky, L. J.; He, J.; Jin, L. Z.

    2014-12-01

    A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
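
    A minimal sketch of the POD step described above: collect solution snapshots from training runs, extract a low-dimensional basis by SVD, and project states into that subspace. TPWL would then linearize the dynamics around saved states inside this reduced space. The snapshot matrix here is a synthetic low-rank placeholder:

      # POD basis extraction and projection via SVD of a snapshot matrix.
      import numpy as np

      rng = np.random.default_rng(3)
      # 200 saved high-dimensional states with an underlying rank-15 structure
      snapshots = rng.standard_normal((10_000, 15)) @ rng.standard_normal((15, 200)) \
                  + 0.01 * rng.standard_normal((10_000, 200))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999) + 1)    # keep 99.9% of snapshot energy
      Phi = U[:, :r]                                 # POD basis

      x = snapshots[:, 0]
      z = Phi.T @ x                                  # reduced coordinates (r << 10000)
      x_rec = Phi @ z                                # reconstruction from the subspace
      print("basis size:", r, " relative error:",
            np.linalg.norm(x - x_rec) / np.linalg.norm(x))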

  4. Time-Reversal Symmetry Breaking and Consequent Physical Responses Induced by All-In-All-Out Type Magnetic Order on the Pyrochlore Lattice

    NASA Astrophysics Data System (ADS)

    Arima, Taka-Hisa

    2014-03-01

    Pyrochlore-type 5d transition-metal oxide compounds Cd2Os2O7 and R2Ir2O7 (R = rare earth) undergo a metal-insulator transition accompanied by a magnetic transition. Recently, the magnetic structures of Cd2Os2O7 and Eu2Ir2O7 were investigated by means of resonant x-ray magnetic scattering. The x-ray data indicated the all-in/all-out type magnetic order. The all-in/all-out order breaks the time-reversal symmetry, while the spontaneous magnetization is essentially absent. The magnetic order can be viewed as ferroic magnetic octupolar order. The magnetic order is expected to provide several unique physical properties like quadratic magnetization, linear magneto-capacitance, linear magneto-resistance, linear magneto-mechanical coupling and so on. The symmetry breaking results in two non-equivalent domains, "all-in/all-out" and "all-out/all-in". Interestingly, some theoretical works predict that a peculiar metallic state would appear on the domain wall. The observation and control of the domain distribution are essential for studying various exotic physical responses. We have developed an x-ray technique for domain imaging and started studying the effects of external stimuli on the domain distribution. This work was performed in collaboration with S. Tardif, S. Takeshita, H. Ohsumi, D. Uematsu, H. Sagayama, J. J. Ishikawa, S. Nakatsuji, J. Yamaura, and Z. Hiroi.

  5. Predictors of outcomes after arthroscopic transosseous equivalent rotator cuff repair in 155 cases: a propensity score weighted analysis of knotted and knotless self-reinforcing repair techniques at a minimum of 2 years.

    PubMed

    Millett, Peter J; Espinoza, Chris; Horan, Marilee P; Ho, Charles P; Warth, Ryan J; Dornan, Grant J; Christoph Katthagen, J

    2017-10-01

    To evaluate the outcomes of two commonly used transosseous-equivalent (TOE) arthroscopic rotator cuff repair (RCR) techniques for full-thickness supraspinatus tendon tears (FTST) using a robust multi-predictor model, 155 shoulders in 151 patients (109 men, 42 women; mean age 59 ± 10 years) who underwent arthroscopic RCR of FTST, using either a knotted suture bridging (KSB) or a knotless tape bridging (KTB) TOE technique, were included. ASES and SF-12 PCS scores assessed at a minimum of 2 years postoperatively were modeled using propensity score weighting in a multiple linear regression model. Patients able to return to the study center underwent a follow-up MRI for evaluation of rotator cuff integrity. The outcome data were available for 137 shoulders (88%; n = 35/41 KSB; n = 102/114 KTB). Seven patients (5.1%) who underwent revision rotator cuff surgery were considered failures. The median postoperative ASES score of the remaining 130 shoulders was 98 at a mean follow-up of 2.9 years (range 2.0-5.4 years). A higher preoperative baseline outcome score and a longer follow-up had a positive effect, whereas a previous RCR and workers' compensation claims (WCC) had a negative effect on final ASES or SF-12 PCS scores. The repair technique, age, gender and the number of anchors used for the RCR had no significant influence. Fifty-two patients returned for a follow-up MRI at a mean of 4.4 years postoperatively. Patients with a KSB RCR were significantly more likely to have an MRI-diagnosed full-thickness rotator cuff re-tear (p < 0.05). Excellent outcomes can be achieved at a minimum of 2 years following arthroscopic KSB or KTB TOE RCR of FTST. The preoperative baseline outcome score, a prior RCR, WCC and the length of follow-up significantly influenced the outcome scores. The repair technique did not affect the final functional outcomes, but patients with KTB TOE RCR were less likely to have a full-thickness rotator cuff re-tear. Level III, Retrospective Comparative Study.

  6. Quantile equivalence to evaluate compliance with habitat management objectives

    USGS Publications Warehouse

    Cade, Brian S.; Johnson, Pamela R.

    2011-01-01

    Equivalence estimated with linear quantile regression was used to evaluate compliance with habitat management objectives at Arapaho National Wildlife Refuge based on monitoring data collected in upland (5,781 ha; n = 511 transects) and riparian and meadow (2,856 ha; n = 389 transects) habitats from 2005 to 2008. Quantiles were used because the management objectives specified proportions of the habitat area that needed to comply with vegetation criteria. The linear model was used to obtain estimates that were averaged across 4 y. The equivalence testing framework allowed us to interpret confidence intervals for estimated proportions with respect to intervals of vegetative criteria (equivalence regions) in either a liberal, benefit-of-doubt or conservative, fail-safe approach associated with minimizing alternative risks. Simple Boolean conditional arguments were used to combine the quantile equivalence results for individual vegetation components into a joint statement for the multivariable management objectives. For example, management objective 2A required at least 809 ha of upland habitat with a shrub composition ≥0.70 sagebrush (Artemisia spp.), 20–30% canopy cover of sagebrush ≥25 cm in height, ≥20% canopy cover of grasses, and ≥10% canopy cover of forbs on average over 4 y. Shrub composition and canopy cover of grass each were readily met on >3,000 ha under either conservative or liberal interpretations of sampling variability. However, there were only 809–1,214 ha (conservative to liberal) with ≥10% forb canopy cover and 405–1,098 ha with 20–30% canopy cover of sagebrush ≥25 cm in height. Only 91–180 ha of uplands simultaneously met criteria for all four components, primarily because canopy cover of sagebrush and forbs was inversely related when considered at the spatial scale (30 m) of a sample transect. We demonstrate how the quantile equivalence analyses also can help refine the numerical specification of habitat objectives and explore specification of spatial scales for objectives with respect to sampling scales used to evaluate those objectives.
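
    A minimal sketch of the approach: fit a linear quantile regression to a vegetation attribute and check whether the 90% CI of the estimate falls inside the criterion interval (the equivalence region). The data are synthetic, the median is used in place of the study's area-based quantiles, and the 20-30% criterion is borrowed from objective 2A above:

      # Quantile-regression compliance check against an equivalence region.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      year = rng.integers(0, 4, 500).astype(float)       # 4 monitoring years
      cover = 22 + 1.0 * year + rng.normal(0, 6, 500)    # % sagebrush canopy cover

      exog = sm.add_constant(year)
      res = sm.QuantReg(cover, exog).fit(q=0.5)          # median regression
      lo, hi = res.conf_int(alpha=0.10)[0]               # 90% CI of the intercept
      print(f"median cover in year 0: {res.params[0]:.1f} (90% CI {lo:.1f}-{hi:.1f})")
      print("within 20-30% criterion:", 20.0 <= lo and hi <= 30.0)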

  7. Out-of-field doses and neutron dose equivalents for electron beams from modern Varian and Elekta linear accelerators.

    PubMed

    Cardenas, Carlos E; Nitsch, Paige L; Kudchadker, Rajat J; Howell, Rebecca M; Kry, Stephen F

    2016-07-08

    Out-of-field doses from radiotherapy can cause harmful side effects or eventually lead to secondary cancers. Scattered doses outside the applicator field, neutron source strength values, and neutron dose equivalents have not been broadly investigated for high-energy electron beams. To better understand the extent of these exposures, we measured out-of-field dose characteristics of electron applicators for high-energy electron beams on two Varian 21iXs, a Varian TrueBeam, and an Elekta Versa HD operating at various energy levels. Out-of-field dose profiles and percent depth-dose curves were measured in a Wellhofer water phantom using a Farmer ion chamber. Neutron dose was assessed using a combination of moderator buckets and gold activation foils placed on the treatment couch at various locations in the patient plane on both the Varian 21iX and Elekta Versa HD linear accelerators. Our findings showed that out-of-field electron doses were highest for the highest electron energies. These doses typically decreased with increasing distance from the field edge but showed substantial increases over some distance ranges. The Elekta linear accelerator had higher electron out-of-field doses than the Varian units examined, and the Elekta dose profiles exhibited a second dose peak about 20 to 30 cm from central-axis, which was found to be higher than typical out-of-field doses from photon beams. Electron doses decreased sharply with depth before becoming nearly constant; the dose was found to decrease to a depth of approximately E(MeV)/4 in cm. With respect to neutron dosimetry, Q values and neutron dose equivalents increased with electron beam energy. Neutron contamination from electron beams was found to be much lower than that from photon beams. Even though the neutron dose equivalent for electron beams represented a small portion of neutron doses observed under photon beams, neutron doses from electron beams may need to be considered for special cases.

  8. Transosseous-equivalent rotator cuff repair: a systematic review on the biomechanical importance of tying the medial row.

    PubMed

    Mall, Nathan A; Lee, Andrew S; Chahal, Jaskarndip; Van Thiel, Geoffrey S; Romeo, Anthony A; Verma, Nikhil N; Cole, Brian J

    2013-02-01

    Double-row and transosseous-equivalent repair techniques have shown greater strength and improved healing than single-row techniques. The purpose of this study was to determine whether tying of the medial-row sutures provides added stability during biomechanical testing of a transosseous-equivalent rotator cuff repair. We performed a systematic review of studies directly comparing biomechanical differences. Five studies met the inclusion and exclusion criteria. Of the 5 studies, 4 showed improved biomechanical properties with tying the medial-row anchors before bringing the sutures laterally to the lateral-row anchors, whereas the remaining study showed no difference in contact pressure, mean failure load, or gap formation with a standard suture bridge with knots tied at the medial row compared with knotless repairs. The results of this systematic review and quantitative synthesis indicate that the biomechanical factors ultimate load, stiffness, gap formation, and contact area are significantly improved when medial knots are tied as part of a transosseous-equivalent suture bridge construct compared with knotless constructs. Further studies comparing the clinical healing rates and functional outcomes between medial knotted and knotless repair techniques are needed. This review indicates that biomechanical factors are improved when the medial row of a transosseous-equivalent rotator cuff is tied compared with a knotless repair. However, this has not been definitively proven to translate to improved healing rates clinically. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  9. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
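
    The "equivalent real linear systems" approach mentioned in this record is easy to make concrete. A minimal Python/NumPy sketch on a dense toy system (the paper's point is that iterations exploiting A = A^T directly are usually preferable to this doubling):

```python
import numpy as np

def solve_via_equivalent_real(A, b):
    """Solve the complex system A x = b by forming the standard
    2N x 2N real block system for the real and imaginary parts of x.
    Illustration only; Krylov methods applied to the doubled real
    system typically converge no faster than on the original one."""
    Ar, Ai = A.real, A.imag
    # [[Ar, -Ai], [Ai, Ar]] @ [x_real, x_imag] = [b_real, b_imag]
    M = np.block([[Ar, -Ai], [Ai, Ar]])
    rhs = np.concatenate([b.real, b.imag])
    sol = np.linalg.solve(M, rhs)
    n = A.shape[0]
    return sol[:n] + 1j * sol[n:]

# Small complex symmetric example (A = A^T but A is not Hermitian)
A = np.array([[2 + 1j, 1 - 2j],
              [1 - 2j, 3 + 0.5j]])
b = np.array([1.0 + 0j, 2.0 - 1j])
x = solve_via_equivalent_real(A, b)
assert np.allclose(A @ x, b)
```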

  10. There's a Green Glob in Your Classroom.

    ERIC Educational Resources Information Center

    Dugdale, Sharon

    1983-01-01

    Discusses computer games (called intrinsic models) focusing on mathematics rather than on unrelated motivations (flashing lights or sounds). Games include "Green Globs" (equations/linear functions), "Darts"/"Torpedo" (fractions), "Escape" (graphing), and "Make-a-Monster" (equivalent fractions and…

  11. Testing Einstein's equivalence principle with polarized gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Zou, Yuan-Chuan; Zhang, Yue-Yang; Liao, Bin; Lei, Wei-Hua

    2017-07-01

    Einstein's equivalence principle can be tested by using parametrized post-Newtonian parameters, of which the parameter γ has been constrained by comparing the arrival times of photons with different energies. It has been constrained by a variety of astronomical transient events, such as gamma-ray bursts (GRBs), fast radio bursts, and pulses of pulsars, with the most stringent constraint being Δγ ≲ 10⁻¹⁵. In this Letter, we consider the arrival times of light with different circular polarizations. A linearly polarized wave is the combination of two circularly polarized components; if the arrival time difference between the two components is too large, their combination may lose its linear polarization. We constrain the value to Δγ_p < 1.6 × 10⁻²⁷ from the measured polarization of GRB 110721A, which is the most stringent constraint ever achieved.

  12. Fair Package Assignment

    NASA Astrophysics Data System (ADS)

    Lahaie, Sébastien; Parkes, David C.

    We consider the problem of fair allocation in the package assignment model, where a set of indivisible items, held by a single seller, must be efficiently allocated to agents with quasi-linear utilities. A fair assignment is one that is efficient and envy-free. We consider a model where bidders have superadditive valuations, meaning that items are pure complements. Our central result is that core outcomes are fair and even coalition-fair over this domain, while fair distributions may not even exist for general valuations. Of relevance to auction design, we also establish that the core is equivalent to the set of anonymous-price competitive equilibria, and that superadditive valuations are a maximal domain that guarantees the existence of anonymous-price competitive equilibrium. Our results are analogs of core equivalence results for linear prices in the standard assignment model, and for nonlinear, non-anonymous prices in the package assignment model with general valuations.

  13. Analysis and modeling of a family of two-transistor parallel inverters

    NASA Technical Reports Server (NTRS)

    Lee, F. C. Y.; Wilson, T. G.

    1973-01-01

    A family of five static dc-to-square-wave inverters, each employing a square-loop magnetic core in conjunction with two switching transistors, is analyzed using piecewise-linear models for the nonlinear characteristics of the transistors, diodes, and saturable-core devices. Four of the inverters are analyzed in detail for the first time. These analyses show that, by proper choice of a frame of reference, each of the five quite differently appearing inverter circuits can be described by a common equivalent circuit. This equivalent circuit consists of a five-segment nonlinear resistor, a nonlinear saturable reactor, and a linear capacitor. Thus, by proper interpretation and identification of the parameters in the different circuits, the results of a detailed solution for one of the inverter circuits provide similar information and insight into the local and global behavior of each inverter in the family.

  14. A linear polarization converter with near unity efficiency in microwave regime

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Wang, Shen-Yun; Geyi, Wen

    2017-04-01

    In this paper, we present a linear polarization converter operating in the reflective mode with near unity conversion efficiency. The converter is designed in an array form on the basis of a pair of orthogonally arranged three-dimensional split-loop resonators sharing a common terminal coaxial port and a continuous metallic ground slab. It converts a linearly polarized incident electromagnetic wave at resonance to its orthogonal counterpart upon reflection. The conversion mechanism is explained by an equivalent circuit model, and the conversion efficiency can be tuned by changing the impedance of the terminal port. Such a scheme of linear polarization converter has potential applications in microwave communications, remote sensing, and imaging.

  15. A canonical form of the equation of motion of linear dynamical systems

    NASA Astrophysics Data System (ADS)

    Kawano, Daniel T.; Salsa, Rubens Goncalves; Ma, Fai; Morzfeld, Matthias

    2018-03-01

    The equation of motion of a discrete linear system has the form of a second-order ordinary differential equation with three real and square coefficient matrices. It is shown that, for almost all linear systems, such an equation can always be converted by an invertible transformation into a canonical form specified by two diagonal coefficient matrices associated with the generalized acceleration and displacement. This canonical form of the equation of motion is unique up to an equivalence class for non-defective systems. As an important by-product, a damped linear system that possesses three symmetric and positive definite coefficients can always be recast as an undamped and decoupled system.
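
    The decoupling result is easiest to see in the classical special case of symmetric positive definite coefficients. A minimal Python/SciPy sketch of the undamped version, with illustrative matrices: the generalized eigenproblem simultaneously diagonalizes the mass and stiffness matrices, which is exactly a canonical form with diagonal acceleration and displacement coefficients:

```python
import numpy as np
from scipy.linalg import eigh

# Undamped special case of the decoupling discussed above:
# M q'' + K q = 0 with symmetric positive definite M and K.
M = np.array([[2.0, 0.5], [0.5, 1.0]])
K = np.array([[6.0, -2.0], [-2.0, 4.0]])

# Generalized eigenproblem K phi = lam * M phi; eigh returns modes
# normalized so that Phi^T M Phi = I and Phi^T K Phi = diag(lam).
lam, Phi = eigh(K, M)

assert np.allclose(Phi.T @ M @ Phi, np.eye(2), atol=1e-12)
assert np.allclose(Phi.T @ K @ Phi, np.diag(lam), atol=1e-12)
# In modal coordinates q = Phi @ eta the equations decouple:
# eta_i'' + lam_i * eta_i = 0.
```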

  16. A comparison of linear and nonlinear statistical techniques in performance attribution.

    PubMed

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  17. The generation of gravitational waves. 2: The post-linear formalism revisited

    NASA Technical Reports Server (NTRS)

    Crowley, R. J.; Thorne, K. S.

    1975-01-01

    Two different versions of the Green's function for the scalar wave equation in weakly curved spacetime (one due to DeWitt and DeWitt, the other to Thorne and Kovacs) are compared and contrasted; and their mathematical equivalence is demonstrated. The DeWitt-DeWitt Green's function is used to construct several alternative versions of the Thorne-Kovacs post-linear formalism for gravitational-wave generation. Finally it is shown that, in calculations of gravitational bremsstrahlung radiation, some of our versions of the post-linear formalism allow one to treat the interacting bodies as point masses, while others do not.

  18. Study on static and dynamic characteristics of moving magnet linear compressors

    NASA Astrophysics Data System (ADS)

    Chen, N.; Tang, Y. J.; Wu, Y. N.; Chen, X.; Xu, L.

    2007-09-01

    With the development of high-strength NdFeB magnetic material, moving magnet linear compressors have been gradually introduced in the fields of refrigeration and cryogenic engineering, especially in Stirling and pulse tube cryocoolers. This paper presents simulation and experimental investigations on the static and dynamic characteristics of a moving magnet linear motor and a moving magnet linear compressor. Both equivalent magnetic circuits and finite element approaches have been used to model the moving magnet linear motor. Subsequently, the force and equilibrium characteristics of the linear motor have been predicted and verified by detailed static experimental analyses. In combination with a harmonic analysis, experimental investigations were conducted on a prototype of a moving magnet linear compressor. A voltage-stroke relationship, the effect of charging pressure on the performance and dynamic frequency response characteristics are investigated. Finally, the method to identify optimal points of the linear compressor has been described, which is indispensable to the design and operation of moving magnet linear compressors.

  19. Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.

    PubMed

    Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong

    2014-09-01

    A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers, and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure, and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm is compared with the state-of-the-art approaches.
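
    To make the "no explicit inversion" idea concrete, here is a hedged NumPy sketch: the MVDR weight vector w = R⁻¹d / (dᴴR⁻¹d) is obtained by running plain conjugate gradients on R z = d. This is only the textbook CG standing in for the constrained MICCG/SICCG iterations derived in the paper:

```python
import numpy as np

def cg_solve(A, b, iters=50, tol=1e-10):
    """Plain conjugate gradients for a Hermitian positive definite A,
    shown only to illustrate obtaining R^{-1} d without inverting R."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mvdr_weights(R, d):
    """w = R^{-1} d / (d^H R^{-1} d), with R^{-1} d computed by CG."""
    z = cg_solve(R, d)
    return z / np.vdot(d, z)

# Toy example: 4-sensor array, steering vector d, sample covariance R.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200))
R = X @ X.conj().T / 200 + 0.1 * np.eye(4)   # Hermitian positive definite
d = np.exp(1j * np.pi * np.arange(4) * 0.3)  # steering vector
w = mvdr_weights(R, d)
assert np.isclose(w.conj() @ d, 1.0)         # distortionless constraint
```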

  20. Four decades of implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B.

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  1. Modeling and Control for Microgrids

    NASA Astrophysics Data System (ADS)

    Steenis, Joel

    Traditional approaches to modeling microgrids include the behavior of each inverter operating in a particular network configuration and at a particular operating point. Such models quickly become computationally intensive for large systems. Similarly, traditional approaches to control do not use advanced methodologies and suffer from poor performance and limited operating range. In this document a linear model is derived for an inverter connected to the Thevenin equivalent of a microgrid. This model is then compared to a nonlinear simulation model and analyzed using the open and closed loop systems in both the time and frequency domains. The modeling error is quantified with emphasis on its use for controller design purposes. Control design examples are given using a Glover McFarlane controller, gain scheduled Glover McFarlane controller, and bumpless transfer controller which are compared to the standard droop control approach. These examples serve as a guide to illustrate the use of multi-variable modeling techniques in the context of robust controller design and show that gain scheduled MIMO control techniques can extend the operating range of a microgrid. A hardware implementation is used to compare constant gain droop controllers with Glover McFarlane controllers and shows a clear advantage of the Glover McFarlane approach.

  2. Accelerating Electrostatic Surface Potential Calculation with Multiscale Approximation on Graphics Processing Units

    PubMed Central

    Anandakrishnan, Ramu; Scogland, Tom R. W.; Fenley, Andrew T.; Gordon, John C.; Feng, Wu-chun; Onufriev, Alexey V.

    2010-01-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multiscale method, and parallelized on an ATI Radeon 4870 graphical processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. PMID:20452792

  3. Four decades of implicit Monte Carlo

    DOE PAGES

    Wollaber, Allan B.

    2016-02-23

    In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the “Implicit Monte Carlo” (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Here, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.

  4. Linguistic Adaptation of the Clinical Dementia Rating Scale for a Spanish-Speaking Population

    PubMed Central

    Oquendo-Jiménez, Ilia; Mena, Rafaela; Antoun, Mikhail D.; Wojna, Valerie

    2012-01-01

    Background Alzheimer's disease (AD) is the most common form of dementia worldwide. In Hispanic populations there are few validated tests for the accurate identification and diagnosis of AD. The Clinical Dementia Rating (CDR) scale is an internationally recognized questionnaire used to stage dementia. This study's objective was to develop a linguistic adaptation of the CDR for the Puerto Rican population. Methods The linguistic adaptation consisted of the evaluation of each CDR question (item) and the questionnaire's instructions, for similarities in meaning (semantic equivalence), relevance of content (content equivalence), and appropriateness of the questionnaire's format and measuring technique (technical equivalence). A focus group methodology was used to assess cultural relevance, clarity, and suitability of the measuring technique in the Argentinean version of the CDR for use in a Puerto Rican population. Results A total of 27 semantic equivalence changes were recommended in four categories: higher than 6th grade level of reading, meaning, common use, and word preference. Four content equivalence changes were identified, all focused on improving the applicability of the test questions to the general population's concept of street addresses and common dietary choices. There were no recommendations for changes in the assessment of technical equivalence. Conclusions We developed a linguistically adapted CDR instrument for the Puerto Rican population, preserving the semantic, content, and technical equivalences of the original version. Further studies are needed to validate the CDR instrument with the staging of Alzheimer's disease in the Puerto Rican population. PMID:20496524

  5. Pediatric patient and staff dose measurements in barium meal fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Filipov, D.; Schelin, H. R.; Denyak, V.; Paschuk, S. A.; Porto, L. E.; Ledesma, J. A.; Nascimento, E. X.; Legnani, A.; Andrade, M. E. A.; Khoury, H. J.

    2015-11-01

    This study investigates patient and staff dose measurements in pediatric barium meal series fluoroscopic procedures. It aims to analyze radiographic techniques, measure the air kerma-area product (PKA), and estimate the staff's eye lens, thyroid, and hand equivalent doses. The procedures of 41 patients were studied, and PKA values were calculated using LiF:Mg,Ti thermoluminescent dosimeters (TLDs) positioned at the center of the patient's upper chest. Furthermore, LiF:Mg,Cu,P TLDs were used to estimate the equivalent doses. The results showed a discrepancy between the radiographic techniques used and the European Commission recommendations. Half of the results in the analyzed literature presented lower PKA and dose reference level values than the present study. The staff's equivalent doses depend strongly on the distance from the beam. A 55-cm distance can be considered satisfactory; however, a distance decrease of ~20% leads to at least two times higher equivalent doses. For the eye lenses this dose is significantly greater than the annual limit set by the International Commission on Radiological Protection. In addition, the occupational doses were found to be much higher than in the literature. By changing the radiographic techniques to those recommended by the European Commission, lower PKA values and occupational doses are expected.

  6. Human exposure assessment in the near field of GSM base-station antennas using a hybrid finite element/method of moments technique.

    PubMed

    Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A

    2003-02-01

    A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical group special mobile (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency-domain techniques are, thus, exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details, in particular the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas.

  7. [The effect of composition and structure of radiological equivalent materials on radiological equivalent].

    PubMed

    Wang, Y; Lin, D; Fu, T

    1997-03-01

    The morphology of inorganic material powders before and after ultrafine crushing was observed by transmission electron microscopy, and the length and diameter of the granules were measured. Polymers and inorganic material powders, before and after ultrafine crushing, were used to prepare radiologically equivalent materials. The blending compatibility of the inorganic materials with the polymer materials was observed by scanning electron microscopy. CT values of the tissue-equivalent materials were measured by X-ray CT, and the distribution of the inorganic materials was examined. The compactness of the materials was determined by the water-absorption method, and the elastic modulus was measured by laser speckle interferometry. The results showed that the inorganic material powders treated by ultrafine crushing blended well with the polymer and were distributed homogeneously in it. The equivalent errors of the linear attenuation coefficients and CT values of the equivalent materials were small. Their elastic moduli increased by one order of magnitude, from 6.028 × 10² kg/cm² to 9.753 × 10³ kg/cm². In addition, the inorganic material powders with rod-shaped granules blended easily with the polymer. The present study provides theoretical guidance and an experimental basis for the design and synthesis of radiologically equivalent materials.

  8. Quantum mechanics in noninertial reference frames: Violations of the nonrelativistic equivalence principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klink, W.H.; Wickramasekara, S., E-mail: wickrama@grinnell.edu; Department of Physics, Grinnell College, Grinnell, IA 50112

    2014-01-15

    In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless the classical mechanics analogue of these cocycle representations all respect the equivalence principle. Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected.

  9. The use of displacement damage dose to correlate degradation in solar cells exposed to different radiations

    NASA Technical Reports Server (NTRS)

    Summers, Geoffrey P.; Burke, Edward A.; Shapiro, Philip; Statler, Richard; Messenger, Scott R.; Walters, Robert J.

    1994-01-01

    It has been found useful in the past to use the concept of 'equivalent fluence' to compare the radiation response of different solar cell technologies. Results are usually given in terms of an equivalent 1 MeV electron or an equivalent 10 MeV proton fluence. To specify cell response in a complex space-radiation environment in terms of an equivalent fluence, it is necessary to measure damage coefficients for a number of representative electron and proton energies. However, at the last Photovoltaic Specialist Conference we showed that nonionizing energy loss (NIEL) could be used to correlate damage coefficients for protons, using measurements for GaAs as an example. This correlation means that damage coefficients for all proton energies except near threshold can be predicted from a measurement made at one particular energy. NIEL is the exact equivalent for displacement damage of linear energy transfer (LET) for ionization energy loss. The use of NIEL in this way leads naturally to the concept of 10 MeV equivalent proton fluence. The situation for electron damage is more complex, however. It is shown that the concept of 'displacement damage dose' gives a more general way of unifying damage coefficients. It follows that 1 MeV electron equivalent fluence is a special case of a more general quantity for unifying electron damage coefficients which we call the 'effective 1 MeV electron equivalent dose'.
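
    The unifying quantity can be written out explicitly. A sketch of the standard relations (notation assumed here, not quoted from the paper): the displacement damage dose is the fluence weighted by the nonionizing energy loss, and matching doses defines the equivalent fluence:

```latex
D_d = \Phi(E)\, S_{\mathrm{NIEL}}(E),
\qquad
\Phi_{\mathrm{eq}}(10\,\mathrm{MeV})
  = \Phi(E)\,\frac{S_{\mathrm{NIEL}}(E)}{S_{\mathrm{NIEL}}(10\,\mathrm{MeV})}
```

    Roughly speaking, electron damage does not scale linearly with NIEL, which is why the record introduces an "effective 1 MeV electron equivalent dose" rather than a plain equivalent fluence.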

  10. A normative study of the Italian printed word version of the free and cued selective reminding test.

    PubMed

    Girtler, N; De Carli, F; Amore, M; Arnaldi, D; Bosia, L E; Bruzzaniti, C; Cappa, S F; Cocito, L; Colazzo, G; Ghio, L; Magi, E; Mancardi, G L; Nobili, F; Pardini, M; Picco, A; Rissotto, R; Serrati, C; Brugnolo, A

    2015-07-01

    According to the new research criteria for the diagnosis of Alzheimer's disease, episodic memory impairment that is not significantly improved by cueing is the core neuropsychological marker, even at a pre-dementia stage. The FCSRT assesses verbal learning and memory using semantic cues and is widely used in Europe. Standardization values for the Italian population are available for the colored picture version, but not for the 16-item printed word version. In this study, we present age- and education-adjusted normative data for the FCSRT-16, obtained using linear regression techniques and generalized linear models, and critical values for classifying sub-test performance into equivalent scores. Six scores were derived from the performance of 194 normal subjects (MMSE score, range 27-30, mean 29.5 ± 0.5) divided per decade (from 20 to 90), per gender, and per level of education (4 levels: 3-5, 6-8, 9-13, >13 years): immediate free recall (IFR), immediate total recall (ITR), recognition phase (RP), delayed free recall (DFR), delayed total recall (DTR), Index of Sensitivity of Cueing (ISC), and number of intrusions. This study confirms the effect of age and education, but not of gender, on immediate and delayed free and cued recall. The Italian version of the FCSRT-16 can be useful for both clinical and research purposes.

  11. HPC Programming on Intel Many-Integrated-Core Hardware with MAGMA Port to Xeon Phi

    DOE PAGES

    Dongarra, Jack; Gates, Mark; Haidar, Azzam; ...

    2015-01-01

    This paper presents the design and implementation of several fundamental dense linear algebra (DLA) algorithms for multicore with Intel Xeon Phi coprocessors. In particular, we consider algorithms for solving linear systems. Further, we give an overview of the MAGMA MIC library, an open source, high performance library that incorporates the developments presented here and, more broadly, provides the DLA functionality equivalent to that of the popular LAPACK library while targeting heterogeneous architectures that feature a mix of multicore CPUs and coprocessors. The LAPACK-compliance simplifies the use of the MAGMA MIC library in applications, while providing them with portably performant DLA. High performance is obtained through the use of the high-performance BLAS, hardware-specific tuning, and a hybridization methodology whereby we split the algorithm into computational tasks of various granularities. Execution of those tasks is properly scheduled over the heterogeneous hardware by minimizing data movements and mapping algorithmic requirements to the architectural strengths of the various heterogeneous hardware components. Our methodology and programming techniques are incorporated into the MAGMA MIC API, which abstracts the application developer from the specifics of the Xeon Phi architecture and is therefore applicable to algorithms beyond the scope of DLA.

  12. The evaluation of the neutron dose equivalent in the two-bend maze.

    PubMed

    Tóth, Á Á; Petrović, B; Jovančević, N; Krmar, M; Rutonjski, L; Čudić, O

    2017-04-01

    The purpose of this study was to explore the effect of the second bend of the maze on the neutron dose equivalent in a 15 MV linear accelerator vault with a two-bend maze. The two bends of the maze were covered by 32 points at which the neutron dose equivalent was measured. One method is available for estimating the neutron dose equivalent at the entrance door of a two-bend maze, and it was tested against the results of the measurements. The results of this study show that the neutron dose equivalent at the door of the two-bend maze was reduced by almost three orders of magnitude. The measured tenth-value distance (TVD) in the first bend (closer to the inner maze entrance) is about 5 m. This is close to the TVD values usually used in the proposed models for estimating the neutron dose equivalent at the entrance door of a single-bend maze. The results also show that the TVD in the second bend (next to the maze entrance door) is significantly lower than the TVD found in the first maze bend. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
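
    The maze attenuation model being tested can be sketched in a few lines. A toy calculation, assuming the usual tenth-value-distance form H = H0 · 10^(−d/TVD) applied leg by leg; the numbers below are illustrative, not the paper's measured values:

```python
def maze_dose_equivalent(H0, legs):
    """Attenuate a neutron dose equivalent H0 (at the inner maze
    entrance) along successive maze legs, each modeled with its own
    tenth-value distance: H -> H * 10**(-d / TVD) per leg."""
    H = H0
    for d, tvd in legs:          # (leg length, tenth-value distance), m
        H *= 10 ** (-d / tvd)
    return H

# First bend: TVD ~ 5 m; second bend: a noticeably smaller TVD, in the
# spirit of the study's finding that the second bend attenuates faster.
H_door = maze_dose_equivalent(H0=100.0, legs=[(6.0, 5.0), (5.0, 3.0)])
print(f"dose equivalent at door: {H_door:.3g}")  # ~0.14, about 3 decades down
```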

  13. Consistent Principal Component Modes from Molecular Dynamics Simulations of Proteins.

    PubMed

    Cossio-Pérez, Rodrigo; Palma, Juliana; Pierdominici-Sottile, Gustavo

    2017-04-24

    Principal component analysis is a technique widely used for studying the movements of proteins using data collected from molecular dynamics simulations. In spite of its extensive use, the technique has a serious drawback: equivalent simulations do not afford the same PC-modes. In this article, we show that concatenating equivalent trajectories and calculating the PC-modes from the concatenated one significantly enhances the reproducibility of the results. Moreover, the consistency of the modes can be systematically improved by adding more individual trajectories to the concatenated one.
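
    The fix described here is simple to express in code. A minimal NumPy sketch with synthetic data (in practice each trajectory would be the RMSD-aligned Cartesian coordinates of one of the equivalent MD runs):

```python
import numpy as np

def pc_modes(trajs):
    """PC modes from concatenated trajectories. Each trajectory is an
    (n_frames, 3N) array of aligned coordinates; concatenating the
    equivalent runs before diagonalizing the covariance is the
    reproducibility fix described above."""
    X = np.vstack(trajs)                 # concatenate equivalent runs
    X = X - X.mean(axis=0)               # remove the mean structure
    C = (X.T @ X) / (len(X) - 1)         # covariance matrix
    evals, evecs = np.linalg.eigh(C)     # ascending eigenvalues
    order = np.argsort(evals)[::-1]      # largest variance first
    return evals[order], evecs[:, order] # modes are the columns

# Example with synthetic "replica" trajectories (100 frames, 30 coords)
rng = np.random.default_rng(1)
replicas = [rng.standard_normal((100, 30)) for _ in range(4)]
evals, modes = pc_modes(replicas)
```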

  14. Ionizing radiation measurements on LDEF: A0015 Free flyer biostack experiment

    NASA Technical Reports Server (NTRS)

    Benton, E. V.; Frank, A. L.; Benton, E. R.; Csige, I.; Frigo, L. A.

    1995-01-01

    This report covers the analysis of passive radiation detectors flown as part of the A0015 Free Flyer Biostack on LDEF (Long Duration Exposure Facility). LET (linear energy transfer) spectra and track density measurements were made with CR-39 and Polycarbonate plastic nuclear track detectors. Measurements of total absorbed dose were carried out using Thermoluminescent Detectors. Thermal and resonance neutron dose equivalents were measured with LiF/CR-39 detectors. High energy neutron and proton dose equivalents were measured with fission foil/CR-39 detectors.

  15. Investigation of Cepstrum Analysis for Seismic/Acoustic Signal Sensor Range Determination.

    DTIC Science & Technology

    1981-01-01

    distorted by transmission through a linear system. For example, the effect of multipath and reverberation may be modeled in terms of a signal that is... called the short-time averaged cepstrum. To derive some analytical expressions for short-time averaged cepstrums we choose some functions of interest... linear process applied to the time series or any equivalent time function. [Glossary fragment: "Repiod"; "Period": the amount of time required for one cycle of a time series; "Saphe": ...]

  16. Structure-Property Relationships of Silicone Biofouling-Release Coatings: Effect of Silicone Network Architecture on Pseudobarnacle Attachment Strengths

    DTIC Science & Technology

    2003-01-01

    ambient conditions prior to testing. A masterbatch for hydrosilylation-curable model systems was prepared by combining 200 g of hexamethyldisilazane-treated... fumed silica and 800 g of vinyl-terminated polydimethylsiloxane (equivalent weight = 4111). The masterbatch was combined with additional vinyl polymer... followed by 10 ml of Karstedt's catalyst (10.9% Pt, 4.8 mmol Pt). The amounts of masterbatch, linear vinyl, linear hydride, and crosslinkable hydride

  17. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. A stochastic version of the extended Kirman agent-based model is compared to the non-linear stochastic models of long-range memory in financial markets. The agent-based model, which provides a matching macroscopic description, serves as a microscopic justification of the earlier proposed stochastic model exhibiting power-law statistics.
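
    A minimal simulation of the underlying herding mechanism makes the record concrete. The Python/NumPy sketch below uses standard Kirman-style switching probabilities and a state-dependent waiting time standing in for the variable event time scale; both the rate form and the time-scale choice are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def kirman_series(N=100, eps=0.02, steps=20000, seed=2):
    """Two-state herding model: at each event a random agent switches
    either idiosyncratically (prob. ~eps) or by imitating a peer."""
    rng = np.random.default_rng(seed)
    k = N // 2                               # agents holding opinion "1"
    t, path = 0.0, []
    for _ in range(steps):
        x = k / N
        # Variable event time scale: events arrive faster when the
        # "trading activity" x * (1 - x) is high (illustrative choice).
        t += 1.0 / (N * (0.01 + x * (1.0 - x)))
        if rng.random() < x:                 # picked an agent in state 1
            if rng.random() < eps + (1 - eps) * (N - k) / (N - 1):
                k -= 1                       # it switched to state 0
        else:                                # picked an agent in state 0
            if rng.random() < eps + (1 - eps) * k / (N - 1):
                k += 1                       # it switched to state 1
        path.append((t, k / N))
    return np.array(path)

series = kirman_series()                     # columns: time, fraction x
```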

  18. Panel Flutter Emulation Using a Few Concentrated Forces

    NASA Astrophysics Data System (ADS)

    Dhital, Kailash; Han, Jae-Hung

    2018-04-01

    The objective of this paper is to study the feasibility of emulating panel flutter using a few concentrated forces. The concentrated forces are considered to be equivalent to the aerodynamic forces, and the equivalence is carried out using the surface spline method and the principle of virtual work. The structural modeling of the plate is based on classical plate theory and the aerodynamic modeling on piston theory. The present approach differs from linear panel flutter analysis in how it schemes the modal aerodynamic forces while leaving the structural properties unchanged. The solutions of the flutter problem are obtained numerically using the standard eigenvalue procedure. A few concentrated forces were considered, with an optimization procedure to determine their optimal locations. The optimization process is based on minimizing the error between the flutter bounds from the emulated and the linear flutter analysis methods. The emulated flutter results for a square plate with four different boundary conditions, using six concentrated forces, are obtained with minimal error relative to the reference values. The results demonstrate the workability and viability of using concentrated forces to emulate real panel flutter. In addition, the paper includes parametric studies of linear panel flutter for which adequate literature is not available.

  19. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.

  20. Are visual cue masking and removal techniques equivalent for studying perceptual skills in sport?

    PubMed

    Mecheri, Sami; Gillet, Eric; Thouvarecq, Regis; Leroy, David

    2011-01-01

    The spatial-occlusion paradigm makes use of two techniques (masking and removing visual cues) to provide information about the anticipatory cues used by viewers. The visual scene resulting from the removal technique appears to be incongruous, but the assumed equivalence of these two techniques is spreading. The present study was designed to address this issue by combining eye-movement recording with the two types of occlusion (removal versus masking) in a tennis serve-return task. Response accuracy and decision onsets were analysed. The results indicated that subjects had longer reaction times under the removal condition, with an identical proportion of correct responses. Also, the removal technique caused the subjects to rely on atypical search patterns. Our findings suggest that, when the removal technique was used, viewers were unable to systematically count on stored memories to help them accomplish the interception task. The persistent failure to question some of the assumptions about the removal technique in applied visual research is highlighted, and suggestions for continued use of the masking technique are advanced.

  1. Equivalent Quantum Equations in a System Inspired by Bouncing Droplets Experiments

    NASA Astrophysics Data System (ADS)

    Borghesi, Christian

    2017-07-01

    In this paper we study a classical and theoretical system which consists of an elastic medium carrying transverse waves and one point-like inclusion of high elastic medium density, called the concretion. We compute the equation of motion for the concretion as well as the wave equation of this system. Thereafter we consider only the case where the concretion is no longer the wave source. The concretion then obeys a general and covariant guidance formula, which in the low-velocity approximation reduces to an equivalent de Broglie-Bohm guidance formula; the concretion moves as if an equivalent quantum potential existed. A strictly equivalent free Schrödinger equation is retrieved, as well as the quantum stationary states in a linear or spherical cavity. We compute the energy (and momentum) of the concretion, naturally defined from the energy (and momentum) density of the vibrating elastic medium. Provided one condition on the amplitude of oscillation is fulfilled, it strikingly appears that the energy and momentum of the concretion are not only written in the same form as in quantum mechanics, but also encapsulate equivalent relativistic formulas.

  2. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

    Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = Σ_{i=1}^p f(λ_i) with p…

  3. Linear decentralized systems with special structure. [for twin lift helicopters

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1982-01-01

    Certain fundamental structures associated with linear systems having internal symmetries are outlined. It is shown that the theory of finite-dimensional algebras and their representations are closely related to such systems. It is also demonstrated that certain problems in the decentralized control of symmetric systems are equivalent to long-standing problems of linear systems theory. Even though the structure imposed arose in considering the problems of twin-lift helicopters, any large system composed of several identical intercoupled control systems can be modeled by a linear system that satisfies the constraints imposed. Internal symmetry can be exploited to yield new system-theoretic invariants and a better understanding of the way in which the underlying structure affects overall system performance.

  4. Humidity and Gravimetric Equivalency Adjustments for Nephelometer-Based Particulate Matter Measurements of Emissions from Solid Biomass Fuel Use in Cookstoves

    PubMed Central

    Soneja, Sutyajeet; Chen, Chen; Tielsch, James M.; Katz, Joanne; Zeger, Scott L.; Checkley, William; Curriero, Frank C.; Breysse, Patrick N.

    2014-01-01

    Great uncertainty exists around indoor biomass burning exposure-disease relationships due to the lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can, however, be biased due to particle growth in highly humid environments and differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations in a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, a new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward. PMID:24950062

  5. Humidity and gravimetric equivalency adjustments for nephelometer-based particulate matter measurements of emissions from solid biomass fuel use in cookstoves.

    PubMed

    Soneja, Sutyajeet; Chen, Chen; Tielsch, James M; Katz, Joanne; Zeger, Scott L; Checkley, William; Curriero, Frank C; Breysse, Patrick N

    2014-06-19

    Great uncertainty exists around indoor biomass burning exposure-disease relationships due to the lack of detailed exposure data in large health outcome studies. Passive nephelometers can be used to estimate high particulate matter (PM) concentrations during cooking in low resource environments. Since passive nephelometers do not have a collection filter, they are not subject to sampler overload. Nephelometric concentration readings can, however, be biased due to particle growth in highly humid environments and differences in compositional and size-dependent aerosol characteristics. This paper explores relative humidity (RH) and gravimetric equivalency adjustment approaches for the pDR-1000 nephelometer used to assess indoor PM concentrations in a cookstove intervention trial in Nepal. Three approaches to humidity adjustment performed equivalently (similar root mean squared error). For gravimetric conversion, a new linear regression equation with log-transformed variables performed better than the traditional linear equation. In addition, gravimetric conversion equations utilizing a spline or quadratic term were examined. We propose a humidity adjustment equation encompassing the entire RH range instead of adjusting for RH above an arbitrary 60% threshold. Furthermore, we propose new integrated RH and gravimetric conversion methods because they have one response variable (gravimetric PM2.5 concentration), do not contain an RH threshold, and are straightforward.
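
    The two adjustment steps lend themselves to a compact sketch. The Python below assumes a growth-factor style humidity correction and a log-log regression for the gravimetric conversion; the functional form, coefficient values, and synthetic data are illustrative stand-ins, not the paper's fitted equations:

```python
import numpy as np

def rh_adjust(pm_neph, rh, gamma=0.25):
    """Humidity correction applied over the full RH range (no 60%
    threshold), using a growth-factor style divisor
    f(RH) = 1 + gamma * RH^2 / (1 - RH). Form and gamma are assumed."""
    rh = np.clip(rh, 0.0, 0.95)
    return pm_neph / (1.0 + gamma * rh**2 / (1.0 - rh))

def fit_log_log(neph, grav):
    """Gravimetric conversion: regress log(gravimetric PM2.5) on
    log(RH-adjusted nephelometer reading) -- the log-transformed linear
    model the paper found to beat the plain linear equation."""
    slope, intercept = np.polyfit(np.log(neph), np.log(grav), 1)
    return lambda x: np.exp(intercept) * x**slope

# Synthetic demonstration data (hypothetical collocated instrument pairs)
rng = np.random.default_rng(3)
raw = rng.uniform(50, 2000, 100)                  # raw nephelometer, ug/m^3
rh = rng.uniform(0.2, 0.9, 100)
neph = rh_adjust(raw, rh)                         # step 1: humidity
grav = 1.3 * neph**0.95 * rng.lognormal(0, 0.1, 100)  # "filter" values
convert = fit_log_log(neph, grav)                 # step 2: equivalency
print(convert(500.0))
```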

  6. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
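
    The "explosion" step is the mechanical part of the equivalence and is easy to sketch. A Python/pandas illustration (column names and cut points are hypothetical; the paper's own implementation is the SAS macro %PCFrailty):

```python
import pandas as pd

def explode_survival(df, cuts):
    """Split each subject's follow-up time at the supplied cut points,
    yielding one row per (subject, piece) with its exposure time and an
    event indicator. A Poisson model with offset log(exposure) fitted
    to this 'exploded' table equals the piecewise constant hazard
    survival model."""
    rows, edges = [], [0.0] + list(cuts)
    for _, r in df.iterrows():
        for j in range(len(edges) - 1):
            lo, hi = edges[j], edges[j + 1]
            if r["time"] <= lo:
                break                      # follow-up ended earlier
            rows.append({"id": r["id"], "piece": j,
                         "exposure": min(r["time"], hi) - lo,
                         "event": int(bool(r["event"]) and r["time"] <= hi)})
    return pd.DataFrame(rows)

data = pd.DataFrame({"id": [1, 2, 3],
                     "time": [2.5, 7.0, 4.2],
                     "event": [1, 0, 1]})
long = explode_survival(data, cuts=[2, 4, 6, 8])
# A log-normal frailty enters as a normal random intercept per cluster
# in a Poisson GLMM fitted to `long`, with piece dummies as fixed
# effects and offset = log(exposure).
```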

  7. Develop real-time dosimetry concepts and instrumentation for long term missions

    NASA Technical Reports Server (NTRS)

    Braby, L. A.

    1982-01-01

    The development of a rugged portable instrument to evaluate dose and dose equivalent is described. A tissue-equivalent proportional counter simulating a 2 micrometer spherical tissue volume was operated satisfactorily for over a year. The basic elements of the electronic system were designed and tested. Finally, the most suitable mathematical technique for evaluating dose equivalent with a portable instrument was selected. Design and fabrication of a portable prototype, based on the previously tested circuits, is underway.

  8. Precision Tests of a Quantum Hall Effect Device DC Equivalent Circuit Using Double-Series and Triple-Series Connections

    PubMed Central

    Jeffery, A.; Elmquist, R. E.; Cage, M. E.

    1995-01-01

    Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ the use of cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768

  9. Exact folded-band chaotic oscillator.

    PubMed

    Corron, Ned J; Blakely, Jonathan N

    2012-06-01

    An exactly solvable chaotic oscillator with folded-band dynamics is shown. The oscillator is a hybrid dynamical system containing a linear ordinary differential equation and a nonlinear switching condition. Bounded oscillations are provably chaotic, and successive waveform maxima yield a one-dimensional piecewise-linear return map with segments of both positive and negative slopes. Continuous-time dynamics exhibit a folded-band topology similar to Rössler's oscillator. An exact solution is written as a linear convolution of a fixed basis pulse and a discrete binary sequence, from which an equivalent symbolic dynamics is obtained. The folded-band topology is shown to be dependent on the symbol grammar.
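
    The "waveform = basis pulse convolved with binary symbols" structure of the exact solution can be illustrated generically. The pulse below is a decaying oscillation chosen purely for illustration; the true basis pulse of the hybrid oscillator follows from its particular ODE and switching rule:

```python
import numpy as np

# Exact solutions of this type have the form u(t) = sum_m s_m * P(t - m):
# a linear convolution of a binary symbol sequence s_m with a fixed
# basis pulse P.
fs = 50                                   # samples per unit interval
t = np.arange(-5 * fs, 5 * fs) / fs       # support of the basis pulse
beta = np.log(2)                          # illustrative decay rate
P = np.exp(-beta * np.abs(t)) * np.cos(2 * np.pi * t)

rng = np.random.default_rng(4)
s = rng.choice([-1.0, 1.0], size=64)      # stand-in for the chaotic
s_up = np.zeros(len(s) * fs)              # symbolic dynamics
s_up[::fs] = s                            # impulses at integer times
u = np.convolve(s_up, P)                  # waveform = symbols * pulse
```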

  10. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  11. Nonlinear analysis of a family of LC tuned inverters. [dc to square wave circuits for power conditioning

    NASA Technical Reports Server (NTRS)

    Lee, F. C. Y.; Wilson, T. G.

    1974-01-01

    A family of four dc-to-square-wave LC tuned inverters is analyzed using singular-point analysis. Limit cycles and waveshape characteristics are given for three modes of oscillation: quasi-harmonic, relaxation, and discontinuous. An inverter in which avalanche breakdown of the transistor emitter-to-base junction occurs is discussed, and the starting characteristics of this family of inverters are presented. The LC tuned inverters are shown to belong to a family of inverters with a common equivalent circuit consisting of only three 'series' elements: a five-segment piecewise-linear current-controlled resistor, a linear inductor, and a linear capacitor.

  12. TEPC Response Functions

    NASA Technical Reports Server (NTRS)

    Shinn, J. L.; Wilson, J. W.

    2003-01-01

    The tissue-equivalent proportional counter (TEPC) had the purpose of providing the energy absorbed from a radiation field and an estimate of the corresponding linear energy transfer (LET) for evaluation of radiation quality, to convert to dose equivalent. It was the recognition of the limitations in estimating LET which led to a new approach to dosimetry, microdosimetry, and the corresponding emphasis on energy deposited in a small tissue volume as the driver of biological response, with the defined quantity of lineal energy. In many circumstances, the averages of lineal energy and LET are closely related, which has provided a basis for estimating dose equivalent. Still, in many cases the lineal energy is poorly related to LET, which brings into question the usefulness of the TEPC as a general-purpose device. These relationships are examined in this paper.

  13. All the noncontextuality inequalities for arbitrary prepare-and-measure experiments with respect to any fixed set of operational equivalences

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.; Wolfe, Elie

    2018-06-01

    Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.
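
    The membership test underlying the last claim reduces to linear programming. A toy sketch (Python/SciPy): a data table admits a noncontextual model iff it is a convex mixture of the finitely many deterministic assignments, which is an LP feasibility problem. Building the vertex matrix for a real scenario is the nontrivial part; here it is a trivial two-measurement example:

```python
import numpy as np
from scipy.optimize import linprog

def admits_noncontextual_model(V, p):
    """Check whether the data vector p (stacked outcome probabilities)
    lies in the polytope spanned by the columns of V (deterministic
    noncontextual assignments): find lambda >= 0 with sum(lambda) = 1
    and V @ lambda = p."""
    n_vert = V.shape[1]
    A_eq = np.vstack([V, np.ones((1, n_vert))])   # V lambda = p, sum = 1
    b_eq = np.concatenate([p, [1.0]])
    res = linprog(c=np.zeros(n_vert), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_vert, method="highs")
    return res.status == 0                        # feasible -> model exists

# Two binary measurements; vertices = the four deterministic tables.
V = np.array([[0, 0, 1, 1],      # P(outcome 1 | M1) for each vertex
              [0, 1, 0, 1]])     # P(outcome 1 | M2) for each vertex
print(admits_noncontextual_model(V, np.array([0.3, 0.7])))  # True
```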

  14. Approaches to linear local gauge-invariant observables in inflationary cosmologies

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Hack, Thomas-Paul; Khavkine, Igor

    2018-06-01

    We review and relate two recent complementary constructions of linear local gauge-invariant observables for cosmological perturbations in generic spatially flat single-field inflationary cosmologies. After briefly discussing their physical significance, we give explicit, covariant and mutually invertible transformations between the two sets of observables, thus resolving any doubts about their equivalence. In this way, we get a geometric interpretation and show the completeness of both sets of observables, while previously each of these properties was available only for one of them.

  15. Application of Logic to Integer Sequences: A Survey

    NASA Astrophysics Data System (ADS)

    Makowsky, Johann A.

    Chomsky and Schützenberger showed in 1963 that the sequence d_L(n), which counts the number of words of a given length n in a regular language L, satisfies a linear recurrence relation with constant coefficients for n, or equivalently, the generating function g_L(x) = Σ_n d_L(n) x^n is a rational function. In this talk we survey results concerning sequences a(n) of natural numbers which satisfy linear recurrence relations over ℤ or ℤ_m, and…
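
    A textbook illustration of the theorem (not taken from the talk itself): for the regular language of binary words avoiding the factor "11", the counting sequence obeys the Fibonacci recurrence d(n) = d(n-1) + d(n-2), which can be read off a transfer matrix:

```python
import numpy as np

# Two automaton states: "last char was 0" and "last char was 1".
T = np.array([[1, 1],    # from state 0 one may append 0 or 1
              [1, 0]])   # from state 1 one may append only 0

def d(n):
    """Number of length-n binary words with no two consecutive 1s."""
    v = np.linalg.matrix_power(T, n) @ np.array([1, 1])
    return int(v[0])     # start from the empty word (state 0)

print([d(n) for n in range(8)])   # 1, 2, 3, 5, 8, 13, 21, 34
# d(n) = d(n-1) + d(n-2); equivalently g_L(x) is rational.
```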

  16. Detection of Bioaerosols Using Single Particle Thermal Emission Spectroscopy (First-year Report)

    DTIC Science & Technology

    2012-02-01

    cooled MCT detector with a noise equivalent power (NEP) of 7×10⁻¹³ W/Hz, yields a detection S/N > 13 (assuming a sufficiently cooled background). We...dispersively resolved using 190-mm Horiba spectrometer that houses a time-gated 32-element mercury cadmium telluride (MCT) linear array. In this report...to 10.0 ms. Minimum integration (and readout) periods for the time-gated 32-element mercury cadmium telluride (MCT) linear array are 10 µs. Based

  17. Analyses of Multishaft Rotor-Bearing Response

    NASA Technical Reports Server (NTRS)

    Nelson, H. D.; Meacham, W. L.

    1985-01-01

    Method works for linear and nonlinear systems. Finite-element-based computer program developed to analyze free and forced response of multishaft rotor-bearing systems. Acronym, ARDS, denotes Analysis of Rotor Dynamic Systems. Systems with nonlinear interconnection or support bearings or both analyzed by numerically integrating reduced set of coupled-system equations. Linear systems analyzed in closed form for steady excitations and treated as equivalent to nonlinear systems for transient excitation. ARDS is FORTRAN program developed on an Amdahl 470 (similar to IBM 370).

  18. TH-CD-201-12: Preliminary Evaluation of Organic Field Effect Transistors as Radiation Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syme, A; Lin, H; Rubio-Sanchez, J

    Purpose: To fabricate organic field effect transistors (OFETs) and evaluate their performance before and after exposure to ionizing radiation. To determine if OFETs have potential to function as radiation dosimeters. Methods: OFETs were fabricated on both Si/SiO₂ wafers and flexible polymer substrates using standard processing techniques. Pentacene was used as the organic semiconductor material and the devices were fabricated in a bottom-gate configuration. Devices were irradiated using an orthovoltage treatment unit (120 kVp x-rays). Threshold voltage values were measured with the devices in saturation mode and quantified as a function of cumulative dose. Current-voltage characteristics of the devices were measured using a Keithley 2614 SourceMeter SMU Instrument. The devices were connected to the reader but unpowered during irradiations. Results: Devices fabricated on Si/SiO₂ wafers demonstrated excellent linearity (R² > 0.997) with threshold voltages that ranged between 15 and 36 V. Devices fabricated on a flexible polymer substrate had substantially smaller threshold voltages (∼4–8 V) and slightly worse linearity (R² > 0.98). The devices demonstrated excellent stability in I–V characteristics over a large number (>2000) of cycles. Conclusion: OFETs have demonstrated excellent potential in radiation dosimetry applications. A key advantage of these devices is their composition, which can be substantially more tissue-equivalent at low photon energies relative to many other types of radiation detector. In addition, fabrication of organic electronics can employ techniques that are faster, simpler, and cheaper than conventional silicon-based devices. These results support further development of organic electronic devices for radiation detection purposes. Funding Support, Disclosures, and Conflict of Interest: This work was funded by the Natural Sciences and Engineering Research Council of Canada.

  19. Equivalent Expressions Using CAS and Paper-and-Pencil Techniques

    ERIC Educational Resources Information Center

    Fonger, Nicole L.

    2014-01-01

    How can the key concept of equivalent expressions be addressed so that students strengthen their representational fluency with symbols, graphs, and numbers? How can research inform the synergistic use of both paper-and-pencil analysis and computer algebra systems (CAS) in a classroom learning environment? These and other related questions have…

  20. Articulating Syntactic and Numeric Perspectives on Equivalence: The Case of Rational Expressions

    ERIC Educational Resources Information Center

    Solares, Armando; Kieran, Carolyn

    2013-01-01

    Our study concerns the conceptual mathematical knowledge that emerges during the resolution of tasks on the equivalence of polynomial and rational algebraic expressions, by using CAS and paper-and-pencil techniques. The theoretical framework we adopt is the Anthropological Theory of Didactics (Chevallard, 19:221-266, 1999), in…

  1. Do Adjusting-Amount and Adjusting-Delay Procedures Produce Equivalent Estimates of Subjective Value in Pigeons?

    ERIC Educational Resources Information Center

    Green, Leonard; Myerson, Joel; Shah, Anuj K.; Estle, Sara J.; Holt, Daniel D.

    2007-01-01

    The current experiment examined whether adjusting-amount and adjusting-delay procedures provide equivalent measures of discounting. Pigeons' discounting on the two procedures was compared using a within-subject yoking technique in which the indifference point (number of pellets or time until reinforcement) obtained with one procedure determined…

  2. Evaluation of site effects on ground motions based on equivalent linear site response analysis and liquefaction potential in Chennai, south India

    NASA Astrophysics Data System (ADS)

    Nampally, Subhadra; Padhy, Simanchal; Trupti, S.; Prabhakar Prasad, P.; Seshunarayana, T.

    2018-05-01

    We study local site effects with detailed geotechnical and geophysical site characterization to evaluate the site-specific seismic hazard for the seismic microzonation of the Chennai city in South India. A Maximum Credible Earthquake (MCE) of magnitude 6.0 is considered based on the available seismotectonic and geological information of the study area. We synthesized strong ground motion records for this target event using a stochastic finite-fault technique, based on a dynamic corner frequency approach, at different sites in the city, with the model parameters for the source, site, and path (attenuation) most appropriately selected for this region. We tested the influence of several model parameters on the characteristics of ground motion through simulations and found that stress drop largely influences both the amplitude and frequency of ground motion. To minimize its influence, we estimated stress drop after finite bandwidth correction, as expected from an M6 earthquake in the Indian peninsular shield, for accurately predicting the level of ground motion. Estimates of shear wave velocity averaged over the top 30 m of soil (V_S30) are obtained from multichannel analysis of surface waves (MASW) at 210 sites, with profiles extending 30 to 60 m below the ground surface. Using these V_S30 values, along with the available geotechnical information and the synthetic ground motion database obtained, equivalent linear one-dimensional site response analysis, which approximates the nonlinear soil behavior within the linear analysis framework, was performed using the computer program SHAKE2000. Fundamental natural frequency, Peak Ground Acceleration (PGA) at surface and rock levels, response spectra at surface level for different damping coefficients, and amplification factors are presented at different sites of the city. A liquefaction study was done based on the V_S30 and PGA values obtained. The major findings show that the northeast part of the city is characterized by (i) low V_S30 values (< 200 m/s) associated with alluvial deposits, (ii) a relatively high PGA value, at the surface, of about 0.24 g, and (iii) a factor of safety against liquefaction below unity at three sites (no. 12, no. 37, and no. 70). Thus, this part of the city is expected to experience damage for the expected M6 target event.
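
    The heart of an equivalent linear analysis of the SHAKE type is a fixed-point iteration between strain-dependent soil properties and the computed response. The sketch below is a deliberately minimal single-layer version with invented modulus-reduction and damping curves and a simplified peak-stress estimate; it illustrates the iteration only and is no substitute for the layered wave-propagation solution performed by SHAKE2000.

    ```python
    import numpy as np

    # Minimal single-layer sketch of the equivalent-linear iteration used by
    # SHAKE-type programs: stiffness G and damping are updated from strain-
    # dependent curves until the effective strain converges. All curves and
    # numbers below are illustrative placeholders, not Chennai site data.
    rho, H = 1800.0, 10.0       # density (kg/m^3), soil depth (m)
    Vs = 200.0                  # small-strain shear-wave velocity (m/s)
    G_max = rho * Vs**2         # small-strain shear modulus (Pa)
    a_max = 0.1 * 9.81          # rock-level PGA (m/s^2), assumed
    r_d = 0.9                   # depth-reduction factor, assumed
    gamma_ref = 4e-4            # reference strain of the hyperbolic curve

    def g_over_gmax(gamma):     # hypothetical modulus-reduction curve
        return 1.0 / (1.0 + gamma / gamma_ref)

    def damping(gamma):         # hypothetical damping curve (fraction)
        return 0.02 + 0.18 * (gamma / gamma_ref) / (1.0 + gamma / gamma_ref)

    gamma_eff = 1e-6
    for it in range(100):
        G = G_max * g_over_gmax(gamma_eff)    # strain-compatible stiffness
        tau_max = r_d * rho * H * a_max       # simplified peak shear stress
        gamma_new = 0.65 * tau_max / G        # effective strain (65% rule)
        if abs(gamma_new - gamma_eff) < 1e-12:
            break
        gamma_eff = gamma_new

    print(f"iter {it}: gamma_eff = {gamma_eff:.2e}, "
          f"G/G_max = {G / G_max:.2f}, damping = {damping(gamma_eff):.3f}")
    ```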

  3. Overcoming learning barriers through knowledge management.

    PubMed

    Dror, Itiel E; Makany, Tamas; Kemp, Jonathan

    2011-02-01

    The ability to learn highly depends on how knowledge is managed. Specifically, different techniques for note-taking utilize different cognitive processes and strategies. In this paper, we compared dyslexic and control participants when using linear and non-linear note-taking. All our participants were professionals working in the banking and financial sector. We examined comprehension, accuracy, mental imagery & complexity, metacognition, and memory. We found that participants with dyslexia, when using a non-linear note-taking technique, outperformed the control group using linear note-taking and matched the performance of the control group using non-linear note-taking. These findings emphasize how different knowledge management techniques can help learners avoid some of the barriers to learning. Copyright © 2010 John Wiley & Sons, Ltd.

  4. Research on the time-temperature-damage superposition principle of NEPE propellant

    NASA Astrophysics Data System (ADS)

    Han, Long; Chen, Xiong; Xu, Jin-sheng; Zhou, Chang-sheng; Yu, Jia-quan

    2015-11-01

    To describe the relaxation behavior of NEPE (Nitrate Ester Plasticized Polyether) propellant, we analyzed the equivalent relationships between time, temperature, and damage. We conducted a series of uniaxial tensile tests and employed a cumulative damage model to calculate the damage values for relaxation tests at different strain levels. The damage evolution curve of the tensile test at 100 mm/min was obtained through numerical analysis. Relaxation tests were conducted over a range of temperature and strain levels, and the equivalent relationship between time, temperature, and damage was deduced based on free volume theory. The equivalent relationship was then used to generate predictions of the long-term relaxation behavior of the NEPE propellant. Subsequently, the equivalent relationship between time and damage was introduced into the linear viscoelastic model to establish a nonlinear model which is capable of describing the mechanical behavior of composite propellants under a uniaxial tensile load. The comparison between model prediction and experimental data shows that the presented model provides a reliable forecast of the mechanical behavior of propellants.

  5. Setting Age Limits for TT-OSL Dating - the Local Effect

    NASA Astrophysics Data System (ADS)

    Faershtein, G.; Porat, N.; Guralnik, B.; Matmon, A.

    2017-12-01

    Luminescence dating techniques, especially Optically Stimulated Luminescence (OSL) on quartz, are widely used for dating middle Pleistocene to late Holocene sediments from different geological settings. The dating limit of a particular luminescence method depends on signal saturation and its thermal stability. The OSL signal saturates at doses of 200 Gy, equivalent to ages of 150-300 ka. Thermally Transferred OSL (TT-OSL) is a developmental technique, which potentially extends the luminescence dating range up to 1000 ka. For the Chinese Loess Plateau, experiments have shown that the natural TT-OSL signal saturates at 2200 Gy (Chapot et al., 2016). Regarding thermal stability, different studies report a wide range of estimates (0.24-861 Ma), suggesting that the thermal lifetime of TT-OSL is (i) currently poorly constrained, and (ii) may vary both by sample and region. Here, we investigated the dating limit of TT-OSL, using quartz of Nilotic origin (Israel), obtained from two sediment sections of similar depth but different dose rates. Natural dose response curves (DRC) of the TT-OSL signal were constructed for each section separately. In both sections, luminescence intensity grows sub-linearly up to 450 Gy, beyond which it remains constant with depth. The absence of equivalent doses (De) over 600 Gy at both sections (as well as elsewhere regionally) suggests that TT-OSL signal saturation may be an intrinsic property, related to quartz provenance, and independent of the specific ionizing dose rate at each section. The thermal stability of TT-OSL was investigated on a modern sample from one section, using a combination of analytical techniques (varying heating rates, and isothermal storage). The obtained TT-OSL lifetimes range between 10⁵ and 10⁷ ka, and reinforce significant inter-sample variability. A synthesis of our results suggests that TT-OSL ages of Nilotic quartz derived from De values over 450 Gy are likely underestimates, and should be treated as minimum ages. The limiting value of 600 Gy for local quartz TT-OSL is likely representative of a steady state between TT-OSL trap filling due to ionizing radiation and the concurrent thermal emptying of these traps.
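
    For orientation, the sketch below fits a single saturating-exponential dose-response curve, inverts it for an equivalent dose De, and converts De to an age; all signal values and the dose rate are invented, and the comment at the end mirrors the paper's point that De estimates on the flat part of the curve are minimum ages only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def drc(D, I_max, D0):
        """Single saturating-exponential dose-response curve."""
        return I_max * (1.0 - np.exp(-D / D0))

    # Hypothetical regenerated doses (Gy) and sensitivity-corrected signals.
    doses   = np.array([0, 100, 200, 400, 600, 900])
    signals = np.array([0.0, 0.42, 0.68, 0.92, 0.99, 1.03])

    (I_max, D0), _ = curve_fit(drc, doses, signals, p0=(1.0, 200.0))

    L_nat = 0.90                             # hypothetical natural signal
    De = -D0 * np.log(1.0 - L_nat / I_max)   # invert the DRC for De
    dose_rate = 1.5e-3                       # Gy/a (illustrative)
    print(f"D0 = {D0:.0f} Gy, De = {De:.0f} Gy, age ~ {De / dose_rate / 1e3:.0f} ka")
    # Near saturation the curve flattens, so De (and hence the age) becomes
    # a minimum estimate: the behavior reported above for doses > 450 Gy.
    ```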

  6. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops. [regression analysis

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24 hour changes of the maximum winds for weak (maximum winds less than or equal to 65 Kt) and strong (maximum winds greater than 65 Kt) tropical cyclones by utilizing satellite measured equivalent blackbody temperatures around the storm alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by the peristence techniques.

  7. Equivalent isotropic scattering formulation for transient short-pulse radiative transfer in anisotropic scattering planar media.

    PubMed

    Guo, Z; Kumar, S

    2000-08-20

    An isotropic scaling formulation is evaluated for transient radiative transfer in a one-dimensional planar slab subject to collimated and/or diffuse irradiation. The Monte Carlo method is used to implement the equivalent scattering and exact simulations of the transient short-pulse radiation transport through forward and backward anisotropic scattering planar media. The scaled equivalent isotropic scattering results are compared with predictions of anisotropic scattering in various problems. It is found that the equivalent isotropic scaling law is not appropriate for backward-scattering media in transient radiative transfer. Even for an optically diffuse medium, the differences in temporal transmittance and reflectance profiles between predictions of backward anisotropic scattering and equivalent isotropic scattering are large. Additionally, for both forward and backward anisotropic scattering media, the transient equivalent isotropic results are strongly affected by the change of photon flight time, owing to the change of flight direction associated with the isotropic scaling technique.
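
    For reference, the steady-state similarity (isotropic scaling) transform evaluated in the paper replaces the anisotropic medium by an isotropic one with reduced scattering; the sketch below shows the transform itself with illustrative optical properties.

    ```python
    # Similarity (isotropic scaling) transform for anisotropic scattering:
    # an anisotropic medium with scattering coefficient sigma_s and asymmetry
    # factor g is replaced by an equivalent isotropic medium with reduced
    # scattering sigma_s_eq = (1 - g) * sigma_s; absorption is unchanged.
    # Optical properties below are illustrative placeholders.
    sigma_a = 0.1      # absorption coefficient (1/mm)
    sigma_s = 10.0     # scattering coefficient (1/mm)
    g = 0.9            # scattering asymmetry factor (strongly forward)

    sigma_s_eq = (1.0 - g) * sigma_s
    albedo_eq = sigma_s_eq / (sigma_s_eq + sigma_a)
    print(f"equivalent isotropic sigma_s = {sigma_s_eq:.2f} /mm, "
          f"single-scattering albedo = {albedo_eq:.3f}")
    # The abstract's caution: this steady-state scaling distorts photon
    # flight times, so it can fail for transient (short-pulse) transport,
    # especially in backward-scattering media.
    ```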

  8. WE-DE-BRA-06: Evaluation of the Imaging Performance of a Novel Water-Equivalent EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, SJ; The Ingham Institute, Liverpool, NSW; Cheng, J

    Purpose: To evaluate the megavoltage imaging performance of a novel, water-equivalent electronic portal imaging device (EPID) developed for simultaneous imaging and dosimetry applications in radiotherapy. Methods: A novel EPID prototype based on active matrix flat panel imager technology has been developed by our group and previously reported to exhibit a water-equivalent dose response. It was constructed by replacing all components above the photodiode detector in a standard clinical EPID (including the copper plate and phosphor screen) with a 15 × 15 cm{sup 2} array of plastic scintillator fibers. Individual fibers measured 0.5 × 0.5 × 30 mm{sup 3}. Spatial resolutionmore » was evaluated experimentally relative to that of a standard EPID with the thin slit technique to measure the modulation transfer function (MTF) for 6 MV x-ray beams. Monte Carlo (MC) EPID models were used to benchmark simulated MTFs against the measurements. The zero spatial frequency detective quantum efficiency (DQE(0)) was simulated for both EPID configurations and a preliminary optimization of the prototype was performed by evaluating DQE(0) as a function of fiber length up to 50 mm. Results: The MC-simulated DQE(0) for the prototype EPID configuration was ∼7 times greater than that of the standard EPID. The prototype’s DQE(0) also increased approximately linearly with fiber length, from ∼1% at 5 mm length to ∼11% at 50 mm length. The standard EPID MTF was greater than the prototype EPID’s for all spatial frequencies, reflecting the trade off between x-ray detection efficiency and spatial resolution with thick scintillators. Conclusion: This study offers promising evidence that a water-equivalent EPID previously demonstrated for radiotherapy dosimetry may also be used for radiotherapy imaging applications. Future studies on optimising the detector design will be performed to develop a next-generation prototype that offers improved megavoltage imaging performance, with the aim to at least match that of current clinical EPIDs. Funding for this project was provided by an Australian Research Council Linkage Project grant (2015) between The University of Sydney, South Western Sydney Local Health District and Perkin-Elmer Pty Ltd.« less

  9. Laser-induced breakdown spectroscopy for in-cylinder equivalence ratio measurements in laser-ignited natural gas engines.

    PubMed

    Joshi, Sachin; Olsen, Daniel B; Dumitrescu, Cosmin; Puzinauskas, Paulius V; Yalin, Azer P

    2009-05-01

    In this contribution we present the first demonstration of simultaneous use of laser sparks for engine ignition and laser-induced breakdown spectroscopy (LIBS) measurements of in-cylinder equivalence ratios. A 1064 nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser beam is used with an optical spark plug to ignite a single-cylinder natural gas engine. The optical emission from the combustion-initiating laser spark is collected through the optical spark plug, and cycle-by-cycle spectra are analyzed for the Hα (656 nm), O (777 nm), and N (742 nm, 744 nm, and 746 nm) neutral atomic lines. The line area ratios Hα/O(777), Hα/N(746), and Hα/N_tot (where N_tot is the sum of areas of the aforementioned N lines) are correlated with equivalence ratios measured by a wide-band universal exhaust gas oxygen (UEGO) sensor. Experiments are performed for input laser energy levels of 21 mJ and 26 mJ, compression ratios of 9 and 11, and equivalence ratios between 0.6 and 0.95. The results show a linear correlation (R² > 0.99) of line intensity ratio with equivalence ratio, thereby suggesting an engine diagnostic method for cylinder-resolved equivalence ratio measurements.
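
    A minimal sketch of the calibration step implied here: fit the line-area ratio linearly against sensor-derived equivalence ratios, then invert the fit for a single-cycle measurement. All numbers are invented for illustration.

    ```python
    import numpy as np

    # Line-ratio calibration for equivalence ratio (phi): fit
    # ratio = a * phi + b on calibration points, then invert for unknowns.
    phi_cal   = np.array([0.60, 0.70, 0.80, 0.90, 0.95])   # from UEGO sensor
    ratio_cal = np.array([1.10, 1.31, 1.49, 1.72, 1.80])   # e.g. Ha/O(777)

    a, b = np.polyfit(phi_cal, ratio_cal, 1)    # linear least squares
    r2 = np.corrcoef(phi_cal, ratio_cal)[0, 1] ** 2
    print(f"ratio = {a:.2f}*phi + {b:.2f}, R^2 = {r2:.4f}")

    ratio_meas = 1.60                           # single-cycle measurement
    phi_est = (ratio_meas - b) / a
    print(f"estimated phi = {phi_est:.2f}")
    ```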

  10. Linear Self-Referencing Techniques for Short-Optical-Pulse Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorrer, C.; Kang, I.

    2008-04-04

    Linear self-referencing techniques for the characterization of the electric field of short optical pulses are presented. The theoretical and practical advantages of these techniques are developed. Experimental implementations are described, and their performance is compared to the performance of their nonlinear counterparts. Linear techniques demonstrate unprecedented sensitivity and are a perfect fit in many domains where the precise, accurate measurement of the electric field of an optical pulse is required.

  11. N-person differential games. Part 1: Duality-finite element methods

    NASA Technical Reports Server (NTRS)

    Chen, G.; Zheng, Q.

    1983-01-01

    The duality approach, which is motivated by computational needs and is implemented by introducing N + 1 Lagrange multipliers, is addressed. For N-person linear quadratic games, the primal min-max problem is shown to be equivalent to the dual min-max problem.

  12. Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique.

    PubMed

    Mook, William R; Greenspoon, Joshua A; Millett, Peter J

    2016-01-01

    Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. The senior author's preferred technique for rotator cuff repair was reviewed and described in a step-by-step fashion. The final construct is a knotless, double-row, transosseous-equivalent construct. The described technique includes the advantages of a double-row construct while also offering self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Arthroscopic knotless double-row rotator cuff repair is a safe and effective method to repair rotator cuff tears.

  13. Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique

    PubMed Central

    Mook, William R.; Greenspoon, Joshua A.; Millett, Peter J.

    2016-01-01

    Background: Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. Methods: The senior author's preferred technique for rotator cuff repair was reviewed and described in a step-by-step fashion. The final construct is a knotless, double-row, transosseous-equivalent construct. Results: The described technique includes the advantages of a double-row construct while also offering self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Conclusion: Arthroscopic knotless double-row rotator cuff repair is a safe and effective method to repair rotator cuff tears. PMID:27733881

  14. Molecular electronics in pinnae of Mimosa pudica

    PubMed Central

    Foster, Justin C; Markin, Vladislav S

    2010-01-01

    Bioelectrochemical circuits operate in all plants including the sensitive plant Mimosa pudica Linn. The activation of biologically closed circuits with voltage gated ion channels can lead to various mechanical, hydrodynamical, physiological, biochemical and biophysical responses. Here the biologically closed electrochemical circuit in pinnae of Mimosa pudica is analyzed using the charged capacitor method for electrostimulation at different voltages. Also the equivalent electrical scheme of electrical signal transduction inside the plant's pinna is evaluated. These circuits remain linear at small potentials not exceeding 0.5 V. At higher potentials the circuits become strongly non-linear, pointing to the opening of ion channels in plant tissues. Changing the polarity of electrodes leads to a strong rectification effect and to different discharge kinetics of the capacitor. These effects can be caused by a redistribution of K⁺, Cl⁻, Ca²⁺ and H⁺ ions through voltage gated ion channels. The electrical properties of Mimosa pudica were investigated and equivalent electrical circuits within the pinnae were proposed to explain the experimental data. PMID:20448476

  15. Molecular electronics in pinnae of Mimosa pudica.

    PubMed

    Volkov, Alexander G; Foster, Justin C; Markin, Vladislav S

    2010-07-01

    Bioelectrochemical circuits operate in all plants including the sensitive plant Mimosa pudica Linn. The activation of biologically closed circuits with voltage gated ion channels can lead to various mechanical, hydrodynamical, physiological, biochemical, and biophysical responses. Here the biologically closed electrochemical circuit in pinnae of Mimosa pudica is analyzed using the charged capacitor method for electrostimulation at different voltages. Also the equivalent electrical scheme of electrical signal transduction inside the plant's pinna is evaluated. These circuits remain linear at small potentials not exceeding 0.5 V. At higher potentials the circuits become strongly non-linear, pointing to the opening of ion channels in plant tissues. Changing the polarity of electrodes leads to a strong rectification effect and to different discharge kinetics of the capacitor. These effects can be caused by a redistribution of K⁺, Cl⁻, Ca²⁺, and H⁺ ions through voltage gated ion channels. The electrical properties of Mimosa pudica were investigated and equivalent electrical circuits within the pinnae were proposed to explain the experimental data.
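
    A small sketch of the charged-capacitor analysis described in these two records: in the linear small-signal regime the tissue acts as an equivalent RC circuit, so fitting the exponential discharge yields the equivalent resistance. The component values and noise level are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # In the linear regime the capacitor discharges through the tissue as
    # V(t) = V0 * exp(-t / (R*C)); fitting the decay gives the equivalent
    # tissue resistance R for a known capacitance C.
    C = 10e-6                                  # capacitor (F)
    R_true, V0 = 470.0, 0.5                    # ohms, volts (small signal)
    t = np.linspace(0, 0.05, 200)              # s
    rng = np.random.default_rng(1)
    V = V0 * np.exp(-t / (R_true * C)) + 0.002 * rng.standard_normal(t.size)

    decay = lambda t, V0, tau: V0 * np.exp(-t / tau)
    (V0_fit, tau_fit), _ = curve_fit(decay, t, V, p0=(0.4, 0.01))
    print(f"equivalent R = {tau_fit / C:.0f} ohm (true {R_true:.0f})")
    # At larger voltages the measured kinetics deviate from a single
    # exponential, consistent with voltage-gated channel opening.
    ```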

  16. Estimating Causal Effects with Ancestral Graph Markov Models

    PubMed Central

    Malinsky, Daniel; Spirtes, Peter

    2017-01-01

    We present an algorithm for estimating bounds on causal effects from observational data which combines graphical model search with simple linear regression. We assume that the underlying system can be represented by a linear structural equation model with no feedback, and we allow for the possibility of latent variables. Under assumptions standard in the causal search literature, we use conditional independence constraints to search for an equivalence class of ancestral graphs. Then, for each model in the equivalence class, we perform the appropriate regression (using causal structure information to determine which covariates to include in the regression) to estimate a set of possible causal effects. Our approach is based on the “IDA” procedure of Maathuis et al. (2009), which assumes that all relevant variables have been measured (i.e., no unmeasured confounders). We generalize their work by relaxing this assumption, which is often violated in applied contexts. We validate the performance of our algorithm on simulated data and demonstrate improved precision over IDA when latent variables are present. PMID:28217244

  17. On the equivalence of experimental B(E2) values determined by various techniques

    DOE PAGES

    Birch, M.; Pritychenko, B.; Singh, B.

    2016-06-30

    In this paper, we establish the equivalence of the various techniques for measuring B(E2) values using a statistical analysis. Data used in this work come from the recent compilation by B. Pritychenko et al. (2016). We consider only those nuclei for which the B(E2) values were measured by at least two different methods, with each method being independently performed at least twice. Our results indicate that the most prevalent methods of measuring B(E2) values are equivalent, with some weak evidence that Doppler-shift attenuation method (DSAM) measurements may differ from Coulomb excitation (CE) and nuclear resonance fluorescence (NRF) measurements. However, such evidence appears to arise from discrepant DSAM measurements of the lifetimes for ⁶⁰Ni and some Sn nuclei rather than a systematic deviation in the method itself.

  18. Measurement of absorbed dose with a bone-equivalent extrapolation chamber.

    PubMed

    DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B

    2002-03-01

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.
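
    For orientation, the sketch below shows the ionization-gradient cavity relation at the core of extrapolation-chamber dosimetry; the charge readings, electrode area, and stopping-power ratio are invented placeholders, and the paper's additional correction factors (scatter deficit, electrode composition) are omitted.

    ```python
    import numpy as np

    # Ionization-gradient dose determination (Spencer-Attix cavity theory)
    # for an extrapolation chamber. All numbers are illustrative.
    W_over_e  = 33.97      # J/C, mean energy per unit charge in dry air
    rho_air   = 1.205      # kg/m^3 at 20 C, 101.325 kPa
    A         = 1.0e-4     # collecting-electrode area (m^2), assumed
    s_med_air = 1.05       # medium-to-air stopping-power ratio, assumed

    # Charge measured at several electrode separations (recombination- and
    # diffusion-corrected), from which the gradient dQ/dl is extrapolated:
    l = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])      # separation (m)
    Q = np.array([1.02e-9, 2.05e-9, 3.04e-9, 4.08e-9])  # charge (C)
    dQ_dl = np.polyfit(l, Q, 1)[0]                      # slope (C/m)

    # Cavity relation: D_med = (W/e) * s_med,air * dQ/dl / (rho_air * A)
    D_med = W_over_e * s_med_air * dQ_dl / (rho_air * A)
    print(f"dQ/dl = {dQ_dl:.3e} C/m, D_med = {D_med:.2f} Gy")
    ```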

  19. Novel three-dimensional autologous tissue-engineered vaginal tissues using the self-assembly technique.

    PubMed

    Orabi, Hazem; Saba, Ingrid; Rousseau, Alexandre; Bolduc, Stéphane

    2017-02-01

    Many diseases necessitate the substitution of vaginal tissues. Current replacement therapies are associated with many complications. In this study, we aimed to create bioengineered neovaginas with the self-assembly technique using autologous vaginal epithelial (VE) and vaginal stromal (VS) cells, without the use of exogenous materials, and to document the survival and incorporation of these grafts into the tissues of nude female mice. Epithelial and stromal cells were isolated from vaginal biopsies. Stromal cells were driven to form collagen sheets, 3 of which were superimposed to form vaginal stromas. VE cells were seeded on top of these stromas and allowed to mature at an air-liquid interface. The vaginal equivalents were implanted subcutaneously in female nude mice, which were sacrificed 1 and 2 weeks after surgery. The in vitro and animal-retrieved equivalents were assessed using histologic, functional, and mechanical evaluations. Vaginal equivalents could be handled easily. VE cells formed a well-differentiated epithelial layer with a continuous basement membrane. The equivalent matrix was composed of collagen I and III and elastin. The epithelium, basement membrane, and stroma were comparable to those of native vaginal tissues. The implanted equivalents formed mature vaginal epithelium and matrix that were integrated into the mice tissues. Using the self-assembly technique, in vitro vaginal tissues were created with many functional and biological similarities to the native vagina, without any foreign material. They formed functional vaginal tissues after in vivo animal implantation. The approach is appropriate for vaginal substitution and for disease modeling in infectious studies, vaginal applicator testing, and drug testing. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Biodosimetry Based on γ-H2AX Quantification and Cytogenetics after Partial- and Total-Body Irradiation during Fractionated Radiotherapy.

    PubMed

    Zahnreich, Sebastian; Ebersberger, Anne; Kaina, Bernd; Schmidberger, Heinz

    2015-04-01

    The aim of this current study was to quantitatively describe radiation-induced DNA damage and its distribution in leukocytes of cancer patients after fractionated partial- or total-body radiotherapy. Specifically, the impact of exposed anatomic region and administered dose was investigated in breast and prostate cancer patients receiving partial-body radiotherapy. DNA double-strand breaks (DSBs) were quantified by γ-H2AX immunostaining. The frequency of unstable chromosomal aberrations in stimulated lymphocytes was also determined and compared with the frequency of DNA DSBs in the same samples. The frequency of radiation-induced DNA damage was converted into dose, using ex vivo generated calibration curves, and was then compared with the administered physical dose. This study showed that 0.5 h after partial-body radiotherapy the quantity of radiation-induced γ-H2AX foci increased linearly with the administered equivalent whole-body dose for both tumor entities. Foci frequencies dropped 1 day thereafter but proportionality to the equivalent whole-body dose was maintained. Conversely, the frequency of radiation-induced cytogenetic damage increased from 0.5 h to 1 day after the first partial-body exposure with a linear dependence on the administered equivalent whole-body dose, for prostate cancer patients only. Only γ-H2AX foci assessment immediately after partial-body radiotherapy was a reliable measure of the expected equivalent whole-body dose. Local tumor doses could be approximated with both assays after one day. After total-body radiotherapy satisfactory dose estimates were achieved with both assays up to 8 h after exposure. In conclusion, the quantification of radiation-induced γ-H2AX foci, but not cytogenetic damage in peripheral leukocytes was a sensitive and rapid biodosimeter after acute heterogeneous irradiation of partial body volumes that was able to primarily assess the absorbed equivalent whole-body dose.

  1. Identification of active sources inside cavities using the equivalent source method-based free-field recovery technique

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian

    2015-06-01

    In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance when applied to a more general noisy environment, that technique is used to identify active sources inside cavities where the sound field is composed of the field radiated by active sources and that reflected by walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field by using the proposed technique, which will be further used as the input of nearfield acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double layer planar array.
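
    As a minimal sketch of the equivalent source method itself (not of the paper's patch-measurement strategy), the code below fits point-source strengths to "measured" pressures by least squares and then evaluates the identified sources at a new field point; the geometry, frequency, and data are invented.

    ```python
    import numpy as np

    # Equivalent source method (ESM) sketch: fit source strengths q so the
    # modeled field G @ q matches measured pressures p, then reconstruct
    # the free-field radiation elsewhere from the identified sources.
    k = 2 * np.pi * 1000 / 343.0                 # wavenumber at 1 kHz

    def green(r_field, r_src):
        """Free-space Green's function e^{ikr}/(4 pi r) between point sets."""
        d = np.linalg.norm(r_field[:, None, :] - r_src[None, :, :], axis=2)
        return np.exp(1j * k * d) / (4 * np.pi * d)

    rng = np.random.default_rng(0)
    src  = rng.uniform(-0.1, 0.1, (20, 3))       # equivalent source points
    mics = rng.uniform(0.3, 0.6, (40, 3))        # measurement positions
    true_q = rng.standard_normal(20) + 1j * rng.standard_normal(20)

    p_meas = green(mics, src) @ true_q           # synthetic "measurements"

    # Least-squares source identification (Tikhonov regularization would be
    # added here for noisy data):
    q_est, *_ = np.linalg.lstsq(green(mics, src), p_meas, rcond=None)

    # Reconstruct the free field at a new point from the identified sources.
    field_pt = np.array([[0.8, 0.0, 0.0]])
    p_rec = green(field_pt, src) @ q_est
    print(np.allclose(p_rec, green(field_pt, src) @ true_q, atol=1e-6))
    ```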

  2. Hyperspherical nuclear motion of H₃⁺ and D₃⁺ in the electronic triplet state, a ³Σᵤ⁺.

    PubMed

    Ferreira, Tiago Mendes; Alijah, Alexander; Varandas, António J C

    2008-02-07

    The potential energy surface of H₃⁺ in the lowest electronic triplet state, a ³Σᵤ⁺, shows three equivalent minima at linear nuclear configurations. The vibrational levels of H₃⁺ and D₃⁺ on this surface can therefore be described as superimposed linear molecule states. Owing to such a superposition, each vibrational state characterized by quantum numbers of an isolated linear molecule obtains a one- and a two-dimensional component. The energy splittings between the two components have now been rationalized within a hyperspherical picture. It is shown that nuclear motion along the hyperangle φ mainly accounts for the splittings and provides upper bounds. This hyperspherical motion can be considered an extension of the antisymmetric stretching motion of the individual linear molecule.

  3. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    PubMed

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.

  4. Numerical prediction of turbulent flame stability in premixed/prevaporized (HSCT) combustors

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1990-01-01

    A numerical analysis of combustion instabilities that induce flashback in a lean, premixed, prevaporized dump combustor is performed. KIVA-II, a finite volume CFD code for the modeling of transient, multidimensional, chemically reactive flows, serves as the principal analytical tool. The experiment of Proctor and T'ien is used as a reference for developing the computational model. An experimentally derived combustion instability mechanism is presented on the basis of the observations of Proctor and T'ien and other investigators of instabilities in low speed (M less than 0.1) dump combustors. The analysis comprises two independent procedures that begin from a calculated stable flame: The first is a linear increase of the equivalence ratio and the second is the linear decrease of the inflow velocity. The objective is to observe changes in the aerothermochemical features of the flow field prior to flashback. It was found that only the linear increase of the equivalence ratio elicits a calculated flashback result. Though this result did not exhibit large scale coherent vortices in the turbulent shear layer coincident with a flame flickering mode as was observed experimentally, there were interesting acoustic effects which were resolved quite well in the calculation. A discussion of the k-ε turbulence model used by KIVA-II is prompted by the absence of combustion instabilities in the model as the inflow velocity is linearly decreased. Finally, recommendations are made for further numerical analysis that may improve correlation with experimentally observed combustion instabilities.

  5. Simulation of broadband ground motion including nonlinear soil effects for a magnitude 6.5 earthquake on the Seattle fault, Seattle, Washington

    USGS Publications Warehouse

    Hartzell, S.; Leeds, A.; Frankel, A.; Williams, R.A.; Odum, J.; Stephenson, W.; Silva, W.

    2002-01-01

    The Seattle fault poses a significant seismic hazard to the city of Seattle, Washington. A hybrid, low-frequency, high-frequency method is used to calculate broadband (0-20 Hz) ground-motion time histories for a M 6.5 earthquake on the Seattle fault. High frequencies (> 1 Hz) are calculated by a stochastic method that uses a fractal subevent size distribution to give an ω⁻² displacement spectrum. Time histories are calculated for a grid of stations and then corrected for the local site response using a classification scheme based on the surficial geology. Average shear-wave velocity profiles are developed for six surficial geologic units: artificial fill, modified land, Esperance sand, Lawton clay, till, and Tertiary sandstone. These profiles together with other soil parameters are used to compare linear, equivalent-linear, and nonlinear predictions of ground motion in the frequency band 0-15 Hz. Linear site-response corrections are found to yield unreasonably large ground motions. Equivalent-linear and nonlinear calculations give peak values similar to the 1994 Northridge, California, earthquake and those predicted by regression relationships. Ground-motion variance is estimated for (1) randomization of the velocity profiles, (2) variation in source parameters, and (3) choice of nonlinear model. Within the limits of the models tested, the results are found to be most sensitive to the nonlinear model and soil parameters, notably the overconsolidation ratio.

  6. A discrete component low-noise preamplifier readout for a linear (1×16) SiC photodiode array

    NASA Astrophysics Data System (ADS)

    Kahle, Duncan; Aslam, Shahid; Herrero, Federico A.; Waczynski, Augustyn

    2016-09-01

    A compact, low-noise and inexpensive preamplifier circuit has been designed and fabricated to optimally read out a common cathode (1×16) channel 4H-SiC Schottky photodiode array for use in ultraviolet experiments. The readout uses an operational amplifier with a 10 pF capacitor in the feedback loop in parallel with a low leakage switch for each of the channels. This circuit configuration allows for reiterative sample, integrate and reset. A sampling technique is given to remove Johnson noise, enabling femtoampere-level readout noise performance. Commercial-off-the-shelf acquisition electronics are used to digitize the preamplifier analog signals. The data logging acquisition electronics has a different integration circuit, which allows the bandwidth and gain to be independently adjusted. Using this readout, photoresponse measurements across the array between spectral wavelengths of 200 nm and 370 nm are made to establish the array pixels' external quantum efficiency, current responsivity and noise equivalent power.

  7. A theoretical measure technique for determining 3D symmetric nearly optimal shapes with a given center of mass

    NASA Astrophysics Data System (ADS)

    Alimorad D., H.; Fakharzadeh J., A.

    2017-07-01

    In this paper, a new approach is proposed for designing nearly-optimal three-dimensional symmetric shapes with a desired physical center of mass. Herein, the main goal is to find such a shape whose image in the (r, θ)-plane is a region divided into a fixed and a variable part. The nearly optimal shape is characterized in two stages. Firstly, for each given domain, the nearly optimal surface is determined by changing the problem into a measure-theoretical one, replacing this with an equivalent infinite-dimensional linear programming problem, and applying approximation schemes; then, a suitable function that offers the optimal value of the objective function for any admissible given domain is defined. In the second stage, by applying a standard optimization method, the global minimizer surface and its related domain are obtained, whose smoothness is addressed by applying outlier detection and smooth fitting methods. Finally, numerical examples are presented and the results are compared to show the advantages of the proposed approach.

  8. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    NASA Astrophysics Data System (ADS)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts; however, residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and an fMRI finger tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume-by-volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  9. On effective temperature in network models of collective behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porfiri, Maurizio, E-mail: mporfiri@nyu.edu; Ariel, Gil, E-mail: arielg@math.biu.ac.il

    Collective behavior of self-propelled units is studied analytically within the Vectorial Network Model (VNM), a mean-field approximation of the well-known Vicsek model. We propose a dynamical systems framework to study the stochastic dynamics of the VNM in the presence of general additive noise. We establish that a single parameter, which is a linear function of the circular mean of the noise, controls the macroscopic phase of the system—ordered or disordered. By establishing a fluctuation–dissipation relation, we posit that this parameter can be regarded as an effective temperature of collective behavior. The exact critical temperature is obtained analytically for systems with small connectivity, equivalent to low-density ensembles of self-propelled units. Numerical simulations are conducted to demonstrate the applicability of this new notion of effective temperature to the Vicsek model. The identification of an effective temperature of collective behavior is an important step toward understanding order–disorder phase transitions, informing consistent coarse-graining techniques and explaining the physics underlying the emergence of collective phenomena.

  10. A Discrete Component Low-Noise Preamplifier Readout for a Linear (1x16) SiC Photodiode Array

    NASA Technical Reports Server (NTRS)

    Kahle, Duncan; Aslam, Shahid; Herrero, Frederico A.; Waczynski, Augustyn

    2016-01-01

    A compact, low-noise and inexpensive preamplifier circuit has been designed and fabricated to optimally read out a common cathode (1x16) channel 4H-SiC Schottky photodiode array for use in ultraviolet experiments. The readout uses an operational amplifier with a 10 pF capacitor in the feedback loop in parallel with a low leakage switch for each of the channels. This circuit configuration allows for reiterative sample, integrate and reset. A sampling technique is given to remove Johnson noise, enabling femtoampere-level readout noise performance. Commercial-off-the-shelf acquisition electronics are used to digitize the preamplifier analogue signals. The data logging acquisition electronics has a different integration circuit, which allows the bandwidth and gain to be independently adjusted. Using this readout, photoresponse measurements across the array between spectral wavelengths of 200 nm and 370 nm are made to establish the array pixels' external quantum efficiency, current responsivity and noise equivalent power.

  11. Effects of ocular aberrations on contrast detection in noise.

    PubMed

    Liang, Bo; Liu, Rong; Dai, Yun; Zhou, Jiawei; Zhou, Yifeng; Zhang, Yudong

    2012-08-06

    We use adaptive optics (AO) techniques to manipulate the ocular aberrations and elucidate the effects of these ocular aberrations on contrast detection in a noisy background. The detectability of sine wave gratings at frequencies of 4, 8, and 16 cycles per degree (cpd) was measured in a standard two-interval forced-choice staircase procedure against backgrounds of various levels of white noise. The observer's ocular aberrations were either corrected with AO or left uncorrected. In low levels of external noise, contrast detection thresholds are always lowered by AO correction, whereas in high levels of external noise, they are generally elevated by AO correction. Higher levels of external noise are required to make this threshold elevation observable when signal spatial frequencies increase from 4 to 16 cpd. The linear-amplifier-model fit shows that, in most conditions, sampling efficiency and equivalent noise both decrease with AO correction. Our findings indicate that ocular aberrations could be beneficial for contrast detection in high-level noises. The implications of these findings are discussed.
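
    A minimal sketch of a linear-amplifier-model fit, assuming the common parameterization in which the squared contrast threshold grows linearly with external noise power; the threshold data are invented, and the exact definitions of efficiency and equivalent noise vary between papers.

    ```python
    import numpy as np

    # Linear amplifier model (LAM), one common parameterization:
    #   c_t^2 = (N_ext + N_eq) / eta'
    # so a straight-line fit of c_t^2 versus N_ext gives the equivalent
    # noise N_eq (from the intercept) and an efficiency-like gain eta'
    # (from the slope). Data values are invented for illustration.
    N_ext = np.array([0.0, 1.0, 2.0, 4.0, 8.0]) * 1e-6   # noise power density
    c_t   = np.array([0.010, 0.013, 0.015, 0.019, 0.026]) # thresholds

    slope, intercept = np.polyfit(N_ext, c_t**2, 1)
    eta_prime = 1.0 / slope
    N_eq = intercept / slope
    print(f"equivalent noise N_eq = {N_eq:.2e}, efficiency term = {eta_prime:.2e}")
    ```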

  12. Diffusion in random networks

    DOE PAGES

    Zhang, Duan Z.; Padrino, Juan C.

    2017-06-01

    The ensemble averaging technique is applied to model mass transport by diffusion in random networks. The system consists of an ensemble of random networks, where each network is made of pockets connected by tortuous channels. Inside a channel, fluid transport is assumed to be governed by the one-dimensional diffusion equation. Mass balance leads to an integro-differential equation for the pocket mass density. The so-called dual-porosity model is found to be equivalent to the leading order approximation of the integration kernel when the diffusion time scale inside the channels is small compared to the macroscopic time scale. As a test problem, we consider the one-dimensional mass diffusion in a semi-infinite domain. Because of the required time to establish the linear concentration profile inside a channel, for early times the similarity variable is xt^(-1/4) rather than xt^(-1/2) as in the traditional theory. We found this early time similarity can be explained by random walk theory through the network.

  13. Mathematical properties and bounds on haplotyping populations by pure parsimony.

    PubMed

    Wang, I-Lin; Chang, Chia-Yuan

    2011-06-01

    Although the haplotype data can be used to analyze the function of DNA, due to the significant efforts required in collecting the haplotype data, usually the genotype data is collected and then the population haplotype inference (PHI) problem is solved to infer haplotype data from genotype data for a population. This paper investigates the PHI problem based on the pure parsimony criterion (HIPP), which seeks the minimum number of distinct haplotypes to infer a given genotype data. We analyze the mathematical structure and properties for the HIPP problem, propose techniques to reduce the given genotype data into an equivalent one of much smaller size, and analyze the relations of genotype data using a compatible graph. Based on the mathematical properties in the compatible graph, we propose a maximal clique heuristic to obtain an upper bound, and a new polynomial-sized integer linear programming formulation to obtain a lower bound for the HIPP problem. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. A spatial operator algebra for manipulator modeling and control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Kreutz, Kenneth; Jain, Abhinandan

    1989-01-01

    A recently developed spatial operator algebra, useful for modeling, control, and trajectory design of manipulators, is discussed. The elements of this algebra are linear operators whose domain and range spaces consist of forces, moments, velocities, and accelerations. The effect of these operators is equivalent to a spatial recursion along the span of a manipulator. Inversion of operators can be efficiently obtained via techniques of recursive filtering and smoothing. The operator algebra provides a high level framework for describing the dynamic and kinematic behavior of a manipulator and control and trajectory design algorithms. The interpretation of expressions within the algebraic framework leads to enhanced conceptual and physical understanding of manipulator dynamics and kinematics. Furthermore, implementable recursive algorithms can be immediately derived from the abstract operator expressions by inspection. Thus, the transition from an abstract problem formulation and solution to the detailed mechanization of specific algorithms is greatly simplified. The analytical formulation of the operator algebra, as well as its implementation in the Ada programming language, are discussed.

  15. Grating lobe elimination in steerable parametric loudspeaker.

    PubMed

    Shi, Chuang; Gan, Woon-Seng

    2011-02-01

    In the past two decades, the majority of research on the parametric loudspeaker has concentrated on the nonlinear modeling of acoustic propagation and pre-processing techniques to reduce nonlinear distortion in sound reproduction. There are, however, very few studies on directivity control of the parametric loudspeaker. In this paper, we propose an equivalent circular Gaussian source array that approximates the directivity characteristics of the linear ultrasonic transducer array. By using this approximation, the directivity of the sound beam from the parametric loudspeaker can be predicted by the product directivity principle. New theoretical results, which are verified through measurements, are presented to show the effectiveness of the delay-and-sum beamsteering structure for the parametric loudspeaker. Unlike the conventional loudspeaker array, where the spacing between array elements must be less than half the wavelength to avoid spatial aliasing, the parametric loudspeaker can take advantage of grating lobe elimination to extend the spacing of ultrasonic transducer array to more than 1.5 wavelengths in a typical application.
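
    The product directivity principle mentioned here can be sketched directly: steer the primary-frequency array factors by delay-and-sum and multiply them to approximate the difference-frequency beam. The array geometry and frequencies below are illustrative, not those of the paper.

    ```python
    import numpy as np

    # Delay-and-sum steering plus the product directivity principle: the
    # difference-frequency beam of a parametric loudspeaker is approximated
    # by the product of the two primary-frequency array directivities.
    c = 343.0                      # speed of sound (m/s)
    f1, f2 = 40e3, 41e3            # primary ultrasonic frequencies (Hz)
    N, d = 16, 0.01                # 16 transducers at 10 mm pitch (> lambda)
    theta_s = np.deg2rad(20.0)     # steering angle

    theta = np.deg2rad(np.linspace(-90, 90, 721))
    x = (np.arange(N) - (N - 1) / 2) * d   # element positions (m)

    def array_factor(f, theta, theta_s):
        k = 2 * np.pi * f / c
        # delay-and-sum: per-element phases steer the main lobe to theta_s
        phases = np.exp(1j * k * x[None, :] * (np.sin(theta)[:, None]
                                               - np.sin(theta_s)))
        return np.abs(phases.sum(axis=1)) / N

    D1 = array_factor(f1, theta, theta_s)
    D2 = array_factor(f2, theta, theta_s)
    D_audio = D1 * D2              # product directivity at f2 - f1 = 1 kHz

    peak = theta[np.argmax(D_audio)]
    print(f"difference-frequency beam points at {np.rad2deg(peak):.1f} deg")
    # The primaries' grating lobes fall at slightly different angles, so
    # their product is suppressed relative to either primary beam alone.
    ```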

  16. Nuclear spin circular dichroism.

    PubMed

    Vaara, Juha; Rizzo, Antonio; Kauczor, Joanna; Norman, Patrick; Coriani, Sonia

    2014-04-07

    Recent years have witnessed a growing interest in magneto-optic spectroscopy techniques that use nuclear magnetization as the source of the magnetic field. Here we present a formulation of magnetic circular dichroism (CD) due to magnetically polarized nuclei, nuclear spin-induced CD (NSCD), in molecules. The NSCD ellipticity and nuclear spin-induced optical rotation (NSOR) angle correspond to the real and imaginary parts, respectively, of (complex) quadratic response functions involving the dynamic second-order interaction of the electron system with the linearly polarized light beam, as well as the static magnetic hyperfine interaction. Using the complex polarization propagator framework, NSCD and NSOR signals are obtained at frequencies in the vicinity of optical excitations. Hartree-Fock and density-functional theory calculations on relatively small model systems, ethene, benzene, and 1,4-benzoquinone, demonstrate the feasibility of the method for obtaining relatively strong nuclear spin-induced ellipticity and optical rotation signals. Comparison of the proton and carbon-13 signals of ethanol reveals that these resonant phenomena facilitate chemical resolution between non-equivalent nuclei in magneto-optic spectra.

  17. Computation of linear acceleration through an internal model in the macaque cerebellum

    PubMed Central

    Laurens, Jean; Meng, Hui; Angelaki, Dora E.

    2013-01-01

    A combination of theory and behavioral findings has supported a role for internal models in the resolution of sensory ambiguities and sensorimotor processing. Although the cerebellum has been proposed as a candidate for implementation of internal models, concrete evidence from neural responses is lacking. Here we exploit unnatural motion stimuli, which induce incorrect self-motion perception and eye movements, to explore the neural correlates of an internal model proposed to compensate for Einstein’s equivalence principle and generate neural estimates of linear acceleration and gravity. We show that caudal cerebellar vermis Purkinje cells and cerebellar nuclei neurons selective for actual linear acceleration also encode erroneous linear acceleration, as expected from the internal model hypothesis, even when no actual linear acceleration occurs. These findings provide strong evidence that the cerebellum might be involved in the implementation of internal models that mimic physical principles to interpret sensory signals, as previously hypothesized by theorists. PMID:24077562

  18. Practical Methodology for the Inclusion of Nonlinear Slosh Damping in the Stability Analysis of Liquid-Propelled Space Vehicles

    NASA Technical Reports Server (NTRS)

    Ottander, John A.; Hall, Robert A.; Powers, J. F.

    2018-01-01

    A method is presented that allows for the prediction of the magnitude of limit cycles due to adverse control-slosh interaction in liquid-propelled space vehicles with non-linear slosh damping. Such a method is an alternative to the industry practice of assuming linear damping and relying on mechanical slosh baffles to achieve desired stability margins, accepting minimal slosh stability margins, or using time-domain non-linear analysis to accept time periods of poor stability. Sinusoidal-input describing function analysis is used to develop a relationship between the non-linear slosh damping and an equivalent linear damping at a given slosh amplitude. In addition, a more accurate analytical prediction of the danger zone for slosh mass locations in a vehicle under proportional and derivative attitude control is presented. This method is used in the control-slosh stability analysis of the NASA Space Launch System.
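
    The describing-function step can be illustrated with the textbook case of quadratic (velocity-squared) damping, for which a sinusoidal motion gives the classical equivalent linear coefficient. The sketch below is a minimal numerical check of that relation with made-up coefficients, not the paper's slosh parameters.

    ```python
    import numpy as np

    # Minimal sketch of the describing-function idea: for quadratic damping
    # F = c_q*|v|*v and sinusoidal motion x = A*sin(w*t), the classical
    # equivalent linear coefficient is c_eq = (8/(3*pi)) * c_q * w * A.
    # Numbers below are illustrative only.
    c_q = 2.0        # nonlinear (quadratic) damping coefficient
    w = 4.0          # slosh frequency, rad/s
    A = 0.05         # assumed limit-cycle slosh amplitude, m

    c_eq_analytic = 8.0 / (3.0 * np.pi) * c_q * w * A

    # Numerical check: match energy dissipated per cycle by the nonlinear and
    # equivalent linear dampers, i.e. the integral of F*v dt over one period.
    t = np.linspace(0.0, 2 * np.pi / w, 20001)
    v = A * w * np.cos(w * t)
    E_nl = np.trapz(c_q * np.abs(v) * v * v, t)
    c_eq_numeric = E_nl / np.trapz(v * v, t)
    print(c_eq_analytic, c_eq_numeric)   # the two values agree closely
    ```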

  19. Analysis of linear and cyclic oligomers in polyamide-6 without sample preparation by liquid chromatography using the sandwich injection method. II. Methods of detection and quantification and overall long-term performance.

    PubMed

    Mengerink, Y; Peters, R; Kerkhoff, M; Hellenbrand, J; Omloo, H; Andrien, J; Vestjens, M; van der Wal, S

    2000-05-05

    By separating the first six linear and cyclic oligomers of polyamide-6 on a reversed-phase high-performance liquid chromatographic system after sandwich injection, quantitative determination of these oligomers becomes feasible. Low-wavelength UV detection of the different oligomers and selective post-column reaction detection of the linear oligomers with o-phthalic dicarboxaldehyde (OPA) and 3-mercaptopropionic acid (3-MPA) are discussed. A general methodology for quantification of oligomers in polymers was developed. It is demonstrated that the empirically determined group-equivalent absorption coefficients and quench factors are a convenient way of quantifying linear and cyclic oligomers of nylon-6. The overall long-term performance of the method was studied by monitoring a reference sample and the calibration factors of the linear and cyclic oligomers.

  20. Origin of nonsaturating linear magnetoresistivity

    NASA Astrophysics Data System (ADS)

    Kisslinger, Ferdinand; Ott, Christian; Weber, Heiko B.

    2017-01-01

    The observation of nonsaturating classical linear magnetoresistivity has been an enigmatic phenomenon in solid-state physics. We present a study of a two-dimensional ohmic conductor, including the local Hall effect and a self-consistent consideration of the environment. An equivalent-circuit scheme delivers a simple and convincing argument why the magnetoresistivity is linear in a strong magnetic field, provided that the current and the biasing electric field are misaligned by a nonlocal mechanism. A finite-element model of a two-dimensional conductor is suited to display the situations that create such deviating currents. Besides edge effects next to electrodes, charge carrier density fluctuations efficiently generate this effect. However, mobility fluctuations, which have frequently been related to linear magnetoresistivity, are barely relevant. Despite its rare observation, linear magnetoresistivity is rather the rule than the exception in a regime of low charge carrier densities, misaligned current pathways, and strong magnetic field.

  1. A mechanical comparison of linear and double-looped hung supplemental heavy chain resistance to the back squat: a case study.

    PubMed

    Neelly, Kurt R; Terry, Joseph G; Morris, Martin J

    2010-01-01

    A relatively new and scarcely researched technique to increase strength is the use of supplemental heavy chain resistance (SHCR) in conjunction with plate weights to provide variable resistance to free weight exercises. The purpose of this case study was to determine the actual resistance being provided by a double-looped versus a linear hung SHCR to the back squat exercise. The linear technique simply hangs the chain directly from the bar, whereas the double-looped technique uses a smaller chain to adjust the height of the looped chain. In both techniques, as the squat descends, chain weight is unloaded onto the floor, and as the squat ascends, chain weight is progressively loaded back as resistance. One experienced and trained male weight lifter (age = 33 yr; height = 1.83 m; weight = 111.4 kg) served as the subject. Plate weight was set at 84.1 kg, approximately 50% of the subject's 1 repetition maximum. The SHCR was affixed to load cells, sampling at a frequency of 500 Hz, which were affixed to the Olympic bar. Data were collected as the subject completed the back squat under the following conditions: double-looped 1 chain (9.6 kg), double-looped 2 chains (19.2 kg), linear 1 chain, and linear 2 chains. The double-looped SHCR resulted in a 78-89% unloading of the chain weight at the bottom of the squat, whereas the linear hanging SHCR resulted in only a 36-42% unloading. The double-looped technique provided nearly 2 times the variable resistance at the top of the squat compared with the linear hanging technique, showing that attention must be given to the technique used to hang SHCR.

  2. Derivative information recovery by a selective integration technique

    NASA Technical Reports Server (NTRS)

    Johnson, M. A.

    1974-01-01

    A nonlinear stationary homogeneous digital filter DIRSIT (derivative information recovery by a selective integration technique) is investigated. The spectrum of a quasi-linear discrete describing function (DDF) to DIRSIT is obtained by a digital measuring scheme. A finite impulse response (FIR) approximation to the quasi-linearization is then obtained. Finally, DIRSIT is compared with its quasi-linear approximation and with a standard digital differentiating technique. Results indicate the effects of DIRSIT on a wide variety of practical signals.

  3. Correlations between homologue concentrations of PCDD/Fs and toxic equivalency values in laboratory-, package boiler-, and field-scale incinerators.

    PubMed

    Iino, Fukuya; Takasuga, Takumi; Touati, Abderrahmane; Gullett, Brian K

    2003-01-01

    The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and operating municipal waste incinerators (114 data points). Regardless of the three scales and types of equipment, the different temperature profiles, the sampling of emissions and/or solids (fly ash), and the various chemical and physical properties of the fuels, all the PCDF plots showed highly linear correlations (R(2)>0.99). The fitted lines for the reactor and boiler data had slopes near unity, whereas the slope for the municipal waste incinerator data was 0.86, which is caused by higher predicted values for samples with high measured TEQ. The strong correlation also implies that each of the 10 toxic PCDF congeners has a constant concentration relative to its respective total homologue concentration, despite a wide range of facility types and combustion conditions. The PCDD plots showed significant scatter and poor linearity, which implies that the relative concentration of PCDD TEQ congeners is more sensitive to variations in reaction conditions than that of the PCDF congeners.

  4. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
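
    The BLUP/ridge equivalence invoked here is easy to verify numerically for a plain linear mixed model. The toy sketch below (simulated genotypes and assumed variance components, none of the paper's GLMM machinery) checks that the two estimators coincide with penalty lam = s2_e / s2_b.

    ```python
    import numpy as np

    # Sketch of the equivalence: with SNP coefficients treated as i.i.d.
    # random effects b ~ N(0, s2_b*I) and errors N(0, s2_e*I), the BLUP of b
    # solves a ridge regression with penalty lam = s2_e / s2_b.
    rng = np.random.default_rng(0)
    n, p = 200, 10
    G = rng.choice([0.0, 1.0, 2.0], size=(n, p))      # toy genotype matrix
    b_true = rng.normal(0.0, 0.5, p)
    y = G @ b_true + rng.normal(0.0, 1.0, n)

    s2_e, s2_b = 1.0, 0.25                             # assumed variances
    lam = s2_e / s2_b

    # Ridge estimator
    b_ridge = np.linalg.solve(G.T @ G + lam * np.eye(p), G.T @ y)

    # BLUP: E[b | y] = s2_b * G.T @ V^{-1} @ y, V = s2_b*G@G.T + s2_e*I
    V = s2_b * G @ G.T + s2_e * np.eye(n)
    b_blup = s2_b * G.T @ np.linalg.solve(V, y)

    print(np.allclose(b_ridge, b_blup))   # True: the estimators coincide
    ```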

  5. Integration of different data gap filling techniques to facilitate assessment of polychlorinated biphenyls: A proof of principle case study (ASCCT meeting)

    EPA Science Inventory

    Data gap filling techniques are commonly used to predict hazard in the absence of empirical data. The most established techniques are read-across, trend analysis and quantitative structure-activity relationships (QSARs). Toxic equivalency factors (TEFs) are less frequently used d...

  6. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  7. Linearization of digital derived rate algorithm for use in linear stability analysis

    NASA Technical Reports Server (NTRS)

    Graham, R. E.; Porada, T. W.

    1985-01-01

    The digital derived rate (DDR) algorithm is used to calculate the rate of rotation of the Centaur upper-stage rocket. The DDR is a highly nonlinear algorithm, and classical linear stability analysis of the spacecraft cannot be performed without linearization. The performance of this rate algorithm is characterized by gain and phase curves that drop off at the same frequency. This characteristic is desirable for many applications. A linearization technique for the DDR algorithm is investigated and the linearization method is described. Examples of the results of the linearization technique are illustrated, and the effects of linearization are described. A linear digital filter may be used as a substitute for performing classical linear stability analyses, while the DDR itself may be used in time response analysis.

  8. Space Radiation Organ Doses for Astronauts on Past and Future Missions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    2007-01-01

    We review methods and data used for determining astronaut organ dose equivalents on past space missions including Apollo, Skylab, Space Shuttle, NASA-Mir, and International Space Station (ISS). Expectations for future lunar missions are also described. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra, or a related quantity, the lineal energy (y) spectra that are measured by a tissue equivalent proportional counter (TEPC). These data are used in conjunction with space radiation transport models to project organ-specific doses used in cancer and other risk projection models. Biodosimetry data from Mir, STS, and ISS missions provide an alternative estimate of organ dose equivalents based on chromosome aberrations. The physical environments inside spacecraft are currently well understood, with errors in organ dose projections estimated as less than plus or minus 15%; however, understanding the biological risks from space radiation remains a difficult problem because of the many radiation types, including protons, heavy ions, and secondary neutrons, for which there are no human data to estimate risks. The accuracy of projections of organ dose equivalents described here must be supplemented with research on the health risks of space exposure to properly assess crew safety for exploration missions.

  9. Design and simulation of optoelectronic complementary dual neural elements for realizing a family of normalized vector 'equivalence-nonequivalence' operations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.

    2010-04-01

    The advantages of equivalence models (EMs) of neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with the basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence", and "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several-fold. Such neuroparadigms are very promising for processing, recognizing, and storing large and strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of the generalized operations fuzzy negation, t-norm, and s-norm. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections, together with sample-storage devices with pulse-width photoconverters, have allowed us to design generalized structures for realizing the family of normalized linear vector operations "equivalence"-"nonequivalence". Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, operate at low supply voltages (1-3 V) with low power consumption (milliwatts) and low input-signal levels (microwatts), permit integrated construction, and satisfy interconnection and cascading requirements.

  10. Battery-Charge-State Model

    NASA Technical Reports Server (NTRS)

    Vivian, H. C.

    1985-01-01

    Charge-state model for lead/acid batteries proposed as part of effort to make equivalent of fuel gage for battery-powered vehicles. Model is based on equations that approximate observable characteristics of battery electrochemistry. It uses linear equations, which are easier to simulate on a computer, and gives smooth transitions between charge, discharge, and recuperation.

  11. Description of a computer program and numerical techniques for developing linear perturbation models from nonlinear systems simulations

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1978-01-01

    A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
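
    A minimal sketch of the general idea (not the NASA program itself): central-difference Jacobians of a nonlinear simulation xdot = f(x, u) about a reference point give the linear perturbation model. The dynamics function below is an arbitrary stand-in.

    ```python
    import numpy as np

    # Illustrative nonlinear dynamics; replace with the simulation of interest.
    def f(x, u):
        return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

    def jacobians(f, x0, u0, eps=1e-6):
        """Return A = df/dx and B = df/du at (x0, u0) by central differences."""
        n, m = len(x0), len(u0)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for j in range(n):
            dx = np.zeros(n); dx[j] = eps
            A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
        return A, B

    # Linear perturbation model about (x0, u0): d(dx)/dt = A dx + B du
    A, B = jacobians(f, np.array([0.2, 0.0]), np.array([0.0]))
    print(A); print(B)
    ```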

  12. Thermospheric dynamics - A system theory approach

    NASA Technical Reports Server (NTRS)

    Codrescu, M.; Forbes, J. M.; Roble, R. G.

    1990-01-01

    A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.

  13. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from them, offering an interesting approach to solving the problem with a reduced running time.
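
    A toy sketch of the decomposition idea, under the assumption that fixing one variable on a grid makes each subproblem a linear program; the objective, constraints, and grid are illustrative, not the paper's algorithm.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Minimize the nonconvex product f(x, y) = -x*y over a polytope by
    # gridding x; for each fixed grid value the subproblem in y is an LP.
    best = (np.inf, None)
    for x in np.linspace(0.0, 4.0, 81):              # grid on one variable
        # subproblem: minimize (-x)*y  s.t.  x + y <= 5,  0 <= y <= 3
        res = linprog(c=[-x], A_ub=[[1.0]], b_ub=[5.0 - x],
                      bounds=[(0.0, 3.0)])
        if res.success and res.fun < best[0]:
            best = (res.fun, (x, res.x[0]))
    print("near-optimal point:", best[1], "objective:", best[0])
    ```

    A finer grid tightens the gap to the true optimum (here near x = y = 2.5), at the cost of more LP solves, which mirrors the polynomial-number-of-subproblems trade-off described above.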

  14. Use of a Linear Paul Trap to Study Random Noise-Induced Beam Degradation in High-Intensity Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Moses; Gilson, Erik P.; Davidson, Ronald C.

    2009-04-10

    A random noise-induced beam degradation that can affect intense beam transport over long propagation distances has been experimentally studied by making use of the transverse beam dynamics equivalence between an alternating-gradient (AG) focusing system and a linear Paul trap system. For the present studies, machine imperfections in the quadrupole focusing lattice are considered, which are emulated by adding small random noise on the voltage waveform of the quadrupole electrodes in the Paul trap. It is observed that externally driven noise continuously produces a nonthermal tail of trapped ions, and increases the transverse emittance almost linearly with the duration of the noise.

  15. Global invariants of paths and curves for the group of all linear similarities in the two-dimensional Euclidean space

    NASA Astrophysics Data System (ADS)

    Khadjiev, Djavvat; Ören, İdris; Pekşen, Ömer

    Let E2 be the 2-dimensional Euclidean space, LSim(2) be the group of all linear similarities of E2 and LSim+(2) be the group of all orientation-preserving linear similarities of E2. The present paper is devoted to solutions of problems of global G-equivalence of paths and curves in E2 for the groups G = LSim(2),LSim+(2). Complete systems of global G-invariants of a path and a curve in E2 are obtained. Existence and uniqueness theorems are given. Evident forms of a path and a curve with the given global invariants are obtained.

  16. Correlation of open cell-attached and excised patch clamp techniques.

    PubMed

    Filipovic, D; Hayslett, J P

    1995-11-01

    The excised patch clamp configuration provides a unique technique for some types of single channel analyses, but maintenance of stable, long-lasting preparations may be confounded by rundown and/or rapid loss of seal. Studies were performed on the amiloride-sensitive Na+ channel, located on the apical surface of A6 cells, to determine whether the nystatin-induced open cell-attached patch could serve as an alternative configuration. Stable preparations were achieved more readily with the open cell-attached patch than with excised inside-out patches (56% vs. 9% of attempts). In both preparations, the current-voltage (I-V) relation was linear, current amplitudes were equal at opposite equivalent clamped voltages, and Erev was zero in symmetrical Na+ solutions, indicating similar Na+ activities on the cytosolic and external surfaces of the patch. Moreover, there was no evidence that nystatin altered channel activity in the patch because slope conductance (3-4 pS) and Erev (75 mV), when the bath was perfused with a high K:low Na solution (ENa = 80 mV), were nearly equal in both patch configurations. Our results therefore indicate that the nystatin-induced open cell-attached patch can serve as an alternative approach to the excised inside-out patch when experiments require modulation of univalent ions in the cytosol.
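
    The linear I-V analysis described here amounts to a least-squares fit. The sketch below uses invented data points, chosen to give a slope in the reported 3-4 pS range, and recovers the slope conductance and reversal potential.

    ```python
    import numpy as np

    # Least-squares sketch of a single-channel I-V fit; data are invented.
    V = np.array([-80.0, -40.0, 0.0, 40.0, 80.0])    # clamped voltage, mV
    I = np.array([-0.27, -0.14, 0.01, 0.15, 0.28])   # channel current, pA

    g, I0 = np.polyfit(V, I, 1)                      # linear I-V: I = g*V + I0
    print("slope conductance: %.1f pS" % (g * 1e3))  # pA/mV = nS, *1e3 -> pS
    print("reversal potential: %.1f mV" % (-I0 / g)) # ~0 in symmetrical Na+
    ```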

  17. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040 atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  18. Performance assessment of a single-pixel compressive sensing imaging system

    NASA Astrophysics Data System (ADS)

    Du Bosq, Todd W.; Preece, Bradley L.

    2016-05-01

    Conventional electro-optical and infrared (EO/IR) systems capture an image by measuring the light incident at each of the millions of pixels in a focal plane array. Compressive sensing (CS) involves capturing a smaller number of unconventional measurements from the scene, and then using a companion process known as sparse reconstruction to recover the image as if a fully populated array satisfying the Nyquist criterion had been used. Therefore, CS operates under the assumption that signal acquisition and data compression can be accomplished simultaneously. CS has the potential to acquire an image with equivalent information content to a large format array while using smaller, cheaper, and lower bandwidth components. However, the benefits of CS do not come without compromise. The CS architecture chosen must effectively balance between physical considerations (SWaP-C), reconstruction accuracy, and reconstruction speed to meet operational requirements. To properly assess the value of such systems, it is necessary to fully characterize the image quality, including artifacts and sensitivity to noise. Imagery of the two-handheld object target set at range was collected using a passive SWIR single-pixel CS camera for various ranges, mirror resolution, and number of processed measurements. Human perception experiments were performed to determine the identification performance within the trade space. The performance of the nonlinear CS camera was modeled with the Night Vision Integrated Performance Model (NV-IPM) by mapping the nonlinear degradations to an equivalent linear shift invariant model. Finally, the limitations of CS modeling techniques are discussed.

  19. Using crosscorrelation techniques to determine the impulse response of linear systems

    NASA Technical Reports Server (NTRS)

    Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.

    1993-01-01

    A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
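
    A minimal sketch of the crosscorrelation method on a simulated LTI system: for a white-noise probe, the input-output crosscorrelation is proportional to the impulse response. The example system, noise level, and signal lengths are assumptions, not the hardware of the paper.

    ```python
    import numpy as np

    # For white-noise input x, an LTI system's input-output crosscorrelation
    # satisfies R_xy[k] = sigma_x^2 * h[k], so h can be read off directly.
    rng = np.random.default_rng(1)
    h_true = np.exp(-0.2 * np.arange(50))         # assumed impulse response
    x = rng.normal(0.0, 1.0, 200000)              # white-noise probe signal
    y = np.convolve(x, h_true)[: len(x)]          # noiseless system output

    lags = len(h_true)
    h_est = np.array([np.dot(x[: len(x) - k], y[k:]) for k in range(lags)])
    h_est /= len(x) * np.var(x)                   # normalize by N * sigma_x^2
    print(np.max(np.abs(h_est - h_true)))         # small estimation error
    ```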

  20. Modeling and possible implementation of self-learning equivalence-convolutional neural structures for auto-encoding-decoding and clusterization of images

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-08-01

    Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. The SLECNS architectures and their spatially invariant equivalent models (SI EMs), which use the corresponding matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments with better efficiency and strong cross-correlation. The proposed method for clustering fragments according to their structural features is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed and designed on the basis of recursive processing algorithms and the k-average method. The experimental results confirmed that larger images and 2D binary fragments with large numbers of elements may be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. A clustering experiment was carried out for an image with dimensions of 256x256 (a reference array) and fragments with dimensions of 7x7 and 21x21. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and confirmed its promise. Thus, it is very important to understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in the neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns; these rely on algorithms and principles for the non-linear processing of two-dimensional spatial image-comparison functions. The SI EMs can simply describe the signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are based on the proposed equivalental two-dimensional image-similarity functions. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the applied types of non-linear processing and on the function used for image comparison or for adaptive-equivalental weighting of input patterns. Real model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions makes it possible to determine the winning neurons and to adjust the weight matrix. Experimental results have shown that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed. The parameters and performance of such architectures are estimated.

  1. Broadband linearisation of high-efficiency power amplifiers

    NASA Technical Reports Server (NTRS)

    Kenington, Peter B.; Parsons, Kieran J.; Bennett, David W.

    1993-01-01

    A feedforward-based amplifier linearization technique is presented which is capable of yielding significant improvements in both linearity and power efficiency over conventional amplifier classes (e.g., class-A or class-AB). Theoretical and practical results are presented showing that class-C stages may be used for both the main and error amplifiers, yielding practical efficiencies well in excess of 30 percent, with theoretical efficiencies of much greater than 40 percent being possible. The levels of linearity which may be achieved meet the requirements of most satellite systems; however, if greater linearity is required, the technique may be used in addition to conventional pre-distortion techniques.
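
    A toy numeric illustration of the feedforward principle (not the paper's class-C hardware): the attenuated main-amplifier output minus the input isolates the distortion, which an assumed-clean error amplifier then cancels at the output.

    ```python
    import numpy as np

    # Feedforward linearisation sketch with illustrative gain and distortion.
    G = 10.0                                   # nominal main-amplifier gain

    def main_amp(x):
        return G * x - 2.0 * x**3              # illustrative distorting stage

    def error_amp(e):
        return G * e                           # assumed ideal error amplifier

    t = np.linspace(0.0, 1e-3, 1000)
    x = 0.3 * np.sin(2 * np.pi * 10e3 * t)     # single-tone test signal

    y_main = main_amp(x)
    error = y_main / G - x                     # isolate the distortion term
    y_out = y_main - error_amp(error)          # re-inject it in antiphase
    print(np.max(np.abs(y_out - G * x)))       # ~0: distortion cancelled
    ```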

  2. Investigation of the response characteristics of OSL albedo neutron dosimeters in a 241AmBe reference neutron field

    NASA Astrophysics Data System (ADS)

    Liamsuwan, T.; Wonglee, S.; Channuie, J.; Esoa, J.; Monthonwattana, S.

    2017-06-01

    The objective of this work was to systematically investigate the response characteristics of optically stimulated luminescence Albedo neutron (OSLN) dosimeters to ensure reliable personal dosimetry service provided by Thailand Institute of Nuclear Technology (TINT). Several batches of InLight® OSLN dosimeters were irradiated in a reference neutron field generated by the in-house 241AmBe neutron irradiator. The OSL signals were typically measured 24 hours after irradiation using the InLight® Auto 200 Reader. Based on known values of delivered neutron dose equivalent, the reading correction factor to be used by the reader was evaluated. Subsequently, batch homogeneity, dose linearity, lower limit of detection and fading of the OSLN dosimeters were examined. Batch homogeneity was evaluated to be 0.12 ± 0.05. The neutron dose response exhibited a linear relationship (R2=0.9974) within the detectable neutron dose equivalent range under test (0.4-3 mSv). For this neutron field, the lower limit of detection was between 0.2 and 0.4 mSv. Over different post-irradiation storage times of up to 180 days, the readings fluctuated within ±5%. Personal dosimetry based on the investigated OSLN dosimeter is considered to be reliable under similar neutron exposure conditions, i.e. similar neutron energy spectra and dose equivalent values.

  3. Parallel But Not Equivalent: Challenges and Solutions for Repeated Assessment of Cognition over Time

    PubMed Central

    Gross, Alden L.; Inouye, Sharon K.; Rebok, George W.; Brandt, Jason; Crane, Paul K.; Parisi, Jeanine M.; Tommet, Doug; Bandeen-Roche, Karen; Carlson, Michelle C.; Jones, Richard N.

    2013-01-01

    Objective Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research. Study Design and Setting The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies. Results Using visual displays, raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05). Conclusion Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms. PMID:22540849
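
    A bare-bones sketch of the equipercentile idea on simulated scores: each score on the alternate form is mapped to the reference-form score with the same percentile rank. Distributions and sample sizes are invented; this is not the ACTIVE/ADNI analysis code.

    ```python
    import numpy as np

    # Equipercentile equating: map a form-Y score to the form-X score with
    # the same percentile rank. Simulated score distributions.
    rng = np.random.default_rng(2)
    x = rng.normal(50.0, 10.0, 1500)           # reference form scores
    y = rng.normal(45.0, 8.0, 1500)            # harder alternate form

    def equate(y_new, y_ref=y, x_ref=x):
        """Equipercentile mapping of y_new onto the scale of x_ref."""
        pct = (np.searchsorted(np.sort(y_ref), y_new) / len(y_ref)) * 100.0
        return np.percentile(x_ref, pct)

    print(equate(45.0))   # ~50: same percentile rank as the reference mean
    ```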

  4. Folk Theorems on the Correspondence between State-Based and Event-Based Systems

    NASA Astrophysics Data System (ADS)

    Reniers, Michel A.; Willemse, Tim A. C.

    Kripke Structures and Labelled Transition Systems are the two most prominent semantic models used in concurrency theory. Both models are commonly believed to be equi-expressive. One can find many ad-hoc embeddings of one of these models into the other. We build upon the seminal work of De Nicola and Vaandrager that firmly established the correspondence between stuttering equivalence in Kripke Structures and divergence-sensitive branching bisimulation in Labelled Transition Systems. We show that their embeddings can also be used for a range of other equivalences of interest, such as strong bisimilarity, simulation equivalence, and trace equivalence. Furthermore, we extend the results by De Nicola and Vaandrager by showing that there are additional translations that allow one to use minimisation techniques in one semantic domain to obtain minimal representatives in the other semantic domain for these equivalences.

  5. Serum 25-hydroxyvitamin D level is associated with myopia in the Korea national health and nutrition examination survey.

    PubMed

    Kwon, Jin-Woo; Choi, Jin A; La, Tae Yoon

    2016-11-01

    The aim of this article was to assess the associations of serum 25-hydroxyvitamin D [25(OH)D] and daily sun exposure time with myopia in Korean adults. This study is based on the Korea National Health and Nutrition Examination Survey (KNHANES) of Korean adults in 2010-2012; multiple logistic regression analyses were performed to examine the associations of serum 25(OH)D levels and daily sun exposure time with myopia, defined as spherical equivalent ≤-0.5D, after adjustment for age, sex, household income, body mass index (BMI), exercise, intraocular pressure (IOP), and education level. Also, multiple linear regression analyses were performed to examine the relationship of serum 25(OH)D levels with spherical equivalent after adjustment for daily sun exposure time in addition to the confounding factors above. Between the nonmyopic and myopic groups, spherical equivalent, age, IOP, BMI, waist circumference, education level, household income, and area of residence differed significantly (all P < 0.05). Compared with subjects with daily sun exposure time <2 hours, subjects with sun exposure time ≥2 to <5 hours and those with sun exposure time ≥5 hours had significantly less myopia (P < 0.001). In addition, when subjects were categorized into quartiles of serum 25(OH)D, the higher quartiles showed progressively lower prevalence of myopia after adjustment for confounding factors (P < 0.001). In multiple linear regression analyses, spherical equivalent was significantly associated with serum 25(OH)D concentration after adjustment for confounding factors (P = 0.002). Low serum 25(OH)D levels and shorter daily sun exposure time may be independently associated with a high prevalence of myopia in Korean adults. These data suggest a direct role for vitamin D in the development of myopia.

  6. On-orbit identifying the inertia parameters of space robotic systems using simple equivalent dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin

    2017-03-01

    After being launched into space to perform some tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, it is required to identify these parameters on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center of mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The object function for optimization is defined in terms of acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode. The object function is defined based on the linear and angular momentum equations. Then, the parameter identification problems are transformed into non-linear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or unlocking the nth to 1st joints), the mass properties of body 0 to n (or n to 0) are completely identified. For the proposed method, only simple dynamics equations are needed for identification. The excitation motion (orbit maneuvering and joint motion) is also easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on-orbit.
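
    A bare-bones PSO loop of the kind applied here to momentum-based objective functions. The quadratic test objective and the hyperparameters are placeholders, not the paper's identification residual.

    ```python
    import numpy as np

    # Minimal particle swarm optimization sketch; textbook hyperparameters.
    rng = np.random.default_rng(3)

    def objective(p):
        # Placeholder residual: distance of candidate parameters from a
        # hypothetical "true" parameter vector (e.g. mass, inertia terms).
        return np.sum((p - np.array([120.0, 2.5, 0.4]))**2, axis=1)

    n_part, n_dim, iters = 30, 3, 200
    w, c1, c2 = 0.72, 1.49, 1.49            # inertia and acceleration terms
    pos = rng.uniform(-10.0, 200.0, (n_part, n_dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), objective(pos)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_part, n_dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = objective(pos)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]

    print(gbest)   # converges near the assumed "true" parameters
    ```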

  7. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
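
    A sketch of the recipe as stated above for logistic regression: approximate the power by a two-sample comparison of proportions whose log-odds differ by beta × 2 × SD(x), holding the overall response probability fixed. All numbers are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Equivalent two-sample power approximation for logistic regression.
    n_total = 400        # total sample size
    p_bar = 0.3          # overall response probability
    beta = 0.4           # logistic slope per unit of the covariate x
    sd_x = 1.0           # standard deviation of the covariate
    alpha = 0.05

    delta = beta * 2.0 * sd_x                     # log-odds difference
    logit = np.log(p_bar / (1.0 - p_bar))
    p1 = 1.0 / (1.0 + np.exp(-(logit - delta / 2.0)))
    p2 = 1.0 / (1.0 + np.exp(-(logit + delta / 2.0)))

    n = n_total / 2.0                             # equally sized groups
    se = np.sqrt(p1 * (1.0 - p1) / n + p2 * (1.0 - p2) / n)
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    power = norm.cdf(abs(p2 - p1) / se - z_crit)
    print("approximate power: %.2f" % power)
    ```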

  8. Measurement and simulation of lineal energy distribution at the CERN high energy facility with a tissue equivalent proportional counter.

    PubMed

    Rollet, S; Autischer, M; Beck, P; Latocha, M

    2007-01-01

    The response of a tissue equivalent proportional counter (TEPC) in a mixed radiation field with a neutron energy distribution similar to the radiation field at commercial flight altitudes has been studied. The measurements were done at the CERN-EU High-Energy Reference Field (CERF) facility, where a well-characterised radiation field is available for intercomparison. The TEPC instrument used by ARC Seibersdorf Research is filled with pure propane gas at low pressure and can be used to determine the lineal energy distribution of the energy deposition in a mass of gas equivalent to a 2 microm diameter volume of unit-density tissue, of similar size to the nuclei of biological cells. The linearity of the detector response was checked both in terms of dose and dose rate. The effect of dead time has been corrected for. The influence of the detector exposure location and orientation in the radiation field on the dose distribution was also studied as a function of the total dose. The microdosimetric distribution of the absorbed dose as a function of the lineal energy has been obtained and compared with the same distribution simulated with the FLUKA Monte Carlo transport code. The dose equivalent was calculated by folding this distribution with the quality factor as a function of linear energy transfer. The comparison between the measured and simulated distributions shows that they are in good agreement. As a result of this study the detector is well characterised and, thanks also to the numerical simulations, the instrument response is well understood; it is currently being used onboard aircraft to evaluate the dose to aircraft crew caused by cosmic radiation.
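
    The folding step can be sketched in a few lines: weight an assumed lineal-energy dose distribution by a quality factor, here the piecewise ICRP 60 Q(L), with the usual rough identification of lineal energy with LET. The spectrum and absorbed dose below are invented.

    ```python
    import numpy as np

    # Dose equivalent from a microdosimetric distribution: fold d(y) with Q.
    def Q(L):
        """Piecewise ICRP 60 quality factor as a function of LET (keV/um)."""
        L = np.asarray(L, dtype=float)
        return np.where(L < 10.0, 1.0,
               np.where(L <= 100.0, 0.32 * L - 2.2, 300.0 / np.sqrt(L)))

    y = np.logspace(-1, 3, 400)                  # lineal energy, keV/um
    d = y * np.exp(-y / 20.0)                    # assumed dose distribution
    d /= np.trapz(d, y)                          # normalize to unit dose

    mean_Q = np.trapz(Q(y) * d, y)               # dose-averaged quality factor
    D = 1.0e-3                                   # absorbed dose, Gy (assumed)
    print("dose equivalent: %.2e Sv" % (mean_Q * D))
    ```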

  9. Domain switching kinetics in ferroelectric-resistive BiFeO3 thin film memories

    NASA Astrophysics Data System (ADS)

    Meng, Jianwei; Jiang, Jun; Geng, Wenping; Chen, Zhihui; Zhang, Wei; Jiang, Anquan

    2015-02-01

    We fabricated (00l) BiFeO3 (BFO) thin films in different growth modes on SrRuO3/SrTiO3 substrates using a pulsed laser deposition technique. X-ray diffraction patterns show an out-of-plane lattice constant of 4.03 Å and a ferroelectric polarization of 82 µC/cm2 for the BFO thin film grown in a layer-by-layer mode (2D-BFO), larger than the 3.96 Å and 51 µC/cm2 of the thin film grown in the 3D-island formation mode (3D-BFO). The 2D-BFO thin film at 300 K shows switchable on/off diode currents upon polarization flipping near a negative coercive voltage, which is nevertheless absent from the 3D-BFO thin film. Using a positive-up-negative-down pulse characterization technique, we measured domain switching current transients as well as polarization-voltage (Pf-Vf) hysteresis loops in both semiconducting thin films. Pf-Vf hysteresis loops after a 1 µs retention time show the preferred domain orientation pointing to the bottom electrodes in the 3D-BFO thin film. The poor retention of the domains pointing to the top electrodes can be improved considerably in the 2D-BFO thin film. From these measurements, we extracted the domain switching time dependence of the coercive voltage at temperatures of 78-300 K. From these dependences, we found coercive voltages in semiconducting ferroelectric thin films much higher than those in insulating thin films, disobeying the traditional Merz equation. Finally, an equivalent resistance model describing free-carrier compensation of the front domain-boundary charge is developed to interpret this difference. This equivalent resistance can be extracted consistently either from the domain switching time dependence of the coercive voltage or from the applied voltage dependence of the domain switching current, and it drops almost linearly with temperature, falling to zero in a ferroelectric insulator at 78 K.

  10. Image processing techniques revealing the relationship between the field-measured ambient gamma dose equivalent rate and geological conditions at a granitic area, Velence Mountains, Hungary

    NASA Astrophysics Data System (ADS)

    Beltran Torres, Silvana; Petrik, Attila; Zsuzsanna Szabó, Katalin; Jordan, Gyozo; Szabó, Csaba

    2017-04-01

    In order to estimate the annual dose that the public receives from natural radioactivity, the identification of the potential risk areas is required, which, in turn, necessitates understanding the relationship between the spatial distribution of natural radioactivity and the geogenic risk factors (e.g., rock types, dykes, faults, soil conditions, etc.). A detailed spatial analysis of the ambient gamma dose equivalent rate was performed on the western side of the Velence Mountains, the largest outcropped granitic area in Hungary. In order to assess the role of local geology in the spatial distribution of ambient gamma dose rates, field measurements were carried out at ground level at 300 sites along a 250 m x 250 m regular grid over a total area of 14.7 km2. Digital image processing methods were applied to identify anomalies, heterogeneities and spatial patterns in the measured gamma dose rates, including local maxima and minima determination, digital cross sections, gradient magnitude and gradient direction, second-derivative profile curvature, local variability, lineament density, 2D autocorrelation and directional variogram analyses. Statistical inference showed that different gamma dose rate levels are associated with the rock types (i.e., Carboniferous granite, Pleistocene colluvial, proluvial, deluvial sediments and talus, and Pannonian sand and pebble), with the highest level on the Carboniferous granite, including outlying values. Moreover, digital image processing revealed that linear gamma dose rate spatial features are parallel to the SW-NE dyke system and possibly to the NW-SE main fractures. The results of this study underline the importance of understanding the role of geogenic risk factors influencing the ambient gamma dose rate received by the public. The study also demonstrates the power of image processing techniques for the identification of spatial patterns in field-measured geogenic radiation.

  11. Cross-cultural equivalence of the patient- and parent-reported quality of life in short stature youth (QoLISSY) questionnaire.

    PubMed

    Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael

    2014-01-01

    Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method to overcome problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.

  12. Resonant Rectifier ICs for Piezoelectric Energy Harvesting Using Low-Voltage Drop Diode Equivalents

    PubMed Central

    Din, Amad Ud; Chandrathna, Seneke Chamith; Lee, Jong-Wook

    2017-01-01

    Herein, we present the design technique of a resonant rectifier for piezoelectric (PE) energy harvesting. We propose two diode equivalents to reduce the voltage drop in the rectifier operation, a minuscule-drop-diode equivalent (MDDE) and a low-drop-diode equivalent (LDDE). The diode equivalents are embedded in resonant rectifier integrated circuits (ICs), which use symmetric bias-flip to reduce the power used for charging and discharging the internal capacitance of a PE transducer. The self-startup function is supported by synchronously generating control pulses for the bias-flip from the PE transducer. Two resonant rectifier ICs, using both MDDE and LDDE, are fabricated in a 0.18 μm CMOS process and their performances are characterized under external and self-power conditions. Under the external-power condition, the rectifier using LDDE delivers an output power POUT of 564 μW and a rectifier output voltage VRECT of 3.36 V with a power transfer efficiency of 68.1%. Under self-power conditions, the rectifier using MDDE delivers a POUT of 288 μW and a VRECT of 2.4 V with a corresponding efficiency of 78.4%. Using the proposed bias-flip technique, the power extraction capability of the proposed rectifier is 5.9 and 3.0 times higher than that of a conventional full-bridge rectifier. PMID:28422085

  13. Resonant Rectifier ICs for Piezoelectric Energy Harvesting Using Low-Voltage Drop Diode Equivalents.

    PubMed

    Din, Amad Ud; Chandrathna, Seneke Chamith; Lee, Jong-Wook

    2017-04-19

    Herein, we present the design technique of a resonant rectifier for piezoelectric (PE) energy harvesting. We propose two diode equivalents to reduce the voltage drop in the rectifier operation, a minuscule-drop-diode equivalent (MDDE) and a low-drop-diode equivalent (LDDE). The diode equivalents are embedded in resonant rectifier integrated circuits (ICs), which use symmetric bias-flip to reduce the power used for charging and discharging the internal capacitance of a PE transducer. The self-startup function is supported by synchronously generating control pulses for the bias-flip from the PE transducer. Two resonant rectifier ICs, using both MDDE and LDDE, are fabricated in a 0.18 μm CMOS process and their performances are characterized under external and self-power conditions. Under the external-power condition, the rectifier using LDDE delivers an output power POUT of 564 μW and a rectifier output voltage VRECT of 3.36 V with a power transfer efficiency of 68.1%. Under self-power conditions, the rectifier using MDDE delivers a POUT of 288 μW and a VRECT of 2.4 V with a corresponding efficiency of 78.4%. Using the proposed bias-flip technique, the power extraction capability of the proposed rectifier is 5.9 and 3.0 times higher than that of a conventional full-bridge rectifier.

  14. Consistent searches for SMEFT effects in non-resonant dijet events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alte, Stefan; Konig, Matthias; Shepherd, William

    Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order $1/\Lambda^2$ and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.

  15. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
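
    The covariance point made here is a delta-method calculation for a ratio. The sketch below contrasts the standard error of ϕ with and without the covariance term; all numbers are invented, and with a positive covariance the independence assumption inflates the error bar (i.e., underestimates precision), as described above.

    ```python
    import numpy as np

    # phi = ddG_ts / ddG_eq; the two ddG estimates share fitted chevron
    # parameters, so the delta-method covariance term cannot be dropped.
    ddG_ts, ddG_eq = 1.2, 2.0         # kcal/mol (illustrative)
    var_ts, var_eq = 0.04, 0.05       # variances of the two ddG estimates
    cov = 0.03                        # typically nonzero and positive

    phi = ddG_ts / ddG_eq
    var_indep = phi**2 * (var_ts / ddG_ts**2 + var_eq / ddG_eq**2)
    var_full = var_indep - 2.0 * phi**2 * cov / (ddG_ts * ddG_eq)
    print("phi = %.2f" % phi)
    print("SE assuming independence: %.3f" % np.sqrt(var_indep))
    print("SE with covariance:       %.3f" % np.sqrt(var_full))
    ```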

  16. A self-consistent estimate for linear viscoelastic polycrystals with internal variables inferred from the collocation method

    NASA Astrophysics Data System (ADS)

    Vu, Q. H.; Brenner, R.; Castelnau, O.; Moulinec, H.; Suquet, P.

    2012-03-01

    The correspondence principle is customarily used with the Laplace-Carson transform technique to tackle the homogenization of linear viscoelastic heterogeneous media. The main drawback of this method lies in the fact that the whole stress and strain histories have to be considered to compute the mechanical response of the material during a given macroscopic loading. Following a remark of Mandel (1966 Mécanique des Milieux Continus (Paris, France: Gauthier-Villars)), Ricaud and Masson (2009 Int. J. Solids Struct. 46 1599-1606) have shown the equivalence between the collocation method used to invert Laplace-Carson transforms and an internal variables formulation. In this paper, this new method is developed for the case of polycrystalline materials with general anisotropic properties for local and macroscopic behavior. Applications are provided for the case of constitutive relations accounting for glide of dislocations on particular slip systems. It is shown that the method yields accurate results that perfectly match the standard collocation method and reference full-field results obtained with a FFT numerical scheme. The formulation is then extended to the case of time- and strain-dependent viscous properties, leading to the incremental collocation method (ICM) that can be solved efficiently by a step-by-step procedure. Specifically, the introduction of isotropic and kinematic hardening at the slip system scale is considered.

  17. Lowering whole-body radiation doses in pediatric intensity-modulated radiotherapy through the use of unflattened photon beams.

    PubMed

    Cashmore, Jason; Ramtohul, Mark; Ford, Dan

    2011-07-15

    Intensity modulated radiotherapy (IMRT) has been linked with an increased risk of secondary cancer induction due to the extra leakage radiation associated with delivery of these techniques. Removal of the flattening filter offers a simple way of reducing head leakage, and it may be possible to generate equivalent IMRT plans and to deliver these on a standard linear accelerator operating in unflattened mode. An Elekta Precise linear accelerator has been commissioned to operate in both conventional and unflattened modes (energy matched at 6 MV) and a direct comparison made between the treatment planning and delivery of pediatric intracranial treatments using both approaches. These plans have been evaluated and delivered to an anthropomorphic phantom. Plans generated in unflattened mode are clinically identical to those for conventional IMRT but can be delivered with greatly reduced leakage radiation. Measurements in an anthropomorphic phantom at clinically relevant positions including the thyroid, lung, ovaries, and testes show an average reduction in peripheral doses of 23.7%, 29.9%, 64.9%, and 70.0%, respectively, for identical plan delivery compared to conventional IMRT. IMRT delivery in unflattened mode removes an unwanted and unnecessary source of scatter from the treatment head and lowers leakage doses by up to 70%, thereby reducing the risk of radiation-induced second cancers. Removal of the flattening filter is recommended for IMRT treatments. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. The Addenbrooke's Cognitive Examination Revised (ACE-R) and its sub-scores: normative values in an Italian population sample.

    PubMed

    Siciliano, Mattia; Raimo, Simona; Tufano, Dario; Basile, Giuseppe; Grossi, Dario; Santangelo, Franco; Trojano, Luigi; Santangelo, Gabriella

    2016-03-01

    The Addenbrooke's Cognitive Examination Revised (ACE-R) is a rapid screening battery, including five sub-scales to explore different cognitive domains: attention/orientation, memory, fluency, language, and visuospatial abilities. The ACE-R is considered useful in discriminating cognitively normal subjects from patients with mild dementia. The aim of the present study was to provide normative values for the ACE-R total score and sub-scale scores in a large sample of Italian healthy subjects. Five hundred twenty-six Italian healthy subjects (282 women and 246 men) of different ages (age range 20-93 years) and educational level (from primary school to university) underwent the ACE-R and the Montreal Cognitive Assessment (MoCA). Multiple linear regression analysis revealed that age and education significantly influenced performance on the ACE-R total score and sub-scale scores. A significant effect of gender was found only for the attention/orientation sub-scale. From the derived linear equation, a correction grid for raw scores was built. Inferential cut-off scores were estimated using a non-parametric technique, and equivalent scores (ES) were computed. Correlation analysis showed a good correlation between ACE-R adjusted scores and MoCA adjusted scores (r = 0.612, p < 0.001). The present study provided normative data for the ACE-R in an Italian population, useful for both clinical and research purposes.

  19. Consistent searches for SMEFT effects in non-resonant dijet events

    DOE PAGES

    Alte, Stefan; Konig, Matthias; Shepherd, William

    2018-01-19

    Here, we investigate the bounds which can be placed on generic new-physics contributions to dijet production at the LHC using the framework of the Standard Model Effective Field Theory, deriving the first consistently-treated EFT bounds from non-resonant high-energy data. We recast an analysis searching for quark compositeness, equivalent to treating the SM with one higher-dimensional operator as a complete UV model. In order to reach consistent, model-independent EFT conclusions, it is necessary to truncate the EFT effects consistently at order 1/Λ² and to include the possibility of multiple operators simultaneously contributing to the observables, neither of which has been done in previous searches of this nature. Furthermore, it is important to give consistent error estimates for the theoretical predictions of the signal model, particularly in the region of phase space where the probed energy is approaching the cutoff scale of the EFT. There are two linear combinations of operators which contribute to dijet production in the SMEFT with distinct angular behavior; we identify those linear combinations and determine the ability of LHC searches to constrain them simultaneously. Consistently treating the EFT generically leads to weakened bounds on new-physics parameters. These constraints will be a useful input to future global analyses in the SMEFT framework, and the techniques used here to consistently search for EFT effects are directly applicable to other off-resonance signals.

  20. Assessment of the application of an automated electronic milk analyzer for the enumeration of total bacteria in raw goat milk.

    PubMed

    Ramsahoi, L; Gao, A; Fabri, M; Odumeru, J A

    2011-07-01

    Automated electronic milk analyzers for rapid enumeration of total bacteria counts (TBC) are widely used for raw milk testing by many analytical laboratories worldwide. In Ontario, Canada, Bactoscan flow cytometry (BsnFC; Foss Electric, Hillerød, Denmark) is the official anchor method for TBC in raw cow milk. Penalties are levied at the BsnFC equivalent level of 50,000 cfu/mL, the standard plate count (SPC) regulatory limit. This study was conducted to assess the BsnFC for TBC in raw goat milk, to determine the mathematical relationship between the SPC and BsnFC methods, and to identify probable reasons for the difference in the SPC:BsnFC equivalents for goat and cow milks. Test procedures were conducted according to International Dairy Federation Bulletin guidelines. Approximately 115 farm bulk tank milk samples per month were tested for inhibitor residues, SPC, BsnFC, psychrotrophic bacteria count, composition (fat, protein, lactose, lactose and other solids, and freezing point), and somatic cell count from March 2009 to February 2010. Data analysis of the results for the samples tested indicated that the BsnFC method would be a good alternative to the SPC method, providing accurate and more precise results with a faster turnaround time. Although a linear regression model showed good correlation and prediction, tests for linearity indicated that the relationship was linear only beyond log 4.1 SPC. The logistic growth curve best modeled the relationship between the SPC and BsnFC for the entire sample population. The BsnFC equivalent to the SPC 50,000 cfu/mL regulatory limit was estimated to be 321,000 individual bacteria count (ibc)/mL. This estimate differs considerably from the BsnFC equivalent for cow milk (121,000 ibc/mL). Because of the low frequency of bulk tank milk pickups at goat farms, 78.5% of the samples had their oldest milking in the tank to be 6.5 to 9.0 d old when tested, compared with the cow milk samples, which had their oldest milking at 4 d old when tested. This may be one of the major factors contributing to the larger goat milk BsnFC equivalence. Correlations and interactions between various test results were also discussed to further understand differences between the 2 methods for goat and cow milks. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
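
    The curve-fitting step can be sketched with a three-parameter logistic fitted to paired log-scale readings and then inverted at the regulatory limit. The data below are synthetic placeholders, not the study's measurements, so the recovered equivalence point is illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, a, k, x0):
        return a / (1.0 + np.exp(-k * (x - x0)))

    rng = np.random.default_rng(0)
    log_bsn = rng.uniform(3.0, 6.5, 200)                 # log10 BsnFC, ibc/mL
    log_spc = logistic(log_bsn, 7.0, 1.2, 5.0) + rng.normal(0, 0.15, 200)

    (a, k, x0), _ = curve_fit(logistic, log_bsn, log_spc, p0=[7.0, 1.0, 5.0])
    target = np.log10(5e4)                               # log10 of 50,000 cfu/mL
    x_at_limit = x0 - np.log(a / target - 1.0) / k       # invert the logistic
    print(f"BsnFC equivalent ~ {10**x_at_limit:.3g} ibc/mL")
    ```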

  1. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  2. Kernel PLS-SVC for Linear and Nonlinear Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Matthews, Bryan

    2003-01-01

    A new methodology for discrimination is proposed. This is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by support vector machines for classification. The close connection of orthonormalized PLS to Fisher's approach to linear discrimination, or equivalently to canonical correlation analysis, is described. This motivates the preference for orthonormalized PLS over principal component analysis. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods versus non-movement periods based on electroencephalogram recordings.

  3. [Not Available].

    PubMed

    Bernard, A M; Burgot, J L

    1981-12-01

    The variation in heat capacity and the thermal shifts which accompany a thermometric determination cause the thermogram, even in the case of a very rapid and irreversible reaction, to be hyperbolic rather than composed of straight segments. These departures from linearity, which are inconvenient in the interpretation and exploitation of the thermograms, can be calculated as a function of the degree of titration. The relation obtained introduces a parameter, which the authors call the apparent change of capacity at the equivalence point, that takes into account the two causes of deviation from linearity. This relationship is confirmed experimentally.

  4. Linear Covariance Analysis and Epoch State Estimators

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Carpenter, J. Russell

    2012-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  5. Method of Preparing Polymers with Low Melt Viscosity

    NASA Technical Reports Server (NTRS)

    Jensen, Brian J. (Inventor)

    2001-01-01

    This invention is an improvement in standard polymerization procedures, i.e., addition-type and step-growth type polymerizations, wherein monomers are reacted to form a growing polymer chain. The improvement includes employing an effective amount of a trifunctional monomer (such as a trifunctional amine, anhydride, or phenol) in the polymerization procedure to form a mixture of polymeric materials consisting of branched polymers, star-shaped polymers, and linear polymers. This mixture of polymeric materials has a lower melt temperature and a lower melt viscosity than corresponding linear polymeric materials of equivalent molecular weight.

  6. Conformal array design on arbitrary polygon surface with transformation optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Li, E-mail: dengl@bupt.edu.cn; Hong, Weijun, E-mail: hongwj@bupt.edu.cn; Zhu, Jianfeng

    2016-06-15

    A transformation-optics based method to design a conformal antenna array on an arbitrary polygon surface is proposed and demonstrated in this paper. This conformal antenna array can be adjusted to behave equivalently to a uniformly spaced linear array by applying an appropriate transformation medium. A typical example of a general arbitrary polygon conformal array, not limited to a circular array, is presented, verifying the proposed approach. In summary, the novel arbitrary polygon surface conformal array can be utilized in array synthesis and beam-forming, maintaining all benefits of a linear array.

  7. A statistical assessment of differences and equivalences between genetically modified and reference plant varieties

    PubMed Central

    2011-01-01

    Background: Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results: Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions: A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
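
    The difference-versus-equivalence logic can be illustrated with a minimal two-one-sided-tests (TOST) sketch. The paper derives its equivalence limits from the reference varieties through a linear mixed model; here the limits are simply given, and the data are simulated.

    ```python
    import numpy as np
    from scipy import stats

    def tost(x, y, low, high):
        """Equivalence within [low, high] is supported when p is small."""
        d = np.mean(x) - np.mean(y)
        se = np.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
        df = len(x) + len(y) - 2
        p_low = 1.0 - stats.t.cdf((d - low) / se, df)   # H0: d <= low
        p_high = stats.t.cdf((d - high) / se, df)       # H0: d >= high
        return d, max(p_low, p_high)

    rng = np.random.default_rng(1)
    gm, ref = rng.normal(10.0, 1.0, 20), rng.normal(10.2, 1.0, 20)
    print(tost(gm, ref, -1.0, 1.0))   # small p -> equivalent within +/- 1.0
    ```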

  8. Nonlinear Principal Components Analysis: Introduction and Application

    ERIC Educational Resources Information Center

    Linting, Marielle; Meulman, Jacqueline J.; Groenen, Patrick J. F.; van der Koojj, Anita J.

    2007-01-01

    The authors provide a didactic treatment of nonlinear (categorical) principal components analysis (PCA). This method is the nonlinear equivalent of standard PCA and reduces the observed variables to a number of uncorrelated principal components. The most important advantages of nonlinear over linear PCA are that it incorporates nominal and ordinal…

  9. A new single crystal diamond dosimeter for small beam: comparison with different commercial active detectors.

    PubMed

    Marsolat, F; Tromson, D; Tranchant, N; Pomorski, M; Le Roy, M; Donois, M; Moignau, F; Ostrowsky, A; De Carlan, L; Bassinet, C; Huet, C; Derreumaux, S; Chea, M; Cristina, K; Boisserie, G; Bergonzo, P

    2013-11-07

    Recent developments of new therapy techniques using small photon beams, such as stereotactic radiotherapy, require suitable detectors to determine the delivered dose with high accuracy. The dosimeter has to be as close as possible to tissue equivalence and to exhibit a small detection volume compared to the size of the irradiation field, because of the lack of lateral electronic equilibrium in small beams. The characteristics of single crystal diamond (tissue equivalent material, Z = 6, high density) make it an ideal candidate to fulfil most small-beam dosimetry requirements. A commercially available Element Six electronic grade synthetic diamond was used to develop a single crystal diamond dosimeter (SCDDo) with a small detection volume (0.165 mm³). Long term stability was studied by irradiating the SCDDo in a ⁶⁰Co beam over 14 h. Good stability (deviation less than ± 0.1%) was observed. Repeatability, dose linearity, dose rate dependence and energy dependence were studied in a 10 × 10 cm² beam produced by a Varian Clinac 2100 C linear accelerator. SCDDo lateral dose profile, depth dose curve and output factor (OF) measurements were performed for small photon beams with a micro multileaf collimator m3 (BrainLab) attached to the linac. This study is focused on the comparison of SCDDo measurements to those obtained with different commercially available active detectors: an unshielded silicon diode (PTW 60017), a shielded silicon diode (Sun Nuclear EDGE), a PinPoint ionization chamber (PTW 31014) and two natural diamond detectors (PTW 60003). The SCDDo presents excellent spatial resolution for dose profile measurements, due to its small detection volume. Low energy dependence (variation of 1.2% between 6 and 18 MV photon beams) and low dose rate dependence of the SCDDo (variation of 1% between 0.53 and 2.64 Gy min⁻¹) are obtained, explaining the good agreement between the SCDDo and the efficient unshielded diode (PTW 60017) in depth dose curve measurements. For field sizes ranging from 0.6 × 0.6 to 10 × 10 cm², OFs obtained with the SCDDo are between the OFs measured with the PinPoint ionization chamber and the Sun Nuclear EDGE diode, which are known to respectively underestimate and overestimate OF values in small beams, due to the large detection volume of the chamber and the non-water equivalence of both detectors.

  10. S₂SA preconditioning for the Sₙ equations with strictly non-negative spatial discretization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruss, D. E.; Morel, J. E.; Ragusa, J. C.

    2013-07-01

    Preconditioners based upon sweeps and diffusion-synthetic acceleration have been constructed and applied to the zeroth and first spatial moments of the 1-D Sₙ transport equation using a strictly non-negative nonlinear spatial closure. Linear and nonlinear preconditioners have been analyzed. The effectiveness of various combinations of these preconditioners is compared. In one dimension, nonlinear sweep preconditioning is shown to be superior to linear sweep preconditioning, and DSA preconditioning using nonlinear sweeps in conjunction with a linear diffusion equation is found to be essentially equivalent to nonlinear sweeps in conjunction with a nonlinear diffusion equation. The ability to use a linear diffusion equation has important implications for preconditioning the Sₙ equations with a strictly non-negative spatial discretization in multiple dimensions. (authors)

  11. Frequency domain system identification of helicopter rotor dynamics incorporating models with time periodic coefficients

    NASA Astrophysics Data System (ADS)

    Hwang, Sunghwan

    1997-08-01

    One of the most prominent features of helicopter rotor dynamics in forward flight is the periodic coefficients in the equations of motion introduced by the rotor rotation. The frequency response of such a linear time periodic system exhibits sideband behavior, which is not the case for linear time invariant systems. Therefore, a frequency domain identification methodology for linear systems with time periodic coefficients was developed, because linear time invariant theory cannot account for sideband behavior. The modulated complex Fourier series was introduced to eliminate the smearing effect of Fourier series expansions of exponentially modulated periodic signals. A system identification theory was then developed using the modulated complex Fourier series expansion. Correlation and spectral density functions were derived using the modulated complex Fourier series expansion for linear time periodic systems. Expressions for the identified harmonic transfer function were then formulated using the spectral density functions, both with and without additive noise processes at the input and/or output. A procedure was developed to identify parameters of a model to match the frequency response characteristics between measured and estimated harmonic transfer functions by minimizing an objective function defined in terms of the trace of the squared frequency response error matrix. Feasibility was demonstrated by the identification of the harmonic transfer function and parameters for helicopter rigid blade flapping dynamics in forward flight. This technique is envisioned to satisfy the needs of system identification in the rotating frame, especially in the context of individual blade control. The technique was applied to the coupled flap-lag-inflow dynamics of a rigid blade excited by an active pitch link. The linear time periodic technique results were compared with the linear time invariant technique results, and the effects of noise processes and of the initial parameter guess on the identification procedure were investigated. To study the effect of elastic modes, a rigid blade with a trailing edge flap excited by a smart actuator was selected, and its system parameters were successfully identified, though at some expense of computational storage and time. In conclusion, the linear time periodic technique substantially improved the accuracy of the identified parameters compared to the linear time invariant technique, and it was robust to noise and to the initial parameter guess. However, an elastic mode of higher frequency relative to the system pumping frequency tends to increase the computer storage requirement and computing time.

  12. Estimation of neutron dose equivalent at the mezzanine of the Advanced Light Source and the laboratory boundary using the ORNL program MORSE.

    PubMed

    Sun, R K

    1990-12-01

    To investigate the radiation effect of neutrons near the Advanced Light Source (ALS) at Lawrence Berkeley Laboratory (LBL) with respect to the neutron dose equivalents in nearby occupied areas and at the site boundary, the neutron transport code MORSE, from Oak Ridge National Laboratory (ORNL), was used. These dose equivalents result from both skyshine neutrons transported by air scattering and direct neutrons penetrating the shielding. The ALS neutron sources are a 50-MeV linear accelerator and its transfer line, a 1.5-GeV booster, a beam extraction line, and a 1.9-GeV storage ring. The most conservative total occupational-dose-equivalent rate in the center of the ALS mezzanine, 39 m from the ALS center, was found to be 1.14 × 10⁻³ Sv per 2000-h "occupational" year, and the total environmental-dose-equivalent rate at the ALS boundary, 125 m from the ALS center, was found to be 3.02 × 10⁻⁴ Sv per 8760-h calendar year. More realistic dose-equivalent rates, using the nominal (expected) storage-ring current, were calculated to be 1.0 × 10⁻⁴ Sv per occupational year and 2.65 × 10⁻⁵ Sv per calendar year, respectively, which are much lower than the DOE reporting levels.

  13. Turbulent premixed combustion in V-shaped flames: Characteristics of flame front

    NASA Astrophysics Data System (ADS)

    Kheirkhah, S.; Gülder, Ö. L.

    2013-05-01

    Flame front characteristics of turbulent premixed V-shaped flames were investigated experimentally using the Mie scattering and the particle image velocimetry techniques. The experiments were performed at mean streamwise exit velocities of 4.0, 6.2, and 8.6 m/s, along with fuel-air equivalence ratios of 0.7, 0.8, and 0.9. Effects of vertical distance from the flame-holder, mean streamwise exit velocity, and fuel-air equivalence ratio on statistics of the distance between the flame front and the vertical axis, flame brush thickness, flame front curvature, and the angle between the tangent to the flame front and the horizontal axis were studied. The results show that increasing the vertical distance from the flame-holder and the fuel-air equivalence ratio increases the mean and root-mean-square (RMS) of the distance between the flame front and the vertical axis; however, increasing the mean streamwise exit velocity decreases these statistics. Spectral analysis of the fluctuations of the flame front position shows that the normalized and averaged power-spectrum-densities collapse and follow a power-law relation with the normalized wave number. The flame brush thickness is linearly correlated with the RMS of the distance between the flame front and the vertical axis. Analysis of the flame front curvature data shows that the mean curvature is independent of the experimental conditions tested and is equal to zero. Values of the inverse of the RMS of flame front curvature are similar to those of the integral length scale, suggesting that the large eddies in the flow contribute significantly to wrinkling of the flame front. Spectral analyses of the flame front curvature, as well as of the angle between the tangent to the flame front and the horizontal axis, show that the power-spectrum-densities feature a peak. The value of the inverse of the wave number pertaining to the peak is larger than that of the integral length scale.

  14. SU-E-T-480: Radiobiological Dose Comparison of Single Fraction SRS, Multi-Fraction SRT and Multi-Stage SRS of Large Target Volumes Using the Linear-Quadratic Formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, C; Hrycushko, B; Jiang, S

    2014-06-01

    Purpose: To compare the radiobiological effect on large tumors and surrounding normal tissues from single fraction SRS, multi-fractionated SRT, and multi-staged SRS treatment. Methods: An anthropomorphic head phantom with a centrally located large volume target (18.2 cm³) was scanned using a 16 slice large bore CT simulator. Scans were imported to the Multiplan treatment planning system, where a total prescription dose of 20 Gy was used for a single-fraction, a three-staged, and a three-fraction treatment. Cyber Knife treatment plans were inversely optimized for the target volume to achieve at least 95% coverage of the prescription dose. For the multistage plan, the target was segmented into three subtargets having similar volume and shape. Staged plans for individual subtargets were generated based on a planning technique where the beam MUs of the original plan on the total target volume are changed by weighting the MUs based on projected beam lengths within each subtarget. Dose matrices for each plan were exported in DICOM format and used to calculate equivalent dose distributions in 2 Gy fractions using an alpha/beta ratio of 10 for the target and 3 for normal tissue. Results: The single fraction SRS, multi-stage, and multi-fractionated SRT plans had an average 2 Gy dose equivalent to the target of 62.89 Gy, 37.91 Gy and 33.68 Gy, respectively. The normal tissue within the 12 Gy physical dose region had an average 2 Gy dose equivalent of 29.55 Gy, 16.08 Gy and 13.93 Gy, respectively. Conclusion: The single fraction SRS plan had the largest predicted biological effect for the target and the surrounding normal tissue. The multi-stage treatment provided a more potent biological effect on target compared to the multi-fraction SRT treatment, with less biological effect on normal tissue than the single-fraction SRS treatment.
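
    The 2 Gy equivalents quoted above come from the linear-quadratic conversion EQD2 = D·(d + α/β)/(2 + α/β), with D the total dose and d the dose per fraction. A minimal sketch using prescription-level numbers only (the abstract's values are averages over the actual voxel-wise dose matrices, so they differ):

    ```python
    def eqd2(total_dose, dose_per_fraction, alpha_beta):
        """Equivalent dose in 2 Gy fractions from the linear-quadratic model."""
        return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

    # 20 Gy prescription, alpha/beta = 10 Gy for the target:
    print(eqd2(20.0, 20.0, 10.0))        # single-fraction SRS -> 50.0 Gy
    print(eqd2(20.0, 20.0 / 3.0, 10.0))  # three fractions     -> ~27.8 Gy
    ```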

  15. In-flight radiation measurements on STS-60

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Golightly, M. J.; Konradi, A.; Atwell, W.; Kern, J. W.; Cash, B.; Benton, E. V.; Frank, A. L.; Sanner, D.; Keegan, R. P.; et al.

    1996-01-01

    A joint investigation between the United States and Russia to study the radiation environment inside the Space Shuttle flight STS-60 was carried out as part of the Shuttle-Mir Science Program (Phase 1). This is the first direct comparison of a number of different dosimetric measurement techniques between the two countries. STS-60 was launched on 3 February 1994 in a nearly circular 57° × 353 km orbit with five U.S. astronauts and one Russian cosmonaut for 8.3 days. A variety of instruments provided crew radiation exposure, absorbed doses at fixed locations, neutron fluence and dose equivalent, linear energy transfer (LET) spectra of trapped and galactic cosmic radiation, and energy spectra and angular distribution of trapped protons. In general, there is good agreement between the U.S. and Russian measurements. The AP8 Min trapped proton model predicts an average of 1.8 times the measured absorbed dose. The average quality factor determined from measured lineal energy, y, spectra using a tissue equivalent proportional counter (TEPC) is in good agreement with that derived from the high temperature peak in the ⁶LiF thermoluminescent detectors (TLDs). The radiation exposure in the mid-deck locker from neutrons below 1 MeV was 2.53 ± 1.33 microSv/day. The absorbed dose rates measured using a tissue equivalent proportional counter were 171.1 ± 0.4 and 127.4 ± 0.4 microGy/day for trapped particles and galactic cosmic rays, respectively. The combined dose rate of 298.5 ± 0.82 microGy/day is about a factor of 1.4 higher than that measured using TLDs. The westward longitude drift of the South Atlantic Anomaly (SAA) is estimated to be 0.22 ± 0.02 degrees/y. We evaluated the effects of spacecraft attitudes on TEPC dose rates due to the highly anisotropic low-earth-orbit proton environment. Changes in spacecraft attitude resulted in dose-rate variations by factors of up to 2 at the location of the TEPC.

  16. A study of data analysis techniques for the multi-needle Langmuir probe

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Moen, J. I.; Spicher, A.; Clausen, L. B. N.; Miloch, W. J.; Trondsen, E.; Pedersen, A.

    2018-06-01

    In this paper we evaluate two data analysis techniques for the multi-needle Langmuir probe (m-NLP). The instrument uses several cylindrical Langmuir probes, which are positively biased with respect to the plasma potential in order to operate in the electron saturation region. Since the currents collected by these probes can be sampled at kilohertz rates, the instrument is capable of resolving the ionospheric plasma structure down to the meter scale. The two data analysis techniques, a linear fit and a non-linear least squares fit, are discussed in detail using data from the Investigation of Cusp Irregularities 2 sounding rocket. Each technique is shown to have pros and cons with respect to the m-NLP implementation. Although the linear fitting technique seems to perform better when compared against measurements from incoherent scatter radar and in situ instruments, m-NLP probes could be made longer, and could be cleaned during operation, to improve instrument performance. The non-linear least squares fitting technique would be more reliable provided that a larger number of probes is deployed.
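
    A sketch of the linear-fit technique: in the orbital-motion-limited electron saturation regime of a cylindrical probe, the square of the collected current grows linearly with the bias, so the electron density follows from the slope of a straight line fitted through the (bias, current²) points of the needles. The probe geometry and the synthetic four-needle sample are assumptions, not the flight instrument's parameters.

    ```python
    import numpy as np

    E, ME = 1.602e-19, 9.109e-31              # elementary charge, electron mass
    A = 2 * np.pi * 0.25e-3 * 25e-3           # hypothetical needle area (r = 0.25 mm, l = 25 mm)

    def ne_from_mnlp(bias_volts, currents):
        """Electron density from the slope of I**2 vs. bias (OML regime)."""
        slope = np.polyfit(bias_volts, currents**2, 1)[0]
        return (np.pi / A) * np.sqrt(ME * slope / (2.0 * E**3))

    # Synthetic four-needle sample consistent with n_e = 1e11 m^-3:
    V = np.array([2.5, 4.0, 5.5, 10.0])
    I = 1e11 * E * A * np.sqrt(2 * E * V / (np.pi**2 * ME))
    print(f"{ne_from_mnlp(V, I):.3e} m^-3")   # recovers ~1e11
    ```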

  17. Reduction of the two dimensional stationary Navier-Stokes problem to a sequence of Fredholm integral equations of the second kind

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.

    1981-01-01

    Present approaches to solving the stationary Navier-Stokes equations are of limited value; however, there exists an equivalent representation of the problem that has significant potential for solving such problems. This is because the equivalent representation consists of a sequence of Fredholm integral equations of the second kind, and methods for solving this type of problem are very well developed. For the problem in this form, there is also an excellent chance of determining explicit error estimates, since bounded, rather than unbounded, linear operators are involved.

  18. Measurements of energetic particle radiation in transit to Mars on the Mars Science Laboratory.

    PubMed

    Zeitlin, C; Hassler, D M; Cucinotta, F A; Ehresmann, B; Wimmer-Schweingruber, R F; Brinza, D E; Kang, S; Weigle, G; Böttcher, S; Böhm, E; Burmeister, S; Guo, J; Köhler, J; Martin, C; Posner, A; Rafkin, S; Reitz, G

    2013-05-31

    The Mars Science Laboratory spacecraft, containing the Curiosity rover, was launched to Mars on 26 November 2011, and for most of the 253-day, 560-million-kilometer cruise to Mars, the Radiation Assessment Detector made detailed measurements of the energetic particle radiation environment inside the spacecraft. These data provide insights into the radiation hazards that would be associated with a human mission to Mars. We report measurements of the radiation dose, dose equivalent, and linear energy transfer spectra. The dose equivalent for even the shortest round-trip with current propulsion systems and comparable shielding is found to be 0.66 ± 0.12 sievert.

  19. Earthquake response analysis of 11-story RC building that suffered damage in 2011 East Japan Earthquake

    NASA Astrophysics Data System (ADS)

    Shibata, Akenori; Masuno, Hidemasa

    2017-10-01

    An eleven-story RC apartment building suffered medium damage in the 2011 East Japan earthquake and was retrofitted for re-use. Strong motion records were obtained near the building. This paper discusses the inelastic earthquake response analysis of the building using an equivalent single-degree-of-freedom (1-DOF) system to account for the features of the damage. The method of converting the building frame into a 1-DOF system with tri-linear reducing-stiffness restoring force characteristics is given. The inelastic response analysis of the building against the earthquake using the equivalent inelastic 1-DOF system interprets the level of actual damage well.

  20. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

    A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.

  1. Modeling Percolation in Polymer Nanocomposites by Stochastic Microstructuring

    PubMed Central

    Soto, Matias; Esteva, Milton; Martínez-Romero, Oscar; Baez, Jesús; Elías-Zúñiga, Alex

    2015-01-01

    A methodology was developed for the prediction of the electrical properties of carbon nanotube-polymer nanocomposites via Monte Carlo computational simulations. A two-dimensional microstructure that takes into account waviness, fiber length and diameter distributions is used as a representative volume element. Fiber interactions in the microstructure are identified and then modeled as an equivalent electrical circuit, assuming one-third metallic and two-thirds semiconductor nanotubes. Tunneling paths in the microstructure are also modeled as electrical resistors, and crossing fibers are accounted for by assuming a contact resistance associated with them. The equivalent resistor network is then converted into a set of linear equations using nodal voltage analysis, which is then solved by means of the Gauss–Jordan elimination method. Nodal voltages are obtained for the microstructure, from which the percolation probability, equivalent resistance and conductivity are calculated. Percolation probability curves and electrical conductivity values are compared to those found in the literature. PMID:28793594
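
    The nodal-analysis step can be sketched on a toy network: assemble the conductance (weighted Laplacian) matrix of the resistor graph, fix the electrode voltages, and solve the remaining linear system for the node voltages and the equivalent resistance. A generic dense solve stands in for the Gauss–Jordan elimination named in the abstract, and the four-node graph below is illustrative, not a generated microstructure.

    ```python
    import numpy as np

    edges = [(0, 1, 1e4), (1, 2, 5e3), (0, 2, 2e4), (2, 3, 1e4)]  # (i, j, ohms)
    n, src, gnd, v_in = 4, 0, 3, 1.0          # 1 V applied across nodes 0 and 3

    G = np.zeros((n, n))                      # conductance (Laplacian) matrix
    for i, j, r in edges:
        g = 1.0 / r
        G[i, i] += g; G[j, j] += g
        G[i, j] -= g; G[j, i] -= g

    free = [k for k in range(n) if k not in (src, gnd)]
    rhs = -G[np.ix_(free, [src])].ravel() * v_in   # move known voltages to the RHS
    volts = np.zeros(n)
    volts[src] = v_in
    volts[free] = np.linalg.solve(G[np.ix_(free, free)], rhs)

    i_total = abs(G[gnd] @ volts)             # current leaving through the ground node
    print(volts, "R_eq =", v_in / i_total, "ohms")   # ~18571 ohms for this graph
    ```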

  2. Intrinsic character of Stokes matrices

    NASA Astrophysics Data System (ADS)

    Gagnon, Jean-François; Rousseau, Christiane

    2017-02-01

    Two germs of linear analytic differential systems x^{k+1} Y' = A(x) Y with a non-resonant irregular singularity are analytically equivalent if and only if they have the same eigenvalues and equivalent collections of Stokes matrices. The Stokes matrices are the transition matrices between sectors on which the system is analytically equivalent to its formal normal form. Each sector contains exactly one separating ray for each pair of eigenvalues. A rotation in x allows supposing that R⁺ lies in the intersection of two sectors. Reordering of the coordinates of Y allows ordering the real parts of the eigenvalues, thus yielding triangular Stokes matrices. However, the choice of the rotation in x is not canonical. In this paper we establish how the collection of Stokes matrices depends on this rotation, and hence on a chosen order of the projection of the eigenvalues on a line through the origin.

  3. Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.

    PubMed

    Li, Yuanqing; Amari, Shun-Ichi

    2010-07-01

    In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have received much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be obtained easily by linear programming. In many cases, the 0-norm solution can be obtained by finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second is necessary but not sufficient; it is, however, easy to verify. In this paper, we analyze the second condition within the stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
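
    The 1-norm solution discussed here is exactly the one a linear program delivers: min ‖x‖₁ subject to Ax = b becomes an LP through the standard split x = u − v with u, v ≥ 0. A small random instance with a planted 2-sparse solution, which such an LP typically recovers:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(2)
    m, n = 8, 20
    A = rng.normal(size=(m, n))
    x_true = np.zeros(n); x_true[[3, 11]] = [1.5, -2.0]   # planted sparse solution
    b = A @ x_true

    c = np.ones(2 * n)                    # objective: sum(u) + sum(v) = ||x||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * n))
    x_hat = res.x[:n] - res.x[n:]
    print(np.round(x_hat[[3, 11]], 3), np.abs(x_hat).sum())
    ```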

  4. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently, by means of a non-linear complementarity function, as a system of equations. Although this system is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual-based and equilibrated a posteriori error estimators, can be designed based on the interpretation of the dual variable as a Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential-algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
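
    The complementarity reformulation and the primal-dual active set iteration can be sketched on a 1-D obstacle problem. The discrete KKT system is A u − λ = f, λ ≥ 0, u − ψ ≥ 0, λ(u − ψ) = 0, and the NCP-based test λ + c(ψ − u) > 0 guesses the active (contact) set in each semi-smooth Newton step. The load and obstacle below are made up.

    ```python
    import numpy as np

    n, c = 99, 1.0
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # -u'' stencil
    x = np.linspace(h, 1 - h, n)
    f = np.full(n, -8.0)                      # downward load
    psi = -0.08 - 0.4 * (x - 0.5) ** 2        # obstacle below the membrane

    u, lam = np.zeros(n), np.zeros(n)
    for it in range(50):
        active = lam + c * (psi - u) > 0      # active-set guess from the NCP function
        inactive = ~active
        u, lam = np.zeros(n), np.zeros(n)
        u[active] = psi[active]               # contact: u = psi
        rhs = f[inactive] - A[np.ix_(inactive, active)] @ psi[active]
        u[inactive] = np.linalg.solve(A[np.ix_(inactive, inactive)], rhs)
        lam[active] = (A @ u - f)[active]     # multiplier = residual on contact set
        if np.array_equal(active, lam + c * (psi - u) > 0):
            break                             # active set settled: converged
    print(it, int(active.sum()), u.min())
    ```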

  5. Semiempirical Theories of the Affinities of Negative Atomic Ions

    NASA Technical Reports Server (NTRS)

    Edie, John W.

    1961-01-01

    The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p-electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.

  6. Analysis of a Spatial Point Pattern: Examining the Damage to Pavement and Pipes in Santa Clara Valley Resulting from the Loma Prieta Earthquake

    USGS Publications Warehouse

    Phelps, G.A.

    2008-01-01

    This report describes some simple spatial statistical methods to explore the relationships of scattered points to geologic or other features, represented by points, lines, or areas. It also describes statistical methods to search for linear trends and clustered patterns within the scattered point data. Scattered points are often contained within irregularly shaped study areas, necessitating the use of methods largely unexplored in the point pattern literature. The methods take advantage of the power of modern GIS toolkits to numerically approximate the null hypothesis of randomly located data within an irregular study area. Observed distributions can then be compared with the null distribution of a set of randomly located points. The methods are non-parametric and are applicable to irregularly shaped study areas. Patterns within the point data are examined by comparing the distribution of the orientation of the set of vectors defined by each pair of points within the data with the equivalent distribution for a random set of points within the study area. A simple model is proposed to describe linear or clustered structure within scattered data. A scattered data set of damage to pavement and pipes, recorded after the 1989 Loma Prieta earthquake, is used as an example to demonstrate the analytical techniques. The damage is found to be preferentially located nearer a set of mapped lineaments than randomly scattered damage, suggesting that range-front faulting along the base of the Santa Cruz Mountains is related to both the earthquake damage and the mapped lineaments. The damage also exhibits two non-random patterns: a single cluster of damage centered in the town of Los Gatos, California, and a linear alignment of damage along the range front of the Santa Cruz Mountains, California. The linear alignment of damage is strongest between 45° and 50° northwest. This agrees well with the mean trend of the mapped lineaments, measured as 49° northwest.
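
    A compact sketch of the orientation analysis: sample the null hypothesis by rejection inside the irregular study polygon, compute the orientations of all point-pair vectors modulo 180°, and compare histograms. The polygon and the "observed" points below are toy stand-ins; in practice the observed set is the mapped damage and the null distribution is built from many random realizations.

    ```python
    import numpy as np
    from matplotlib.path import Path

    poly = Path([(0, 0), (4, 0), (4, 2), (1, 3), (0, 2)])   # toy study area

    def random_in_poly(n, rng):
        lo, hi = poly.vertices.min(0), poly.vertices.max(0)
        pts = np.empty((0, 2))
        while len(pts) < n:                   # rejection sampling in the bounding box
            cand = rng.uniform(lo, hi, size=(2 * n, 2))
            pts = np.vstack([pts, cand[poly.contains_points(cand)]])
        return pts[:n]

    def pair_orientations(pts):
        d = pts[:, None, :] - pts[None, :, :]
        iu = np.triu_indices(len(pts), 1)     # each unordered pair once
        return np.degrees(np.arctan2(d[..., 1], d[..., 0]))[iu] % 180.0

    rng = np.random.default_rng(3)
    obs = random_in_poly(40, rng)             # stand-in for the damage locations
    null = pair_orientations(random_in_poly(40, rng))
    print(np.histogram(pair_orientations(obs), bins=6, range=(0, 180))[0])
    print(np.histogram(null, bins=6, range=(0, 180))[0])
    ```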

  7. Distinguishing Provenance Equivalence of Earth Science Data

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt; Yesha, Ye; Halem, M.

    2010-01-01

    Reproducibility of scientific research relies on accurate and precise citation of data and the provenance of that data. Earth science data are often the result of applying complex data transformation and analysis workflows to vast quantities of data. Provenance information of data processing is used for a variety of purposes, including understanding the process and auditing as well as reproducibility. Certain provenance information is essential for producing scientifically equivalent data. Capturing and representing that provenance information and assigning identifiers suitable for precisely distinguishing data granules and datasets is needed for accurate comparisons. This paper discusses scientific equivalence and the provenance essential for scientific reproducibility. We use the example of an operational earth science data processing system to illustrate the application of the technique of cascading digital signatures, or hash chains, to precisely identify sets of granules and to serve as provenance equivalence identifiers distinguishing data made in an equivalent manner.
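
    A minimal sketch of the cascading-hash idea, assuming SHA-256 and an order-independent set of parents (all names and payloads are illustrative): a granule's identifier digests its content together with the identifiers of its inputs and a description of the processing step, so two identifiers match exactly when the data were produced in an equivalent manner.

    ```python
    import hashlib

    def granule_id(content: bytes, process: str, input_ids=()):
        """Identifier = hash(sorted parent ids, processing step, content)."""
        h = hashlib.sha256()
        for parent in sorted(input_ids):
            h.update(bytes.fromhex(parent))
        h.update(process.encode())
        h.update(content)
        return h.hexdigest()

    raw = granule_id(b"radiance counts ...", "L0->L1B calibration v2.1")
    l2 = granule_id(b"retrieved ozone ...", "L1B->L2 retrieval v4.0", [raw])
    print(raw[:16], l2[:16])   # any change upstream changes every downstream id
    ```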

  8. Language Measurement Equivalence of the Ethnic Identity Scale With Mexican American Early Adolescents

    PubMed Central

    White, Rebecca M. B.; Umaña-Taylor, Adriana J.; Knight, George P.; Zeiders, Katharine H.

    2011-01-01

    The current study considers methodological challenges in developmental research with linguistically diverse samples of young adolescents. By empirically examining the cross-language measurement equivalence of a measure assessing three components of ethnic identity development (i.e., exploration, resolution, and affirmation) among Mexican American adolescents, the study both assesses the cross-language measurement equivalence of a common measure of ethnic identity and provides an appropriate conceptual and analytical model for researchers needing to evaluate measurement scales translated into multiple languages. Participants are 678 Mexican-origin early adolescents and their mothers. Measures of exploration and resolution achieve the highest levels of equivalence across language versions. The measure of affirmation achieves high levels of equivalence. Results highlight potential ways to correct for any problems of nonequivalence across language versions of the affirmation measure. Suggestions are made for how researchers working with linguistically diverse samples can use the highlighted techniques to evaluate their own translated measures. PMID:22116736

  9. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    ERIC Educational Resources Information Center

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  10. Effects of different representations of transport in the new EMAC-SWIFT chemistry climate model

    NASA Astrophysics Data System (ADS)

    Scheffler, Janice; Langematz, Ulrike; Wohltmann, Ingo; Kreyling, Daniel; Rex, Markus

    2017-04-01

    It is well known that the representation of atmospheric ozone chemistry in weather and climate models is essential for a realistic simulation of the atmospheric state. Interactively coupled chemistry climate models (CCMs) provide a means to realistically simulate the interaction between atmospheric chemistry and dynamics. The calculation of chemistry in CCMs, however, is computationally expensive, which renders complex chemistry models unsuitable for ensemble simulations or simulations with multiple climate change scenarios. In such simulations ozone is therefore usually prescribed as a climatological field or included by incorporating a fast linear ozone scheme into the model. While prescribed climatological ozone fields are often not aligned with the modelled dynamics, a linear ozone scheme may not be applicable over a wide range of climatological conditions. An alternative approach to represent atmospheric chemistry in climate models, which can cope with non-linearities in ozone chemistry and is applicable to a wide range of climatic states, is the Semi-empirical Weighted Iterative Fit Technique (SWIFT), which is driven by reanalysis data and has been validated against observational satellite data and against runs of a full Chemistry and Transport Model. SWIFT has been implemented into the ECHAM/MESSy (EMAC) chemistry climate model, which uses a modular approach to climate modelling where individual model components can be switched on and off. When using SWIFT in EMAC, there are several possibilities for representing the effect of transport inside the polar vortex: the semi-Lagrangian transport scheme of EMAC, or a transport parameterisation that can be useful when SWIFT is used in models without transport of their own. Here, we present results of equivalent simulations with different handling of transport, compare them with EMAC simulations with full interactive chemistry, and evaluate the results against observations.

  11. The scattering analog for infiltration in porous media

    NASA Astrophysics Data System (ADS)

    Philip, J. R.

    1989-11-01

    This review takes the form of a set of Chinese boxes. The outermost box gives a brief general account of modern developments in the mathematical physics of unsaturated flow in soils and porous media. This provides the necessary foundations for the second box, which describes the quasi-linear analysis of steady multidimensional unsaturated flow, which is an essential prerequisite to the analog. Only then can we proceed to the innermost box, devoted to our major theme. An exact analog exists between steady quasi-linear flow in unsaturated soils and porous media and the scattering of plane pulses, and the analog carries over to the scattering of plane harmonic waves. Numerous established results, and powerful techniques such as Watson transforms, far-field scattering functions, and optical theorems, become available for the solution and understanding of problems of multidimensional infiltration. These are needed, in particular, to provide the asymptotics of the physically interesting and practically important limit of flows strongly dominated by gravity, with capillary effects weak but nonzero. This is the limit of large s, where s is a characteristic length of the water supply surface normalized with respect to the sorptive length of the soil. These problems are singular in the sense that ignoring capillarity gives a totally incorrect picture of the wetted region. In terms of the optical analog, neglecting capillarity is equivalent to using geometrical optics, with coherent shadows projected to infinity. When exact solutions involve exotic functions, difficulties of both analysis and series summation may be avoided through use of small-s and large-s expansions provided by the analog. Numerous examples are given of solutions obtained through the analog. The scope for extending the application to flows from surface sources, to anisotropic and heterogeneous media, to unsteady flows, and to linear convection-diffusion processes in general is described briefly.

  12. LFSPMC: Linear feature selection program using the probability of misclassification

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Marion, B. P.

    1975-01-01

    The computational procedure and associated computer program for a linear feature selection technique are presented. The technique assumes that: a finite number, m, of classes exists; each class is described by an n-dimensional multivariate normal density function of its measurement vectors; the mean vector and covariance matrix for each density function are known (or can be estimated); and the a priori probability for each class is known. The technique produces a single linear combination of the original measurements which minimizes the one-dimensional probability of misclassification defined by the transformed densities.
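
    A two-class sketch of the underlying optimization: given multivariate normal class densities, search for the single linear combination w whose projected one-dimensional densities have the smallest Bayes probability of misclassification, here evaluated by numerical integration. The class parameters are made up, and the program's actual m-class procedure is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    mu = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]       # class means
    cov = [np.eye(2), np.array([[2.0, 0.3], [0.3, 1.0]])]   # class covariances
    prior = [0.5, 0.5]

    def p_misclass(w):
        w = w / np.linalg.norm(w)
        m = [w @ mu_k for mu_k in mu]                       # projected means
        s = [np.sqrt(w @ C_k @ w) for C_k in cov]           # projected std devs
        x = np.linspace(min(m) - 6 * max(s), max(m) + 6 * max(s), 4001)
        dens = [p * norm.pdf(x, mk, sk) for p, mk, sk in zip(prior, m, s)]
        # Bayes error in 1-D: integrate the smaller of the weighted densities
        return np.minimum(*dens).sum() * (x[1] - x[0])

    res = minimize(p_misclass, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
    print(res.x / np.linalg.norm(res.x), p_misclass(res.x))
    ```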

  13. A study of microwave downconverters operating in the Ku band

    NASA Technical Reports Server (NTRS)

    Fellers, R. G.; Simpson, T. L.; Tseng, B.

    1982-01-01

    A computer program for parametric amplifier design is developed with special emphasis on practical design considerations for microwave integrated circuit degenerate amplifiers. Precision measurement techniques are developed to obtain a more realistic varactor equivalent circuit. The existing theory of a parametric amplifier is modified to include the equivalent circuit, and microwave properties, such as loss characteristics and circuit discontinuities are investigated.

  14. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

    The authors design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators), based on the concept of Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed, and the technique of parallel computing [4] is used to propose a process of parallel MEG for substantially reducing the computational overhead of discovering shapelets.

  15. Channeled polarimetric technique for the measurement of spectral dependence of linearly Stokes parameters

    NASA Astrophysics Data System (ADS)

    Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui; Li, Qiwei

    2018-05-01

    The principle and experimental demonstration of a method based on the channeled polarimetric technique (CPT) to measure spectrally resolved linear Stokes parameters (SRLS) are presented. By replacing the front retarder of the CPT with an achromatic quarter wave-plate, the linear SRLS can be measured simultaneously. The method also retains the static and compact advantages of the CPT. Moreover, compared with the CPT, it can reduce the RMS error by a factor of nearly 2-5 for the individual linear Stokes parameters.

  16. Application of quadratic optimization to supersonic inlet control

    NASA Technical Reports Server (NTRS)

    Lehtinen, B.; Zeller, J. R.

    1971-01-01

    The application of linear stochastic optimal control theory to the design of the control system for the air intake (inlet) of a supersonic air-breathing propulsion system is discussed. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time invariant control systems are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain the best linear controller that minimizes the nonquadratic performance index. The two systems are compared on the basis of unstart prevention, control effort requirements, and sensitivity to parameter variations.
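
    The quadratic part of such a design reduces to a standard LQR solve of an algebraic Riccati equation; the quadratic equivalence principle then maps the nonquadratic unstart index onto weights of this form (that mapping is not reproduced here). A minimal sketch with an illustrative two-state stand-in for the shock dynamics:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-4.0, -0.8]])   # toy shock position/velocity model
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 0.1])                   # weight shock motion heavily
    R = np.array([[1.0]])                      # control-effort penalty

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)            # optimal feedback u = -K x
    print(K, np.linalg.eigvals(A - B @ K))     # closed-loop poles are stable
    ```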

  17. Viscoelastic properties of dendrimers in the melt from nonequilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Bosko, Jaroslaw T.; Todd, B. D.; Sadus, Richard J.

    2004-12-01

    The viscoelastic properties of dendrimers of generation 1-4 are studied using nonequilibrium molecular dynamics. Flow properties of dendrimer melts under shear are compared to systems composed of linear chain polymers of the same molecular weight, and the influence of molecular architecture is discussed. Rheological material properties, such as the shear viscosity and normal stress coefficients, are calculated and compared for both systems. We also calculate and compare the microscopic properties of both linear chain and dendrimer molecules, such as their molecular alignment, order parameters and rotational velocities. We find that the highly symmetric shape of dendrimers and their highly constrained geometry allows for substantial differences in their material properties compared to traditional linear polymers of equivalent molecular weight.

  18. Multihelix rotating shield brachytherapy for cervical cancer

    PubMed Central

    Dadkhah, Hossein; Kim, Yusung; Wu, Xiaodong; Flynn, Ryan T.

    2015-01-01

    Purpose: To present a novel brachytherapy technique, called multihelix rotating shield brachytherapy (H-RSBT), for the precise angular and linear positioning of a partial shield in a curved applicator. H-RSBT mechanically enables the dose delivery using only linear translational motion of the radiation source/shield combination. The previously proposed approach of serial rotating shield brachytherapy (S-RSBT), in which the partial shield is rotated to several angular positions at each source dwell position [W. Yang et al., "Rotating-shield brachytherapy for cervical cancer," Phys. Med. Biol. 58, 3931–3941 (2013)], is mechanically challenging to implement in a curved applicator, and H-RSBT is proposed as a feasible solution. Methods: A Henschke-type applicator, designed for an electronic brachytherapy source (Xoft Axxent™) and a 0.5 mm thick tungsten partial shield with 180° or 45° azimuthal emission angles and 116° asymmetric zenith angle, is proposed. The interior wall of the applicator contains six evenly spaced helical keyways that rigidly define the emission direction of the partial radiation shield as a function of depth in the applicator. The shield contains three uniformly distributed protruding keys on its exterior wall and is attached to the source such that it rotates freely, thus longitudinal translational motion of the source is transferred to rotational motion of the shield. S-RSBT and H-RSBT treatment plans with 180° and 45° azimuthal emission angles were generated for five cervical cancer patients with a diverse range of high-risk target volume (HR-CTV) shapes and applicator positions. For each patient, the total number of emission angles was held nearly constant for S-RSBT and H-RSBT by using dwell positions separated by 5 and 1.7 mm, respectively, and emission directions separated by 22.5° and 60°, respectively. Treatment delivery time and tumor coverage (D90 of HR-CTV) were the two metrics used as the basis for evaluation and comparison. For all the generated treatment plans, the D90 of the HR-CTV in units of equivalent dose in 2 Gy fractions (EQD2) was escalated until the D2cc (minimum dose to hottest 2 cm³) tolerance of either the bladder (90 Gy3), rectum (75 Gy3), or sigmoid colon (75 Gy3) was reached. Results: Treatment time changed for H-RSBT versus S-RSBT by −7.62% to 1.17% with an average change of −2.8%, thus H-RSBT treatment times tended to be shorter than those for S-RSBT. The HR-CTV D90 also changed by −2.7% to 2.38% with an average of −0.65%. Conclusions: H-RSBT is a mechanically feasible delivery technique for use in the curved applicators needed for cervical cancer brachytherapy. S-RSBT and H-RSBT were clinically equivalent for all patients considered, with the H-RSBT technique tending to require less time for delivery. PMID:26520749

  19. Turbulent flow separation control through passive techniques

    NASA Technical Reports Server (NTRS)

    Lin, J. C.; Howard, F. G.; Selby, G. V.

    1989-01-01

    Several passive separation control techniques for controlling moderate two-dimensional turbulent flow separation over a backward-facing ramp are studied. Small transverse and swept grooves, passive porous surfaces, large longitudinal grooves, and vortex generators were among the techniques used. It was found that, unlike the transverse and longitudinal grooves of an equivalent size, the 45-deg swept-groove configurations tested tended to enhance separation.

  20. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires far fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
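
    As a concrete illustration of the affine-combination idea at the heart of this constraint, the sketch below computes least-squares neighborhood weights for a template point (the same construction used in locally linear embedding); the arrays and the regularization constant are illustrative assumptions, not values from the paper.

      import numpy as np

      def affine_weights(point, neighbors):
          # Weights w minimizing ||point - sum_i w_i * neighbors[i]||^2
          # subject to sum(w) = 1, via the local Gram matrix.
          k = neighbors.shape[0]
          Z = neighbors - point                      # neighbors relative to the point
          G = Z @ Z.T                                # local Gram matrix (k x k)
          G += 1e-9 * np.trace(G) * np.eye(k)        # regularize if k > dimension
          w = np.linalg.solve(G, np.ones(k))
          return w / w.sum()                         # enforce the affine constraint

      # A point exactly reproduced by its neighbors has zero reconstruction error:
      p = np.array([0.5, 0.5])
      N = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
      w = affine_weights(p, N)
      error = np.linalg.norm(p - w @ N)              # this error penalizes mismatches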

  1. Contextual Control by Function and Form of Transfer of Functions

    ERIC Educational Resources Information Center

    Perkins, David R.; Dougher, Michael J.; Greenway, David E.

    2007-01-01

    This study investigated conditions leading to contextual control by stimulus topography over transfer of functions. Three 4-member stimulus equivalence classes, each consisting of four (A, B, C, D) topographically distinct visual stimuli, were established for 5 college students. Across classes, designated A stimuli were open-ended linear figures,…

  2. MICRO-U 70.1: Training Model of an Instructional Institution, Users Manual.

    ERIC Educational Resources Information Center

    Springer, Colby H.

    MICRO-U is a student demand driven deterministic model. Student enrollment, by degree program, is used to develop an Instructional Work Load Matrix. Linear equations using Weekly Student Contact Hours (WSCH), Full Time Equivalent (FTE) students, FTE faculty, and number of disciplines determine library, central administration, and physical plant…

  3. Fillet Weld Stress Using Finite Element Methods

    NASA Technical Reports Server (NTRS)

    Lehnhoff, T. F.; Green, G. W.

    1985-01-01

    Average elastic Von Mises equivalent stresses were calculated along the throat of a single lap fillet weld. The average elastic stresses were compared to initial yield and to plastic instability conditions, and a factor to modify conventional design formulas is presented. The factor is a linear function of the thicknesses of the parent plates attached by the fillet weld.

  4. Examining Factor Score Distributions to Determine the Nature of Latent Spaces

    ERIC Educational Resources Information Center

    Steinley, Douglas; McDonald, Roderick P.

    2007-01-01

    Similarities between latent class models with K classes and linear factor models with K-1 factors are investigated. Specifically, the mathematical equivalence between the covariance structure of the two models is discussed, and a Monte Carlo simulation is performed using generated data that represents both latent factors and latent classes with…

  5. Patient turnover and nursing employment in Massachusetts hospitals before and after health insurance reform: implications for the Patient Protection and Affordable Care Act.

    PubMed

    Shindul-Rothschild, Judith; Gregas, Matt

    2013-01-01

    The Affordable Care Act is modeled after Massachusetts insurance reforms enacted in 2006. A linear mixed effect model examined trends in patient turnover and nurse employment in Massachusetts, New York, and California nonfederal hospitals from 2000 to 2011. The linear mixed effect analysis found that the rate of increase in hospital admissions was significantly higher in Massachusetts hospitals (p<.001) than in California and New York (p=.007). The rate of change in registered nurse full-time equivalent hours per patient day was significantly less (p=.02) in Massachusetts than in California and was not different from zero. The rate of change in admissions relative to registered nurse full-time equivalent hours per patient day was significantly greater in Massachusetts than in California (p=.001) and New York (p<.01). Nurse staffing remained flat in Massachusetts, despite a significant increase in hospital admissions. The implications of the findings for nurse employment and hospital utilization following the implementation of national health insurance reform are discussed.

  6. Additivity of nonsimultaneous masking for short Gaussian-shaped sinusoids.

    PubMed

    Laback, Bernhard; Balazs, Peter; Necciari, Thibaud; Savel, Sophie; Ystad, Solvi; Meunier, Sabine; Kronland-Martinet, Richard

    2011-02-01

    The additivity of nonsimultaneous masking was studied using Gaussian-shaped tone pulses (referred to as Gaussians) as masker and target stimuli. Combinations of up to four temporally separated Gaussian maskers with an equivalent rectangular bandwidth of 600 Hz and an equivalent rectangular duration of 1.7 ms were tested. Each masker was level-adjusted to produce approximately 8 dB of masking. Excess masking (exceeding linear additivity) was generally stronger than reported in the literature for longer maskers and comparable target levels. A model incorporating a compressive input/output function, followed by a linear summation stage, underestimated excess masking when using an input/output function derived from literature data for longer maskers and comparable target levels. The data could be predicted with a more compressive input/output function. Stronger compression may be explained by assuming that the Gaussian stimuli were too short to evoke the medial olivocochlear reflex (MOCR), whereas for longer maskers tested previously the MOCR caused reduced compression. Overall, the interpretation of the data suggests strong basilar membrane compression for very short stimuli.
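
    The excess-masking arithmetic can be made concrete with a toy version of the additivity model: each masker's effect passes through a compressive input/output function before linear summation. The 8 dB single-masker shift comes from the abstract; the compression exponent is an illustrative assumption.

      import numpy as np

      def combined_shift_db(n_maskers, single_shift_db=8.0, p=1.0):
          # Masked-threshold shift for n equal maskers when intensities are
          # compressed by exponent p before summing (p = 1: linear additivity).
          i_single = 10.0 ** (single_shift_db / 10.0)
          i_combined = (n_maskers * i_single ** p) ** (1.0 / p)
          return 10.0 * np.log10(i_combined)

      for n in (2, 3, 4):
          linear = combined_shift_db(n, p=1.0)       # no compression
          strong = combined_shift_db(n, p=0.2)       # strongly compressive I/O
          print(f"{n} maskers: excess masking = {strong - linear:.1f} dB")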

  7. Vibration mitigation in partially liquid-filled vessel using passive energy absorbers

    NASA Astrophysics Data System (ADS)

    Farid, M.; Levy, N.; Gendelman, O. V.

    2017-10-01

    We consider possible solutions for vibration mitigation in a reduced-order model (ROM) of a partially filled liquid tank under impulsive forcing. Such excitations may lead to strong hydraulic impacts applied to the tank inner walls. The finite stiffness of the tank walls is taken into account. In order to mitigate the dangerous internal stresses in the tank walls, we explore both linear (Tuned Mass Damper) and nonlinear (Nonlinear Energy Sink) passive vibration absorbers; mitigation performance in both cases is examined numerically. The liquid sloshing mass is modeled by an equivalent mass-spring-dashpot system, which can both perform small-amplitude linear oscillations and hit the vessel walls. We use parameters of the equivalent mass-spring-dashpot system for the well-explored case of cylindrical tanks. The hydraulic impacts are modeled by high-power potential and dissipation functions. The critical location in the tank structure is determined and an expression for the corresponding local mechanical stress is derived. We use a finite element approach to assess the natural frequencies for specific system parameters. Numerical evaluation criteria are suggested to determine the energy absorption performance.

  8. The Prediction of Scattered Broadband Shock-Associated Noise

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2015-01-01

    A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.

  9. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
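
    A minimal sketch of the ED transformation follows; for brevity it uses an empirical CDF in place of the kernel estimator described in the paper, and the three-parameter catalog is invented for illustration.

      import numpy as np

      def to_equivalent_dimensions(catalog):
          # Map each column (earthquake parameter) through its estimated CDF,
          # so every transformed parameter lies on a linear [0, 1] scale.
          n = catalog.shape[0]
          ranks = np.argsort(np.argsort(catalog, axis=0), axis=0)
          return (ranks + 0.5) / n

      # Columns: occurrence time, magnitude, depth (illustrative values only)
      catalog = np.array([[1.2, 3.1, 5.0],
                          [2.8, 2.4, 7.5],
                          [3.0, 4.0, 2.1],
                          [4.7, 2.9, 9.9],
                          [6.1, 3.5, 4.2]])
      ed = to_equivalent_dimensions(catalog)
      # Inter-event distances in ED space are ordinary Euclidean distances:
      d = np.linalg.norm(ed[0] - ed[1])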

  10. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

    Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.

  11. Equivalent Young's modulus of composite resin for simulation of stress during dental restoration.

    PubMed

    Park, Jung-Hoon; Choi, Nak-Sam

    2017-02-01

    For shrinkage stress simulation in dental restoration, the elastic properties of composite resins should be acquired beforehand. This study proposes a formula to measure the equivalent Young's modulus of a composite resin through a calculation scheme of the shrinkage stress in dental restoration. Two types of composite resins remarkably different in the polymerization shrinkage strain were used for experimental verification: the methacrylate-type (Clearfil AP-X) and the silorane-type (Filtek P90). The linear shrinkage strains of the composite resins were gained through the bonded disk method. A formula to calculate the equivalent Young's moduli of composite resin was derived on the basis of the restored ring substrate. Equivalent Young's moduli were measured for the two types of composite resins through the formula. Those values were applied as input to a finite element analysis (FEA) for validation of the calculated shrinkage stress. Both of the measured moduli through the formula were appropriate for stress simulation of dental restoration in that the shrinkage stresses calculated by the FEA were in good agreement within 3.5% with the experimental values. The concept of equivalent Young's modulus so measured could be applied for stress simulation of 2D and 3D dental restoration. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  12. Blood Density Is Nearly Equal to Water Density: A Validation Study of the Gravimetric Method of Measuring Intraoperative Blood Loss.

    PubMed

    Vitello, Dominic J; Ripper, Richard M; Fettiplace, Michael R; Weinberg, Guy L; Vitello, Joseph M

    2015-01-01

    Purpose. The gravimetric method of weighing surgical sponges is used to quantify intraoperative blood loss. The wet mass minus the dry mass of the gauze equals the volume of blood lost. This method assumes that the density of blood is equivalent to that of water (1 g/mL). This study's purpose was to validate the assumption that the density of blood is equivalent to water and to correlate density with hematocrit. Methods. 50 µL of whole blood was weighed from eighteen rats. A distilled water control was weighed for each blood sample. The averages of the blood and water were compared utilizing a Student's unpaired, one-tailed t-test. The masses of the blood samples and the hematocrits were compared using a linear regression. Results. The average mass of the eighteen blood samples was 0.0489 g and that of the distilled water controls was 0.0492 g. The t-test showed P = 0.2269 and R² = 0.03154. The hematocrit values ranged from 24% to 48%. The linear regression R² value was 0.1767. Conclusions. The t-test comparing the blood and distilled water masses showed no significant difference between the two populations. Linear regression showed the hematocrit was not proportional to the mass of the blood. The study confirmed that the measured density of blood is similar to that of water.

  13. Theoretical Studies of Microstrip Antennas : Volume I, General Design Techniques and Analyses of Single and Coupled Elements

    DOT National Transportation Integrated Search

    1979-09-01

    Volume 1 of Theoretical Studies of Microstrip Antennas deals with general techniques and analyses of single and coupled radiating elements. Specifically, we review and then employ an important equivalence theorem that allows a pair of vector potentia...

  14. Linear network representation of multistate models of transport.

    PubMed Central

    Sandblom, J; Ring, A; Eisenman, G

    1982-01-01

    By introducing external driving forces in rate-theory models of transport we show how the Eyring rate equations can be transformed into Ohm's law with potentials that obey Kirchhoff's second law. From such a formalism the state diagram of a multioccupancy multicomponent system can be directly converted into a linear network with resistors connecting nodal (branch) points and with capacitances connecting each nodal point to a reference point. The external forces appear as emf or current generators in the network. This theory allows the algebraic methods of linear network theory to be used in solving the flux equations for multistate models and is particularly useful for making proper simplifying approximations in models of complex membrane structure. Some general properties of the linear network representation are also deduced. It is shown, for instance, that Maxwell's reciprocity relationships of linear networks lead directly to Onsager's relationships in the near-equilibrium region. Finally, as an example of the procedure, the equivalent circuit method is used to solve the equations for a few transport models. PMID:7093425

  15. Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller

    NASA Technical Reports Server (NTRS)

    Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven

    2004-01-01

    This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation as well as between a nonlinear sampled-data system and a linearized representation holds even in a stochastic framework.

  16. Inductive Linear-Position Sensor/Limit-Sensor Units

    NASA Technical Reports Server (NTRS)

    Alhom, Dean; Howard, David; Smith, Dennis; Dutton, Kenneth

    2007-01-01

    A new sensor provides an absolute position measurement. A motorized linear-translation stage contains, at each end, an electronic unit that functions as both (1) a non-contact sensor that measures the absolute position of the stage and (2) a non-contact equivalent of a limit switch that is tripped when the stage reaches the nominal limit position. The need for such an absolute linear-position-sensor/limit-sensor unit arises in the case of a linear-translation stage that is part of a larger system in which the actual stopping position of the stage (relative to the nominal limit position) must be known. Because inertia inevitably causes the stage to run somewhat past the nominal limit position, tripping of a standard limit switch or other limit sensor does not provide the required indication of the actual stopping position. This innovative sensor unit operates on an electromagnetic-induction principle similar to that of linear variable differential transformers (LVDTs).

  17. Thermally assisted OSL application for equivalent dose estimation; comparison of multiple equivalent dose values as well as saturation levels determined by luminescence and ESR techniques for a sedimentary sample collected from a fault gouge

    NASA Astrophysics Data System (ADS)

    Şahiner, Eren; Meriç, Niyazi; Polymeris, George S.

    2017-02-01

    Equivalent dose (De) estimation constitutes the most important part of both trapped-charge dating techniques and dosimetry applications. In the present work, multiple, independent equivalent dose estimation approaches were adopted, using both luminescence and ESR techniques; two different minerals were studied, namely quartz as well as feldspathic polymineral samples. The work is divided into three independent parts, depending on the type of signal employed. Firstly, different De estimation approaches were carried out on both polymineral and contaminated quartz, using single aliquot regenerative dose protocols employing conventional OSL and IRSL signals, acquired at different temperatures. Secondly, ESR equivalent dose estimations using the additive dose procedure, both at room temperature and at 90 K, are discussed. Lastly, for the first time in the literature, a single aliquot regenerative protocol employing a thermally assisted OSL signal originating from Very Deep Traps was applied to natural minerals. Rejection criteria such as recycling and recovery ratios are also presented. The SAR protocol, whenever applied, provided compatible De estimations with great accuracy, independent of either the type of mineral or the stimulation temperature. Low temperature ESR signals resulting from Al and Ti centers indicate very large De values, associated with large uncertainties, due to their inability to bleach. Additionally, the dose saturation of the different approaches was investigated: for the signal arising from Very Deep Traps in quartz, saturation is extended by almost one order of magnitude. It is interesting that most of the De values yielded using different luminescence signals agree with each other, and that the ESR Ge center has very large D0 values. These results strongly support the argument that the stability and the initial ESR signal of the Ge center are highly sample-dependent, without any instability problems for the case of quartz from fault gouge.

  18. SU-E-J-28: Gantry Speed Significantly Affects Image Quality and Imaging Dose for 4D Cone-Beam Computed Tomography On the Varian Edge Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoso, A; Song, K; Gardner, S

    Purpose: 4D-CBCT facilitates assessment of tumor motion at treatment position. We investigated the effect of gantry speed on 4D-CBCT image quality and dose using the Varian Edge On-Board Imager (OBI). Methods: A thoracic protocol was designed using a 125 kVp spectrum. Image quality parameters were obtained via 4D acquisition using a Catphan phantom with a gating system. A sinusoidal waveform was executed with a five second period and superior-inferior motion. 4D-CBCT scans were sorted into 4 and 10 phases. Image quality metrics included spatial resolution, contrast-to-noise ratio (CNR), uniformity index (UI), Hounsfield unit (HU) sensitivity, and RMS error (RMSE) of motion amplitude. Dosimetry was accomplished using Gafchromic XR-QA2 films within a CIRS Thorax phantom. This was placed on the gating phantom using the same motion waveform. Results: High contrast resolution decreased linearly from 5.93 to 4.18 lp/cm, 6.54 to 4.18 lp/cm, and 5.19 to 3.91 lp/cm for averaged, 4 phase, and 10 phase 4D-CBCT volumes respectively as gantry speed increased from 1.0 to 6.0 degs/sec. CNRs decreased linearly from 4.80 to 1.82 as the gantry speed increased from 1.0 to 6.0 degs/sec. No significant variations in UIs, HU sensitivities, or RMSEs were observed with variable gantry speed. Ion chamber measurements compared to film yielded small percent differences in plastic water regions (0.1–9.6%), larger percent differences in lung equivalent regions (7.5–34.8%), and significantly larger percent differences in bone equivalent regions (119.1–137.3%). Ion chamber measurements decreased from 17.29 to 2.89 cGy with increasing gantry speed from 1.0 to 6.0 degs/sec. Conclusion: Maintaining technique factors while changing gantry speed changes the number of projections used for reconstruction. Increasing the number of projections by decreasing gantry speed decreases noise; however, dose is increased. The future of 4D-CBCT's clinical utility relies on further investigation of image optimization.

  19. Improved linearity in AlGaN/GaN metal-insulator-semiconductor high electron mobility transistors with nonlinear polarization dielectric

    NASA Astrophysics Data System (ADS)

    Gao, Tao; Xu, Ruimin; Kong, Yuechan; Zhou, Jianjun; Kong, Cen; Dong, Xun; Chen, Tangsheng

    2015-06-01

    We demonstrate highly improved linearity in a nonlinear ferroelectric Pb(Zr0.52Ti0.48)O3-gated AlGaN/GaN metal-insulator-semiconductor high electron mobility transistor (MIS-HEMT). A distinct double-hump feature in the transconductance-gate voltage (gm-Vg) curve is observed, yielding remarkable enhancement in gate voltage swing as compared to a MIS-HEMT with a conventional linear gate dielectric. By incorporating the ferroelectric polarization into a self-consistent calculation, it is disclosed that, in addition to the common hump corresponding to the onset of electron accumulation, the second hump at high current level originates from the nonlinear polar nature of the ferroelectric, which enhances the gate capacitance by nonlinearly increasing the equivalent dielectric constant. This work paves a way for the design of high-linearity GaN MIS-HEMTs by exploiting the nonlinear properties of the dielectric.

  20. Breadboard linear array scan imager using LSI solid-state technology

    NASA Technical Reports Server (NTRS)

    Tracy, R. A.; Brennan, J. A.; Frankel, D. G.; Noll, R. E.

    1976-01-01

    The performance of large scale integration photodiode arrays in a linear array scan (pushbroom) breadboard was evaluated for application to multispectral remote sensing of the earth's resources. The technical approach, implementation, and test results of the program are described. Several self scanned linear array visible photodetector focal plane arrays were fabricated and evaluated in an optical bench configuration. A 1728-detector array operating in four bands (0.5 - 1.1 micrometer) was evaluated for noise, spectral response, dynamic range, crosstalk, MTF, noise equivalent irradiance, linearity, and image quality. Other results include image artifact data, temporal characteristics, radiometric accuracy, calibration experience, chip alignment, and array fabrication experience. Special studies and experimentation were included in long array fabrication and real-time image processing for low-cost ground stations, including the use of computer image processing. High quality images were produced and all objectives of the program were attained.

  1. Linear scaling computation of the Fock matrix. II. Rigorous bounds on exchange integrals and incremental Fock build

    NASA Astrophysics Data System (ADS)

    Schwegler, Eric; Challacombe, Matt; Head-Gordon, Martin

    1997-06-01

    A new linear scaling method for computation of the Cartesian Gaussian-based Hartree-Fock exchange matrix is described, which employs a method numerically equivalent to standard direct SCF, and which does not enforce locality of the density matrix. With a previously described method for computing the Coulomb matrix [J. Chem. Phys. 106, 5526 (1997)], linear scaling incremental Fock builds are demonstrated for the first time. Microhartree accuracy and linear scaling are achieved for restricted Hartree-Fock calculations on sequences of water clusters and polyglycine α-helices with the 3-21G and 6-31G basis sets. Eightfold speedups are found relative to our previous method. For systems with a small ionization potential, such as graphitic sheets, the method naturally reverts to the expected quadratic behavior. Also, benchmark 3-21G calculations attaining microhartree accuracy are reported for the P53 tetramerization monomer involving 698 atoms and 3836 basis functions.

  2. Graphical and PC-software analysis of volcano eruption precursors according to the Materials Failure Forecast Method (FFM)

    NASA Astrophysics Data System (ADS)

    Cornelius, Reinold R.; Voight, Barry

    1995-03-01

    The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
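
    The graphical technique for α = 2 is easy to reproduce: the inverse rate decays linearly and its extrapolation to zero gives the failure time. The sketch below, with invented coefficients and noise, mirrors that linearized least-squares idea.

      import numpy as np

      A, t_f = 0.5, 100.0                       # invented rate coefficient and failure time
      t = np.linspace(0.0, 90.0, 30)
      rate = 1.0 / (A * (t_f - t))              # solution of dΩ̇/dt = A·Ω̇² (α = 2)
      rng = np.random.default_rng(1)
      rate *= 1.0 + 0.05 * rng.standard_normal(t.size)   # measurement scatter

      # Fit a line to the inverse rate and extrapolate to 1/rate = 0:
      slope, intercept = np.polyfit(t, 1.0 / rate, 1)
      t_eruption = -intercept / slope           # estimate of the failure time t_f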

  3. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative (linear) approaches to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
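
    To make the comparison concrete, the sketch below fits synthetic calibration data both ways, assuming a one-pole Padé form y = a·x/(1 + b·x) between linear density and coincidence rate; the functional form, parameter values, and noise level are illustrative assumptions, not the instrument's actual calibration.

      import numpy as np
      from scipy.optimize import curve_fit

      def pade(x, a, b):
          return a * x / (1.0 + b * x)          # assumed one-pole Padé response

      rng = np.random.default_rng(0)
      x = np.array([50.0, 100.0, 150.0, 200.0, 250.0])   # 235U linear density
      y = pade(x, 12.0, 4e-3) * (1.0 + 0.02 * rng.standard_normal(x.size))

      # Traditional nonlinear fit:
      (a_nl, b_nl), _ = curve_fit(pade, x, y, p0=(10.0, 1e-3))

      # Linearized fit: 1/y = (1/a)(1/x) + b/a, which distorts the error structure:
      slope, intercept = np.polyfit(1.0 / x, 1.0 / y, 1)
      a_lin, b_lin = 1.0 / slope, intercept / slope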

  4. Antioxidant activity of Cynara scolymus L. and Cynara cardunculus L. extracts obtained by different extraction techniques.

    PubMed

    Kollia, Eleni; Markaki, Panagiota; Zoumpoulakis, Panagiotis; Proestos, Charalampos

    2017-05-01

    Extracts of different parts (heads, bracts and stems) of Cynara cardunculus L. (cardoon) and Cynara scolymus L. (globe artichoke), obtained by two different extraction techniques (Ultrasound-Assisted Extraction (UAE) and classical extraction (CE)), were examined and compared for their total phenolic content (TPC) and their antioxidant activity. Moreover, infusions of the plants' parts were also analysed and compared to the aforementioned samples. Results showed that the cardoon heads extract (obtained by Ultrasound-Assisted Extraction) displayed the highest TPC value (1.57 mg Gallic Acid Equivalents (GAE) g⁻¹ fresh weight (fw)), the highest DPPH• scavenging activity (IC50; 0.91 mg ml⁻¹) and the highest ABTS•+ radical scavenging capacity (2.08 mg Trolox Equivalents (TE) g⁻¹ fw) compared to the infusions and other extracts studied. Moreover, the Ultrasound-Assisted Extraction technique proved to be more appropriate and effective for the extraction of antiradical and phenolic compounds.

  5. Data-Driven Method to Estimate Nonlinear Chemical Equivalence.

    PubMed

    Mayo, Michael; Collier, Zachary A; Winton, Corey; Chappell, Mark A

    2015-01-01

    There is great need to express the impacts of chemicals found in the environment in terms of effects from alternative chemicals of interest. Methods currently employed in fields such as life-cycle assessment, risk assessment, mixtures toxicology, and pharmacology rely mostly on heuristic arguments to justify the use of linear relationships in the construction of "equivalency factors," which aim to model these concentration-concentration correlations. However, the use of linear models, even at low concentrations, oversimplifies the nonlinear nature of the concentration-response curve, therefore introducing error into calculations involving these factors. We address this problem by reporting a method to determine a concentration-concentration relationship between two chemicals based on the full extent of experimentally derived concentration-response curves. Although this method can be easily generalized, we develop and illustrate it from the perspective of toxicology, in which we provide equations relating the sigmoid and non-monotone, or "biphasic," responses typical of the field. The resulting concentration-concentration relationships are manifestly nonlinear for nearly any chemical level, even at the very low concentrations common to environmental measurements. We demonstrate the method using real-world examples of toxicological data which may exhibit sigmoid and biphasic mortality curves. Finally, we use our models to calculate equivalency factors, and show that traditional results are recovered only when the concentration-response curves are "parallel," which has been noted before, but we make formal here by providing mathematical conditions on the validity of this approach.
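
    The core calculation is easy to sketch: map a concentration through one chemical's response curve, then invert the other chemical's curve at that response. With hypothetical Hill (sigmoid) parameters, the resulting equivalency factor clearly varies with concentration rather than being a single linear constant.

      import numpy as np

      def hill(c, top, ec50, n):
          return top * c**n / (ec50**n + c**n)       # sigmoid concentration-response

      def hill_inverse(r, top, ec50, n):
          return ec50 * (r / (top - r)) ** (1.0 / n)

      ref = dict(top=1.0, ec50=10.0, n=1.5)          # hypothetical reference chemical
      alt = dict(top=1.0, ec50=45.0, n=0.8)          # hypothetical alternative chemical

      c_ref = np.logspace(-2, 2, 9)
      response = hill(c_ref, **ref)
      c_equiv = hill_inverse(response, **alt)        # equally potent concentrations
      factor = c_equiv / c_ref                       # "equivalency factor" is dose-dependent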

  6. A research on snow distribution in mountainous area using airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Nishihara, T.; Tanise, A.

    2015-12-01

    In snowy cold regions, the snowmelt water stored in dams in early spring meets the water demand for the summer season. Thus, snowmelt water serves as an important water resource. However, snowmelt water can also cause snowmelt floods. Therefore, it is necessary to estimate the snow water equivalent in a dam basin as accurately as possible. For this reason, the dam operation offices in Hokkaido, Japan conduct snow surveys every March to estimate the snow water equivalent in the dam basin. In estimating, we generally apply a relationship between elevation and snow water equivalent. However, above the forest line, snow surveys are generally conducted along ridges due to the risk of avalanches or other hazards. As a result, the snow water equivalent above the forest line is significantly underestimated. In this study, we conducted airborne laser scanning to measure snow depth in the high-elevation area, including above the forest line, twice over the same target area (in 2012 and 2015) and analyzed the relationships between snow depth above the forest line and several terrain indicators. Our target area was the Chubetsu dam basin, located in a high-elevation mountainous region of central Hokkaido, the northernmost island of Japan, which is a cold and snowy region. The target range for airborne laser scanning was 10 km2. About 60% of the target range was above the forest line. First, we analyzed the relationship between elevation and snow depth. Below the forest line, snow depth increased linearly with elevation. Above the forest line, in contrast, snow depth varied greatly. Second, we analyzed the relationship between overground-openness and snow depth above the forest line. Overground-openness is an indicator quantifying how far a target point is above or below the surrounding surface. As a result, a simple relationship was clarified: snow depth decreased linearly as overground-openness increased. This means that areas with heavy snow cover are distributed in valleys and areas with light cover are on ridges. Lastly, we compared the result of 2012 with that of 2015. The same characteristic of snow depth, mentioned above, was found. However, the regression coefficients of the linear equations differed according to the weather conditions of each year.

  7. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be obtainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for the case of MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across the large frequency bandwidth: for FDTD simulation, the smallest time step must be finer than that required to represent the highest frequency, while the total number of time steps must also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. Our implementation addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation time. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. From these results, we found that the GPU implementation speeds computations by an order of magnitude over a parallel CPU-only approach; in part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  8. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
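
    A worked example of the method of least squares described in the article, using a small invented dataset:

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # predictor variable
      y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])        # outcome variable

      # Least-squares slope and intercept for the line y = b0 + b1*x:
      b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
      b0 = y.mean() - b1 * x.mean()

      residuals = y - (b0 + b1 * x)
      r_squared = 1.0 - residuals.var() / y.var()    # fraction of variance explained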

  9. Dose and linear energy transfer spectral measurements for the supersonic transport program

    NASA Technical Reports Server (NTRS)

    Philbrick, R. B.

    1972-01-01

    The purpose of the package, called the high altitude radiation instrumentation system (HARIS), is to measure the radiation hazard to supersonic transport passengers from solar and galactic cosmic rays. The HARIS includes a gaseous linear energy transfer spectrometer, a tissue-equivalent ionization chamber, and a Geiger-Mueller tube. The HARIS is flown on RB-57F aircraft at 60,000 feet. Data from the HARIS are reduced to give rad and rem dose rates measured by the package during the flights. Results presented include ambient data obtained on background flights, altitude comparison data, and solar flare data.

  10. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283

  11. Alternative self-dual gravity in eight dimensions

    NASA Astrophysics Data System (ADS)

    Nieto, J. A.

    2016-07-01

    We develop an alternative Ashtekar formalism in eight dimensions. In fact, using a MacDowell-Mansouri physical framework and a self-dual curvature symmetry, we propose an action in eight dimensions in which the Levi-Civita tensor with eight indices plays a key role. We explicitly show that such an action contains a number of linear, quadratic and cubic terms in the Riemann tensor, Ricci tensor and scalar curvature. In particular, the linear term reduces to the Einstein-Hilbert action with cosmological constant in eight dimensions. We prove that such a reduced action is equivalent to the Lovelock action in eight dimensions.

  12. Small-area snow surveys on the northern plains of North Dakota

    USGS Publications Warehouse

    Emerson, Douglas G.; Carroll, T.R.; Steppuhn, Harold

    1985-01-01

    Snow-cover data are needed for many facets of hydrology. The variation in snow cover over small areas is the focus of this study. The feasibility of using aerial surveys to obtain information on the snow water equivalent of the snow cover, in order to minimize the need for labor-intensive ground snow surveys, was evaluated. A low-flying aircraft was used to measure attenuation of natural terrestrial gamma radiation by snow cover. Aerial and ground snow surveys of eight 1-mile snow courses and one 4-mile snow course were used in the evaluation, with the ground snow surveys used as the base to evaluate the aerial data. Each of the 1-mile snow courses consisted of a single land use and all had the same terrain type (plane). The 4-mile snow course consists of a variety of land uses and the same terrain type (plane). Using the aerial snow-survey technique, the snow water equivalent of the 1-mile snow courses was measured with three passes of the aircraft. Use of more than one pass did not improve the results. The mean absolute difference between the aerial- and ground-measured snow water equivalents for the 1-mile snow courses was 26 percent (0.77 inches). The aerial snow water equivalents determined for the 1-mile snow courses were used to estimate the variations in the snow water equivalents over the 4-mile snow course. The weighted mean absolute difference for the 4-mile snow course was 27 percent (0.8 inches). Variations in snow water equivalents could not be verified adequately by segmenting the aerial snow-survey data because of the uniformity found in the snow cover. On the 4-mile snow course, about two-thirds of the aerial snow-survey data agreed with the ground snow-survey data within the accuracy of the aerial technique (±0.5 inch of the mean snow water equivalent).

  13. Experimentally determined spectral optimization for dedicated breast computed tomography.

    PubMed

    Prionas, Nicolas D; Huang, Shih-Ying; Boone, John M

    2011-02-01

    The current study aimed to experimentally identify the optimal technique factors (x-ray tube potential and added filtration material/thickness) to maximize soft-tissue contrast, microcalcification contrast, and iodine contrast enhancement using cadaveric breast specimens imaged with dedicated breast computed tomography (bCT). Secondarily, the study aimed to evaluate the accuracy of phantom materials as tissue surrogates and to characterize the change in accuracy with varying bCT technique factors. A cadaveric breast specimen was acquired under appropriate approval and scanned using a prototype bCT scanner. Inserted into the specimen were cylindrical inserts of polyethylene, water, iodine contrast medium (iodixanol, 2.5 mg/ml), and calcium hydroxyapatite (100 mg/ml). Six x-ray tube potentials (50, 60, 70, 80, 90, and 100 kVp) and three different filters (0.2 mm Cu, 1.5 mm Al, and 0.2 mm Sn) were tested. For each set of technique factors, the intensity (linear attenuation coefficient) and noise were measured within six regions of interest (ROIs): Glandular tissue, adipose tissue, polyethylene, water, iodine contrast medium, and calcium hydroxyapatite. Dose-normalized contrast to noise ratio (CNRD) was measured for pairwise comparisons among the six ROIs. Regression models were used to estimate the effect of tube potential and added filtration on intensity, noise, and CNRD. Iodine contrast enhancement was maximized using 60 kVp and 0.2 mm Cu. Microcalcification contrast and soft-tissue contrast were maximized at 60 kVp. The 0.2 mm Cu filter achieved significantly higher CNRD for iodine contrast enhancement than the other two filters (p = 0.01), but microcalcification contrast and soft-tissue contrast were similar using the copper and aluminum filters. The average percent difference in linear attenuation coefficient, across all tube potentials, for polyethylene versus adipose tissue was 1.8%, 1.7%, and 1.3% for 0.2 mm Cu, 1.5 mm Al, and 0.2 mm Sn, respectively. For water versus glandular tissue, the average percent difference was 2.7%, 3.9%, and 4.2% for the three filter types. Contrast-enhanced bCT, using injected iodine contrast medium, may be optimized for maximum contrast of enhancing lesions at 60 kVp with 0.2 mm Cu filtration. Soft-tissue contrast and microcalcification contrast may also benefit from lower tube potentials (60 kVp). The linear attenuation coefficients of water and polyethylene slightly overestimate the values of their corresponding tissues, but the reported differences may serve as guidance for dosimetry and quality assurance using tissue equivalent phantoms.

  14. [Recent advances of anastomosis techniques of esophagojejunostomy after laparoscopic totally gastrectomy in gastric tumor].

    PubMed

    Li, Xi; Ke, Chongwei

    2015-05-01

    The esophageal jejunum anastomosis techniques for digestive tract reconstruction in laparoscopic total gastrectomy fall into two categories: circular stapler anastomosis techniques and linear stapler anastomosis techniques. Circular stapler anastomosis techniques include the manual anastomosis method, the purse string instrument method, the Hiki improved special anvil anastomosis technique, the transorally inserted anvil (OrVil(TM)), and the reverse puncture device technique. Linear stapler anastomosis techniques include the side-to-side anastomosis technique and the Overlap side-to-side anastomosis technique. Esophagojejunal anastomosis offers a wide selection of technologies, each with different strengths and corresponding limitations. This article reviews recent progress in esophagojejunostomy after laparoscopic total gastrectomy from two perspectives: the development of anastomosis technology and the selection among techniques.

  15. Formal Requirements-Based Programming for Complex Systems

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis

    2005-01-01

    Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.

  16. Application of a local linearization technique for the solution of a system of stiff differential equations associated with the simulation of a magnetic bearing assembly

    NASA Technical Reports Server (NTRS)

    Kibler, K. S.; Mcdaniel, G. A.

    1981-01-01

    A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.
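
    The report does not give the bearing equations, but the local linearization step itself is compact: linearize the right-hand side at the current state and integrate the resulting linear model exactly over the step. The sketch below applies it to a generic stiff test system (an illustrative stand-in for the bearing model).

      import numpy as np
      from scipy.linalg import expm, solve

      def local_linearization_step(f, jac, x, h):
          # Integrate x' ≈ f(x_k) + J·(x - x_k) exactly over one step h:
          # x_{k+1} = x_k + J^{-1}(e^{Jh} - I) f(x_k)
          J = jac(x)
          phi = expm(J * h) - np.eye(len(x))
          return x + solve(J, phi @ f(x))

      # Stiff linear test problem with eigenvalues -1 and -1000:
      A = np.array([[-1.0, 0.0], [0.0, -1000.0]])
      f = lambda x: A @ x
      jac = lambda x: A

      x = np.array([1.0, 1.0])
      for _ in range(10):
          x = local_linearization_step(f, jac, x, h=0.1)   # stable at a large step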

  17. Comparison of lossless compression techniques for prepress color images

    NASA Astrophysics Data System (ADS)

    Van Assche, Steven; Denecker, Koen N.; Philips, Wilfried R.; Lemahieu, Ignace L.

    1998-12-01

    In the pre-press industry color images have both a high spatial and a high color resolution. Such images require a considerable amount of storage space and impose long transmission times. Data compression is desired to reduce these storage and transmission problems. Because of the high quality requirements in the pre-press industry only lossless compression is acceptable. Most existing lossless compression schemes operate on gray-scale images. In this case the color components of color images must be compressed independently. However, higher compression ratios can be achieved by exploiting inter-color redundancies. In this paper we present a comparison of three state-of-the-art lossless compression techniques which exploit such color redundancies: IEP (Inter- color Error Prediction) and a KLT-based technique, which are both linear color decorrelation techniques, and Interframe CALIC, which uses a non-linear approach to color decorrelation. It is shown that these techniques are able to exploit color redundancies and that color decorrelation can be done effectively and efficiently. The linear color decorrelators provide a considerable coding gain (about 2 bpp) on some typical prepress images. The non-linear interframe CALIC predictor does not yield better results, but the full interframe CALIC technique does.
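
    The gain from linear inter-color decorrelation can be sketched in a few lines: predict one channel from another by least squares and compare the entropy of the residual with that of the raw channel. The synthetic "image" below is an illustrative assumption, not prepress data.

      import numpy as np

      rng = np.random.default_rng(0)
      R = rng.integers(0, 256, (64, 64)).astype(float)
      G = np.clip(0.9 * R + rng.normal(0.0, 4.0, R.shape), 0, 255)   # correlated channel

      a, b = np.polyfit(R.ravel(), G.ravel(), 1)     # linear inter-color predictor
      residual = np.rint(G - (a * R + b)).astype(int)

      def entropy_bits(values):
          _, counts = np.unique(values, return_counts=True)
          p = counts / counts.sum()
          return float(-(p * np.log2(p)).sum())

      # The residual needs far fewer bits per pixel than the raw channel:
      raw_bits, residual_bits = entropy_bits(G.astype(int)), entropy_bits(residual)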

  18. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
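
    For reference, the KS function aggregates a set of constraints g_i ≤ 0 into one smooth, conservative envelope; a minimal numerically stable implementation (with an assumed aggregation parameter) is:

      import numpy as np

      def ks(g, rho=50.0):
          # KS(g) = g_max + (1/rho) * ln(sum_i exp(rho * (g_i - g_max)))
          # Always >= max(g), approaching it as rho grows.
          g = np.asarray(g, dtype=float)
          g_max = g.max()
          return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

      # Three constraints collapse into one smooth constraint value:
      aggregated = ks([-0.5, -0.1, -0.3])            # slightly above max(g) = -0.1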

  19. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE; the present research paper introduces an innovative method to compute the NLSE using principles in multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article, a new technique is developed to obtain the linear pseudo model for the nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] has been explained in a very different way in this paper. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting a nonlinear regression function. Jae Myung [13] provided a conceptual guide to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".

  20. Linear time relational prototype based learning.

    PubMed

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
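
    A compact sketch of the Nyström idea on a toy similarity matrix follows: from m landmark rows one reconstructs the full n × n matrix with O(nm) storage, which is what makes the linear-time variants feasible. The data and landmark choice are illustrative.

      import numpy as np

      def nystroem(C, W):
          # C: m x n slice of the full matrix, W: m x m landmark block.
          # Reconstruction: K ≈ C^T W^+ C.
          return C.T @ np.linalg.pinv(W) @ C

      x = np.linspace(0.0, 1.0, 12)
      K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)   # toy similarity matrix
      landmarks = [0, 5, 11]
      K_hat = nystroem(K[landmarks, :], K[np.ix_(landmarks, landmarks)])
      rel_error = np.linalg.norm(K - K_hat) / np.linalg.norm(K)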

  1. Solving deterministic non-linear programming problem using Hopfield artificial neural network and genetic programming techniques

    NASA Astrophysics Data System (ADS)

    Vasant, P.; Ganesan, T.; Elamvazuthi, I.

    2012-11-01

    In the past, fairly reasonable results were obtained for non-linear engineering problems using optimization techniques such as neural networks, genetic algorithms, and fuzzy logic independently. Increasingly, hybrid techniques are being used to solve non-linear problems to obtain better output. This paper discusses the use of a neuro-genetic hybrid technique to optimize geological structure mapping, known as seismic surveying. It involves the minimization of an objective function subject to geophysical and operational constraints. In this work, the optimization was initially performed using genetic programming, followed by a hybrid neuro-genetic programming approach. Comparative studies and analysis were then carried out on the optimized results. The results indicate that the hybrid neuro-genetic technique produced better results than the stand-alone genetic programming method.
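
    The abstract gives no algorithmic detail, so purely as a generic illustration of the evolutionary step (objective, penalty, and operators are all invented for this sketch, not taken from the paper), a toy real-coded GA minimizing a penalized objective:

    ```python
    import numpy as np

    def genetic_minimize(obj, penalty, bounds, pop=60, gens=120,
                         rng=np.random.default_rng(0)):
        """Minimize obj(x) + penalty(x) with a toy real-coded GA."""
        lo, hi = np.array(bounds, dtype=float).T
        X = rng.uniform(lo, hi, size=(pop, len(lo)))
        for _ in range(gens):
            fit = np.array([obj(x) + penalty(x) for x in X])
            elite = X[np.argsort(fit)][: pop // 2]          # selection
            kids = 0.5 * (elite + rng.permutation(elite))   # blend crossover
            kids += rng.normal(0, 0.05 * (hi - lo), kids.shape)  # mutation
            X = np.clip(np.vstack([elite, kids]), lo, hi)
        fit = np.array([obj(x) + penalty(x) for x in X])
        return X[np.argmin(fit)]

    # Toy constrained problem: min (x-1)^2 + (y-2)^2  s.t.  x + y <= 2
    obj = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
    pen = lambda v: 100.0 * max(0.0, v[0] + v[1] - 2) ** 2
    print(genetic_minimize(obj, pen, bounds=[(-5, 5), (-5, 5)]))
    ```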

  2. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling times. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of the proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling is highly efficient.
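
    A minimal sketch of the block-matrix construction, assuming (as the abstract suggests) that each row maps a signal on a fine uniform grid to one physical sample via the Whittaker-Shannon sinc kernel; the rates match the paper, but the burst length, offset model, and grid size are my illustrative assumptions:

    ```python
    import numpy as np

    def block_measurement_matrix(sample_times, grid_step, grid_len):
        """Whittaker-Shannon interpolation matrix mapping a signal on a fine
        uniform grid (step grid_step) to samples at arbitrary instants."""
        n = np.arange(grid_len)
        t = np.asarray(sample_times)[:, None]
        return np.sinc((t - n * grid_step) / grid_step)   # one row per sample

    T_eq = 1 / 40e9                # equivalent (fine) sampling period
    T_phys = 1 / 1e9               # physical interleaved-ADC period
    rng = np.random.default_rng(0)
    offset = rng.uniform(0, T_phys)            # random RES trigger offset
    burst = offset + np.arange(8) * T_phys     # one acquisition run's sequence
    Phi = block_measurement_matrix(burst, T_eq, grid_len=1024)
    # Stacking Phi blocks from many runs yields the equivalent CS matrix.
    ```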

  3. Adaptive Control Allocation in the Presence of Actuator Failures

    NASA Technical Reports Server (NTRS)

    Liu, Yu; Crespo, Luis G.

    2010-01-01

    In this paper, a novel adaptive control allocation framework is proposed. In the adaptive control allocation structure, cooperative actuators are grouped and treated as an equivalent control effector. A state feedback adaptive control signal is designed for the equivalent effector and allocated to the member actuators adaptively. Two adaptive control allocation algorithms are proposed, which guarantee closed-loop stability and asymptotic state tracking in the presence of uncertain loss of effectiveness and constant-magnitude actuator failures. The proposed algorithms can be shown to reduce the controller complexity with proper grouping of the actuators. The proposed adaptive control allocation schemes are applied to two linearized aircraft models, and the simulation results demonstrate the performance of the proposed algorithms.
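
    The adaptive laws themselves are not given in the abstract; as a generic illustration of the allocation step only (weighted pseudo-inverse allocation is a standard choice, not the paper's adaptive scheme, and the numbers are invented), a minimal sketch:

    ```python
    import numpy as np

    def allocate(B_members, v, W=None):
        """Distribute an equivalent-effector command v over member actuators
        with a weighted pseudo-inverse: u = W B^T (B W B^T)^{-1} v."""
        B = np.atleast_2d(B_members)
        W = np.eye(B.shape[1]) if W is None else W
        return W @ B.T @ np.linalg.solve(B @ W @ B.T, v)

    # Three elevator segments grouped as one pitch effector
    B = np.array([[1.0, 0.8, 0.8]])    # effectiveness of each segment
    v = np.array([0.3])                # commanded equivalent deflection
    u = allocate(B, v)
    print(u, B @ u)                    # member commands reproduce v exactly
    ```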

  4. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) in radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher-order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, a mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with rs=0.68 on leave-one-out testing, compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and a cell-kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
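
    A minimal sketch of the evaluation protocol described above (leave-one-out scores correlated with outcomes via Spearman rs), using synthetic stand-in data since the institutional dataset is not public; feature names and effect sizes are invented:

    ```python
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneOut
    from sklearn.svm import SVC

    # Synthetic stand-in: 56 patients, two dose-volume features, binary control
    rng = np.random.default_rng(0)
    X = rng.normal(size=(56, 2))                   # e.g. GTV volume, V75
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.8, 56) > 0).astype(int)

    def loo_spearman(model):
        scores = np.empty(len(y))
        for train, test in LeaveOneOut().split(X):
            scores[test] = model.fit(X[train], y[train]).decision_function(X[test])
        return spearmanr(scores, y)[0]

    print("SVM rs:     ", loo_spearman(SVC(kernel="rbf")))
    print("logistic rs:", loo_spearman(LogisticRegression()))
    ```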

  5. Mathematical Techniques for Nonlinear System Theory.

    DTIC Science & Technology

    1981-09-01

    This report deals with research results obtained in the following areas: (1) Finite-dimensional linear system theory by algebraic methods--linear...; (2) Infinite-dimensional linear systems--realization theory of infinite-dimensional linear systems; (3) Nonlinear system theory--basic properties of

  6. Analysis of linear elasticity and non-linearity due to plasticity and material damage in woven and biaxial braided composites

    NASA Astrophysics Data System (ADS)

    Goyal, Deepak

    Textile composites have a wide variety of applications in the aerospace, sports, automobile, marine and medical industries. Due to the availability of a variety of textile architectures and the numerous parameters associated with each, optimal design through extensive experimental testing is not practical. Predictive tools are needed to perform virtual experiments on the various options. The focus of this research is to develop a better understanding of the linear elastic response, plasticity- and damage-induced nonlinear behavior, and mechanics of load flow in textile composites. Textile composites exhibit multiple scales of complexity, so the various textile behaviors are analyzed using two-scale finite element modeling. A framework allowing the use of a wide variety of damage initiation and growth models is proposed. Plasticity-induced non-linear behavior of 2x2 braided composites is investigated using a modeling approach based on Hill's yield function for orthotropic materials. The mechanics of load flow in textile composites is demonstrated using special non-standard postprocessing techniques that not only highlight the important details but also transform the extensive output data into comprehensible modes of behavior. The investigations show that the damage models differ from each other in the amount of degradation as well as in the properties degraded under a particular failure mode. When compared with experimental data, the predictions of some models match well for glass/epoxy composites whereas others match well for carbon/epoxy composites. However, all the models predicted very similar responses when the damage factors were made similar, which shows that the magnitudes of the damage factors are very important. Full 3D as well as equivalent tape laminate predictions lie within the range of the experimental data for a wide variety of braided composites with different material systems, which validates the plasticity analysis. Conclusions about the effect of fiber type on the degree of plasticity-induced non-linearity in a +/-25° braid depend on the measure of non-linearity. Investigations of the mechanics of load flow in textile composites bring new insights into textile behavior; for example, the reasons for the existence of transverse shear stress under uni-axial loading and the occurrence of stress concentrations at certain locations were explained.
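
    For reference, Hill's yield function for orthotropic materials, on which the plasticity modeling above is based, has the standard quadratic form (the coefficients F, G, H, L, M, N are material constants fitted to directional yield stresses; the authors' calibrated values are not reproduced here):

    ```latex
    F(\sigma_{22}-\sigma_{33})^{2} + G(\sigma_{33}-\sigma_{11})^{2}
      + H(\sigma_{11}-\sigma_{22})^{2}
      + 2L\,\sigma_{23}^{2} + 2M\,\sigma_{31}^{2} + 2N\,\sigma_{12}^{2} = 1
    ```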

  7. The detrimental effect of friction on space microgravity robotics

    NASA Technical Reports Server (NTRS)

    Newman, Wyatt S.; Glosser, Gregory D.; Miller, Jeffrey H.; Rohn, Douglas

    1992-01-01

    The authors present an analysis of why control systems are ineffective in compensating for acceleration disturbances due to Coulomb friction. Linear arguments indicate that the effects of Coulomb friction on a body are most difficult to reject when the control actuator is separated from the body by compliance. The linear arguments are illustrated in a nonlinear simulation of optimal linear tracking control in the presence of nonlinear friction. The results of endpoint acceleration measurements for four robot designs are presented and compared with simulation and with equivalent measurements on a human. It is concluded that Coulomb friction in common bearings and transmissions induces unacceptable levels of endpoint acceleration, that these accelerations cannot be adequately attenuated by control, and that robots for microgravity work will require special design considerations for inherently low friction.
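
    The core difficulty is easy to reproduce numerically: stiction leaves a residual error that no amount of proportional gain below the breakaway level can remove. A toy sketch (gains, friction level, and stiction logic are my illustrative assumptions, not the paper's simulation):

    ```python
    import numpy as np

    # 1-kg mass under PD position control with Coulomb friction in the drive
    dt, T = 1e-3, 3.0
    kp, kd, f_c = 50.0, 10.0, 0.2      # PD gains and friction level [N]
    x = v = 0.0
    x_ref = 0.01                        # 1 cm step command
    for _ in range(int(T / dt)):
        u = kp * (x_ref - x) - kd * v   # PD control force
        if abs(v) < 1e-4 and abs(u) <= f_c:
            v, a = 0.0, 0.0             # stiction holds the mass
        else:
            a = u - f_c * np.sign(v if v else u)   # kinetic friction opposes motion
        v += a * dt
        x += v * dt
    # Residual error ~ f_c / kp: friction, not the controller, sets the floor
    print(f"steady-state error: {x_ref - x:.4f} m")
    ```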

  8. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2*, and CO2* formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity-ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2* emission that is required for typical OH*/CH* intensity-ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and intensity-ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity-ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (Φ > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
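
    A minimal sketch of the calibration workflow (PLS regression on full spectra, validated leave-one-out); the spectra below are synthetic stand-ins and the component count is an assumption, but the data layout mirrors the abstract's 9 ratios x 5 replications:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # Rows are chemiluminescence spectra; target is the known equivalence ratio
    rng = np.random.default_rng(0)
    phi = np.repeat(np.linspace(0.73, 1.48, 9), 5)        # 9 ratios x 5 reps
    spectra = np.outer(phi, rng.normal(size=300)) \
              + rng.normal(0, 0.1, (45, 300))             # synthetic spectra

    pls = PLSRegression(n_components=5)
    pred = cross_val_predict(pls, spectra, phi, cv=LeaveOneOut())
    print("LOO RMS error:", np.sqrt(np.mean((pred.ravel() - phi) ** 2)))
    pls.fit(spectra, phi)           # final model for predicting unseen spectra
    ```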

  9. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The ideas of cost decomposition are summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system using quadratic performance criteria. In addition to the insights into system behavior that are afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  10. Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing

    ERIC Educational Resources Information Center

    Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel

    2016-01-01

    Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
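
    As a minimal illustration of the kind of translation rule the method relies on (my example, not one from the paper), a linear-recursive function and its mechanically derived iterative equivalent:

    ```python
    def total_recursive(values):
        """Linear recursion: exactly one recursive call per activation."""
        if not values:
            return 0
        return values[0] + total_recursive(values[1:])

    def total_iterative(values):
        """Equivalent iterative form produced by the translation rule:
        the pending additions become an accumulator updated in a loop."""
        acc = 0
        for v in values:
            acc += v
        return acc

    assert total_recursive([3, 1, 4]) == total_iterative([3, 1, 4]) == 8
    ```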

  11. Techniques for forced response involving discrete nonlinearities. I - Theory. II - Applications

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; Callahan, John O.

    Several new techniques developed for the forced response analysis of systems containing discrete nonlinear connection elements are presented and compared to the traditional methods. In particular, the techniques examined are the Equivalent Reduced Model Technique (ERMT), the Modal Modification Response Technique (MMRT), and the Component Element Method (CEM). The general theory of the techniques is presented, and applications are discussed with particular reference to a nonlinear beam system model analyzed using ERMT, MMRT, and CEM; the nonlinear response of a frame using the three techniques; and a comparison of the results obtained with the ERMT, MMRT, and CEM models.

  12. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.

    PubMed

    Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C

    2014-03-01

    In recent years the number of actively controllable joints in electrically powered hand prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate, as they require separate and sequential control of each degree of freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for independent, simultaneous, and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), the multilayer perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve similar performance to KRR at much lower computational cost. ME in particular, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
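
    A minimal sketch of the linear-versus-kernel comparison on synthetic EMG-like features (the feature map, noise level, and hyperparameters are illustrative assumptions, not the study's protocol):

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import LinearRegression

    # Stand-in data: 8 electrode features predicting 2-DoF wrist velocity
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 8))
    Y = np.tanh(X[:, :2]) + 0.05 * rng.normal(size=(2000, 2))  # mildly nonlinear map

    Xtr, Xte, Ytr, Yte = X[:1500], X[1500:], Y[:1500], Y[1500:]
    for name, model in [("linear", LinearRegression()),
                        ("KRR", KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.1))]:
        err = np.mean((model.fit(Xtr, Ytr).predict(Xte) - Yte) ** 2)
        print(name, "test MSE:", err)   # KRR wins on the nonlinear part
    ```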

  13. Sparse 4D TomoSAR imaging in the presence of non-linear deformation

    NASA Astrophysics Data System (ADS)

    Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    In this paper, we present a sparse four-dimensional tomographic synthetic aperture radar (4D TomoSAR) imaging scheme that can estimate elevation and linear as well as non-linear seasonal deformation rates of scatterers using the interferometric phase. Unlike existing sparse processing techniques that use fixed dictionaries based on a linear deformation model, we use a variable dictionary for the non-linear deformation in the form of seasonal sinusoidal deformation, in addition to the fixed dictionary for the linear deformation. We estimate the amplitude of the sinusoidal deformation using an optimization method and create the variable dictionary using the estimated amplitude. We show preliminary results using simulated data that demonstrate the soundness of our proposed technique for sparse 4D TomoSAR imaging in the presence of non-linear deformation.
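
    A sketch of the two-part dictionary idea only; every numerical value below (wavelength, baselines, grids) is invented for illustration, and the phase model is the standard TomoSAR form rather than the authors' exact formulation:

    ```python
    import numpy as np

    wavelength = 0.031                   # m, X-band (assumed)
    t = np.linspace(0, 3, 30)            # acquisition times [years]
    b = np.random.default_rng(0).uniform(-200, 200, 30)   # baselines [m]
    r, xi = 7e5, 4 * np.pi / wavelength  # range and phase constant

    def atom(s, v, a=0.0):
        """Phase history of a scatterer at elevation s with linear rate v
        and seasonal sinusoid amplitude a (the 'variable' part)."""
        deform = v * t + a * np.sin(2 * np.pi * t)
        return np.exp(1j * xi * (b * s / r + deform))

    # Fixed dictionary: grid over (elevation, linear rate). The variable part
    # is rebuilt once the sinusoid amplitude has been estimated externally.
    S, V = np.meshgrid(np.linspace(-20, 20, 21), np.linspace(-0.02, 0.02, 21))
    D_fixed = np.stack([atom(s, v) for s, v in zip(S.ravel(), V.ravel())], axis=1)
    ```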

  14. Adaptive Nonlinear RF Cancellation for Improved Isolation in Simultaneous Transmit–Receive Systems

    NASA Astrophysics Data System (ADS)

    Kiayani, Adnan; Waheed, Muhammad Zeeshan; Anttila, Lauri; Abdelaziz, Mahmoud; Korpi, Dani; Syrjala, Ville; Kosunen, Marko; Stadius, Kari; Ryynanen, Jussi; Valkama, Mikko

    2018-05-01

    This paper proposes an active radio frequency (RF) cancellation solution to suppress the transmitter (TX) passband leakage signal in radio transceivers supporting simultaneous transmission and reception. The proposed technique is based on creating an opposite-phase baseband equivalent replica of the TX leakage signal in the transceiver digital front-end through adaptive nonlinear filtering of the known transmit data, to facilitate highly accurate cancellation under a nonlinear TX power amplifier (PA). The active RF cancellation is then accomplished by employing an auxiliary transmitter chain, to generate the actual RF cancellation signal, and combining it with the received signal at the receiver (RX) low noise amplifier (LNA) input. A closed-loop parameter learning approach, based on the decorrelation principle, is also developed to efficiently estimate the coefficients of the nonlinear cancellation filter in the presence of a nonlinear TX PA with memory, finite passive isolation, and a nonlinear RX LNA. The performance of the proposed cancellation technique is evaluated through comprehensive RF measurements adopting commercial LTE-Advanced transceiver hardware components. The results show that the proposed technique can provide an additional suppression of up to 54 dB for the TX passband leakage signal at the RX LNA input, even at considerably high transmit power levels and with wide transmission bandwidths. This novel cancellation solution can therefore substantially improve TX-RX isolation, reducing the requirements on passive isolation and RF component linearity, as well as increasing the efficiency and flexibility of RF spectrum use in the emerging 5G radio networks.
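
    The nonlinear cancellation filter can be pictured as a memory polynomial adapted on the known transmit data; the sketch below uses a normalized-LMS update and a toy cubic "PA" as stand-ins (the paper's decorrelation-based learning and hardware chain are not reproduced here):

    ```python
    import numpy as np

    def mp_basis(x, M=3):
        """Memory-polynomial basis: x[n-m] * |x[n-m]|^(k-1), odd k in {1,3,5}.
        np.roll wraps at the edges, which is fine for a sketch."""
        cols = []
        for m in range(M):
            xm = np.roll(x, m)
            for k in (1, 3, 5):
                cols.append(xm * np.abs(xm) ** (k - 1))
        return np.column_stack(cols)

    rng = np.random.default_rng(0)
    x = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) / np.sqrt(2)
    leak = 0.1 * x - 0.02 * x * np.abs(x) ** 2      # toy nonlinear TX leakage

    Phi = mp_basis(x)
    w = np.zeros(Phi.shape[1], complex)
    for n in range(len(x)):                          # normalized-LMS adaptation
        p = Phi[n]
        e = leak[n] - p @ w                          # residual after cancellation
        w += 0.5 * np.conj(p) * e / (np.vdot(p, p).real + 1e-6)
    res = leak - Phi @ w
    print("residual power [dB]:",
          10 * np.log10(np.mean(np.abs(res) ** 2) / np.mean(np.abs(leak) ** 2)))
    ```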

  15. Low-complexity stochastic modeling of wall-bounded shear flows

    NASA Astrophysics Data System (ADS)

    Zare, Armin

    Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows.

    Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement.

    In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves.

    In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
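
    The consistency constraint at the heart of such covariance completion problems is the Lyapunov-like equation relating the steady-state state covariance of a stochastically driven linear system to its excitation (a standard result; the notation below is mine, not the dissertation's):

    ```latex
    \dot{x} = A x + B w
      \;\Longrightarrow\;
    A X + X A^{*} + B H^{*} + H B^{*} = 0
    ```

    Here X is the (partially known) state covariance and H captures the correlation between the input w and the state; for white-in-time forcing with covariance \Omega, H = \tfrac{1}{2} B \Omega and the constraint reduces to the familiar Lyapunov equation A X + X A^{*} + B \Omega B^{*} = 0.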

  16. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saïdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space; an interesting technique among them is alpha equivalence reduction. Distributed-memory execution environments offer yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may easily be adapted to a distributed implementation of alpha equivalence reduction for maximality-based labeled transition systems.

  17. Lowering Whole-Body Radiation Doses in Pediatric Intensity-Modulated Radiotherapy Through the Use of Unflattened Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cashmore, Jason, E-mail: Jason.cashmore@uhb.nhs.uk; Ramtohul, Mark; Ford, Dan

    Purpose: Intensity modulated radiotherapy (IMRT) has been linked with an increased risk of secondary cancer induction due to the extra leakage radiation associated with delivery of these techniques. Removal of the flattening filter offers a simple way of reducing head leakage, and it may be possible to generate equivalent IMRT plans and to deliver these on a standard linear accelerator operating in unflattened mode. Methods and Materials: An Elekta Precise linear accelerator has been commissioned to operate in both conventional and unflattened modes (energy matched at 6 MV) and a direct comparison made between the treatment planning and delivery of pediatric intracranial treatments using both approaches. These plans have been evaluated and delivered to an anthropomorphic phantom. Results: Plans generated in unflattened mode are clinically identical to those for conventional IMRT but can be delivered with greatly reduced leakage radiation. Measurements in an anthropomorphic phantom at clinically relevant positions including the thyroid, lung, ovaries, and testes show an average reduction in peripheral doses of 23.7%, 29.9%, 64.9%, and 70.0%, respectively, for identical plan delivery compared to conventional IMRT. Conclusions: IMRT delivery in unflattened mode removes an unwanted and unnecessary source of scatter from the treatment head and lowers leakage doses by up to 70%, thereby reducing the risk of radiation-induced second cancers. Removal of the flattening filter is recommended for IMRT treatments.

  18. Active disturbance rejection control based robust output feedback autopilot design for airbreathing hypersonic vehicles.

    PubMed

    Tian, Jiayi; Zhang, Shifeng; Zhang, Yinhui; Li, Tong

    2018-03-01

    Since the motion control plant y^(n) = f(·) + d was repeatedly used to exemplify how active disturbance rejection control (ADRC) works when it was proposed, the integral-chain system subject to matched disturbances is often regarded as a canonical form and even misconstrued as the only form to which ADRC is applicable. In this paper, a systematic approach is first presented for applying ADRC to a generic nonlinear uncertain system with mismatched disturbances, and a robust output feedback autopilot for an airbreathing hypersonic vehicle (AHV) is devised on that basis. The key idea is to employ feedback linearization (FL) and the equivalent input disturbance (EID) technique to decouple the nonlinear uncertain system into several subsystems in canonical form, so that it becomes much easier to directly design classical or improved, linear or nonlinear ADRC controllers for each subsystem. Notably, all disturbances are taken into account when implementing FL, rather than being omitted as in previous research, which greatly enhances the controllers' robustness against external disturbances. For autopilot design, the ADRC strategy enables precise tracking of velocity and altitude reference commands in the presence of severe parametric perturbations and atmospheric disturbances, using only measurable output information. Bounded-input bounded-output (BIBO) stability of the closed-loop system is analyzed. To illustrate the feasibility and superiority of this novel design, a series of comparative simulations with prominent and representative methods is carried out on a benchmark longitudinal AHV model. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
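
    A minimal sketch of the linear ADRC building block referred to above, for a second-order canonical subsystem y'' = f(·) + b·u: an extended state observer (ESO) estimates the total disturbance f and the control cancels it. All gains, the plant, and the disturbance are illustrative assumptions, not the paper's AHV design:

    ```python
    import numpy as np

    dt, b0, wo, wc = 1e-3, 1.0, 60.0, 10.0
    l1, l2, l3 = 3 * wo, 3 * wo**2, wo**3      # ESO gains (poles at -wo)
    kp, kd = wc**2, 2 * wc                      # feedback gains (poles at -wc)

    z = np.zeros(3)        # ESO states: estimates of [y, y', total disturbance f]
    y = yd = 0.0
    r = 1.0                # step reference
    for _ in range(int(5 / dt)):
        u = (kp * (r - z[0]) - kd * z[1] - z[2]) / b0   # disturbance-cancelling law
        f = -2.0 * yd + 0.5                  # unknown damping + constant load
        ydd = f + b0 * u                     # true plant: y'' = f + b0*u
        yd += ydd * dt
        y += yd * dt
        e = y - z[0]                         # ESO update (Euler discretization)
        z += dt * np.array([z[1] + l1 * e, z[2] + b0 * u + l2 * e, l3 * e])
    print(f"output ~ reference: {y:.3f}")    # tracks r despite unmodeled f
    ```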

  19. 1r2dinv: A finite-difference model for inverse analysis of two dimensional linear or radial groundwater flow

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, J.J.

    2001-01-01

    We have developed a program for inverse analysis of two-dimensional linear or radial groundwater flow problems. The program, 1r2dinv, uses standard finite difference techniques to solve the groundwater flow equation for a horizontal or vertical plane with heterogeneous properties. In radial mode, the program simulates flow to a well in a vertical plane, transforming the radial flow equation into an equivalent problem in Cartesian coordinates. The physical parameters in the model are horizontal or x-direction hydraulic conductivity, anisotropy ratio (vertical to horizontal conductivity in a vertical model, y-direction to x-direction in a horizontal model), and specific storage. The program allows the user to specify arbitrary and independent zonations of these three parameters and also to specify which zonal parameter values are known and which are unknown. The Levenberg-Marquardt algorithm is used to estimate parameters from observed head values. Particularly powerful features of the program are the ability to perform simultaneous analysis of heads from different tests and the inclusion of the wellbore in the radial mode. These capabilities allow the program to be used for analysis of suites of well tests, such as multilevel slug tests or pumping tests in a tomographic format. The combination of information from tests stressing different vertical levels in an aquifer provides the means for accurately estimating vertical variations in conductivity, a factor profoundly influencing contaminant transport in the subsurface. © 2001 Elsevier Science Ltd. All rights reserved.
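
    A sketch of the estimation step only: Levenberg-Marquardt fits zonal parameters so simulated heads match observed heads. The forward model below is a placeholder, not 1r2dinv's finite-difference solver, and the log transform for positivity is my assumption:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def simulate_heads(log_params, geometry):
        Kx, aniso, Ss = np.exp(log_params)       # positivity via log transform
        # ... a real forward model would run the flow simulation here ...
        return geometry @ np.array([Kx, aniso, Ss])   # placeholder response

    obs = np.array([1.2, 0.9, 0.7, 0.4])                      # observed heads
    geom = np.random.default_rng(0).uniform(0.1, 1.0, (4, 3)) # stand-in geometry
    res = least_squares(lambda p: simulate_heads(p, geom) - obs,
                        x0=np.log([1.0, 0.5, 1e-4]), method="lm")
    print("estimated (Kx, anisotropy, Ss):", np.exp(res.x))
    ```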

  20. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
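
    A sketch of the averaging step, assuming a power-law index profile consistent with the description above, n(r) = n_edge + (n_center - n_edge)(1 - r^p) on the normalized half-axis; the exponent values are invented to show the trend (a flatter plateau, i.e. larger p, raises the average even with fixed center and edge indices):

    ```python
    import numpy as np

    n_center, n_edge = 1.415, 1.37     # fixed center and edge indices

    def average_index(p, samples=10001):
        r = np.linspace(0.0, 1.0, samples)        # normalized axial distance
        n = n_edge + (n_center - n_edge) * (1.0 - r**p)
        return np.trapz(n, r)                     # mean over the half-axis

    for p in (4, 8):                   # higher p ~ older, flatter profile
        print(f"p={p}: average axial index = {average_index(p):.4f}")
    # Prints ~1.406 then ~1.410, matching the reported rise with age.
    ```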
