Sample records for weissenberg method

  1. Purely-elastic flow instabilities and elastic turbulence in microfluidic cross-slot devices

    PubMed Central

    Sousa, P. C.; Pinho, F. T.

    2018-01-01

    We experimentally investigate the dynamics of viscoelastic fluid flows in cross-slot microgeometries under creeping flow conditions. We focus on the unsteady flow regime observed at high Weissenberg numbers (Wi) with the purpose of understanding the underlying flow signature of elastic turbulence. The effects of the device aspect ratio and fluid rheology on the unsteady flow state are investigated. Visualization of the flow patterns and time-resolved micro-particle image velocimetry were carried out to study the fluid flow behavior for a wide range of Weissenberg numbers. A periodic flow behavior is observed at low Weissenberg numbers followed by a more complex dynamics as Wi increases, eventually leading to the onset of elastic turbulence for very high Weissenberg numbers. PMID:29376533

  2. Spectral Elements Analysis for Viscoelastic Fluids at High Weissenberg Number Using Logarithmic conformation Tensor Model

    NASA Astrophysics Data System (ADS)

    Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas

    2008-09-01

    This study discusses the capability of constitutive laws written for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. The high Weissenberg number problem (HWNP) usually produces a lack of convergence of the numerical algorithms. Even though the question of whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation is a crucial step, although implementing a suitable numerical technique remains important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.
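
    The log-conformation approach evolves the matrix logarithm ψ = log c instead of the conformation tensor c itself, which keeps c positive definite when it is recovered by exponentiation. As a minimal sketch (not the paper's spectral-element implementation), the polymer stress can be rebuilt from a stored ψ by eigendecomposition; an Oldroyd-B-type closure and the parameter values below are illustrative assumptions:

      import numpy as np

      def polymer_stress_from_log_conformation(psi, eta_p, lam):
          """Recover the polymer stress from psi = log(c).

          psi   : (2, 2) symmetric matrix, matrix logarithm of the conformation tensor
          eta_p : polymer viscosity
          lam   : relaxation time
          Uses psi = R diag(w) R^T, so c = R diag(exp(w)) R^T, and (Oldroyd-B-type
          closure) tau_p = (eta_p / lam) * (c - I).
          """
          w, R = np.linalg.eigh(psi)            # eigen-decomposition of psi
          c = R @ np.diag(np.exp(w)) @ R.T      # conformation tensor c = exp(psi)
          return (eta_p / lam) * (c - np.eye(2))

      # Example: a mildly stretched conformation state (illustrative numbers)
      psi = np.array([[0.8, 0.3],
                      [0.3, -0.1]])
      print(polymer_stress_from_log_conformation(psi, eta_p=1.0, lam=2.0))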

  3. Finite volume multigrid method of the planar contraction flow of a viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Moatssime, H. Al; Esselaoui, D.; Hakim, A.; Raghay, S.

    2001-08-01

    This paper reports on a numerical algorithm for the steady flow of a viscoelastic fluid. The conservation and constitutive equations are solved using the finite volume method (FVM) with a hybrid scheme for the velocities and a first-order upwind approximation for the viscoelastic stress. A non-uniform staggered grid system is used. The iterative SIMPLE algorithm is employed to relax the coupled momentum and continuity equations. The non-linear algebraic equations over the flow domain are solved iteratively by the symmetrically coupled Gauss-Seidel (SCGS) method. In both cases, the full approximation storage (FAS) multigrid algorithm is used. An Oldroyd-B fluid model was selected for the calculations. Results are reported for a planar 4:1 abrupt contraction at various Weissenberg numbers. The solutions are found to be stable and smooth, and they show that at high Weissenberg numbers the computational domain must be sufficiently long. The convergence of the method has been verified with grid refinement. All the calculations were performed on a PC equipped with a Pentium III processor at 550 MHz.

  4. Numerical study of entropy generation and melting heat transfer on MHD generalised non-Newtonian fluid (GNF): Application to optimal energy

    NASA Astrophysics Data System (ADS)

    Iqbal, Z.; Mehmood, Zaffar; Ahmad, Bilal

    2018-05-01

    This paper concerns an application to optimal energy by incorporating thermal equilibrium in an MHD generalised non-Newtonian fluid model with a melting heat effect. The highly nonlinear system of partial differential equations is simplified to a nonlinear system of ordinary differential equations using the boundary layer approach and similarity transformations. Numerical solutions for the velocity and temperature profiles are obtained using the shooting method. The contribution of entropy generation is appraised for the thermal and velocity fields. The physical effects of the relevant parameters are discussed with the help of graphs and tables. Some noteworthy findings are: the Prandtl number, power law index and Weissenberg number contribute to lowering the mass boundary layer thickness and entropy effect and to enlarging the thermal boundary layer thickness, whereas an increase in the mass boundary layer is due only to the melting heat parameter. Moreover, the thermal boundary layers show the same trend for all parameters, i.e., the temperature is enhanced with increasing values of the significant parameters. Similarly, the Hartmann and Weissenberg numbers enhance the Bejan number.
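
    The shooting method used here (and in several records below) converts the two-point boundary value problem into an initial value problem and iterates on the unknown initial slope until the far-field condition is met. A minimal sketch of the technique, using the classical Blasius boundary-layer equation rather than this paper's coupled MHD system:

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      # Blasius equation: f''' + 0.5*f*f'' = 0, with f(0) = f'(0) = 0, f'(inf) = 1.
      # Shoot on the unknown wall value f''(0).
      def rhs(eta, y):
          f, fp, fpp = y
          return [fp, fpp, -0.5 * f * fpp]

      def residual(fpp0, eta_max=10.0):
          sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
          return sol.y[1, -1] - 1.0        # miss distance in f'(eta_max)

      fpp0 = brentq(residual, 0.1, 1.0)    # bracket and find the root of the miss distance
      print(f"f''(0) = {fpp0:.4f}")         # classical value is about 0.332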

  5. Effects of Magnetic field on Peristalsis transport of a Carreau Fluid in a tapered asymmetric channel

    NASA Astrophysics Data System (ADS)

    Prakash, J.; Balaji, N.; Siva, E. P.; Kothandapani, M.; Govindarajan, A.

    2018-04-01

    The paper is concerned with the effects of a uniform applied magnetic field on Carreau fluid flow in a tapered asymmetric channel with peristalsis. The channel non-uniformity and asymmetry are produced by choosing peristaltic wave trains on the tapered walls with different amplitudes and phases (ϕ). The governing equations of the Carreau model for two-dimensional peristaltic flow are constructed under the assumptions of long wavelength and low Reynolds number. The simplified non-linear governing equations are solved by a regular perturbation method. The expressions for pressure rise, frictional force, velocity and stream function are determined, and the effects of different parameters such as the non-dimensional wall amplitudes (a and b), non-uniformity parameter (m), Hartmann number (M), phase difference (ϕ), power law index (n) and Weissenberg number (We) on the flow characteristics are discussed. It is observed that for large values of the rheological parameter We the pressure-rise curves are not linear, whereas the fluid behaves like a Newtonian fluid for very small Weissenberg numbers.

  6. Chemical reaction for Carreau-Yasuda nanofluid flow past a nonlinear stretching sheet considering Joule heating

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Shahid, Amna; Malik, M. Y.; Salahuddin, T.

    2018-03-01

    The present analysis scrutinizes the consequences of a chemical reaction on magnetohydrodynamic Carreau-Yasuda nanofluid flow induced by a non-linear stretching surface under zero normal flux, slip and convective boundary conditions. The Joule heating effect is also considered. An appropriate similarity approach is used to convert the leading system of PDEs for the Carreau-Yasuda nanofluid into nonlinear ODEs. The well-known shooting method is utilized to solve the system numerically. The influence of the physical parameters, namely the Weissenberg number We, thermal slip parameter δ, thermophoresis number NT, non-linear stretching parameter n, magnetic field parameter M, velocity slip parameter k, Lewis number Le, Brownian motion parameter NB, Prandtl number Pr, Eckert number Ec and chemical reaction parameter γ, upon the temperature, velocity and concentration profiles is visualized through graphs and tables. The numerical behaviour of the mass and heat transfer rates and the friction factor is also represented in tabular and graphical form. The skin friction coefficient reduces when the Weissenberg number We is incremented, the rate of heat transfer is enhanced for large values of the Brownian motion parameter NB, and the rate of mass transfer declines with increasing Lewis number Le.

  7. Conceptual design of novel IP-conveyor-belt Weissenberg-mode data-collection system with multi-readers for macromolecular crystallography. A comparison between Galaxy and Super Galaxy.

    PubMed

    Sakabe, N; Sakabe, K; Sasaki, K

    2004-01-01

    Galaxy is a Weissenberg-type high-speed, high-resolution and highly accurate fully automatic data-collection system using two cylindrical IP cassettes, each with a radius of 400 mm and a width of 450 mm. It was originally developed for static three-dimensional analysis using X-ray diffraction and was installed on bending-magnet beamline BL6C at the Photon Factory. It was found, however, that Galaxy was also very useful for time-resolved protein crystallography on a time scale of minutes. This has prompted us to design a new IP-conveyor-belt Weissenberg-mode data-collection system called Super Galaxy for time-resolved crystallography with improved time and crystallographic resolution over that achievable with Galaxy. Super Galaxy was designed with a half-cylinder-shaped cassette with a radius of 420 mm and a width of 690 mm. Using 1.0 Å incident X-rays, these dimensions correspond to a maximum resolution of 0.71 Å in the vertical direction and 1.58 Å in the horizontal. Upper and lower screens can be used to set the frame size of the recorded image. This function is useful not only to reduce the frame exchange time but also to save disk space on the data server. The use of an IP conveyor belt and many IP readers makes Super Galaxy well suited for time-resolved, monochromatic X-ray crystallography at a very intense third-generation SR beamline. Here, Galaxy and a conceptual design for Super Galaxy are described, and their suitability as data-collection systems for macromolecular time-resolved monochromatic X-ray crystallography is compared.

  8. Spatial-temporal dynamics of Newtonian and viscoelastic turbulence in channel flow

    NASA Astrophysics Data System (ADS)

    Wang, Sung-Ning; Shekar, Ashwin; Graham, Michael

    2016-11-01

    Introducing a trace amount of polymer into liquid turbulent flows can result in a substantial reduction of friction drag. This phenomenon has been widely used in fluid transport; however, the mechanism is not well understood. Past studies have found that in minimal-domain turbulent simulations there are occasional time periods when the flow exhibits features such as weaker vortices, lower friction drag and a larger log-law slope; these have been denoted "hibernating turbulence". Here we address the question of whether similar behavior arises spatio-temporally in extended domains, focusing on turbulence at friction Reynolds numbers near transition and Weissenberg numbers resulting in low to medium drag reduction. By using image analysis and conditional sampling tools, we identify the hibernating states in extended domains and show that they display a striking similarity to those in minimal domains. The hibernating states at different Weissenberg numbers exhibit similar flow statistics, suggesting they are unaltered by low to medium viscoelasticity. In addition, the polymer is much less stretched during hibernation. Finally, these hibernating states vanish as the Reynolds number increases; however, they reoccur and gradually become dominant with increasing viscoelasticity.

  9. The crystal structures of potassium and cesium trivanadates

    USGS Publications Warehouse

    Evans, H.T.; Block, S.

    1966-01-01

    Potassium and cesium trivanadates are monoclinic and isomorphous, space group P21/m, with the following dimensions (Z = 2): KV3O8, a = 7.640 Å, b = 8.380 Å, c = 4.979 Å, β = 96°57′; CsV3O8, a = 8.176 Å, b = 8.519 Å, c = 4.988 Å, β = 95°32′. The crystal structure of KV3O8 has been determined from hk0, 0kl, and h0l Weissenberg data with an R factor of 0.15. The structure of CsV3O8 has been refined with 1273 hkl Weissenberg data to an R factor of 0.089. The structures consist of corrugated sheets based on a linkage of distorted VO6 octahedra. Two of the vanadium atoms lie in double, square-pyramid groups V2O8, which are linked through opposite basal corners into chains along the b axis. The chains are joined laterally along the c axis into sheets by the third vanadium atom in VO groups, also forming part of a square-pyramid coordination. Various aspects of these structures are compared with other known oxovanadate structures.
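
    The R factors quoted in this and other crystallographic records are the conventional residual comparing observed and calculated structure-factor amplitudes. A minimal sketch of that formula, with made-up amplitudes purely for illustration:

      import numpy as np

      def r_factor(F_obs, F_calc):
          """Conventional crystallographic residual R = sum(| |Fo| - |Fc| |) / sum(|Fo|)."""
          F_obs, F_calc = np.abs(np.asarray(F_obs)), np.abs(np.asarray(F_calc))
          return np.sum(np.abs(F_obs - F_calc)) / np.sum(F_obs)

      # Illustrative (made-up) structure-factor amplitudes
      F_obs  = [120.0, 85.0, 40.0, 67.0]
      F_calc = [112.0, 90.0, 36.0, 70.0]
      print(f"R = {r_factor(F_obs, F_calc):.3f}")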

  10. Pure axial flow of viscoelastic fluids in rectangular microchannels under combined effects of electro-osmosis and hydrodynamics

    NASA Astrophysics Data System (ADS)

    Reshadi, Milad; Saidi, Mohammad Hassan; Ebrahimi, Abbas

    2018-02-01

    This paper presents an analysis of the combined electro-osmotic and pressure-driven axial flow of viscoelastic fluids in a rectangular microchannel with arbitrary aspect ratio. The rheological behavior of the fluid is described by the complete form of the Phan-Thien-Tanner (PTT) model with the Gordon-Schowalter convected derivative, which covers the upper convected Maxwell, Johnson-Segalman and FENE-P models. Our numerical simulation is based on the computation of the 2D Poisson-Boltzmann, Cauchy momentum and PTT constitutive equations. The solution of this coupled set of nonlinear governing equations is obtained by using a second-order central finite difference method on a non-uniform grid and is verified against the 1D analytical solution of the velocity profile with less than 0.06% relative error. Also, a parametric study is carried out to investigate the effect of channel aspect ratio (width to height), wall zeta potential and the Debye-Hückel parameter on the 2D velocity profile, volumetric flow rate and Poiseuille number in the mixed EO/PD flows of viscoelastic fluids with different Weissenberg numbers. Our results show that, for low channel aspect ratios, the previous 1D analytical models underestimate the velocity profile at the channel half-width centerline in the case of favorable pressure gradients and overestimate it in the case of adverse pressure gradients. The results reveal that the inapplicability of the Debye-Hückel approximation at high zeta potentials is more significant for higher Weissenberg number fluids. Also, it is found that, under the specified values of the electrokinetic parameters, there is a threshold velocity scale ratio at which the Poiseuille number is approximately independent of the channel aspect ratio.
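
    The Debye-Hückel approximation linearizes the Poisson-Boltzmann equation and is accurate only for small wall potentials, which is the limitation the abstract refers to. A hedged sketch of that comparison for a flat wall and a symmetric electrolyte (dimensionless potentials scaled by kT/ze, distances in Debye lengths, standard Gouy-Chapman solution rather than the paper's 2D numerical solver):

      import numpy as np

      def psi_debye_huckel(y, zeta, kappa):
          """Linearized Poisson-Boltzmann (Debye-Hueckel) potential near a flat wall."""
          return zeta * np.exp(-kappa * y)

      def psi_gouy_chapman(y, zeta, kappa):
          """Full nonlinear flat-wall solution for a symmetric electrolyte."""
          return 4.0 * np.arctanh(np.tanh(zeta / 4.0) * np.exp(-kappa * y))

      y = np.linspace(0.0, 5.0, 6)          # distances in Debye lengths (kappa = 1)
      for zeta in (1.0, 4.0):               # low versus high dimensionless zeta potential
          dh = psi_debye_huckel(y, zeta, 1.0)
          gc = psi_gouy_chapman(y, zeta, 1.0)
          print(f"zeta = {zeta}: max relative deviation = {np.max(np.abs(dh - gc) / zeta):.3f}")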

  11. Macromolecular Origins of Harmonics Higher than the Third in Large-Amplitude Oscillatory Shear Flow

    NASA Astrophysics Data System (ADS)

    Giacomin, Alan; Jbara, Layal; Gilbert, Peter; Chemical Engineering Department Team

    2016-11-01

    In 1935, Andrew Gemant conceived of the complex viscosity, a rheological material function measured by "jiggling" an elastic liquid in oscillatory shear. This test reveals information about both the viscous and elastic properties of the liquid, and about how these properties depend on frequency. The test gained popularity with chemists when John Ferry perfected instruments for measuring both the real and imaginary parts of the complex viscosity. In 1958, Cox and Merz discovered that the steady shear viscosity curve was easily deduced from the magnitude of the complex viscosity, and today oscillatory shear is the single most popular rheological property measurement. With oscillatory shear, we can control two things: the frequency (Deborah number) and the shear rate amplitude (Weissenberg number). When the Weissenberg number is large, elastic liquids respond with a shear stress over a series of odd multiples of the test frequency. In this lecture we will explore recent attempts to deepen our understanding of the physics of these higher harmonics, including especially harmonics higher than the third. Canada Research Chairs program of the Government of Canada for the Natural Sciences and Engineering Research Council of Canada (NSERC) Tier 1 Canada Research Chair in Rheology.
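
    In large-amplitude oscillatory shear the stress response of an elastic liquid contains odd multiples of the driving frequency, and the relative intensities I3/I1, I5/I1, ... quantify the nonlinearity. A minimal sketch of how those harmonics are extracted, using a synthetic stress signal with illustrative (not measured) harmonic amplitudes:

      import numpy as np

      # Synthetic LAOS stress signal: odd harmonics of the test frequency only
      omega, n_cycles, n_per_cycle = 1.0, 20, 256
      t = np.linspace(0.0, 2 * np.pi * n_cycles / omega, n_cycles * n_per_cycle, endpoint=False)
      stress = (1.00 * np.sin(omega * t)
                + 0.15 * np.sin(3 * omega * t + 0.4)
                + 0.03 * np.sin(5 * omega * t + 0.9))

      # Fourier decomposition; report intensities at 1, 3, 5, 7 times the test frequency
      spectrum = np.abs(np.fft.rfft(stress)) / (len(t) / 2)
      freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0]) * 2 * np.pi   # angular frequencies
      i1 = spectrum[np.argmin(np.abs(freqs - omega))]
      for k in (1, 3, 5, 7):
          idx = np.argmin(np.abs(freqs - k * omega))
          print(f"I_{k}/I_1 = {spectrum[idx] / i1:.3f}")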

  12. Influence of polymer additive on flow past a hydrofoil: A numerical study

    NASA Astrophysics Data System (ADS)

    Xiong, Yongliang; Peng, Sai; Yang, Dan; Duan, Juan; Wang, Limin

    2018-01-01

    Flows of dilute polymer solutions past a hydrofoil (NACA0012) are examined by direct numerical simulation to investigate the modification of the wake pattern due to the addition of polymer. The influence of the polymer additive is modeled by the FENE-P model in order to capture the non-linear elasticity and finite extensibility of the polymer macromolecules. Simulations were carried out at a Reynolds number of 1000 with the angle of attack varying from 0° to 20°. The results show that the polymer's influence on the flow past the hydrofoil exhibits different flow regimes. In general, the addition of polymer modifies the wake patterns for all angles of attack in this study. Consequently, both the drag and lift forces change as the Weissenberg number increases, with the drag of the hydrofoil enhanced at small angles of attack and reduced at large angles of attack. As the Weissenberg number increases, the two attached recirculation bubbles or two columns of shedding vortices downstream tend to become symmetric, and the polymer tends to make the flow less sensitive to variations of the angle of attack.
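
    The FENE-P closure used here (and in several records below) computes the polymer stress from the conformation tensor through the Peterlin function, which diverges as the molecules approach their maximum extension. A hedged sketch of one common convention (normalizations of the Peterlin function differ between papers), with illustrative parameter values:

      import numpy as np

      def fene_p_stress(A, eta_p, lam, L2):
          """Polymer stress for the FENE-P model (one common convention).

          A     : (3, 3) conformation tensor (dimensionless, identity at equilibrium)
          eta_p : polymer contribution to the viscosity
          lam   : polymer relaxation time
          L2    : square of the maximum extensibility L
          tau_p = (eta_p / lam) * (f(tr A) * A - I), with f(trA) = L2 / (L2 - tr A).
          """
          f = L2 / (L2 - np.trace(A))
          return (eta_p / lam) * (f * A - np.eye(3))

      # Weakly and strongly stretched states (isotropic for simplicity)
      for trA in (3.5, 80.0):
          A = np.eye(3) * (trA / 3.0)
          print(np.round(fene_p_stress(A, eta_p=1.0, lam=1.0, L2=100.0), 3))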

  13. Modification of a Turbulent Boundary Layer within a Homogeneous Concentration of Drag reducing Polymer Solution

    NASA Astrophysics Data System (ADS)

    Farsiani, Yasaman; Elbing, Brian

    2017-11-01

    High molecular weight polymer solutions in wall-bounded flows can reduce the local skin friction by as much as 80%. External flow studies have typically focused on injection of polymer within a developing turbulent boundary layer (TBL), allowing the concentration and drag reduction level to evolve with downstream distance. Modification of the log-law region of the TBL is directly related to drag reduction, but recent results suggest that the exact behavior depends on flow and polymer properties. The Weissenberg number and the viscosity ratio (ratio of solvent viscosity to the zero-shear viscosity) are concentration dependent, so the current study uses a polymer ocean (i.e. a homogeneous concentration of polymer solution) with a developing TBL to eliminate uncertainty related to polymer properties. The near-wall modified TBL velocity profiles are acquired with particle image velocimetry. In the current presentation the mean velocity profiles and the corresponding flow (Reynolds number) and polymer (Weissenberg number, viscosity ratio, and length ratio) properties are reported. Note that the impact of polymer degradation on molecular weight will also be quantified and accounted for when estimating polymer properties. This work was supported by NSF Grant 1604978.

  14. Boundary layer flow of MHD tangent hyperbolic nanofluid over a stretching sheet: A numerical investigation

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Hussain, Arif; Malik, M. Y.; Salahuddin, T.; Khan, Farzana

    This article presents the two-dimensional flow of an MHD hyperbolic tangent fluid with nanoparticles towards a stretching surface. The mathematical modelling of the current flow analysis yields a nonlinear set of partial differential equations, which are then reduced to ordinary differential equations by using suitable scaling transforms. The resulting equations are solved by using the shooting technique. The behaviour of the involved physical parameters (Weissenberg number We, Hartmann number M, Prandtl number Pr, Brownian motion parameter Nb, Lewis number Le and thermophoresis number Nt) on velocity, temperature and concentration is interpreted in detail. Additionally, the local skin friction, local Nusselt number and local Sherwood number are computed and analyzed. It is found that the Weissenberg number and Hartmann number decelerate the fluid motion, while Brownian motion and thermophoresis both enhance the fluid temperature. The local Sherwood number is an increasing function, whereas the Nusselt number is a decreasing function, of the Brownian motion parameter Nb, Prandtl number Pr, thermophoresis parameter Nt and Lewis number Le. Additionally, the computed results are compared with the existing literature to validate the accuracy of the solution; the present results closely resemble the reported data.

  15. Numerical study of unsteady Williamson fluid flow and heat transfer in the presence of MHD through a permeable stretching surface

    NASA Astrophysics Data System (ADS)

    Bibi, Madiha; Khalil-Ur-Rehman; Malik, M. Y.; Tahir, M.

    2018-04-01

    In the present article, the unsteady flow field characteristics of the Williamson fluid model are explored. Nanosized particles are suspended in the flow regime, which interacts with a magnetic field. The fluid flow is induced by a stretching permeable surface. The flow model is governed by coupled partial differential equations, which are converted into ordinary differential equations and posed as an initial value problem; the shooting method is then used to find a numerical solution. The mathematical modeling yields the physical parameters, namely the Weissenberg number, the Prandtl number, the unsteadiness parameter, the magnetic parameter, the mass transfer parameter, the Lewis number, the thermophoresis parameter and the Brownian motion parameter. It is found that the Williamson fluid velocity, temperature and nanoparticle concentration are decreasing functions of the unsteadiness parameter.

  16. A Galerkin least squares approach to viscoelastic flow.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Rekha R.; Schunk, Peter Randall

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.

  17. The Sedimentation of Particles under Orthogonal Shear in Viscoelastic Fluids

    NASA Astrophysics Data System (ADS)

    Murch, William L.; Krishnan, Sreenath; Shaqfeh, Eric S. G.

    2016-11-01

    Many engineering applications, including oil and gas recovery, require the suspension of particles in viscoelastic fluids during fluid transport and processing. A topic of specific importance involves such particle suspensions experiencing an applied shear flow in a direction perpendicular to gravity (referred to as orthogonal shear). Previously, it has been shown that particle sedimentation coupled with an orthogonal shear flow can reduce the particle settling rate in elastic fluids. The underlying mechanism of this enhanced coupling drag is not fully understood, particularly at finite Weissenberg numbers. This talk examines the role of fluid elasticity on a single, non-Brownian, rigid sphere settling in orthogonal shear using experiments and numerical simulations. New experiments were performed in a Taylor-Couette flow cell using Boger fluids to study the coupling drag as a function of the shear and sedimentation Weissenberg numbers as well as particle confinement. The elastic effect was also studied with fully 3D simulations of flow past a rigid sphere, using the FENE-P constitutive model to describe the polymeric fluid rheology. These simulations show good agreement with the experiments and allow for further insight into the mechanism of elasticity-enhanced drag. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship.

  18. Statistics of polymer extensions in turbulent channel flow.

    PubMed

    Bagheri, Faranggis; Mitra, Dhrubaditya; Perlekar, Prasad; Brandt, Luca

    2012-11-01

    We present direct numerical simulations of turbulent channel flow with passive Lagrangian polymers. To understand the polymer behavior we investigate the behavior of infinitesimal line elements and calculate the probability distribution function (PDF) of finite-time Lyapunov exponents and, from them, the corresponding Cramer's function for the channel flow. We study the statistics of polymer elongation for both the Oldroyd-B model (for Weissenberg number Wi<1) and the FENE model. We use the location of the minima of the Cramer's function to define the Weissenberg number precisely, such that we observe the coil-stretch transition at Wi ≈ 1. We find agreement with earlier analytical predictions for the PDF of polymer extensions made by Balkovsky, Fouxon, and Lebedev [Phys. Rev. Lett. 84, 4765 (2000)] for linear polymers (Oldroyd-B model) with Wi <1 and by Chertkov [Phys. Rev. Lett. 84, 4761 (2000)] for the nonlinear FENE-P model of polymers. For Wi >1 (FENE model) the polymers are significantly more stretched near the wall than at the center of the channel, where the flow is closer to homogeneous isotropic turbulence. Furthermore, near the wall the polymers show a strong tendency to orient along the streamwise direction of the flow, but near the center line the statistics of orientation of the polymers are consistent with analogous results obtained recently in homogeneous and isotropic flows.

  19. Brownian dynamics simulations of polyelectrolyte adsorption in shear flow with hydrodynamic interaction

    NASA Astrophysics Data System (ADS)

    Hoda, Nazish; Kumar, Satish

    2007-12-01

    The adsorption of single polyelectrolyte molecules in shear flow is studied using Brownian dynamics simulations with hydrodynamic interaction (HI). Simulations are performed with bead-rod and bead-spring chains, and electrostatic interactions are incorporated through a screened Coulombic potential, with excluded volume accounted for by the repulsive part of a Lennard-Jones potential. A correction to the Rotne-Prager-Yamakawa tensor is derived that accounts for the presence of a planar wall. The simulations show that migration away from an uncharged wall, which is due to bead-wall HI, is enhanced by increases in the strength of flow and intrachain electrostatic repulsion, consistent with kinetic theory predictions. When the wall and polyelectrolyte are oppositely charged, chain behavior depends on the strength of electrostatic screening. For strong screening, chains get depleted from a region close to the wall and the thickness of this depletion layer scales as N^(1/3) Wi^(2/3) at high Wi, where N is the chain length and Wi is the Weissenberg number. At intermediate screening, bead-wall electrostatic attraction competes with bead-wall HI, and it is found that there is a critical Weissenberg number for desorption which scales as N^(-1/2) κ^(-3) (l_B|σq|)^(3/2), where κ is the inverse screening length, l_B is the Bjerrum length, σ is the surface charge density, and q is the bead charge. When the screening is weak, adsorbed chains are observed to align in the vorticity direction at low shear rates due to the effects of repulsive intramolecular interactions. At higher shear rates, the chains align in the flow direction. The simulation method and results of this work are expected to be useful for a number of applications in biophysics and materials science in which polyelectrolyte adsorption plays a key role.
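
    The interaction model described above combines a screened (Debye-Hückel) Coulombic pair potential with the purely repulsive part of a Lennard-Jones potential. A minimal sketch of those two ingredients, assuming the common WCA cut-and-shift form for the repulsion and illustrative parameter values:

      import numpy as np

      def screened_coulomb(r, lB, q1, q2, kappa):
          """Screened Coulomb (Debye-Hueckel) pair energy in units of kT:
             u = lB * q1 * q2 * exp(-kappa*r) / r."""
          return lB * q1 * q2 * np.exp(-kappa * r) / r

      def wca_repulsion(r, epsilon=1.0, sigma=1.0):
          """Purely repulsive (WCA) part of the Lennard-Jones potential, cut at 2^(1/6)*sigma."""
          rc = 2.0 ** (1.0 / 6.0) * sigma
          sr6 = (sigma / r) ** 6
          return np.where(r < rc, 4.0 * epsilon * (sr6 ** 2 - sr6) + epsilon, 0.0)

      r = np.linspace(0.9, 3.0, 5)                       # bead separations, illustrative
      print(screened_coulomb(r, lB=0.7, q1=-1.0, q2=-1.0, kappa=1.0))
      print(wca_repulsion(r))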

  20. Redetermination of AgPO(3).

    PubMed

    Terebilenko, Katherina V; Zatovsky, Igor V; Ogorodnyk, Ivan V; Baumer, Vyacheslav N; Slobodyanik, Nikolay S

    2011-02-09

    Single crystals of silver(I) polyphosphate(V), AgPO(3), were prepared via a phosphoric acid melt method using a solution of Ag(3)PO(4) in H(3)PO(4). In comparison with the previous study based on single-crystal Weissenberg photographs [Jost (1961). Acta Cryst. 14, 779-784], the results were mainly confirmed, but with much higher precision and with all displacement parameters refined anisotropically. The structure is built up from two types of distorted edge- and corner-sharing [AgO(5)] polyhedra, giving rise to multidirectional ribbons, and from two types of PO(4) tetrahedra linked into meandering chains (PO(3))(n) spreading parallel to the b axis with a repeat unit of four tetrahedra. The calculated bond-valence sum value of one of the two Ag(I) ions indicates a significant strain of the structure.

  1. Biconvection flow of Carreau fluid over an upper paraboloid surface: A computational study

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Hussain, Arif; Malik, M. Y.; Salahuddin, T.

    The present article explores the physical characteristics of biconvection effects on the MHD flow of a Carreau nanofluid over the upper horizontal surface of a paraboloid of revolution in the presence of a chemical reaction. A new parameterization of the Carreau nanofluid model is introduced to obtain the governing momentum equation. Using similarity transformations, the governing partial differential equations are converted into ordinary differential equations. The resulting equations are solved computationally by using an implicit finite difference method known as the Keller box technique. Numerical solutions are obtained for the velocity, temperature, concentration, friction factor and local heat and mass transfer coefficients by varying the controlling parameters, i.e. the biconvection parameter, fluid parameter, Weissenberg number, Hartmann number, Prandtl number, Brownian motion parameter, thermophoresis parameter, Lewis number and chemical reaction parameter. The obtained results are discussed via graphs and tables.

  2. Characteristics of melting heat transfer during flow of Carreau fluid induced by a stretching cylinder.

    PubMed

    Hashim; Khan, Masood; Saleh Alshomrani, Ali

    2017-01-01

    This article provides a comprehensive analysis of energy transport by virtue of the melting process of high-temperature phase change materials. We have developed a two-dimensional model for the boundary layer flow of a non-Newtonian Carreau fluid. It is assumed that the flow is caused by the stretching of a cylinder in the axial direction with a linear velocity. Adequate local similarity transformations are employed to obtain a set of non-linear ordinary differential equations which govern the flow problem. Numerical solutions to the resulting non-dimensional boundary value problem are computed via the fifth-order Runge-Kutta-Fehlberg integration scheme. The solutions are obtained for both zero and non-zero curvature parameters, i.e., for flow over a flat plate or flow over a cylinder. The flow and heat transfer attributes are found to be influenced in an intricate manner by the melting parameter, the curvature parameter, the Weissenberg number, the power law index and the Prandtl number. We determined that one of the possible ways to boost the fluid velocity is to increase the melting parameter. Additionally, both the velocity of the fluid and the momentum boundary layer thickness are higher in the case of flow over a stretching cylinder. As expected, the magnitude of the skin friction and the rate of heat transfer decrease with increasing values of the melting parameter and the Weissenberg number.
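
    The Carreau model underlying this record (and several of the nanofluid studies above) interpolates between a zero-shear and an infinite-shear viscosity through a relaxation time, which is also the time scale from which the Weissenberg numbers quoted in these abstracts are built. A minimal sketch with illustrative parameter values:

      import numpy as np

      def carreau_viscosity(gamma_dot, eta0, eta_inf, lam, n):
          """Carreau model: eta = eta_inf + (eta0 - eta_inf)*[1 + (lam*gamma_dot)^2]^((n-1)/2).
          A Weissenberg number is then typically formed as lam times a characteristic
          rate, e.g. We = lam*U/L for a stretching flow."""
          return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gamma_dot) ** 2) ** ((n - 1.0) / 2.0)

      shear_rates = np.logspace(-2, 3, 6)          # 1/s, illustrative
      print(carreau_viscosity(shear_rates, eta0=10.0, eta_inf=0.1, lam=1.0, n=0.4))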

  3. Study of RpI22 in MDS and AML

    DTIC Science & Technology

    2016-12-01

    developed significantly larger and markedly more vascularized thymic tumors than those observed in Rpl22+/+ control mice. But, unlike Rpl22+/+ or Rpl22+... hypoxia (31). Alternatively, we did not observe obvious necrosis in the center of the large thymic tumors from Rpl22-deficient mice, suggesting Rpl22... Orthop Res 2004;22:1175-81. 32. Loeffler S, Fayard B, Weis J, Weissenberger J. Interleukin-6 induces transcriptional activation of vascular endothelial

  4. Redetermination of AgPO3

    PubMed Central

    Terebilenko, Katherina V.; Zatovsky, Igor V.; Ogorodnyk, Ivan V.; Baumer, Vyacheslav N.; Slobodyanik, Nikolay S.

    2011-01-01

    Single crystals of silver(I) polyphosphate(V), AgPO3, were prepared via a phosphoric acid melt method using a solution of Ag3PO4 in H3PO4. In comparison with the previous study based on single-crystal Weissenberg photographs [Jost (1961). Acta Cryst. 14, 779–784], the results were mainly confirmed, but with much higher precision and with all displacement parameters refined anisotropically. The structure is built up from two types of distorted edge- and corner-sharing [AgO5] polyhedra, giving rise to multidirectional ribbons, and from two types of PO4 tetrahedra linked into meandering chains (PO3)n spreading parallel to the b axis with a repeat unit of four tetrahedra. The calculated bond-valence sum value of one of the two AgI ions indicates a significant strain of the structure. PMID:21522230

  5. Numerical simulation of buoyancy peristaltic flow of Johnson-Segalman nanofluid in an inclined channel

    NASA Astrophysics Data System (ADS)

    Hayat, T.; Ayub, Sadia; Alsaedi, Ahmed; Ahmad, Bashir

    2018-06-01

    This study addresses mixed convection in peristaltic flow in an inclined channel. The relevant flow problem is developed for an MHD Johnson-Segalman nanofluid. Hall current and thermal radiation are discussed. The channel boundaries are compliant in nature. Slip effects for velocity, temperature and concentration are examined. The long wavelength concept is employed. Variations of the velocity, temperature, concentration, heat transfer coefficient and streamlines with the prominent parameters are obtained via a built-in numerical approach. The velocity shows a significant decline for a larger local temperature Grashof number. Heat transfer slows down for increasing thermophoresis and thermal slip parameters. An increase in bolus size is reported for higher Weissenberg numbers.

  6. Active and hibernating turbulence in drag-reducing plane Couette flows

    NASA Astrophysics Data System (ADS)

    Pereira, Anselmo S.; Mompean, Gilmar; Thais, Laurent; Soares, Edson J.; Thompson, Roney L.

    2017-08-01

    In this paper we analyze the active and hibernating turbulence in drag-reducing plane Couette flows using direct numerical simulations of viscoelastic fluids described by the finitely extensible nonlinear elastic model with the Peterlin approximation (FENE-P). The polymer-turbulence interactions are studied from an energetic standpoint for a range of Weissenberg numbers (from 2 up to 30), fixing the Reynolds number based on the plate velocities at 4000, the viscosity ratio at 0.9, and the maximum polymer molecule extensibility at 100. The qualitative picture that emerges from this investigation is a cyclic mechanism of energy exchange between the polymers and turbulence that drives the flow through an oscillatory behavior.

  7. Mechanism of polymer drag reduction using a low-dimensional model.

    PubMed

    Roy, Anshuman; Morozov, Alexander; van Saarloos, Wim; Larson, Ronald G

    2006-12-08

    Using a retarded-motion expansion to describe the polymer stress, we derive a low-dimensional model to understand the effects of polymer elasticity on the self-sustaining process that maintains the coherent wavy streamwise vortical structures underlying wall-bounded turbulence. Our analysis shows that at small Weissenberg numbers, Wi, elasticity enhances the coherent structures. At higher Wi, however, polymer stresses suppress the streamwise vortices (rolls) by calming down the instability of the streaks that regenerates the rolls. We show that this behavior can be attributed to the nonmonotonic dependence of the biaxial extensional viscosity on Wi, and identify it as the key rheological property controlling drag reduction.

  8. Crystal structure of the heptamolybdate(VI) (paramolybdate) ion, [Mo7O24]6-, in the ammonium and potassium tetrahydrate salts

    USGS Publications Warehouse

    Evans, H.T.; Gatehouse, B.M.; Leverett, P.

    1975-01-01

    The crystal structures of the isomorphous salts MI6[Mo7O24]·4H2O (M = NH4 or K) have been refined by three-dimensional X-ray diffraction methods. Unit cell dimensions of these monoclinic compounds, space group P21/c with Z = 4, are, ammonium salt: a = 8.3934 ± 0.0008, b = 36.1703 ± 0.0045, c = 10.4715 ± 0.0011 Å, β = 115.958° ± 0.008°; and potassium salt: a = 8.15 ± 0.02, b = 35.68 ± 0.1, c = 10.30 ± 0.02 Å, β = 115.2° ± 0.2°. By use of multiple Weissenberg patterns, 8197 intensity data (Mo-Kα radiation) for the ammonium compound and 2178 (Cu-Kα radiation) for the potassium compound were estimated visually and used to test and refine Lindqvist's proposed structure in the space group P21/c. Lindqvist's structure was confirmed and the full-matrix least-squares isotropic refinement led to R = 0.076 (ammonium) and 0.120 (potassium), with direct unambiguous location of the cations and water molecules in the potassium compound.

  9. Flow induced streamer formation in particle laden complex flows

    NASA Astrophysics Data System (ADS)

    Debnath, Nandini; Hassanpourfard, Mahtab; Ghosh, Ranajay; Trivedi, Japan; Thundat, Thomas; Kumar, Aloke

    2016-11-01

    We study the combined flow of a polyacrylamide (PAM) solution with polystyrene (PS) nanoparticles through a microfluidic device containing an array of micropillars. The flow is characterized by a very low Reynolds number (Re << 1). We find that, on exceeding a critical Weissenberg number (Wi >= 20), PS nanoparticles localize near the pillar walls to form thin, slender, string-like structures, which we call 'streamers' due to their morphology. Post-formation, these streamers show significant viscous behavior on short observational time scales, while at longer observational time scales the elastic response dominates. Our abiotic streamers could provide a framework for understanding similar structures that often form in biological systems.

  10. Lattice Boltzmann simulation of viscoelastic flow past a confined free rotating cylinder

    NASA Astrophysics Data System (ADS)

    Xia, Yi; Zhang, Peijie; Lin, Jianzhong; Ku, Xiaoke; Nie, Deming

    2018-05-01

    To study the dynamics of a rigid body immersed in a viscoelastic fluid, an Oldroyd-B fluid flow past an eccentrically situated, freely rotating cylinder in a two-dimensional (2D) channel is simulated by a novel lattice Boltzmann method. Two distribution functions are employed, one aimed at solving the Navier-Stokes equations and the other the constitutive equation. The unified interpolation bounce-back scheme is adopted to treat the moving curved boundary of the cylinder, and the novel Galilean-invariant momentum exchange method is utilized to obtain the hydrodynamic force and torque exerted on the cylinder. Results show that the center-fixed cylinder rotates in the direction opposite to that of a cylinder immersed in a Newtonian fluid, which generates a centerline-oriented lift force according to the Magnus effect. The cylinder's eccentricity, flow inertia, fluid elasticity and viscosity affect the rotation of the cylinder in different ways. The cylinder rotates more rapidly when located farther from the centerline, and slows down when it is too close to the wall. The rotation frequency decreases with increasing Reynolds number, and a larger rotation frequency corresponds to a larger Weissenberg number and a smaller viscosity ratio, indicating that fluid elasticity and low solvent viscosity accelerate the flow-induced rotation of the cylinder.
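
    In lattice Boltzmann schemes of this kind, the hydrodynamic distribution function relaxes towards a local equilibrium built from the macroscopic density and velocity. A minimal sketch of the standard second-order D2Q9 equilibrium (not the paper's full two-distribution viscoelastic scheme), with illustrative values:

      import numpy as np

      # D2Q9 lattice: discrete velocities and weights for the hydrodynamic populations
      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

      def equilibrium(rho, u, cs2=1.0 / 3.0):
          """Second-order D2Q9 equilibrium distribution for density rho and velocity u."""
          cu = c @ u                       # projection of u onto each lattice velocity
          usq = u @ u
          return rho * w * (1.0 + cu / cs2 + 0.5 * (cu / cs2) ** 2 - 0.5 * usq / cs2)

      feq = equilibrium(rho=1.0, u=np.array([0.05, 0.01]))
      print(feq, feq.sum())                # the zeroth moment recovers rho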

  11. Numerical study of MHD micropolar carreau nanofluid in the presence of induced magnetic field

    NASA Astrophysics Data System (ADS)

    Atif, S. M.; Hussain, S.; Sagheer, M.

    2018-03-01

    The heat and mass transfer of a magnetohydrodynamic micropolar Carreau nanofluid on a stretching sheet is analyzed in the presence of an induced magnetic field. Internal heating, thermal radiation, Ohmic and viscous dissipation effects are also considered. The system of governing partial differential equations is converted into ordinary differential equations by means of a suitable similarity transformation. The resulting ordinary differential equations are then solved by the well-known shooting technique. The impact of the emerging physical parameters on the velocity, angular velocity, temperature and concentration profiles is analyzed graphically. The dimensionless velocity is enhanced for increasing Weissenberg number and power law index, while the reverse trend is observed in the thermal and concentration profiles.

  12. DNA Molecules in Microfluidic Oscillatory Flow

    PubMed Central

    Chen, Y.-L.; Graham, M.D.; de Pablo, J.J.; Jo, K.; Schwartz, D.C.

    2008-01-01

    The conformation and dynamics of a single DNA molecule undergoing oscillatory pressure-driven flow in microfluidic channels are studied using Brownian dynamics simulations, accounting for hydrodynamic interactions between segments in the bulk and between the chain and the walls. Oscillatory flow provides a scenario under which the polymers may remain in the channel for an indefinite amount of time as they are stretched and migrate away from the channel walls. We show that by controlling the chain length, flow rate and oscillatory flow frequency, we are able to manipulate the chain extension and the chain migration from the channel walls. The chain stretch and the chain depletion layer thickness near the wall are found to increase as the Weissenberg number increases and as the oscillatory frequency decreases. PMID:19057656

  13. A Phase of Liposomes with Entangled Tubular Vesicles

    NASA Astrophysics Data System (ADS)

    Chiruvolu, Shivkumar; Warriner, Heidi E.; Naranjo, Edward; Idziak, Stefan H. J.; Radler, Joachim O.; Plano, Robert J.; Zasadzinski, Joseph A.; Safinya, Cyrus R.

    1994-11-01

    An equilibrium phase belonging to the family of bilayer liposomes in ternary mixtures of dimyristoylphosphatidylcholine (DMPC), water, and geraniol (a biological alcohol derived from oil-soluble vitamins that acts as a cosurfactant) has been identified. Electron and optical microscopy reveal the phase, labeled Ltv, to be composed of highly entangled tubular vesicles. In situ x-ray diffraction confirms that the tubule walls are multilamellar with the lipids in the chain-melted state. Macroscopic observations show that the Ltv phase coexists with the well-known L_4 phase of spherical vesicles and a bulk L_α phase. However, the defining characteristic of the Ltv phase is the Weissenberg rod climbing effect under shear, which results from its polymer-like entangled microstructure.

  14. Inertio-elastic mixing in a straight microchannel with side wells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Sun Ok; Cooper-White, Justin J.; School of Chemical Engineering, University of Queensland, St Lucia, 4072 QLD

    Mixing remains a challenging task in microfluidic channels because of their inherently small length scale. In this work, we propose an efficient microfluidic mixer based on the chaotic vortex dynamics of a viscoelastic flow in a straight channel with side wells. When the inertia and elasticity of a dilute polymer solution are balanced (i.e., the Reynolds number Re and Weissenberg number Wi are both on the order of 10^1), chaotic vortices appear in the side wells (inertio-elastic flow instability), enhancing the mixing of adjacent fluid streams. However, there is no chaotic vortex motion in Newtonian flows for any flow rate. Efficient mixing by such an inertio-elastic instability is found to be relevant for a wide range of Re values.

  15. Structure of the Si(111)-(5×2)-Au Surface

    NASA Astrophysics Data System (ADS)

    Abukawa, Tadashi; Nishigaya, Yoshiki

    2013-01-01

    The structure of the Si(111)-(5×2)-Au surface, one of the long-standing problems in surface science, has been solved by means of Weissenberg reflection high-energy electron diffraction. The arrangement of the Au atoms and their positions with respect to the substrate were determined from a three-dimensional Patterson function with a lateral resolution of 0.3 Å based on a large amount of diffraction data. The new structural model consists of six Au atoms in a 5×2 unit, which agrees with the recently confirmed Au coverage of 0.6 ML [I. Barke, Phys. Rev. B 79, 155301 (2009)]. The model has a distinct ×2 periodicity, and includes a Au dimer. The model is also compatible with previously obtained STM images.

  16. Drag reduction and the dynamics of turbulence in simple and complex fluids

    NASA Astrophysics Data System (ADS)

    Graham, Michael D.

    2014-10-01

    Addition of a small amount of very large polymer molecules or micelle-forming surfactants to a liquid can dramatically reduce the energy dissipation it exhibits in the turbulent flow regime. This rheological drag reduction phenomenon is widely used, for example, in the Alaska pipeline, but it is not well-understood, and no comparable technology exists to reduce turbulent energy consumption in flows of gases, in which polymers or surfactants cannot be dissolved. The most striking feature of this phenomenon is the existence of a so-called maximum drag reduction (MDR) asymptote: for a given geometry and driving force, there is a maximum level of drag reduction that can be achieved through addition of polymers. Changing the concentration, molecular weight or even the chemical structure of the additives has little to no effect on this asymptotic value. This universality is the major puzzle of drag reduction. We describe direct numerical simulations of turbulent minimal channel flow of Newtonian fluids and viscoelastic polymer solutions. Even in the absence of polymers, we show that there are intervals of "hibernating" turbulence that display very low drag as well as many other features of the MDR asymptote observed in polymer solutions. As Weissenberg number increases to moderate values the frequency of these intervals also increases, and a simple theory captures key features of the intermittent dynamics observed in the simulations. At higher Weissenberg number, these intervals are altered - for example, their duration becomes substantially longer and the instantaneous Reynolds shear stress during them becomes very small. Additionally, simulations of "edge states," dynamical trajectories that lie on the boundary between turbulent and laminar flow, display characteristics that are similar to those of hibernating turbulence and thus to the MDR asymptote, again even in the absence of polymer additives. Based on these observations, we propose a tentative unified description of rheological drag reduction. The existence of MDR-like intervals even in the absence of additives sheds light on the observed universality of MDR and may ultimately lead to new flow control approaches for improving energy efficiency in a wide range of processes.

  17. Polymer concentration and properties of elastic turbulence in a von Karman swirling flow

    NASA Astrophysics Data System (ADS)

    Jun, Yonggun; Steinberg, Victor

    2017-10-01

    We report detailed experimental studies of the statistical, scaling, and spectral properties of elastic turbulence (ET) in a von Karman swirling flow between rotating and stationary disks of polymer solutions over a wide range of polymer concentrations ϕ, from dilute to semidilute entangled. The main message of the investigation is that the variation of ϕ only weakly modifies the statistical, scaling, and spectral properties of ET in a swirling flow. The qualitative difference between dilute and semidilute unentangled versus semidilute entangled polymer solutions is found in the dependence of the critical Weissenberg number Wic of the elastic instability threshold on ϕ. The control parameter of the problem, the Weissenberg number Wi, is defined as the ratio of the nonlinear elastic stress to dissipation via linear stress relaxation and quantifies the degree of polymer stretching. The power-law scaling of the friction coefficient with Wi/Wic characterizes the ET regime, with the exponent independent of ϕ. The torque Γ and pressure p power spectra show power-law decays with well-defined exponents, whose values are independent of Wi and ϕ separately in the 100 ≤ ϕ ≤ 900 ppm and 1600 ≤ ϕ ≤ 2300 ppm ranges. Another unexpected observation is the presence of two types of boundary layers, horizontal and vertical, distinguished by their role in the energy pumping and dissipation, whose width dependence on Wi and ϕ differs drastically. In the case of the vertical boundary layer near the driving disk, the width wvv is independent of Wi/Wic and decreases linearly with ϕ/ϕ*, while the width of the horizontal boundary layer wvh is independent of ϕ/ϕ*, decreases linearly with Wi/Wic, and is about five times smaller than wvv. Moreover, these Wi and ϕ dependencies of the vertical and horizontal boundary layer widths are found to be in accordance with the inverse turbulent intensities calculated inside the boundary layers, Vθh/Vθh rms and Vθv/Vθv rms, respectively. Specifically, the dependence of Vθv/Vθv rms in the vertical boundary layer on Wi and ϕ agrees with a recent theoretical prediction [S. Belan, A. Chernych, and V. Lebedev, Boundary layer of elastic turbulence (unpublished)].

  18. The effect of viscoelasticity on the stability of a pulmonary airway liquid layer

    NASA Astrophysics Data System (ADS)

    Halpern, David; Fujioka, Hideki; Grotberg, James B.

    2010-01-01

    The lungs consist of a network of bifurcating airways that are lined with a thin liquid film. This film is a bilayer consisting of a mucus layer on top of a periciliary fluid layer. Mucus is a non-Newtonian fluid possessing viscoelastic characteristics. Surface tension induces flows within the layer, which may cause the lung's airways to close due to liquid plug formation if the liquid film is sufficiently thick. The stability of the liquid layer is also influenced by the viscoelastic nature of the liquid, which is modeled using the Oldroyd-B constitutive equation or as a Jeffreys fluid. To examine the role of mucus alone, a single layer of a viscoelastic fluid is considered. A system of nonlinear evolution equations is derived using lubrication theory for the film thickness and the film flow rate. A uniform film is initially perturbed and a normal mode analysis is carried out that shows that the growth rate g for a viscoelastic layer is larger than for a Newtonian fluid with the same viscosity. Closure occurs if the minimum core radius, Rmin(t), reaches zero within one breath. Solutions of the nonlinear evolution equations reveal that Rmin normally decreases to zero faster with increasing relaxation time parameter, the Weissenberg number We. For small values of the dimensionless film thickness parameter ɛ, the closure time, tc, increases slightly with We, while for moderate values of ɛ, ranging from 14% to 18% of the tube radius, tc decreases rapidly with We provided the solvent viscosity is sufficiently small. Viscoelasticity was found to have little effect for ɛ >0.18, indicating the strong influence of surface tension. The film thickness parameter ɛ and the Weissenberg number We also have a significant effect on the maximum shear stress on tube wall, max(τw), and thus, potentially, an impact on cell damage. Max(τw) increases with ɛ for fixed We, and it decreases with increasing We for small We provided the solvent viscosity parameter is sufficiently small. For large ɛ ≈0.2, there is no significant difference between the Newtonian flow case and the large We cases.

  19. A phase of liposomes with entangled tubular vesicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiruvolu, S.; Naranjo, E.; Warriner, H.E.

    1994-11-18

    An equilibrium phase belonging to the family of bilayer liposomes in ternary mixtures of dimyristoylphosphatidylcholine (DMPC), water, and geraniol (a biological alcohol derived from oil-soluble vitamins that acts as a cosurfactant) has been identified. Electron and optical microscopy reveal the phase, labeled Ltv, to be composed of highly entangled tubular vesicles. In situ x-ray diffraction confirms that the tubule walls are multilamellar with the lipids in the chain-melted state. Macroscopic observations show that the Ltv phase coexists with the well-known L4 phase of spherical vesicles and a bulk Lα phase. However, the defining characteristic of the Ltv phase is the Weissenberg rod climbing effect under shear, which results from its polymer-like entangled microstructure. 26 refs., 5 figs.

  20. F-actin and microtubule suspensions as indeterminate fluids.

    PubMed

    Buxbaum, R E; Dennerll, T; Weiss, S; Heidemann, S R

    1987-03-20

    The viscosity of F-actin and microtubule suspensions has been measured as a function of shear rate with a Weissenberg rheogoniometer. At shear rates of less than 1.0 per second the viscosity of suspensions of these two structural proteins is inversely proportional to shear rate. These results are consistent with previous in vivo measurements of the viscosity of cytoplasm. This power law implies that shear stress is independent of shear rate; that is, shear stress is a constant at all shear rates less than 1.0 per second. Thus the flow profile of these fluids is indeterminate, or nearly so. This flow property may explain several aspects of intracellular motility in living cells. Possible explanations for this flow property are based on a recent model for semidilute suspensions of rigid rods or a classical friction model for liquid crystals.

  1. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation.

    PubMed

    Zografos, K; Pimenta, F; Alves, M A; Oliveira, M S N

    2016-07-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field.
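
    To leading order, a constant extension rate along the centreline of a planar, constant-depth contraction follows from one-dimensional mass conservation, which gives a hyperbolic width profile; the paper's numerical optimisation refines this estimate because the actual velocity profile is not plug-like. A hedged sketch of that first estimate with illustrative dimensions:

      import numpy as np

      def hyperbolic_width(x, w0, u0, strain_rate):
          """Width of a planar (constant-depth) contraction giving, to leading order,
          a constant centreline extension rate: mass conservation u(x)*w(x) = u0*w0
          with u(x) = u0 + strain_rate*x implies w(x) = w0 / (1 + strain_rate*x/u0)."""
          return w0 / (1.0 + strain_rate * x / u0)

      x = np.linspace(0.0, 400e-6, 5)                  # streamwise position [m], illustrative
      print(hyperbolic_width(x, w0=100e-6, u0=1e-3, strain_rate=10.0))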

  2. Numerical simulation of the non-Newtonian mixing layer

    NASA Technical Reports Server (NTRS)

    Azaiez, Jalel; Homsy, G. M.

    1993-01-01

    This work is a continuing effort to advance our understanding of the effects of polymer additives on the structures of the mixing layer. In anticipation of full nonlinear simulations of the non-Newtonian mixing layer, we first examined the linear stability of the non-Newtonian mixing layer. The results of this study show that, for a fluid described by the Oldroyd-B model, viscoelasticity reduces the instability of the inviscid mixing layer in a special limit where the ratio We/Re is of order 1, with We the Weissenberg number, a measure of the elasticity of the flow, and Re the Reynolds number. In the present study, we pursue this project with numerical simulations of the non-Newtonian mixing layer. Our primary objective is to determine the effects of viscoelasticity on the roll-up structure. We also examine the origin of the numerical instabilities usually encountered in simulations of non-Newtonian fluids.

  3. The Einstein viscosity with fluid elasticity

    NASA Astrophysics Data System (ADS)

    Einarsson, Jonas; Yang, Mengfei; Shaqfeh, Eric S. G.

    2017-11-01

    We give the first correction to the suspension viscosity due to fluid elasticity for a dilute suspension of spheres in a viscoelastic medium. Our perturbation theory is valid to O(Wi^2) in the Weissenberg number Wi = γ̇λ, where γ̇ is the typical magnitude of the suspension velocity gradient, and λ is the relaxation time of the viscoelastic fluid. For shear flow we find that the suspension shear-thickens due to elastic stretching in strain 'hot spots' near the particle, despite the fact that the stress inside the particles decreases relative to the Newtonian case. We thus argue that it is crucial to correctly model the extensional rheology of the suspending medium to predict the shear rheology of the suspension. For uniaxial extensional flow we correct existing results at O(Wi), and find dramatic strain-rate thickening at O(Wi^2). We validate our theory with fully resolved numerical simulations.

  4. Einstein viscosity with fluid elasticity

    NASA Astrophysics Data System (ADS)

    Einarsson, Jonas; Yang, Mengfei; Shaqfeh, Eric S. G.

    2018-01-01

    We give the first correction to the suspension viscosity due to fluid elasticity for a dilute suspension of spheres in a viscoelastic medium. Our perturbation theory is valid to O(ϕWi²) in the particle volume fraction ϕ and the Weissenberg number Wi = γ̇λ, where γ̇ is the typical magnitude of the suspension velocity gradient and λ is the relaxation time of the viscoelastic fluid. For shear flow we find that the suspension shear-thickens due to elastic stretching in strain "hot spots" near the particle, despite the fact that the stress inside the particles decreases relative to the Newtonian case. We thus argue that it is crucial to correctly model the extensional rheology of the suspending medium to predict the shear rheology of the suspension. For uniaxial extensional flow we correct existing results at O(ϕWi), and find dramatic strain-rate thickening at O(ϕWi²). We validate our theory with fully resolved numerical simulations.
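
    The small parameters of this perturbation theory are the volume fraction ϕ and the Weissenberg number Wi = γ̇λ. A short sketch, with hypothetical numbers, of the kind of ordering check one would make before applying an O(ϕWi²) result (values are illustrative, not taken from the paper):

      # Ordering check for a dilute viscoelastic suspension (illustrative values).
      def weissenberg(shear_rate, relaxation_time):
          """Wi = gamma_dot * lambda (dimensionless)."""
          return shear_rate * relaxation_time

      phi = 0.02        # particle volume fraction (dilute)
      gamma_dot = 5.0   # typical velocity-gradient magnitude, 1/s
      lam = 0.01        # relaxation time of the suspending fluid, s

      Wi = weissenberg(gamma_dot, lam)
      print(f"Wi = {Wi:.3f}, phi*Wi^2 = {phi * Wi**2:.2e}")
      # The expansion is expected to be meaningful only when phi << 1 and
      # Wi << 1, i.e. when phi*Wi^2 is a genuinely small correction.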

  5. Heat and mass transfer of Williamson nanofluid flow yield by an inclined Lorentz force over a nonlinear stretching sheet

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Malik, M. Y.; Salahuddin, T.; Hussian, Arif.

    2018-03-01

    The present analysis explores the computational solution of a problem addressing variable viscosity and inclined Lorentz force effects on Williamson nanofluid flow over a stretching sheet. The viscosity is assumed to vary as a linear function of temperature. The governing system of PDEs is converted into nonlinear ODEs by applying suitable transformations, and computational solutions are then obtained with an efficient numerical shooting technique. The effects of the controlling parameters, i.e. stretching index, inclination angle, Hartmann number, Weissenberg number, variable viscosity parameter, mixed convection parameter, Brownian motion parameter, Prandtl number, Lewis number, thermophoresis parameter and chemical reactive species, on the concentration, temperature and velocity fields are examined. Additionally, the friction factor coefficient, Nusselt number and Sherwood number are described with the help of graphs and tables versus the flow controlling parameters.
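
    The workflow described above (similarity transformation of the PDEs, then a numerical shooting technique for the resulting ODEs) can be illustrated on a much simpler boundary-layer problem. The sketch below shoots for the classical Blasius equation f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1; it stands in for the far larger Williamson nanofluid system and is not the authors' code.

      # Minimal shooting-method sketch for a boundary-layer similarity ODE.
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      ETA_MAX = 10.0  # truncation of the semi-infinite similarity domain

      def blasius_rhs(eta, y):
          """y = [f, f', f''];  Blasius: f''' = -0.5 * f * f''."""
          f, fp, fpp = y
          return [fp, fpp, -0.5 * f * fpp]

      def residual(guess):
          """Mismatch in the far-field condition f'(inf) = 1 for a guessed f''(0)."""
          sol = solve_ivp(blasius_rhs, (0.0, ETA_MAX), [0.0, 0.0, guess],
                          rtol=1e-8, atol=1e-10)
          return sol.y[1, -1] - 1.0

      # Bracket and solve for the unknown wall curvature f''(0).
      fpp0 = brentq(residual, 0.1, 1.0)
      print(f"f''(0) = {fpp0:.5f}")   # classical Blasius value, about 0.33206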

  6. Experimental evidence of a helical, supercritical instability in pipe flow of shear thinning fluids

    NASA Astrophysics Data System (ADS)

    Picaut, L.; Ronsin, O.; Caroli, C.; Baumberger, T.

    2017-08-01

    We study experimentally the flow stability of entangled polymer solutions extruded through glass capillaries. We show that the pipe flow becomes linearly unstable beyond a critical value (Wi_c ≃ 5) of the Weissenberg number, via a supercritical bifurcation which results in a helical distortion of the extrudate. We find that the amplitude of the undulation vanishes as the aspect ratio L/R of the capillary tends to zero, and saturates for large L/R, indicating that the instability affects the whole pipe flow, rather than the contraction or exit regions. These results, when compared to previous theoretical and experimental works, lead us to argue that the nature of the instability is controlled by the level of shear thinning of the fluids. In addition, we provide strong hints that the nonlinear development of the instability is mitigated, in our system, by the gradual emergence of gross wall slip.

  7. Microfluidic converging/diverging channels optimised for homogeneous extensional deformation

    PubMed Central

    Zografos, K.; Oliveira, M. S. N.

    2016-01-01

    In this work, we optimise microfluidic converging/diverging geometries in order to produce constant strain-rates along the centreline of the flow, for performing studies under homogeneous extension. The design is examined for both two-dimensional and three-dimensional flows where the effects of aspect ratio and dimensionless contraction length are investigated. Initially, pressure driven flows of Newtonian fluids under creeping flow conditions are considered, which is a reasonable approximation in microfluidics, and the limits of the applicability of the design in terms of Reynolds numbers are investigated. The optimised geometry is then used for studying the flow of viscoelastic fluids and the practical limitations in terms of Weissenberg number are reported. Furthermore, the optimisation strategy is also applied for electro-osmotic driven flows, where the development of a plug-like velocity profile allows for a wider region of homogeneous extensional deformation in the flow field. PMID:27478523

  8. Shear-banding and superdiffusivity in entangled polymer solutions

    NASA Astrophysics Data System (ADS)

    Shin, Seunghwan; Dorfman, Kevin D.; Cheng, Xiang

    2017-12-01

    Using high-resolution confocal rheometry, we study the shear profiles of well-entangled DNA solutions under large-amplitude oscillatory shear in a rectilinear planar shear cell. With increasing Weissenberg number (Wi), we observe successive transitions from normal Newtonian linear shear profiles to wall-slip dominant shear profiles and, finally, to shear-banding profiles at high Wi. To investigate the microscopic origin of the observed shear banding, we study the dynamics of micron-sized tracers embedded in DNA solutions. Surprisingly, tracer particles in the shear frame exhibit transient superdiffusivity and strong dynamic heterogeneity. The probability distribution functions of particle displacements follow a power-law scaling at large displacements, indicating a Lévy-walk-type motion, reminiscent of tracer dynamics in entangled wormlike micelle solutions and sheared colloidal glasses. We further characterize the length and time scales associated with the abnormal dynamics of tracer particles. We hypothesize that the unusual particle dynamics arise from localized shear-induced chain disentanglement.

  9. Electro-osmotic mobility of non-Newtonian fluids

    PubMed Central

    Zhao, Cunlu; Yang, Chun

    2011-01-01

    Electrokinetically driven microfluidic devices are usually used to analyze and process biofluids, which can be classified as non-Newtonian fluids. Conventional electrokinetic theories resulting from Newtonian hydrodynamics then fail to describe the behaviors of these fluids. In this study, a theoretical analysis of the electro-osmotic mobility of non-Newtonian fluids is reported. The general Cauchy momentum equation is simplified by incorporation of the Gouy–Chapman solution to the Poisson–Boltzmann equation and the Carreau fluid constitutive model. Then a nonlinear ordinary differential equation governing the electro-osmotic velocity of Carreau fluids is obtained and solved numerically. The effects of the Weissenberg number (Wi), the surface zeta potential (ψ̄s), the power-law exponent (n), and the transitional parameter (β) on the electro-osmotic mobility are examined. It is shown that the results presented in this study for the electro-osmotic mobility of Carreau fluids are quite general, so that the electro-osmotic mobility for Newtonian fluids and power-law fluids can be obtained as two limiting cases. PMID:21503161
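
    The Carreau model referred to here interpolates between a Newtonian plateau at low shear rates and power-law thinning at high shear rates. A sketch of the standard Carreau viscosity function; the parameter names and values below are assumptions for illustration and are only loosely related to the paper's dimensionless groups Wi, n and β:

      # Standard Carreau viscosity:
      #   eta(g) = eta_inf + (eta_0 - eta_inf) * [1 + (lam*g)**2]**((n - 1)/2)
      def carreau_viscosity(shear_rate, eta_0=1.0, eta_inf=0.01, lam=0.1, n=0.5):
          return eta_inf + (eta_0 - eta_inf) * (
              1.0 + (lam * shear_rate) ** 2) ** ((n - 1) / 2)

      for g in (1e-2, 1e0, 1e2, 1e4):   # shear rate, 1/s
          print(f"gamma_dot = {g:8.2e} 1/s   eta = {carreau_viscosity(g):8.4f} Pa*s")
      # At low rates eta -> eta_0 (Newtonian limit); at high rates the log-log
      # slope approaches n - 1, i.e. power-law behaviour -- the two limiting
      # cases recovered analytically in the abstract.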

  10. Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow.

    PubMed

    Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong

    2015-11-13

    In micro-channels, the flow of viscous liquids e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction.

  11. Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow

    NASA Astrophysics Data System (ADS)

    Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong

    2015-11-01

    In micro-channels, the flow of viscous liquids e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction.

  12. Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow

    PubMed Central

    Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong

    2015-01-01

    In micro-channels, the flow of viscous liquids e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction. PMID:26563615

  13. Configurations and Dynamics of Semi-Flexible Polymers in Good and Poor Solvents

    NASA Astrophysics Data System (ADS)

    Larson, Ronald

    We develop coarse-graining procedures for determining the conformational and dynamic behavior of semi-flexible chains with and without flow using Brownian dynamics (BD) simulations that are insensitive to the degree of coarse-graining. In the absence of flow, in a poor solvent, we find three main collapsed states: torus, bundle, and globule over a range of dimensionless ratios of the three energy parameters, namely solvent-polymer surface energy, energy of polymer folds, and polymer bending energy or persistence length. A theoretical phase diagram, confirmed by BD simulations, captures the general phase behavior of a single long chain (>10 Kuhn lengths) at moderately high (order unity) dimensionless temperature, which is the ratio of thermal energy to the attractive interaction between neighboring monomers. We also find converged results for polymer conformations in shear or extensional flow in solvents of various qualities and determine scaling laws for chain dimensions at low, moderate, and high Weissenberg numbers Wi. We also derive scaling laws to describe chain dimensions and tumbling rates in these regimes.

  14. The effect of the polymer relaxation time on the nonlinear energy cascade and dissipation of statistically steady and decaying homogeneous isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Valente, Pedro C.; da Silva, Carlos B.; Pinho, Fernando T.

    2013-11-01

    We report a numerical study of statistically steady and decaying turbulence of FENE-P fluids for varying polymer relaxation times ranging from the Kolmogorov dissipation time-scale to the eddy turnover time. The total turbulent kinetic energy dissipation is shown to increase with the polymer relaxation time in both steady and decaying turbulence, implying a ``drag increase.'' If the total power input in the statistically steady case is kept equal in the Newtonian and the viscoelastic simulations, the increase in the turbulence-polymer energy transfer naturally leads to the previously reported depletion of the Newtonian, but not the overall, kinetic energy dissipation. The modifications to the nonlinear energy cascade with varying Deborah/Weissenberg numbers are quantified and their origins investigated. The authors acknowledge the financial support from Fundação para a Ciência e a Tecnologia under grant PTDC/EME-MFE/113589/2009.

  15. Large-eddy simulations of a forced homogeneous isotropic turbulence with polymer additives

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Cai, Wei-Hua; Li, Feng-Chen

    2014-03-01

    Large-eddy simulations (LES) based on the temporal approximate deconvolution model were performed for a forced homogeneous isotropic turbulence (FHIT) with polymer additives at moderate Taylor Reynolds number. Finitely extensible nonlinear elastic in the Peterlin approximation model was adopted as the constitutive equation for the filtered conformation tensor of the polymer molecules. The LES results were verified through comparisons with the direct numerical simulation results. Using the LES database of the FHIT in the Newtonian fluid and the polymer solution flows, the polymer effects on some important parameters such as strain, vorticity, drag reduction, and so forth were studied. By extracting the vortex structures and exploring the flatness factor through a high-order correlation function of velocity derivative and wavelet analysis, it can be found that the small-scale vortex structures and small-scale intermittency in the FHIT are all inhibited due to the existence of the polymers. The extended self-similarity scaling law in the polymer solution flow shows no apparent difference from that in the Newtonian fluid flow at the currently simulated ranges of Reynolds and Weissenberg numbers.

  16. Iterated Stretching of Viscoelastic Jets

    NASA Technical Reports Server (NTRS)

    Chang, Hsueh-Chia; Demekhin, Evgeny A.; Kalaidin, Evgeny

    1999-01-01

    We examine, with asymptotic analysis and numerical simulation, the iterated stretching dynamics of FENE and Oldroyd-B jets of initial radius r_0, shear viscosity ν, Weissenberg number We, retardation number S, and capillary number Ca. The usual Rayleigh instability stretches the local uniaxial extensional flow region near a minimum in jet radius into a primary filament of radius [Ca(1 − S)/We]^(1/2) r_0 between two beads. The strain rate within the filament remains constant while its radius (elastic stress) decreases (increases) exponentially in time with a long elastic relaxation time 3We(r_0²/ν). Instabilities convected from the bead relieve the tension at the necks during this slow elastic drainage and trigger a filament recoil. Secondary filaments then form at the necks from the resulting stretching. This iterated stretching is predicted to occur successively, generating high-generation filaments of radius r_n with r_n/r_0 = √2 (r_(n−1)/r_0)^(3/2), until finite-extensibility effects set in.
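
    The recursion at the end of the abstract, r_n/r_0 = √2 (r_(n−1)/r_0)^(3/2), can be iterated directly. A short sketch; the primary-filament radius used to seed it is a hypothetical value, not one taken from the paper:

      # Successive filament generations from the iterated-stretching recursion.
      import math

      x = 0.1   # hypothetical primary-filament radius ratio, r_1 / r_0
      for n in range(1, 6):
          print(f"generation {n}:  r_n / r_0 = {x:.3e}")
          x = math.sqrt(2.0) * x ** 1.5
      # The radii fall rapidly with generation number, until finite-extensibility
      # effects (not modelled here) cut the cascade off.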

  17. Interfacial instability of wormlike micellar solutions sheared in a Taylor-Couette cell

    NASA Astrophysics Data System (ADS)

    Mohammadigoushki, Hadi; Muller, Susan J.

    2014-11-01

    We report experiments on wormlike micellar solutions sheared in a custom-made Taylor-Couette (TC) cell. The computer-controlled TC cell allows us to rotate both cylinders independently. Wormlike micellar solutions containing water, CTAB, and NaNO3 with different compositions are highly elastic and exhibit shear banding. We visualized the flow field in the θ-z as well as r-z planes using multiple cameras. When subject to low shear rates, the flow is stable and azimuthal, but it becomes unstable above a certain threshold shear rate. This shear rate coincides with the onset of shear banding. Visualizing the θ-z plane shows that this instability is characterized by stationary bands equally spaced in the z direction. Increasing the shear rate results in larger wavelengths. Above a critical shear rate, experiments reveal a chaotic behavior reminiscent of elastic turbulence. We also studied the effect of ramp speed on the onset of instability and report an acceleration below which the critical Weissenberg number for the onset of instability is unaffected. Moreover, visualizations in the r-z direction reveal that the interface between the two bands undulates, with the shear bands evolving towards the outer cylinder regardless of which cylinder is rotating.

  18. Effect of solid boundaries on a motile microorganism in a viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Karimi, Alireza; Li, Gaojin; Ardekani, Arezoo

    2014-11-01

    Microorganisms swimming in viscoelastic fluids are ubiquitous in nature; this includes biofilms grown on surfaces, Helicobacter pylori colonizing in the mucus layer covering the stomach and spermatozoa swimming through cervical mucus inside the mammalian female reproductive tract. Previous studies have focused on the locomotion of microorganisms in an unbounded viscoelastic fluid. However in many situations, microorganisms interact with solid boundaries and their hydrodynamic interaction is poorly understood. In this work, we numerically study the effect of solid boundaries on the swimming behavior of an archetypal low-Reynolds number swimmer, called ``squirmer,'' in a viscoelastic fluid. A Giesekus constitutive equation is used to model both viscoelasticity and shear-thinning behavior of the background fluid. We found that the time a neutral squirmer spends in the close proximity of the wall increases with polymer relaxation time and reaches a maximum at Weissenberg number of unity. A pusher is found to be trapped near the wall in a viscoelastic fluid, but the puller is less affected. This publication was made possible, in part, with support from NSF (Grant No. CBET-1150348-CAREER) and Indiana Clinical and Translational Sciences Institute Collaboration in Biomedical/Translational Research (Grant No. TR000006) from NIH.

  19. Polymer dynamics driven by a helical filament

    NASA Astrophysics Data System (ADS)

    Balin, Andrew; Shendruk, Tyler; Zoettl, Andreas; Yeomans, Julia

    Microbial flagellates typically inhabit complex suspensions of extracellular polymeric material which can impact the swimming speed of motile microbes, filter-feeding of sessile cells, and the generation of biofilms. There is currently a need to better understand how the fundamental dynamics of polymers near active cells or flagella impacts these various phenomena. We study the hydrodynamic and steric influence of a rotating helical filament on suspended polymers using Stokesian Dynamics simulations. Our results show that as a stationary rotating helix pumps fluid along its long axis, nearby polymers migrate radially inwards and are elongated in the process. We observe that the actuation of the helix tends to increase the probability of finding polymeric material within its pervaded volume. At larger Weissenberg numbers, this accumulation of polymers within the vicinity of the helix is greater. Further, we have analysed the stochastic work performed by the helix on the polymers and we show that this quantity is positive on average and increases with polymer contour length. Our results provide a basis for understanding the microscopic interactions that govern cell dynamics in complex media. This work was supported through funding from the ERC Advanced Grant 291234 MiCE and we acknowledge EMBO funding to TNS (ALTF181-2013).

  20. Dynamics and structures of transitional viscoelastic turbulence in channel flow

    NASA Astrophysics Data System (ADS)

    Shekar, Ashwin; Wang, Sung-Ning; Graham, Michael

    2017-11-01

    Introducing a trace amount of polymer into turbulent flows can result in a substantial reduction of drag. However, the mechanism is not fully understood at high levels of drag reduction. In this work we perform direct numerical simulations (DNS) of viscoelastic channel flow turbulence using a scheme that guarantees the positive-definiteness of the polymer conformation tensor without artificial diffusion. Here we present the results of two parametric studies with the bulk Reynolds number fixed at 2000. First, the Weissenberg number (Wi) is kept at 100 and we vary the viscosity ratio (the ratio of the solvent viscosity to the total viscosity). Maximum drag reduction (MDR) is observed for viscosity ratios below 0.95. As we decrease the viscosity ratio, i.e. increase the polymer concentration, the mean velocity profile is almost invariant; however, this is accompanied by a decrease in velocity fluctuations, and the flow stays turbulent. Turbulent kinetic energy budget analysis shows that, in this parameter regime, polymer becomes the major source of velocity fluctuations, replacing the energy transfer from the mean flow. In the second study, we fix the viscosity ratio at 0.95 and vary Wi up to this regime, and present the accompanying changes in flow quantities and structures.

  1. Particle sedimentation in a sheared viscoelastic fluid

    NASA Astrophysics Data System (ADS)

    Murch, William L.; Krishnan, Sreenath; Shaqfeh, Eric S. G.; Iaccarino, Gianluca

    2017-11-01

    Particle suspensions are ubiquitous in engineered processes, biological systems, and natural settings. For an engineering application - whether the intent is to suspend and transport particles (e.g., in hydraulic fracturing fluids) or allow particles to sediment (e.g., in industrial separations processes) - understanding and prediction of the particle mobility is critical. This task is often made challenging by the complex nature of the fluid phase, for example, due to fluid viscoelasticity. In this talk, we focus on a fully 3D flow problem in a viscoelastic fluid: a settling particle with a shear flow applied in the plane perpendicular to gravity (referred to as orthogonal shear). Previously, it has been shown that an orthogonal shear flow can reduce the settling rate of particles in viscoelastic fluids. Using experiments and numerical simulations across a wide range of sedimentation and shear Weissenberg number, this talk will address the underlying physical mechanism responsible for the additional drag experienced by a rigid sphere settling in a confined viscoelastic fluid with orthogonal shear. We will then explore multiple particle effects, and discuss the implications and extensions of this work for particle suspensions. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-114747 (WLM).

  2. New developments in isotropic turbulent models for FENE-P fluids

    NASA Astrophysics Data System (ADS)

    Resende, P. R.; Cavadas, A. S.

    2018-04-01

    The evolution of viscoelastic turbulence models has been significant in recent years owing to advances in direct numerical simulation (DNS), which have made it possible to capture the evolution of viscoelastic effects in detail and to develop viscoelastic closures. New viscoelastic closures are proposed for viscoelastic fluids described by the finitely extensible nonlinear elastic-Peterlin constitutive model. One of these closures, developed in the context of isotropic turbulence models, consists of a modification of the turbulent viscosity to include an elastic effect, capable of predicting the behaviour at different drag reductions with good accuracy. Another viscoelastic closure, essential for predicting drag reduction, relates the viscoelastic term involving the velocity and conformation tensor fluctuations. The DNS data show the strong impact of this term on correctly predicting drag reduction, and for this reason a simpler closure is proposed that captures the viscoelastic behaviour with good performance. In addition, a new relation is developed to predict the drag reduction based on the trace of the conformation tensor at the wall, eliminating the need for the usual Weissenberg and Reynolds numbers, which depend on the friction velocity. This allows future developments for complex geometries.

  3. Direct numerical simulation of particle alignment in viscoelastic fluids

    NASA Astrophysics Data System (ADS)

    Hulsen, Martien; Jaensson, Nick; Anderson, Patrick

    2016-11-01

    Rigid particles suspended in viscoelastic fluids under shear can align in string-like structures in flow direction. To unravel this phenomenon, we present 3D direct numerical simulations of the alignment of two and three rigid, non-Brownian particles in a shear flow of a viscoelastic fluid. The equations are solved on moving, boundary-fitted meshes, which are locally refined to accurately describe the polymer stresses around and in between the particles. A small minimal gap size between the particles is introduced. The Giesekus model is used and the effect of the Weissenberg number, shear thinning and solvent viscosity is investigated. Alignment of two and three particles is observed. Morphology plots have been created for various combinations of fluid parameters. Alignment is mainly governed by the value of the elasticity parameter S, defined as half of the ratio between the first normal stress difference and shear stress of the suspending fluid. Alignment appears to occur above a critical value of S, which decreases with increasing shear thinning. This result, together with simulations of a shear-thinning Carreau fluid, leads us to the conclusion that normal stress differences are essential for particle alignment to occur, but it is also strongly promoted by shear thinning.

  4. Numerical study of a thermally stratified flow of a tangent hyperbolic fluid induced by a stretching cylindrical surface

    NASA Astrophysics Data System (ADS)

    Ur Rehman, Khali; Ali Khan, Abid; Malik, M. Y.; Hussain, Arif

    2017-09-01

    The effects of temperature stratification on a tangent hyperbolic fluid flow over a stretching cylindrical surface are studied. The fluid flow is modelled by taking the no-slip condition into account. The mathematical modelling of the physical problem yields a nonlinear set of partial differential equations, which are converted into ordinary differential equations. A numerical investigation is carried out to identify the effects of the involved physical parameters on the dimensionless velocity and temperature profiles. In the presence of temperature stratification it is noticed that the curvature parameter makes both the fluid velocity and the fluid temperature increase. In addition, positive variations in the thermal stratification parameter retard the fluid flow; as a result, the fluid temperature drops. The skin friction coefficient decreases for increasing values of both the power-law index and the Weissenberg number, whereas the local Nusselt number is an increasing function of the Prandtl number, but the opposite trend is found with respect to the thermal stratification parameter. The obtained results are validated by comparison with the existing literature, which supports the presently developed model.

  5. Interfacial instability of wormlike micellar solutions sheared in a Taylor-Couette cell

    NASA Astrophysics Data System (ADS)

    Mohammadigoushki, Hadi; Muller, Susan J.

    2014-10-01

    We report experiments on wormlike micellar solutions sheared in a custom-made Taylor-Couette (TC) cell. The computer-controlled TC cell allows us to rotate both cylinders independently. Wormlike micellar solutions containing water, CTAB, and NaNO3 with different compositions are highly elastic and exhibit shear banding within a range of shear rates. We visualized the flow field in the θ-z as well as r-z planes using multiple cameras. When subject to low shear rates, the flow is stable and azimuthal, but it becomes unstable above a certain threshold shear rate. This shear rate coincides with the onset of shear banding. Visualizing the θ-z plane shows that this instability is characterized by stationary bands equally spaced in the z direction. Increasing the shear rate results in larger wavelengths. Above a critical shear rate, experiments reveal a chaotic behavior reminiscent of elastic turbulence. We also studied the effect of ramp speed on the onset of instability and report an acceleration below which the critical Weissenberg number for the onset of instability is unaffected. Moreover, visualizations in the r-z direction reveal that the interface between the two bands undulates. The shear band evolves towards the outer cylinder upon increasing the shear rate, regardless of which cylinder is rotating.

  6. Reexamination of the Classical View of how Drag-Reducing Polymer Solutions Modify the Mean Velocity Profile: Baseline Results

    NASA Astrophysics Data System (ADS)

    Farsiani, Yasaman; Baade, Jacquelyne; Elbing, Brian

    2016-11-01

    Recent numerical and experimental data have shown that the classical view of how drag-reducing polymer solutions modify the mean turbulent velocity profile is incorrect. The classical view is that the log-region is unmodified from the traditional law-of-the-wall for Newtonian fluids, though shifted outward. Thus the current study reexamines the modified velocity distribution and its dependence on flow and polymer properties. Based on previous work it is expected that the behavior will depend on the Reynolds number, Weissenberg number, ratio of solvent viscosity to the zero-shear viscosity, and the ratio between the coiled and fully extended polymer chain lengths. The long-term objective for this study includes a parametric study to assess the velocity profile sensitivity to each of these parameters. This study will be performed using a custom design water tunnel, which has a test section that is 1 m long with a 15.2 cm square cross section and a nominal speed range of 1 to 10 m/s. The current presentation focuses on baseline (non-polymeric) measurements of the velocity distribution using PIV, which will be used for comparison of the polymer modified results. Preliminary polymeric results will also be presented. This work was supported by NSF Grant 1604978.

  7. Viscoelastic fluid-structure interactions between a flexible cylinder and wormlike micelle solution

    NASA Astrophysics Data System (ADS)

    Dey, Anita A.; Modarres-Sadeghi, Yahya; Rothstein, Jonathan P.

    2018-06-01

    It is well known that when a flexible or flexibly mounted structure is placed perpendicular to the flow of a Newtonian fluid, it can oscillate due to the shedding of separated vortices at high Reynolds numbers. Unlike Newtonian fluids, the flow of viscoelastic fluids can become unstable even at infinitesimal Reynolds numbers due to a purely elastic flow instability that can occur at large Weissenberg numbers. Recent work has shown that these elastic flow instabilities can drive the motion of flexible sheets. The fluctuating fluid forces exerted on the structure from the elastic flow instabilities can lead to a coupling between an oscillatory structural motion and the state of stress in the fluid flow. In this paper, we present the results of an investigation into the flow of a viscoelastic wormlike micelle solution past a flexible circular cylinder. The time variation of the flow field and the state of stress in the fluid are shown using a combination of particle image tracking and flow-induced birefringence images. The static and dynamic responses of the flexible cylinder are presented for a range of flow velocities. The nonlinear dynamics of the structural motion is studied to better understand an observed transition from a symmetric to an asymmetric structural deformation and oscillation behavior.

  8. Explicit Solvent Simulations of Friction between Brush Layers of Charged and Neutral Bottle-Brush Macromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrillo, Jan-Michael; Brown, W Michael; Dobrynin, Andrey

    2012-01-01

    We study friction between charged and neutral brush layers of bottle-brush macromolecules using molecular dynamics simulations. In our simulations the solvent molecules were treated explicitly. The deformation of the bottle-brush macromolecules under shear was studied as a function of the substrate separation and shear stress. For charged bottle-brush layers we study the effect of added salt on the brush lubricating properties to elucidate the factors responsible for energy dissipation in charged and neutral brush systems. Our simulations have shown that for both charged and neutral brush systems the main deformation mode of the bottle-brush macromolecule is associated with the backbone deformation. This deformation mode manifests itself in the backbone deformation ratio and the shear viscosity being universal functions of the Weissenberg number W. The friction coefficient and viscosity are larger for the charged bottle-brush coatings than for neutral brushes at the same separation distance, D, between substrates. The additional energy dissipation generated by brush sliding in charged bottle-brush systems is due to electrostatic coupling between bottle-brush and counterion motion. This coupling weakens as the salt concentration, cs, increases, so that the viscosity and friction coefficient approach the corresponding values obtained for neutral brush systems.

  9. Brownian dynamics of wall tethered polymers in shear flow

    NASA Astrophysics Data System (ADS)

    Lin, Tiras Y.; Saadat, Amir; Kushwaha, Amit; Shaqfeh, Eric S. G.

    2017-11-01

    The dynamics of a wall tethered polymer in shear flow is studied using Brownian dynamics. Simulations are performed with bead-spring chains, and the effect of hydrodynamic interactions (HI) is incorporated through Blake's tensor with a finite size bead correction. We characterize the configuration of the polymer as a function of the Weissenberg number by investigating the regions the polymer explores in both the flow-gradient and flow-vorticity planes. The fractional extension in the flow direction, the width in the vorticity direction, and the thickness in the gradient direction are reported as well, and these quantities are found to compare favorably with the experimental data of the literature. The cyclic motion of the polymer is demonstrated through analysis of the mean velocity field of the end bead. We characterize the collision process of each bead with the wall as a Poisson process and extract an average wall collision rate, which in general varies along the backbone of the chain. The inclusion of HI with the wall for a tethered polymer is found to reduce the average wall collision rate. We anticipate that results from this work will be directly applicable to, e.g., the design of polymer brushes or the use of DNA for making nanowires in molecular electronics. T.Y.L. is supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

  10. Brownian Dynamics Simulations of Polyelectrolyte Adsorption in Shear Flow

    NASA Astrophysics Data System (ADS)

    Panwar, Ajay

    2005-03-01

    The adsorption of polyelectrolytes onto charged surfaces often occurs in microfludic devices and can influence their operation. We employ Brownian dynamics simulations to investigate the effect of a simple shear flow on the adsorption of an isolated polyelectrolyte molecule onto an oppositely charged surface. The polyelectrolyte is modeled as a freely-jointed bead-rod chain where the total charge is distributed uniformly among all the beads, and the beads are allowed to interact with one another and the charged surface through screened Coulombic interactions. The simulations are performed by placing the chain some distance above the surface, and the adsorption behavior is studied as a function of the screening length. Specifically, we look at the components of the radius of gyration, normal and parallel to the adsorbing surface, as functions of the screening length, both in the absence and presence of the flow. We find that in the absence of flow, the chain lies flat and stretched on the adsorbing surface in the limit of weak screening, but attains free solution behavior in the limit of strong screening. In the presence of a shear flow, the chain orientation in the direction of the flow increases with increasing Weissenberg number over the entire range of screening lengths studied. We also find that increasing the strength of the shear flow leads to an increased contact of the chain with the surface compared to the case when no flow is present.

  11. The molecular structure of the isopoly complex ion, decavanadate (V10O28^6-)

    USGS Publications Warehouse

    Evans, H.T.

    1966-01-01

    The structure of the decavanadate ion V10O28^6- has been found by a determination of the crystal structure of K2Zn2V10O28·16H2O. The soluble, orange crystals are triclinic with space group P1 and have a unit cell with a = 10.778 Å, b = 11.146 Å, c = 8.774 Å, α = 104° 57′, β = 109° 3′, and γ = 65° 0′ (Z = 1). The structure was solved from a three-dimensional Patterson map based on 5143 Weissenberg-film data. The full-matrix, least-squares refinement gave R = 0.094 and an estimated standard deviation for V-O bond lengths of 0.008 Å. The unit cell contains one V10O28^6- unit, two Zn(H2O)6^2+ groups, two K+ ions, and four additional water molecules. The decavanadate ion is an isolated group of ten condensed VO6 octahedra, six in a rectangular 2 x 3 array sharing edges, and four more, two fitted in above and two below by sharing sloping edges. The structure, which is based on a sodium-chloride-like arrangement of V and O atoms, has a close relationship to other isopoly complex molybdates, niobates, and tantalates. Strong distortions in the VO6 octahedra are analogous to square-pyramid and other special coordination features known in other vanadate structures.

  12. Drag reduction in plane Couette flow of dilute polymer solutions

    NASA Astrophysics Data System (ADS)

    Liu, Nansheng; Teng, Hao; Lu, Xiyun; Khomami, Bamin

    2017-11-01

    Drag reduction (DR) in the plane Couette flow (PCF) by the addition of flexible polymers has been studied by direct numerical simulation (DNS) in this work. Special interest has been directed to exploring the similarities and differences in the DR features between the PCF and the plane Poiseuille flow (PPF), and to clarifying the effects of large-scale structures (LSSs) on the near-wall turbulence. It has been demonstrated that in the near-wall region the drag-reduced PCF shares typical DR features similar to those reported for the drag-reduced PPF (White & Mungal 2008; Graham 2014); however, in the core region intriguing differences are found between these two drag-reduced shear flows of polymeric solution. Specifically, in the core region of the drag-reduced PCF, the polymer chains are stretched substantially and absorb kinetic energy from the turbulent fluctuations. Correspondingly, peak values of the conformation tensor components Cyy and Czz occur in the core region. This finding is strikingly different from that of the drag-reduced PPF. For the drag-reduced PCF, the LSSs are found to have monotonically increasing effects on the near-wall flow as the Weissenberg number increases, while their spanwise length scale remains unchanged. This work is supported by the NSFC Grants 11272306 and 11472268 and the NSF Grant CBET0755269. This research was also supported in part by allocation of advanced computational resources on DARTER by the National Institute for Computational Sciences (NICS).

  13. On the mechanism of elasto-inertial turbulence.

    PubMed

    Dubief, Yves; Terrapon, Vincent E; Soria, Julio

    2013-11-01

    Elasto-inertial turbulence (EIT) is a new state of turbulence found in inertial flows with polymer additives. The dynamics of turbulence generated and controlled by such additives is investigated from the perspective of the coupling between polymer dynamics and flow structures. Direct numerical simulations of channel flow with Reynolds numbers ranging from 1000 to 6000 (based on the bulk and the channel height) are used to study the formation and dynamics of elastic instabilities and their effects on the flow. The flow topology of EIT is found to differ significantly from Newtonian wall-turbulence. Structures identified by positive (rotational flow topology) and negative (extensional/compressional flow topology) second invariant Qa isosurfaces of the velocity gradient are cylindrical and aligned in the spanwise direction. Polymers are significantly stretched in sheet-like regions that extend in the streamwise direction with a small upward tilt. The Qa cylindrical structures emerge from the sheets of high polymer extension, in a mechanism of energy transfer from the fluctuations of the polymer stress work to the turbulent kinetic energy. At subcritical Reynolds numbers, EIT is observed at modest Weissenberg number (Wi, the ratio of the polymer relaxation time to the viscous time scale). For supercritical Reynolds numbers, flows approach EIT at large Wi. EIT provides new insights on the nature of the asymptotic state of polymer drag reduction (maximum drag reduction), and explains the phenomenon of early turbulence, or onset of turbulence at lower Reynolds numbers than for Newtonian flows, observed in some polymeric flows.

  14. On the mechanism of elasto-inertial turbulence

    NASA Astrophysics Data System (ADS)

    Dubief, Yves; Terrapon, Vincent E.; Soria, Julio

    2013-11-01

    Elasto-inertial turbulence (EIT) is a new state of turbulence found in inertial flows with polymer additives. The dynamics of turbulence generated and controlled by such additives is investigated from the perspective of the coupling between polymer dynamics and flow structures. Direct numerical simulations of channel flow with Reynolds numbers ranging from 1000 to 6000 (based on the bulk and the channel height) are used to study the formation and dynamics of elastic instabilities and their effects on the flow. The flow topology of EIT is found to differ significantly from Newtonian wall-turbulence. Structures identified by positive (rotational flow topology) and negative (extensional/compressional flow topology) second invariant Qa isosurfaces of the velocity gradient are cylindrical and aligned in the spanwise direction. Polymers are significantly stretched in sheet-like regions that extend in the streamwise direction with a small upward tilt. The Qa cylindrical structures emerge from the sheets of high polymer extension, in a mechanism of energy transfer from the fluctuations of the polymer stress work to the turbulent kinetic energy. At subcritical Reynolds numbers, EIT is observed at modest Weissenberg number (Wi, ratio polymer relaxation time to viscous time scale). For supercritical Reynolds numbers, flows approach EIT at large Wi. EIT provides new insights on the nature of the asymptotic state of polymer drag reduction (maximum drag reduction), and explains the phenomenon of early turbulence, or onset of turbulence at lower Reynolds numbers than for Newtonian flows observed in some polymeric flows.

  15. Shear-induced clustering of Brownian colloids in associative polymer networks at moderate Péclet number

    NASA Astrophysics Data System (ADS)

    Kim, Juntae; Helgeson, Matthew E.

    2016-08-01

    We investigate shear-induced clustering and its impact on fluid rheology in polymer-colloid mixtures at moderate colloid volume fraction. By employing a thermoresponsive system that forms associative polymer-colloid networks, we present experiments of rheology and flow-induced microstructure on colloid-polymer mixtures in which the relative magnitudes of the time scales associated with relaxation of viscoelasticity and suspension microstructure are widely and controllably varied. In doing so, we explore several limits of relative magnitude of the relevant dimensionless shear rates, the Weissenberg number Wi and the Péclet number Pe. In all of these limits, we find that the fluid exhibits two distinct regimes of shear thinning at relatively low and high shear rates, in which the rheology collapses by scaling with Wi and Pe, respectively. Using three-dimensionally-resolved flow small-angle neutron scattering measurements, we observe clustering of the suspension above a critical shear rate corresponding to Pe ˜0.1 over a wide range of fluid conditions, having anisotropy with projected orientation along both the vorticity and compressional axes of shear. The degree of anisotropy is shown to scale with Pe. From this we formulate an empirical model for the shear stress and viscosity, in which the viscoelastic network stress is augmented by an asymptotic shear thickening contribution due to hydrodynamic clustering. Overall, our results elucidate the significant role of hydrodynamic interactions in contributing to shear-induced clustering of Brownian suspensions in viscoelastic liquids.
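
    The two dimensionless shear rates in this abstract compare the imposed rate with two different relaxation processes: Wi = γ̇λ uses the viscoelastic (network) relaxation time, while Pe uses the Brownian diffusion time a²/D₀ of a colloid, with D₀ from the Stokes-Einstein relation. A sketch with hypothetical material parameters (not those of the thermoresponsive system studied here):

      # Comparing Weissenberg and Peclet numbers for a polymer-colloid mixture.
      import math

      kT = 1.38e-23 * 298.0   # thermal energy at 298 K, J
      eta_s = 0.05            # medium viscosity, Pa*s (hypothetical)
      a = 0.5e-6              # colloid radius, m (hypothetical)
      lam = 0.2               # network relaxation time, s (hypothetical)

      D0 = kT / (6.0 * math.pi * eta_s * a)   # Stokes-Einstein diffusivity, m^2/s
      tau_B = a * a / D0                      # Brownian diffusion time, s

      for gamma_dot in (0.1, 1.0, 10.0):      # applied shear rate, 1/s
          Wi = gamma_dot * lam
          Pe = gamma_dot * tau_B
          print(f"gamma_dot = {gamma_dot:5.1f} 1/s   Wi = {Wi:6.2f}   Pe = {Pe:8.2f}")
      # Because lam and tau_B generally differ, the Wi-scaled (viscoelastic) and
      # Pe-scaled (hydrodynamic clustering) regimes appear at different shear
      # rates, consistent with the two shear-thinning regimes described above.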

  16. On the mechanism of elasto-inertial turbulence

    PubMed Central

    Dubief, Yves; Terrapon, Vincent E.; Soria, Julio

    2013-01-01

    Elasto-inertial turbulence (EIT) is a new state of turbulence found in inertial flows with polymer additives. The dynamics of turbulence generated and controlled by such additives is investigated from the perspective of the coupling between polymer dynamics and flow structures. Direct numerical simulations of channel flow with Reynolds numbers ranging from 1000 to 6000 (based on the bulk and the channel height) are used to study the formation and dynamics of elastic instabilities and their effects on the flow. The flow topology of EIT is found to differ significantly from Newtonian wall-turbulence. Structures identified by positive (rotational flow topology) and negative (extensional/compressional flow topology) second invariant Qa isosurfaces of the velocity gradient are cylindrical and aligned in the spanwise direction. Polymers are significantly stretched in sheet-like regions that extend in the streamwise direction with a small upward tilt. The Qa cylindrical structures emerge from the sheets of high polymer extension, in a mechanism of energy transfer from the fluctuations of the polymer stress work to the turbulent kinetic energy. At subcritical Reynolds numbers, EIT is observed at modest Weissenberg number (Wi, ratio polymer relaxation time to viscous time scale). For supercritical Reynolds numbers, flows approach EIT at large Wi. EIT provides new insights on the nature of the asymptotic state of polymer drag reduction (maximum drag reduction), and explains the phenomenon of early turbulence, or onset of turbulence at lower Reynolds numbers than for Newtonian flows observed in some polymeric flows. PMID:24170968

  17. Single polymer dynamics under large amplitude oscillatory extension

    NASA Astrophysics Data System (ADS)

    Zhou, Yuecheng; Schroeder, Charles M.

    2016-09-01

    Understanding the conformational dynamics of polymers in time-dependent flows is of key importance for controlling materials properties during processing. Despite this importance, however, it has been challenging to study polymer dynamics in controlled time-dependent or oscillatory extensional flows. In this work, we study the dynamics of single polymers in large-amplitude oscillatory extension (LAOE) using a combination of experiments and Brownian dynamics (BD) simulations. Two-dimensional LAOE flow is generated using a feedback-controlled stagnation point device known as the Stokes trap, thereby generating an oscillatory planar extensional flow with alternating principal axes of extension and compression. Our results show that polymers experience periodic cycles of compression, reorientation, and extension in LAOE, and dynamics are generally governed by a dimensionless flow strength (Weissenberg number Wi) and dimensionless frequency (Deborah number De). Single molecule experiments are compared to BD simulations with and without intramolecular hydrodynamic interactions (HI) and excluded volume (EV) interactions, and good agreement is obtained across a range of parameters. Moreover, transient bulk stress in LAOE is determined from simulations using the Kramers relation, which reveals interesting and unique rheological signatures for this time-dependent flow. We further construct a series of single polymer stretch-flow rate curves (defined as single molecule Lissajous curves) as a function of Wi and De, and we observe qualitatively different dynamic signatures (butterfly, bow tie, arch, and line shapes) across the two-dimensional Pipkin space defined by Wi and De. Finally, polymer dynamics spanning from the linear to nonlinear response regimes are interpreted in the context of accumulated fluid strain in LAOE.
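
    In LAOE the two control parameters separate flow strength from its rate of change: with an oscillatory extension rate of amplitude ε̇₀ and angular frequency ω, Wi = λε̇₀ and De = λω. The sketch below uses that conventional construction with hypothetical values; the specific waveform and symbols are assumptions rather than details quoted from the paper:

      # Dimensionless groups and accumulated strain over one LAOE cycle.
      import numpy as np

      lam = 1.0      # polymer relaxation time, s (hypothetical)
      eps0 = 2.0     # extension-rate amplitude, 1/s (hypothetical)
      omega = 0.5    # oscillation frequency, rad/s (hypothetical)

      Wi = lam * eps0    # dimensionless flow strength
      De = lam * omega   # dimensionless frequency
      print(f"Wi = {Wi:.2f}, De = {De:.2f}")

      t = np.linspace(0.0, 2.0 * np.pi / omega, 1000)    # one period
      strain_rate = eps0 * np.cos(omega * t)
      strain = np.cumsum(strain_rate) * (t[1] - t[0])    # accumulated strain
      print(f"peak accumulated strain per half cycle = {strain.max():.2f}")
      # The fluid accumulates roughly eps0/omega = Wi/De strain units per half
      # cycle, the quantity used above to interpret the single-molecule dynamics.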

  18. Fourier decomposition of polymer orientation in large-amplitude oscillatory shear flow

    DOE PAGES

    Giacomin, A. J.; Gilbert, P. H.; Schmalzer, A. M.

    2015-03-19

    In our previous work, we explored the dynamics of a dilute suspension of rigid dumbbells as a model for polymeric liquids in large-amplitude oscillatory shear flow, a flow experiment that has gained a significant following in recent years. We chose rigid dumbbells since these are the simplest molecular model to give higher harmonics in the components of the stress response. We derived the expression for the dumbbell orientation distribution, and then we used this function to calculate the shear stress response and normal stress difference responses in large-amplitude oscillatory shear flow. In this paper, we deepen our understanding of the polymer motion underlying large-amplitude oscillatory shear flow by decomposing the orientation distribution function into its first five Fourier components (the zeroth, first, second, third, and fourth harmonics). We use three-dimensional images to explore each harmonic of the polymer motion. Our analysis includes the three most important cases: (i) nonlinear steady shear flow (where the Deborah number λω is zero and the Weissenberg number λγ̇₀ is above unity), (ii) nonlinear viscoelasticity (where both λω and λγ̇₀ exceed unity), and (iii) linear viscoelasticity (where λω exceeds unity and where λγ̇₀ approaches zero). We learn that the polymer orientation distribution is spherical in the linear viscoelastic regime, and otherwise tilted and peanut-shaped. We find that the peanut-shaping is mainly caused by the zeroth harmonic, and the tilting by the second. The first, third, and fourth harmonics of the orientation distribution make only slight contributions to the overall polymer motion.

  19. Numerical simulation for heat transfer performance in unsteady flow of Williamson fluid driven by a wedge-geometry

    NASA Astrophysics Data System (ADS)

    Hamid, Aamir; Hashim; Khan, Masood

    2018-06-01

    The main concern of this communication is the two-layer flow of a non-Newtonian rheological fluid past a wedge-shaped geometry. One notable aspect of this article is the mathematical formulation for the two-dimensional flow of a Williamson fluid that incorporates the effect of an infinite-shear-rate viscosity. The impact of the heat transfer mechanism on the time-dependent flow field is also studied. First, suitable non-dimensional variables are employed to transform the time-dependent governing flow equations into a system of non-linear ordinary differential equations. The resulting conservation equations are numerically integrated, subject to physically suitable boundary conditions, with the aid of the Runge-Kutta-Fehlberg integration procedure. The effects of the pertinent parameters, such as the moving wedge parameter, wedge angle parameter, local Weissenberg number, unsteadiness parameter and Prandtl number, on the non-dimensional velocity and temperature distributions are evaluated. In addition, the numerical values of the local skin friction coefficient and the local Nusselt number are compared and presented in tables. The outcomes of this study indicate that the rate of heat transfer increases with growth of both the wedge angle parameter and the unsteadiness parameter. Moreover, a substantial rise in the fluid velocity is observed with enhancement of the viscosity ratio parameter, while the opposite trend holds for the non-dimensional temperature field. A comparison is presented between the current study and previously published works, and the results are found to be in excellent agreement. Finally, the main findings of this article are highlighted in the last section.

  20. Flow of wormlike micellar solutions around confined microfluidic cylinders.

    PubMed

    Zhao, Ya; Shen, Amy Q; Haward, Simon J

    2016-10-26

    Wormlike micellar (WLM) solutions are frequently used in enhanced oil and gas recovery applications in porous rock beds where complex microscopic geometries result in mixed flow kinematics with strong shear and extensional components. Experiments with WLM solutions through model microfluidic porous media have revealed a variety of complex flow phenomena, including the formation of stable gel-like structures known as a Flow-Induced Structured Phase (FISP), which undoubtedly play an important role in applications of WLM fluids, but are still poorly understood. A first step in understanding flows of WLM fluids through porous media can be made by examining the flow around a single micro-scale cylinder aligned on the flow axis. Here we study flow behavior of an aqueous WLM solution consisting of cationic surfactant cetyltrimethylammonium bromide (CTAB) and a stable hydrotropic salt 3-hydroxy naphthalene-2-carboxylate (SHNC) in microfluidic devices with three different cylinder blockage ratios, β. We observe a rich sequence of flow instabilities depending on β as the Weissenberg number (Wi) is increased to large values while the Reynolds number (Re) remains low. Instabilities upstream of the cylinder are associated with high stresses in fluid that accelerates into the narrow gap between the cylinder and the channel wall; vortex growth upstream is reminiscent of that seen in microfluidic contraction geometries. Instability downstream of the cylinder is associated with stresses generated at the trailing stagnation point and the resulting flow modification in the wake, coupled with the onset of time-dependent flow upstream and the asymmetric division of flow around the cylinder.

  1. Effect of solid boundaries on swimming dynamics of microorganisms in a viscoelastic fluid

    PubMed Central

    Li, G. -J.; Karimi, A.

    2015-01-01

    We numerically study the effect of solid boundaries on the swimming behavior of a motile microorganism in viscoelastic media. Understanding the swimmer-wall hydrodynamic interactions is crucial to elucidate the adhesion of bacterial cells to nearby substrates which is precursor to the formation of the microbial biofilms. The microorganism is simulated using a squirmer model that captures the major swimming mechanisms of potential, extensile, and contractile types of swimmers, while neglecting the biological complexities. A Giesekus constitutive equation is utilized to describe both viscoelasticity and shear-thinning behavior of the background fluid. We found that the viscoelasticity strongly affects the near-wall motion of a squirmer by generating an opposing polymeric torque which impedes the rotation of the swimmer away from the wall. In particular, the time a neutral squirmer spends at the close proximity of the wall is shown to increase with polymer relaxation time and reaches a maximum at Weissenberg number of unity. The shear-thinning effect is found to weaken the solvent stress and therefore, increases the swimmer-wall contact time. For a puller swimmer, the polymer stretching mainly occurs around its lateral sides, leading to reduced elastic resistance against its locomotion. The neutral and puller swimmers eventually escape the wall attraction effect due to a releasing force generated by the Newtonian viscous stress. In contrast, the pusher is found to be perpetually trapped near the wall as a result of the formation of a highly stretched region behind its body. It is shown that the shear-thinning property of the fluid weakens the wall-trapping effect for the pusher squirmer. PMID:26855446

  2. Exact solutions for oscillatory shear sweep behaviors of complex fluids from the Oldroyd 8-constant framework

    NASA Astrophysics Data System (ADS)

    Saengow, Chaimongkol; Giacomin, A. Jeffrey

    2018-03-01

    In this paper, we provide a new exact framework for analyzing the most commonly measured behaviors in large-amplitude oscillatory shear flow (LAOS), a popular flow for studying the nonlinear physics of complex fluids. Specifically, the strain rate sweep (also called the strain sweep) is used routinely to identify the onset of nonlinearity. By the strain rate sweep, we mean a sequence of LAOS experiments conducted at the same frequency, performed one after another, with increasing shear rate amplitude. In this paper, we give exact expressions for the nonlinear complex viscosity and the corresponding nonlinear complex normal stress coefficients, for the Oldroyd 8-constant framework for oscillatory shear sweeps. We choose the Oldroyd 8-constant framework for its rich diversity of popular special cases (we list 18 of these). We evaluate the Fourier integrals of our previous exact solution to get exact expressions for the real and imaginary parts of the complex viscosity, and for the complex normal stress coefficients, as functions of both test frequency and shear rate amplitude. We explore the role of infinite shear rate viscosity on strain rate sweep responses for the special case of the corotational Jeffreys fluid. We find that raising η∞ raises the real part of the complex viscosity and lowers the imaginary part. In our worked examples, we thus first use the corotational Jeffreys fluid, and then, for greater accuracy, we use the Johnson-Segalman fluid, to describe the strain rate sweep response of molten atactic polystyrene. For our comparisons with data, we use the Spriggs relations to generalize the Oldroyd 8-constant framework to multimode. Our generalization yields, unequivocally, a longest fluid relaxation time, which is used to assign Weissenberg and Deborah numbers to each oscillatory shear flow experiment. We then locate each experiment in the Pipkin space.
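
    For reference, one common convention for extracting the real and imaginary parts of the complex viscosity from the stress waveform by Fourier integration is sketched below; the exact sign and normalization conventions of the authors are an assumption here.

```latex
% One common convention (an assumption, not necessarily the authors' exact one):
% for an imposed shear rate \dot{\gamma}(t)=\dot{\gamma}_0\cos\omega t, the
% first-harmonic parts of the complex viscosity follow from Fourier integrals
% of the shear stress over one period T = 2\pi/\omega.
\begin{align}
  \eta'(\omega,\dot{\gamma}_0)  &= \frac{\omega}{\pi\,\dot{\gamma}_0}
      \int_{0}^{2\pi/\omega} \sigma(t)\,\cos\omega t \,\mathrm{d}t, \\
  \eta''(\omega,\dot{\gamma}_0) &= \frac{\omega}{\pi\,\dot{\gamma}_0}
      \int_{0}^{2\pi/\omega} \sigma(t)\,\sin\omega t \,\mathrm{d}t,
  \qquad \eta^{*} = \eta' - i\eta''.
\end{align}
```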

  3. Spatiotemporal evolution of hairpin eddies, Reynolds stress, and polymer torque in polymer drag-reduced turbulent channel flows.

    PubMed

    Kim, Kyoungyoun; Sureshkumar, Radhakrishna

    2013-06-01

    To study the influence of dynamic interactions between turbulent vortical structures and polymer stress on turbulent friction drag reduction, a series of simulations of channel flow is performed. We obtain self-consistent evolution of an initial eddy in the presence of polymer stresses by utilizing the finitely extensible nonlinear elastic-Peterlin (FENE-P) model. The initial eddy is extracted by the conditional averages for the second quadrant event from fully turbulent Newtonian flow, and the initial polymer conformation fields are given by the solutions of the FENE-P model equations corresponding to the mean shear flow in the Newtonian case. At a relatively low Weissenberg number We(τ) (=50), defined as the ratio of the polymer relaxation time to the wall time scale, the generation of new vortices is inhibited by polymer-induced countertorques. Thus fewer vortices are generated in the buffer layer. However, the head of the primary hairpin is unaffected by the polymer stress. At larger We(τ) values (≥100), the hairpin head becomes weaker and vortex autogeneration and Reynolds stress growth are almost entirely suppressed. The temporal evolution of the vortex strength and polymer torque magnitude reveals that polymer extension by the vortical motion results in a polymer torque that increases in magnitude with time until a maximum value is reached over a time scale comparable to the polymer relaxation time. The polymer torque retards the vortical motion and Reynolds stress production, which in turn weakens flow-induced chain extension and torque itself. An analysis of the vortex time scales reveals that with increasing We(τ), vortical motions associated with a broader range of time scales are affected by the polymer stress. This is qualitatively consistent with Lumley's time criterion for the onset of drag reduction.
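
    Assuming the usual viscous wall time scale ν/u_τ² (an assumption; the abstract only states "wall time scale"), the friction Weissenberg number used above reads:

```latex
\begin{equation}
  \mathrm{We}_{\tau} \;=\; \frac{\lambda}{\nu/u_{\tau}^{2}} \;=\; \frac{\lambda\,u_{\tau}^{2}}{\nu},
\end{equation}
% so We_tau = 50 and We_tau >= 100 correspond to polymers relaxing far more
% slowly than the near-wall turbulent time scale.
```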

  4. The effects of slit-like confinement on flow-induced polymer deformation

    NASA Astrophysics Data System (ADS)

    Ghosal, Aishani; Cherayil, Binny J.

    2017-08-01

    This paper is broadly concerned with the dynamics of a polymer confined to a rectangular slit of width D and deformed by a planar elongational flow of strength γ̇. More specifically, it examines the nature of the coil-stretch transition that such polymers undergo when the flow strength γ̇ is varied, and the degree to which this transition is affected by the presence of restrictive boundaries. These issues are explored within the framework of a finitely extensible Rouse model that includes pre-averaged surface-mediated hydrodynamic interactions. Calculations of the chain's steady-state fractional extension x using this model suggest that different modes of relaxation (which are characterized by an integer p) exert different levels of control on the coil-stretch transition. In particular, the location of the transition (as identified from the graph of x versus the Weissenberg number Wi, a dimensionless parameter defined by the product of γ̇ and the time constant τp of a relaxation mode p) is found to vary with the choice of τp. Specifically, when τ1 is used in the definition of Wi, the x vs. Wi data for different D lie on a single curve, but when τ3 is used instead (with τ3 > τ1) the corresponding data lie on distinct curves. These findings are in close qualitative agreement with a number of experimental results on confinement effects on DNA stretching in electric fields. Similar D-dependent trends are seen in our calculated force vs. Wi data, but force vs. x data are essentially D-independent and lie on a single curve.
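
    In the abstract's notation the mode-dependent Weissenberg number is simply the product quoted above; the restatement below is included only to make the data-collapse argument explicit.

```latex
% Weissenberg number built from relaxation mode p (as defined in the abstract):
\begin{equation}
  \mathrm{Wi} \;=\; \dot{\gamma}\,\tau_{p},
\end{equation}
% so the same flow maps to different Wi values depending on whether \tau_1 or
% \tau_3 is chosen, which is why the x--Wi curves collapse for one choice of
% mode and separate for the other.
```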

  5. Single polymer dynamics in semi-dilute unentangled and entangled solutions: from molecular conformation to normal stress

    NASA Astrophysics Data System (ADS)

    Schroeder, Charles

    Semi-dilute polymer solutions are encountered in a wide array of applications such as advanced 3D printing technologies. Semi-dilute solutions are characterized by large fluctuations in concentration, such that hydrodynamic interactions, excluded volume interactions, and transient chain entanglements may be important, which greatly complicates analytical modeling and theoretical treatment. Despite recent progress, we still lack a complete molecular-level understanding of polymer dynamics in these systems. In this talk, I will discuss three recent projects in my group to study semi-dilute solutions that focus on single molecule studies of linear and ring polymers and a new method to measure normal stresses in microfluidic devices based on the Stokes trap. In the first effort, we use single polymer techniques to investigate the dynamics of semi-dilute unentangled and semi-dilute entangled DNA solutions in extensional flow, including polymer relaxation from high stretch, transient stretching dynamics in step-strain experiments, and steady-state stretching in flow. In the semi-dilute unentangled regime, our results show a power-law scaling of the longest polymer relaxation time that is consistent with scaling arguments based on the double cross-over regime. Upon increasing concentration, we observe a transition region in dynamics to the entangled regime. We also studied the transient and steady-state stretching dynamics in extensional flow using the Stokes trap, and our results show a decrease in transient polymer stretch and a milder coil-to-stretch transition for semi-dilute polymer solutions compared to dilute solutions, which is interpreted in the context of a critical Weissenberg number Wi at the coil-to-stretch transition. Interestingly, we observe a unique set of polymer conformations in semi-dilute unentangled solutions that are highly suggestive of transient topological entanglements in solutions that are nominally unentangled at equilibrium. Taken together, these results suggest that the transient stretching pathways in semi-dilute solution extensional flows are qualitatively different than for both dilute solutions and for semi-dilute solutions in shear flow. In a second effort, we studied the dynamics of ring polymers in background solutions of semi-dilute linear polymers. Interestingly, we observe strikingly large fluctuations in steady-state polymer extension for ring polymers in flow, which occurs due to the interplay between polymer topology and concentration leading to chain `threading' in flow. In a third effort, we developed a new microfluidic method to measure normal stress and extensional viscosity that can be loosely described as passive yet non-linear microrheology. In particular, we incorporated 3-D particle imaging velocimetry (PIV) with the Stokes trap to study extensional flow-induced particle migration in semi-dilute polymer solutions. Experimental results are analyzed using the framework of a second-order-fluid model, which allows for measurement of normal stress and extensional viscosity in semi-dilute polymer solutions, all of which is a first-of-its-kind demonstration. Microfluidic measurements of extensional viscosity are directly compared to the dripping-onto-substrate or DOS method, and good agreement is generally observed. Overall, our work aims to provide a molecular-level understanding of the role of polymer topology and concentration on bulk rheological properties by using single polymer techniques.

  6. Structure-property evolution during polymer crystallization

    NASA Astrophysics Data System (ADS)

    Arora, Deepak

    The main theme of this research is to understand the structure-property evolution during crystallization of a semicrystalline thermoplastic polymer. A combination of techniques including rheology, small angle light scattering, differential scanning calorimetry and optical microscopy is applied to follow the mechanical and optical properties along with crystallinity and the morphology. Isothermal crystallization experiments on isotactic poly-1-butene at early stages of spherulite growth provide quantitative information about nucleation density, volume fraction of spherulites and their crystallinity, and the mechanism of connecting into a sample-spanning structure. Optical microscopy near the fluid-to-solid transition suggests that the transition, as determined by time-resolved mechanical spectroscopy, is not caused by packing/jamming of spherulites but by the formation of a percolating network structure. The effects of strain, Weissenberg number (We) and specific mechanical work (w) on the rate of crystallization (nucleation followed by growth) and on the growth of anisotropy were studied for shear-induced crystallization of isotactic poly-1-butene. The samples were sheared for a finite strain at the beginning of the experiment and then crystallized without further flow (Janeschitz-Kriegl protocol). Strain requirements to attain steady state/leveling off of the rate of crystallization were found to be much larger than the strain needed to achieve steady state of flow. The large-strain and We>1 criteria were also observed for the morphological transition from spherulitic growth to oriented growth. An apparatus for small angle light scattering (SALS) and light transmission measurements under shear was built and tested at the University of Massachusetts Amherst. As a new development, the polarization direction can be rotated by a liquid crystal polarization rotator (LCPR) with a short response time of 20 ms. The experiments were controlled and analyzed in real time with a LabVIEW(TM)-based code (LabVIEW(TM) 7.1). The SALS apparatus was custom built for ExxonMobil Research in Clinton, NJ.

  7. Flow of viscoelastic fluids around a sharp microfluidic bend: Role of wormlike micellar structure

    NASA Astrophysics Data System (ADS)

    Hwang, Margaret Y.; Mohammadigoushki, Hadi; Muller, Susan J.

    2017-04-01

    We examine the flow and instabilities of three viscoelastic fluids—a semidilute aqueous solution of polyethylene oxide (PEO) and two wormlike micellar solutions of cetylpyridinium chloride and sodium salicylate—around a microfluidic 90° bend, in which shear deformation and streamline curvature dominate. Similar to results reported by Gulati et al. [S. Gulati et al., Phys. Rev. E 78, 036314 (2008), 10.1103/PhysRevE.78.036314; S. Gulati et al., J. Rheol. 54, 375 (2010), 10.1122/1.3308643] for PEO solutions, we report a critical Weissenberg number (Wi) for the onset of lip vortex formation upstream of the corner. However, the decreased aspect ratio (channel depth to width) results in a slightly higher critical Wi and a vortex that grows more slowly. We consider wormlike micellar solutions of two salt to surfactant concentration ratios R = 0.55 and R = 0.79. At R = 0.55, the wormlike micelles are linear and exhibit strong viscoelastic behavior, but at R = 0.79, the wormlike micelles become branched and exhibit shear-banding behavior. Microfluidic experiments on the R = 0.55 solution reveal two flow transitions. The first transition, at Wi = 6, is characterized by the formation of a stationary lip vortex upstream of the bend; at the second transition, at Wi = 20, the vortex fluctuates in time and changes size. The R = 0.79 solution also exhibits two transitions. The first transition at Wi = 4 is characterized by the appearance of two intermittent vortices, one at the lip and one at the far outside corner. Increasing the flow rate to Wi > 160 results in a transition to a second unstable regime, where there is only a lip vortex that fluctuates in size. The difference in flow transitions in PEO and wormlike micellar solutions presumably arises from the additional contribution of wormlike micellar breakage and reformation under shear. The flow transitions in wormlike micellar solutions are also significantly affected by chain branching.

  8. A viscoelastic strain energy principle expressed in fold thrust belts and other compressional regimes

    NASA Astrophysics Data System (ADS)

    Patton, Regan L.; Watkinson, A. John

    2005-07-01

    A mathematical folding theory for stratified viscoelastic media in layer parallel compression is presented. The second order fluid, in slow flow, is used to model rock rheological behavior because it is the simplest nonlinear constitutive equation exhibiting viscoelastic effects. Scaling and non-dimensionalization of the model system reveals the presence of Weissenberg number (Wi), defined as a ratio of time scales τ*/(H*/V*). V*/H* is the strain rate (s^-1) imposed by an assumed far field velocity V* acting on a layer of thickness H*, while τ* (s) is related to the relaxation of normal stresses. Our most significant finding is a transitional behavior as Wi → ½, which is independent of the viscosity contrast. A change of variables shows that lengths associated with this transition are scaled by a parameter α = [(1 − 2Wi)/(1 + 2Wi)]^(1/2), which is inversely proportional to local strain energy. On this basis a scaling law representing a distribution of non-dimensional wavelengths (wavelength/layer thickness) is derived. Geologically this is consistent with a transition from folding to faulting, as observed in fold-thrust belts. Folding, a distributed deformation scaling as Wi^(-1), is found to be energetically favored at non-dimensional wavelengths ranging from about three to seven. Furthermore, the transition from folding to faulting, a localized deformation scaling as (αWi)^(-1), is predicted at a non-dimensional wavelength of about seven. These findings are consistent with measurements of thrust sheets in the Sawtooth Mountains of western Montana, USA and other fold-thrust belts. A review of the literature reveals a similar distribution of non-dimensional wavelengths spanning a wide range of observational scales in compressional deformation. Specific examples include lithospheric scale folding in the central Indian Basin and microscopic scale failure of ice columns between splay microcracks in laboratory studies.
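
    A minimal numerical sketch of the scaling parameter quoted above (using the abstract's own expression for α) illustrates the transitional behavior as Wi approaches 1/2; the sample values are illustrative only.

```python
# Illustrative sketch of the scaling parameter quoted in the abstract:
# alpha = [(1 - 2*Wi) / (1 + 2*Wi)]**0.5, real only for Wi < 1/2 and vanishing
# as Wi -> 1/2, i.e. as the local strain energy (proportional to 1/alpha) grows.
import numpy as np

def alpha(Wi):
    Wi = np.asarray(Wi, dtype=float)
    return np.sqrt((1.0 - 2.0 * Wi) / (1.0 + 2.0 * Wi))

for Wi in (0.0, 0.2, 0.4, 0.49, 0.499):
    print(f"Wi = {Wi:5.3f}  alpha = {alpha(Wi):.4f}")
# alpha -> 1 recovers the purely viscous (folding) limit; alpha -> 0 marks the
# transition toward localized (faulting) deformation described in the abstract.
```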

  9. Two-dimensional dynamics of elasto-inertial turbulence and its role in polymer drag reduction

    NASA Astrophysics Data System (ADS)

    Sid, S.; Terrapon, V. E.; Dubief, Y.

    2018-02-01

    The goal of the present study is threefold: (i) to demonstrate the two-dimensional nature of the elasto-inertial instability in elasto-inertial turbulence (EIT), (ii) to identify the role of the bidimensional instability in three-dimensional EIT flows, and (iii) to establish the role of the small elastic scales in the mechanism of self-sustained EIT. Direct numerical simulations of viscoelastic fluid flows are performed in both two- and three-dimensional straight periodic channels using the Peterlin finitely extensible nonlinear elastic model (FENE-P). The Reynolds number is set to Reτ = 85, which is subcritical for two-dimensional flows but beyond the transition for three-dimensional ones. The polymer properties selected correspond to those of typical dilute polymer solutions, and two moderate Weissenberg numbers, Wiτ = 40, 100, are considered. The simulation results show that sustained turbulence can be observed in two-dimensional subcritical flows, confirming the existence of a bidimensional elasto-inertial instability. The same type of instability is also observed in three-dimensional simulations where both Newtonian and elasto-inertial turbulent structures coexist. Depending on the Wi number, one type of structure can dominate and drive the flow. For large Wi values, the elasto-inertial instability tends to prevail over the Newtonian turbulence. This statement is supported by (i) the absence of typical Newtonian near-wall vortices and (ii) strong similarities between two- and three-dimensional flows when considering larger Wi numbers. The role of small elastic scales is investigated by introducing global artificial diffusion (GAD) in the hyperbolic transport equation for polymers. The aim is to measure how the flow reacts when the smallest elastic scales are progressively filtered out. The study results show that the introduction of large polymer diffusion in the system strongly damps a significant part of the elastic scales that are necessary to feed turbulence, eventually leading to flow laminarization. A sufficiently high Schmidt number (weakly diffusive polymers) is necessary to allow self-sustained turbulence to settle. Although EIT can withstand a low amount of diffusion and remains in a nonlaminar chaotic state, adding a finite amount of GAD in the system can have an impact on the dynamics and lead to important quantitative changes, even for Schmidt numbers as large as 10^2. The use of GAD should therefore be avoided in viscoelastic flow simulations.
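
    For orientation, a schematic non-dimensional form of the FENE-P conformation-tensor equation with the global artificial diffusion term is sketched below; the exact scaling and Peterlin function used by the authors are assumptions here.

```latex
% Schematic (non-dimensional) FENE-P conformation-tensor equation with the
% global artificial diffusion (GAD) term discussed above; scaling and Peterlin
% function are assumptions, not taken from the paper.
\begin{equation}
  \frac{\partial \mathbf{c}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{c}
  = \mathbf{c}\cdot\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}\cdot\mathbf{c}
  - \frac{1}{\mathrm{Wi}}\left(\frac{L^{2}}{L^{2}-\operatorname{tr}\mathbf{c}}\,\mathbf{c}-\mathbf{I}\right)
  + \frac{1}{\mathrm{Re}\,\mathrm{Sc}}\nabla^{2}\mathbf{c}.
\end{equation}
% Large Sc (weakly diffusive polymers) preserves the small elastic scales that
% sustain EIT; decreasing Sc filters them out and, per the abstract, can
% relaminarize the flow.
```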

  10. Mixing of passive tracers in the decay Batchelor regime of a channel flow

    NASA Astrophysics Data System (ADS)

    Jun, Yonggun; Steinberg, Victor

    2010-12-01

    We report detailed quantitative studies of passive scalar mixing in a curvilinear channel flow, where elastic turbulence in a dilute polymer solution of high molecular weight polyacrylamide in a high viscosity water-sugar solvent was achieved. For quantitative investigation of mixing, a detailed study of the profiles of the mean longitudinal and radial components of the velocity in the channel as a function of Wi was carried out. In addition, the maximum of the average value, as well as the rms, of the longitudinal velocity was used to determine the threshold of the elastic instability in the channel flow. The rms of the radial derivatives of the longitudinal and radial velocity components was utilized to define the control parameters of the problem, the Weissenberg (Wiloc) and Péclet (Pe) numbers. The main result of these studies is the quantitative test of the theoretical prediction for the value of the mixing length in the decay Batchelor regime. The experiment shows a large quantitative discrepancy, of more than a factor of 200 in the value of the coefficient C that appears in the theoretical expression for the mixing length, although the predicted scaling relation is recovered. There are two possible reasons for this discrepancy. The first is the theory's assumption of a δ-correlated velocity field, which is at odds with the experimental observations. The second, and probably more relevant, explanation for the significantly increased mixing length and thus reduced mixing efficiency is the observed jets: rare, localized, and vigorous ejections of scalar trapped near the wall, which protrude into the peripheral region as well as the bulk. They were first found in recent numerical calculations and then observed in the experiment reported here. The jets strongly reduce the mixing efficiency, in particular in the peripheral region, and so can lead to a considerable increase of the mixing length. We hope that this result will initiate further numerical calculations of the mixing length. Finally, we analyze the statistical properties of the mixing in the decay Batchelor regime by studying the power spectra, the scaling of the decay exponents, the structure functions of a tracer, moments of the PDF of passive scalar increments, and the temporal and spatial correlation functions, and find rather satisfactory agreement with theory.

  11. Raman Spectroscopy and Structure of MgSiO3 High Temperature C2/c Clinoenstatite

    NASA Astrophysics Data System (ADS)

    Kusu, R.; Yoshiasa, A.; Nishiyama, T.; Akihiko, N.; Maki, O.; Hiroshi, A.; Sugiyama, K.

    2014-12-01

    High-temperature clinoenstatite (HT-CEn) is one of the important MgSiO3 pyroxene polymorphs. A single crystal of the C2/c HT-CEn endmember was first synthesized by rapid pressure-temperature quenching from 15-16 GPa and 900-1900 °C [1]. No previous report of its production as a single crystal or large domain had been made for the MgSiO3 endmember. HT-CEn-type modifications were observed in Ca-poor Mg-Fe clinoenstatite and pigeonite and are always found to be unquenchable on rapid cooling. High-pressure, high-temperature experiments on the MgSiO3 composition were carried out with a Kawai-type multi-anvil apparatus. The samples were quenched by rapidly releasing the oil pressure load and/or by blow-out of the anvil cell gasket. The space group C2/c was strictly determined from Rigaku RAPID Weissenberg photographs and synchrotron radiation. HT-CEn and HP-CEn have greatly different beta angles of 109° and 101°, respectively. Raman spectra of HT-CEn and OEn single crystals were collected at ambient conditions. Unusual bonding distances are frozen into the metastable structure. The observed average Mg1-O and Si-O distances in HT-CEn [1.997 and 1.620 Å, respectively] are shorter than those in HP-CEn at 7.9 GPa. The average Mg2-O distance in HT-CEn [2.311 Å] is significantly longer than that in L-CEn, giving an abnormally large distance for the Mg2 atom. The Mg2 polyhedron in HT-CEn is more irregular than that in HP-CEn. The Debye-Waller factors of atoms in HT-CEn have abnormally large amplitudes. The static irregularity of the atomic displacements caused by the transition is frozen into the metastable state. Almost all Raman peaks are broad owing to the large statistical positional disorder of atoms in HT-CEn. The broad patterns share the common features observed in high-temperature Raman spectroscopy of pyroxenes. Peaks have been confirmed at 108, 259, 684, and 1097 cm-1. Peak positions for HT-CEn differ from those of protoenstatite at high temperature. HT-CEn may be found in natural rocks with a rapid quenching history, such as shock-metamorphosed meteorites. The peaks at 108 and 684 cm-1 in particular are clear, and the Raman spectra can be used for identification. [1] A. Yoshiasa, A. Nakatsuka, M. Okube and T. Katsura, Acta Crystallographica Section B, 2013, 69, 541-546

  12. Microstructure of Mixed Surfactant Solutions by Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Naranjo, Edward

    1995-01-01

    Surfactant mixtures add a new dimension to the design of complex fluid microstructure. By combining different surfactants it is not only possible to modify aggregate morphology and control the macroscopic properties of colloidal dispersions but also to produce a variety of novel synergistic phases. Mixed systems produce new microstructures by altering the intermolecular and interaggregate forces in ways impossible for single component systems. In this dissertation, we report on the phase behavior and microstructure of several synthetic and biological surfactant mixtures as elucidated by freeze-fracture and cryo-transmission electron microscopy. We have discovered that stable, spontaneous unilamellar vesicles can be prepared from aqueous mixtures of commercially available single-tailed cationic and anionic surfactants. Vesicle stability is determined by the length and volume of the hydrocarbon chains of the "catanionic" pairs. Mixtures containing bulky or branched surfactant pairs (C16/C12-14) in water produce defect-free, fairly monodisperse equilibrium vesicles at high dilution. In contrast, mixtures of catanionic surfactants with highly asymmetric tails (C16/C8) form phases of porous vesicles, dilute lamellar Lα, and anomalous isotropic L3 phases. Images of the microstructure by freeze-fracture microscopy show that the L3 phase consists of multiconnected self-avoiding bilayers with saddle-shaped curvature. The forces between bilayers of vesicle-forming cationic and anionic surfactant mixtures were also measured using the Surface Force Apparatus (SFA). We find that the vesicles are stabilized by a long range electrostatic repulsion at large separations (>20 Å) and an additional salt-independent repulsive force below 20 Å. The measured forces correlate very well with the ternary phase diagram and the vesicle microstructures observed by electron microscopy. In addition to studying ionic surfactants, we have also done original work with biological surfactants. We have found that subtle changes by surfactant additives to phosphatidylcholines (PC) produce dramatic changes in the microstructure of the composite that are impossible to determine from simple scattering experiments. Novel microstructures were observed at mole ratios from 4/1 to 9/1 long chain (Di-C16PC)/short chain lipid (Di-C7PC), including disc-like micelles and rippled bilayers at room temperature. We have also observed for the first time the formation of single layered ripple phase bilayer fragments. The formation of such fragments eliminates a number of theories of formation of this unique structure that depend on coupling between bilayers. In a similar system, dimyristoyl phosphatidylcholine (DMPC) mixed with the branched alcohol geraniol produces a bluish and extremely viscoelastic phase of giant multilamellar wormy vesicles. This phase shows the Weissenberg effect under flow due to the distortion of the entangled vesicles and may be related to fluid lamellar phases and L3 phases often seen in surfactant-alcohol-water systems. Lysophosphatidylcholine, the single-chain counterpart of the diacyl phospholipids, can also form bilayer phases when combined with long-chain fatty acids in water. The phase transition characteristics and appearance of the bilayers in equimolar mixtures of lysolipid and fatty acid are similar to those of the diacyl-PC. Electron microscopy reveals large extended multilayers in mixtures with excess lysolipid and multilamellar vesicles in mixtures with excess fatty acid.

  13. Hydrodynamic Coating of a Fiber

    NASA Astrophysics Data System (ADS)

    Quéré, D.; de Ryck, A.

    We discuss how a solid (especially a fiber) is coated when drawn out of a bath of liquid. 1. For slow withdrawals out of pure viscous liquids, the data are found to be fitted by the famous Landau law: the coating then results from a balance between viscosity and capillarity. For quicker withdrawals, the thickness of the entrained film suddenly diverges, at a velocity of order 1 m/s. Inertia is shown to be responsible for this effect. At still higher velocities, the thickness decreases with the velocity because the solid can only entrain the viscous boundary layer. 2. For complex fluids, surface effects are found in the low velocity regime: out of a surfactant solution, films are thicker than predicted by Landau, by a factor of order 2, and the thickening factor is shown to be fixed by the Marangoni flow due to the presence of surfactants; out of an emulsion, the film can be enriched with oil, which can be understood by a simple model of capture; out of a polymer solution, a strong swelling of the film is observed once the solution is semi-dilute, owing to normal stresses (the Weissenberg effect). Hence, the problem has two families of solution: (i) at low velocity, the thickness of the layer is fixed by a balance between viscous and surface forces and thus is sensitive to the presence of surfactants or other heterogeneities; (ii) at high velocity, inertia must be considered and the film thickness is fixed by the bulk properties of the liquid (density and viscosity). In these regimes, it is not affected by the presence of surfactants in the bath.
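
    In the visco-capillary (slow withdrawal) regime, the "Landau law" for a fiber takes the classical Landau-Levich-Derjaguin form; the numerical prefactor below is the textbook value and should be taken as indicative rather than as the authors' fitted result.

```latex
% Classical Landau--Levich--Derjaguin ("Landau law") film thickness on a fiber
% of radius r drawn at velocity V from a liquid of viscosity \eta and surface
% tension \gamma; the prefactor 1.34 is the textbook value, quoted as indicative.
\begin{equation}
  h \;\simeq\; 1.34\, r\, \mathrm{Ca}^{2/3},
  \qquad \mathrm{Ca} = \frac{\eta V}{\gamma} \ll 1 ,
\end{equation}
% i.e. the visco-capillary balance of the slow-withdrawal regime; inertia,
% Marangoni stresses, and normal stresses modify this law as described above.
```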

  14. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  15. Comparison of DNA extraction methods for meat analysis.

    PubMed

    Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum

    2017-04-15

    Preventing adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious and health reasons, but also for fair trade practices; therefore, several methods for the identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA Method, a modified Cetyltrimethylammonium Bromide (CTAB) Method, Alkaline Method, Urea Method, Salt Method, Guanidinium Isothiocyanate (GuSCN) Method, Wizard Method, Qiagen Method, Zymogen Method and Genespin Method, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the salt method is easy to perform, inexpensive and environmentally friendly. Additionally, it has the highest yield among all the isolation methods tested. We suggest this method as an alternative method for DNA isolation from meat and meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Study of New Method Combined Ultra-High Frequency (UHF) Method and Ultrasonic Method on PD Detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Zhang, Jiwei; Chen, Ning; Li, Xiaoqi; Gong, Xiaojing

    2017-09-01

    GIS (gas insulated switchgear) is an important piece of equipment in power systems. Partial discharge detection plays an important role in assessing the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used for partial discharge (PD) detection in GIS, so it is necessary to investigate both methods. However, very few studies have been conducted on combining these two methods. From the viewpoint of safety, a new method based on the UHF method and the ultrasonic method for PD detection in GIS is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of fault localization. This paper presents a study aimed at clarifying the effect of the new combined UHF-ultrasonic method. Partial discharge tests were performed in a laboratory-simulated environment. The results obtained show the anti-interference capability of signal detection and the accuracy of fault localization achieved by this new combined UHF-ultrasonic method.
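
    One common way such a UHF/ultrasonic combination is used for locating a PD source (not necessarily the algorithm of this paper) is sketched below: the UHF pulse arrives essentially instantaneously while the ultrasonic wave travels at a finite acoustic speed, so the arrival-time difference gives a distance estimate. The acoustic speed and timing values are illustrative assumptions.

```python
# Hedged sketch of a time-of-arrival-difference distance estimate; the exact
# localization scheme of the paper is not reproduced here.

V_ACOUSTIC_SF6 = 140.0  # m/s, rough acoustic speed in SF6 gas (illustrative value)

def pd_source_distance(t_uhf, t_ultrasonic, v_acoustic=V_ACOUSTIC_SF6):
    """Estimate the distance (m) from the sensor pair to the PD source."""
    dt = t_ultrasonic - t_uhf       # seconds; the UHF arrival is the time reference
    if dt <= 0:
        raise ValueError("ultrasonic arrival must lag the UHF arrival")
    return v_acoustic * dt

# Example: a 3.5 ms lag between the UHF trigger and the acoustic burst
print(f"estimated distance: {pd_source_distance(0.0, 3.5e-3):.3f} m")
```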

  17. The multigrid preconditioned conjugate gradient method

    NASA Technical Reports Server (NTRS)

    Tatebe, Osamu

    1993-01-01

    A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as a preconditioner of the PCG method, is proposed. The multigrid method has inherent high parallelism and improves convergence of long wavelength components, which is important in iterative methods. By using this method as a preconditioner of the PCG method, an efficient method with high parallelism and fast convergence is obtained. First, a necessary condition for the multigrid method to satisfy the requirements of a preconditioner for the PCG method is considered. Next, numerical experiments show the behavior of the MGCG method and demonstrate that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. This fast convergence is understood in terms of the eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is realized that the MGCG method converges in very few iterations and the multigrid preconditioner is a desirable preconditioner of the conjugate gradient method.
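
    A minimal sketch of the preconditioned conjugate gradient iteration is given below; in the MGCG method the preconditioner application would be one multigrid V-cycle, for which a simple Jacobi stand-in is substituted here so the example runs on its own.

```python
# Minimal preconditioned conjugate gradient (PCG) sketch. In MGCG, 'apply_M_inv'
# would be one multigrid V-cycle approximately solving M z = r; a Jacobi
# (diagonal) stand-in is used here as the preconditioner.
import numpy as np

def pcg(A, b, apply_M_inv, tol=1e-10, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# 1-D Poisson test matrix; the Jacobi preconditioner stands in for the V-cycle.
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b, apply_M_inv=lambda r: r / np.diag(A))
print("converged in", iters, "iterations; residual", np.linalg.norm(b - A @ x))
```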

  18. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
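
    As a toy illustration of the continuous (gradient descent) family of methods covered by the survey, the sketch below minimizes a hypothetical 1-D Tikhonov-style denoising energy E(u) = 0.5*||u - f||^2 + 0.5*lam*||grad u||^2; the energy, signal, and parameter values are assumptions and are not drawn from the paper.

```python
# Gradient descent on a toy smoothing energy (illustrative, not from the survey).
import numpy as np

def denoise_gradient_descent(f, lam=2.0, step=0.1, n_iter=500):
    u = f.copy()
    for _ in range(n_iter):
        lap = np.zeros_like(u)
        lap[1:-1] = u[:-2] - 2 * u[1:-1] + u[2:]   # discrete Laplacian (interior points)
        grad_E = (u - f) - lam * lap               # dE/du for the quadratic energy
        u -= step * grad_E
    return u

rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.3 * rng.standard_normal(200)
u = denoise_gradient_descent(f)
print("noisy roughness:", np.sum(np.diff(f)**2), " smoothed:", np.sum(np.diff(u)**2))
```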

  19. [Comparative study on four kinds of assessment methods of post-marketing safety of Danhong injection].

    PubMed

    Li, Xuelin; Tang, Jinfa; Meng, Fei; Li, Chunxiao; Xie, Yanming

    2011-10-01

    To study the adverse reactions of Danhong injection with four kinds of methods, the central monitoring method, chart review method, literature study method and spontaneous reporting method, to compare the differences between them, and to explore an appropriate method for carrying out post-marketing safety evaluation of traditional Chinese medicine injections. Adverse-reaction questionnaires were drawn up for the central monitoring, chart review and literature study methods, and information on adverse reactions was collected over a defined period; Danhong injection adverse reaction information from the Henan Province spontaneous reporting system was collected for the spontaneous reporting method. Data were summarized and analyzed descriptively. With the central monitoring, chart review, literature study and spontaneous reporting methods, the rates of adverse events were 0.993%, 0.336%, 0.515% and 0.067%, respectively. Cyanosis, arrhythmia, hypotension, sweating, erythema, hemorrhagic dermatitis, rash, irritability, bleeding gums, toothache, tinnitus, asthma, elevated aminotransferases, constipation and pain were newly discovered adverse reactions. The central monitoring method is the appropriate method for carrying out post-marketing safety evaluation of traditional Chinese medicine injections, as it can objectively reflect real-world clinical usage.

  20. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
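
    One simple way to integrate the ranked outputs of several individual predictors into an ensemble is average-rank (Borda-style) aggregation, sketched below with hypothetical scores; the paper's actual integration scheme may differ.

```python
# Hedged sketch of average-rank (Borda-style) integration of individual methods.
# Scores are hypothetical confidence values for candidate miRNA-mRNA targets,
# where higher means a more confident predicted target.
import numpy as np

def ensemble_rank(score_matrix):
    """score_matrix: (n_methods, n_candidate_targets). Returns average ranks
    (lower = more confidently predicted by the ensemble)."""
    ranks = np.argsort(np.argsort(-score_matrix, axis=1), axis=1)  # 0 = highest score
    return ranks.mean(axis=0)

scores = np.array([
    [0.9, 0.2, 0.5, 0.7],   # e.g. a correlation method (Pearson)
    [0.8, 0.1, 0.6, 0.9],   # e.g. a causal inference method (IDA)
    [0.7, 0.3, 0.4, 0.8],   # e.g. a regression method (Lasso)
])
print(ensemble_rank(scores))   # candidates 3 and 0 come out on top
```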

  1. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring the miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods are often subject to the problem of inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proved to outperform each of their individual component methods in theory. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight different popular miRNA target prediction methods to three cancer datasets, and compare their performance with the ensemble methods which integrate the results from each combination of the individual methods. The validation results using experimentally confirmed databases show that the results of the ensemble methods complement those obtained by the individual methods and the ensemble methods perform better than the individual methods across different datasets. The ensemble method, Pearson+IDA+Lasso, which combines methods in different approaches, including a correlation method, a causal inference method, and a regression method, is the best performed ensemble method in this study. Further analysis of the results of this ensemble method shows that the ensemble method can obtain more targets which could not be found by any of the single methods, and the discovered targets are more statistically significant and functionally enriched. The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448

  2. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...

  3. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...

  4. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...

  5. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...

  6. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.

  7. Development and validation of spectrophotometric methods for estimating amisulpride in pharmaceutical preparations.

    PubMed

    Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti

    2010-01-01

    Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating Amisulpride in pharmaceutical preparations. These are based on the diazotization of Amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima, lambda(max), are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration range of 2.5-12.5 microg mL(-1) in Method A, 5-25 and 10-50 microg mL(-1) in Method B, 4-20 microg mL(-1) in Method C, 2.5-12.5 microg mL(-1) in Method D and 5-15 microg mL(-1) in Method E. The results obtained for the proposed methods are in good agreement with labeled amounts, when marketed pharmaceutical preparations were analyzed.

  8. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    The implicit shape-based reconstruction method in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than the image-based reconstruction method. However, the implicit shape method suffers from a low convergence speed and performs unstably due to the utilization of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and then the reconstruction can be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, the a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than the image-based reconstruction method.
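
    A hedged illustration of the key idea stated above (replace the nearly flat-gradient Heaviside indicator with a cosine-shaped indicator whose derivative is simple everywhere, so a Jacobian-based solver such as Levenberg-Marquardt can be used) is sketched below; the exact parameterization in the paper may differ from this sketch.

```python
# Illustrative only: a smoothed Heaviside indicator vs. a cosine-based indicator.
import numpy as np

def heaviside_smooth(phi, eps=1e-2):
    # classical smoothed Heaviside used in implicit shape methods
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def cosinoidal_indicator(phi):
    # cosine-based indicator: varies smoothly between 0 and 1 over a finite band
    return 0.5 * (1.0 - np.cos(np.clip(phi, 0.0, np.pi)))

def cosinoidal_derivative(phi):
    # simple analytic derivative, usable in a Levenberg-Marquardt Jacobian
    return np.where((phi > 0.0) & (phi < np.pi), 0.5 * np.sin(phi), 0.0)

phi = np.linspace(-1.0, 4.0, 6)
print("smoothed Heaviside:", np.round(heaviside_smooth(phi), 3))
print("cosinoidal        :", np.round(cosinoidal_indicator(phi), 3))
print("d/dphi cosinoidal :", np.round(cosinoidal_derivative(phi), 3))
```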

  9. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.

  10. Method Engineering: A Service-Oriented Approach

    NASA Astrophysics Data System (ADS)

    Cauvet, Corine

    In the past, a large variety of methods have been published ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering. The meta modeling approach provides means for building methods by instantiation, the component-based approach aims at supporting the development of methods by using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed for processing a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of service dynamic composition support method construction and method adaptation to different development contexts.

  11. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods.

    PubMed

    Ramadan, Nesrin K; El-Ragehy, Nariman A; Ragab, Mona T; El-Zeany, Badr A

    2015-02-25

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method ((1)DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method ((3)D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Ramadan, Nesrin K.; El-Ragehy, Nariman A.; Ragab, Mona T.; El-Zeany, Badr A.

    2015-02-01

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method (1DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method (3D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision.

  13. Evaluating the efficiency of spectral resolution of univariate methods manipulating ratio spectra and comparing to multivariate methods: An application to ternary mixture in common cold preparation

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia

    2015-02-01

    Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD), method B is the double divisor-derivative ratio spectrophotometric method, method C is the derivative ratio spectrum-zero crossing method (DRZC), and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding both accuracy and precision.

  14. Methods for elimination of dampness in Building walls

    NASA Astrophysics Data System (ADS)

    Campian, Cristina; Pop, Maria

    2016-06-01

    Dampness elimination in building walls is a very sensitive problem with high costs. Many methods are used, such as the chemical method, the electro-osmotic method, and the physical method. The RECON method is a representative and sustainable method in Romania. Italy has the most radical method of all: the technology consists of cutting the brick walls, inserting special plastic sheeting, and injecting a pre-mixed anti-shrinkage mortar.

  15. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.

  16. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
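
    For orientation, the DFP formula named above maintains an approximation H of the inverse Hessian and produces the search direction d = -Hg. The sketch below is a plain DFP quasi-Newton loop with a backtracking (Armijo) line search on an arbitrary test function; it shows only the quasi-Newton half of the story and does not reproduce the specific hybrid CG/quasi-Newton direction of the BFGS-CG family.

        import numpy as np

        def dfp_minimize(f, grad, x0, iters=500, tol=1e-8):
            """Plain DFP quasi-Newton method with a backtracking (Armijo) line search."""
            x, H = np.asarray(x0, float), np.eye(len(x0))
            for _ in range(iters):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                d = -H @ g                                   # quasi-Newton search direction
                t = 1.0
                while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # Armijo sufficient-decrease test
                    t *= 0.5
                s = t * d                                    # accepted step
                y = grad(x + s) - g                          # gradient change
                x = x + s
                if s @ y > 1e-12:                            # curvature condition keeps H positive definite
                    Hy = H @ y
                    H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)   # DFP update
            return x

        # usage: minimize the Rosenbrock function from an arbitrary starting point
        f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                   200*(x[1] - x[0]**2)])
        print(dfp_minimize(f, grad, [0.0, 0.0]))             # should approach [1, 1]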

  17. Generalization of the Engineering Method to the UNIVERSAL METHOD.

    ERIC Educational Resources Information Center

    Koen, Billy Vaughn

    1987-01-01

    Proposes that there is a universal method for all realms of knowledge. Reviews Descartes's definition of the universal method, the engineering definition, and the philosophical basis for the universal method. Contends that the engineering method best represents the universal method. (ML)

  18. Colloidal Electrolytes and the Critical Micelle Concentration

    ERIC Educational Resources Information Center

    Knowlton, L. G.

    1970-01-01

    Describes methods for determining the Critical Micelle Concentration of Colloidal Electrolytes; methods described are: (1) methods based on Colligative Properties, (2) methods based on the Electrical Conductivity of Colloidal Electrolytic Solutions, (3) Dye Method, (4) Dye Solubilization Method, and (5) Surface Tension Method. (BR)

  19. Theoretical analysis of three methods for calculating thermal insulation of clothing from thermal manikin.

    PubMed

    Huang, Jianhua

    2012-07-01

    There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
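
    Written out for a manikin with segments i having area fractions f_i, surface temperatures T_si and heat fluxes q_i, the three calculations differ only in where the area-weighting is applied. The sketch below uses the commonly cited forms (global: pooled temperatures over pooled heat losses; serial: area-weighted mean of the local insulations; parallel: reciprocal of the area-weighted mean of the local conductances); the three-segment data are invented for illustration.

        import numpy as np

        # hypothetical three-segment manikin: area fractions, surface temperatures (C), heat fluxes (W/m2)
        f  = np.array([0.4, 0.35, 0.25])
        Ts = np.array([33.5, 32.0, 30.5])
        q  = np.array([40.0, 55.0, 80.0])
        Ta = 20.0                                              # ambient temperature (C)

        I_global   = (np.sum(f * Ts) - Ta) / np.sum(f * q)     # pool temperatures and heat losses first
        I_serial   = np.sum(f * (Ts - Ta) / q)                 # average the local insulations
        I_parallel = 1.0 / np.sum(f * q / (Ts - Ta))           # average the local conductances

        clo = 0.155                                            # 1 clo = 0.155 m2*K/W
        for name, I in [("global", I_global), ("serial", I_serial), ("parallel", I_parallel)]:
            print(f"{name:8s}: {I:.4f} m2*K/W = {I / clo:.2f} clo")
        # for this particular (non-uniform) example the serial value is the largest and the
        # parallel value the smallest, consistent with the ordering discussed in the abstract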

  20. Comparison of five methods for the estimation of methane production from vented in vitro systems.

    PubMed

    Alvarez Hess, P S; Eckard, R J; Jacobs, J L; Hannah, M C; Moate, P J

    2018-05-23

    There are several methods for estimating methane production (MP) from feedstuffs in vented in vitro systems. One method (A; "gold standard") measures methane proportions in the incubation bottle's head space (HS) and in the vented gas collected in gas bags. Four other methods (B, C, D and E) measure methane proportion in a single gas sample from HS. Method B assumes the same methane proportion in the vented gas as in HS, method C assumes constant methane to carbon dioxide ratio, method D has been developed based on empirical data and method E assumes constant individual venting volumes. This study aimed to compare the MP predictions from these methods to that of the gold standard method under different incubation scenarios, to validate these methods based on their concordance with a gold standard method. Methods C, D and E had greater concordance (0.85, 0.88 and 0.81), lower root mean square error (RMSE) (0.80, 0.72 and 0.85) and lower mean bias (0.20, 0.35, -0.35) with the gold standard than did method B (concordance 0.67, RMSE 1.49 and mean bias 1.26). Methods D and E were simpler to perform than method C and method D was slightly more accurate than method E. Based on precision, accuracy and simplicity of implementation, it is recommended that, when method A cannot be used, methods D and E are preferred to estimate MP from vented in vitro systems. This article is protected by copyright. All rights reserved.

  1. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can realize noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used for identification of blood species with spectroscopy methods. The Least Squares Support Vector Machine (LSSVM) has been shown to be well suited to discriminant analysis. In this research, the PLSDA method and the LSSVM method were used for human blood discrimination. Compared with the results of the PLSDA method, the LSSVM method enhanced the performance of the identification models. The overall results show that the LSSVM method is more feasible for identifying human and animal blood species, and sufficiently demonstrate that LSSVM is a reliable and robust method for human blood identification that can be more effective and accurate.
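
    The Kennard-Stone selection named in the abstract is a deterministic, distance-based way to pick a calibration set that spans the measured spectra: start from the two most distant samples, then repeatedly add the sample whose minimum distance to the already-selected set is largest. A minimal sketch is given below (plain Euclidean distances, NumPy only); the "spectra" are random placeholders, not data from the study.

        import numpy as np

        def kennard_stone(X, n_select):
            """Return indices of n_select rows of X chosen by the Kennard-Stone algorithm."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distance matrix
            i, j = np.unravel_index(np.argmax(d), d.shape)               # two most distant samples
            selected = [int(i), int(j)]
            remaining = [k for k in range(len(X)) if k not in selected]
            while len(selected) < n_select:
                # for each candidate, the distance to its nearest already-selected sample
                min_d = d[np.ix_(remaining, selected)].min(axis=1)
                pick = remaining[int(np.argmax(min_d))]                  # maximin choice
                selected.append(pick)
                remaining.remove(pick)
            return selected

        # usage with placeholder "spectra": 50 samples x 200 wavelengths
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 200))
        cal_idx = kennard_stone(X, 35)                # e.g. 70% of the samples for calibration
        val_idx = sorted(set(range(50)) - set(cal_idx))
        print(len(cal_idx), "calibration samples,", len(val_idx), "validation samples")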

  2. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method by merging existing methods. Firstly, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from different methods were also different. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solved the problem of inconsistent results. Besides, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method by combining four existing methods based on a rank product algorithm, and identified 13 significant differential pathways based on it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
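
    The abstract describes merging pathway lists from several methods with a rank-product style algorithm. A generic sketch of that idea is shown below; this is a textbook rank-product aggregation, not necessarily the exact weighting used by the authors. Each method ranks the pathways, pathways absent from a method receive the worst rank, and the combined ordering follows the geometric mean of the ranks.

        import numpy as np

        def rank_product(method_rankings):
            """Combine several ranked pathway lists into one ranking by the geometric mean of ranks."""
            all_pathways = sorted(set().union(*method_rankings))
            worst = len(all_pathways) + 1                      # rank assigned to missing pathways
            scores = {}
            for p in all_pathways:
                ranks = [r.index(p) + 1 if p in r else worst for r in method_rankings]
                scores[p] = np.exp(np.mean(np.log(ranks)))     # geometric mean of ranks
            return sorted(all_pathways, key=lambda p: scores[p])

        # usage with made-up pathway lists from four hypothetical methods
        lists = [["cell cycle", "immune system", "metabolism"],
                 ["metabolism", "cell cycle", "apoptosis"],
                 ["immune system", "metabolism", "cell cycle"],
                 ["metabolism", "immune system", "signal transduction"]]
        print(rank_product(lists))   # pathways agreed on by most methods float to the top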

  3. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  4. Study of comparison between Ultra-high Frequency (UHF) method and ultrasonic method on PD detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia

    2017-11-01

    GIS (gas insulated switchgear) is an important type of equipment in power systems. Partial discharge plays an important role in detecting the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used in partial discharge (PD) detection for GIS. However, few studies have been conducted to compare these two methods. From the viewpoint of safety, it is necessary to investigate the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the effectiveness of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The obtained results show the anti-interference capability of signal detection and the accuracy of fault localization for the UHF method and the ultrasonic method. A new method based on the UHF method and the ultrasonic method of PD detection for GIS is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of detection localization.

  5. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, showing higher variation than the former methods. The combination of extraction and methylation steps had great recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.

  6. Birth Control Methods

    MedlinePlus

    Birth control (contraception) is any method, medicine, or ...

  7. 26 CFR 1.381(c)(5)-1 - Inventories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...

  8. 26 CFR 1.381(c)(5)-1 - Inventories.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...

  9. 46 CFR 160.076-11 - Incorporation by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... following methods: (1) Method 5100, Strength and Elongation, Breaking of Woven Cloth; Grab Method, 160.076-25; (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, 160.076-25; (3) Method 5134, Strength of Cloth, Tearing; Tongue Method, 160.076-25. Underwriters Laboratories (UL) Underwriters...

  10. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    PubMed Central

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833

  11. Interior-Point Methods for Linear Programming: A Review

    ERIC Educational Resources Information Center

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  12. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.

  13. [Baseflow separation methods in hydrological process research: a review].

    PubMed

    Xu, Lei-Lei; Liu, Jing-Lin; Jin, Chang-Jie; Wang, An-Zhi; Guan, De-Xin; Wu, Jia-Bing; Yuan, Feng-Hui

    2011-11-01

    Baseflow separation research is regarded as one of the most important and difficult issues in hydrology and ecohydrology, but it lacks unified standards for concepts and methods. This paper introduced the theories of baseflow separation based on the definitions of baseflow components, and analyzed the development course of different baseflow separation methods. Among the methods developed, the graphical separation method is simple and applicable but arbitrary, the balance method accords with the hydrological mechanism but is difficult to apply, whereas the time series separation method and the isotopic method can overcome the subjective and arbitrary defects of the graphical separation method, and thus can obtain the baseflow process quickly and efficiently. In recent years, hydrological modeling, digital filtering, and isotopic methods have been the main methods used for baseflow separation.
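
    Of the approaches listed, digital filtering is the easiest to reduce to a few lines. The sketch below implements the widely used one-parameter Lyne-Hollick recursive filter as one representative of the digital filtering family (other variants exist): the filtered quickflow is constrained to lie between zero and the total streamflow, and the baseflow is the remainder. The streamflow series and the filter parameter (commonly around 0.925) are placeholders.

        import numpy as np

        def lyne_hollick_baseflow(Q, alpha=0.925):
            """One-pass Lyne-Hollick digital filter: returns the baseflow component of streamflow Q."""
            Q = np.asarray(Q, float)
            quick = np.zeros_like(Q)
            quick[0] = 0.5 * Q[0]                            # simple initialisation
            for t in range(1, len(Q)):
                q = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (Q[t] - Q[t - 1])
                quick[t] = min(max(q, 0.0), Q[t])            # keep 0 <= quickflow <= streamflow
            return Q - quick                                 # baseflow = total flow - quickflow

        # usage with a made-up daily hydrograph (m3/s)
        Q = np.array([5, 5, 6, 20, 45, 30, 18, 12, 9, 7, 6, 5.5, 5.2, 5.0])
        print(np.round(lyne_hollick_baseflow(Q), 2))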

  14. Semi top-down method combined with earth-bank, an effective method for basement construction.

    NASA Astrophysics Data System (ADS)

    Tuan, B. Q.; Tam, Ng M.

    2018-04-01

    Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success but also in the economics of a construction project. Presently, two key methods are mainly relied on: the “Bottom-up” and the “Top-down” construction methods. This paper presents another method of construction, the “Semi Top-down method combined with earth-bank”, in order to take the advantages and limit the weaknesses of the above methods. The Bottom-up method was improved by using the earth-bank to stabilize the retaining walls instead of bracing steel struts. The Top-down method was improved by using the open-cut method for half of the earthwork quantities.

  15. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.

  16. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed namely: induced dual wavelength method (IDW), dual wavelength resolution technique (DWRT), advanced amplitude modulation method (AAM) and induced amplitude modulation method (IAM). The results of the novel methods were compared to that of three well-established methods which were: dual wavelength method (DW), Vierordt's method (VD) and bivariate method (BV). The developed methods were applied for the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as topical cream accompanied by the determination of methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones where no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be alternative to HPLC techniques in quality control laboratories.

  17. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

    In the current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the searching methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, while Fellenius' slices method is just an approximate method, like the finite element method. This paper proposes a new method to determine the minimum slope safety factor: the slope safety factor is determined with an analytical solution, while the critical slip surface is searched with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random pick to utilize mutation. An automatic computer search program is developed for the Genetic-Traversal Random Method. After comparison with other methods such as the slope/w software, results indicate that the Genetic-Traversal Random Search Method can give a very low safety factor, about half of that given by the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679

  18. Enumeration of total aerobic microorganisms in foods by SimPlate Total Plate Count-Color Indicator methods and conventional culture methods: collaborative study.

    PubMed

    Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E

    2003-01-01

    The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.

  19. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take excessively long to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
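
    To make the comparison concrete, the sketch below sets up a small uniform-grid 3D Poisson problem with the standard 7-point finite-difference stencil and runs plain Jacobi sweeps (the vectorised form is what lends itself to parallelisation) next to a simple Gauss-Seidel loop. Grid size, source term and sweep count are placeholders, and none of the MATLAB parallel machinery from the paper is reproduced here.

        import numpy as np

        n = 12                                  # interior points per direction (placeholder)
        h = 1.0 / (n + 1)                       # grid spacing on the unit cube
        f = np.ones((n, n, n))                  # placeholder source term
        u_j = np.zeros((n + 2, n + 2, n + 2))   # Jacobi iterate, Dirichlet boundary u = 0
        u_gs = np.zeros_like(u_j)               # Gauss-Seidel iterate

        for sweep in range(100):
            # Jacobi: every update uses only values from the previous sweep (easy to parallelise)
            u_new = u_j.copy()
            u_new[1:-1, 1:-1, 1:-1] = (u_j[:-2, 1:-1, 1:-1] + u_j[2:, 1:-1, 1:-1] +
                                       u_j[1:-1, :-2, 1:-1] + u_j[1:-1, 2:, 1:-1] +
                                       u_j[1:-1, 1:-1, :-2] + u_j[1:-1, 1:-1, 2:] -
                                       h * h * f) / 6.0
            u_j = u_new

            # Gauss-Seidel: new values are used as soon as they are computed (sequential by nature)
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    for k in range(1, n + 1):
                        u_gs[i, j, k] = (u_gs[i - 1, j, k] + u_gs[i + 1, j, k] +
                                         u_gs[i, j - 1, k] + u_gs[i, j + 1, k] +
                                         u_gs[i, j, k - 1] + u_gs[i, j, k + 1] -
                                         h * h * f[i - 1, j - 1, k - 1]) / 6.0

        c = n // 2 + 1
        print("centre value, Jacobi vs Gauss-Seidel:", u_j[c, c, c], u_gs[c, c, c])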

  20. Completed Suicide with Violent and Non-Violent Methods in Rural Shandong, China: A Psychological Autopsy Study

    PubMed Central

    Sun, Shi-Hua; Jia, Cun-Xian

    2014-01-01

    Background This study aims to describe the specific characteristics of completed suicides by violent methods and non-violent methods in rural Chinese population, and to explore the related factors for corresponding methods. Methods Data of this study came from investigation of 199 completed suicide cases and their paired controls of rural areas in three different counties in Shandong, China, by interviewing one informant of each subject using the method of Psychological Autopsy (PA). Results There were 78 (39.2%) suicides with violent methods and 121 (60.8%) suicides with non-violent methods. Ingesting pesticides, as a non-violent method, appeared to be the most common suicide method (103, 51.8%). Hanging (73 cases, 36.7%) and drowning (5 cases, 2.5%) were the only violent methods observed. Storage of pesticides at home and higher suicide intent score were significantly associated with choice of violent methods while committing suicide. Risk factors related to suicide death included negative life events and hopelessness. Conclusions Suicide with violent methods has different factors from suicide with non-violent methods. Suicide methods should be considered in suicide prevention and intervention strategies. PMID:25111835

  1. A review of propeller noise prediction methodology: 1919-1994

    NASA Technical Reports Server (NTRS)

    Metzger, F. Bruce

    1995-01-01

    This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related, not to the methods themselves, but the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods to the data base; and (7) make the methods widely available and provide training in their use.

  2. A different approach to estimate nonlinear regression model using numerical methods

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper is concerned with computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
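
    As a concrete reference for the Gauss-Newton idea discussed here, the sketch below fits a two-parameter exponential decay y = b1*exp(-b2*x) by repeatedly linearising the residuals: each update solves the normal equations (J^T J) delta = J^T r built from the Jacobian of the model. The data are synthetic and the model is only an example, not one taken from the paper.

        import numpy as np

        def gauss_newton(x, y, beta, iters=20):
            """Gauss-Newton iterations for the model y = b1 * exp(-b2 * x)."""
            b1, b2 = beta
            for _ in range(iters):
                pred = b1 * np.exp(-b2 * x)
                r = y - pred                                       # residuals
                J = np.column_stack([np.exp(-b2 * x),              # d(pred)/d(b1)
                                     -b1 * x * np.exp(-b2 * x)])   # d(pred)/d(b2)
                delta = np.linalg.solve(J.T @ J, J.T @ r)          # normal equations
                b1, b2 = b1 + delta[0], b2 + delta[1]
            return b1, b2

        # synthetic data generated from b1 = 2.5, b2 = 1.3 with a little noise
        rng = np.random.default_rng(1)
        x = np.linspace(0, 4, 40)
        y = 2.5 * np.exp(-1.3 * x) + rng.normal(scale=0.02, size=x.size)
        print(gauss_newton(x, y, beta=(1.0, 1.0)))   # should be close to (2.5, 1.3)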

  3. Sorting protein decoys by machine-learning-to-rank

    PubMed Central

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-01-01

    Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods could be roughly divided into three categories: the single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. The five-fold cross-validation on the 3DRobot dataset shows the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967

  4. Sorting protein decoys by machine-learning-to-rank.

    PubMed

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-08-17

    Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods could be roughly divided into three categories: the single-model methods, clustering-based methods and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. The five-fold cross-validation on the 3DRobot dataset shows the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset.

  5. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  6. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method which is not well known or discussed in the literature while exhibiting great potential is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more ISO 10534-2 standard-compliant. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.

  7. Methods for environmental change; an exploratory study.

    PubMed

    Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris

    2012-11-28

    While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially about how these are composed of methods for individual change ('Bundling') and how within one environmental level, organizations, methods differ when directed at the management ('At') or applied by the management ('From'). The first part of this online survey dealt with examining the 'bundling' of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) versus 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.

  8. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, which was referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than that of the SOR method and the Gauss-Seidel method if the relaxation parameter ω ∈ (0,1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are improved.
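
    For readers who want the iteration itself, a minimal SOR sweep is sketched below (a generic textbook implementation, not the IMGS variant studied in the paper): each component is relaxed using the newest available values, blended with the old value through the relaxation parameter ω, and ω = 1 recovers Gauss-Seidel.

        import numpy as np

        def sor(A, b, omega=1.2, iters=200, tol=1e-10):
            """Successive over-relaxation for Ax = b (omega = 1 gives Gauss-Seidel)."""
            A, b = np.asarray(A, float), np.asarray(b, float)
            x = np.zeros_like(b)
            for _ in range(iters):
                for i in range(len(b)):
                    sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]   # uses newest values for j < i
                    x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
                if np.linalg.norm(b - A @ x) < tol:
                    break
            return x

        # usage on a small diagonally dominant system
        A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        b = np.array([15.0, 10.0, 10.0])
        print(sor(A, b))                 # compare with np.linalg.solve(A, b)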

  9. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches have a large design space with few variables. Parametric methods that are commonly used nowadays are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which have wide application in airfoil design. This survey made a comparison among them to find out their abilities in the design of the airfoil, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-form deformation method. Methods extended from two-dimensional parametric methods have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process needs a flexible choice among them to adapt to the subsequent optimization procedure.
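
    Of the two-dimensional parameterisations listed, the Hicks-Henne approach is the easiest to illustrate: smooth sine "bump" functions are added to a baseline airfoil surface, and the design variables are the bump amplitudes. The sketch below shows the standard bump definition; the baseline thickness distribution, bump locations and amplitudes are arbitrary placeholders.

        import numpy as np

        def hicks_henne_bump(x, peak, width=3.0):
            """Classic Hicks-Henne bump: unit-height sine bump peaking at x = peak on [0, 1]."""
            m = np.log(0.5) / np.log(peak)          # exponent that places the maximum at x = peak
            return np.sin(np.pi * x ** m) ** width

        def perturbed_surface(x, y_base, peaks, amplitudes):
            """Add a weighted sum of Hicks-Henne bumps to a baseline surface y_base(x)."""
            y = y_base.copy()
            for p, a in zip(peaks, amplitudes):
                y += a * hicks_henne_bump(x, p)
            return y

        # usage: perturb a placeholder NACA-like thickness distribution with three bumps
        x = np.linspace(1e-6, 1.0, 200)
        y_base = 0.6 * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)
        y_new = perturbed_surface(x, y_base, peaks=[0.25, 0.5, 0.75], amplitudes=[0.004, -0.003, 0.002])
        print(float(np.max(np.abs(y_new - y_base))))   # largest geometric change introduced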

  10. A Review and Comparison of Methods for Recreating Individual Patient Data from Published Kaplan-Meier Survival Curves for Economic Evaluations: A Simulation Study

    PubMed Central

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
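
    The traditional least squares approach referred to above can be sketched in a few lines: points (t, S(t)) read off a published Kaplan-Meier curve are fitted to a parametric survival function, and the mean survival follows from the fitted parameters. The example below fits a Weibull model S(t) = exp(-(t/lambda)^k) via the linearisation log(-log S) = k*log t - k*log lambda; the digitised points are invented, and the Hoyle-Henley and Guyot et al. reconstructions, which also need the reported numbers at risk, are not reproduced here.

        import numpy as np
        from math import gamma

        # made-up points digitised from a published Kaplan-Meier curve (time in months, survival fraction)
        t = np.array([3, 6, 9, 12, 18, 24, 36, 48])
        S = np.array([0.93, 0.85, 0.78, 0.70, 0.57, 0.46, 0.30, 0.19])

        # least squares fit of the linearised Weibull model: log(-log S) = k*log(t) - k*log(lam)
        yy = np.log(-np.log(S))
        k, intercept = np.polyfit(np.log(t), yy, 1)
        lam = np.exp(-intercept / k)

        mean_survival = lam * gamma(1 + 1 / k)      # mean of a Weibull(k, lam) distribution
        print(f"shape k = {k:.3f}, scale lambda = {lam:.2f} months, mean survival = {mean_survival:.1f} months")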

  11. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  12. AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION

    EPA Science Inventory

    Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...

  13. Capital investment analysis: three methods.

    PubMed

    Gapenski, L C

    1993-08-01

    Three cash flow/discount rate methods can be used when conducting capital budgeting financial analyses: the net operating cash flow method, the net cash flow to investors method, and the net cash flow to equity holders method. The three methods differ in how the financing mix and the benefits of debt financing are incorporated. This article explains the three methods, demonstrates that they are essentially equivalent, and recommends which method to use under specific circumstances.

  14. Effective description of a 3D object for photon transportation in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Suganuma, R.; Ogawa, K.

    2000-06-01

    Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in the simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VB method and about 70% of that with the octree method for a high resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with the VB and octree methods.

  15. Region of influence regression for estimating the 50-year flood at ungaged sites

    USGS Publications Warehouse

    Tasker, Gary D.; Hodge, S.A.; Barks, C.S.

    1996-01-01

    Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.

  16. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  17. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.

  18. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  19. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
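
    To make the two-steps-at-a-time idea concrete, the sketch below combines two Richardson updates algebraically so that only even-numbered iterates are formed. It is a minimal illustration with arbitrary test data and a constant parameter, not Saylor's parameter selection or the grand-leap variant.

    ```python
    import numpy as np

    def richardson(A, b, x0, alphas):
        """Conventional Richardson iteration: x_{k+1} = x_k + alpha_k * (b - A x_k)."""
        x = x0.copy()
        for a in alphas:
            x = x + a * (b - A @ x)
        return x

    def richardson_leapfrog(A, b, x0, alphas):
        """Two-steps-at-a-time ('leapfrog') variant: only even-numbered iterates
        are formed, by algebraically combining two Richardson updates."""
        x = x0.copy()
        for a0, a1 in zip(alphas[0::2], alphas[1::2]):
            r = b - A @ x
            # x_{k+2} = x_k + (a0 + a1) r_k - a0*a1 * A r_k
            x = x + (a0 + a1) * r - a0 * a1 * (A @ r)
        return x

    # small demo: both variants produce the same even-numbered iterate
    rng = np.random.default_rng(0)
    A = np.diag(np.linspace(1.0, 2.0, 5))
    b = rng.standard_normal(5)
    alphas = [2.0 / (1.0 + 2.0)] * 8   # constant parameter 2 / (lambda_min + lambda_max)
    x_even = richardson(A, b, np.zeros(5), alphas)
    x_leap = richardson_leapfrog(A, b, np.zeros(5), alphas)
    print(np.allclose(x_even, x_leap))  # True
    ```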

  20. Development of a Coordinate Transformation method for direct georeferencing in map projection frames

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao

    2013-03-01

This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame are derived from a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data. Moreover, the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and the auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. The empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method is at the same level. The DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components, and these errors can be significantly reduced using the EO height correction technique in Legat's approach. As with the empirical data, the effectiveness of the CT-method was also demonstrated with simulated data. (The POSPac method is presented in an Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of the POSPac software.)
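
    As a rough illustration of the rotation bookkeeping involved, the sketch below composes an assumed body-to-local-tangent attitude with a placeholder local-to-map rotation and re-extracts Euler angles with scipy. The axis conventions, angle values, and the single meridian-convergence rotation are all assumptions for illustration; the actual CT-method chains the full set of frame transformations described in the paper.

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    roll, pitch, heading = np.radians([1.2, -0.8, 45.0])   # GPS/INS attitude (assumed values)
    # attitude of the body frame in the local tangent frame (one common convention)
    R_body_to_local = R.from_euler("ZYX", [heading, pitch, roll])
    # hypothetical rotation from the local tangent frame to the map projection frame,
    # e.g. accounting for meridian convergence (placeholder value)
    R_local_to_map = R.from_euler("Z", np.radians(0.5))
    R_body_to_map = R_local_to_map * R_body_to_local
    # re-extract photogrammetric-style angles; the axis sequence below is an assumption,
    # since omega-phi-kappa conventions vary between software packages
    omega, phi, kappa = R_body_to_map.as_euler("XYZ")
    print(np.degrees([omega, phi, kappa]))
    ```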

  1. Comparison of four USEPA digestion methods for trace metal analysis using certified and Florida soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M.; Ma, L.Q.

    1998-11-01

It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except for Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except for As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050, except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. However, for method correlation coefficients and elemental recoveries in 40 Florida surface soils, USEPA Method 3051a was overall a better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg using NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn), but lower recoveries for Mg, in Florida soils than did the total-recoverable digestion methods.

  2. [Comparative analysis between diatom nitric acid digestion method and plankton 16S rDNA PCR method].

    PubMed

    Han, Jun-ge; Wang, Cheng-bao; Li, Xing-biao; Fan, Yan-yan; Feng, Xiang-ping

    2013-10-01

To compare and explore the application value of the diatom nitric acid digestion method and the plankton 16S rDNA PCR method for drowning identification. Forty drowning cases from 2010 to 2011 were collected from the Department of Forensic Medicine of Wenzhou Medical University. Samples including lung, kidney, liver and field water from each case were tested with the diatom nitric acid digestion method and the plankton 16S rDNA PCR method, respectively. The diatom nitric acid digestion method and the plankton 16S rDNA PCR method required 20 g and 2 g of each organ, and 15 mL and 1.5 mL of field water, respectively. The inspection time and detection rate were compared between the two methods. The diatom nitric acid digestion method mainly detected two groups of diatoms, Centricae and Pennatae, while the plankton 16S rDNA PCR method amplified a 162 bp band. The average inspection time per case was (95.30 +/- 2.78) min for the diatom nitric acid digestion method, less than the (325.33 +/- 14.18) min required by the plankton 16S rDNA PCR method (P < 0.05). The detection rates of the two methods for field water and lung were both 100%. For liver and kidney, the detection rates of the plankton 16S rDNA PCR method were both 80%, higher than the 40% and 30%, respectively, obtained with the diatom nitric acid digestion method (P < 0.05). The laboratory testing method should be selected appropriately according to the specific circumstances in the forensic appraisal of drowning. Compared with the diatom nitric acid digestion method, the plankton 16S rDNA PCR method has practical value, with advantages such as smaller sample requirements, richer information and high specificity.

  3. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

Image clarity, which reflects the degree of sharpness at the edges of objects in images, is an important quality evaluation index for optical remote sensing images. Scholars at home and abroad have done a great deal of work on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. The frequency-domain function method is an accurate clarity-measure approach; however, its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image content. The edge acutance method is an effective approach for clarity estimation, but it requires the edges to be picked out manually. Due to these limits in accuracy, consistency, or automation, the existing methods are not well suited to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically locate object edges in images, and the calculation of edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity evaluation method for optical remote sensing images.
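
    The following sketch is not the authors' algorithm; it is a simple gradient-based (Tenengrad-style) clarity score that illustrates the general idea of rating sharpness from edge gradients, whereas the proposed method additionally locates edges automatically and refines the acutance computation.

    ```python
    import numpy as np
    from scipy import ndimage

    def tenengrad_sharpness(image: np.ndarray) -> float:
        """Mean squared Sobel gradient magnitude as a simple clarity index."""
        gx = ndimage.sobel(image.astype(float), axis=1)
        gy = ndimage.sobel(image.astype(float), axis=0)
        return float(np.mean(gx ** 2 + gy ** 2))

    # a blurred copy of an image should score lower than the original
    img = np.random.default_rng(1).random((128, 128))
    blurred = ndimage.gaussian_filter(img, sigma=2.0)
    print(tenengrad_sharpness(img) > tenengrad_sharpness(blurred))  # True
    ```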

  4. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  5. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  6. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 5 2014-04-01 2014-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  7. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  8. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  9. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  10. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  11. The Dramatic Methods of Hans van Dam.

    ERIC Educational Resources Information Center

    van de Water, Manon

    1994-01-01

    Interprets for the American reader the untranslated dramatic methods of Hans van Dam, a leading drama theorist in the Netherlands. Discusses the functions of drama as a method, closed dramatic methods, open dramatic methods, and applying van Dam's methods. (SR)

  12. Methods for environmental change; an exploratory study

    PubMed Central

    2012-01-01

Background While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change, especially how these are composed of methods for individual change (‘Bundling’) and how, within one environmental level (organizations), methods differ when directed at the management (‘At’) or applied by the management (‘From’). Methods The first part of this online survey dealt with examining the ‘bundling’ of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed ‘at’ an organization (for instance, by a health promoter) versus ‘from’ within an organization itself. All of the 20 respondents are experts in the field of health promotion. Results Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. Conclusions Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method. PMID:23190712

  13. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. The AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, had an improvement of up to 28% of the CPU time over the fully implicit method.

  14. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    PubMed

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  15. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

    PubMed Central

    Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.

    2015-01-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814

  16. Bond additivity corrections for quantum chemistry methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. F. Melius; M. D. Allendorf

    1999-04-01

In the 1980s, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method only depend on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between the DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.

  17. Comparison of different methods to quantify fat classes in bakery products.

    PubMed

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs in different countries; thus whether fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed at augmenting the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness.

  19. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
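
    For orientation, the sketch below shows only the traditional mono-exponential back-extrapolation step that the study argues underestimates plasma volume; the proposed optimal method relies on the authors' physiological kinetic model and is not reproduced here. Sampling times, concentrations, and dose are invented.

    ```python
    import numpy as np

    t = np.array([2.0, 3.0, 4.0, 5.0])       # minutes post-injection (assumed sampling times)
    conc = np.array([4.8, 4.4, 4.05, 3.7])   # ICG plasma concentration, mg/L (hypothetical)
    dose_mg = 25.0                            # injected ICG dose (hypothetical)

    # linear fit of log-concentration vs time, extrapolated back to t = 0
    slope, intercept = np.polyfit(t, np.log(conc), 1)
    c0 = np.exp(intercept)                    # back-extrapolated concentration at t = 0
    plasma_volume_l = dose_mg / c0
    print(f"C0 = {c0:.2f} mg/L, estimated plasma volume = {plasma_volume_l:.2f} L")
    ```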

  20. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

The modern state of approximate integral methods used in applications, where the processes of heat conduction and heat and mass transfer are of first importance, is considered. Integral methods have found a wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.

  1. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm² scan area. The invention also provides a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The invention also provides a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.

  2. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final object boundary, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the sensitivity of level set methods to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared with five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and producing fewer false segmentations in pancreas extraction. PMID:24066016

  3. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.

  4. Trends in the Contraceptive Method Mix in Low- and Middle-Income Countries: Analysis Using a New “Average Deviation” Measure

    PubMed Central

    Ross, John; Keesbury, Jill; Hardee, Karen

    2015-01-01

    ABSTRACT The method mix of contraceptive use is severely unbalanced in many countries, with over half of all use provided by just 1 or 2 methods. That tends to limit the range of user options and constrains the total prevalence of use, leading to unplanned pregnancies and births or abortions. Previous analyses of method mix distortions focused on countries where a single method accounted for more than half of all use (the 50% rule). We introduce a new measure that uses the average deviation (AD) of method shares around their own mean and apply that to a secondary analysis of method mix data for 8 contraceptive methods from 666 national surveys in 123 countries. A high AD value indicates a skewed method mix while a low AD value indicates a more uniform pattern across methods; the values can range from 0 to 21.9. Most AD values ranged from 6 to 19, with an interquartile range of 8.6 to 12.2. Using the AD measure, we identified 15 countries where the method mix has evolved from a distorted one to a better balanced one, with AD values declining, on average, by 35% over time. Countries show disparate paths in method gains and losses toward a balanced mix, but 4 patterns are suggested: (1) rise of one method partially offset by changes in other methods, (2) replacement of traditional with modern methods, (3) continued but declining domination by a single method, and (4) declines in dominant methods with increases in other methods toward a balanced mix. Regions differ markedly in their method mix profiles and preferences, raising the question of whether programmatic resources are best devoted to better provision of the well-accepted methods or to deploying neglected or new ones, or to a combination of both approaches. PMID:25745119
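
    A minimal sketch of the AD measure as described, taking the mean absolute deviation of the 8 method shares (in percent) around their own mean; the example shares are illustrative only.

    ```python
    import numpy as np

    def average_deviation(shares_percent):
        shares = np.asarray(shares_percent, dtype=float)
        return float(np.mean(np.abs(shares - shares.mean())))

    balanced = [12.5] * 8                    # perfectly uniform mix across 8 methods
    skewed = [100, 0, 0, 0, 0, 0, 0, 0]      # one method accounts for all use
    print(average_deviation(balanced))       # 0.0
    print(average_deviation(skewed))         # 21.875, i.e. the ~21.9 upper bound
    ```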

  5. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.
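
    As a hedged illustration of the traditional least squares approach only (not the Hoyle and Henley or Guyot et al. methods), the sketch below fits a Weibull survival function to points digitized from a published Kaplan-Meier curve and derives mean survival from the fitted parameters; the digitized points are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    def weibull_survival(t, lam, k):
        """Weibull survival function S(t) = exp(-(t/lam)**k)."""
        return np.exp(-(t / lam) ** k)

    t_digitized = np.array([0.5, 1, 2, 3, 4, 5, 6])                 # years (hypothetical)
    s_digitized = np.array([0.93, 0.85, 0.70, 0.57, 0.46, 0.37, 0.30])

    (lam, k), _ = curve_fit(weibull_survival, t_digitized, s_digitized, p0=[3.0, 1.0])
    mean_survival = lam * gamma(1 + 1 / k)                           # Weibull mean
    print(f"lambda={lam:.2f}, k={k:.2f}, mean survival={mean_survival:.2f} years")
    ```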

  6. Achieving cost-neutrality with long-acting reversible contraceptive methods.

    PubMed

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2015-01-01

    This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it aimed to also quantify minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking into consideration discontinuation. A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. The two least expensive methods were copper IUD ($304 per women, per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch), per women, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

In order to detect the oil yield of oil shale in situ using portable near-infrared spectroscopy, modeling and analysis methods for in-situ detection were investigated with 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in three data formats (reflectance, absorbance and the K-M function) were acquired. The modeling and analysis experiments were performed with four modeling-data optimization methods (principal component-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variables elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD), two modeling methods (partial least squares (PLS) and back-propagation artificial neural network (BPANN)), and the same data pre-processing, in order to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the appropriate spectrum format for the modeling database for both modeling methods. With the two modeling methods and the four data optimization methods, the model precisions obtained from the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of the database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of the database using any of the three spectrum formats. Except when the reflectance spectra are used with the PCA-MD data optimization method, the modeling precision of the BPANN method is better than that of the PLS method. Modeling with reflectance spectra, the UVE optimization method and the BPANN modeling method gives the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
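
    The sketch below stands in for the PLS modeling step only, using synthetic spectra and oil yields; the UVE and PCA-MD data optimization steps and the BPANN model are omitted, and scikit-learn's PLSRegression is used as an assumed implementation.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.random((66, 200))                       # 66 samples x 200 reflectance channels (synthetic)
    true_coef = rng.standard_normal(200) * 0.05
    y = X @ true_coef + rng.normal(0, 0.1, 66)      # synthetic "oil yield"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    pls = PLSRegression(n_components=8)             # PLS calibration of yield against spectra
    pls.fit(X_tr, y_tr)
    print("R^2 on held-out samples:", round(pls.score(X_te, y_te), 3))
    ```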

  8. Relative effectiveness of the Bacteriological Analytical Manual method for the recovery of Salmonella from whole cantaloupes and cantaloupe rinses with selected preenrichment media and rapid methods.

    PubMed

    Hammack, Thomas S; Valentin-Bon, Iris E; Jacobson, Andrew P; Andrews, Wallace H

    2004-05-01

    Soak and rinse methods were compared for the recovery of Salmonella from whole cantaloupes. Cantaloupes were surface inoculated with Salmonella cell suspensions and stored for 4 days at 2 to 6 degrees C. Cantaloupes were placed in sterile plastic bags with a nonselective preenrichment broth at a 1:1.5 cantaloupe weight-to-broth volume ratio. The cantaloupe broths were shaken for 5 min at 100 rpm after which 25-ml aliquots (rinse) were removed from the bags. The 25-ml rinses were preenriched in 225-ml portions of the same uninoculated broth type at 35 degrees C for 24 h (rinse method). The remaining cantaloupe broths were incubated at 35 degrees C for 24 h (soak method). The preenrichment broths used were buffered peptone water (BPW), modified BPW, lactose (LAC) broth, and Universal Preenrichment (UP) broth. The Bacteriological Analytical Manual Salmonella culture method was compared with the following rapid methods: the TECRA Unique Salmonella method, the VIDAS ICS/SLM method, and the VIDAS SLM method. The soak method detected significantly more Salmonella-positive cantaloupes (P < 0.05) than did the rinse method: 367 Salmonella-positive cantaloupes of 540 test cantaloupes by the soak method and 24 Salmonella-positive cantaloupes of 540 test cantaloupes by the rinse method. Overall, BPW, LAC, and UP broths were equivalent for the recovery of Salmonella from cantaloupes. Both the VIDAS ICS/SLM and TECRA Unique Salmonella methods detected significantly fewer Salmonella-positive cantaloupes than did the culture method: the VIDAS ICS/SLM method detected 23 of 50 Salmonella-positive cantaloupes (60 tested) and the TECRA Unique Salmonella method detected 16 of 29 Salmonella-positive cantaloupes (60 tested). The VIDAS SLM and culture methods were equivalent: both methods detected 37 of 37 Salmonella-positive cantaloupes (60 tested).

  9. Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement

    PubMed Central

    Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.

    2014-01-01

Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (p=0.36). Histological scores were different between the cooling methods (p=0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications for human pancreas procurement since use of an intraductal infusion is not common practice. PMID:25040217

  10. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method

    PubMed Central

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the KI-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177
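
    A minimal sketch of the two counting rules as read from the abstract: the average method takes the mean labeling index over the three counted areas, and the hot spot method is taken here as the highest of the three, which is one plausible reading; the counts are hypothetical.

    ```python
    def ki67_li(positive: int, total: int) -> float:
        """Ki-67 labeling index in percent for one counted field."""
        return 100.0 * positive / total

    areas = [ki67_li(48, 500), ki67_li(90, 500), ki67_li(62, 500)]   # three representative fields
    average_method = sum(areas) / len(areas)
    hot_spot_method = max(areas)
    print(f"average = {average_method:.1f}%, hot spot = {hot_spot_method:.1f}%")
    ```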

  11. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    PubMed

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing Ki-67 LI: the average method vs. the hot spot method and thus to determine which method is more appropriate in predicting prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the KI-67 LI measured by the two methods were similar (Area under curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot method of evaluating Ki-67 LI have good predictive performances for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  12. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
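
    For concreteness, the sketch below fits one common Weibull-type height-diameter form by classical nonlinear least squares on synthetic data; the exact model form used in the study is not given here, so this functional form is an assumption, and a Bayesian fit would instead place priors on the parameters and sample their posterior (e.g., by MCMC).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_hd(D, a, b, c):
        """Assumed Weibull-type height-diameter form: H = 1.3 + a * (1 - exp(-b * D**c))."""
        return 1.3 + a * (1.0 - np.exp(-b * D ** c))

    rng = np.random.default_rng(7)
    D = rng.uniform(5, 40, 80)                                 # diameters, cm (synthetic)
    H = weibull_hd(D, 22.0, 0.02, 1.3) + rng.normal(0, 1.0, 80)  # heights, m, with noise
    (a, b, c), _ = curve_fit(weibull_hd, D, H, p0=[20.0, 0.05, 1.0])
    print(f"a={a:.1f}, b={b:.3f}, c={c:.2f}")
    ```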

  13. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were both used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison with the classical method, and the credible bands of the parameters with informative priors were also narrower than those with uninformative priors or the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.

  14. A comparison of treatment effectiveness between the CAD/CAM method and the manual method for managing adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Lo, K H

    2005-04-01

The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, with twenty subjects for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and the immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. Initial control of the apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle and apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment compared with the manual method.

  15. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
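
    A worked example in the spirit of the review: a percentile bootstrap confidence interval for a scalar statistic (the mean) obtained by resampling with replacement from simulated data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)
    sample = rng.exponential(scale=2.0, size=40)      # observed data (simulated)

    # resample with replacement and recompute the statistic many times
    boot_means = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(5000)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(f"sample mean = {sample.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
    ```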

  16. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.

    PubMed

    Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-03-01

    The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. ©Tina Christensen, Anders H Riis, Elizabeth E Hatch, Lauren A Wise, Marie G Nielsen, Kenneth J Rothman, Henrik Toft Sørensen, Ellen M Mikkelsen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.03.2017.

  17. A simple high performance liquid chromatography method for analyzing paraquat in soil solution samples.

    PubMed

    Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter

    2004-01-01

A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement obtained between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method. The LSC method was, therefore, 10 times more sensitive than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of the higher detection limit, the UV (nonradioactive) HPLC method provides an inexpensive and environmentally safe means for determining paraquat concentration in soil solution compared with the 14C-LSC method.

  18. Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.

    PubMed

    Bauler, Patricia; Huber, Gary A; McCammon, J Andrew

    2012-04-28

    Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.
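
    The coupling itself is specific to the paper, but the two ingredients being coupled can be illustrated separately. The sketch below, with made-up parameters and a finite-difference grid standing in for the finite element discretization, solves the same 1D diffusion problem once on a continuum grid and once with Brownian dynamics particles; it is not the authors' hybrid scheme.

    ```python
    import numpy as np

    # Illustrative sketch only (not the authors' coupled scheme): the same 1D
    # diffusion problem solved on a continuum grid and with Brownian dynamics.
    D, T, L = 1.0, 0.1, 1.0                      # diffusivity, final time, length
    nx = 101
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]

    # Continuum solution of du/dt = D d2u/dx2 with reflecting (zero-flux) walls.
    dt = 0.4 * dx**2 / D                         # explicit stability limit
    u = np.exp(-((x - 0.5 * L) ** 2) / 0.001)
    u /= np.trapz(u, x)                          # normalise to unit mass
    for _ in range(int(T / dt)):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        lap[0] = 2 * (u[1] - u[0]) / dx**2       # zero-flux boundaries
        lap[-1] = 2 * (u[-2] - u[-1]) / dx**2
        u = u + dt * D * lap

    # Brownian dynamics: Gaussian steps of variance 2*D*dt_bd, reflected at walls.
    rng = np.random.default_rng(0)
    dt_bd, particles = 1e-3, np.full(20000, 0.5 * L)
    for _ in range(int(T / dt_bd)):
        particles += rng.normal(0.0, np.sqrt(2 * D * dt_bd), particles.size)
        particles = np.abs(particles)            # reflect at x = 0
        particles = L - np.abs(L - particles)    # reflect at x = L
    density, _ = np.histogram(particles, bins=nx - 1, range=(0.0, L), density=True)

    print("continuum peak:", round(u.max(), 3), " Brownian peak:", round(density.max(), 3))
    ```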

  19. Application of multiattribute decision-making methods for the determination of relative significance factor of impact categories.

    PubMed

    Noh, Jaesung; Lee, Kun Mo

    2003-05-01

    A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i). Thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it has an inherent weakness relative to the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use than the AHP and the fuzzy method.
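
    A minimal sketch of the rank-order centroid idea mentioned above: criteria are only ranked, and surrogate weights are computed as w_i = (1/n) * sum_{k=i}^{n} 1/k for the criterion ranked i-th. The example category names are hypothetical, not taken from the study.

    ```python
    # Rank-order centroid (ROC) surrogate weights from a simple ranking.
    def roc_weights(n):
        return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

    # Example: four impact categories ranked by perceived importance (hypothetical).
    ranked = ["global warming", "acidification", "eutrophication", "smog"]
    for name, w in zip(ranked, roc_weights(len(ranked))):
        print(f"{name:15s} f_i = {w:.3f}")   # the weights sum to 1.0
    ```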

  20. Utility of N-Bromosuccinimide for the Titrimetric and Spectrophotometric Determination of Famotidine in Pharmaceutical Formulations

    PubMed Central

    Zenita, O.; Basavaiah, K.

    2011-01-01

    Two titrimetric and two spectrophotometric methods are described for the assay of famotidine (FMT) in tablets using N-bromosuccinimide (NBS). The first titrimetric method is direct, in which FMT is titrated directly with NBS in HCl medium using methyl orange as indicator (method A). The remaining three methods are indirect, in which the unreacted NBS is determined after the complete reaction between FMT and NBS by iodometric back titration (method B) or by reacting with a fixed amount of either indigo carmine (method C) or neutral red (method D). Methods A and B are applicable over the ranges of 2–9 mg and 1–7 mg, respectively. In the spectrophotometric methods, Beer's law is obeyed over the concentration ranges of 0.75–6.0 μg mL−1 (method C) and 0.3–3.0 μg mL−1 (method D). The applicability of the developed methods was demonstrated by the determination of FMT in pure drug as well as in tablets. PMID:21760785

  1. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this way, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available in the literature.
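
    The parallel twostep-by-twostep PIRKC scheme itself is considerably more elaborate, but the underlying predictor-corrector idea can be shown in its simplest form. The sketch below uses an explicit Euler predictor and a trapezoidal corrector in a fixed number of iterations (PECE mode) on a nonstiff test problem; the step size and problem are arbitrary choices for illustration.

    ```python
    import numpy as np

    # Simplest predictor-corrector step: Euler predictor, trapezoidal corrector.
    def pc_step(f, t, y, h, corrector_iters=2):
        y_pred = y + h * f(t, y)                       # P: predict
        for _ in range(corrector_iters):               # (EC)^m: evaluate/correct
            y_pred = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))
        return y_pred

    # Nonstiff test problem y' = -y, y(0) = 1, exact solution exp(-t).
    f = lambda t, y: -y
    t, y, h = 0.0, np.array([1.0]), 0.05
    while t < 1.0 - 1e-12:
        y = pc_step(f, t, y, h)
        t += h
    print("PC solution:", y[0], " exact:", np.exp(-1.0))
    ```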

  2. Which method should be the reference method to evaluate the severity of rheumatic mitral stenosis? Gorlin's method versus 3D-echo.

    PubMed

    Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis

    2007-12-01

    Several studies have shown a wide variability among different methods to determine the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate whether 3D-echo planimetry is more accurate than the Gorlin method for measuring the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D echocardiographic examinations and catheterization. Valve area was estimated by different methods. A median value of the mitral valve area, obtained from the measurements of three classical non-invasive methods (2D planimetry, pressure half-time and PISA method), was used as the reference and compared with 3D-echo planimetry and Gorlin's method. Our results showed that the accuracy of 3D-echo planimetry is superior to that of the Gorlin method for the assessment of mitral valve area. These findings suggest that 3D-echo planimetry may be a better reference method than the Gorlin method for assessing the severity of rheumatic mitral stenosis.
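
    For context, the hemodynamic estimate referred to above is usually written in the commonly quoted Gorlin form for the mitral valve. The sketch below uses that textbook form with an empirical constant of 0.85; the input values are made up for illustration and are not data from the study.

    ```python
    import math

    # Commonly quoted Gorlin estimate of mitral valve area (illustrative values only).
    def gorlin_mva(cardiac_output_ml_min, heart_rate_bpm,
                   diastolic_filling_period_s, mean_gradient_mmhg,
                   empirical_constant=0.85):
        # Transvalvular flow during diastole (mL/s)
        flow = cardiac_output_ml_min / (heart_rate_bpm * diastolic_filling_period_s)
        # Gorlin: area (cm^2) = flow / (44.3 * C * sqrt(mean gradient))
        return flow / (44.3 * empirical_constant * math.sqrt(mean_gradient_mmhg))

    print(f"MVA = {gorlin_mva(4500, 80, 0.45, 12):.2f} cm^2")
    ```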

  3. Evaluation and comparison of Abbott Jaffe and enzymatic creatinine methods: Could the old method meet the new requirements?

    PubMed

    Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza

    2018-01-01

    The aim of this study was to evaluate and compare the analytical performance characteristics of two creatinine methods based on the Jaffe and enzymatic principles. Both original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer for limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. Method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods and as 0.25 and 0.07 mg/dL for the Jaffe and enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method (0.5 and 2 mg/dL, respectively). Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were under desirable levels for both methods. High correlations were found between the two methods in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements for routine use. However, the enzymatic method showed better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.

  4. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    PubMed

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries with it considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. The aim was to compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis. Two hundred nonduplicate blood cultures from cases of sepsis were analyzed using two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram-positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram-negative bacilli. The sensitivity of the lysis centrifugation method in comparison with the conventional blood culture method was 49.75% in this study, specificity was 98.21%, and diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, and this difference was statistically significant. Contamination by lysis centrifugation was minimal, while that by the conventional method was high. Time to growth by the lysis centrifugation method was significantly shorter (P value 0.000) than by the conventional blood culture method. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
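
    The agreement figures quoted above come from a 2x2 table that treats the conventional blood culture as the reference. The sketch below shows the arithmetic on hypothetical counts (chosen only to be of the same order as the reported values, not the study's actual table).

    ```python
    # Sensitivity, specificity and diagnostic accuracy from a 2x2 table,
    # with the conventional blood culture taken as the reference method.
    def diagnostic_metrics(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)          # lysis+ among reference+
        specificity = tn / (tn + fp)          # lysis- among reference-
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        return sensitivity, specificity, accuracy

    # Hypothetical counts for 200 cultures (not the study's data).
    sens, spec, acc = diagnostic_metrics(tp=17, fp=3, fn=18, tn=162)
    print(f"sensitivity={sens:.2%}  specificity={spec:.2%}  accuracy={acc:.2%}")
    ```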

  5. Optimization and validation of spectrophotometric methods for determination of finasteride in dosage and biological forms

    PubMed Central

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-01-01

    Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates, were developed. Materials and Methods: These methods are indirect and involve the addition of a known excess of oxidant in acid medium to finasteride (potassium permanganate for method A, ceric sulfate [Ce(SO4)2] for method B, and N-bromosuccinimide (NBS) for method C), followed by determination of the unreacted oxidant by measuring the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at suitable maximum wavelengths (λmax: 663, 528, and 520 nm, respectively). The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, 0.12–3.28 μg mL–1 for method B, and 0.14–3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, detection and quantification limits were evaluated. The stoichiometric ratio between finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride, with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug in the presence of varying excess of its oxidative degradation products, with recovery between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:23781478

  6. John Butcher and hybrid methods

    NASA Astrophysics Data System (ADS)

    Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif

    2017-07-01

    As is well known, there are two main classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each class has certain advantages and disadvantages, so it is natural to construct methods that combine the better properties of both. In the middle of the twentieth century, Butcher and Gear constructed such methods at the junction of the Runge-Kutta and Adams methods; these are called hybrid methods. This work considers the construction of certain generalizations of hybrid methods with high order of accuracy and explores their application to solving ordinary differential, Volterra integral, and integro-differential equations. Some specific hybrid methods with degree p ≤ 10 are also constructed.

  7. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.

  8. A temperature match based optimization method for daily load prediction considering DLC effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.

    This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.

  9. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria groups have improved markedly through technological advancement. Clinical microbiology tests have changed over time from tube methods to commercial identification kits and automated identification. Tube methods are the original approach to identifying enterobacteria groups and remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates, indole, methyl red, citrate, and urease tests. Commercial identification kits and automated, computer-based instruments are also discussed as current methods that provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be developed further.

  10. Comparison of three commercially available fit-test methods.

    PubMed

    Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J

    2002-01-01

    American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.

  11. A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium

    PubMed Central

    Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.

    2014-01-01

    Background Interview and chart-based methods for identifying delirium have been validated. However, relative strengths and limitations of each method have not been described, nor has a combined approach (using both interviews and chart) been systematically examined. Objectives To compare chart and interview-based methods for identification of delirium. Design, Setting and Participants Participants were 300 patients aged 70+ undergoing major elective surgery (majority were orthopedic surgery) interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and patient characteristics of those identified using each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results The cumulative incidence of delirium was 23% (n=68) by the interview-based method, 12% (n=35) by the chart-based method and 27% (n=82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions Interview and chart-based methods have specific strengths for identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042
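
    The two agreement statistics quoted above (percent agreement and Cohen's kappa) follow directly from the 2x2 cross-classification of the two methods. The sketch below reconstructs them from hypothetical counts chosen to be consistent with the reported incidences; it is not the study's actual table.

    ```python
    # Observed agreement and Cohen's kappa for two binary delirium classifications.
    def cohens_kappa(both_pos, chart_only, interview_only, both_neg):
        n = both_pos + chart_only + interview_only + both_neg
        observed = (both_pos + both_neg) / n
        # Chance agreement from the marginal rates of each method.
        p_yes = ((both_pos + interview_only) / n) * ((both_pos + chart_only) / n)
        p_no = ((both_neg + chart_only) / n) * ((both_neg + interview_only) / n)
        expected = p_yes + p_no
        return observed, (observed - expected) / (1 - expected)

    # Hypothetical counts for 300 patients consistent with the reported figures.
    agreement, kappa = cohens_kappa(both_pos=21, chart_only=14,
                                    interview_only=47, both_neg=218)
    print(f"observed agreement={agreement:.2f}  kappa={kappa:.2f}")
    ```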

  12. Inventory Management for Irregular Shipment of Goods in Distribution Centre

    NASA Astrophysics Data System (ADS)

    Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun

    2016-01-01

    The shipping amount of commodity goods (foods, confectionery, dairy products, cosmetics, and pharmaceutical products) changes irregularly at distribution centers dealing with general consumer goods. Because both the shipment times and the shipment amounts are irregular, demand forecasting becomes very difficult, and so does inventory control; conventional inventory control methods cannot be applied to such shipments. This paper proposes an inventory control method based on the cumulative flow curve, in which the order quantity is decided from the curve. Three forecasting approaches are proposed: 1) a power method, 2) a polynomial method, and 3) a revised Holt's linear method, a kind of exponential smoothing that forecasts data with trends. The paper compares the economics of the conventional method, which relies on the judgment of experienced staff, with the three proposed methods, and the effectiveness of the proposed methods is verified through numerical calculations.
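
    As a point of reference for the third approach named above, the sketch below shows Holt's linear method in its standard (unrevised) form: exponential smoothing of a level and a separate trend component. The shipment series and smoothing constants are made up for illustration.

    ```python
    # Standard Holt's linear trend method (level + trend exponential smoothing).
    def holt_linear_forecast(series, alpha=0.3, beta=0.1, horizon=3):
        level, trend = series[0], series[1] - series[0]
        for y in series[1:]:
            prev_level = level
            level = alpha * y + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return [level + h * trend for h in range(1, horizon + 1)]

    shipments = [120, 0, 0, 340, 90, 0, 410, 150, 0, 520]   # irregular shipments
    print(holt_linear_forecast(shipments))
    ```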

  13. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand data bases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236

  14. [Primary culture of human normal epithelial cells].

    PubMed

    Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun

    2017-11-28

    The traditional primary culture methods for normal human epithelial cells have the disadvantages of low activity of the cultured cells, a low cultivation rate, and complicated operation. To solve these problems, researchers have studied the culture process of normal human primary epithelial cells extensively. In this paper, we mainly introduce methods used in the separation and purification of normal human epithelial cells, such as the tissue separation method, the enzyme digestion method, the mechanical brushing method, the red blood cell lysis method, and the Percoll density gradient separation method. We also review methods used in culture and subculture, including serum-free medium combined with low-serum culture, mouse tail collagen coating, and culture in glass bottles combined with plastic dishes. The biological characteristics of normal human epithelial cells and identification methods such as immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting aseptic operation, the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives are summarized.

  15. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and the resulting inaccurate position leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of real-time localization. Simulations and field experiments were performed to evaluate the performance of the proposed method. The results indicate that the target parameters estimated by the modified STAR method are more accurate than those from the traditional STAR method. PMID:27999322

  16. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.

  17. Robust numerical solution of the reservoir routing equation

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano

    2013-09-01

    The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
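
    The backstepping idea described above is easy to illustrate on the reservoir routing equation dS/dt = I(t) - Q(S). The sketch below integrates it with classical RK4 and retries a step with half the step size whenever the new storage would leave the valid domain of the rating curves; the inflow hydrograph, storage capacity and outflow curve are made-up illustrations, not the paper's case study.

    ```python
    # RK4 reservoir routing with a simple backstepping (step-halving) control.
    S_MAX = 5.0e6                                # storage capacity (m^3), illustrative

    def inflow(t):                               # triangular inflow hydrograph (m^3/s)
        return max(0.0, 200.0 - abs(t - 3600.0) * 200.0 / 3600.0)

    def outflow(S):                              # outflow rating curve (m^3/s)
        return 50.0 * (max(S, 0.0) / S_MAX) ** 1.5

    def dSdt(t, S):
        return inflow(t) - outflow(S)

    def rk4_routing(S0, t_end, dt):
        t, S = 0.0, S0
        while t < t_end:
            h = min(dt, t_end - t)
            while True:
                k1 = dSdt(t, S)
                k2 = dSdt(t + h / 2, S + h / 2 * k1)
                k3 = dSdt(t + h / 2, S + h / 2 * k2)
                k4 = dSdt(t + h, S + h * k3)
                S_new = S + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
                if 0.0 <= S_new <= S_MAX or h < 1e-3:
                    break
                h /= 2.0                         # backstep: retry with a smaller step
            t, S = t + h, S_new
        return S

    print("final storage:", rk4_routing(S0=1.0e6, t_end=2 * 3600.0, dt=600.0))
    ```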

  18. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
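
    NTCP modeling as described above is a binary-outcome problem with many candidate dose and clinical variables, so the LASSO is typically applied as L1-penalised logistic regression with the penalty chosen by cross-validation. The sketch below shows that workflow on synthetic data (not the xerostomia cohort), using scikit-learn.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for an NTCP data set: 25 candidate predictors, binary outcome.
    X, y = make_classification(n_samples=300, n_features=25, n_informative=4,
                               random_state=0)

    # L1-penalised (LASSO-type) logistic regression, penalty strength chosen by CV.
    lasso_ntcp = LogisticRegressionCV(penalty="l1", solver="liblinear",
                                      Cs=10, cv=5, scoring="roc_auc")
    lasso_ntcp.fit(X, y)

    selected = np.flatnonzero(lasso_ntcp.coef_.ravel())       # retained variables
    auc = cross_val_score(lasso_ntcp, X, y, cv=5, scoring="roc_auc").mean()
    print(f"variables retained: {selected.size} of 25, cross-validated AUC = {auc:.2f}")
    ```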

  19. Construction of exponentially fitted symplectic Runge-Kutta-Nyström methods from partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.

    2014-10-01

    In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18]-[40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.

  20. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study

    PubMed Central

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-01-01

    Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations. PMID:17570838

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, K.; Fondeur, F.; White, T.

    Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising and was selected for further development as the primary method. 1H NMR also showed promising results from the screening experiments, and this method was selected for further development as the secondary method. Other methods, including 36Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, 1H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. 1H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.

  2. Novel atomic absorption spectrometric and rapid spectrophotometric methods for the quantitation of paracetamol in saliva: application to pharmacokinetic studies.

    PubMed

    Issa, M M; Nejem, R M; El-Abadla, N S; Al-Kholy, M; Saleh, Akila A

    2008-01-01

    A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) or the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form a Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions have been spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. High sensitivities were recorded for methods I and II and the atomic absorption spectrometric method, with values of 0.05, 0.022 and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied in a pharmacokinetic study using salivary samples from normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%.

  3. Novel Atomic Absorption Spectrometric and Rapid Spectrophotometric Methods for the Quantitation of Paracetamol in Saliva: Application to Pharmacokinetic Studies

    PubMed Central

    Issa, M. M.; Nejem, R. M.; El-Abadla, N. S.; Al-Kholy, M.; Saleh, Akila. A.

    2008-01-01

    A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) or the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form a Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions have been spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. High sensitivities were recorded for methods I and II and the atomic absorption spectrometric method, with values of 0.05, 0.022 and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied in a pharmacokinetic study using salivary samples from normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%. PMID:20046743

  4. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.

  5. X-ray imaging using amorphous selenium: a photoinduced discharge readout method for digital mammography.

    PubMed

    Rowlands, J A; Hunter, D M; Araj, N

    1991-01-01

    A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.

  6. Quantitative naturalistic methods for detecting change points in psychotherapy research: an illustration with alliance ruptures.

    PubMed

    Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher

    2012-01-01

    Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
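
    Of the four categories listed above, the control chart family is the simplest to show in code. The sketch below is a two-sided CUSUM that flags a sustained shift in session-by-session alliance ratings; the series, target mean and thresholds are invented for illustration and are not the Beth Israel data.

    ```python
    # Two-sided CUSUM control chart for flagging sustained shifts in a series.
    def cusum_change_points(series, target, k=0.5, h=2.5):
        s_hi = s_lo = 0.0
        flags = []
        for i, value in enumerate(series):
            s_hi = max(0.0, s_hi + (value - target) - k)   # drift upward
            s_lo = max(0.0, s_lo - (value - target) - k)   # drift downward
            if s_hi > h or s_lo > h:
                flags.append(i)
                s_hi = s_lo = 0.0                          # restart after a signal
        return flags

    # Hypothetical alliance ratings with a drop (possible rupture) mid-treatment.
    alliance = [5.1, 5.0, 5.2, 4.9, 5.1, 3.8, 3.6, 3.9, 3.7, 3.5, 4.8, 5.0]
    print("possible rupture flagged at sessions:", cusum_change_points(alliance, target=5.0))
    ```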

  7. A comparative study of interface reconstruction methods for multi-material ALE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, Milan; Garimalla, Rao; Schofield, Samuel

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  8. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare and determine a credible method of measuring wound surface area by linear, transparency, and photographic methods for monitoring progress of wound healing accurately and ascertaining whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by these three methods (linear, transparency, and photographic methods) simultaneously on alternate days. The linear method is statistically significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). Photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.

  9. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810

  10. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  11. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of the peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to find overlapping peaks and to improve adaptivity. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves on ridge peak detection with a ridge-correcting step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in its ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
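
    A generic version of ridge-based CWT peak detection is available in SciPy and is enough to illustrate the idea on a synthetic spectrum with two overlapping lines, noise and a slowly varying background. This is SciPy's standard implementation, not the authors' corrected-ridge variant, and the spectrum below is simulated.

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    # Synthetic spectrum: two overlapping Gaussian lines, smooth background, noise.
    x = np.arange(1024)
    spectrum = (np.exp(-((x - 480) ** 2) / (2 * 8 ** 2))          # line 1
                + 0.7 * np.exp(-((x - 505) ** 2) / (2 * 8 ** 2))  # overlapping line 2
                + 0.2 + 1e-4 * x                                   # smooth background
                + np.random.default_rng(1).normal(0, 0.02, x.size))

    # Ridge-based peak detection over a range of candidate peak widths.
    peak_indices = find_peaks_cwt(spectrum, widths=np.arange(4, 20))
    print("detected peak positions:", peak_indices)
    ```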

  12. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  13. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  14. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method, and application of Method R for the estimation of (co)variance components are reviewed in order to promote its appropriate use. Estimation requires R values, which are regressions of predicted random effects calculated using the complete dataset on predicted random effects calculated using random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of the estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used in larger datasets. It is necessary to study its theoretical properties and broaden its application range further.

  15. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
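
    The idea behind the G.C.D. method is that dividing a polynomial by gcd(p, p') strips out the repeated factors, so every zero of the quotient is simple and Newton's or Muller's method regains its usual fast convergence. The sketch below shows this symbolically with SymPy on a made-up polynomial; the original report worked numerically in FORTRAN.

    ```python
    import sympy as sp

    x = sp.symbols("x")
    p = (x - 1) ** 3 * (x + 2) ** 2 * (x - 5)      # polynomial with multiple zeros

    g = sp.gcd(p, sp.diff(p, x))                   # carries the repeated factors
    squarefree = sp.quo(sp.expand(p), g)           # p / gcd(p, p') has only simple zeros

    print("gcd(p, p')       =", sp.factor(g))
    print("square-free part =", sp.factor(squarefree))
    print("zeros            =", sp.solve(squarefree, x))
    ```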

  16. Evaluation of the methods for enumerating coliform bacteria from water samples using precise reference standards.

    PubMed

    Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M

    2006-04-01

    To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.

  17. Wilsonian methods of concept analysis: a critique.

    PubMed

    Hupcey, J E; Morse, J M; Lenz, E R; Tasón, M C

    1996-01-01

    Wilsonian methods of concept analysis--that is, the method proposed by Wilson and Wilson-derived methods in nursing (as described by Walker and Avant; Chinn and Kramer [Jacobs]; Schwartz-Barcott and Kim; and Rodgers)--are discussed and compared in this article. The evolution and modifications of Wilson's method in nursing are described and research that has used these methods, assessed. The transformation of Wilson's method is traced as each author has adopted his techniques and attempted to modify the method to correct for limitations. We suggest that these adaptations and modifications ultimately erode Wilson's method. Further, the Wilson-derived methods have been overly simplified and used by nurse researchers in a prescriptive manner, and the results often do not serve the purpose of expanding nursing knowledge. We conclude that, considering the significance of concept development for the nursing profession, the development of new methods and a means for evaluating conceptual inquiry must be given priority.

  18. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has an uncorrected response error, our method works significantly better than the spectral fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.

  19. Study report on a double isotope method of calcium absorption

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Some of the pros and cons of three methods to study gastrointestinal calcium absorption are briefly discussed. The methods are: (1) a balance study; (2) a single isotope method; and (3) a double isotope method. A procedure for the double isotope method is also included.

  20. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenetic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
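
    All the GBLUP variants compared above start from a genomic relationship matrix G. The sketch below builds G with VanRaden's first method on simulated genotypes and blends it with a small weight on a second relationship matrix; the identity matrix used here in place of the pedigree relationship matrix A, and all parameter values, are simplifications for illustration only, not the Nordic Holstein data.

    ```python
    import numpy as np

    # Simulated genotypes: 50 animals, 500 SNP markers coded 0/1/2.
    rng = np.random.default_rng(7)
    n_animals, n_markers = 50, 500
    allele_freq = rng.uniform(0.1, 0.9, n_markers)
    genotypes = rng.binomial(2, allele_freq, size=(n_animals, n_markers)).astype(float)

    # VanRaden method 1: centre by 2p and scale by 2*sum(p*(1-p)).
    Z = genotypes - 2 * allele_freq
    G = Z @ Z.T / (2 * np.sum(allele_freq * (1 - allele_freq)))

    # Blend G with a second relationship matrix (identity here, purely illustrative),
    # mirroring the weighting of G and A discussed in the abstract.
    w = 0.20
    G_blended = (1 - w) * G + w * np.eye(n_animals)
    print("mean diagonal of G:", G.diagonal().mean().round(3))
    ```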

  1. Roka Listeria detection method using transcription mediated amplification to detect Listeria species in select foods and surfaces. Performance Tested Method(SM) 011201.

    PubMed

    Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele

    2012-01-01

    The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to the reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), the samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples, compared to 332 positives for the reference methods. Overall, the probability-of-detection analysis showed performance better than or equivalent to that of the reference methods.

  2. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    Both the propagation simulation method and the choice of mesh grid are very important for obtaining correct results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer limited by the propagation method but is freely alterable. However, the choice of mesh grid on the target plane directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed together with the introduced propagation method. Appropriate mesh grids on the target plane can then be calculated to obtain satisfying results, and for a complex initial wave field or for propagation through inhomogeneous media the mesh grid can likewise be set rationally by the same procedure. Finally, comparison with theoretical results shows that simulations using the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel numbers. That is, the method can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity, so it can better support wave propagation applications such as atmospheric optics and laser propagation.
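
    A minimal sketch of the classical angular spectrum propagator that the proposed method builds on; the adaptive mesh selection itself is not reproduced, and the aperture, wavelength and grid spacing below are assumed values.

      # Classical angular spectrum propagation of a sampled complex field.
      import numpy as np

      def angular_spectrum(u0, wavelength, dx, z):
          """Propagate field u0 (n x n samples, pitch dx) over distance z; SI units."""
          n = u0.shape[0]
          fx = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fx)
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
          H = np.exp(1j * kz * z) * (arg > 0.0)            # evanescent components dropped
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      # Usage: propagate a plane-wave-illuminated circular aperture by 5 cm.
      n, dx, wavelength = 512, 10e-6, 633e-9
      x = (np.arange(n) - n / 2) * dx
      X, Y = np.meshgrid(x, x)
      aperture = (X ** 2 + Y ** 2 < (0.5e-3) ** 2).astype(complex)
      field = angular_spectrum(aperture, wavelength, dx, z=0.05)
      print("peak intensity after 5 cm:", np.abs(field).max() ** 2)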

  3. Reliability and accuracy of real-time visualization techniques for measuring school cafeteria tray waste: validating the quarter-waste method.

    PubMed

    Hanks, Andrew S; Wansink, Brian; Just, David R

    2014-03-01

    Measuring food waste is essential to determine the impact of school interventions on what children eat. There are multiple methods used for measuring food waste, yet it is unclear which method is most appropriate in large-scale interventions with restricted resources. This study examines which of three visual tray waste measurement methods is most reliable, accurate, and cost-effective compared with the gold standard of individually weighing leftovers. School cafeteria researchers used the following three visual methods to capture tray waste in addition to actual food waste weights for 197 lunch trays: the quarter-waste method, the half-waste method, and the photograph method. Inter-rater and inter-method reliability were highest for on-site visual methods (0.90 for the quarter-waste method and 0.83 for the half-waste method) and lowest for the photograph method (0.48). This low reliability is partially due to the inability of photographs to determine whether packaged items (such as milk or yogurt) are empty or full. In sum, the quarter-waste method was the most appropriate for calculating accurate amounts of tray waste, and the photograph method might be appropriate if researchers only wish to detect significant differences in waste or consumption of selected, unpackaged food. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  4. Modified flotation method with the use of Percoll for the detection of Isospora suis oocysts in suckling piglet faeces.

    PubMed

    Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek

    2008-10-01

    A modification of the flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was developed. The method is based on removing the fat fraction from the faecal sample by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison with the McMaster method. Of five variants of the Percoll flotation method, the best results were obtained when 2 ml of flotation liquid per 1 g of faeces were used. The limit of detection of the Percoll flotation method was 160 oocysts per 1 g, better than that of the McMaster method. The efficacy of the modified method was confirmed by the results obtained in the examination of I. suis-infected piglets: over all faecal samples, the Percoll flotation method yielded twice as many positive samples as the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-invasion, i.e. one day earlier than with the McMaster method. During the experiment (except for 3 days), the extensity of I. suis invasion in the litter examined by the Percoll flotation method was higher than that found with the McMaster method. The obtained results show that the modified flotation method with the use of Percoll could be applied in the diagnostics of suckling piglet isosporosis.

  5. Comparison of concentration methods for rapid detection of hookworm ova in wastewater matrices using quantitative PCR.

    PubMed

    Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S

    2015-12-01

    Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, due in part to the increased use of wastewater for crop production. Effective recovery of hookworm ova from wastewater matrices is difficult because of their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods for municipal wastewater and (ii) two concentration methods for sludge samples. Ancylostoma caninum ova were used as a surrogate for the human hookworms (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples by centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) were used. Among the four methods tested, the filtration method (Method C) consistently recovered the highest concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. Recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a recovery rate 1-2 orders of magnitude higher than Method E (flotation). Based on our results, it can be concluded that the recovery of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable; therefore, the choice of concentration method is vital for sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  6. Achieving cost-neutrality with long-acting reversible contraceptive methods⋆

    PubMed Central

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2014-01-01

    Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting that long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. Study design A three-state economic model was developed to estimate the relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the average annual cost per method and the minimum duration of LARC method usage needed to achieve cost-savings compared to SARC methods. Results The two least expensive methods were the copper IUD ($304 per woman per year) and the LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
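
    Purely illustrative break-even arithmetic, not the study's three-state model: with hypothetical upfront and annual costs, the duration of LARC use needed to offset the upfront cost follows directly.

      # All figures below are hypothetical placeholders, not study inputs.
      upfront_larc = 900.0          # assumed device plus insertion cost, USD
      annual_larc_followup = 50.0   # assumed ongoing LARC cost per year, USD
      annual_sarc = 500.0           # assumed annual cost of a short-acting method, USD

      break_even_years = upfront_larc / (annual_sarc - annual_larc_followup)
      print(f"break-even after about {break_even_years:.1f} years of use")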

  7. A method for addressing differences in concentrations of fipronil and three degradates obtained by two different laboratory methods

    USGS Publications Warehouse

    Crawford, Charles G.; Martin, Jeffrey D.

    2017-07-21

    In October 2012, the U.S. Geological Survey (USGS) began measuring the concentration of the pesticide fipronil and three of its degradates (desulfinylfipronil, fipronil sulfide, and fipronil sulfone) by a new laboratory method using direct aqueous-injection liquid chromatography tandem mass spectrometry (DAI LC–MS/MS). This method replaced the previous method, in use since 2002, that used gas chromatography/mass spectrometry (GC/MS). The performance of the two methods is not comparable for fipronil and the three degradates: concentrations of these four compounds determined by the DAI LC–MS/MS method are substantially lower than those determined by the GC/MS method. A method was therefore developed to correct for the difference in concentrations obtained by the two laboratory methods, based on a methods-comparison field study done in 2012. For this study, environmental and field matrix spike samples from 48 stream sites across the United States were collected approximately three times each and analyzed by both methods. These data were used to develop a relation between the two laboratory methods for each compound using regression analysis. The relations were used to calibrate data obtained by the older method to the new method in order to remove any biases attributable to differences between the methods. The coefficients of the regression equations were used to calibrate over 16,600 observations of fipronil and the three degradates determined by the GC/MS method, retrieved from the USGS National Water Information System. The calibrated values were then compared to over 7,800 observations of fipronil and the three degradates determined by the DAI LC–MS/MS method, also retrieved from the National Water Information System. The original and calibrated values from the GC/MS method, along with measures of uncertainty in the calibrated values and the original values from the DAI LC–MS/MS method, are provided in an accompanying data release.
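
    A hedged sketch of the calibration idea on simulated data: fit a linear relation between paired old-method and new-method concentrations, then apply it to archived old-method values. The coefficients and data below are assumptions, not the USGS regression results.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      new_method = rng.lognormal(mean=2.0, sigma=0.6, size=150)       # "new-method" scale, ng/L
      old_method = 1.4 * new_method + 5.0 + rng.normal(0, 2.0, 150)   # biased-high "old method"

      # Regress new-method values on old-method values for the paired samples.
      slope, intercept, r, p, se = stats.linregress(old_method, new_method)

      archived_old_values = np.array([20.0, 45.0, 110.0])             # hypothetical archived results
      calibrated = intercept + slope * archived_old_values
      print("calibrated to the new-method scale:", np.round(calibrated, 1))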

  8. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  9. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  10. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  11. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  12. 77 FR 48733 - Transitional Program for Covered Business Method Patents-Definitions of Covered Business Method...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-14

    ... Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention; Final Rule Federal Register / Vol. 77, No. 157... Business Method Patents-- Definitions of Covered Business Method Patent and Technological Invention AGENCY...

  13. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  14. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  15. A Review of Methods for Missing Data.

    ERIC Educational Resources Information Center

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…

  16. The Views of Turkish Pre-Service Teachers about Effectiveness of Cluster Method as a Teaching Writing Method

    ERIC Educational Resources Information Center

    Kitis, Emine; Türkel, Ali

    2017-01-01

    The aim of this study is to find out Turkish pre-service teachers' views on the effectiveness of the cluster method as a writing teaching method. The Cluster Method can be defined as a connotative creative writing method. The way the method works is that the person brainstorms on connotations of a word or a concept in absence of any kind of…

  17. Assay of fluoxetine hydrochloride by titrimetric and HPLC methods.

    PubMed

    Bueno, F; Bergold, A M; Fröehlich, P E

    2000-01-01

    Two alternative methods were proposed to assay Fluoxetine Hydrochloride: a titrimetric method and an HPLC method using water (pH 3.5):acetonitrile (65:35) as the mobile phase. These methods were applied to the determination of Fluoxetine as the raw material or in formulations (capsules). The titrimetric method is an alternative for pharmacies and small industries. Both methods showed accuracy and precision and are an alternative to the official methods.

  18. Thermophysical Properties of Matter - The TPRC Data Series. Volume 3. Thermal Conductivity - Nonmetallic Liquids and Gases

    DTIC Science & Technology

    1970-01-01

    Fragmentary excerpts from the volume's survey of thermal-conductivity measurement techniques, including the line-source flow method, the hot-wire thermal diffusion column method, the shock-tube method (introduced by Smiley [546] and noted for its adaptability to high-temperature measurement), the arc method and the ultrasonic method.

  19. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.

  20. Comparison of measurement methods for capacitive tactile sensors and their implementation

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Sienkiewicz, Rafał

    2015-09-01

    This paper presents a review of the ideas and implementations of measurement methods used for capacitance measurement in tactile sensors. The paper describes the technical method, the charge amplification method, the generation method, as well as the integration method. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors made by the authors. The tactile sensors tested in this work were fully fabricated with inkjet printing technology. The test results are presented and summarised. The charge amplification method (CDC) was selected as the best method for the measurement of the tactile sensors.

  1. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
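
    A minimal sketch of the general setting rather than the unconditionally stable schemes of the paper: a Fourier spectral evaluation of the spatial derivative in a toy linear advection equation, advanced in time with a classical Runge-Kutta integrator. All parameters are illustrative.

      import numpy as np

      n, L, c = 128, 2 * np.pi, 1.0
      x = np.linspace(0, L, n, endpoint=False)
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)      # spectral wavenumbers
      u = np.exp(-10 * (x - np.pi) ** 2)              # initial pulse

      def rhs(u):
          # du/dt = -c du/dx, with the derivative evaluated spectrally
          return -c * np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

      dt, steps = 1e-3, 2000
      for _ in range(steps):                          # classical fourth-order Runge-Kutta
          k1 = rhs(u)
          k2 = rhs(u + 0.5 * dt * k1)
          k3 = rhs(u + 0.5 * dt * k2)
          k4 = rhs(u + dt * k3)
          u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      print("pulse maximum after advection:", u.max())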

  2. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
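
    A small illustration of the comparison, not the paper's test cases: a thin-plate-spline RBF interpolant and a global least-squares cubic polynomial are fit to the same samples of a toy one-dimensional function and evaluated against the truth. The function, sample size and kernel are assumptions.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      rng = np.random.default_rng(2)
      x_train = np.sort(rng.uniform(0, 1, 15))[:, None]
      y_train = np.sin(2 * np.pi * x_train[:, 0])

      rbf = RBFInterpolator(x_train, y_train, kernel="thin_plate_spline")
      poly = np.polynomial.Polynomial.fit(x_train[:, 0], y_train, deg=3)   # global least squares

      x_test = np.linspace(0.05, 0.95, 200)[:, None]
      truth = np.sin(2 * np.pi * x_test[:, 0])
      print("RBF max error:", np.abs(rbf(x_test) - truth).max())
      print("GLS max error:", np.abs(poly(x_test[:, 0]) - truth).max())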

  3. Two smart spectrophotometric methods for the simultaneous estimation of Simvastatin and Ezetimibe in combined dosage form

    NASA Astrophysics Data System (ADS)

    Magdy, Nancy; Ayad, Miriam F.

    2015-02-01

    Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed-dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio-spectra derivative approach using moving-window polynomial least-squares fitting (Savitzky-Golay filters). The second method is based on a simple modification of the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
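
    A hedged sketch of the ratio-spectra derivative idea with a Savitzky-Golay filter, using synthetic Gaussian bands rather than the real Simvastatin and Ezetimibe spectra; band positions, concentrations and filter settings are assumptions.

      import numpy as np
      from scipy.signal import savgol_filter

      wl = np.linspace(230, 300, 600)                       # wavelength grid, nm
      gauss = lambda centre, width: np.exp(-0.5 * ((wl - centre) / width) ** 2)
      spec_A = gauss(248, 12)                               # component A, unit concentration
      spec_B = gauss(265, 15)                               # component B (the divisor), unit concentration
      mixture = 0.8 * spec_A + 1.5 * spec_B

      ratio = mixture / spec_B                              # divide by the divisor spectrum
      deriv = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)
      # Differentiating the smoothed ratio removes the constant contribution of B,
      # leaving a signal whose amplitude scales with the concentration of A.
      print("peak-to-trough of the derivative signal:", deriv.max() - deriv.min())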

  4. Application of LC/MS/MS Techniques to Development of US ...

    EPA Pesticide Factsheets

    This presentation will describe the U.S. EPA's drinking water and ambient water method development program in relation to the process employed and the typical challenges encountered in developing standardized LC/MS/MS methods for chemicals of emerging concern. The EPA's Drinking Water Contaminant Candidate List and Unregulated Contaminant Monitoring Regulations, which are the driving forces behind drinking water method development, will be introduced. Three drinking water LC/MS/MS methods (Methods 537, 544 and a new method for nonylphenol) and two ambient water LC/MS/MS methods for cyanotoxins will be described that highlight some of the challenges encountered during development of these methods. This presentation will provide the audience with a basic understanding of EPA's drinking water method development program and an introduction to two new ambient water EPA methods.

  5. The Roche Immunoturbidimetric Albumin Method on Cobas c 501 Gives Higher Values Than the Abbott and Roche BCP Methods When Analyzing Patient Plasma Samples.

    PubMed

    Helmersson-Karlqvist, Johanna; Flodin, Mats; Havelka, Aleksandra Mandic; Xu, Xiao Yan; Larsson, Anders

    2016-09-01

    Serum/plasma albumin is an important and widely used laboratory marker and it is important that we measure albumin correctly without bias. We had indications that the immunoturbidimetric method on Cobas c 501 and the bromocresol purple (BCP) method on Architect 16000 differed, so we decided to study these methods more closely. A total of 1,951 patient requests with albumin measured with both the Architect BCP and Cobas immunoturbidimetric methods were extracted from the laboratory system. A comparison with fresh plasma samples was also performed that included immunoturbidimetric and BCP methods on Cobas c 501 and analysis of the international protein calibrator ERM-DA470k/IFCC. The median difference between the Abbott BCP and Roche immunoturbidimetric methods was 3.3 g/l and the Roche method overestimated ERM-DA470k/IFCC by 2.2 g/l. The Roche immunoturbidimetric method gave higher values than the Roche BCP method: y = 1.111x - 0.739, R² = 0.971. The Roche immunoturbidimetric albumin method gives clearly higher values than the Abbott and Roche BCP methods when analyzing fresh patient samples. The differences between the two methods were similar at normal and low albumin levels. © 2016 Wiley Periodicals, Inc.

  6. Manual tracing versus smartphone application (app) tracing: a comparative study.

    PubMed

    Sayar, Gülşilay; Kilinc, Delal Dara

    2017-11-01

    This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone application (app) cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were obtained based on 21 landmarks. The time taken to perform the two methods was also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that of the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, the cephalometric tracing proceeded faster with the app method than with the manual method.

  7. Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.

    PubMed

    Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R

    2018-01-26

    Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.

  8. [A study for testing the antifungal susceptibility of yeast by the Japanese Society for Medical Mycology (JSMM) method. The proposal of the modified JSMM method 2009].

    PubMed

    Nishiyama, Yayoi; Abe, Michiko; Ikeda, Reiko; Uno, Jun; Oguri, Toyoko; Shibuya, Kazutoshi; Maesaki, Shigefumi; Mohri, Shinobu; Yamada, Tsuyoshi; Ishibashi, Hiroko; Hasumi, Yayoi; Abe, Shigeru

    2010-01-01

    In the Japanese Society for Medical Mycology (JSMM) method for testing the antifungal susceptibility of yeasts, the MIC end point for azole antifungal agents is currently set at IC(80). It was recently shown, however, that there is an inconsistency in MIC values between the JSMM method and the CLSI M27-A2 (CLSI) method, in which the end point is read at IC(50). To resolve this discrepancy and reassess the JSMM method, the MICs of three azoles, fluconazole, itraconazole and voriconazole, were compared for 5 strains of each of the following Candida species: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis and C. krusei (25 strains in total), using the JSMM method, a modified JSMM method, and the CLSI method. The results showed that when the MIC end-point criterion of the JSMM method was changed from IC(80) to IC(50) (the modified JSMM method), the MIC values were consistent and compatible with the CLSI method. Finally, it should be emphasized that the JSMM method, using a spectrophotometer for MIC measurement, was superior in both stability and reproducibility compared to the CLSI method, in which growth is assessed by visual observation.

  9. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.

  10. Accuracy of two geocoding methods for geographic information system-based exposure assessment in epidemiological studies.

    PubMed

    Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice

    2017-02-24

    Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
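
    A minimal sketch of the kappa agreement statistic used above, computed on made-up positional-accuracy categories rather than the study's addresses.

      import numpy as np

      categories = ["exact", "street", "locality"]
      rng = np.random.default_rng(3)
      reference = rng.choice(categories, size=500, p=[0.65, 0.25, 0.10])
      automatic = np.where(rng.random(500) < 0.8, reference,        # roughly 80% agreement with the reference
                           rng.choice(categories, size=500))

      p_observed = np.mean(automatic == reference)
      p_expected = sum(np.mean(reference == c) * np.mean(automatic == c) for c in categories)
      kappa = (p_observed - p_expected) / (1.0 - p_expected)
      print(f"observed agreement {p_observed:.2f}, Cohen's kappa {kappa:.2f}")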

  11. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position obtained using two methods, the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using the two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: subjects were randomly selected and aged between 18 and 26 years. Exclusion criteria: history of orthodontic treatment; any history of respiratory tract problems or chronic mouth breathing; any congenital deformity; history of traumatically induced deformity; history of myofascial pain syndrome; any previous history of head and neck surgery. The results showed that the two methods for obtaining natural head position, the mirror method and the fluid level device method, were comparable, but reproducibility was greatest with the fluid level device, as shown by Dahlberg's coefficient and the Bland-Altman plot, and variance was smallest with the fluid level device method, as shown by precision and Pearson correlation. In conclusion, the mirror method and the fluid level device method for obtaining natural head position were comparable without any significant difference, and the fluid level device method was more reproducible and showed less variance than the mirror method.

  12. Comparing four non-invasive methods to determine the ventilatory anaerobic threshold during cardiopulmonary exercise testing in children with congenital heart or lung disease.

    PubMed

    Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim

    2015-11-01

    The ventilatory anaerobic threshold (VAT) is an important measure for assessing aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus on which of these methods is the most accurate. The aim of this study was to compare four different non-invasive methods for determining the VAT via respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET); a secondary objective was to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most from the other methods, showing significantly higher results in all six variables. The PET-O2 method differed significantly on five of six and four of six exercise variables compared with the V-slope method and the VentEq method, respectively. The V-slope and the VentEq methods differed significantly on one of six exercise variables. Ten of thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods for determining the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  13. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from plantations in Heilongjiang Province, P.R. China. The biomass and carbon stocks of the tree components (i.e., stem, branch, foliage and root) were measured and calculated for the sample trees. Both compatible biomass and carbon stock models were developed, with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of the tree components. The first method predicted carbon stocks directly by the compatible carbon stock models (Method 1). The other three methods indirectly predicted the carbon stocks in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors (i.e., a carbon conversion factor of 0.5 (Method 2), the average carbon concentration of the sample trees (Method 3), and the average carbon concentration of each tree component (Method 4)). The prediction errors of the estimated carbon stocks were compared and tested for differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, so that Method 1 was the best method for predicting the carbon stocks of the tree components and the total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for the stem and the total. Methods 3 and 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees is sufficient to obtain accurate carbon stock estimates if carbon stock models are not available. PMID:26659257
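
    A purely illustrative worked example of the direct versus indirect prediction routes (Method 1 versus Methods 2-4); the allometric coefficients and conversion factors below are hypothetical, not the fitted models of the study.

      import numpy as np

      D = np.array([12.0, 18.0, 24.0, 30.0])          # tree diameters, cm
      biomass_stem = 0.08 * D ** 2.4                  # assumed stem biomass model, kg
      carbon_direct = 0.038 * D ** 2.42               # assumed compatible carbon model, kg (Method 1)

      # Indirect routes: predicted biomass times a carbon conversion factor.
      for factor, label in [(0.50, "factor 0.5 (Method 2)"), (0.47, "mean tree concentration (Method 3)")]:
          print(label, np.round(biomass_stem * factor, 1))
      print("direct carbon model (Method 1)", np.round(carbon_direct, 1))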

  14. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim of this work was to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. The same is deployed within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated approach to finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
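
    A minimal sketch of the underlying mechanism rather than the authors' algorithm: LSQR with a damping term is equivalent to Tikhonov regularisation, so the damp value plays the role of the regularisation parameter that the paper chooses automatically (here it is simply scanned over a few values). The toy inverse problem is an assumption.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(4)
      n = 100
      U, _ = np.linalg.qr(rng.normal(size=(n, n)))
      V, _ = np.linalg.qr(rng.normal(size=(n, n)))
      A = U @ np.diag(np.logspace(0, -6, n)) @ V.T        # ill-conditioned forward operator
      x_true = np.sin(np.linspace(0, 3 * np.pi, n))
      b = A @ x_true + 1e-4 * rng.normal(size=n)          # noisy measurements

      for damp in [1e-6, 1e-4, 1e-2]:
          x_hat = lsqr(A, b, damp=damp)[0]
          print(f"damp={damp:.0e}  reconstruction error={np.linalg.norm(x_hat - x_true):.3f}")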

  15. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items because of their advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus, the deviation of the estimated θ̂_s from the true values might yield inaccurate item calibration when the deviation is non-ignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A did outperform Method A in almost all experimental conditions.

  16. Qualitative versus quantitative methods in psychiatric research.

    PubMed

    Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S

    2012-01-01

    Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in the earlier phases of research, qualitative methods can be employed in questionnaire design, establishment of diagnostic criteria, feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be statistically confirmed using quantitative methods. The tendency to combine both qualitative and quantitative methods as complementary methods has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.

  17. Methods of Farm Guidance

    ERIC Educational Resources Information Center

    Vir, Dharm

    1971-01-01

    A survey of teaching methods for farm guidance workers in India, outlining some approaches developed by and used in other nations. Discusses mass educational methods, group educational methods, and the local leadership method. (JB)

  18. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective.

    PubMed

    Bishop, Felicity L

    2015-02-01

    To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies, which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects, each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obviate the need to address the philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.

  19. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study.

    PubMed

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-06-14

    Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study--often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods--particularly surveys and individual interviews--but used methods in a wide range of roles. Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations.

  20. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameter β_k, of the DY method and the HS method, and put forward a hybrid of DY and HS. We also propose a hybrid of FR and PRP by the same means. Additionally, for the two hybrid methods we generalize the Wolfe line search to compute the step size α_k. With the new Wolfe line search, the descent property and the global convergence of the two hybrid methods can be proved.
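
    A hedged sketch of the hybridization idea on a toy convex quadratic: β_k is taken as a convex combination of the DY and HS formulas, with a simple Armijo backtracking search standing in for the paper's generalized Wolfe line search. Problem data and the mixing weight are assumptions.

      import numpy as np

      rng = np.random.default_rng(5)
      Q = np.diag(np.linspace(1, 50, 20))          # SPD matrix -> convex quadratic objective
      b = rng.normal(size=20)

      def f(x):    return 0.5 * x @ Q @ x - b @ x
      def grad(x): return Q @ x - b

      x = np.zeros(20)
      g = grad(x)
      d = -g
      theta = 0.5                                  # mixing weight between the DY and HS formulas
      for _ in range(200):
          alpha = 1.0                              # Armijo backtracking line search
          while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
              alpha *= 0.5
          x_new = x + alpha * d
          g_new = grad(x_new)
          y = g_new - g
          denom = d @ y                            # positive here since Q is SPD and alpha > 0
          beta = theta * (g_new @ g_new) / denom + (1 - theta) * (g_new @ y) / denom
          d = -g_new + beta * d
          x, g = x_new, g_new
          if np.linalg.norm(g) < 1e-8:
              break

      print("gradient norm at the computed minimizer:", np.linalg.norm(grad(x)))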

  1. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter of radiation luminance meters. The paper compares the calibration results of the two methods through analysis of their principles and experimental verification. After the same radiation luminance meter was calibrated with both methods, the data obtained verify that the test results of the two methods are both reliable. The results show that the displayed value obtained with the standard white plate method has smaller errors and better reproducibility, whereas the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.

  2. The change and development of statistical methods used in research articles in child development 1930-2010.

    PubMed

    Køppe, Simo; Dammeyer, Jesper

    2014-09-01

    The evolution of developmental psychology has been characterized by the use of different quantitative and qualitative methods and procedures. But how does the use of methods and procedures change over time? This study explores the change and development of the statistical methods used in articles published in Child Development from 1930 to 2010. The methods used in every article in the first issue of every volume were categorized into four categories. Until 1980, relatively simple statistical methods were used. During the last 30 years there has been explosive growth in the use of more advanced statistical methods, and the absence of statistical methods, or the use of only simple methods, has been all but eliminated.

  3. Social network extraction based on Web: 1. Related superficial methods

    NASA Astrophysics Data System (ADS)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    Often the nature of a thing shapes the methods used to resolve issues related to it. The same holds for methods of extracting social networks from the Web, which involve the structured data types in different ways. This paper presents several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. We derive complexity inequalities between the methods and, correspondingly, between their computations. In this case, we find that different results from the same tools reflect the difference between the more complex and the simpler approaches: extraction of a social network involving co-occurrences is more complex than extraction using occurrences alone.

  4. Performance of a proposed determinative method for p-TSA in rainbow trout fillet tissue and bridging the proposed method with a method for total chloramine-T residues in rainbow trout fillet tissue

    USGS Publications Warehouse

    Meinertz, J.R.; Stehly, G.R.; Gingerich, W.H.; Greseth, Shari L.

    2001-01-01

    Chloramine-T is an effective drug for controlling fish mortality caused by bacterial gill disease. As part of the data required for approval of chloramine-T use in aquaculture, depletion of the chloramine-T marker residue (para-toluenesulfonamide; p-TSA) from the edible fillet tissue of fish must be characterized. Declaration of p-TSA as the marker residue for chloramine-T in rainbow trout was based on total residue depletion studies using a method that relied on time-consuming and cumbersome techniques. A simple and robust method recently developed is proposed as a determinative method for p-TSA in fish fillet tissue. The proposed determinative method was evaluated by comparing accuracy and precision data with U.S. Food and Drug Administration criteria and by bridging the method to the former method for chloramine-T residues. The method's accuracy and precision fulfilled the criteria for determinative methods; accuracy was 92.6, 93.4, and 94.6% for samples fortified at 0.5X, 1X, and 2X the expected 1000 ng/g tolerance limit for p-TSA, respectively. Method precision with tissue containing incurred p-TSA at a nominal concentration of 1000 ng/g ranged from 0.80 to 8.4%. The proposed determinative method was successfully bridged with the former method: the concentrations of p-TSA determined with the proposed method were not statistically different at p < 0.05 from the p-TSA concentrations determined with the former method.

  5. Standard setting: comparison of two methods.

    PubMed

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
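
    A small worked example, on simulated scores and judge ratings, of the two cut-score rules being compared above: a norm-referenced cut at the mean minus one standard deviation, and a modified Angoff cut built from judges' per-item probability estimates. All numbers are simulated.

      import numpy as np

      rng = np.random.default_rng(6)
      scores = rng.normal(loc=68, scale=10, size=78).clip(0, 100)     # MCQ percentage scores

      norm_ref_cut = scores.mean() - scores.std()                     # mean minus 1 SD

      # Modified Angoff: each judge estimates, per item, the probability that a
      # minimally competent candidate answers correctly; the cut is the mean total.
      n_items, n_judges = 100, 8
      judge_estimates = rng.uniform(0.3, 0.7, size=(n_judges, n_items))
      angoff_cut = judge_estimates.sum(axis=1).mean()                 # expected score out of 100

      print(f"norm-reference cut {norm_ref_cut:.1f}, Angoff cut {angoff_cut:.1f}")
      print("pass rate, norm-reference:", np.mean(scores >= norm_ref_cut))
      print("pass rate, Angoff:        ", np.mean(scores >= angoff_cut))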

  6. Women's Contraceptive Preference-Use Mismatch

    PubMed Central

    He, Katherine; Dalton, Vanessa K.; Zochowski, Melissa K.

    2017-01-01

    Abstract Background: Family planning research has not adequately addressed women's preferences for different contraceptive methods and whether women's contraceptive experiences match their preferences. Methods: Data were drawn from the Women's Healthcare Experiences and Preferences Study, an Internet survey of 1,078 women aged 18–55 randomly sampled from a national probability panel. Survey items assessed women's preferences for contraceptive methods, match between methods preferred and used, and perceived reasons for mismatch. We estimated predictors of contraceptive preference with multinomial logistic regression models. Results: Among women at risk for pregnancy who responded with their preferred method (n = 363), hormonal methods (non-LARC [long-acting reversible contraception]) were the most preferred method (34%), followed by no method (23%) and LARC (18%). Sociodemographic differences in contraception method preferences were noted (p-values <0.05), generally with minority, married, and older women having higher rates of preferring less effective methods, compared to their counterparts. Thirty-six percent of women reported preference-use mismatch, with the majority preferring more effective methods than those they were using. Rates of match between preferred and usual methods were highest for LARC (76%), hormonal (non-LARC) (65%), and no method (65%). The most common reasons for mismatch were cost/insurance (41%), lack of perceived/actual need (34%), and method-specific preference concerns (19%). Conclusion: While preference for effective contraception was common among this sample of women, we found substantial mismatch between preferred and usual methods, notably among women of lower socioeconomic status and women using less effective methods. Findings may have implications for patient-centered contraceptive interventions. PMID:27710196

  7. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin

    PubMed Central

    2013-01-01

    This paper describes a comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections. The sensitivity, specificity and other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. The paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A careful analysis of the broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select pairs of method and image type for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection, or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. Taken overall, the three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. However, the best results are achieved for the monochromatic image in which the intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. Virtual Slides: The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405

  8. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine&Haematoxylin.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene

    2013-03-25

    This paper describes a comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections. The sensitivity, specificity and other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. The paper demonstrates the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A careful analysis of the broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select pairs of method and image type for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection, or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. Taken overall, the three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively. However, the best results are achieved for the monochromatic image in which the intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.

  9. Numerical Grid Generation and Potential Airfoil Analysis and Design

    DTIC Science & Technology

    1988-01-01

    Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values ... preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute a value in the Jacobi method, we have already ... Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR) ...
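
    The contrast this report draws between the Jacobi iteration (new values computed entirely from old values) and the Gauss-Seidel iteration (updated values reused immediately, with over-relaxation giving SOR) can be sketched as follows. The small diagonally dominant test system is illustrative only and is not taken from the report.

      # Minimal sketch of Jacobi and Gauss-Seidel/SOR iterations for A x = b.
      import numpy as np

      def jacobi(A, b, x0, iters=100):
          # Each sweep uses only values from the previous iteration.
          D = np.diag(A)
          R = A - np.diagflat(D)
          x = x0.copy()
          for _ in range(iters):
              x = (b - R @ x) / D
          return x

      def gauss_seidel(A, b, x0, iters=100, omega=1.0):
          # Updated components are reused immediately; omega > 1 gives SOR.
          n = len(b)
          x = x0.copy()
          for _ in range(iters):
              for i in range(n):
                  sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                  x_new = (b[i] - sigma) / A[i, i]
                  x[i] = (1 - omega) * x[i] + omega * x_new
          return x

      # Illustrative diagonally dominant system (sufficient for Gauss-Seidel convergence).
      A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      b = np.array([2.0, 4.0, 10.0])
      x0 = np.zeros(3)
      print(jacobi(A, b, x0), gauss_seidel(A, b, x0, omega=1.2))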

  10. Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Rachael; Pan, Tinsu, E-mail: tpan@mdanderson.org; Rubinstein, Ashley

    Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and adapt several successful methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods compared to a manual method considered to be ground truth were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0.09 ± 0.02 (0.33 ± 0.07) phase bins for the intensity method, and 0.37 ± 0.10 (0.57 ± 0.08) phase bins for the center of mass method. Little dependence on ROI size was noted for the modified Amsterdam shroud and intensity methods while the Fourier transform and center of mass methods showed a noticeable dependence on the ROI size. Conclusions: The modified Amsterdam shroud, Fourier transform, and intensity respiratory signal methods are sufficiently accurate to be used for 4D imaging on the X-RAD system and show improvement over the existing center of mass method. The intensity and modified Amsterdam shroud methods are recommended due to their high accuracy and low dependence on ROI size.
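
    A minimal sketch of the pixel-intensity style of respiratory-signal extraction described above: the signal is the mean intensity inside a region of interest of each projection, and projections are then sorted into phase bins between successive inspiration peaks. The function names, ROI convention and binning rule are assumptions for illustration, not the authors' implementation.

      # Hedged sketch of an intensity-based respiratory signal and phase binning.
      import numpy as np
      from scipy.signal import find_peaks

      def intensity_respiratory_signal(projections, roi):
          """projections: array (n_proj, rows, cols); roi: (r0, r1, c0, c1).
          Returns the mean pixel intensity inside the ROI for each projection."""
          r0, r1, c0, c1 = roi
          return projections[:, r0:r1, c0:c1].mean(axis=(1, 2))

      def assign_phase_bins(signal, n_bins=4, min_separation=5):
          """Assign each projection to a respiratory phase bin based on its
          position between consecutive inspiration peaks of the signal."""
          peaks, _ = find_peaks(signal, distance=min_separation)
          bins = np.zeros(len(signal), dtype=int)
          for k in range(len(peaks) - 1):
              start, stop = peaks[k], peaks[k + 1]
              frac = (np.arange(start, stop) - start) / (stop - start)
              bins[start:stop] = np.minimum((frac * n_bins).astype(int), n_bins - 1)
          return bins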

  11. 26 CFR 1.167(b)-2 - Declining balance method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is.... While salvage is not taken into account in determining the annual allowances under this method, in no...

  12. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  13. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...

  14. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED..., App. A Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of...

  15. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  16. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...

  17. 78 FR 22540 - Notice of Public Meeting/Webinar: EPA Method Development Update on Drinking Water Testing Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-16

    ...: EPA Method Development Update on Drinking Water Testing Methods for Contaminant Candidate List... Division will describe methods currently in development for many CCL contaminants, with an expectation that several of these methods will support future cycles of the Unregulated Contaminant Monitoring Rule (UCMR...

  18. Problems d'elaboration d'une methode locale: la methode "Paris-Khartoum" (Problems in Implementing a Local Method: the Paris-Khartoum Method)

    ERIC Educational Resources Information Center

    Penhoat, Loick; Sakow, Kostia

    1978-01-01

    A description of the development and implementation of a method introduced in the Sudan that attempts to relate to Sudanese culture and to motivate students. The relationship between language teaching methods and the total educational system is discussed. (AMH)

  19. Exponentially fitted symplectic Runge-Kutta-Nyström methods derived by partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2013-10-01

    In this work we derive symplectic EF/TF RKN methods by symplectic EF/TF PRK methods. Also EF/TF symplectic RKN methods are constructed directly from classical symplectic RKN methods. Several numerical examples will be given in order to decide which is the most favourable implementation.

  20. Standard methods for chemical analysis of steel, cast iron, open-hearth iron, and wrought iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1973-01-01

    Methods are described for determining manganese, phosphorus, sulfur, selenium, copper, nickel, chromium, vanadium, tungsten, titanium, lead, boron, molybdenum (alpha-benzoin oxime method), zirconium (cupferron-phosphate method), niobium and tantalum (hydrolysis with perchloric and sulfurous acids; gravimetric, titrimetric, and photometric methods), and beryllium (oxide method). (DHM)

  1. Detection of coupling delay: A problem not yet solved

    NASA Astrophysics Data System (ADS)

    Coufal, David; Jakubík, Jozef; Jajcay, Nikola; Hlinka, Jaroslav; Krakovská, Anna; Paluš, Milan

    2017-08-01

    Nonparametric detection of coupling delay in unidirectionally and bidirectionally coupled nonlinear dynamical systems is examined. Both continuous and discrete-time systems are considered. Two methods of detection are assessed—the method based on conditional mutual information—the CMI method (also known as the transfer entropy method) and the method of convergent cross mapping—the CCM method. Computer simulations show that neither method is generally reliable in the detection of coupling delays. For continuous-time chaotic systems, the CMI method appears to be more sensitive and applicable in a broader range of coupling parameters than the CCM method. In the case of tested discrete-time dynamical systems, the CCM method has been found to be more sensitive, while the CMI method required much stronger coupling strength in order to bring correct results. However, when studied systems contain a strong oscillatory component in their dynamics, results of both methods become ambiguous. The presented study suggests that results of the tested algorithms should be interpreted with utmost care and the nonparametric detection of coupling delay, in general, is a problem not yet solved.

  2. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  3. Identifying Outliers of Non-Gaussian Groundwater State Data Based on Ensemble Estimation for Long-Term Trends

    NASA Astrophysics Data System (ADS)

    Park, E.; Jeong, J.; Choi, J.; Han, W. S.; Yun, S. T.

    2016-12-01

    Three modified outlier identification methods that take advantage of an ensemble regression method are proposed: the three-sigma rule (3s), the interquartile range (IQR) and the median absolute deviation (MAD). For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method has a limitation in that it falsely identifies excessive outliers, which may be compensated by joint application with the other methods (i.e., the 3s rule and MAD methods). The proposed methods can also be applied as a potential tool for future anomaly detection by training models on currently available data.
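
    For illustration, the three baseline rules named above can be written as follows, applied here to a vector of residuals from some trend estimate (the paper pairs them with ensemble regression, which is omitted here). The thresholds are the conventional choices, not necessarily the modified ones the authors use.

      # Hedged sketch of the 3-sigma, IQR and MAD outlier rules on residuals.
      import numpy as np

      def three_sigma_outliers(residuals):
          mu, sigma = residuals.mean(), residuals.std()
          return np.abs(residuals - mu) > 3 * sigma

      def iqr_outliers(residuals, k=1.5):
          q1, q3 = np.percentile(residuals, [25, 75])
          iqr = q3 - q1
          return (residuals < q1 - k * iqr) | (residuals > q3 + k * iqr)

      def mad_outliers(residuals, threshold=3.5):
          med = np.median(residuals)
          mad = np.median(np.abs(residuals - med))
          modified_z = 0.6745 * (residuals - med) / mad
          return np.abs(modified_z) > threshold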

  4. Overview of paint removal methods

    NASA Astrophysics Data System (ADS)

    Foster, Terry

    1995-04-01

    With the introduction of strict environmental regulations governing the use and disposal of methylene chloride and phenols, major components of chemical paint strippers, there have been many new environmentally safe and effective methods of paint removal developed. The new methods developed for removing coatings from aircraft and aircraft components include: mechanical methods using abrasive media such as plastic, wheat starch, walnut shells, ice and dry ice, environmentally safe chemical strippers and paint softeners, and optical methods such as lasers and flash lamps. Each method has its advantages and disadvantages, and some have unique applications. For example, mechanical and abrasive methods can damage sensitive surfaces such as composite materials and strict control of blast parameters and conditions are required. Optical methods can be slow, leaving paint residues, and chemical methods may not remove all of the coating or require special coating formulations to be effective. As an introduction to environmentally safe and effective methods of paint removal, this paper is an overview of the various methods available. The purpose of this overview is to introduce the various paint removal methods available.

  5. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
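
    A minimal sketch of the line-search flavour of Newton's method for unconstrained minimization, using a backtracking (Armijo) step. It illustrates the idea discussed above rather than any specific algorithm from the paper, and the Rosenbrock test function is only an example.

      # Hedged sketch: damped Newton's method with a backtracking line search.
      import numpy as np

      def newton_line_search(f, grad, hess, x0, tol=1e-8, max_iter=50):
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              g = grad(x)
              if np.linalg.norm(g) < tol:
                  break
              p = np.linalg.solve(hess(x), -g)      # Newton direction
              t = 1.0
              for _ in range(40):                   # backtrack until sufficient decrease
                  if f(x + t * p) <= f(x) + 1e-4 * t * (g @ p):
                      break
                  t *= 0.5
              x = x + t * p
          return x

      # Example: minimize the Rosenbrock function.
      f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                 200*(x[1] - x[0]**2)])
      hess = lambda x: np.array([[2 - 400*(x[1] - 3*x[0]**2), -400*x[0]],
                                 [-400*x[0], 200.0]])
      print(newton_line_search(f, grad, hess, [-1.2, 1.0]))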

  6. [Comparison of two nucleic acid extraction methods for norovirus in oysters].

    PubMed

    Yuan, Qiao; Li, Hui; Deng, Xiaoling; Mo, Yanling; Fang, Ling; Ke, Changwen

    2013-04-01

    To explore a convenient and effective method for norovirus nucleic acid extraction from oysters suitable for long-term viral surveillance. Two methods, namely method A (glycine washing and polyethylene glycol precipitation of the virus followed by silica gel centrifugal column) and method B (protease K digestion followed by application of paramagnetic silicon) were compared for their performance in norovirus nucleic acid extraction from oysters. Real-time RT-PCR was used to detect norovirus in naturally infected oysters and in oysters with induced infection. The two methods yielded comparable positive detection rates for the samples, but the recovery rate of the virus was higher with method B than with method A. Method B is a more convenient and rapid method for norovirus nucleic acid extraction from oysters and suitable for long-term surveillance of norovirus.

  7. On the Formulation of Weakly Singular Displacement/Traction Integral Equations; and Their Solution by the MLPG Method

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.; Shen, Shengping

    2002-01-01

    In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A Moving Least Squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: direct solution method; displacement boundary-value problem; traction boundary-value problem; mixed boundary-value problem; and boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.

  8. A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Parand, K.; Nikarya, M.

    2017-11-01

    In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions without any linearization, discretization or getting the help of any other methods. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation. We compare our results with other methods.
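
    The Jacobian-free Newton-Krylov idea can be illustrated with SciPy's newton_krylov solver applied to a simple central-difference discretization of a steady one-dimensional Fisher-type equation, u'' + u(1 - u) = 0. This is only a sketch of the JFNK step, not the authors' Bessel-function collocation scheme, and the boundary values and grid are assumptions for the example.

      # Hedged sketch: Jacobian-free Newton-Krylov on a steady 1-D Fisher-type problem.
      import numpy as np
      from scipy.optimize import newton_krylov

      n = 100
      h = 1.0 / (n + 1)

      def residual(u):
          # Central differences with u(0) = 1 and u(1) = 0 as boundary values.
          full = np.concatenate(([1.0], u, [0.0]))
          lap = (full[2:] - 2 * full[1:-1] + full[:-2]) / h**2
          return lap + full[1:-1] * (1.0 - full[1:-1])

      u0 = np.linspace(1.0, 0.0, n)                       # initial guess
      u = newton_krylov(residual, u0, method='lgmres', f_tol=1e-8)
      print(u[:5])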

  9. Mending the Gap, An Effort to Aid the Transfer of Formal Methods Technology

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly

    2009-01-01

    Formal methods can be applied to many of the development and verification activities required for civil avionics software. RTCA/DO-178B, Software Considerations in Airborne Systems and Equipment Certification, gives a brief description of using formal methods as an alternate method of compliance with the objectives of that standard. Despite this, the avionics industry at large has been hesitant to adopt formal methods, and few developers have actually used formal methods for certification credit. Why is this so, given the volume of evidence of the benefits of formal methods? This presentation will explore some of the challenges to using formal methods in a certification context and describe the effort by the Formal Methods Subgroup of RTCA SC-205/EUROCAE WG-71 to develop guidance to make the use of formal methods a recognized approach.

  10. Methods for the calculation of axial wave numbers in lined ducts with mean flow

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1981-01-01

    A survey is made of the methods available for the calculation of axial wave numbers in lined ducts. Rectangular and circular ducts with both uniform and non-uniform flow are considered as are ducts with peripherally varying liners. A historical perspective is provided by a discussion of the classical methods for computing attenuation when no mean flow is present. When flow is present these techniques become either impractical or impossible. A number of direct eigenvalue determination schemes which have been used when flow is present are discussed. Methods described are extensions of the classical no-flow technique, perturbation methods based on the no-flow technique, direct integration methods for solution of the eigenvalue equation, an integration-iteration method based on the governing differential equation for acoustic transmission, Galerkin methods, finite difference methods, and finite element methods.

  11. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for the Support Vector Machine (SVM), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions used in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix on indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, a suggested optimal λ for the projection method can be found using unconstrained optimization in kernel learning. In this paper we focus on optimal λ determination, in pursuit of a precise optimal λ determination method within an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ from the Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way forward in kernel SVMs for varied objectives.
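
    For context, the spectrum methods that the projection method generalizes modify the eigenvalues of an indefinite kernel matrix; a minimal sketch of the clipping (denoising) and flipping variants is given below. The paper's λ-parameterized projection construction is not reproduced here.

      # Hedged sketch of spectrum clipping and flipping for an indefinite kernel matrix K.
      import numpy as np

      def spectrum_clip(K):
          """Denoising/clipping: zero out negative eigenvalues."""
          w, V = np.linalg.eigh((K + K.T) / 2)
          return (V * np.clip(w, 0.0, None)) @ V.T

      def spectrum_flip(K):
          """Flipping: replace eigenvalues by their absolute values."""
          w, V = np.linalg.eigh((K + K.T) / 2)
          return (V * np.abs(w)) @ V.T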

  12. Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.

    PubMed

    Strand, Jarle; Taxt, Torfinn

    2002-01-01

    The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.

  13. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance- based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. ?? 2008 Springer Science+Business Media, LLC.
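
    A sketch of the workflow described above, with assumed data structures: fit a logistic habitat model per species, form a generalized Mahalanobis distance between coefficient vectors using the estimated coefficient covariances (the pooled-covariance form here is an assumption), and cluster the resulting distance matrix.

      # Hedged sketch: per-species logistic fits, coefficient-space distances, clustering.
      import numpy as np
      import statsmodels.api as sm
      from scipy.cluster.hierarchy import linkage, fcluster

      def species_distance_matrix(X, presence_by_species):
          """X: habitat covariates (n_sites, p); presence_by_species: dict of 0/1 vectors.
          Returns species names and a generalized Mahalanobis distance matrix
          between the fitted coefficient vectors."""
          fits = {}
          for name, y in presence_by_species.items():
              model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
              fits[name] = (np.asarray(model.params), np.asarray(model.cov_params()))
          names = list(fits)
          D = np.zeros((len(names), len(names)))
          for i, a in enumerate(names):
              for j, b in enumerate(names):
                  if j <= i:
                      continue
                  diff = fits[a][0] - fits[b][0]
                  pooled = fits[a][1] + fits[b][1]       # pooled coefficient covariance
                  D[i, j] = D[j, i] = np.sqrt(diff @ np.linalg.solve(pooled, diff))
          return names, D

      # Usage (X and presence are hypothetical inputs):
      # names, D = species_distance_matrix(X, presence)
      # condensed = D[np.triu_indices_from(D, 1)]
      # groups = fcluster(linkage(condensed, method='average'), t=3, criterion='maxclust')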

  14. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  15. Statistical methods used to test for agreement of medical instruments measuring continuous variables in method comparison studies: a systematic review.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future.
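
    For reference, the Bland-Altman quantities at the centre of this review are simple to compute: the bias is the mean of the paired differences, and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of the differences. A minimal sketch with illustrative names:

      # Hedged sketch of the Bland-Altman bias and limits of agreement.
      import numpy as np

      def bland_altman(method_a, method_b):
          """Return the bias and 95% limits of agreement between two methods."""
          a, b = np.asarray(method_a, float), np.asarray(method_b, float)
          diffs = a - b
          bias = diffs.mean()
          sd = diffs.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)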

  16. Statistical Methods Used to Test for Agreement of Medical Instruments Measuring Continuous Variables in Method Comparison Studies: A Systematic Review

    PubMed Central

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Background Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Methodology/Findings Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. Conclusions This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future. PMID:22662248

  17. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  18. [The research and application of pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi].

    PubMed

    Huang, Y F; Chang, Z; Bai, J; Zhu, M; Zhang, M X; Wang, M; Zhang, G; Li, X Y; Tong, Y G; Wang, J L; Lu, X X

    2017-08-08

    Objective: To establish and evaluate the feasibility of a laboratory-developed pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi. Methods: Three hundred and eighty strains of filamentous fungi collected from January 2014 to December 2016 were recovered and cultured on sabouraud dextrose agar (SDA) plates at 28 ℃ to a mature state. In parallel, the fungi were cultured in liquid sabouraud medium with the vertical rotation method recommended by Bruker and with a horizontal vibration method developed by the laboratory until an adequate amount of colonies was observed. For the strains cultured with the three methods, protein was extracted with a modified magnetic bead-based extraction method for mass spectrometric identification. Results: For the 380 fungal strains, culture took 3-10 d with the SDA culture method, and the identification ratios at the species and genus levels were 47% and 81%, respectively; culture took 5-7 d with the vertical rotation method, with species- and genus-level identification ratios of 76% and 94%, respectively; culture took 1-2 d with the horizontal vibration method, with species- and genus-level identification ratios of 96% and 99%, respectively. For the comparison between the horizontal vibration method and the SDA culture method, the difference was statistically significant (χ²=39.026, P<0.01); for the comparison between the horizontal vibration method and the vertical rotation method recommended by Bruker, the difference was statistically significant (χ²=11.310, P<0.01). Conclusion: The horizontal vibration method and the modified magnetic bead-based extraction method developed by the laboratory are superior to the method recommended by Bruker and the SDA culture method in terms of identification capacity for filamentous fungi, and can be applied in the clinic.

  19. Development of a practical costing method for hospitals.

    PubMed

    Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei

    2006-03-01

    To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. In traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In this method, indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator called a cost driver (e.g., labor hours, revenues or the number of patients). However, this approach often produces rough and inaccurate results. The activity-based costing (ABC) method introduced in the mid-1990s can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because data collection for the cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of laboratory tests and obtained results similarly accurate to those of the ABC method (the largest difference was 2.64%). At the same time, the new method reduces the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to confirm the effectiveness of the new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using fewer cost drivers after the second round of costing.
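
    The contrast between volume-based and activity-based allocation can be sketched as follows; the cost pools, driver shares and figures are hypothetical and only illustrate the mechanics, not the authors' S-ABC driver selection.

      # Hedged sketch: single-driver (VBC) versus per-activity (ABC) cost allocation.
      def volume_based_cost(indirect_total, driver_share):
          """Allocate all indirect cost by a single driver share (e.g. labor hours)."""
          return indirect_total * driver_share

      def activity_based_cost(activity_costs, driver_shares):
          """activity_costs[activity] = cost pool; driver_shares[activity] = fraction
          of that activity's driver consumed by the cost object."""
          return sum(activity_costs[a] * driver_shares.get(a, 0.0) for a in activity_costs)

      # Illustrative laboratory-test example (hypothetical figures).
      pools = {'reagent handling': 800.0, 'instrument time': 1200.0, 'reporting': 300.0}
      shares = {'reagent handling': 0.10, 'instrument time': 0.25, 'reporting': 0.05}
      print(volume_based_cost(sum(pools.values()), 0.15), activity_based_cost(pools, shares))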

  20. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride

    NASA Astrophysics Data System (ADS)

    Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.

    2016-04-01

    A comparative study was developed between two classical spectrophotometric methods (dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (ratio difference method and first derivative of ratio spectra method) for simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation and in the presence of benzalkonium chloride as a preservative without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug in a way so that the difference in absorbance at those two wavelengths is zero for the other drug. While Vierordt's method, is based upon measuring the absorbance and the absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution in the corresponding Vierordt's equation. Recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of ratio spectra between 255.5 and 269.5 nm for AN and 220.0 and 273.0 nm for TZ in case of ratio difference method or computing first derivative of the ratio spectra for each drug then measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ in case of first derivative of ratio spectrophotometry. The specificity of the developed methods was investigated by analyzing different laboratory prepared mixtures of the two drugs. All methods were applied successfully for the determination of the selected drugs in their combined dosage form proving that the classical spectrophotometric methods can still be used successfully in analysis of binary mixture using minimal data manipulation rather than recent methods which require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability are found to be within the acceptable limits. Statistical studies showed that the methods can be competitively applied in quality control laboratories.
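
    The two classical approaches compared above reduce to simple linear algebra on Beer-Lambert absorbances. The sketch below assumes unit path length and illustrative argument names; in practice the calibration slope in the dual-wavelength function would come from standard solutions.

      # Hedged sketch of Vierordt's simultaneous-equation method and the dual-wavelength rule.
      import numpy as np

      def vierordt(absorbances, absorptivities):
          """Solve A = E @ c for the two concentrations, where absorptivities[i][j]
          is the absorptivity of drug j at wavelength i (unit path length assumed)."""
          return np.linalg.solve(np.asarray(absorptivities, float),
                                 np.asarray(absorbances, float))

      def dual_wavelength(mixture_spectrum, wl1, wl2, calibration_slope, intercept=0.0):
          """Difference in absorbance between two wavelengths chosen so the interfering
          drug contributes equally at both; the difference is then proportional to the
          analyte concentration via a calibration line."""
          delta = mixture_spectrum[wl1] - mixture_spectrum[wl2]
          return (delta - intercept) / calibration_slope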

  1. Lipidomic analysis of biological samples: Comparison of liquid chromatography, supercritical fluid chromatography and direct infusion mass spectrometry methods.

    PubMed

    Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal

    2017-11-24

    Lipidomic analysis of biological samples in a clinical research represents challenging task for analytical methods given by the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography - mass spectrometry (MS) lipidomic approaches represented by three analytical methods in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. Methods are compared in one laboratory using the identical analytical protocol to ensure comparable conditions. Ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and DI-MS method are used for this comparison as the most widely used methods for the lipidomic analysis together with ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods and 610 lipid species within 23 lipid classes are identified. DI method provides the most comprehensive results due to identification of some polar lipid classes, which are not identified by UHPLC and UHPSFC methods. On the other hand, UHPSFC method provides an excellent sensitivity for less polar lipid classes and the highest sample throughput within 10min method time. The sample consumption of DI method is 125 times higher than for other methods, while only 40μL of organic solvent is used for one sample analysis compared to 3.5mL and 4.9mL in case of UHPLC and UHPSFC methods, respectively. Methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. Results show applicability of all tested methods for the lipidomic analysis of biological samples depending on the analysis requirements. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, that is, two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspicious accuracy in lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N=120, 45+/-15.3 years (mean+/-SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 (P=0.004) significantly correlated with the participant's BP, supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as being the most appropriate.

  3. [Significance of bacteria detection with filter paper method on diagnosis of diabetic foot wound infection].

    PubMed

    Zou, X H; Zhu, Y P; Ren, G Q; Li, G C; Zhang, J; Zou, L J; Feng, Z B; Li, B H

    2017-02-20

    Objective: To evaluate the significance of bacteria detection with the filter paper method in the diagnosis of diabetic foot wound infection. Methods: Eighteen patients with diabetic foot ulcer conforming to the study criteria were hospitalized in Liyuan Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology from July 2014 to July 2015. Diabetic foot ulcer wounds were classified according to the University of Texas diabetic foot classification (hereinafter referred to as Texas grade) system, and the general condition of patients with wounds of different Texas grades was compared. Exudate and tissue of wounds were obtained, and the filter paper method and the biopsy method were adopted to detect the bacteria of the wounds, respectively. The filter paper method was regarded as the evaluation method, and the biopsy method was regarded as the control method. The relevance, difference, and consistency of the detection results of the two methods were tested. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were calculated. A receiver operating characteristic (ROC) curve was drawn based on the specificity and sensitivity of the filter paper method in bacteria detection of the 18 patients to predict the detection effect of the method. Data were processed with one-way analysis of variance and Fisher's exact test. In patients tested positive for bacteria by the biopsy method, the correlation between the bacteria number detected by the biopsy method and that by the filter paper method was analyzed with Pearson correlation analysis. Results: (1) There were no statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in age, duration of diabetes, duration of wound, wound area, ankle brachial index, glycosylated hemoglobin, fasting blood sugar, blood platelet count, erythrocyte sedimentation rate, C-reactive protein, aspartate aminotransferase, serum creatinine, and urea nitrogen (with F values from 0.029 to 2.916, P values above 0.05), while there were statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in white blood cell count and alanine aminotransferase (with F values of 4.688 and 6.833 respectively, P<0.05 or P<0.01). (2) According to the results of the biopsy method, 6 patients were tested negative for bacteria and 12 patients were tested positive for bacteria, among which 10 patients had a bacterial number above 1×10^5/g and 2 patients had a bacterial number below 1×10^5/g. According to the results of the filter paper method, 8 patients were tested negative for bacteria and 10 patients were tested positive for bacteria, among which 7 patients had a bacterial number above 1×10^5/g and 3 patients had a bacterial number below 1×10^5/g. There were 7 patients tested positive for bacteria by both the biopsy method and the filter paper method, 8 patients tested negative for bacteria by both the biopsy method and the filter paper method, and 3 patients tested positive for bacteria by the biopsy method but negative by the filter paper method. Patients tested negative for bacteria by the biopsy method did not test positive for bacteria by the filter paper method. There was a directional association between the detection results of the two methods (P=0.004), i.e. if the result of the biopsy method was positive, the result of the filter paper method could also be positive. There was no obvious difference in the detection results of the two methods (P=0.250). The consistency between the detection results of the two methods was ordinary (Kappa=0.68, P=0.002). (3) The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were 70%, 100%, 1.00, 0.73, and 83.3%, respectively. The total area under the ROC curve of bacteria detection by the filter paper method in the 18 patients was 0.919 (with 95% confidence interval 0-1.000, P=0.030). (4) There were 13 strains of bacteria detected by the biopsy method, with 5 strains of Acinetobacter baumannii, 5 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of bird Enterococcus. There were 11 strains of bacteria detected by the filter paper method, with 5 strains of Acinetobacter baumannii, 3 strains of Staphylococcus aureus, 1 strain of Pseudomonas aeruginosa, 1 strain of Streptococcus bovis, and 1 strain of bird Enterococcus. Except for Staphylococcus aureus, the sensitivity and specificity of the filter paper method in the detection of the other 4 bacteria were all 100%. The consistency between the filter paper method and the biopsy method in detecting Acinetobacter baumannii was good (Kappa=1.00, P<0.01), while that in detecting Staphylococcus aureus was ordinary (Kappa=0.68, P<0.05). (5) There was no obvious correlation between the bacteria number of wounds detected by the filter paper method and that by the biopsy method (r=0.257, P=0.419). There was an obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grade 1 and 2 (with r values of 0.999, P values of 0.001). There was no obvious correlation between the bacteria numbers detected by the two methods in wounds of Texas grade 3 (r=-0.053, P=0.947). Conclusions: The detection result of the filter paper method is in accordance with that of the biopsy method in the determination of bacterial infection, and it is of great importance in the diagnosis of local infection of diabetic foot wounds.

  4. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
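
    As a concrete illustration of the k-t space idea, here is a minimal one-dimensional Python sketch for a homogeneous medium: spatial derivatives are computed with FFTs, and the usual -k^2 factor is replaced by the k-space propagator, which makes the second-order time stepping exact when the sound speed is constant. The grid size, pulse shape, and time step are illustrative choices, not taken from the paper.

      import numpy as np

      # 1D second-order wave equation u_tt = c0^2 u_xx on a periodic domain.
      N, L, c0 = 256, 1.0, 1500.0                  # grid points, domain length (m), sound speed (m/s)
      dx = L / N
      x = np.arange(N) * dx
      dt = 0.5 * dx / c0                           # CFL-like time step
      k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

      # k-t space propagator: replace k by (2/(c0*dt))*sin(c0*k*dt/2), which makes
      # the leapfrog-style update exact for a homogeneous medium.
      kappa = (2.0 / (c0 * dt)) * np.sin(c0 * k * dt / 2.0)

      u_prev = np.exp(-((x - 0.5 * L) / (0.05 * L)) ** 2)   # initial Gaussian pulse
      u_curr = u_prev.copy()
      for _ in range(200):
          lap = np.real(np.fft.ifft(-(kappa ** 2) * np.fft.fft(u_curr)))
          u_next = 2.0 * u_curr - u_prev + (c0 * dt) ** 2 * lap
          u_prev, u_curr = u_curr, u_next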

  5. The method of planning the energy consumption for electricity market

    NASA Astrophysics Data System (ADS)

    Russkov, O. V.; Saradgishvili, S. E.

    2017-10-01

    The limitations of existing forecast models are identified. The proposed method is based on game theory, probability theory, and forecasting of energy price relations, and it serves as the basis for planning the uneven energy consumption of an industrial enterprise. The ecological aspect of the proposed method is discussed, and a program module implementing the method's algorithm is described. Successful tests of the method at an industrial enterprise are reported. The proposed method makes it possible to minimize the difference between planned and actual energy consumption for every hour of the day. Conclusions are drawn about the applicability of the method to economic and ecological challenges.

  6. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary-value problems. The proposed method is based on Legendre wavelets, which are constructed from Legendre polynomials. The differential equation is converted into a system of algebraic equations by enforcing it at collocation points. Two test problems are discussed for validation. The results obtained from the proposed method are accurate, close to the exact solutions, and in agreement with other methods. Compared with other methods from the literature, the proposed method is computationally more efficient and yields more accurate results.
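
    The collocation idea described above can be sketched in a few lines. The example below solves a second-order test problem with a plain Legendre polynomial basis rather than the paper's sixth-order problems and Legendre wavelets, so it is only a reduced illustration of how collocation turns a boundary-value problem into an algebraic system; the basis size and test problem are our own choices.

      import numpy as np
      from numpy.polynomial import legendre as leg

      # Solve u''(x) = -pi^2 sin(pi x), u(-1) = u(1) = 0 (exact solution u = sin(pi x)).
      n = 12                                       # number of Legendre basis functions
      nodes = leg.leggauss(n - 2)[0]               # interior collocation points
      f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)

      A = np.zeros((n, n))
      b = np.zeros(n)
      for j in range(n):
          c = np.zeros(n); c[j] = 1.0              # j-th basis function P_j
          A[:n - 2, j] = leg.legval(nodes, leg.legder(c, 2))   # P_j'' at collocation points
          A[n - 2, j] = leg.legval(-1.0, c)        # boundary row u(-1) = 0
          A[n - 1, j] = leg.legval(1.0, c)         # boundary row u(+1) = 0
      b[:n - 2] = f(nodes)

      coef = np.linalg.solve(A, b)                 # algebraic system produced by collocation
      u_approx = lambda x: leg.legval(x, coef)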

  7. Modifications of the PCPT method for HJB equations

    NASA Astrophysics Data System (ADS)

    Kossaczký, I.; Ehrhardt, M.; Günther, M.

    2016-10-01

    In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification, called the piecewise predicted policy timestepping (PPPT) method, can be significantly faster if properly used. We briefly recapitulate the algorithms of the PCPT and PPPT methods and of the classical implicit method, and apply them to a passport option pricing problem with a non-standard payoff. We present the modifications needed to solve this problem effectively with the PPPT method and compare its performance with the PCPT method and the classical implicit method.

  8. Rapid Method for Sodium Hydroxide/Sodium Peroxide Fusion ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Plutonium-238 and plutonium-239 in water and air filters Method Selected for: SAM lists this method as a pre-treatment technique supporting analysis of refractory radioisotopic forms of plutonium in drinking water and air filters using the following qualitative techniques: • Rapid methods for acid or fusion digestion • Rapid Radiochemical Method for Plutonium-238 and Plutonium 239/240 in Building Materials for Environmental Remediation Following Radiological Incidents. Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  9. The Importance of Method Selection in Determining Product Integrity for Nutrition Research1234

    PubMed Central

    Mudge, Elizabeth M; Brown, Paula N

    2016-01-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823

  10. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

    Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimen margins become scarious. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method remained stable during this time and was comparable with the classic method. There were no changes in the morphologic structures of nematodes mounted by the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost effective and fast for mounting small nematodes compared with the classic method. PMID:26811729

  11. An evaluation of the efficiency of cleaning methods in a bacon factory

    PubMed Central

    Dempster, J. F.

    1971-01-01

    The germicidal efficiencies of hot water (140-150° F.) under pressure (method 1), hot water + 2% (w/v) detergent solution (method 2) and hot water + detergent + 200 p.p.m. solution of available chlorine (method 3) were compared at six sites in a bacon factory. Results indicated that sites 1 and 2 (tiled walls) were satisfactorily cleaned by each method. It was therefore considered more economical to clean such surfaces routinely by method 1. However, this method was much less efficient (31% survival of micro-organisms) on site 3 (wooden surface) than methods 2 (7% survival) and 3 (1% survival). Likewise the remaining sites (dehairing machine, black scraper and table) were least efficiently cleaned by method 1. The most satisfactory results were obtained when these surfaces were treated by method 3. Pig carcasses were shown to be contaminated by an improperly cleaned black scraper. Repeated cleaning and sterilizing (method 3) of this equipment reduced the contamination on carcasses from about 70% to less than 10%. PMID:5291745

  12. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  13. Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.

    PubMed Central

    Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W

    1994-01-01

    Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original method of adsorption (method 1) to detect anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 presented greater than one dilution discrepancies in the titers with respect to the titer observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184

  14. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  15. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1987-01-01

    A two step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  16. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency calculations but a finer mesh than the convolution quadrature method to obtain the same level of accuracy. If fast methods such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies required for the calculation, which improves the conditioning of the system matrix.

  17. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
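
    A minimal sketch of the doubled-phase-space splitting described above is given below, without the coordinate-mixing transformations the paper adds for long-term stability. The phase space (q, p) is duplicated to (q, p, Q, P), and the two sub-Hamiltonians H(q, P) and H(Q, p) each generate explicitly solvable flows that are composed in leapfrog fashion. The example Hamiltonian, initial conditions, and step size are illustrative assumptions.

      import numpy as np

      def grad_H(q, p):
          # Example inseparable Hamiltonian H(q, p) = 0.5 * p**2 * (1 + q**2).
          return p ** 2 * q, p * (1 + q ** 2)          # (dH/dq, dH/dp)

      def extended_leapfrog_step(q, p, Q, P, dt):
          # Half step of H1 = H(q, P): q and P are frozen, p and Q are advanced.
          dHdq, dHdP = grad_H(q, P)
          p -= 0.5 * dt * dHdq
          Q += 0.5 * dt * dHdP
          # Full step of H2 = H(Q, p): Q and p are frozen, q and P are advanced.
          dHdQ, dHdp = grad_H(Q, p)
          P -= dt * dHdQ
          q += dt * dHdp
          # Second half step of H1.
          dHdq, dHdP = grad_H(q, P)
          p -= 0.5 * dt * dHdq
          Q += 0.5 * dt * dHdP
          return q, p, Q, P

      q = Q = 1.0
      p = P = 0.5
      for _ in range(1000):
          q, p, Q, P = extended_leapfrog_step(q, p, Q, P, dt=0.01)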

  18. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.

  19. Comparison of gravimetric, creamatocrit and esterified fatty acid methods for determination of total fat content in human milk.

    PubMed

    Du, Jian; Gay, Melvin C L; Lai, Ching Tat; Trengove, Robert D; Hartmann, Peter E; Geddes, Donna T

    2017-02-15

    The gravimetric method is considered the gold standard for measuring the fat content of human milk. However, it is labor intensive and requires large volumes of human milk. Other methods, such as creamatocrit and esterified fatty acid assay (EFA), have also been used widely in fat analysis. However, these methods have not been compared concurrently with the gravimetric method. Comparison of the three methods was conducted with human milk of varying fat content. Correlations between these methods were high (r^2=0.99). Statistical differences (P<0.001) were observed in the overall fat measurements and within each group (low, medium and high fat milk) using the three methods. Overall, a stronger correlation with a lower mean difference (4.73 g/L) and percentage difference (5.16%) was observed with the creamatocrit than with the EFA method when compared to the gravimetric method. Furthermore, the ease of operation and real-time analysis make the creamatocrit method preferable. Copyright © 2016. Published by Elsevier Ltd.

  20. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique requires mesh generation, which can be burdensome, while the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take advantage of both the FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is also applied to compute the Jacobian in the inverse problem. Using 2D circular homogeneous models, the numerical results are validated with analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.
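
    The inverse-problem step mentioned above, an iterative regularized Gauss-Newton update, can be sketched generically as follows. The forward model, Jacobian, and data here are toy placeholders (a small linear model), not the paper's FE-EFG EIT implementation, and the damping parameter is an arbitrary choice.

      import numpy as np

      def gauss_newton(forward, jacobian, d_meas, x0, lam=1e-3, n_iter=10):
          # Regularized Gauss-Newton: x <- x + (J^T J + lam*I)^(-1) J^T (d - F(x)).
          x = x0.copy()
          for _ in range(n_iter):
              r = d_meas - forward(x)                   # data residual
              J = jacobian(x)
              H = J.T @ J + lam * np.eye(x.size)        # regularized normal equations
              x = x + np.linalg.solve(H, J.T @ r)
          return x

      # Toy usage: recover two parameters of a linear "forward model".
      A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
      forward = lambda x: A @ x
      jacobian = lambda x: A
      x_true = np.array([1.0, -0.5])
      x_est = gauss_newton(forward, jacobian, forward(x_true), np.zeros(2))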

  1. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.

  2. Monitoring the chemical production of citrus-derived bioactive 5-demethylnobiletin using surface enhanced Raman spectroscopy

    PubMed Central

    Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili

    2013-01-01

    To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was found to be simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrated that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
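
    The calibration figures of merit mentioned above (correlation coefficient and root mean square error of calibration) can be illustrated with a short sketch: fit a linear calibration curve to concentration-signal pairs and evaluate how well the back-predicted concentrations match the known ones. The numbers below are made-up placeholders, not data from the study.

      import numpy as np

      conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # known concentrations (arbitrary units)
      signal = np.array([0.9, 2.1, 3.9, 8.2, 15.8])    # measured SERS (or HPLC) responses

      slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration curve
      pred = (signal - intercept) / slope              # back-predicted concentrations
      rmsec = np.sqrt(np.mean((pred - conc) ** 2))     # root mean square error of calibration
      r = np.corrcoef(conc, pred)[0, 1]                # correlation coefficient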

  3. Flow “Fine” Synthesis: High Yielding and Selective Organic Synthesis by Flow Methods

    PubMed Central

    2015-01-01

    The concept of flow “fine” synthesis, that is, high yielding and selective organic synthesis by flow methods, is described. Some examples of flow “fine” synthesis of natural products and APIs are discussed. Flow methods have several advantages over batch methods in terms of environmental compatibility, efficiency, and safety. However, synthesis by flow methods is more difficult than synthesis by batch methods. Indeed, it has been considered that synthesis by flow methods is applicable to the production of simple gases but that it is difficult to apply to the synthesis of complex molecules such as natural products and APIs. Therefore, organic synthesis of such complex molecules has been conducted by batch methods. On the other hand, syntheses and reactions that attain high yields and high selectivities by flow methods are increasingly reported. Flow methods are leading candidates for the next generation of manufacturing methods that can mitigate environmental concerns toward a sustainable society. PMID:26337828

  4. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  5. The Importance of Method Selection in Determining Product Integrity for Nutrition Research.

    PubMed

    Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N

    2016-03-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.

  6. Student Preferences Regarding Teaching Methods in a Drug-Induced Diseases and Clinical Toxicology Course

    PubMed Central

    Gim, Suzanna

    2013-01-01

    Objectives. To determine which teaching method in a drug-induced diseases and clinical toxicology course was preferred by students and whether their preference correlated with their learning of drug-induced diseases. Design. Three teaching methods incorporating active-learning exercises were implemented. A survey instrument was developed to analyze students’ perceptions of the active-learning methods used and how they compared to the traditional teaching method (lecture). Examination performance was then correlated to students’ perceptions of various teaching methods. Assessment. The majority of the 107 students who responded to the survey found traditional lecture significantly more helpful than active-learning methods (p=0.01 for all comparisons). None of the 3 active-learning methods were preferred over the others. No significant correlations were found between students’ survey responses and examination performance. Conclusions. Students preferred traditional lecture to other instructional methods. Learning was not influenced by the teaching method or by preference for a teaching method. PMID:23966726

  7. A new sampling method for fibre length measurement

    NASA Astrophysics Data System (ADS)

    Wu, Hongyan; Li, Xianghong; Zhang, Junying

    2018-06-01

    This paper presents a new sampling method for fibre length measurement. The new method satisfies the three features of an effective sampling method, and it produces a beard with two symmetrical ends that can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.

  8. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  9. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  10. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    DTIC Science & Technology

    2016-06-12

    Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations Venkatesh Babu, Kumar Kulkarni, Sanjay...buried in soil viz., (1) coupled discrete element & particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The...DEM_PGM and identify the limitations/strengths compared to the ALE method. Discrete Element Method (DEM) can model individual particle directly, and

  11. Two Project Methods: Preliminary Observations on the Similarities and Differences between William Heard Kilpatrick's Project Method and John Dewey's Problem-Solving Method

    ERIC Educational Resources Information Center

    Sutinen, Ari

    2013-01-01

    The project method became a famous teaching method when William Heard Kilpatrick published his article "Project Method" in 1918. The key idea in Kilpatrick's project method is to try to explain how pupils learn things when they work in projects toward different common objects. The same idea of pupils learning by work or action in an…

  12. Using an Ordinal Outranking Method Supporting the Acquisition of Military Equipment

    DTIC Science & Technology

    2009-10-01

    will concentrate on the well-known ORESTE method ([10],[12]) which is complementary to the PROMETHEE methods. There are other methods belonging to...the PROMETHEE methods. This MCDM method is taught in the curriculum of the High Staff College for Military Administrators of the Belgian MoD...C(b,a) similar to the preference indicators π(a,b) and π(b,a) of the PROMETHEE methods (see [4] and SAS-080 14 and SAS-080 15). These

  13. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344

  14. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    NASA Astrophysics Data System (ADS)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a principal and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although several modified POD methods have been proposed to address it. In this paper, a new adaptive POD method, called the interpolation Grassmann manifold (IGM) method, is proposed to overcome the locality of the interpolation tangent-space of Grassmann manifold (ITGM) method over a wider parametric region. The method is demonstrated on a nonlinear rotor system with 33 degrees of freedom (DOFs), a pair of liquid-film bearings, and a pedestal looseness fault. The motion region of the rotor system is divided into a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to show the advantages of the new method and the shortcomings of the ITGM method. Comparisons of the responses are used to verify the accuracy and robustness of the adaptive POD method, and its computational efficiency is also analyzed. As a result, the new adaptive POD method exhibits strong robustness and high computational efficiency and accuracy over a wide range of parameters.
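
    The basic POD step that all of these variants build on can be sketched briefly: collect state snapshots of the full-order system, take a singular value decomposition, keep the leading modes, and project the state onto that reduced basis. The snapshot matrix below is random placeholder data and the energy threshold is an arbitrary choice; the Grassmann-manifold interpolation of the paper is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      n_dof, n_snap = 200, 60
      snapshots = rng.standard_normal((n_dof, n_snap))     # columns = full-order state snapshots

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s ** 2) / np.sum(s ** 2)
      r = int(np.searchsorted(energy, 0.999)) + 1          # modes retaining 99.9% of the energy
      Phi = U[:, :r]                                       # POD basis

      x = snapshots[:, 0]
      x_reduced = Phi.T @ x                                # reduced-order coordinates
      x_approx = Phi @ x_reduced                           # reconstruction in the full space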

  15. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed Central

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-01-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596

  16. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-03-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to HW error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing.

  17. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to the completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal, and pelvic regions, were involved. There were no statistically significant dimensional differences (at the 0.05 level) in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.

  18. An Improved Newton's Method.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    Describes Newton's method to locate roots of an equation using the Newton-Raphson iteration formula. Develops an adaptive method overcoming limitations of the iteration method. Provides the algorithm and computer program of the adaptive Newton-Raphson method. (YP)
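
    The Newton-Raphson iteration the article starts from, plus one simple adaptive safeguard (halving the step until the residual decreases), can be sketched as follows. This is a generic damped Newton iteration for illustration, not the article's specific adaptive algorithm.

      def newton_damped(f, fprime, x0, tol=1e-12, max_iter=50):
          # Newton-Raphson with step halving when a full step fails to reduce |f|.
          x = x0
          for _ in range(max_iter):
              fx = f(x)
              if abs(fx) < tol:
                  break
              step = fx / fprime(x)
              lam = 1.0
              while abs(f(x - lam * step)) >= abs(fx) and lam > 1e-8:
                  lam *= 0.5                      # adaptive damping of the Newton step
              x -= lam * step
          return x

      # Example: root of x^3 - 2x - 5 near x = 2.
      root = newton_damped(lambda x: x ** 3 - 2 * x - 5, lambda x: 3 * x ** 2 - 2, x0=2.0)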

  19. Symplectic test particle encounters: a comparison of methods

    NASA Astrophysics Data System (ADS)

    Wisdom, Jack

    2017-01-01

    A new symplectic method for handling encounters of test particles with massive bodies is presented. The new method is compared with several popular methods (RMVS3, SYMBA, and MERCURY). The new method compares favourably.

  20. The Tongue and Quill

    DTIC Science & Technology

    2004-08-01

    Ethnography, phenomenological study, grounded theory study, and content analysis. THE HISTORICAL METHOD. Outline fragments: I. Qualitative Research Methods ... Phenomenological Study; 4. Grounded Theory Study; 5. Content Analysis; II. Quantitative Research Methods; A. ... A. The Historical Method; B. General Qualitative

  1. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 26 (Internal Revenue), Income Taxes, Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.412(c)(1)-2 Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's...

  2. Comparisons of two methods of harvesting biomass for energy

    Treesearch

    W.F. Watson; B.J. Stokes; I.W. Savelle

    1986-01-01

    Two harvesting methods for utilization of understory biomass were tested against a conventional harvesting method to determine relative costs. The conventional harvesting method tested removed all pine 6 inches diameter at breast height (DBH) and larger and hardwood sawlogs as tree length logs. The two intensive harvesting methods were a one-pass and a two-pass method...

  3. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...

  4. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    ... Methods: Designation of a New Equivalent Method AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...

  5. 26 CFR 1.446-2 - Method of accounting for interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 26 (Internal Revenue), Income Taxes, Methods of Accounting. § 1.446-2 Method of accounting for interest. (a) ... account by a taxpayer under the taxpayer's regular method of accounting (e.g., an accrual method or the...

  6. Rapid Radiochemical Method for Radium-226 in Building ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Radium-226 in building materials Method Selected for: SAM lists this method for qualitative analysis of radium-226 in concrete or brick building materials Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  7. Rapid Radiochemical Method for Americium-241 in Building ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Americium-241 in building materials Method Selected for: SAM lists this method for qualitative analysis of americium-241 in concrete or brick building materials. Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  8. Draft Environmental Impact Statement: Peacekeeper Rail Garrison Program

    DTIC Science & Technology

    1988-06-01

    Table-of-contents fragment: 3.0 Environmental Analysis Methods; 3.1 Methods for Assessing Nationwide Impacts; 3.1.1 Methods for Assessing National Economic Impacts; 3.1.2 Methods for Assessing Railroad Network ...; 3.2.4 Methods for Assessing Existing and Future Baseline Conditions; 3.2.5 Methods for Assessing ...

  9. A Comparative Investigation of the Efficiency of Two Classroom Observational Methods.

    ERIC Educational Resources Information Center

    Kissel, Mary Ann

    The problem of this study was to determine whether Method A is a more efficient observational method for obtaining activity type behaviors in an individualized classroom than Method B. Method A requires the observer to record the activities of the entire class at given intervals while Method B requires only the activities of selected individuals…

  10. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  11. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
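
    The conventional complex-acoustic-intensity estimate that the proposed method builds on can be sketched as follows: cross-multiply the pressure channel with the two velocity channels in the frequency domain and take the arctangent of the resulting intensity components. The signal model, noise level, and frequency below are illustrative assumptions, not the paper's active-sonar processing chain.

      import numpy as np

      fs, f0 = 10_000, 500.0                             # sample rate (Hz), tone frequency (Hz)
      theta_true = np.deg2rad(40.0)                      # true azimuth
      t = np.arange(0, 0.2, 1 / fs)
      s = np.cos(2 * np.pi * f0 * t)
      rng = np.random.default_rng(1)

      p  = s + 0.05 * rng.standard_normal(t.size)                        # pressure channel
      vx = np.cos(theta_true) * s + 0.05 * rng.standard_normal(t.size)   # x velocity channel
      vy = np.sin(theta_true) * s + 0.05 * rng.standard_normal(t.size)   # y velocity channel

      P, Vx, Vy = (np.fft.rfft(ch) for ch in (p, vx, vy))
      kbin = np.argmax(np.abs(P))                        # dominant frequency bin
      Ix = np.real(P[kbin] * np.conj(Vx[kbin]))          # active intensity, x component
      Iy = np.real(P[kbin] * np.conj(Vy[kbin]))          # active intensity, y component
      azimuth_deg = np.degrees(np.arctan2(Iy, Ix))       # estimate close to 40 degrees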

  12. Comparison of Instream and Laboratory Methods of Measuring Sediment Oxygen Demand

    USGS Publications Warehouse

    Hall, Dennis C.; Berkas, Wayne R.

    1988-01-01

    Sediment oxygen demand (SOD) was determined at three sites in a gravel-bottomed central Missouri stream by: (1) two variations of an instream method, and (2) a laboratory method. SOD generally was greatest by the instream methods, which are considered more accurate, and least by the laboratory method. Disturbing stream sediment did not significantly decrease SOD by the instream method. Temperature ranges of up to 12 degrees Celsius had no significant effect on the SOD. In the gravel-bottomed stream, the placement of chambers was critical for obtaining reliable measurements. SOD rates were dependent on the method; therefore, care should be taken in comparing SOD data obtained by different methods. There is a need for a carefully researched standardized method for SOD determinations.

  13. Echo movement and evolution from real-time processing.

    NASA Technical Reports Server (NTRS)

    Schaffner, M. R.

    1972-01-01

    Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes when the radar is connected to a programmable real-time processor are examined. In the processor, programming is accomplished by conceiving abstract machines which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center of gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.

  14. Comparison of methods for measuring cholinesterase inhibition by carbamates

    PubMed Central

    Wilhelm, K.; Vandekar, M.; Reiner, E.

    1973-01-01

    The Acholest and tintometric methods are used widely for measuring blood cholinesterase activity after exposure to organophosphorus compounds. However, if applied for measuring blood cholinesterase activity in persons exposed to carbamates, the accuracy of the methods requires verification since carbamylated cholinesterases are unstable. The spectrophotometric method was used as a reference method and the two field methods were employed under controlled conditions. Human blood cholinesterases were inhibited in vitro by four methylcarbamates that are used as insecticides. When plasma cholinesterase activity was measured by the Acholest and spectrophotometric methods, no difference was found. The enzyme activity in whole blood determined by the tintometric method was ≤ 11% higher than when the same sample was measured by the spectrophotometric method. PMID:4541147

  15. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
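
    For contrast with the AMV approach, the mean-based second-moment baseline mentioned above can be sketched in a few lines: propagate the input means and variances through the response function with a first-order Taylor expansion, using finite-difference gradients as a stand-in for an implicitly defined finite-element response. The response function and input statistics are illustrative placeholders.

      import numpy as np

      def g(x):
          # Stand-in for an implicit performance/response function.
          return x[0] ** 2 + 3.0 * x[1]

      mu = np.array([2.0, 1.0])                        # input means
      sigma = np.array([0.1, 0.2])                     # input standard deviations (independent inputs)

      eps = 1e-6
      grad = np.array([(g(mu + eps * np.eye(2)[i]) - g(mu)) / eps for i in range(2)])

      mu_Z = g(mu)                                     # first moment (mean) of the response
      sigma_Z = np.sqrt(np.sum((grad * sigma) ** 2))   # second moment (standard deviation)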

  16. Formal methods technology transfer: Some lessons learned

    NASA Technical Reports Server (NTRS)

    Hamilton, David

    1992-01-01

    IBM has a long history in the application of formal methods to software development and verification. There have been many successes in the development of methods, tools and training to support formal methods. And formal methods have been very successful on several projects. However, the use of formal methods has not been as widespread as hoped. This presentation summarizes several approaches that have been taken to encourage more widespread use of formal methods, and discusses the results so far. The basic problem is one of technology transfer, which is a very difficult problem. It is even more difficult for formal methods. General problems of technology transfer, especially the transfer of formal methods technology, are also discussed. Finally, some prospects for the future are mentioned.

  17. Method Development in Forensic Toxicology.

    PubMed

    Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona

    2017-01-01

    In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  18. Implementation of Leak Test Methods for the International Space Station (ISS) Elements, Systems and Components

    NASA Technical Reports Server (NTRS)

    Underwood, Steve; Lvovsky, Oleg

    2007-01-01

    The International Space Station (ISS) Qualification and Acceptance Environmental Test Requirements document, SSP 41172, includes many environmental tests such as thermal vacuum and cycling, depress/repress, sinusoidal, random, and acoustic vibration, pyro shock, acceleration, humidity, pressure, electromagnetic interference (EMI)/electromagnetic compatibility (EMC), etc. This document also includes 13 leak test methods for pressure integrity verification of the ISS elements, systems, and components. These leak test methods are well known; however, the test procedure for a specific leak test method shall be written and implemented paying attention to the important procedural steps/details that, if omitted or deviated from, could impact the quality of the final product and affect crew safety. Such procedural steps/details for the different methods include, but are not limited to: - Sequence of testing, for example, pressurization and submersion steps for Method I (Immersion); - Stabilization of the mass spectrometer leak detector outputs for Method II (Vacuum Chamber or Bell Jar); - Proper data processing and taking a conservative approach while making predictions for on-orbit leakage rate for Method III (Pressure Change); - Proper calibration of the mass spectrometer leak detector for all the tracer gas (mostly helium) methods such as Method V (Detector Probe), Method VI (Hood), Method VII (Tracer Probe), Method VIII (Accumulation); - Usage of visibility aids for Method I (Immersion), Method IV (Chemical Indicator), Method XII (Foam/Liquid Application), and Method XIII (Hydrostatic/Visual Inspection). While some methods can be used to verify the total leakage rate requirement (either internal-to-external or external-to-internal), such as Vacuum Chamber, Pressure Decay, Hood, and Accumulation, other methods shall be used only as a pass/fail test for individual joints (e.g., welds, fittings, and plugs) or for troubleshooting purposes (Chemical Indicator, Detector Probe, Tracer Probe, Local Vacuum Chamber, Foam/Liquid Application, and Hydrostatic/Visual Inspection). Violations of SSP 41172 requirements have led to either retesting of hardware or accepting a risk associated with a potential system or component pressure integrity problem during flight.

  19. Temperature profiles of different cooling methods in porcine pancreas procurement.

    PubMed

    Weegman, Bradley P; Suszynski, Thomas M; Scott, William E; Ferrer Fábrega, Joana; Avgoustiniatos, Efstathios S; Anazawa, Takayuki; O'Brien, Timothy D; Rizzari, Michael D; Karatzas, Theodore; Jie, Tun; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K

    2014-01-01

    Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. This study examines the effect of four different cooling methods on core porcine pancreas temperature (n = 24) and histopathology (n = 16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all three cooling approaches. Surface cooling alone (Method A) gradually decreased core pancreas temperature to <10 °C after 30 min. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15-20 °C within the first 2 min of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (P = 0.36). Histological scores differed between the cooling methods (P = 0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (P = 0.02) and Methods A and D (P = 0.02), but not between Methods C and D (P = 0.95), which may highlight the importance of early cooling using an intraductal infusion. In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles during procurement and improved histopathology scores. These data may also have implications for human pancreas procurement, as use of an intraductal infusion is not common practice. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. The PneuCarriage Project: A Multi-Centre Comparative Study to Identify the Best Serotyping Methods for Examining Pneumococcal Carriage in Vaccine Evaluation Studies

    PubMed Central

    Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim

    2015-01-01

    Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033

  1. [The clinical value of sentinel lymph node detection in laryngeal and hypopharyngeal carcinoma patients with clinically negative neck by methylene blue method and radiolabeled tracer method].

    PubMed

    Zhao, Xin; Xiao, Dajiang; Ni, Jianming; Zhu, Guochen; Yuan, Yuan; Xu, Ting; Zhang, Yongsheng

    2014-11-01

    To investigate the clinical value of sentinel lymph node (SLN) detection in laryngeal and hypopharyngeal carcinoma patients with clinically negative neck (cN0) by the methylene blue method, the radiolabeled tracer method, and the combination of these two methods. Thirty-three patients with cN0 laryngeal carcinoma and six patients with cN0 hypopharyngeal carcinoma underwent SLN detection using both the methylene blue and radiolabeled tracer methods. All patients received an injection of the radioactive isotope (99)Tc(m)-sulfur colloid (SC) and methylene blue into the carcinoma before surgery; all then underwent intraoperative lymphatic mapping with a handheld gamma-detecting probe and identification of blue-dyed SLNs. After the mapping of SLNs, selective neck dissections and tumor resections were performed. The results of SLN detection by the radiolabeled tracer, dye, and combination of both methods were compared. The detection rates of SLN by the radiolabeled tracer, methylene blue and combined methods were 89.7%, 79.5% and 92.3%, respectively. The number of detected SLNs differed significantly between the radiolabeled tracer method and the combined method, and also between the methylene blue method and the combined method. The detection rates of the methylene blue and radiolabeled tracer methods were significantly different from that of the combined method (P < 0.05). Nine patients were found to have lymph node metastasis by final pathological examination. The accuracy and false-negative rate of SLN detection with the combined method were 97.2% and 11.1%, respectively. The combined method using radiolabeled tracer and methylene blue can improve the detection rate and accuracy of sentinel lymph node detection. Furthermore, sentinel lymph node detection can accurately represent the cervical lymph node status in cN0 laryngeal and hypopharyngeal carcinoma.

  2. Slump sitting X-ray of the lumbar spine is superior to the conventional flexion view in assessing lumbar spine instability.

    PubMed

    Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2017-03-01

    Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This was a prospective study of radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. The measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001 and p=.001, respectively). There was no significant difference between the methods when measuring lumbar displacements (p=.814). The new method of slump sitting dynamic radiography was shown to be superior to the conventional method in measuring angular ROM and non-inferior to the conventional method in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338

  4. [Social aspects of natural methods (author's transl)].

    PubMed

    Linhard, J

    1981-01-01

    It is rather difficult to distinguish between "natural methods" and "non-natural" or "unnatural" methods. "Natural methods" should therefore be defined as those which are used without any additional product. Use and success depend on the motivation and control of the couple. These methods are: postcoital douching, prolonged lactation, the rhythm method according to Knaus or to Ogino based on observing BBT, observation of cervical mucus according to Billings, coitus interruptus, and coitus reservatus. As far as we know, these methods have been used since primeval times and have been commented on during different periods and in different places as being used with the support of all three monotheistic religions until the era of Augustine and Thomas Aquinas. From then on the Christian, and later the Catholic, faith saw human reproduction as the purpose of matrimony and therefore banned all methods with the exception of the rhythm method. It has been assumed that the decrease of fertility in Europe since the industrial revolution was a result of using these methods, primarily coitus interruptus, which still seems to be widespread. It is therefore hard to understand why so little is known about the impact of these methods on the medical and social sectors. As long as the ideal method is not available, the natural methods should be given a place in the development of a contraceptive methodology. Since the natural methods do not cost anything, they could help to carry forward family planning in countries with low-income populations. But before employing them for this purpose they have to be studied with regard to their medicobiological as well as their social aspects in order to learn more about these old and much-used methods. (Author's)

  5. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, Norwood B.; Walker, J.F.

    1992-01-01

    Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  6. Fatigue properties of JIS H3300 C1220 copper for strain life prediction

    NASA Astrophysics Data System (ADS)

    Harun, Muhammad Faiz; Mohammad, Roslina

    2018-05-01

    The existing methods for estimating strain-life parameters are dependent on the material's monotonic tensile properties. However, a few of these methods yield quite complicated expressions for calculating fatigue parameters, and are specific to certain groups of materials only. The Universal Slopes method, Modified Universal Slopes method, Uniform Material Law, Hardness method, and Medians method are a few existing methods for predicting strain-life fatigue based on monotonic tensile material properties and material hardness. In the present study, nine methods for estimating fatigue life and properties are applied to JIS H3300 C1220 copper to determine the best methods for strain-life estimation of this ductile material. Experimental strain-life curves are compared to estimations obtained using each method. Muralidharan-Manson's Modified Universal Slopes method and Bäumel-Seeger's method for unalloyed and low-alloy steels are found to yield better accuracy in estimating fatigue life, with a deviation of less than 25%. However, both methods only yield good accuracy for fewer than 1000 cycles, or for strain amplitudes of more than 1% and less than 6%. Manson's original Universal Slopes method and Ong's Modified Four-Point Correlation method are found to predict the strain-life fatigue of copper with better accuracy for high numbers of cycles, i.e. strain amplitudes of less than 1%. The differences between mechanical behavior during monotonic and cyclic loading, and the complexity in deciding the coefficients in an equation, are probably the reasons for the lack of a reliable method for estimating fatigue behavior using the monotonic properties of a group of materials. It is therefore suggested that a differential approach and new expressions be developed to estimate the strain-life fatigue parameters for ductile materials such as copper.

  7. Innovative application of the moisture analyzer for determination of dry mass content of processed cheese

    NASA Astrophysics Data System (ADS)

    Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena

    2018-04-01

    The aim of this work was to present an alternative method for determination of the total dry mass content in processed cheese. The authors claim that the presented method can be used in industry's quality control laboratories for routine testing and for quick in-process control. For the tests, both the reference method for determination of dry mass in processed cheese and the moisture analyzer method were used. The tests were carried out for three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at a temperature of 102 °C for about 4 h. The moisture analyzer test required method validation with regard to the drying temperature range and the mass of the analyzed sample. An optimum drying temperature of 110 °C was determined experimentally. For the Hochland cream processed cheese sample, the total dry mass content obtained using the reference method was 38.92%, whereas using the moisture analyzer method it was 38.74%. The average analysis time with the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37%, and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic, the reference method gave a value of 36.88%, and the alternative method 37.02%. The average time of those determinations was 16 min. The results confirmed that use of the moisture analyzer is effective: consistent values of dry mass content were obtained with both methods. According to the authors, the much shorter measurement time of the moisture analyzer method is a key criterion in selecting methods for in-process control and final quality control.

  8. Alternative microbial methods: An overview and selection criteria.

    PubMed

    Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke

    2010-09-01

    This study provides an overview of, and criteria for, the selection of a method other than the reference method for microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included with reference to relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions, and thus the prospective use of the microbial test results, with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account when selecting a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.

  9. Study on ABO and RhD blood grouping: Comparison between conventional tile method and a new solid phase method (InTec Blood Grouping Test Kit).

    PubMed

    Yousuf, R; Abdul Ghani, S A; Abdul Khalid, N; Leong, C F

    2018-04-01

    The 'InTec Blood Grouping Test Kit', which uses solid-phase technology, is a new method that may be used at outdoor blood donation sites or at the bedside as an alternative to the conventional tile method, in view of its stability at room temperature and its fulfilment of the criteria for a point-of-care test. This study aimed to compare the efficiency of this solid-phase method (InTec Blood Grouping Test Kit) with the conventional tile method in determining the ABO and RhD blood groups of healthy donors. A total of 760 voluntary donors who attended the Blood Bank, Penang Hospital, or offsite blood donation campaigns from April to May 2014 were recruited. The ABO and RhD blood groups were determined by the conventional tile method and the solid-phase method, with the tube method used as the gold standard. For ABO blood grouping, the tile method showed 100% concordance with the gold standard tube method, whereas the solid-phase method showed concordant results for only 754/760 samples (99.2%). Therefore, for ABO grouping, the tile method had 100% sensitivity and specificity, while the solid-phase method had a slightly lower sensitivity of 97.7%, but both had a good specificity of 100%. For RhD grouping, the tile and solid-phase methods each grouped one RhD-positive specimen as negative, giving a sensitivity of 99.9% and a specificity of 100% for both methods. The 'InTec Blood Grouping Test Kit' is suitable for offsite usage because of its simplicity and user friendliness. However, further improvement, such as adding an internal quality control, may increase the sensitivity and validity of the test results.

  10. Knowledge, beliefs and use of nursing methods in preventing pressure sores in Dutch hospitals.

    PubMed

    Halfens, R J; Eggink, M

    1995-02-01

    Different methods have been developed in the past to prevent patients from developing pressure sores. The consensus guidelines developed in the Netherlands make a distinction between preventive methods useful for all patients, methods useful only in individual cases, and methods which are not useful at all. This study explores the extent of use of the different methods within Dutch hospitals, and the knowledge and beliefs of nurses regarding the usefulness of these methods. A mail questionnaire was sent to a representative sample of nurses working within Dutch hospitals. A total of 373 questionnaires were returned and used for the analyses. The results showed that many methods judged by the consensus report as not useful, or only useful in individual cases, are still being used. Some methods which are judged as useful, such as the use of a risk assessment scale, are used on only a few wards. The opinions of nurses regarding the usefulness of the methods differ from the guidelines of the consensus committee. Although there is agreement about most of the useful methods, there is less agreement about the methods which are useful only in individual cases or which are not useful at all. In particular, the use of massage and cream is, in the opinion of the nurses, useful in individual or in all cases.

  11. Automatic allograft bone selection through band registration and its application to distal femur.

    PubMed

    Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui

    2017-09-01

    Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. In addition, there is a strong demand for automatic allograft bone selection methods, as automatic methods could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not use the contralateral bones. Firstly, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, can capture more of the local structure of the defect segment. Therefore, our method can achieve robust alignment and high registration accuracy between the allograft and the recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lift and lower distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms the surface method and the contour method.

  12. Validation of a questionnaire method for estimating extent of menstrual blood loss in young adult women.

    PubMed

    Heath, A L; Skeaff, C M; Gibson, R S

    1999-04-01

    The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD; 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand). Data are presented for 29 women. A recall method and a record method for estimating extent of menstrual loss were validated against a weighed reference method. Spearman rank correlation coefficients between blood loss assessed by Weighed Menstrual Loss and Menstrual Record was rs = 0.47 (p = 0.012), and between Weighed Menstrual Loss and Menstrual Recall, was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.

  13. A comparative study of novel spectrophotometric methods based on isosbestic points; application on a pharmaceutical ternary mixture

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    This work demonstrates the application of the isosbestic points present in different absorption spectra. Three novel spectrophotometric methods were developed: the first is the absorption subtraction (AS) method, utilizing the isosbestic point in zero-order absorption spectra; the second is the amplitude modulation (AM) method, utilizing the isosbestic point in ratio spectra; and the third is the amplitude summation (A-Sum) method, utilizing the isosbestic point in derivative spectra. The three methods were applied for the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The components at the isosbestic point were determined using the corresponding unified regression equation at this point, with no need for a complementary method. The obtained results were statistically compared to each other and to those of the developed PLS model. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, where no significant difference was observed.

  14. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that, for the immediate needs of the aircraft industry, the semi-empirical method based on recent high-quality acoustic databases is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably, and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need to develop further for this method to become useful. Nonetheless, the authors propose that the method based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  15. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luquing Luo; Robert Nourgaliev

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available and yet invaluable information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the underlying second-order DG method, and provides an increase in performance over the third-order DG method in terms of computing time and storage requirements.

  16. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing-value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization algorithm (EM), the regression method, mean imputation, the deletion method, and Markov chain Monte Carlo (MCMC) were used to impute the missing data. The results of the different methods were compared according to distribution characteristics, accuracy and precision. Results: HIV VL data could not be transformed into a normal distribution. All the methods performed well for data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed datasets obtained with the different methods were all close to the original mean. The EM, regression, mean imputation, and deletion methods underestimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for estimating mean HIV VL in the investigated population.
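
    As a rough illustration of two of the simpler approaches named above (mean imputation and the regression method), the following self-contained sketch shows why mean imputation shrinks the spread of viral-load data while regression imputation preserves more of its structure. It uses synthetic data and NumPy only, not the authors' SPSS 17.0 workflow, and the covariate is invented for the example.

      # Hypothetical sketch (not the authors' SPSS workflow): mean vs. regression
      # imputation of missing log10 viral-load values, using only NumPy.
      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated "complete" log10 VL with an invented covariate (e.g. CD4 count),
      # then values removed completely at random (MCAR).
      n = 500
      cd4 = rng.normal(400, 100, n)
      log_vl = 6.0 - 0.004 * cd4 + rng.normal(0, 0.5, n)
      missing = rng.random(n) < 0.2
      observed = ~missing

      # Method 1: mean imputation (replaces missing values with the observed mean).
      mean_imputed = log_vl.copy()
      mean_imputed[missing] = log_vl[observed].mean()

      # Method 2: regression imputation (predict missing values from the covariate).
      slope, intercept = np.polyfit(cd4[observed], log_vl[observed], 1)
      reg_imputed = log_vl.copy()
      reg_imputed[missing] = intercept + slope * cd4[missing]

      for name, data in [("true", log_vl), ("mean", mean_imputed), ("regression", reg_imputed)]:
          print(f"{name:>10}: mean={data.mean():.3f}, sd={data.std():.3f}")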

  17. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, the parameter estimation of the GWOLR model uses the maximum likelihood method, but this constructs a system of nonlinear equations, making it difficult to find the solution. Therefore, an approximate solution is needed. There are two popular numerical methods: Newton's method and the Quasi-Newton (QN) method. Newton's method requires considerable computation time since it involves the Jacobian (derivative) matrix. The QN method overcomes this drawback of Newton's method by replacing the derivative computation with direct function computations. The QN method uses a Hessian matrix approximation, which includes the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is categorized as a QN method and shares the DFP formula's property of maintaining a positive definite Hessian matrix. The BFGS method requires large memory in executing the program, so another algorithm is needed to decrease memory usage, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. With reference to the research findings, we found that the BFGS and L-BFGS methods have arithmetic operation counts of O(n^2) and O(nm), respectively.
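
    As a small, hedged illustration of limited-memory BFGS (not the paper's GWOLR code), the sketch below minimizes an ordinary logistic-regression negative log-likelihood with SciPy's L-BFGS-B routine on synthetic data; the maxcor option sets the number of stored correction pairs that gives the method its limited-memory character.

      # Illustrative sketch only: minimizing a negative log-likelihood with L-BFGS-B,
      # using plain logistic regression as a stand-in for the GWOLR likelihood.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 3))
      beta_true = np.array([0.5, -1.0, 2.0])
      y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)

      def neg_log_likelihood(beta):
          eta = X @ beta
          # sum of log(1 + exp(eta)) - y*eta, written in a numerically stable form
          return np.sum(np.logaddexp(0.0, eta) - y * eta)

      res = minimize(neg_log_likelihood, x0=np.zeros(3), method="L-BFGS-B",
                     options={"maxcor": 10})   # maxcor = number of stored correction pairs
      print(res.x, res.success)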

  18. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. To solve the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponentially transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.

  19. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose an informative DOM node extraction method from a Web page for preprocessing in Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, and its learning set consists of hundreds of Web pages together with annotations of the informative DOM nodes of those pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative-DOM-node extraction method (either the proposed method or an existing method) with existing noise elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also appear in other Web pages of the same Web site as the target page. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.

  20. Comparative study between the hand-wrist method and cervical vertebral maturation method for evaluation skeletal maturity in cleft patients.

    PubMed

    Manosudprasit, Montian; Wangsrimongkol, Tasanee; Pisek, Poonsak; Chantaramungkorn, Melissa

    2013-09-01

    To test the measure of agreement between the Skeletal Maturation Index (SMI) method of Fishman, using hand-wrist radiographs, and the Cervical Vertebral Maturation Index (CVMI) method for assessing skeletal maturity in cleft patients. Hand-wrist and lateral cephalometric radiographs of 60 cleft subjects (35 females and 25 males, age range: 7-16 years) were used. Skeletal age was assessed using an adjustment to the SMI method of Fishman and compared with the CVMI method of Hassel and Farman. Agreement between skeletal age assessed by both methods and the intra- and inter-examiner reliability of both methods were tested by weighted kappa analysis. There was good agreement between the two methods, with a kappa value of 0.80 (95% CI = 0.66-0.88, p-value <0.001). Intra- and inter-examiner reliability of both methods was very good, with kappa values ranging from 0.91 to 0.99. The CVMI method can be used as an alternative to the SMI method for skeletal age assessment in cleft patients, with the benefit of not requiring an additional radiograph and thereby avoiding extra radiation exposure. Comparing the two methods, the present study found better agreement from the peak of adolescence onwards.

  1. Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.

    PubMed

    Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y

    2015-11-01

    Body surface area is important in determining the severity of psoriasis. However, an objective, reliable, and practical method is still needed for this purpose. We performed computer image analysis (CIA) of psoriatic area using the ImageJ freeware to determine whether this method could be used for objective evaluation of psoriatic lesion area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, gold standard). The results assessed by the three methods were analyzed, and reliability and influencing factors were evaluated. Both the T- and E-methods correlated strongly with the M-method, with the T-method showing a slightly stronger correlation. Both the T- and E-methods had good consistency between evaluators. All three methods were able to detect the change in psoriatic area after treatment, although the E-method tended to overestimate it. CIA with the ImageJ freeware is reliable and practical for quantitatively assessing psoriatic lesion area. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Mixed Methods in CAM Research: A Systematic Review of Studies Published in 2012

    PubMed Central

    Bishop, Felicity L.; Holmes, Michelle M.

    2013-01-01

    Background. Mixed methods research uses qualitative and quantitative methods together in a single study or a series of related studies. Objectives. To review the prevalence and quality of mixed methods studies in complementary medicine. Methods. All studies published in the top 10 integrative and complementary medicine journals in 2012 were screened. The quality of mixed methods studies was appraised using a published tool designed for mixed methods studies. Results. 4% of papers (95 out of 2349) reported mixed methods studies, 80 of which met criteria for applying the quality appraisal tool. The most popular formal mixed methods design was triangulation (used by 74% of studies), followed by embedded (14%), sequential explanatory (8%), and finally sequential exploratory (5%). Quantitative components were generally of higher quality than qualitative components; when quantitative components involved RCTs they were of particularly high quality. Common methodological limitations were identified. Most strikingly, none of the 80 mixed methods studies addressed the philosophical tensions inherent in mixing qualitative and quantitative methods. Conclusions and Implications. The quality of mixed methods research in CAM can be enhanced by addressing philosophical tensions and improving reporting of (a) analytic methods and reflexivity (in qualitative components) and (b) sampling and recruitment-related procedures (in all components). PMID:24454489

  3. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
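
    For readers who want a concrete picture of the two stages compared above, the following sketch pairs a centroid-style initial estimate with an algebraic least-squares (Kasa) circle fit on a synthetic, half-visible stem cross-section. It is an illustrative stand-in under stated assumptions, not the authors' implementation, and the optimal-circle and Monte Carlo refinements are not reproduced.

      # Illustrative sketch: DBH from a horizontal slice of stem points, with a
      # centroid-style initial guess and an algebraic least-squares circle fit.
      import numpy as np

      def centroid_initial_circle(xy):
          """Initial guess: centre at the centroid, radius = mean distance to it."""
          center = xy.mean(axis=0)
          radius = np.mean(np.linalg.norm(xy - center, axis=1))
          return center, radius

      def kasa_circle_fit(xy):
          """Algebraic least-squares circle fit: solve x^2 + y^2 = a*x + b*y + c,
          then centre = (a/2, b/2) and radius = sqrt(c + |centre|^2)."""
          x, y = xy[:, 0], xy[:, 1]
          A = np.column_stack([x, y, np.ones_like(x)])
          rhs = x**2 + y**2
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          center = np.array([a / 2.0, b / 2.0])
          radius = np.sqrt(c + center @ center)
          return center, radius

      # Synthetic single-scan-like data: points only on one side of the stem.
      rng = np.random.default_rng(2)
      theta = rng.uniform(0, np.pi, 300)          # half circumference visible
      true_r = 0.20                               # 40 cm DBH
      xy = np.column_stack([true_r * np.cos(theta), true_r * np.sin(theta)])
      xy += rng.normal(0, 0.003, xy.shape)        # scanner noise

      c0, r0 = centroid_initial_circle(xy)
      c1, r1 = kasa_circle_fit(xy)
      print(f"initial DBH estimate: {2 * r0 * 100:.1f} cm, least-squares fit: {2 * r1 * 100:.1f} cm")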

  4. Chemometric methods for the simultaneous determination of some water-soluble vitamins.

    PubMed

    Mohamed, Abdel-Maaboud I; Mohamed, Horria A; Mohamed, Niveen A; El-Zahery, Marwa R

    2011-01-01

    Two spectrophotometric approaches, derivative and multivariate methods, were applied for the determination of binary, ternary, and quaternary mixtures of the water-soluble vitamins thiamine HCl (I), pyridoxine HCl (II), riboflavin (III), and cyanocobalamin (IV). The first approach comprises the first derivative and first derivative of ratio spectra methods, and the second the classical least squares and principal components regression methods. Both approaches are based on spectrophotometric measurements of the studied vitamins in 0.1 M HCl solution in the range of 200-500 nm for all components. Linear calibration curves were obtained over the range 2.5-90 microg/mL, and the correlation coefficients ranged from 0.9991 to 0.9999. These methods were applied for the analysis of the following mixtures: (I) and (II); (I), (II), and (III); (I), (II), and (IV); and (I), (II), (III), and (IV). The described methods were successfully applied for the determination of vitamin combinations in synthetic mixtures and dosage forms from different manufacturers. The recovery ranged from 96.1 +/- 1.2 to 101.2 +/- 1.0% for the derivative methods and from 97.0 +/- 0.5 to 101.9 +/- 1.3% for the multivariate methods. The results of the developed methods were compared with those of reported methods and showed good accuracy and precision.
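
    To make the classical least squares (CLS) step concrete, the sketch below fits a synthetic three-component mixture spectrum as a linear combination of pure-component spectra via linear least squares; the band positions and concentrations are invented for illustration and do not correspond to the vitamins or wavelength responses studied in the paper.

      # Hypothetical CLS sketch: mixture spectrum ~ concentrations @ pure spectra
      # (Beer-Lambert additivity), solved by linear least squares on synthetic data.
      import numpy as np

      rng = np.random.default_rng(3)
      wavelengths = np.linspace(200, 500, 301)

      def gaussian_band(center, width, height):
          return height * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

      # Pure-component absorptivity spectra (rows) for a made-up 3-component mixture.
      K = np.vstack([
          gaussian_band(245, 15, 1.0),   # "component I"
          gaussian_band(290, 20, 0.8),   # "component II"
          gaussian_band(445, 25, 1.2),   # "component III"
      ])

      true_conc = np.array([10.0, 25.0, 5.0])            # arbitrary units
      mixture = true_conc @ K + rng.normal(0, 0.002, wavelengths.size)

      # CLS step: estimate concentrations from the mixture spectrum.
      est_conc, *_ = np.linalg.lstsq(K.T, mixture, rcond=None)
      print("estimated concentrations:", np.round(est_conc, 2))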

  5. Perceptions of rural women about contraceptive usage in district Khushab, Punjab.

    PubMed

    Tabassum, Aqeela; Manj, Yasir Nawaz; Gunjial, Tahira Rehman; Nazir, Salma

    2016-12-01

    To identify the perceptions of rural women about modern contraceptive methods and to ascertain the psycho-social and economic attitudes of women towards family planning methods. This cross-sectional study was conducted at the University of Sargodha, Sargodha, Pakistan, from December 2014 to March 2015, and comprised married women. The sample was selected using a multistage sampling technique with the Fitzgibbon table. Participants were interviewed regarding their use of family planning methods. SPSS 16 was used for data analysis. Of the 500 women, 358 (71.6%) were never-users and 142 (28.4%) were past-users of family planning methods. Moreover, 52 (14.5%) of never-users did not know about a single modern contraceptive method. Of the past-users, 43 (30.3%) knew about 1-3 methods and 99 (69.7%) about 4 or more methods. Furthermore, 153 (30.6%) respondents graded condoms as good, 261 (55.2%) agreed that family planning helped in improving one's standard of living to a great extent, while 453 (90.6%) indicated that family planning methods were not expensive. In addition, 366 (71.2%) respondents believed that using a contraceptive method caused infertility. Dissatisfaction with methods, method failure, bad experiences with side effects, privacy concerns and various myths associated with the methods were strongly related to the non-use of modern contraceptive methods.

  6. Fast polarimetric dehazing method for visibility enhancement in HSI colour space

    NASA Astrophysics Data System (ADS)

    Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin

    2017-09-01

    Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in the hue, saturation and intensity (HSI) colour space based on the polarimetric imaging technique. We implement the polarimetric dehazing method in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This method not only preserves the capacity to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality and, moreover, its implementation is much faster. The proposed method is promising for real-time image haze removal and video haze removal applications.

  7. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model of microalgae Botryococcus braunii sp. growth by the least-squares method. The Monod equation is a non-linear equation which can be transformed into a linear form and solved by least-squares linear regression. Meanwhile, the Gauss-Newton method is an alternative method for solving the non-linear least-squares problem, with the aim of obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus braunii sp. can be estimated by the least-squares method. However, the parameter estimates obtained by the non-linear least-squares method are more accurate than those of the linear least-squares method, since the SSE of the non-linear least-squares method is smaller than that of the linear least-squares method.
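
    A minimal sketch of the two fitting routes discussed above, linearised least squares versus nonlinear least squares, applied to the Monod equation on synthetic data (not the paper's Botryococcus braunii measurements); SciPy's curve_fit stands in for the Gauss-Newton solver.

      # Illustrative sketch: fit mu = mu_max * S / (Ks + S) by (1) linearised least
      # squares on the Lineweaver-Burk form 1/mu = (Ks/mu_max)(1/S) + 1/mu_max,
      # and (2) nonlinear least squares, then compare the SSE of both fits.
      import numpy as np
      from scipy.optimize import curve_fit

      def monod(S, mu_max, Ks):
          return mu_max * S / (Ks + S)

      rng = np.random.default_rng(4)
      S = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])      # substrate concentration
      mu = monod(S, mu_max=1.2, Ks=0.5) + rng.normal(0, 0.03, S.size)

      # (1) Linearised fit.
      slope, intercept = np.polyfit(1.0 / S, 1.0 / mu, 1)
      mu_max_lin = 1.0 / intercept
      Ks_lin = slope * mu_max_lin

      # (2) Nonlinear fit.
      (mu_max_nl, Ks_nl), _ = curve_fit(monod, S, mu, p0=[1.0, 1.0])

      def sse(mu_max, Ks):
          return np.sum((mu - monod(S, mu_max, Ks)) ** 2)

      print(f"linearised : mu_max={mu_max_lin:.3f}, Ks={Ks_lin:.3f}, SSE={sse(mu_max_lin, Ks_lin):.4f}")
      print(f"nonlinear  : mu_max={mu_max_nl:.3f}, Ks={Ks_nl:.3f}, SSE={sse(mu_max_nl, Ks_nl):.4f}")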

  8. A Mixed Prioritization Operators Strategy Using A Single Measurement Criterion For AHP Application Development

    NASA Astrophysics Data System (ADS)

    Yuen, Kevin Kam Fung

    2009-10-01

    The most appropriate prioritization method is still one of the unsettled issues of the Analytic Hierarchy Process, although many studies have been made and applied. Interestingly, many AHP applications apply only Saaty's Eigenvector method, even though many studies have found that this method may produce rank reversals and have proposed various prioritization methods as alternatives. Some of these methods have been shown to be better than the Eigenvector method. However, they seem not to have attracted the attention of researchers. In this paper, eight important prioritization methods are reviewed. A Mixed Prioritization Operators Strategy (MPOS) is developed to select a vector which is prioritized by the most appropriate prioritization operator. To verify this new method, a case study of high school selection is revisited using the proposed method. The contribution is that MPOS is useful for solving prioritization problems in the AHP.
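
    For context, a short sketch of Saaty's Eigenvector prioritization, the operator most AHP applications rely on according to the abstract: the priority vector is the principal eigenvector of the pairwise comparison matrix. The 3x3 matrix is invented for illustration, and the MPOS selection logic itself is not reproduced.

      # Saaty's Eigenvector prioritization on a made-up pairwise comparison matrix.
      import numpy as np

      A = np.array([
          [1.0,   3.0, 5.0],
          [1/3., 1.0,  2.0],
          [1/5., 1/2., 1.0],
      ])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)                  # index of the principal eigenvalue
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                 # normalised priority vector

      # Saaty's consistency index CI = (lambda_max - n) / (n - 1).
      n = A.shape[0]
      CI = (eigvals[k].real - n) / (n - 1)
      print("priorities:", np.round(w, 3), " CI =", round(CI, 4))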

  9. On Multifunctional Collaborative Methods in Engineering Science

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2001-01-01

    Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented, with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as the most robust. The multiple-method approach is advantageous when interfacing diverse disciplines, in which each method's strengths are utilized.

  10. Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.

    PubMed

    Arora, Naveen Kumar; Verma, Maya

    2017-12-01

    In this study, siderophore production by various bacteria amongst the plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were taken to estimate their siderophore-producing ability by the standard method (chrome azurol sulphonate, CAS, assay) as well as by a 96-well microplate method. Production of siderophore was estimated in percent siderophore units by both methods. The data obtained by both methods correlated positively with each other, supporting the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one go, saving time and chemicals, making it much less tedious, and also cheaper than the method currently in use. The modified microtiter plate method as proposed here makes it far easier to screen the plant-growth-promoting character of plant-associated bacteria.
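
    For context, percent siderophore units in the CAS assay are conventionally computed from the absorbance of the CAS reference (A_r) and of the sample (A_s); this standard relation is assumed here rather than taken from the abstract:

      \[ \%\,\mathrm{SU} = \frac{A_r - A_s}{A_r} \times 100 \]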

  11. What can Numerical Computation do for the History of Science? (Study of an Orbit Drawn by Newton on a Letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Stuchi, Teresa; Cardozo Dias, P.

    2013-05-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke's method is a second-order symplectic area-preserving algorithm, and that the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke's method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton's drawing.

  12. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Cardozo Dias, Penha Maria; Stuchi, T. J.

    2013-11-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
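
    To illustrate the qualitative difference the abstract describes between a first-order scheme and a second-order symplectic, area-preserving one, the sketch below integrates motion under a constant-magnitude attracting central force with explicit Euler and with the Störmer-Verlet (leapfrog) scheme, a standard second-order symplectic integrator used here as a stand-in for the impulse construction analyzed in the paper; step size and initial conditions are arbitrary.

      # Energy drift comparison: first-order explicit Euler vs. second-order
      # symplectic Stormer-Verlet, for a constant-magnitude central force.
      import numpy as np

      def accel(r, g=1.0):
          return -g * r / np.linalg.norm(r)        # constant-magnitude force toward origin

      def explicit_euler(r, v, h):
          return r + h * v, v + h * accel(r)

      def stormer_verlet(r, v, h):
          a0 = accel(r)
          r_new = r + h * v + 0.5 * h * h * a0
          v_new = v + 0.5 * h * (a0 + accel(r_new))
          return r_new, v_new

      def energy(r, v, g=1.0):
          return 0.5 * v @ v + g * np.linalg.norm(r)   # potential of constant force is g*|r|

      r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 0.8])
      h, steps = 0.05, 2000
      for stepper in (explicit_euler, stormer_verlet):
          r, v = r0.copy(), v0.copy()
          for _ in range(steps):
              r, v = stepper(r, v, h)
          drift = energy(r, v) - energy(r0, v0)
          print(f"{stepper.__name__:>16}: energy drift after {steps} steps = {drift:+.4f}")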

  13. Adenosine Monophosphate-Based Detection of Bacterial Spores

    NASA Technical Reports Server (NTRS)

    Kern, Roger G.; Chen, Fei; Venkateswaran, Kasthuri; Hattori, Nori; Suzuki, Shigeya

    2009-01-01

    A method of rapid detection of bacterial spores is based on the discovery that a heat shock consisting of exposure to a temperature of 100 C for 10 minutes causes the complete release of adenosine monophosphate (AMP) from the spores. This method could be an alternative to the method described in the immediately preceding article. Unlike that method and related prior methods, the present method does not involve germination and cultivation; this feature is an important advantage because in cases in which the spores are those of pathogens, delays involved in germination and cultivation could increase risks of infection. Also, in comparison with other prior methods that do not involve germination, the present method affords greater sensitivity. At present, the method is embodied in a laboratory procedure, though it would be desirable to implement the method by means of a miniaturized apparatus in order to make it convenient and economical enough to encourage widespread use.

  14. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable to the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  15. A variationally coupled FE-BE method for elasticity and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Lu, Y. Y.; Belytschko, T.; Liu, W. K.

    1991-01-01

    A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.

  16. Comparing strategies to assess multiple behavior change in behavioral intervention studies.

    PubMed

    Drake, Bettina F; Quintiliani, Lisa M; Sapp, Amy L; Li, Yi; Harley, Amy E; Emmons, Karen M; Sorensen, Glorian

    2013-03-01

    Alternatives to individual behavior change methods have been proposed; however, little has been done to investigate how these methods compare. We explore four methods that quantify change in multiple risk behaviors, targeting four common behaviors. We utilized data from two cluster-randomized, multiple behavior change trials conducted in two settings: small businesses and health centers. The methods used were: (1) summative; (2) z-score; (3) optimal linear combination; and (4) impact score. In the Small Business study, methods 2 and 3 revealed similar outcomes; however, physical activity did not contribute to method 3. In the Health Centers study, similar results were found with each of the methods. Multivitamin intake contributed significantly more to each of the summary measures than other behaviors. Selection of methods to assess multiple behavior change in intervention trials must consider the study design and the targeted population when determining the appropriate method(s) to use.

  17. Fast multipole methods on a cluster of GPUs for the meshless simulation of turbulence

    NASA Astrophysics Data System (ADS)

    Yokota, R.; Narumi, T.; Sakamaki, R.; Kameoka, S.; Obi, S.; Yasuoka, K.

    2009-11-01

    Recent advances in the parallelizability of fast N-body algorithms, and the programmability of graphics processing units (GPUs) have opened a new path for particle based simulations. For the simulation of turbulence, vortex methods can now be considered as an interesting alternative to finite difference and spectral methods. The present study focuses on the efficient implementation of the fast multipole method and pseudo-particle method on a cluster of NVIDIA GeForce 8800 GT GPUs, and applies this to a vortex method calculation of homogeneous isotropic turbulence. The results of the present vortex method agree quantitatively with those of the reference calculation using a spectral method. We achieved a maximum speed of 7.48 TFlops using 64 GPUs, and the cost performance was near 9.4/GFlops. The calculation of the present vortex method on 64 GPUs took 4120 s, while the spectral method on 32 CPUs took 4910 s.

  18. Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.

    PubMed

    Li, Ben; Stenstrom, M K

    2014-01-01

    One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method, as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and the Engquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.

  19. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four accurate, precise, and sensitive spectrophotometric methods are developed for simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is absorption factor (AF). Method (B) is the new ratio difference method (RD), which measures the difference in amplitudes between 210 and 226 nm. Method (C) is the novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and are applied to their commercial pharmaceutical preparation. The validity of results is assessed by applying the standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.

  20. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale heterogeneous images. This method is composed of an entropy-based spatial decomposition, two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transformation (SIFT) algorithms, a spatial regression voting method to integrate MSER and SIFT results, a Markov Random Field-based smoothing method, and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to that of the resampling-based approach while avoiding the LUCC distortion incurred by resampling.
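
    As a rough illustration of the two scale-invariant feature extractors named above, the sketch below detects SIFT keypoints and MSER regions on two co-registered scenes and applies a simple ratio-test match; it is not the authors' full pipeline (no entropy-based decomposition, regression voting, MRF smoothing, or SVM labeling). The file names are placeholders, and OpenCV 4.4 or later with the bundled SIFT implementation is assumed.

    ```python
    import cv2
    import numpy as np

    # Two co-registered scenes acquired at different spatial resolutions
    # (file names are placeholders for real imagery).
    img_2005 = cv2.imread("scene_2005.tif", cv2.IMREAD_GRAYSCALE)
    img_2012 = cv2.imread("scene_2012.tif", cv2.IMREAD_GRAYSCALE)

    # Scale-invariant keypoints and descriptors (SIFT)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_2005, None)
    kp2, des2 = sift.detectAndCompute(img_2012, None)

    # Maximally Stable Extremal Regions on each image
    mser = cv2.MSER_create()
    regions_2005, _ = mser.detectRegions(img_2005)
    regions_2012, _ = mser.detectRegions(img_2012)

    # Ratio-test matching of SIFT descriptors; unmatched keypoints hint at change
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} stable SIFT matches out of {len(kp1)} keypoints")
    ```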

  1. Mixed methods research in mental health nursing.

    PubMed

    Kettles, A M; Creswell, J W; Zhang, W

    2011-08-01

    Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded', but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is discussed, with some possible solutions proposed. © 2011 Blackwell Publishing.

  2. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared with other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, as a tool for obtaining a power series solution to post-treat with the Padé approximant.
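
    For readers unfamiliar with the Padé construction itself, the short sketch below builds a [2/2] Padé approximant from the Taylor coefficients of log(1+x) with scipy.interpolate.pade and compares it with the truncated series outside the series' radius of convergence. The function and the approximant orders are illustrative choices, not taken from the paper.

    ```python
    import numpy as np
    from scipy.interpolate import pade

    # Taylor coefficients of log(1+x): 0, 1, -1/2, 1/3, -1/4
    coeffs = [0.0, 1.0, -0.5, 1.0 / 3.0, -0.25]
    p, q = pade(coeffs, 2)            # [2/2] Padé approximant (numerator, denominator)

    x = 3.0                           # outside the series' radius of convergence (|x| < 1)
    taylor = sum(c * x**k for k, c in enumerate(coeffs))
    print("truncated Taylor series:", taylor)           # ~ -12.75 (diverging)
    print("[2/2] Pade approximant :", p(x) / q(x))      # ~ 1.364
    print("exact log(1+x)         :", np.log(1.0 + x))  # ~ 1.386
    ```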

  3. Methods of measuring soil moisture in the field

    USGS Publications Warehouse

    Johnson, A.I.

    1962-01-01

    For centuries, the amount of moisture in the soil has been of interest in agriculture. The subject of soil moisture is also of great importance to the hydrologist, forester, and soils engineer. Much equipment and many methods have been developed to measure soil moisture under field conditions. This report discusses and evaluates the various methods for measurement of soil moisture and describes the equipment needed for each method. The advantages and disadvantages of each method are discussed and an extensive list of references is provided for those desiring to study the subject in more detail. The gravimetric method is concluded to be the most satisfactory method for most problems requiring one-time moisture-content data. The radioactive method is normally best for obtaining repeated measurements of soil moisture in place. It is concluded that all methods have some limitations and that the ideal method for measurement of soil moisture under field conditions has yet to be perfected.

  4. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  5. Digital signal processing methods for biosequence comparison.

    PubMed Central

    Benson, D C

    1990-01-01

    A method is discussed for DNA or protein sequence comparison using a finite field fast Fourier transform, a digital signal processing technique; and statistical methods are discussed for analyzing the output of this algorithm. This method compares two sequences of length N in computing time proportional to N log N compared to N^2 for methods currently used. This method makes it feasible to compare very long sequences. An example is given to show that the method correctly identifies sites of known homology. PMID:2349096
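
    The core idea, turning base-by-base comparison at all alignment offsets into a correlation computable in O(N log N), can be sketched as follows. This sketch uses an ordinary floating-point FFT (via scipy.signal.fftconvolve) on indicator sequences rather than the finite field transform of the paper, and the example sequences are made up.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def match_counts(seq1, seq2):
        """Number of base matches between seq1 and seq2 at every relative shift,
        computed with FFT-based correlation of per-base indicator sequences."""
        total = np.zeros(len(seq1) + len(seq2) - 1)
        for base in "ACGT":
            a = np.array([c == base for c in seq1], dtype=float)
            b = np.array([c == base for c in seq2], dtype=float)
            total += fftconvolve(a, b[::-1])   # linear cross-correlation
        return np.rint(total).astype(int)

    seq1 = "ACGTACGTTTGACCA"
    seq2 = "CGTACG"
    counts = match_counts(seq1, seq2)
    best = int(np.argmax(counts))
    print("best shift of seq2 along seq1:", best - (len(seq2) - 1),
          "with", counts[best], "matching bases")
    ```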

  6. Evaluation of methods for the assay of radium-228 in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noyce, J.R.

    1981-02-01

    The technical literature from 1967 to May 1980 was searched for methods for assaying radium-228 in water. These methods were evaluated for their suitability as potential EPA reference methods for drinking water assays. The authors suggest the present EPA reference method (Krieger, 1976) be retained but improved, and a second method (McCurdy and Mellor, 1979), which employs beta-gamma coincidence counting, be added. Included in this report is a table that lists the principal features of 17 methods for radium-228 assays.

  7. Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time

    DOE PAGES

    Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.

    2017-12-20

    In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.

  8. Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.

    In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.

  9. Remote air pollution measurement

    NASA Technical Reports Server (NTRS)

    Byer, R. L.

    1975-01-01

    This paper presents a discussion and comparison of the Raman method, the resonance and fluorescence backscatter method, long path absorption methods and the differential absorption method for remote air pollution measurement. A comparison of the above remote detection methods shows that the absorption methods offer the most sensitivity at the least required transmitted energy. Topographical absorption provides the advantage of a single ended measurement, and differential absorption offers the additional advantage of a fully depth resolved absorption measurement. Recent experimental results confirming the range and sensitivity of the methods are presented.

  10. Conservation properties of numerical integration methods for systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1976-01-01

    If a system of ordinary differential equations represents a property conserving system that can be expressed linearly (e.g., conservation of mass), it is then desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
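
    The property described, exact preservation of linearly expressible conserved quantities by Runge-Kutta methods, is easy to check numerically. The sketch below integrates a small closed linear compartment system (a made-up example) with classical RK4 and shows that the total mass stays constant to round-off even though the individual solution components are only approximate.

    ```python
    import numpy as np

    # closed three-compartment system: mass only moves between compartments,
    # so the total x1 + x2 + x3 is a linear conserved quantity (columns sum to zero)
    A = np.array([[-0.5,  0.2,  0.0],
                  [ 0.5, -0.7,  0.3],
                  [ 0.0,  0.5, -0.3]])
    rhs = lambda x: A @ x

    def rk4_step(x, h):
        k1 = rhs(x)
        k2 = rhs(x + 0.5 * h * k1)
        k3 = rhs(x + 0.5 * h * k2)
        k4 = rhs(x + h * k3)
        return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    x = np.array([1.0, 0.0, 0.0])
    total0 = x.sum()
    for _ in range(10000):
        x = rk4_step(x, 0.01)
    print("drift in total mass:", abs(x.sum() - total0))   # round-off level, not truncation error
    ```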

  11. The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    NASA Technical Reports Server (NTRS)

    Chesler, L.; Pierce, S.

    1971-01-01

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.

  12. A comparison of modifications of the McMaster method for the enumeration of Ascaris suum eggs in pig faecal samples.

    PubMed

    Pereckiene, A; Kaziūnaite, V; Vysniauskas, A; Petkevicius, S; Malakauskas, A; Sarkūnas, M; Taylor, M A

    2007-10-21

    The comparative efficacies of seven published McMaster method modifications for faecal egg counting were evaluated on pig faecal samples containing Ascaris suum eggs. Comparisons were made as to the number of samples found to be positive by each of the methods, the total egg counts per gram (EPG) of faeces, the variations in EPG obtained in the samples examined, and the ease of use of each of the methods. Each method was evaluated after the examination of 30 samples of faeces. Positive samples were identified by counting A. suum eggs in one, two and three sections of a newly designed McMaster chamber. The methods compared in the present study were reported by: I-Henriksen and Aagaard [Henriksen, S.A., Aagaard, K.A., 1976. A simple flotation and McMaster method. Nord. Vet. Med. 28, 392-397]; II-Kassai [Kassai, T., 1999. Veterinary Helminthology. Butterworth-Heinemann, Oxford, 260 pp.]; III and IV-Urquhart et al. [Urquhart, G.M., Armour, J., Duncan, J.L., Dunn, A.M., Jennings, F.W., 1996. Veterinary Parasitology, 2nd ed. Blackwell Science Ltd., Oxford, UK, 307 pp.] (centrifugation and non-centrifugation methods); V and VI-Grønvold [Grønvold, J., 1991. Laboratory diagnoses of helminths common routine methods used in Denmark. In: Nansen, P., Grønvold, J., Bjørn, H. (Eds.), Seminars on Parasitic Problems in Farm Animals Related to Fodder Production and Management. The Estonian Academy of Sciences, Tartu, Estonia, pp. 47-48] (salt solution, and salt and glucose solution); VII-Thienpont et al. [Thienpont, D., Rochette, F., Vanparijs, O.F.J., 1986. Diagnosing Helminthiasis by Coprological Examination. Coprological Examination, 2nd ed. Janssen Research Foundation, Beerse, Belgium, 205 pp.]. The proportion of samples found positive by examining a single section ranged from 98.9% (method I) to 51.1% (method VII). Only with methods I and II was there 100% positivity in two out of three of the chambers examined, and the faecal egg counts (FEC) obtained using these methods were significantly (p<0.01) higher than those of the remaining methods. Mean FEC varied between 243 EPG (method I) and 82 EPG (method IV). Examination of all three chambers resulted in four methods (I, II, V and VI) reaching 100% sensitivity, while method VII had the lowest sensitivity (83.3%). Mean FEC in this case varied between 239 EPG (method I) and 81 EPG (method IV). Based on the mean FEC for two chambers, an efficiency coefficient (EF) was calculated and set to 1 for the highest egg count (method I) and to 0.87, 0.57, 0.34, 0.53, 0.49 and 0.50 for the remaining methods (II-VII), respectively. Efficiency coefficients make it possible not only to recalculate and unify the results of faecal examinations obtained by any of the methods but also to interpret coproscopical examinations reported by other authors. Method VII was the easiest and quickest but least sensitive, and method I the most complex but most sensitive. Examining two or three sections of the McMaster chamber increased the sensitivity of all methods.

  13. Effect of preparation methods on dispersion stability and electrochemical performance of graphene sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li, E-mail: chenli1981@lut.cn; Li, Na; Zhang, Mingxia

    Chemical exfoliation is one of the most important strategies for preparing graphene. The aggregation of graphene sheets severely prevents graphene from exhibiting its excellent properties. However, there have been no attempts to investigate the effect of preparation methods on the dispersity of graphene sheets. In this study, three chemical exfoliation methods, including the Hummers method, the modified Hummers method, and an improved method, were used to prepare graphene sheets. The influence of the preparation methods on the structure, dispersion stability in organic solvents, and electrochemical properties of graphene sheets was investigated. Fourier transform infrared microscopy, Raman spectra, transmission electron microscopy, and UV-vis spectrophotometry were employed to analyze the structure of the as-prepared graphene sheets. The results showed that graphene prepared by the improved method exhibits excellent dispersity and stability in organic solvents without any additional stabilizer or modifier, which is attributed to the complete exfoliation and regular structure. Moreover, cyclic voltammetric and electrochemical impedance spectroscopy measurements showed that graphene prepared by the improved method exhibits superior electrochemical properties compared with that prepared by the other two methods. - Graphical abstract: Graphene oxides with different oxidation degrees were obtained via three methods, and then graphene samples with different crystal structures were created by chemical reduction of the exfoliated graphene oxides. - Highlights: • Graphene oxides with different oxidation degrees were obtained via three oxidation methods. • The influence of oxidation methods on the microstructure of graphene was investigated. • The effect of oxidation methods on the dispersion stability of graphene was investigated. • The effect of oxidation methods on the electrochemical properties of graphene was discussed.

  14. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells

    PubMed Central

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has long been pursued by islet researchers. In this study, we compared the advantages and disadvantages of our patented method with those of commonly used conventional methods (the Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, giving a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and deserves wide adoption. PMID:28207765

  15. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
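
    The path-sampling identity behind thermodynamic integration, namely that the log marginal likelihood equals the integral over t in [0, 1] of the expected log-likelihood under the "power posterior" prior × likelihood^t, can be demonstrated on a toy conjugate model where the power posteriors are Gaussian and can be sampled exactly instead of by MCMC. The sketch below is only such a toy check (made-up data, exact sampling in place of Markov chain Monte Carlo); it compares the trapezoid estimate against the closed-form log marginal likelihood.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)
    sigma, tau, n = 1.0, 2.0, 20                 # likelihood sd, prior sd, sample size
    x = rng.normal(0.7, sigma, size=n)           # synthetic data

    def expected_loglik(t, n_draws=5000):
        """E[log L(theta)] under the power posterior  prior(theta) * likelihood(theta)^t."""
        prec = 1.0 / tau**2 + t * n / sigma**2   # Gaussian power posterior precision
        m, s = (t * x.sum() / sigma**2) / prec, np.sqrt(1.0 / prec)
        theta = rng.normal(m, s, size=n_draws)
        loglik = (-0.5 * n * np.log(2 * np.pi * sigma**2)
                  - 0.5 * ((x[None, :] - theta[:, None]) ** 2).sum(axis=1) / sigma**2)
        return loglik.mean()

    ts = np.linspace(0.0, 1.0, 51) ** 5          # power schedule: denser near t = 0
    vals = np.array([expected_loglik(t) for t in ts])
    ti_estimate = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(ts)))  # trapezoid rule

    # exact log marginal likelihood: x ~ N(0, sigma^2 I + tau^2 11^T)
    cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
    exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(x)
    print("thermodynamic integration:", ti_estimate, "  exact:", exact)
    ```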

  16. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
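
    As a reference point for the LOWESS approach that the TW-SLM is compared against, a minimal intensity-dependent normalization of two-channel data can be written as below. It assumes statsmodels is available, the intensities are synthetic, and it does not, of course, implement the authors' TW-SLM itself.

    ```python
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    def lowess_normalize(red, green, frac=0.3):
        """Intensity-dependent (MA-plot) LOWESS normalization of two-channel intensities."""
        M = np.log2(red) - np.log2(green)            # log-ratio
        A = 0.5 * (np.log2(red) + np.log2(green))    # average log-intensity
        trend = lowess(M, A, frac=frac, return_sorted=False)
        return M - trend                             # normalized log-ratios

    # usage with synthetic intensities carrying an intensity-dependent dye bias
    rng = np.random.default_rng(42)
    green = rng.lognormal(mean=7.0, sigma=1.0, size=1000)
    red = green * 2 ** (0.3 + 0.1 * np.log2(green))
    print(np.mean(lowess_normalize(red, green)))     # close to zero after normalization
    ```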

  17. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells.

    PubMed

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has long been pursued by islet researchers. In this study, we compared the advantages and disadvantages of our patented method with those of commonly used conventional methods (the Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. This method overcomes some of the shortcomings of conventional methods, giving a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and deserves wide adoption.

  18. A new cation-exchange method for accurate field speciation of hexavalent chromium

    USGS Publications Warehouse

    Ball, J.W.; McCleskey, R. Blaine

    2003-01-01

    A new method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The method consists of passing a water sample through strong acid cation-exchange resin at the field site, where Cr(III) is retained while Cr(VI) passes into the effluent and is preserved for later determination. The method is simple, rapid, portable, and accurate, and makes use of readily available, inexpensive materials. Cr(VI) concentrations are determined later in the laboratory using any elemental analysis instrument sufficiently sensitive to measure the Cr(VI) concentrations of interest. The new method allows measurement of Cr(VI) concentrations as low as 0.05 µg L-1, storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. Cr(VI) can be separated from Cr(III) between pH 2 and 11 at Cr(III)/Cr(VI) concentration ratios as high as 1000. The new method has demonstrated excellent comparability with two commonly used methods, the Hach Company direct colorimetric method and USEPA method 218.6. The new method is superior to the Hach direct colorimetric method owing to its relative sensitivity and simplicity. The new method is superior to USEPA method 218.6 in the presence of Fe(II) concentrations up to 1 mg L-1 and Fe(III) concentrations up to 10 mg L-1. Time stability of preserved samples is a significant advantage over the 24-h time constraint specified for USEPA method 218.6.

  19. Whole-Body Computed Tomography-Based Body Mass and Body Fat Quantification: A Comparison to Hydrostatic Weighing and Air Displacement Plethysmography.

    PubMed

    Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steve T; Heiny, Eric L; Creer, Andrew R; Gibby, Wendell A

    We correlate and evaluate the accuracy of accepted anthropometric methods of percent body fat (%BF) quantification, namely, hydrostatic weighing (HW) and air displacement plethysmography (ADP), with 2 automatic adipose tissue quantification methods using computed tomography (CT). Twenty volunteer subjects (14 men, 6 women) received head-to-toe CT scans. Hydrostatic weighing and ADP were obtained from 17 and 12 subjects, respectively. The CT data underwent conversion using 2 separate algorithms, namely, the Schneider method and the Beam method, to convert Hounsfield units to their respective tissue densities. The overall mass and %BF of both methods were compared with HW and ADP. When comparing ADP to CT data using the Schneider method and the Beam method, correlations were r = 0.9806 and 0.9804, respectively. Paired t tests indicated there were no statistically significant biases. Additionally, observed average differences in %BF between ADP and the Schneider method and the Beam method were 0.38% and 0.77%, respectively. The %BF measured from ADP, the Schneider method, and the Beam method all had significantly higher mean differences when compared with HW (3.05%, 2.32%, and 1.94%, respectively). We have shown that total body mass correlates remarkably well with both the Schneider method and the Beam method of mass quantification. Furthermore, %BF calculated with the Schneider method and Beam method CT algorithms correlates remarkably well with ADP. The application of these CT algorithms has utility in further research to accurately stratify risk factors with periorgan, visceral, and subcutaneous types of adipose tissue, and has the potential for significant clinical application.

  20. Method for determination of aflatoxin M₁ in cheese and butter by HPLC using an immunoaffinity column.

    PubMed

    Sakuma, Hisako; Kamata, Yoichi; Sugita-Konishi, Yoshiko; Kawakami, Hiroshi

    2011-01-01

    A rapid, sensitive, and convenient method for determination of aflatoxin M₁ (AFM₁) in cheese and butter by HPLC was developed and validated. The method employs a safe extraction solution (a mixture of acetonitrile, methanol and water) and an immunoaffinity column (IAC) for clean-up. Compared with the widely used method employing chloroform and a Florisil column, the IAC method has a short analytical time and there are no interference peaks. The limits of quantification (LOQ) of the IAC method were 0.12 and 0.14 µg/kg, while those of the Florisil column method were 0.47 and 0.23 µg/kg in cheese and butter, respectively. The recovery and relative standard deviation (RSD) for cheese (spiked at 0.5 µg/kg) in the IAC method were 92% and 7%, respectively, while for the Florisil column method the corresponding values were 76% and 10%. The recovery and RSD for butter (spiked at 0.5 µg/kg) in the IAC method were 97% and 9%, and those in the Florisil method were 74% and 9%, respectively. In the IAC method, the values of in-house precision (n=2, day=5) of cheese and butter (spiked at 0.5 µg/kg) were 9% and 13%, respectively. The IAC method is superior to the Florisil column method in terms of safety, ease of handling, sensitivity and reliability. A survey of AFM₁ contamination in imported cheese and butter in Japan was conducted by the IAC method. AFM₁ was not detected in 60 samples of cheese and 30 samples of butter.

  1. 26 CFR 1.412(c)(1)-3 - Applying the minimum funding requirements to restored plans.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) In general—(1) Restoration method. The restoration method is a funding method that adapts the... spread gain method that maintains an unfunded liability. A plan may adopt any cost method that satisfies...

  2. Comparison of modal superposition methods for the analytical solution to moving load problems.

    DOT National Transportation Integrated Search

    1994-01-01

    The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...

  3. Turbulent boundary layers over nonstationary plane boundaries

    NASA Technical Reports Server (NTRS)

    Roper, A. T.; Gentry, G. L., Jr.

    1978-01-01

    Methods of predicting integral parameters and skin friction coefficients of turbulent boundary layers developing over moving ground planes were evaluated. The three methods evaluated were: relative integral parameter method; relative power law method; and modified law of the wall method.

  4. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Treesearch

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  5. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  6. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…

  7. Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design

    Treesearch

    Peter H. Cafferata; Leslie M. Reid

    2017-01-01

    Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...

  8. Slip and Slide Method of Factoring Trinomials with Integer Coefficients over the Integers

    ERIC Educational Resources Information Center

    Donnell, William A.

    2012-01-01

    In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss…

  9. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  10. Rapid Radiochemical Method for Total Radiostrontium (Sr-90) ...

    EPA Pesticide Factsheets

    Technical Fact Sheet. Analysis purpose: qualitative analysis. Technique: beta counting. Method developed for: strontium-89 and strontium-90 in building materials. Method selected for: SAM lists this method for qualitative analysis of strontium-89 and strontium-90 in concrete or brick building materials. Summary of the subject analytical method, which will be posted to the SAM website to allow access to the method.

  11. 26 CFR 1.472-2 - Requirements incident to adoption and use of LIFO inventory method.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... inventory method. (ii) Any method of establishing pools for inventory under the dollar-value LIFO inventory method. (iii) Any method of determining the LIFO value of a dollar-value inventory pool, such as the... selecting a price index to be used with the index or link chain method of valuing inventory pools under the...

  12. Attitudes of Teachers of Arabic as a Foreign Language toward Methods of Foreign Language Teaching

    ERIC Educational Resources Information Center

    Seraj, Sami A.

    2010-01-01

    This study examined the attitude of teachers of Arabic as a foreign language toward some of the most well known teaching methods. For this reason the following eight methods were selected: (1) the Grammar-Translation Method (GTM), (2) the Direct Method (DM), (3) the Audio-Lingual Method (ALM), (4) Total Physical Response (TPR), (5) Community…

  13. Effects of Anchor Item Methods on the Detection of Differential Item Functioning within the Family of Rasch Models

    ERIC Educational Resources Information Center

    Wang, Wen-Chung

    2004-01-01

    Scale indeterminacy in analysis of differential item functioning (DIF) within the framework of item response theory can be resolved by imposing 3 anchor item methods: the equal-mean-difficulty method, the all-other anchor item method, and the constant anchor item method. In this article, applicability and limitations of these 3 methods are…

  14. A Comparison of Cut Scores Using Multiple Standard Setting Methods.

    ERIC Educational Resources Information Center

    Impara, James C.; Plake, Barbara S.

    This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…

  15. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method using a newly developed proof are given, and the computational considerations which show this method to be computationally superior to previous methods are presented.
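
    For orientation, the following sketch shows a bare-bones finite element solution of a one-dimensional Helmholtz problem with linear elements (assemble the stiffness matrix minus k² times the mass matrix, impose Dirichlet data, solve). It is a textbook illustration of the setting, not the two-stage method or error analysis of the paper, and the wavenumber, mesh, and boundary data are arbitrary choices.

    ```python
    import numpy as np

    def helmholtz_fem_1d(k=10.0, n_el=200):
        """Linear-element FEM for u'' + k^2 u = 0 on (0,1), u(0)=0, u(1)=1."""
        n = n_el + 1
        h = 1.0 / n_el
        # element stiffness and consistent mass matrices for linear elements
        Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
        A = np.zeros((n, n))
        for e in range(n_el):
            idx = [e, e + 1]
            A[np.ix_(idx, idx)] += Ke - k**2 * Me
        b = np.zeros(n)
        # Dirichlet boundary conditions by row replacement
        A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = 0.0
        A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 1.0
        u = np.linalg.solve(A, b)
        return np.linspace(0.0, 1.0, n), u

    x, u = helmholtz_fem_1d()
    exact = np.sin(10.0 * x) / np.sin(10.0)      # exact solution for k = 10
    print("max nodal error:", np.max(np.abs(u - exact)))
    ```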

  16. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

    The restricted random search method is proposed as a simple Monte Carlo sampling method to search minima fast in the multiple minima problem. This method is based on taboo search, applied recently to continuous test functions. The concept of a taboo region instead of a taboo list is used, and therefore the sampling of a region near an old configuration is restricted in this method. This method is applied to 2-dimensional test functions and the argon clusters. This method is found to be a practical and efficient way to locate near-global configurations of the test functions and the argon clusters.
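
    A minimal sketch of the core idea, plain random sampling that discards candidates falling inside "taboo" balls around configurations already visited, is given below for a two-dimensional test function. The taboo radius, number of draws, and the Rastrigin objective are illustrative choices, and the original algorithm used for argon clusters is not reproduced.

    ```python
    import numpy as np

    def restricted_random_search(f, bounds, n_draws=20000, taboo_radius=0.05, seed=0):
        """Random search that rejects samples inside a taboo ball around visited points."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        visited = []
        best_x, best_f = None, np.inf
        for _ in range(n_draws):
            x = lo + rng.random(lo.size) * (hi - lo)
            if any(np.linalg.norm(x - v) < taboo_radius for v in visited):
                continue                          # falls in a taboo region: discard
            visited.append(x)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    # usage: 2-D Rastrigin test function (global minimum 0 at the origin)
    rastrigin = lambda x: 20 + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    print(restricted_random_search(rastrigin, ([-5.12, -5.12], [5.12, 5.12])))
    ```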

  17. The Split Coefficient Matrix method for hyperbolic systems of gasdynamic equations

    NASA Technical Reports Server (NTRS)

    Chakravarthy, S. R.; Anderson, D. A.; Salas, M. D.

    1980-01-01

    The Split Coefficient Matrix (SCM) finite difference method for solving hyperbolic systems of equations is presented. This new method is based on the mathematical theory of characteristics. The development of the method from characteristic theory is presented. Boundary point calculation procedures consistent with the SCM method used at interior points are explained. The split coefficient matrices that define the method for steady supersonic and unsteady inviscid flows are given for several examples. The SCM method is used to compute several flow fields to demonstrate its accuracy and versatility. The similarities and differences between the SCM method and the lambda-scheme are discussed.

  18. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  19. An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes

    NASA Astrophysics Data System (ADS)

    Ding, Deng; Chong U, Sio

    2010-05-01

    An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine a quadrature technique with the Carr-Madan fast Fourier transform method. The theoretical analysis shows that the overall complexity of this new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the numerical algorithm proposed by this new method is accurate and stable for small strike prices K. This develops and improves upon the Carr-Madan method.
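
    Since the paper builds on the Carr-Madan transform, a compact sketch of that baseline (not the authors' quadrature-enhanced variant) may help: it prices a European call under Black-Scholes dynamics via the damped-call FFT with Simpson weights and checks the at-the-money value against the closed form (about 10.45 for the parameters shown). The damping factor alpha, grid sizes, and market parameters are conventional illustrative choices.

    ```python
    import numpy as np

    def carr_madan_call(S0, K, T, r, sigma, alpha=1.5, N=4096, eta=0.25):
        """Carr-Madan FFT pricing of a European call under Black-Scholes dynamics."""
        def phi(v):   # characteristic function of log S_T under the risk-neutral measure
            mu = np.log(S0) + (r - 0.5 * sigma**2) * T
            return np.exp(1j * v * mu - 0.5 * sigma**2 * T * v**2)

        v = np.arange(N) * eta
        # damped call transform psi(v)
        psi = np.exp(-r * T) * phi(v - (alpha + 1) * 1j) / (
            alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
        lam = 2 * np.pi / (N * eta)            # log-strike grid spacing
        b = 0.5 * N * lam                      # log-strike range: [-b, b)
        w = (3 + (-1) ** np.arange(N)) / 3.0   # Simpson quadrature weights
        w[0] = 1.0 / 3.0
        fft_vals = np.fft.fft(np.exp(1j * v * b) * psi * eta * w).real
        k_grid = -b + lam * np.arange(N)       # log-strikes
        calls = np.exp(-alpha * k_grid) / np.pi * fft_vals
        return np.interp(np.log(K), k_grid, calls)

    # quick check against the Black-Scholes closed form (about 10.45)
    print(carr_madan_call(S0=100, K=100, T=1.0, r=0.05, sigma=0.2))
    ```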

  20. Methods for producing complex films, and films produced thereby

    DOEpatents

    Duty, Chad E.; Bennett, Charlee J. C.; Moon, Ji -Won; Phelps, Tommy J.; Blue, Craig A.; Dai, Quanqin; Hu, Michael Z.; Ivanov, Ilia N.; Jellison, Jr., Gerald E.; Love, Lonnie J.; Ott, Ronald D.; Parish, Chad M.; Walker, Steven

    2015-11-24

    A method for producing a film, the method comprising melting a layer of precursor particles on a substrate until at least a portion of the melted particles are planarized and merged to produce the film. The invention is also directed to a method for producing a photovoltaic film, the method comprising depositing particles having a photovoltaic or other property onto a substrate, and affixing the particles to the substrate, wherein the particles may or may not be subsequently melted. Also described herein are films produced by these methods, methods for producing a patterned film on a substrate, and methods for producing a multilayer structure.

  1. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  2. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least squares and spherical harmonics function system error correction methods. The accuracy of the neural network correction method depends mainly on the structure of the network. Analysis and simulation show that both the BP neural network and the RBF neural network correction methods achieve high correction accuracy; for small training sample sets, considering training speed and network size, the RBF network correction method is preferable to the BP network correction method.

  3. Research progress of nano self - cleaning anti-fouling coatings

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zhao, Y. J.; Teng, J. L.; Wang, J. H.; Wu, L. S.; Zheng, Y. L.

    2018-01-01

    Many methods are available for evaluating the performance of nano self-cleaning anti-fouling coatings, such as the carbon blacking method, the coating reflection coefficient method, the glass microbead method, the film method, contact angle and rolling angle measurements, and the organic degradation method, and the application of these performance evaluation methods to self-cleaning antifouling coatings is reviewed. Furthermore, the types of nano self-cleaning anti-fouling coatings based on aqueous media are described, such as photocatalytic self-cleaning coatings, silicone coatings, organic fluorine coatings, fluorosilicone coatings, fluorocarbon coatings, and polysilazane self-cleaning coatings. The research and application of the different kinds of nano self-cleaning antifouling coatings are analysed, and the latest research results are summarized.

  4. Analysis of a turbulent boundary layer over a moving ground plane

    NASA Technical Reports Server (NTRS)

    Roper, A. T.; Gentry, G. L., Jr.

    1972-01-01

    Four methods of predicting the integral and friction parameters for a turbulent boundary layer over a moving ground plane were evaluated by using test information obtained in a 76.2- by 50.8-centimeter tunnel. The tunnel was operated in the open-sidewall configuration. These methods are (1) the relative integral parameter method, (2) the modified power law method, (3) the relative power law method, and (4) the modified law of the wall method. The modified law of the wall method predicts a more rapid decrease in skin friction with an increase in the ratio of belt velocity to free-stream velocity than do methods (1) and (3).

  5. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximate solution of nonlinear jerk equations (third-order differential equations) is obtained by using a modified harmonic balance method. The method is simpler and easier to apply because fewer nonlinear algebraic equations need to be solved than in the classical harmonic balance method. The results obtained from this method are compared with those obtained from the other existing analytical methods available in the literature and with the numerical method. The solution shows good agreement with the numerical solution as well as with the analytical methods of the available literature.

  6. [Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].

    PubMed

    Murase, Kenya

    2015-01-01

    In this issue, symbolic methods for solving differential equations were first introduced. Of the symbolic methods, the Laplace transform method was also introduced together with some examples, in which this method was applied to solving the differential equations derived from a two-compartment kinetic model and an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations were introduced together with some examples, in which these methods were used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced together with some examples in medical physics.
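
    As an illustration of the Laplace transform route mentioned above, the sketch below solves a simple two-compartment kinetic model symbolically: the system is transformed, the resulting algebraic equations are solved, and the solutions are inverted back to the time domain. The rate constants, dose symbol, and model structure are generic textbook choices, not necessarily those of the article, and SymPy may return the results multiplied by a Heaviside(t) factor.

    ```python
    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    k1, k2, D = sp.symbols('k1 k2 D', positive=True)

    # Laplace-transform the two-compartment system
    #   x1' = -k1*x1,           x1(0) = D
    #   x2' =  k1*x1 - k2*x2,   x2(0) = 0
    X1, X2 = sp.symbols('X1 X2')
    eq1 = sp.Eq(s * X1 - D, -k1 * X1)
    eq2 = sp.Eq(s * X2, k1 * X1 - k2 * X2)
    sol = sp.solve([eq1, eq2], [X1, X2])

    # invert back to the time domain
    x1 = sp.inverse_laplace_transform(sol[X1], s, t)
    x2 = sp.inverse_laplace_transform(sol[X2], s, t)
    print(sp.simplify(x1))   # expected: D*exp(-k1*t)
    print(sp.simplify(x2))   # expected: D*k1*(exp(-k1*t) - exp(-k2*t))/(k2 - k1)
    ```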

  7. The Robin Hood method - A novel numerical method for electrostatic problems based on a non-local charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje

    2006-03-20

    We introduce a novel numerical method, named the Robin Hood method, of solving electrostatic problems. The approach of the method is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each of the conducting surfaces, non-local charge transfers are performed between surface elements, which differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For the problem with N surface elements, the computational complexity of the method essentially scales with N^α, where α < 2, the required computer memory scales with N, while the error of the potential decreases exponentially with the number of iterations for many orders of magnitude of the error, without the presence of the Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some future development ideas for possible applications outside electrostatics are addressed.

  8. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.

    PubMed

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H

    2016-06-01

    Background Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding either at the study design phase or the data analysis phase. Aim of the Review To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for proper application in pharmacoepidemiology. Methods/Results Methods to control for unmeasured confounding in the design phase of a study are case only designs (e.g., case-crossover, case-time control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include, negative control method, perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy. The latter group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for interpretation of the study results.

  9. A hybrid perturbation-Galerkin method for differential equations containing a parameter

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of differential equations which involve a parameter is presented and discussed. The method consists of: (1) the use of a perturbation method to determine the asymptotic expansion of the solution about one or more values of the parameter; and (2) the use of some of the perturbation coefficient functions as trial functions in the classical Bubnov-Galerkin method. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is illustrated first with a simple linear two-point boundary value problem and is then applied to a nonlinear two-point boundary value problem in lubrication theory. The results obtained from the hybrid method are compared with approximate solutions obtained by purely numerical methods. Some general features of the method, as well as some special tips for its implementation, are discussed. A survey of some current research application areas is presented and its degree of applicability to broader problem areas is discussed.

  10. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater

    NASA Astrophysics Data System (ADS)

    Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.

    2015-04-01

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites, either production wastewater or livestock wastewater, after their solid phase extraction using OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods is investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.
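
    As a hedged illustration of the multivariate part (PCR and PLS calibration of a ternary mixture), the sketch below uses synthetic overlapping Gaussian bands in place of the real TMP/SMZ/OTC spectra; peak positions, noise level and concentration ranges are assumptions, not the paper's data.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      wl = np.linspace(250, 400, 200)                       # wavelengths, nm

      def band(c, w):                                        # Gaussian absorption band
          return np.exp(-0.5 * ((wl - c) / w) ** 2)

      pure = np.vstack([band(280, 12), band(300, 15), band(356, 10)])   # stand-ins for TMP, SMZ, OTC

      C_train = rng.uniform(0.1, 1.0, size=(40, 3))                      # concentrations
      A_train = C_train @ pure + rng.normal(0, 0.005, (40, len(wl)))     # Beer-Lambert mixing + noise
      C_test = rng.uniform(0.1, 1.0, size=(10, 3))
      A_test = C_test @ pure + rng.normal(0, 0.005, (10, len(wl)))

      pcr = make_pipeline(PCA(n_components=3), LinearRegression())       # principal component regression
      pls = PLSRegression(n_components=3)                                # partial least squares
      for name, model in [("PCR", pcr), ("PLS", pls)]:
          model.fit(A_train, C_train)
          pred = model.predict(A_test)
          rmse = np.sqrt(np.mean((pred - C_test) ** 2, axis=0))
          print(name, "RMSE per analyte:", np.round(rmse, 4))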

  11. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon

    2017-05-01

    A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three sigma rule (3σ), the interquartile range (IQR), and the median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed, considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method shows a limitation by identifying excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
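
    A minimal sketch of the three outlier rules named above, applied to residuals from a fitted trend; the synthetic groundwater series and the simple linear trend stand in for the paper's ensemble regression and are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(200)
      level = 10.0 + 0.01 * t + rng.normal(0, 0.3, t.size)   # synthetic groundwater level
      level[[30, 95, 160]] += [3.0, -2.5, 4.0]               # injected outliers

      trend = np.polyval(np.polyfit(t, level, 1), t)          # stand-in for the trend model
      r = level - trend                                        # residuals

      def three_sigma(r):
          return np.abs(r - r.mean()) > 3 * r.std(ddof=1)

      def iqr_rule(r, k=1.5):
          q1, q3 = np.percentile(r, [25, 75])
          return (r < q1 - k * (q3 - q1)) | (r > q3 + k * (q3 - q1))

      def mad_rule(r, k=3.5):
          med = np.median(r)
          mad = np.median(np.abs(r - med))
          return np.abs(r - med) / (1.4826 * mad) > k          # 1.4826 for Gaussian consistency

      for name, rule in [("3sigma", three_sigma), ("IQR", iqr_rule), ("MAD", mad_rule)]:
          print(name, "flags samples:", np.flatnonzero(rule(r)))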

  12. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater.

    PubMed

    Riad, Safaa M; Salem, Hesham; Elbalkiny, Heba T; Khattab, Fatma I

    2015-04-05

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites, either production wastewater or livestock wastewater, after their solid phase extraction using OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT). The technique starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods is investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results are statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Investigation of the low-depression velocity layer in desert area by multichannel analysis of surface-wave method

    USGS Publications Warehouse

    Cheng, S.; Tian, G.; Xia, J.; He, H.; Shi, Z.; ,

    2004-01-01

    The multichannel analysis of surface waves (MASW) method is a newly developed method. It has been employed in various applications in environmental and engineering geophysics overseas; however, only a few case studies can be found in China. Most importantly, there has been no application of the MASW method in desert areas, in China or abroad. We present a case study of investigating the low-depression velocity layer in the Temple of North Taba area in the Erdos Basin. The MASW method successfully defined the low-depression velocity layer in the desert area. Comparing results obtained by the MASW method with those obtained by the seismic refraction method, we discuss the efficiency and simplicity of applying the MASW method in the desert area. It is demonstrated that the maximum investigation depth can reach 60 m in the study area when the acquisition and processing parameters are carefully chosen. The MASW method can remedy the shortcomings of the refraction method and the micro-seismograph log method in the investigation of the low-depression velocity layer. The MASW method is also a powerful tool for the investigation of complicated near-surface materials and possesses many unique advantages.

  14. Dual domain material point method for multiphase flows

    NASA Astrophysics Data System (ADS)

    Zhang, Duan

    2017-11-01

    Although the particle-in-cell method was first invented in the 1960s for fluid computations, one of its later versions, the material point method, is mostly used for solid calculations. Recent development of multi-velocity formulations for multiphase flows and fluid-structure interactions requires that the Lagrangian capability of the method be combined with Eulerian calculations for fluids. Because of the different numerical representations of the materials, additional numerical schemes are needed to ensure continuity of the materials. New applications of the method to compute fluid motions have revealed numerical difficulties in various versions of the method. To resolve these difficulties, the dual domain material point method is introduced and improved. Unlike other particle-based methods, the material point method uses both Lagrangian particles and an Eulerian mesh, and therefore it avoids direct communication between particles. With this unique property and the Lagrangian capability of the method, it is shown that a multiscale numerical scheme can be efficiently built based on the dual domain material point method. In this talk, the theoretical foundation of the method will be introduced. Numerical examples will be shown. Work sponsored by the next generation code project of LANL.
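
    To make the particle/mesh split concrete, the sketch below performs one particle-to-grid and grid-to-particle pass of a plain one-dimensional material point method with linear shape functions; it is a hedged illustration of the general MPM machinery, not of the dual domain variant, and all material data are assumed.

      import numpy as np

      L, ncell = 1.0, 20
      dx = L / ncell
      xn = np.linspace(0.0, L, ncell + 1)                 # Eulerian grid nodes
      xp = np.linspace(0.05, 0.55, 40)                    # Lagrangian particle positions
      mp = np.full(xp.size, 1.0 * dx / 2)                 # particle masses (assumed)
      vp = np.where(xp < 0.3, 1.0, 0.0)                   # initial particle velocities

      def shape(x):
          """Linear hat-function weights of a particle on its two bracketing nodes."""
          i = int(x // dx)
          w_right = (x - xn[i]) / dx
          return (i, 1.0 - w_right), (i + 1, w_right)

      # Particle-to-grid (P2G): map mass and momentum to the mesh.
      m_node = np.zeros(ncell + 1)
      mv_node = np.zeros(ncell + 1)
      for x, m, v in zip(xp, mp, vp):
          for i, w in shape(x):
              m_node[i] += w * m
              mv_node[i] += w * m * v
      v_node = np.divide(mv_node, m_node, out=np.zeros_like(m_node), where=m_node > 0)

      # Grid-to-particle (G2P): interpolate the (here unchanged) grid velocity back.
      vp_new = np.array([sum(w * v_node[i] for i, w in shape(x)) for x in xp])
      print("max velocity change after one P2G/G2P pass:", np.abs(vp_new - vp).max())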

  15. What makes a contraceptive acceptable?

    PubMed

    Berer, M

    1995-01-01

    The women's health movement is developing an increasing number of negative campaigns against various contraceptive methods based on three assumptions: 1) user-controlled methods are better for women than provider-controlled methods, 2) long-acting methods are undesirable because of their susceptibility to abuse, and 3) systemic methods carry unacceptable health risks to women. While these objections have sparked helpful debate, criticizing an overreliance on such methods is one thing and calling for bans on the provision of injectables and implants and on the development of vaccine contraceptives is another. Examination of the terms "provider-controlled," "user-controlled," and "long-acting" reveals that their definitions are not as clear-cut as opponents would have us believe. Some women's health advocates find the methods that are long-acting and provider-controlled to be the most problematic. They also criticize the near 100% contraceptive effectiveness of the long-acting methods despite the fact that the goal of contraception is to prevent pregnancy. It is wrong to condemn these methods because of their link to population control policies of the 1960s, and it is important to understand that long-acting, effective methods are often beneficial to women who require contraception for 20-22 years of their lives. Arguments against systemic methods (including RU-486 for early abortion and contraceptive vaccines) rebound around issues of safety. Feminists have gone so far as to create an intolerable situation by publishing books that criticize these methods based on erroneous conclusions and faulty scientific analysis. While women's health advocates have always rightly called for bans on abuse of various methods, they have not extended this ban to the methods themselves. In settings where other methods are not available, bans can lead to harm or maternal deaths. Another perspective can be used to consider methods in terms of their relationship with the user (repeated application). While feminists have called for more barrier and natural methods, most people in the world today refuse to use condoms even though they are the best protection from infection. Instead science should pursue promising new methods as well as continue to improve existing methods and to fill important gaps. Feminists should be advocates for women and their diverse needs rather than advocates against specific contraceptive methods.

  16. A new method for water quality assessment: by harmony degree equation.

    PubMed

    Zuo, Qiting; Han, Chunhui; Liu, Jing; Ma, Junxia

    2018-02-22

    Water quality assessment is an important basic task in the development, utilization, management, and protection of water resources, and also a prerequisite for water safety. In this paper, the harmony degree equation (HDE) was introduced into research on water quality assessment, and a new method for water quality assessment based on the HDE was proposed: water quality assessment by harmony degree equation (WQA-HDE). First, the calculation steps and ideas of this method were described in detail; then, this method and some other important methods of water quality assessment (the single factor assessment method, the mean-type comprehensive index assessment method, and the multi-level gray correlation assessment method) were used to assess the water quality of the Shaying River (the largest tributary of the Huaihe in China). For this purpose, a 2-year (2013-2014) dataset of nine water quality variables covering seven monitoring sites, with approximately 189 observations, was used to compare and analyze the characteristics and advantages of the new method. The results showed that the calculation steps of WQA-HDE are similar to those of the comprehensive assessment method, and WQA-HDE is more operational compared with the other water quality assessment methods. In addition, the new method shows good flexibility in setting the judgment criterion value HD0 of water quality; when HD0 = 0.8, the results are closer to reality and are more realistic and reliable. In particular, when HD0 = 1, the results of WQA-HDE are consistent with the single factor assessment method, both methods being subject to the most stringent "one vote veto" judgment condition. Thus, WQA-HDE is a composite method that combines single factor assessment and comprehensive assessment. This research not only broadens the theoretical method system of harmony theory but also promotes the unification of water quality assessment methods, and it can serve as a reference for other comprehensive assessments.

  17. Timing of nest vegetation measurement may obscure adaptive significance of nest-site characteristics: A simulation study.

    PubMed

    McConnell, Mark D; Monroe, Adrian P; Burger, Loren Wes; Martin, James A

    2017-02-01

    Advances in understanding avian nesting ecology are hindered by a prevalent lack of agreement between nest-site characteristics and fitness metrics such as nest success. We posit this is a result of inconsistent and improper timing of nest-site vegetation measurements. Therefore, we evaluated how the timing of nest vegetation measurement influences the estimated effects of vegetation structure on nest survival. We simulated phenological changes in nest-site vegetation growth over a typical nesting season and modeled how the timing of measuring that vegetation, relative to nest fate, creates bias in conclusions regarding its influence on nest survival. We modeled the bias associated with four methods of measuring nest-site vegetation: Method 1-measuring at nest initiation, Method 2-measuring at nest termination regardless of fate, Method 3-measuring at nest termination for successful nests and at estimated completion for unsuccessful nests, and Method 4-measuring at nest termination regardless of fate while also accounting for initiation date. We quantified and compared bias for each method for varying simulated effects, ranked models for each method using AIC, and calculated the proportion of simulations in which each model (measurement method) was selected as the best model. Our results indicate that the risk of drawing an erroneous or spurious conclusion was present in all methods but greater with Method 2 which is the most common method reported in the literature. Methods 1 and 3 were similarly less biased. Method 4 provided no additional value as bias was similar to Method 2 for all scenarios. While Method 1 is seldom practical to collect in the field, Method 3 is logistically practical and minimizes inherent bias. Implementation of Method 3 will facilitate estimating the effect of nest-site vegetation on survival, in the least biased way, and allow reliable conclusions to be drawn.

  18. Psychological traits underlying different killing methods among Malaysian male murderers.

    PubMed

    Kamaluddin, Mohammad Rahim; Shariff, Nadiah Syariani; Nurfarliza, Siti; Othman, Azizah; Ismail, Khaidzir H; Mat Saat, Geshina Ayu

    2014-04-01

    Murder is the most notorious crime that violates religious, social and cultural norms. Examining the types and number of different killing methods used is pivotal in a murder case. However, the psychological traits underlying specific and multiple killing methods are still understudied. The present study attempts to fill this gap in knowledge by identifying the underlying psychological traits of different killing methods among Malaysian murderers. The study adopted an observational cross-sectional methodology using a guided self-administered questionnaire for data collection. The sampling frame consisted of 71 Malaysian male murderers from 11 Malaysian prisons who were selected using a purposive sampling method. The participants were also asked to provide the types and number of different killing methods used to kill their respective victims. An independent-samples t-test was performed to establish the mean score difference in psychological traits between murderers who used single and multiple types of killing methods. Kruskal-Wallis tests were carried out to ascertain the psychological trait differences between specific types of killing methods. The results suggest that specific psychological traits underlie the type and number of different killing methods used during murder. The majority (88.7%) of murderers used a single method of killing. Multiple methods of killing were evident in 'premeditated' murder compared with 'passion' murder, and revenge was a common motive. Examples of multiple methods are combinations of stabbing and strangulation or slashing and physical force. An exception was premeditated murder committed with shooting, which was usually a single method, attributed to the high lethality of firearms. Shooting was also notable when the motive was financial gain or related to drug dealing. Murderers who used multiple killing methods were more aggressive and sadistic than those who used a single killing method. Those who used multiple methods or slashing also displayed a higher level of minimisation traits. Despite its limitations, this study has shed some light on the underlying psychological traits of different killing methods, which is useful in the field of criminology.

  19. Technical note: Comparison of metal-on-metal hip simulator wear measured by gravimetric, CMM and optical profiling methods

    NASA Astrophysics Data System (ADS)

    Alberts, L. Russell; Martinez-Nogues, Vanesa; Baker Cook, Richard; Maul, Christian; Bills, Paul; Racasan, R.; Stolz, Martin; Wood, Robert J. K.

    2018-03-01

    Simulation of wear in artificial joint implants is critical for evaluating implant designs and materials. Traditional protocols employ the gravimetric method to determine the loss of material by measuring the weight of the implant components before and after various test intervals and after the completed test. However, the gravimetric method cannot identify the location, area coverage or maximum depth of the wear, and it has difficulties with proportionally small weight changes in relatively heavy implants. In this study, we compare the gravimetric method with two geometric surface methods: an optical light method (RedLux) and a coordinate measuring method (CMM). We tested ten Adept hips in a simulator for 2 million cycles (MC). Gravimetric and optical measurements were performed at 0.33, 0.66, 1.00, 1.33 and 2 MC. CMM measurements were done before and after the test. A high correlation was found between the gravimetric and optical methods for both heads (R² = 0.997) and cups (R² = 0.96). Both geometric methods (optical and CMM) measured more volume loss than the gravimetric method (for the heads, p = 0.004 (optical) and p = 0.08 (CMM); for the cups, p = 0.01 (optical) and p = 0.003 (CMM)). Two cups recorded negative wear at 2 MC by the gravimetric method but none did by either the optical method or CMM. The geometric methods were prone to confounding factors such as surface deformation, and the gravimetric method could be confounded by protein absorption and backside wear. Both geometric methods were able to show the location, area covered and depth of the wear on the bearing surfaces, and to track their changes during the test run, providing significant advantages over solely using the gravimetric method.
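
    A hedged sketch of the kind of method comparison reported above (correlation and a paired test between two wear measurement series); the numbers below are invented for illustration and are not the study's data.

      import numpy as np
      from scipy import stats

      gravimetric = np.array([0.8, 1.5, 2.1, 2.9, 3.6, 4.1, 4.9, 5.5, 6.2, 6.8])   # mm^3
      optical = gravimetric * 1.08 + np.random.default_rng(2).normal(0, 0.1, 10)    # mm^3

      r, _ = stats.pearsonr(gravimetric, optical)
      t, p = stats.ttest_rel(optical, gravimetric)
      bias = np.mean(optical - gravimetric)                 # Bland-Altman style mean bias
      print(f"R^2 = {r**2:.3f}, paired t-test p = {p:.3g}, mean bias = {bias:.2f} mm^3")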

  20. A Review of the Extraction and Determination Methods of Thirteen Essential Vitamins to the Human Body: An Update from 2010.

    PubMed

    Zhang, Yuan; Zhou, Wei-E; Yan, Jia-Qing; Liu, Min; Zhou, Yu; Shen, Xin; Ma, Ying-Lin; Feng, Xue-Song; Yang, Jun; Li, Guo-Hui

    2018-06-19

    Vitamins are a class of essential nutrients in the body; thus, they play important roles in human health. The chemicals are involved in many physiological functions and both their lack and excess can put health at risk. Therefore, the establishment of methods for monitoring vitamin concentrations in different matrices is necessary. In this review, an updated overview of the main pretreatments and determination methods that have been used since 2010 is given. Ultrasonic assisted extraction, liquid–liquid extraction, solid phase extraction and dispersive liquid–liquid microextraction are the most common pretreatment methods, while the determination methods involve chromatography methods, electrophoretic methods, microbiological assays, immunoassays, biosensors and several other methods. Different pretreatments and determination methods are discussed.

  1. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. The computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision by using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity. PMID:28230763
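
    The sketch below illustrates the conventional passive estimate the abstract uses as a baseline: the azimuth obtained from the time-averaged acoustic intensity components of a single AVS. The signal model, noise level and true bearing are assumptions; the paper's matched-filtering refinement is not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      fs, f0, theta_true = 10_000, 1_000, np.deg2rad(37.0)
      t = np.arange(0, 0.2, 1 / fs)

      p = np.cos(2 * np.pi * f0 * t)                       # pressure channel
      vx = np.cos(theta_true) * p + 0.2 * rng.standard_normal(t.size)   # x velocity channel
      vy = np.sin(theta_true) * p + 0.2 * rng.standard_normal(t.size)   # y velocity channel
      p = p + 0.2 * rng.standard_normal(t.size)

      Ix, Iy = np.mean(p * vx), np.mean(p * vy)            # time-averaged intensity components
      theta_hat = np.arctan2(Iy, Ix)
      print(f"true azimuth 37.0 deg, estimated {np.degrees(theta_hat):.1f} deg")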

  2. Hybrid finite element and Brownian dynamics method for charged particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Gary A., E-mail: ghuber@ucsd.edu; Miao, Yinglong; Zhou, Shenggao

    2016-04-28

    Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a new hybrid diffusion method that couples the strengths of each of these two methods, but was limited by the lack of interactions among the particles; the force on each particle had to be from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.

  3. A robust direct-integration method for rotorcraft maneuver and periodic response

    NASA Technical Reports Server (NTRS)

    Panda, Brahmananda

    1992-01-01

    The Newmark-Beta method and the Newton-Raphson iteration scheme are combined to develop a direct-integration method for evaluating the maneuver and periodic-response expressions for rotorcraft. The method requires the generation of Jacobians and includes higher derivatives in the formulation of the geometric stiffness matrix to enhance the convergence of the system. The method leads to effective convergence with nonlinear structural dynamics and aerodynamic terms. Singularities in the matrices can be addressed with the method as they arise from a Lagrange multiplier approach for coupling equations with nonlinear constraints. The method is also shown to be general enough to handle singularities from quasisteady control-system models. The method is shown to be more general and robust than the similar 2GCHAS method for analyzing rotorcraft dynamics.
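
    As a hedged illustration of the two building blocks named above, the sketch below combines Newmark-beta time stepping with a Newton-Raphson correction for a single nonlinear (Duffing-type) oscillator; the structural parameters and forcing are assumptions, and no rotorcraft-specific terms are included.

      import numpy as np

      m, c, k, k3 = 1.0, 0.05, 1.0, 0.5           # mass, damping, linear and cubic stiffness
      f_ext = lambda t: 0.3 * np.cos(1.2 * t)     # external forcing (assumed)
      beta, gamma, dt, nstep = 0.25, 0.5, 0.02, 2000

      u, v = 0.0, 0.0
      a = (f_ext(0.0) - c * v - k * u - k3 * u**3) / m
      hist = []
      for n in range(nstep):
          tn1 = (n + 1) * dt
          u_new = u                                # initial Newton guess
          for _ in range(30):
              # Newmark-beta relations expressing a_{n+1}, v_{n+1} in terms of u_{n+1}.
              a_new = (u_new - u - dt * v) / (beta * dt**2) - (1 - 2 * beta) / (2 * beta) * a
              v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
              res = m * a_new + c * v_new + k * u_new + k3 * u_new**3 - f_ext(tn1)
              if abs(res) < 1e-10:
                  break
              # Tangent of the residual with respect to u_new (effective stiffness).
              K_eff = m / (beta * dt**2) + c * gamma / (beta * dt) + k + 3 * k3 * u_new**2
              u_new -= res / K_eff
          u, v, a = u_new, v_new, a_new
          hist.append(u)

      print("displacement after final step:", hist[-1])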

  4. Sources of method bias in social science research and recommendations on how to control it.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P

    2012-01-01

    Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.

  5. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method is a method which seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1) v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
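
    A minimal sketch of the projection idea for a Markov chain: an Arnoldi process builds the Krylov subspace span{v, Av, ..., A^(m-1) v} with A = P^T, and the eigenvector of the small projected (Hessenberg) matrix nearest the eigenvalue 1 approximates the stationary distribution. The random chain and subspace size are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      N, m = 200, 30
      P = rng.random((N, N)) ** 3
      P /= P.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
      A = P.T                                    # stationary pi solves A pi = pi

      def arnoldi(A, v0, m):
          V = np.zeros((A.shape[0], m + 1))
          H = np.zeros((m + 1, m))
          V[:, 0] = v0 / np.linalg.norm(v0)
          for j in range(m):
              w = A @ V[:, j]
              for i in range(j + 1):             # modified Gram-Schmidt orthogonalization
                  H[i, j] = V[:, i] @ w
                  w -= H[i, j] * V[:, i]
              H[j + 1, j] = np.linalg.norm(w)
              V[:, j + 1] = w / H[j + 1, j]
          return V[:, :m], H[:m, :m]

      V, H = arnoldi(A, np.ones(N), m)
      evals, evecs = np.linalg.eig(H)
      y = evecs[:, np.argmin(np.abs(evals - 1.0))]     # Ritz vector for the eigenvalue near 1
      pi = np.real(V @ y)
      pi /= pi.sum()
      print("residual ||A pi - pi||:", np.linalg.norm(A @ pi - pi))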

  6. Kinematic Distances: A Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.

    2018-03-01

    Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 kpc) and 17% (0.42 kpc), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 kpc. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
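
    A hedged sketch in the spirit of the Monte Carlo approach (Method C): sample the observed LSR velocity within its uncertainty, invert an assumed flat rotation curve on a distance grid, and summarize the resulting near-distance samples. The rotation curve constants, longitude and velocity below are illustrative assumptions, not the paper's prescription.

      import numpy as np

      R0, Theta0 = 8.34, 240.0                     # kpc, km/s (Reid et al.-like values, assumed)
      l = np.deg2rad(30.0)                         # Galactic longitude
      v_obs, v_err = 60.0, 5.0                     # observed LSR velocity and 1-sigma error

      d_grid = np.linspace(0.01, 20.0, 4000)       # candidate distances, kpc
      R_gal = np.sqrt(R0**2 + d_grid**2 - 2 * R0 * d_grid * np.cos(l))
      v_model = (Theta0 * R0 / R_gal - Theta0) * np.sin(l)   # flat rotation curve

      rng = np.random.default_rng(5)
      samples = []
      for v in rng.normal(v_obs, v_err, 10_000):
          crossings = np.flatnonzero(np.diff(np.sign(v_model - v)))   # grid crossings
          if crossings.size:                                          # keep the near solution
              samples.append(d_grid[crossings[0]])
      samples = np.array(samples)
      lo, med, hi = np.percentile(samples, [16, 50, 84])
      print(f"near kinematic distance: {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f}) kpc")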

  7. Estimating dietary costs of low-income women in California: a comparison of 2 approaches.

    PubMed

    Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam; Townsend, Marilyn S

    2013-04-01

    Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. The objective was to compare 2 approaches for estimating individual daily diet costs in a population of low-income women in California. Cost estimates based on time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made by using less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared by using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Energy and nutrient intakes derived by the 2 methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than did the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) to estimate daily dietary costs with the use of this minimal time-intensive method for the participant and moderate time-intensive method for the researcher.

  8. Modified Extraction-Free Ion-Pair Methods for the Determination of Flunarizine Dihydrochloride in Bulk Drug, Tablets, and Human Urine

    NASA Astrophysics Data System (ADS)

    Prashanth, K. N.; Basavaiah, K.

    2018-01-01

    Two simple and sensitive extraction-free spectrophotometric methods are described for the determination of flunarizine dihydrochloride. The methods are based on the ion-pair complex formation between the nitrogenous compound flunarizine (FNZ), converted from flunarizine dihydrochloride (FNH), and the acidic dye phenol red (PR), in which experimental variables were circumvented. The first method (method A) is based on the formation of a yellow-colored ion-pair complex (1:1 drug:dye) between FNZ and PR in chloroform, which is measured at 415 nm. In the second method (method B), the formed drug-dye ion-pair complex is treated with ethanolic potassium hydroxide in an ethanolic medium, and the resulting base form of the dye is measured at 580 nm. The stoichiometry of the ion-pair complex formed between the drug and the dye (1:1) is determined by Job's continuous variations method, and the stability constant of the complex is also calculated. These methods quantify FNZ over the concentration ranges of 5.0-70.0 (method A) and 0.5-7.0 μg/mL (method B). The calculated molar absorptivities are 6.17 × 10³ and 5.5 × 10⁴ L·mol⁻¹·cm⁻¹ for method A and method B, respectively, with corresponding Sandell sensitivity values of 0.0655 and 0.0074 μg/cm². The methods are applied to the determination of FNZ in the pure drug and in human urine.

  9. Development of a novel and highly efficient method of isolating bacteriophages from water.

    PubMed

    Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang

    2017-08-01

    Bacteriophages are widely used in the treatment of drug-resistant bacteria and in the improvement of food safety through bacterial lysis. However, the limited investigation of bacteriophages restricts their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophages from water, based on electropositive silica gel particles (the ESPs method). To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration of bacteriophage using bacteriophage f2. Quantitative detection showed that the recovery of the ESPs method reached over 90%. Qualitative detection demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10⁰ PFU/100 L). Based on host bacteria composed of 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. The results showed that the ESPs method was significantly superior to the traditional method. The ESPs method isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher than those of the traditional method. The developed ESPs method is characterized by high isolation efficiency, efficient handling of large water sample volumes and low requirements on water quality. Copyright © 2017. Published by Elsevier B.V.

  10. Condition number estimation of preconditioned matrices.

    PubMed

    Kushida, Noriyuki

    2015-01-01

    The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered to be applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even for a simple problem. On the other hand, the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix, and with matrices generated with the finite element method.
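
    A hedged sketch of a Hager-style 1-norm condition estimate for a preconditioned operator, using SciPy's onenormest (an implementation of the Hager/Higham estimator) on matrix-free operators so the preconditioned matrix is never formed; the badly scaled test matrix and the diagonal (Jacobi) scaling preconditioner are assumptions for illustration, not the paper's method.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import LinearOperator, onenormest, splu

      n = 2000
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      S = sp.diags(np.logspace(0, 4, n))                 # badly scaled row weights
      A = (S @ T).tocsc()
      dinv = 1.0 / A.diagonal()                          # diagonal (Jacobi) scaling M^{-1}
      lu = splu(A)

      # Preconditioned operator B = M^{-1} A and its inverse A^{-1} M, applied
      # matrix-free so the (dense) preconditioned matrix is never formed.
      B = LinearOperator((n, n), dtype=float,
                         matvec=lambda x: dinv * (A @ x),
                         rmatvec=lambda x: A.T @ (dinv * x))
      Binv = LinearOperator((n, n), dtype=float,
                            matvec=lambda x: lu.solve(x / dinv),
                            rmatvec=lambda x: lu.solve(x, trans="T") / dinv)
      Ainv = LinearOperator((n, n), dtype=float,
                            matvec=lambda x: lu.solve(x),
                            rmatvec=lambda x: lu.solve(x, trans="T"))

      cond_A = onenormest(A) * onenormest(Ainv)          # Hager/Higham 1-norm estimates
      cond_B = onenormest(B) * onenormest(Binv)
      print(f"estimated cond_1(A)       ~ {cond_A:.3e}")
      print(f"estimated cond_1(M^-1 A)  ~ {cond_B:.3e}")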

  11. Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.

    PubMed

    Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori

    2016-12-13

    The configurational entropy of solute molecules is a crucially important quantity to study various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of the configurational entropy upon temperature change. Notably, we focused on the choice of the coordinate systems (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all the six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and anharmonicity of proper torsions in proteins is identified to be the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.

  12. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.

    PubMed

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-10-01

    Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both the direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was 59.6 and 78.2 per cent by the direct and indirect methods, respectively. The sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared with the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating the true haemoglobin level. More studies should be undertaken to establish the agreement and correction factor between the direct and indirect cyanmethaemoglobin methods.
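
    A minimal sketch of the correction-factor idea mentioned above: regress direct readings on indirect readings and use the fitted line as the prediction equation. The paired values and the anaemia cut-off below are invented for illustration, not the study's data.

      import numpy as np

      indirect = np.array([95, 102, 108, 111, 115, 119, 124, 128, 133, 140], dtype=float)  # g/l
      direct = indirect + 5.67 + np.random.default_rng(6).normal(0, 1.5, 10)               # g/l

      slope, intercept = np.polyfit(indirect, direct, 1)       # prediction equation
      corrected = slope * indirect + intercept

      cutoff = 120.0                                            # anaemia cut-off in g/l (assumed)
      print(f"predicted direct Hb = {slope:.3f} * indirect + {intercept:.2f}")
      print("anaemia prevalence, indirect vs corrected:",
            np.mean(indirect < cutoff), np.mean(corrected < cutoff))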

  13. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results from this multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.

  14. Teaching Fashion Illustration to University Students: Experiential and Expository Methods.

    ERIC Educational Resources Information Center

    Dragoo, Sheri; Martin, Ruth E.; Horridge, Patricia

    1998-01-01

    In a fashion illustration course, 24 students were taught using expository methods and 28 with experiential methods. Each method involved 20 lessons over eight weeks. Pre/posttest results indicated that both methods were equally effective in improving scores. (SK)

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, J R

    A total of 75 papers were presented on nuclear methods for analysis of environmental and biological samples. Sessions were devoted to software and mathematical methods; nuclear methods in atmospheric and water research; nuclear and atomic methodology; nuclear methods in biology and medicine; and nuclear methods in energy research.

  16. 40 CFR 440.50 - Applicability; description of the titanium ore subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) mills beneficiating titanium ores by electrostatic methods, magnetic and physical methods, or flotation methods; and (c) mines engaged in the dredge mining of placer deposits of sands containing rutile... methods in conjunction with electrostatic or magnetic methods). ...

  17. Japan Report, Science and Technology.

    DTIC Science & Technology

    1987-03-18

    electromelting and desiliconizing method and the alkali melting method are conventional methods to manufacture zirconia powder. The former method is low...cost, but does not produce high purity zirconia powder. In contrast, the latter method produces high-strength and ultrafine zirconia powder, but is

  18. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
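
    As a hedged illustration of the Broyden idea discussed above (replacing the Jacobian with a cheap, rank-one-updated approximation), the sketch below solves a small nonlinear system with the "good" Broyden update; the test system and the identity initial Jacobian are assumptions for illustration, not the report's large-scale implementation.

      import numpy as np

      def F(x):
          # Illustrative 2x2 nonlinear system with a well-conditioned root (assumed).
          return np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                           x[1] + 0.5 * np.cos(x[0]) - 1.0])

      def broyden(x, tol=1e-12, maxit=50):
          J = np.eye(x.size)                      # identity start; close to the true Jacobian here
          f = F(x)
          for k in range(maxit):
              dx = np.linalg.solve(J, -f)
              x_new = x + dx
              f_new = F(x_new)
              if np.linalg.norm(f_new) < tol:
                  return x_new, k + 1
              # "Good" Broyden rank-one update: J <- J + (df - J dx) dx^T / (dx^T dx)
              J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)
              x, f = x_new, f_new
          return x, maxit

      root, its = broyden(np.array([0.0, 0.0]))
      print(f"Broyden root {root} in {its} iterations, ||F|| = {np.linalg.norm(F(root)):.1e}")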

  19. Evaluation of catchment delineation methods for the medium-resolution National Hydrography Dataset

    USGS Publications Warehouse

    Johnston, Craig M.; Dewald, Thomas G.; Bondelid, Timothy R.; Worstell, Bruce B.; McKay, Lucinda D.; Rea, Alan; Moore, Richard B.; Goodall, Jonathan L.

    2009-01-01

    Different methods for determining catchments (incremental drainage areas) for stream segments of the medium-resolution (1:100,000-scale) National Hydrography Dataset (NHD) were evaluated by the U.S. Geological Survey (USGS), in cooperation with the U.S. Environmental Protection Agency (USEPA). The NHD is a comprehensive set of digital spatial data that contains information about surface-water features (such as lakes, ponds, streams, and rivers) of the United States. The need for NHD catchments was driven primarily by the goal to estimate NHD streamflow and velocity to support water-quality modeling. The application of catchments for this purpose also demonstrates the broader value of NHD catchments for supporting landscape characterization and analysis. Five catchment delineation methods were evaluated. Four of the methods use topographic information for the delineation of the NHD catchments. These methods include the Raster Seeding Method; two variants of a method first used in a USGS New England study (one used the Watershed Boundary Dataset (WBD) and the other did not), termed the 'New England Methods'; and the Outlet Matching Method. For these topographically based methods, the elevation data source was the 30-meter (m) resolution National Elevation Dataset (NED), as this was the highest resolution available for the conterminous United States and Hawaii. The fifth method evaluated, the Thiessen Polygon Method, uses distance to the nearest NHD stream segments to determine catchment boundaries. Catchments were generated using each method for NHD stream segments within six hydrologically and geographically distinct Subbasins to evaluate the applicability of the method across the United States. The five methods were evaluated by comparing the resulting catchments with the boundaries and the computed area measurements available from several verification datasets that were developed independently using manual methods. The results of the evaluation indicated that the two New England Methods provided the most accurate catchment boundaries. The New England Method with the WBD provided the most accurate results. The time and cost to implement and apply these automated methods were also considered in ultimately selecting the methods used to produce NHD catchments for the conterminous United States and Hawaii. This study was conducted by a joint USGS-USEPA team during the 2-year period that ended in September 2004. During the following 2-year period ending in the fall of 2006, the New England Methods were used to produce NHD catchments as part of a multiagency effort to generate the NHD streamflow and velocity estimates for a suite of integrated geospatial products known as 'NHDPlus.'

  20. Interpretation of biological and mechanical variations between the Lowry versus Bradford method for protein quantification.

    PubMed

    Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li

    2010-07-01

    The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expressions are a result of true biological or methodical variations. MATERIAL & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus Bradford methods, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using both methods yielded significantly different expression patterns on western blot. We show for the first time that methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
