Sample records for Prony method

  1. Study of eigenfrequencies with the help of Prony's method

    NASA Astrophysics Data System (ADS)

    Drobakhin, O. O.; Olevskyi, O. V.; Olevskyi, V. I.

    2017-10-01

Eigenfrequencies can be crucial in the design of a structure. They define many parameters that determine the limit parameters of the structure; exceeding these values can lead to structural failure. This is especially important in the design of structures that support heavy equipment or are subjected to airflow forces. One of the most effective ways to acquire the frequency values is computer-based numerical simulation, but existing methods do not allow one to acquire the whole range of needed parameters. It is well known that Prony's method is highly effective for the investigation of dynamic processes; thus, it is rational to adapt it for such investigation. The Prony method has an advantage over other numerical schemes in that it can process not only the results of numerical simulation but also real experimental data. The research was carried out for a computer model of a steel plate. The input data were obtained using the Dassault Systèmes SolidWorks package with the Simulation add-on and were investigated with the help of Prony's method. The results of the numerical experiment show that Prony's method can be used to investigate mechanical eigenfrequencies with good accuracy. Its output contains not only the frequency values themselves but also the amplitudes, initial phases, and decay factors of any given mode of oscillation, which can also be used in engineering.
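    The parameters listed in the abstract (frequencies, decay factors, amplitudes, and initial phases) are exactly what the classical two-step Prony method extracts from uniformly sampled data. The function below is an illustrative numpy sketch of that textbook procedure, not the authors' code; the interface and names are our own.

```python
import numpy as np

def prony(x, dt, p):
    """Classical Prony fit of p damped complex exponentials to samples x
    taken at uniform spacing dt. Returns magnitudes, decay factors (1/s),
    frequencies (Hz), and initial phases. A real damped cosine of
    amplitude A appears as a conjugate pair, each with magnitude A/2."""
    N = len(x)
    # 1) Linear prediction: find a such that x[n] = -sum_k a[k] x[n-k].
    A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # 2) Roots of the characteristic polynomial give z_i = exp(s_i * dt).
    z = np.roots(np.concatenate(([1.0], a)))
    s = np.log(z.astype(complex)) / dt          # continuous-time poles
    # 3) Least-squares solve of the Vandermonde system for amplitudes.
    V = z[None, :] ** np.arange(N)[:, None]
    h, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return np.abs(h), s.real, s.imag / (2 * np.pi), np.angle(h)
```

    On a noiseless damped cosine, e.g. `2*exp(-0.5*t)*cos(2*pi*5*t + 0.3)` sampled at `dt = 0.01` with `p = 2`, the fit recovers the ±5 Hz conjugate pair, the -0.5 1/s decay factor, and the ±0.3 rad phases essentially exactly.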

  2. Investigation of Procedures for Automatic Resonance Extraction from Noisy Transient Electromagnetics Data. Volume III. Translation of Prony’s Original Paper and Bibliography of Prony’s Method

    DTIC Science & Technology

    1981-08-17

Van Blaricum, "On the Source of Parameter Bias in Prony's Method," 1980 NEM Conference, Disneyland Hotel, August 1980. Auton, J.R., "An Unbiased...Method for the Estimation of the SEM Parameters of an Electromagnetic System," 1980 NEM Conference, Disneyland Hotel, August 1980. Auton, J.R. and M.L...1980 NEM Conference, Disneyland Hotel, August 5-7, 1980. Chuang, C.W. and D.L. Moffatt, "Complex Natural Resonances of Radar Targets via Prony's

  3. Automatic Implementation of Prony Analysis for Electromechanical Mode Identification from Phasor Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.

    2010-07-31

Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate an on-line application of mode estimation, this paper developed a recursive algorithm for implementing Prony analysis and proposed an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner to the ringdown data. Thus, mode estimation can be performed reliably and in a timely fashion. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to be able to properly identify the oscillation data for on-line application of Prony analysis.
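    The quantities monitored in this kind of mode estimation, modal frequency and damping ratio, follow directly from the discrete-time roots of a Prony fit. A minimal sketch, with a hypothetical helper name of our choosing:

```python
import numpy as np

def mode_parameters(z, dt):
    """Map discrete-time Prony roots z (sample interval dt) to modal
    frequency in Hz and damping ratio, the quantities tracked in
    small-signal stability monitoring."""
    s = np.log(np.asarray(z, dtype=complex)) / dt  # poles sigma + j*omega
    freq = s.imag / (2 * np.pi)
    zeta = -s.real / np.abs(s)                     # zeta = -sigma/|s|
    return freq, zeta
```

    A pole with sigma = -0.1 1/s and omega = 2*pi*0.3 rad/s, typical of an inter-area oscillation, maps back to a 0.3 Hz mode with a damping ratio of about 5%.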

  4. Improving the Performance of the Prony Method Using a Wavelet Domain Filter for MRI Denoising

    PubMed Central

    Lentini, Marianela; Paluszny, Marco

    2014-01-01

The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method. PMID:24834108

  5. Improving the performance of the prony method using a wavelet domain filter for MRI denoising.

    PubMed

    Jaramillo, Rodney; Lentini, Marianela; Paluszny, Marco

    2014-01-01

The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method.

  6. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. Method based on optimization techniques yields closer correlation with data than traditional method. Involves no assumptions regarding the γ'_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in data. Curve-fitting problem treated as design-optimization problem and solved by use of partially-constrained-optimization techniques.
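    The idea of determining all Prony constants by optimization, rather than pre-assuming the exponential time constants, can be sketched with SciPy's generic least-squares solver. This is an illustrative stand-in for the method described, not the NASA program; the log-space parameterization (to keep weights and times positive) is our own choice.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_prony(t, E, n):
    """Fit E(t) = E_inf + sum_i E_i * exp(-t / tau_i) treating ALL
    constants (weights E_i and times tau_i) as free optimization
    variables. Positivity is enforced by optimizing in log space."""
    tau0 = np.geomspace(t[1], t[-1], n + 2)[1:-1]   # interior starting grid
    p0 = np.concatenate(([E.min()],
                         np.full(n, np.log(np.ptp(E) / n)),
                         np.log(tau0)))

    def resid(p):
        E_inf, w, tau = p[0], np.exp(p[1:1 + n]), np.exp(p[1 + n:])
        return E_inf + np.exp(-t[:, None] / tau) @ w - E

    p = least_squares(resid, p0).x
    return p[0], np.exp(p[1:1 + n]), np.exp(p[1 + n:])
```

    On synthetic one-term relaxation data, E(t) = 1 + 2 exp(-t), sampled logarithmically over four decades, the fit recovers the equilibrium modulus, weight, and relaxation time to high accuracy without any assumed time constants.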

  7. Numerical solution methods for viscoelastic orthotropic materials

    NASA Technical Reports Server (NTRS)

    Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.

    1988-01-01

Numerical solution methods for viscoelastic orthotropic materials, specifically fiber reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time length, and computer memory storage. The Volterra integral allowed the implementation of higher order solution techniques but had difficulties solving singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.

  8. Determining a Prony Series for a Viscoelastic Material From Time Varying Strain Data

    NASA Technical Reports Server (NTRS)

    Tzikang, Chen

    2000-01-01

In this study, a method of determining the coefficients in a Prony series representation of a viscoelastic modulus from rate-dependent data is presented. Load-versus-time test data for a sequence of different rate loading segments are least-squares fitted to a Prony series hereditary integral model of the material tested. A nonlinear least-squares regression algorithm is employed. The measured data include ramp loading, relaxation, and unloading stress-strain data. The resulting Prony series, which captures strain rate loading and unloading effects, produces an excellent fit to the complex loading sequence.

  9. Algorithm Summary and Evaluation: Automatic Implementation of Ringdown Analysis for Electromechanical Mode Identification from Phasor Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Tuffner, Francis K.

    2010-02-28

Small signal stability problems are one of the major threats to grid stability and reliability. Prony analysis has been successfully applied to ringdown data to monitor electromechanical modes of a power system using phasor measurement unit (PMU) data. To facilitate an on-line application of mode estimation, this paper develops a recursive algorithm for implementing Prony analysis and proposes an oscillation detection method to detect ringdown data in real time. By automatically detecting ringdown data, the proposed method helps guarantee that Prony analysis is applied properly and in a timely manner to the ringdown data. Thus, mode estimation can be performed reliably and in a timely fashion. The proposed method is tested using Monte Carlo simulations based on a 17-machine model and is shown to be able to properly identify the oscillation data for on-line application of Prony analysis. In addition, the proposed method is applied to field measurement data from WECC to show the performance of the proposed algorithm.

  10. Calculation of light delay for coupled microrings by FDTD technique and Padé approximation.

    PubMed

    Huang, Yong-Zhen; Yang, Yue-De

    2009-11-01

The Padé approximation with Baker's algorithm is compared with the least-squares Prony method and the generalized pencil-of-functions (GPOF) method for calculating mode frequencies and mode Q factors for coupled optical microdisks simulated by the FDTD technique. Comparisons of intensity spectra and the corresponding mode frequencies and Q factors show that the Padé approximation can yield more stable results than the Prony and GPOF methods, especially for the intensity spectrum. The results of the Prony and GPOF methods are greatly influenced by the selected number of resonant modes, which needs to be optimized during the data processing, in addition to the length of the time response signal. Furthermore, the Padé approximation is applied to calculate light delay for embedded microring resonators from complex transmission spectra obtained by the Padé approximation from an FDTD output. The Prony and GPOF methods cannot be applied to calculate the transmission spectra, because the transmission signal obtained by the FDTD simulation cannot be expressed as a sum of damped complex exponentials.

  11. Prony Ringdown GUI (CERTS Prony Ringdown, part of the DSI Tool Box)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuffner, Francis; Marinovici, PNNL Laurentiu; Hauer, PNNL John

    2014-02-21

The PNNL Prony Ringdown graphical user interface is one analysis tool included in the Dynamic System Identification toolbox (DSI Toolbox), a MATLAB-based collection of tools for parsing and analyzing phasor measurement unit data, especially with regard to small signal stability. It includes tools to read the data, preprocess it, and perform small signal analysis. The DSI Toolbox is designed to provide a research environment for examining phasor measurement unit data and performing small-signal-stability analysis. The software uses a series of text-driven menus to help guide users and organize the toolbox features. Methods for reading in phasor measurement unit data are provided, with appropriate preprocessing options for small-signal-stability analysis. The toolbox includes the Prony Ringdown GUI and basic algorithms to estimate information on oscillatory modes of the system, such as modal frequency and damping ratio.

  12. Unstable optical resonator loss calculations using the prony method.

    PubMed

    Siegman, A E; Miller, H Y

    1970-12-01

    The eigenvalues for all the significant low-order resonant modes of an unstable optical resonator with circular mirrors are computed using an eigenvalue method called the Prony method. A general equivalence relation is also given, by means of which one can obtain the design parameters for a single-ended unstable resonator of the type usually employed in practical lasers, from the calculated or tabulated values for an equivalent symmetric or double-ended unstable resonator.

  13. Functional Techniques for Data Analysis

    NASA Technical Reports Server (NTRS)

    Tomlinson, John R.

    1997-01-01

This dissertation develops a new general method of solving Prony's problem. Two special cases of this new method have been developed previously: the Matrix Pencil and Osculatory Interpolation. The dissertation shows that they are instances of a more general solution type which allows a wide-ranging class of linear functionals to be used in the solution of the problem. This class provides a continuum of functionals which yield new methods that can be used to solve Prony's problem.

  14. Prony series spectra of structural relaxation in N-BK7 for finite element modeling.

    PubMed

    Koontz, Erick; Blouin, Vincent; Wachtel, Peter; Musgraves, J David; Richardson, Kathleen

    2012-12-20

Structural relaxation behavior of N-BK7 glass was characterized at temperatures 20 °C above and below T12 for this glass, using a thermomechanical analyzer (TMA). T12 is a characteristic temperature corresponding to a viscosity of 10^12 Pa·s. The glass was subjected to quick temperature down-jumps preceded and followed by long isothermal holds. The exponential-like decay of the sample height was recorded and fitted using a unique Prony series method. The result of this method was a plot of the fit parameters revealing the presence of four distinct peaks or distributions of relaxation times. The number of relaxation times decreased as the final test temperature was increased. The relaxation times did not shift significantly with changing temperature; however, the Prony weight terms varied essentially linearly with temperature. It was also found that the structural relaxation behavior of the glass trended toward single-exponential behavior at temperatures above the testing range. The result of the analysis was a temperature-dependent Prony series model that can be used in finite element modeling of glass behavior in processes such as precision glass molding (PGM).

  15. Time-domain system for identification of the natural resonant frequencies of aircraft relevant to electromagnetic compatibility testing

    NASA Astrophysics Data System (ADS)

    Adams, J. W.; Ondrejka, A. R.; Medley, H. W.

    1987-11-01

    A method of measuring the natural resonant frequencies of a structure is described. The measurement involves irradiating this structure, in this case a helicopter, with an impulsive electromagnetic (EM) field and receiving the echo reflected from the helicopter. Resonances are identified by using a mathematical algorithm based on Prony's method to operate on the digitized reflected signal. The measurement system consists of special TEM horns, pulse generators, a time-domain system, and Prony's algorithm. The frequency range covered is 5 megahertz to 250 megahertz. This range is determined by antenna and circuit characteristics. The measurement system is demonstrated, and measured data from several different helicopters are presented in different forms. These different forms are needed to determine which of the resonant frequencies are real and which are false. The false frequencies are byproducts of Prony's algorithm.

  16. Remote Acoustic Sensing of Oceanic Fluid and Biological Processes.

    DTIC Science & Technology

    1980-06-01

Oceanography (FISHER and SQUIER, 1975; SQUIER, WILLIAMS, BURKE and FISHER, 1976) have developed and used a narrow-beam 87.5 kHz echo sounder and detected...of the ocean (PRONI and APEL, 1975; PRONI, 1978). He has detected internal waves and interleaving water masses (NEWMAN, PRONI and WALTER, 1977). He...Theoretical considerations (WESTON, 1958; TATARSKII, 1961; MUNK and GARRETT, 1973; PRONI and APEL, 1975; ORR and HESS, 1978b) indicate that the

  17. High-Resolution Array with Prony, MUSIC, and ESPRIT Algorithms

    DTIC Science & Technology

    1992-08-25

Naval Research Laboratory, Washington, DC 20375-5320. NRL/FR/5324-92-9397 (AD-A255 514). High-Resolution Array with Prony, MUSIC, and ESPRIT...distribution unlimited...This report presents the array high-resolution properties of three algorithms: the Prony algorithm, the MUSIC algorithm, and the ESPRIT algorithm. MUSIC has been much

  18. On the Prony series representation of stretched exponential relaxation

    NASA Astrophysics Data System (ADS)

    Mauro, John C.; Mauro, Yihong Z.

    2018-09-01

Stretched exponential relaxation is a ubiquitous feature of homogeneous glasses. The stretched exponential decay function can be derived from the diffusion-trap model, which predicts certain critical values of the fractional stretching exponent, β. In practical implementations of glass relaxation models, it is computationally convenient to represent the stretched exponential function as a Prony series of simple exponentials. Here, we perform a comprehensive mathematical analysis of the Prony series approximation of stretched exponential relaxation, including optimized coefficients for certain critical values of β. The fitting quality of the Prony series is analyzed as a function of the number of terms in the series. With a sufficient number of terms, the Prony series can accurately capture the time evolution of the stretched exponential function, including its "fat tail" at long times. However, it is unable to capture the divergence of the first derivative of the stretched exponential function in the limit of zero time. We also present a frequency-domain analysis of the Prony series representation of the stretched exponential function and discuss its physical implications for the modeling of glass relaxation behavior.
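    The trade-off the abstract describes, fitting quality versus number of Prony terms, can be illustrated by fixing relaxation times on a logarithmic grid and solving for nonnegative weights. The grid and the NNLS solve are our illustrative choices, not the paper's optimized coefficients; β = 3/7 is one of the critical exponents predicted by the diffusion-trap model.

```python
import numpy as np
from scipy.optimize import nnls

def stretched_exp_prony(beta, n_terms, t):
    """Approximate the stretched exponential exp(-t**beta) (tau = 1) by a
    Prony series of n_terms simple exponentials. Relaxation times are
    fixed on a logarithmic grid; nonnegative weights come from NNLS,
    which is reasonable because the target is completely monotone."""
    taus = np.geomspace(t[0], t[-1], n_terms)
    A = np.exp(-t[:, None] / taus)
    w, _ = nnls(A, np.exp(-t**beta))
    return taus, w

t = np.geomspace(1e-3, 1e2, 500)
beta = 3.0 / 7.0
for n in (4, 8, 12):
    taus, w = stretched_exp_prony(beta, n, t)
    err = np.max(np.abs(np.exp(-t**beta) - np.exp(-t[:, None] / taus) @ w))
    print(f"{n} terms: max abs error {err:.2e}")
```

    The maximum error over the sampled window shrinks as terms are added, mirroring the paper's observation that enough terms capture the fat tail, while no finite sum reproduces the derivative divergence at t = 0.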

  19. Polymeric Materials Models in the Warrior Injury Assessment Manikin (WIAMan) Anthropomorphic Test Device (ATD) Tech Demonstrator

    DTIC Science & Technology

    2017-01-01

are the shear relaxation moduli and relaxation times, which make up the classical Prony series. A Prony-series expansion is a relaxation function...approximation for modeling time-dependent damping. The scalar parameters 1 and 2 control the nonlinearity of the Prony series. Under the...Velodyne that best fit the experimental stress-strain data. To do so, the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA

  20. On the analytical determination of relaxation modulus of viscoelastic materials by Prony's interpolation method

    NASA Technical Reports Server (NTRS)

    Rodriguez, Pedro I.

    1986-01-01

A computer implementation of Prony's curve fitting by exponential functions is presented. The method, although more than one hundred years old, has not been utilized to its fullest capabilities due to the restriction that the time range must be given in equal increments in order to obtain the best curve fit for a given set of data. The procedure used in this paper utilizes the 3-dimensional capabilities of the Interactive Graphics Design System (I.G.D.S.) in order to obtain the equal time increments. The resultant information is then input into a computer program that solves directly for the exponential constants yielding the best curve fit. Once the exponential constants are known, a simple least-squares solution can be applied to obtain the final form of the equation.

  21. The analytical representation of viscoelastic material properties using optimization techniques

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1993-01-01

This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.

  22. Metagenomic and PCR-Based Diversity Surveys of [FeFe]-Hydrogenases Combined with Isolation of Alkaliphilic Hydrogen-Producing Bacteria from the Serpentinite-Hosted Prony Hydrothermal Field, New Caledonia.

    PubMed

    Mei, Nan; Postec, Anne; Monnin, Christophe; Pelletier, Bernard; Payri, Claude E; Ménez, Bénédicte; Frouin, Eléonore; Ollivier, Bernard; Erauso, Gaël; Quéméneur, Marianne

    2016-01-01

    High amounts of hydrogen are emitted in the serpentinite-hosted hydrothermal field of the Prony Bay (PHF, New Caledonia), where high-pH (~11), low-temperature (< 40°C), and low-salinity fluids are discharged in both intertidal and shallow submarine environments. In this study, we investigated the diversity and distribution of potentially hydrogen-producing bacteria in Prony hyperalkaline springs by using metagenomic analyses and different PCR-amplified DNA sequencing methods. The retrieved sequences of hydA genes, encoding the catalytic subunit of [FeFe]-hydrogenases and, used as a molecular marker of hydrogen-producing bacteria, were mainly related to those of Firmicutes and clustered into two distinct groups depending on sampling locations. Intertidal samples were dominated by new hydA sequences related to uncultured Firmicutes retrieved from paddy soils, while submarine samples were dominated by diverse hydA sequences affiliated with anaerobic and/or thermophilic submarine Firmicutes pertaining to the orders Thermoanaerobacterales or Clostridiales. The novelty and diversity of these [FeFe]-hydrogenases may reflect the unique environmental conditions prevailing in the PHF (i.e., high-pH, low-salt, mesothermic fluids). In addition, novel alkaliphilic hydrogen-producing Firmicutes (Clostridiales and Bacillales) were successfully isolated from both intertidal and submarine PHF chimney samples. Both molecular and cultivation-based data demonstrated the ability of Firmicutes originating from serpentinite-hosted environments to produce hydrogen by fermentation, potentially contributing to the molecular hydrogen balance in situ.

  23. Enhanced vasomotion of cerebral arterioles in spontaneously hypertensive rats

    NASA Technical Reports Server (NTRS)

    Lefer, D. J.; Lynch, C. D.; Lapinski, K. C.; Hutchins, P. M.

    1990-01-01

Intrinsic rhythmic changes in the diameter of pial cerebral arterioles (30-70 microns) in anesthetized normotensive and hypertensive rats were assessed in vivo to determine if any significant differences exist between the two strains. All diameter measurements were analyzed using a traditional graphic analysis technique and a new frequency spectrum analysis technique known as the Prony Spectral Line Estimator. Graphic analysis of the data revealed that spontaneously hypertensive rats (SHR) possess a significantly greater fundamental frequency (5.57 +/- 0.28 cycles/min) of vasomotion compared to the control Wistar-Kyoto normotensive rats (WKY) (1.95 +/- 0.37 cycles/min). Furthermore, the SHR cerebral arterioles exhibited a significantly greater amplitude of vasomotion (10.07 +/- 0.70 microns) when compared to the WKY cerebral arterioles of the same diameter (8.10 +/- 0.70 microns). Diameter measurements processed with the Prony technique revealed that the fundamental frequency of vasomotion in SHR cerebral arterioles (6.14 +/- 0.39 cycles/min) was also significantly greater than that of the WKY cerebral arterioles (2.99 +/- 0.42 cycles/min). The mean amplitudes of vasomotion in the SHR and WKY strains obtained by the Prony analysis were found not to be significantly different, in contrast to the graphic analysis of the vasomotion amplitude of the arterioles. In addition, the Prony system was able to consistently uncover a very low frequency of vasomotion in both strains of rats that was typically less than 1 cycle/min and was not significantly different between the two strains. The amplitude of this slow frequency was also not significantly different between the two strains. The amplitude of the slow frequency of vasomotion (less than 1 cycle/min) was not different from the amplitude of the higher frequency (2-6 cycles/min) vasomotion by Prony or graphic analysis.
These data suggest that a fundamental intrinsic defect exists in the spontaneously hypertensive rat that may contribute to the pathogenesis of hypertension in these animals.

  24. Two-Port Representation of a Linear Transmission Line in the Time Domain.

    DTIC Science & Technology

    1980-01-01

which is a rational function. To use the Prony procedure it is necessary to inverse transform the admittance functions. For the transmission line, most...impulse is a constant, the inverse transform of Y0(s) contains an impulse of value... Therefore, if we were to numerically inverse transform Y0(s), we would remove this impulse and inverse transform Y(s) (23). The Prony procedure would then be applied to the result. Of course, an impulse

  25. Conventional, Bayesian, and Modified Prony's methods for characterizing fast and slow waves in equine cancellous bone

    PubMed Central

    Groopman, Amber M.; Katz, Jonathan I.; Holland, Mark R.; Fujita, Fuminori; Matsukawa, Mami; Mizuno, Katsunori; Wear, Keith A.; Miller, James G.

    2015-01-01

    Conventional, Bayesian, and the modified least-squares Prony's plus curve-fitting (MLSP + CF) methods were applied to data acquired using 1 MHz center frequency, broadband transducers on a single equine cancellous bone specimen that was systematically shortened from 11.8 mm down to 0.5 mm for a total of 24 sample thicknesses. Due to overlapping fast and slow waves, conventional analysis methods were restricted to data from sample thicknesses ranging from 11.8 mm to 6.0 mm. In contrast, Bayesian and MLSP + CF methods successfully separated fast and slow waves and provided reliable estimates of the ultrasonic properties of fast and slow waves for sample thicknesses ranging from 11.8 mm down to 3.5 mm. Comparisons of the three methods were carried out for phase velocity at the center frequency and the slope of the attenuation coefficient for the fast and slow waves. Good agreement among the three methods was also observed for average signal loss at the center frequency. The Bayesian and MLSP + CF approaches were able to separate the fast and slow waves and provide good estimates of the fast and slow wave properties even when the two wave modes overlapped in both time and frequency domains making conventional analysis methods unreliable. PMID:26328678

  26. Cancellous bone analysis with modified least squares Prony's method and chirp filter: phantom experiments and simulation.

    PubMed

    Wear, Keith A

    2010-10-01

    The presence of two longitudinal waves in porous media is predicted by Biot's theory and has been confirmed experimentally in cancellous bone. When cancellous bone samples are interrogated in through-transmission, these two waves can overlap in time. Previously, the Modified Least-Squares Prony's (MLSP) method was validated for estimation of amplitudes, attenuation coefficients, and phase velocities of fast and slow waves, but tended to overestimate phase velocities by up to about 5%. In the present paper, a pre-processing chirp filter to mitigate the phase velocity bias is derived. The MLSP/chirp filter (MLSPCF) method was tested for decomposition of a 500 kHz-center-frequency signal containing two overlapping components: one passing through a low-density-polyethylene plate (fast wave) and another passing through a cancellous-bone-mimicking phantom material (slow wave). The chirp filter reduced phase velocity bias from 100 m/s (5.1%) to 69 m/s (3.5%) (fast wave) and from 29 m/s (1.9%) to 10 m/s (0.7%) (slow wave). Similar improvements were found for 1) measurements in polycarbonate (fast wave) and a cancellous-bone-mimicking phantom (slow wave), and 2) a simulation based on parameters mimicking bovine cancellous bone. The MLSPCF method did not offer consistent improvement in estimates of attenuation coefficient or amplitude.

  27. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  28. Spatial distribution of microbial communities in the shallow submarine alkaline hydrothermal field of the Prony Bay, New Caledonia.

    PubMed

    Quéméneur, Marianne; Bes, Méline; Postec, Anne; Mei, Nan; Hamelin, Jérôme; Monnin, Christophe; Chavagnac, Valérie; Payri, Claude; Pelletier, Bernard; Guentas-Dombrowsky, Linda; Gérard, Martine; Pisapia, Céline; Gérard, Emmanuelle; Ménez, Bénédicte; Ollivier, Bernard; Erauso, Gaël

    2014-12-01

The shallow submarine hydrothermal field of the Prony Bay (New Caledonia) discharges hydrogen- and methane-rich fluids with low salinity, low temperature (< 40°C), and high pH (~11), produced by serpentinization reactions of the ultramafic basement into the lagoon seawater. These fluids are responsible for the formation of carbonate chimneys at the lagoon seafloor. Capillary electrophoresis single-strand conformation polymorphism fingerprinting, quantitative polymerase chain reaction and sequence analysis of 16S rRNA genes revealed changes in microbial community structure, abundance and diversity depending on the location, water depth, and structure of the carbonate chimneys. The low archaeal diversity was dominated by a few uncultured Methanosarcinales similar to those found in other serpentinization-driven submarine and subterrestrial ecosystems (e.g. Lost City, The Cedars). The most abundant and diverse bacterial communities were mainly composed of Chloroflexi, Deinococcus-Thermus, Firmicutes and Proteobacteria. Functional gene analysis revealed similar abundance and diversity of both Methanosarcinales methanoarchaea, and Desulfovibrionales and Desulfobacterales sulfate-reducers in the studied sites. Molecular studies suggest that redox reactions involving hydrogen, methane and sulfur compounds (e.g. sulfate) are the energy driving forces of the microbial communities inhabiting the Prony hydrothermal system.

  9. Application of the Virtual Fields Method to a relaxation behaviour of rubbers

    NASA Astrophysics Data System (ADS)

    Yoon, Sung-ho; Siviour, Clive R.

    2018-07-01

    This paper presents the application of the Virtual Fields Method (VFM) for the characterization of viscoelastic behaviour of rubbers. The relaxation behaviour of the rubbers following a dynamic loading event is characterized using the dynamic VFM in which full-field (two dimensional) strain and acceleration data, obtained from high-speed imaging, are analysed by the principle of virtual work without traction force data, instead using the acceleration fields in the specimen to provide stress information. Two (silicone and nitrile) rubbers were tested in tension using a drop-weight apparatus. It is assumed that the dynamic behaviour is described by the combination of hyperelastic and Prony series models. A VFM based procedure is designed and used to produce the identification of the modulus term of a hyperelastic model and the Prony series parameters within a time scale determined by two experimental factors: imaging speed and loading duration. Then, the time range of the data is extended using experiments at different temperatures combined with the time-temperature superposition principle. Prior to these experimental analyses, finite element simulations were performed to validate the application of the proposed VFM analysis. Therefore, for the first time, it has been possible to identify relaxation behaviour of a material following dynamic loading, using a technique that can be applied to both small and large deformations.

  10. Estimation of fast and slow wave properties in cancellous bone using Prony's method and curve fitting.

    PubMed

    Wear, Keith A

    2013-04-01

    The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve-fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.

  11. The Derivation of Simple Poles in a Transfer Function from Real Frequency Information. Part 3. Object Classification and Identification,

    DTIC Science & Technology

    1977-01-10

    This report is the third in a series of three that evaluate a technique (frequency-domain Prony) for obtaining the poles of a transfer function. The...main objective was to assess the feasibility of classifying or identifying ship-like targets by using pole sets derived from frequency-domain data. A...predictor-correlator procedure for using spectral data and library pole sets for this purpose was developed. Also studied was an iterative method for

  12. Koopman Mode Decomposition Methods in Dynamic Stall: Reduced Order Modeling and Control

    DTIC Science & Technology

    2015-11-10

    the flow phenomena by separating them into individual modes. The technique of Proper Orthogonal Decomposition (POD), see [Holmes: 1998], is a popular...sampled values h(k), k = 0, …, 2M-1, of the exponential sum: 1. Solve the following linear system, where... 2. Compute all zeros z_j in D, j = 1, …, M, of the Prony polynomial, i.e., calculate all eigenvalues of the associated companion matrix, and form f_j = log z_j for j = 1, …, M, where log is the
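    The two numbered steps above are the classical Prony recipe. A minimal numpy sketch (using made-up modes, not data from the report) that solves the linear system for the Prony polynomial coefficients, finds its zeros via the companion matrix, and forms f_j = log z_j might look like:

```python
import numpy as np

# Synthetic exponential sum with M = 2 known modes:
#   h(k) = sum_j a_j * z_j**k, sampled at k = 0, ..., 2M-1.
M = 2
z_true = np.array([0.9 * np.exp(1j * 0.5), 0.7 * np.exp(-1j * 1.2)])
a_true = np.array([1.0 + 0j, 2.0 + 0j])
k = np.arange(2 * M)
h = (a_true[None, :] * z_true[None, :] ** k[:, None]).sum(axis=1)

# Step 1: solve the linear (Hankel) system for the Prony polynomial
# coefficients p_0..p_{M-1} (monic, with p_M = 1):
#   sum_{i=0}^{M-1} h(k+i) p_i = -h(k+M),  k = 0, ..., M-1.
H = np.array([[h[kk + i] for i in range(M)] for kk in range(M)])
p = np.linalg.solve(H, -h[M:2 * M])

# Step 2: the zeros of the Prony polynomial are the eigenvalues of its
# companion matrix (numpy.roots builds exactly that matrix internally).
z_est = np.roots(np.concatenate(([1.0], p[::-1])))

# Step 3: f_j = log z_j; the real part gives the damping and the
# imaginary part the frequency of each mode.
f_est = np.log(z_est)
print(np.sort_complex(z_est))  # recovers z_true up to ordering
```

    The same three steps generalize to noisy, overdetermined data by replacing the square solve in step 1 with least squares.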

  13. Microbial diversity in a submarine carbonate edifice from the serpentinizing hydrothermal system of the Prony Bay (New Caledonia) over a 6-year period.

    PubMed

    Postec, Anne; Quéméneur, Marianne; Bes, Méline; Mei, Nan; Benaïssa, Fatma; Payri, Claude; Pelletier, Bernard; Monnin, Christophe; Guentas-Dombrowsky, Linda; Ollivier, Bernard; Gérard, Emmanuelle; Pisapia, Céline; Gérard, Martine; Ménez, Bénédicte; Erauso, Gaël

    2015-01-01

    Active carbonate chimneys from the shallow marine serpentinizing Prony Hydrothermal Field were sampled three times over a 6-year period at site ST09. Archaeal and bacterial community composition was investigated using PCR-based methods (clone libraries, denaturing gradient gel electrophoresis, quantitative PCR) targeting 16S rRNA genes and the methyl coenzyme M reductase A and dissimilatory sulfite reductase subunit B genes. Methanosarcinales (Euryarchaeota) and Thaumarchaeota were the main archaeal members. The Methanosarcinales, also observed by epifluorescence microscopy and FISH, consisted of two phylotypes that had previously been detected solely in two other serpentinizing ecosystems (The Cedars and the Lost City Hydrothermal Field). Surprisingly, members of the hyperthermophilic order Thermococcales were also found, which may indicate the presence of a hot subsurface biosphere. The bacterial community mainly consisted of Firmicutes, Chloroflexi, Alpha-, Gamma-, Beta-, and Deltaproteobacteria, and the candidate division NPL-UPA2. Members of these taxa were consistently found each year and may therefore represent a stable core of the indigenous bacterial community of the PHF chimneys. Firmicutes isolates representing new bacterial taxa were obtained by cultivation under anaerobic conditions. Our study revealed diverse microbial communities in PHF ST09 related to methane and sulfur compounds that share common populations with other terrestrial or submarine serpentinizing ecosystems.

  14. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

    This study seeks a numerical algorithm that optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an autoregressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approximately 0.4% and frequency errors are reduced by a factor of approximately 20 at 303 K, to approximately 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.

  15. Vibrations Detection in Industrial Pumps Based on Spectral Analysis to Increase Their Efficiency

    NASA Astrophysics Data System (ADS)

    Rachid, Belhadef; Hafaifa, Ahmed; Boumehraz, Mohamed

    2016-03-01

    Spectral analysis is the key tool for the study of vibration signals in rotating machinery. In this work, vibration analysis applied to the conditional preventive maintenance of such machines is proposed, as part of resolving problems related to vibration detection on the organs of these machines. The vibration signal of a centrifugal pump was processed to demonstrate the benefits of the proposed approach. The obtained results present the signal estimation of pump vibration using the Fourier transform technique compared with spectral analysis methods based on the Prony approach.

  16. Skeletal muscle tensile strain dependence: hyperviscoelastic nonlinearity

    PubMed Central

    Wheatley, Benjamin B; Morrow, Duane A; Odegard, Gregory M; Kaufman, Kenton R; Donahue, Tammy L Haut

    2015-01-01

    Introduction Computational modeling of skeletal muscle requires characterization at the tissue level. While most skeletal muscle studies focus on hyperelasticity, the goal of this study was to examine and model the nonlinear behavior of both time-independent and time-dependent properties of skeletal muscle as a function of strain. Materials and Methods Nine tibialis anterior muscles from New Zealand White rabbits were subjected to five consecutive stress relaxation cycles of roughly 3% strain. Individual relaxation steps were fit with a three-term linear Prony series. Prony series coefficients and relaxation ratio were assessed for strain dependence using a general linear statistical model. A fully nonlinear constitutive model was employed to capture the strain dependence of both the viscoelastic and instantaneous components. Results Instantaneous modulus (p<0.0005) and mid-range relaxation (p<0.0005) increased significantly with strain level, while relaxation at longer time periods decreased with strain (p<0.0005). Time constants and overall relaxation ratio did not change with strain level (p>0.1). Additionally, the fully nonlinear hyperviscoelastic constitutive model provided an excellent fit to experimental data, while other models that included linear components failed to capture muscle function as accurately. Conclusions Material properties of skeletal muscle are strain-dependent at the tissue level. This strain dependence can be included in computational models of skeletal muscle performance with a fully nonlinear hyperviscoelastic model. PMID:26409235
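    Fitting a linear Prony series of the kind used above reduces to ordinary linear least squares once the time constants are fixed. The sketch below uses hypothetical moduli and time constants, not the rabbit data:

```python
import numpy as np

# Illustrative sketch: fitting a three-term Prony series
#   G(t) = g_inf + g1 e^{-t/tau1} + g2 e^{-t/tau2} + g3 e^{-t/tau3}
# to stress-relaxation data. Time constants and moduli are hypothetical;
# with the tau_i fixed, the fit is linear in the remaining coefficients.
taus = np.array([0.1, 1.0, 10.0])        # assumed time constants (s)
g_true = np.array([5.0, 3.0, 2.0, 1.0])  # [g_inf, g1, g2, g3] (kPa)

t = np.linspace(0.0, 30.0, 300)
A = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
stress = A @ g_true                      # noise-free synthetic data

# Linear least-squares fit of [g_inf, g1, g2, g3]:
g_fit, *_ = np.linalg.lstsq(A, stress, rcond=None)
print(g_fit)  # recovers g_true

# Relaxation ratio: long-time modulus over instantaneous modulus G(0).
ratio = g_fit[0] / g_fit.sum()
```

    With real relaxation data the same design matrix is used; only the left-hand side changes, and the residual indicates whether three terms suffice.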

  17. Microbial diversity in a submarine carbonate edifice from the serpentinizing hydrothermal system of the Prony Bay (New Caledonia) over a 6-year period

    PubMed Central

    Postec, Anne; Quéméneur, Marianne; Bes, Méline; Mei, Nan; Benaïssa, Fatma; Payri, Claude; Pelletier, Bernard; Monnin, Christophe; Guentas-Dombrowsky, Linda; Ollivier, Bernard; Gérard, Emmanuelle; Pisapia, Céline; Gérard, Martine; Ménez, Bénédicte; Erauso, Gaël

    2015-01-01

    Active carbonate chimneys from the shallow marine serpentinizing Prony Hydrothermal Field were sampled three times over a 6-year period at site ST09. Archaeal and bacterial community composition was investigated using PCR-based methods (clone libraries, denaturing gradient gel electrophoresis, quantitative PCR) targeting 16S rRNA genes and the methyl coenzyme M reductase A and dissimilatory sulfite reductase subunit B genes. Methanosarcinales (Euryarchaeota) and Thaumarchaeota were the main archaeal members. The Methanosarcinales, also observed by epifluorescence microscopy and FISH, consisted of two phylotypes that had previously been detected solely in two other serpentinizing ecosystems (The Cedars and the Lost City Hydrothermal Field). Surprisingly, members of the hyperthermophilic order Thermococcales were also found, which may indicate the presence of a hot subsurface biosphere. The bacterial community mainly consisted of Firmicutes, Chloroflexi, Alpha-, Gamma-, Beta-, and Deltaproteobacteria, and the candidate division NPL-UPA2. Members of these taxa were consistently found each year and may therefore represent a stable core of the indigenous bacterial community of the PHF chimneys. Firmicutes isolates representing new bacterial taxa were obtained by cultivation under anaerobic conditions. Our study revealed diverse microbial communities in PHF ST09 related to methane and sulfur compounds that share common populations with other terrestrial or submarine serpentinizing ecosystems. PMID:26379636

  18. Diversity of Rare and Abundant Prokaryotic Phylotypes in the Prony Hydrothermal Field and Comparison with Other Serpentinite-Hosted Ecosystems.

    PubMed

    Frouin, Eléonore; Bes, Méline; Ollivier, Bernard; Quéméneur, Marianne; Postec, Anne; Debroas, Didier; Armougom, Fabrice; Erauso, Gaël

    2018-01-01

    The Bay of Prony, south of New Caledonia, represents a unique serpentinite-hosted hydrothermal field due to its coastal situation. It harbors both submarine and intertidal active sites, discharging hydrogen- and methane-rich alkaline fluids of low salinity and mild temperature through porous carbonate edifices. In this study, we have extensively investigated the bacterial and archaeal communities inhabiting the hydrothermal chimneys of one intertidal and three submarine sites by 16S rRNA gene amplicon sequencing. We show that the bacterial community of the intertidal site is clearly distinct from that of the submarine sites, with species distribution patterns driven by only a few abundant populations affiliated to the Chloroflexi and Proteobacteria phyla. In contrast, the distribution of archaeal taxa seems less site-dependent, as exemplified by the co-occurrence, in both submarine and intertidal sites, of two dominant phylotypes of Methanosarcinales previously thought to be restricted to serpentinizing systems, either marine (Lost City Hydrothermal Field) or terrestrial (The Cedars ultrabasic springs). Over 70% of the phylotypes were rare and included, among others, all those affiliated to candidate divisions. We finally compared the distribution of bacterial and archaeal phylotypes of the Prony Hydrothermal Field with those of five previously studied serpentinizing systems at geographically distant sites. Although sensu stricto no core microbial community was identified, a few uncultivated lineages, notably within the archaeal order Methanosarcinales and the bacterial class Dehalococcoidia (the candidate division MSBL5), were found exclusively in a few serpentinizing systems, while other operational taxonomic units belonging to the orders Clostridiales and Thermoanaerobacterales or the genus Hydrogenophaga were abundantly distributed in several sites. These lineages may represent taxonomic signatures of serpentinizing ecosystems. These findings extend our current knowledge of the microbial diversity inhabiting serpentinizing systems and their biogeography.

  19. Comparative Study of Impedance Eduction Methods, Part 2: NASA Tests and Methodology

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.; Watson, Willie R.; Howerton, Brian M.; Busse-Gerstengarbe, Stefan

    2013-01-01

    A number of methods have been developed at NASA Langley Research Center for eduction of the acoustic impedance of sound-absorbing liners mounted in the wall of a flow duct. This investigation uses methods based on the Pridmore-Brown and convected Helmholtz equations to study the acoustic behavior of a single-layer, conventional liner fabricated by the German Aerospace Center and tested in the NASA Langley Grazing Flow Impedance Tube. Two key assumptions are explored in this portion of the investigation. First, a comparison of results achieved with uniform-flow and shear-flow impedance eduction methods is considered. Also, an approach based on the Prony method is used to extend these methods from single-mode to multi-mode implementations. Finally, a detailed investigation into the effects of harmonic distortion on the educed impedance is performed, and the results are used to develop guidelines regarding acceptable levels of harmonic distortion.

  20. Laboratory modeling and analysis of aircraft-lightning interactions

    NASA Technical Reports Server (NTRS)

    Turner, C. D.; Trost, T. F.

    1982-01-01

    Modeling studies of the interaction of a delta wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, is subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements are made of the resulting transient electromagnetic fields using time-derivative sensors. The sensor outputs are sampled and digitized by computer. The noise level is reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields includes Fourier transformation and the computation of transfer functions for the model. Prony analysis is also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those for in-flight direct-strike data usually show lower damping in the in-flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.

  1. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
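    The HRIR-to-ARMA step can be illustrated with the textbook form of Prony's filter-design method (MATLAB's prony implements the same idea: a linear-prediction fit for the denominator, then a convolution for the numerator). The filter below is a made-up one-pole example, not an HRTF:

```python
import numpy as np

def prony_design(h, nb, na):
    """Sketch of Prony's ARMA design: fit H(z) = B(z)/A(z) with
    numerator order nb and denominator order na to an impulse
    response h (assumptions: real h, h[0] aligned at n = 0)."""
    h = np.asarray(h, dtype=float)
    N = len(h)
    # Denominator via linear prediction on the tail h[nb+1:], a_0 = 1:
    #   sum_{k=1}^{na} a_k h[n-k] = -h[n]  for n > nb.
    rows = N - nb - 1
    Hmat = np.zeros((rows, na))
    for r in range(rows):
        n = nb + 1 + r
        for k in range(1, na + 1):
            if n - k >= 0:
                Hmat[r, k - 1] = h[n - k]
    a_tail, *_ = np.linalg.lstsq(Hmat, -h[nb + 1:], rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # Numerator: first nb+1 samples of the convolution h * a.
    b = np.convolve(h, a)[:nb + 1]
    return b, a

# Hypothetical check: impulse response of a known ARMA(1,1) filter.
b_true, a_true = np.array([1.0, 0.5]), np.array([1.0, -0.9])
h = np.zeros(50)
x = np.zeros(50); x[0] = 1.0
for n in range(50):
    h[n] = b_true[0] * x[n] + (b_true[1] * x[n - 1] if n > 0 else 0.0) \
           - a_true[1] * (h[n - 1] if n > 0 else 0.0)
b_est, a_est = prony_design(h, nb=1, na=1)
```

    Because the test response is exactly ARMA, the fit recovers the coefficients; for a measured HRIR the least-squares step yields an approximation whose quality improves with the chosen orders.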

  2. An Approximate Dissipation Function for Large Strain Rubber Thermo-Mechanical Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Chen, Tzi-Kang

    2003-01-01

    Mechanically induced viscoelastic dissipation is difficult to compute. When the constitutive model is defined by history integrals, the formula for dissipation is a double convolution integral. Since double convolution integrals are difficult to approximate, coupled thermo-mechanical analyses of highly viscous rubber-like materials cannot be made with most commercial finite element software. In this study, we present a method to approximate the dissipation for history integral constitutive models that represent Maxwell-like materials without approximating the double convolution integral. The method requires that the total stress can be separated into elastic and viscous components, and that the relaxation form of the constitutive law is defined with a Prony series. Numerical data are provided to demonstrate the limitations of this approximate method for determining dissipation. Rubber cylinders with embedded steel disks and with an embedded steel ball are dynamically loaded, and the nonuniform heating within the cylinders is computed.

  3. Some advanced parametric methods for assessing waveform distortion in a smart grid with renewable generation

    NASA Astrophysics Data System (ADS)

    Alfieri, Luisa

    2015-12-01

    Power quality (PQ) disturbances are becoming an important issue in smart grids (SGs) due to the significant economic consequences that they can generate on sensitive loads. However, SGs include several distributed energy resources (DERs) that can be interconnected to the grid with static converters, which lead to a reduction of the PQ levels. Among DERs, wind turbines and photovoltaic systems are expected to be used extensively due to the forecasted reduction in investment costs and other economic incentives. These systems can introduce significant time-varying voltage and current waveform distortions that require advanced spectral analysis methods to be used. This paper provides an application of advanced parametric methods for assessing waveform distortions in SGs with dispersed generation. In particular, the standard International Electrotechnical Commission (IEC) method, some parametric methods (such as Prony and Estimation of Signal Parameters by Rotational Invariance Technique (ESPRIT)), and some hybrid methods are critically compared on the basis of their accuracy and the computational effort required.

  4. Numerical integration of the extended variable generalized Langevin equation with a positive Prony representable memory kernel.

    PubMed

    Baczewski, Andrew D; Bond, Stephen D

    2013-07-28

    Generalized Langevin dynamics (GLD) arise in the modeling of a number of systems, ranging from structured fluids that exhibit a viscoelastic mechanical response, to biological systems, and other media that exhibit anomalous diffusive phenomena. Molecular dynamics (MD) simulations that include GLD in conjunction with external and/or pairwise forces require the development of numerical integrators that are efficient, stable, and have known convergence properties. In this article, we derive a family of extended variable integrators for the Generalized Langevin equation with a positive Prony series memory kernel. Using stability and error analysis, we identify a superlative choice of parameters and implement the corresponding numerical algorithm in the LAMMPS MD software package. Salient features of the algorithm include exact conservation of the first and second moments of the equilibrium velocity distribution in some important cases, stable behavior in the limit of conventional Langevin dynamics, and the use of a convolution-free formalism that obviates the need for explicit storage of the time history of particle velocities. Capability is demonstrated with respect to accuracy in numerous canonical examples, stability in certain limits, and an exemplary application in which the effect of a harmonic confining potential is mapped onto a memory kernel.
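    The convolution-free formalism mentioned above can be sketched as follows: with a Prony-series kernel, each exponential term becomes an auxiliary variable with a purely local update, so the time history of velocities need not be stored. The kernel coefficients and velocity signal below are illustrative, not those of the article or its LAMMPS implementation:

```python
import numpy as np

# For a Prony-series memory kernel K(t) = sum_i c_i * exp(-t / tau_i),
# the memory force F(t) = -int_0^t K(t - s) v(s) ds equals sum_i s_i(t),
# where each auxiliary variable obeys the local ODE
#   ds_i/dt = -s_i / tau_i - c_i * v(t),  s_i(0) = 0.
c = np.array([1.0, 0.5])
tau = np.array([0.2, 2.0])
dt, steps = 1e-3, 5000
t = np.arange(steps) * dt
v = np.sin(3.0 * t)  # a prescribed velocity signal

# Reference: direct discretized convolution (needs the full history).
K = (c[None, :] * np.exp(-t[:, None] / tau[None, :])).sum(axis=1)
F_direct = -np.convolve(K, v)[:steps] * dt

# Extended variables: exact exponential update, v held constant per step.
decay = np.exp(-dt / tau)
s = np.zeros(2)
F_aux = np.zeros(steps)
for n in range(steps):
    F_aux[n] = s.sum()
    s = s * decay - c * tau * (1.0 - decay) * v[n]
```

    The two evaluations agree to first order in the step size, but the auxiliary-variable form costs O(1) memory per mode instead of O(t) history storage, which is the point of the extended-variable integrators.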

  5. Simple, Effective Computation of Principal Eigen-Vectors and Their Eigenvalues and Application to High-Resolution Estimation of Frequencies

    DTIC Science & Technology

    1985-10-01

    written as follows:

        c_0 g_0 + c_1 g_1 + ... + c_n g_n = 0
        c_1 g_0 + c_2 g_1 + ... + c_{n+1} g_n = 0
        ...                                          (10a)
        c_n g_0 + c_{n+1} g_1 + ... + c_{2n} g_n = 0

    or, in matrix form, C g = 0. (10b) A non-zero solution is possible if the determinant of C is zero. From the theory of Prony's method [13],

        g_n lambda_i^n + ... + g_1 lambda_i + g_0 = 0    (11)

    hence the polynomial coefficient vector g is also orthogonal to the vector (1, lambda_i, lambda_i^2, ..., lambda_i^n)^T, where the lambda_i's are the

  6. Quasi-Static Viscoelasticity Loading Measurements of an Aircraft Tire

    NASA Technical Reports Server (NTRS)

    Mason, Angela J.; Tanner, John A.; Johnson, Arthur R.

    1997-01-01

    Stair-step loading, cyclic loading, and long-term relaxation tests were performed on an aircraft tire to observe the quasi-static viscoelastic response of the tire. The data indicate that the tire continues to respond viscoelastically even after it has been softened by deformation. Load relaxation data from the stair-step test at the 15,000-lb loading was fit to a monotonically decreasing Prony series.

  7. The Marvels of Electromagnetic Band Gap (EBG) Structures

    DTIC Science & Technology

    2003-11-01

    terminology of "Electromagnetic Band-Gaps (EBG)". Recently, many researchers have published conference papers and journal articles dealing with the characterizations...utilized to reduce the mutual coupling between elements of antenna arrays..."An FDTD/Prony Technique based on the...Band-Gap Structure"...of several patents. He has had pioneering research contributions in diverse areas of electromagnetics

  8. The general theory of the Quasi-reproducible experiments: How to describe the measured data of complex systems?

    NASA Astrophysics Data System (ADS)

    Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei

    2017-01-01

    In this paper, we suggest a general theory that enables one to describe experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we mean a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. Such a microscopic model is usually termed the "best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of a segment of the Prony series. The segment of the Prony series, comprising the set of decomposition coefficients and the set of exponential functions (with k = 1, 2, …, K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul, 27, (2015), pp 175-192] for reproducible experiments and includes the previous results as a particular case. In this paper, we consider a more complex case in which the available data form short samples or exhibit some instability during the measurement process. We give justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of a reduced set of fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors expressed in the form of the apparatus function is discussed. 
    To illustrate how to apply the theory and take advantage of its benefits, we consider experimental data associated with typical working conditions of the injection system in a common rail diesel engine. In particular, the flow rate of the injected fuel is considered at different reference rail pressures. The measured data are treated by the proposed algorithm to verify their adherence to the proposed general theory. The obtained results demonstrate the effectiveness of the proposed theory.

  9. The Generation and Propagation of Internal Solitary Waves in the South China Sea

    DTIC Science & Technology

    2013-12-05

    ISWs) have been frequently observed in the world oceans by satellite remote sensing [e.g., Apel et al., 1975; Osborne and Burch, 1980; Klemas, 2012...Kaartvedt et al., 2012], sediment resuspension [Quaresma et al., 2007; Pomar et al., 2012], acoustic wave propagation [Williams et al., 2001...Apel, J. R., H. M. Byrne, J. R. Proni, and R. L. Charnell (1975), Observations of oceanic internal and surface waves from earth resources

  10. Detection of quasi-periodic processes in repeated measurements: New approach for the fitting and clusterization of different data

    NASA Astrophysics Data System (ADS)

    Nigmatullin, R.; Rakhmatullin, R.

    2014-12-01

    Many experimentalists are accustomed to thinking that any independent measurement forms a non-correlated measurement that depends weakly on the others. We reconsider this conventional point of view and show that similar measurements form a strongly correlated sequence of random functions with memory; in other words, successive measurements "remember" each other, at least their nearest neighbors. This observation, justified on real data, helps to fit a wide set of data with Prony's function. The Prony decomposition follows from the quasi-periodic (QP) properties of the measured functions and includes the Fourier transform as a particular case. This new type of decomposition yields a specific amplitude-frequency response (AFR) of the analyzed (random) functions, and each random function is described by fewer fitting parameters than its number of initial data points. The calculated AFR can be considered a generalized Prony spectrum (GPS), which is extremely useful in cases where a simple model describing the measured data is absent but their quantitative description remains vital. These possibilities open a new way to clusterize the initial data, and the new information contained in these data offers a chance for their detailed analysis. Electron paramagnetic resonance (EPR) measurements performed on an empty resonator (pure noise data) and on a resonator containing a sample (CeO2 in our case) confirmed the existence of QP processes in reality. We believe that the detection of QP processes is a common feature of many repeated measurements, and this new property of successive measurements may attract the attention of many experimentalists. The aims of this paper are: to formulate some general conditions that help to identify and then detect the presence of a QP process in repeated experimental measurements; 
    to find a functional equation and its solution that yields the description of the identified QP process; and to suggest a computing algorithm for fitting the QP data to the analytical function that follows from the solution of the corresponding functional equation. The content of this paper is organized as follows. In Section 2 we seek answers to the problems posed in this introductory section; it also contains the mathematical description of the QP process and an interpretation of the meaning of the generalized Prony spectrum (GPS), which includes the conventional Fourier decomposition as a particular case. Section 3 contains the experimental details associated with acquiring the desired data. Section 4 includes some important details explaining specific features of the application of the general algorithm to concrete data. In Section 5 we summarize the results and outline the perspectives of this approach for the quantitative description of time-dependent random data registered in different complex systems and experimental devices. Here we should note that by a complex system we mean a system for which a conventional model is absent [6]. By simplicity of an acceptable model we mean a proper hypothesis ("best fit" model) containing a minimal number of fitting parameters that describes the behavior of the considered system quantitatively. The different approaches that exist nowadays for the description of such systems are collected in a recent review [7].

  11. Estimation of viscoelastic parameters in Prony series from shear wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Jae-Wook; Hong, Jung-Wuk, E-mail: j.hong@kaist.ac.kr, E-mail: jwhong@alum.mit.edu; Lee, Hyoung-Ki

    2016-06-21

    When acquiring accurate ultrasonic images, we must precisely estimate the mechanical properties of the soft tissue. This study investigates and estimates the viscoelastic properties of the tissue by analyzing shear waves generated through an acoustic radiation force. The shear waves are sourced from a localized pushing force acting for a certain duration, and the generated waves travel horizontally. The wave velocities depend on the mechanical properties of the tissue such as the shear modulus and viscoelastic properties; therefore, we can inversely calculate the properties of the tissue through parametric studies.

  12. Spectrum Modal Analysis for the Detection of Low-Altitude Windshear with Airborne Doppler Radar

    NASA Technical Reports Server (NTRS)

    Kunkel, Matthew W.

    1992-01-01

    A major obstacle in the estimation of windspeed patterns associated with low-altitude windshear with an airborne pulsed Doppler radar system is the presence of strong levels of ground clutter which can strongly bias a windspeed estimate. Typical solutions attempt to remove the clutter energy from the return through clutter rejection filtering. Proposed is a method whereby both the weather and clutter modes present in a return spectrum can be identified to yield an unbiased estimate of the weather mode without the need for clutter rejection filtering. An attempt will be made to show that modeling through a second order extended Prony approach is sufficient for the identification of the weather mode. A pattern recognition approach to windspeed estimation from the identified modes is derived and applied to both simulated and actual flight data. Comparisons between windspeed estimates derived from modal analysis and the pulse-pair estimator are included as well as associated hazard factors. Also included is a computationally attractive method for estimating windspeeds directly from the coefficients of a second-order autoregressive model. Extensions and recommendations for further study are included.
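
The abstract's closing remark — estimating windspeeds directly from the coefficients of a second-order autoregressive model — can be sketched generically. The simple covariance-method fit below, and the PRF and wavelength values in the example, are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def ar2_doppler_modes(x, prf, wavelength):
    """Fit a 2nd-order AR model to complex radar returns x and convert the
    pole angles to mode Doppler frequencies and radial velocities."""
    N = len(x)
    # Least-squares linear prediction: x[n] = -(a1*x[n-1] + a2*x[n-2]).
    A = np.column_stack([x[1:N - 1], x[0:N - 2]])
    a, *_ = np.linalg.lstsq(A, -x[2:N], rcond=None)
    poles = np.roots([1.0, a[0], a[1]])
    f = np.angle(poles) * prf / (2.0 * np.pi)   # mode Doppler frequencies, Hz
    v = f * wavelength / 2.0                    # radial velocities, m/s
    return poles, v
```

With two modes present (e.g., weather and clutter), the two pole angles separate the two Doppler frequencies without any clutter filtering, which is the attraction noted in the abstract.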

  13. Constitutive modeling of the human Anterior Cruciate Ligament (ACL) under uniaxial loading using viscoelastic prony series and hyperelastic five parameter Mooney-Rivlin model

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Mondal, Debabrata; Motalab, Mohammad

    2016-07-01

    In this study, the stress-strain behavior of the human Anterior Cruciate Ligament (ACL) is studied under uniaxial loads applied at various strain rates. Tensile testing of human ACL samples requires state-of-the-art test facilities. Furthermore, the difficulty of finding human ligament for testing purposes results in very limited archival data. Nominal stress vs. deformation gradient plots for different strain rates, as found in the literature, are used to model the material behavior either as a hyperelastic or as a viscoelastic material. The well-known five-parameter Mooney-Rivlin constitutive model for hyperelastic material and the Prony series model for viscoelastic material are used, and the objective of the analyses is to determine the model constants and their variation with strain rate for the human ACL material using a nonlinear curve-fitting tool. The relationship between the Mooney-Rivlin model constants and strain rate has been obtained: the values of each coefficient are plotted against strain rate, the resulting curves are fitted using the software package MATLAB, and a power-law relationship between each model constant and strain rate is obtained. The resulting material model for the human ACL can be implemented in any commercial finite element software package for stress analysis.

  14. Viscoelastic Properties of Human Tracheal Tissues.

    PubMed

    Safshekan, Farzaneh; Tafazzoli-Shadpour, Mohammad; Abdouss, Majid; Shadmehr, Mohammad B

    2017-01-01

    The physiological performance of the trachea is highly dependent on its mechanical behavior and, therefore, on the mechanical properties of its components. Mechanical characterization of the trachea is key to the success of new treatments such as tissue engineering, which requires scaffolds that are mechanically compatible with the native human trachea. In this study, after isolating human trachea samples from brain-dead cases and storing them properly, we assessed the viscoelastic properties of tracheal cartilage, smooth muscle, and connective tissue based on stress relaxation tests (at 5% and 10% strains for cartilage and 20%, 30%, and 40% for smooth muscle and connective tissue). After investigating viscoelastic linearity, constitutive models including the Prony series for linear viscoelasticity and the quasi-linear viscoelastic, modified superposition, and Schapery models for nonlinear viscoelasticity were fitted to the experimental data to find the best model for each tissue. We also investigated the effect of age on the viscoelastic behavior of tracheal tissues. Based on the results, all three tissues exhibited a (nonsignificant) decrease in relaxation rate with increasing strain, indicating viscoelastic nonlinearity, which was most evident for cartilage and least for connective tissue. The three-term Prony model was selected for describing the linear viscoelasticity. Among the different models, the modified superposition model was best able to capture the relaxation behavior of the three tracheal components. We observed a general (but not significant) stiffening of tracheal cartilage and connective tissue with aging. No change in the stress relaxation percentage with aging was observed. The results of this study may be useful in the design and fabrication of tracheal tissue engineering scaffolds.
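
A three-term Prony relaxation model of the kind selected above has the convenient property that, once the relaxation times are fixed (e.g., one per decade of time), fitting the moduli is a linear least-squares problem. A generic sketch with invented parameters, not the paper's data:

```python
import numpy as np

def fit_prony(t, G, taus):
    """Least-squares Prony-series fit with fixed relaxation times:
    G(t) = G_inf + sum_i g_i * exp(-t / tau_i)."""
    X = np.column_stack([np.ones_like(t)] + [np.exp(-t / tau) for tau in taus])
    coef, *_ = np.linalg.lstsq(X, G, rcond=None)
    return coef[0], coef[1:]   # equilibrium modulus G_inf, Prony moduli g_i

# Synthetic stress-relaxation data with illustrative parameters:
t = np.logspace(-2, 2, 200)
G = 2.0 + 5.0 * np.exp(-t / 0.1) + 3.0 * np.exp(-t / 1.0) + 1.0 * np.exp(-t / 10.0)
G_inf, g = fit_prony(t, G, taus=[0.1, 1.0, 10.0])
```

Freezing the relaxation times is a common modeling choice because it avoids the ill-conditioned nonlinear search over time constants; only the moduli are identified from the relaxation curve.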

  15. Elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation.

    PubMed

    Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen

    2016-11-01

    Corresponding to pre-puncture and post-puncture insertion, the elastic and viscoelastic mechanical properties of brain tissues along the implanting trajectory of sub-thalamic nucleus stimulation are investigated, respectively. Elastic mechanical properties in pre-puncture are investigated through needle insertion experiments using whole porcine brains. A linear polynomial and a second-order polynomial are fitted to the average insertion force in pre-puncture, and the Young's modulus is calculated from the slope of each fitting. Viscoelastic mechanical properties in post-puncture insertion are investigated through indentation stress relaxation tests at six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral, and the shear relaxation moduli of each region are calculated from the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, needle force increases almost linearly with needle displacement, and both fittings match the average insertion force well. The Young's moduli calculated from the slopes of the two fittings can be trusted to model linear or nonlinear instantaneous elastic responses of brain tissues, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behavior. The six tested regions can be classified into three categories of stiffness. Shear relaxation moduli decay dramatically on short time scales, but equilibrium is never truly achieved. The regional and temporal viscoelastic mechanical properties in post-puncture insertion are valuable for guiding probe insertion into each region along the implanting trajectory.

  16. Time-domain separation of interfering waves in cancellous bone using bandlimited deconvolution: simulation and phantom study.

    PubMed

    Wear, Keith A

    2014-04-01

    In through-transmission interrogation of cancellous bone, two longitudinal pulses ("fast" and "slow" waves) may be generated. Fast and slow wave properties convey information about material and micro-architectural characteristics of bone. However, these properties can be difficult to assess when fast and slow wave pulses overlap in time and frequency domains. In this paper, two methods are applied to decompose signals into fast and slow waves: bandlimited deconvolution and modified least-squares Prony's method with curve-fitting (MLSP + CF). The methods were tested in plastic and Zerdine(®) samples that provided fast and slow wave velocities commensurate with velocities for cancellous bone. Phase velocity estimates were accurate to within 6 m/s (0.4%) (slow wave with both methods and fast wave with MLSP + CF) and 26 m/s (1.2%) (fast wave with bandlimited deconvolution). Midband signal loss estimates were accurate to within 0.2 dB (1.7%) (fast wave with both methods), and 1.0 dB (3.7%) (slow wave with both methods). Similar accuracies were found for simulations based on fast and slow wave parameter values published for cancellous bone. These methods provide sufficient accuracy and precision for many applications in cancellous bone such that experimental error is likely to be a greater limiting factor than estimation error.

  17. Structural Model for Viscoelastic Properties of Pericardial Bioprosthetic Valves.

    PubMed

    Rassoli, Aisa; Fatouraee, Nasser; Guidoin, Robert

    2018-03-30

    The benefits of bioprosthetic aortic valves over mechanical valve replacements are the reduced risk of thromboembolism and the avoidance of long-term anticoagulation treatment. The function and efficiency of bioprostheses are known to depend on the mechanical properties of the leaflet tissue, so it is necessary to select a suitable tissue for the bioprosthesis. The purpose of the present study is to clarify the viscoelastic behavior of bovine, equine, and porcine pericardium. In this study, the pericardia were compared mechanically from the viscoelastic aspect. After fixation of the tissues in glutaraldehyde, uniaxial tests at different extension rates were first performed in the fiber direction. Then, stress relaxation tests in the fiber direction were performed on these pericardial tissues at 20, 30, 40, and 50% strain. After evaluation of viscoelastic linearity, the Prony series, quasilinear viscoelastic (QLV), and modified superposition models were applied to the stress relaxation data, and the parameters of these constitutive models were extracted for each pericardial tissue. All three tissues exhibited a decrease in relaxation rate with increasing strain, indicating the nonlinear viscoelastic behavior of these tissues. The three-term Prony model was selected for describing the linear viscoelasticity. Among the different models, the QLV model was best able to capture the relaxation behavior of the pericardial tissues. Porcine pericardium was stiffer than the other two pericardial tissues, and its relaxation percentage was lower. It can be concluded that porcine pericardium behaves more like an elastic and less like a viscous tissue in comparison to bovine and equine pericardium. © 2018 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  18. Analysis and improved design considerations for airborne pulse Doppler radar signal processing in the detection of hazardous windshear

    NASA Technical Reports Server (NTRS)

    Lee, Jonggil

    1990-01-01

    High-resolution windspeed profile measurements are needed to provide reliable detection of hazardous low-altitude windshear with an airborne pulse Doppler radar. The system phase noise in a Doppler weather radar may degrade the quality of spectrum moment estimation and the clutter cancellation capability, both of which are important in windshear detection. Also, the bias due to skewness of the weather return Doppler spectrum may cause large errors in pulse-pair spectral parameter estimates. These effects are analyzed with a view to improving airborne Doppler weather radar signal processing design. A method is presented for the direct measurement of windspeed gradient using low pulse repetition frequency (PRF) radar; this spatial gradient is essential in obtaining the windshear hazard index. As an alternative, the modified Prony method is suggested as a spectrum mode estimator for both the clutter and weather signal. Estimation of Doppler spectrum modes may provide the desired windshear hazard information without the need for preliminary processing such as clutter filtering. The results obtained by processing a NASA simulation model output support consideration of mode identification as one component of a windshear detection algorithm.

  19. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    PubMed

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
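
A schematic single-voxel version of such a model fit can be sketched with a Levenberg-Marquardt solver. The two-exponential model below (no relaxation term), the parameter values, and the -440 Hz initial guess for the water-fat frequency offset are illustrative assumptions, not the paper's exact signal model:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_water_fat(te, s, f_init=(0.0, -440.0)):
    """Levenberg-Marquardt fit of s(TE) = W*e^{2j*pi*fw*TE} + F*e^{2j*pi*ff*TE}
    to complex multi-echo data; returns the water-fat shift ff - fw in Hz."""
    def resid(p):
        W = p[0] + 1j * p[1]
        F = p[2] + 1j * p[3]
        model = W * np.exp(2j * np.pi * p[4] * te) + F * np.exp(2j * np.pi * p[5] * te)
        r = model - s
        return np.concatenate([r.real, r.imag])   # stack real/imag residuals
    p0 = [np.abs(s[0]), 0.0, 0.3 * np.abs(s[0]), 0.0, f_init[0], f_init[1]]
    sol = least_squares(resid, p0, method="lm")
    return sol.x[5] - sol.x[4]   # apparent chemical shift carrying temperature info
```

In a PRF-thermometry setting it is this fitted water-fat frequency difference, referenced to the temperature-insensitive fat peak, that tracks temperature.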

  20. Les résonances d'un trou noir de Schwarzschild.

    NASA Astrophysics Data System (ADS)

    Bachelot, A.; Motet-Bachelot, A.

    1993-09-01

    This paper is devoted to theoretical and computational investigations of the scattering frequencies of scalar, electromagnetic, and gravitational waves around a spherical black hole. The authors adopt a time-dependent approach: construction of wave operators for the hyperbolic Regge-Wheeler equation; asymptotic completeness; outgoing and incoming spectral representations; meromorphic continuation of the Heisenberg matrix; approximation by damping and cut-off of the potentials; and interpretation of the semigroup Z(t) in the framework of the membrane paradigm. They develop a new procedure for the computation of the resonances by spectral analysis of the transient scattered wave, based on Prony's algorithm.

  1. The Impact of Uncertain Physical Parameters on HVAC Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Elizondo, Marcelo A.; Lu, Shuai

    HVAC units are currently one of the major resources providing demand response (DR) in residential buildings. Models of HVAC with DR function can improve understanding of its impact on power system operations and facilitate the deployment of DR technologies. This paper investigates the importance of various physical parameters and their distributions to the HVAC response to DR signals, which is a key step in the construction of HVAC models for a population of units with insufficient data. These parameters include the size of floors, insulation efficiency, the amount of solid mass in the house, and the efficiency of the HVAC units. These parameters are usually assumed to follow Gaussian or uniform distributions. We study the effect of uncertainty in the chosen parameter distributions on the aggregate HVAC response to DR signals, during the transient phase and in steady state. We use a quasi-Monte Carlo sampling method with linear regression and Prony analysis to evaluate the sensitivity of DR output to the uncertainty in the distribution parameters. A significance ranking of the uncertainty sources is given for future guidance in the modeling of HVAC demand response.

  2. Continuous relaxation and retardation spectrum method for viscoelastic characterization of asphalt concrete

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.

    2012-08-01

    This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is obtained directly from the master curves, which are readily available from standard characterization tests of the linear viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens were tested for linear viscoelastic characterization, and the test data were used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform, and are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra match the approximate solutions very well. It is observed that the shape of the spectra depends on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process: Prony-series coefficients can easily be obtained from the continuous spectra and used in numerical analysis such as finite element analysis.
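
The final step the abstract mentions — obtaining Prony-series coefficients from a continuous spectrum — can be sketched as a simple collocation on a log-spaced grid of relaxation times; the log-normal spectrum and all parameter values below are purely illustrative:

```python
import numpy as np

def prony_from_spectrum(H, tau_min, tau_max, n_modes):
    """Collocate a continuous relaxation spectrum H(tau) into discrete
    Prony coefficients g_i ~ H(tau_i) * d(ln tau) on a log-spaced grid."""
    ln_tau = np.linspace(np.log(tau_min), np.log(tau_max), n_modes)
    taus = np.exp(ln_tau)
    g = H(taus) * (ln_tau[1] - ln_tau[0])
    return taus, g

def relaxation_modulus(t, Ge, taus, g):
    """Prony-series relaxation modulus G(t) = Ge + sum_i g_i exp(-t/tau_i)."""
    return Ge + np.sum(g[:, None] * np.exp(-t[None, :] / taus[:, None]), axis=0)
```

Refining the grid should leave the reconstructed modulus essentially unchanged, which is one way to check that the discrete Prony series has converged to the continuous spectrum it samples.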

  3. Numerical analysis of multicomponent responses of surface-hole transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Xin; Hu, Xiang-Yun; Pan, He-Ping; Zhou, Feng

    2017-03-01

    We calculate the multicomponent responses of the surface-hole transient electromagnetic method. Conventional methods and models, being based on regular local targets, are unsuitable for geoelectric models with conductive surrounding rocks. We therefore propose a calculation and analysis scheme based on numerical simulations of the subsurface transient electromagnetic fields. In the modeling of the electromagnetic fields, forward simulations are performed using the finite-difference time-domain method and the discrete image method, which combines the Gaver-Stehfest inverse Laplace transform with the Prony method to obtain the initial electromagnetic fields. Precision in the iterative computations is ensured by using transmission boundary conditions. For the response analysis, we customize geoelectric models consisting of near-borehole targets and conductive wall rocks and perform forward simulations. The observed electric fields are converted into induced electromotive force responses using multicomponent observation devices. By comparing the transient electric fields and multicomponent responses under different conditions, we suggest that the multicomponent induced electromotive force responses are related to the horizontal and vertical gradient variations of the transient electric field at different times. The characteristics of the response are determined by the variation of the subsurface transient electromagnetic fields (diffusion, attenuation, and distortion) under different conditions, as well as by the electromagnetic fields at the observation positions. The calculation and analysis scheme considers both the surrounding rocks and the anomalous field of the local targets, and can therefore account for geological data better than conventional transient field response analysis of local targets.
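
The Gaver-Stehfest inverse Laplace transform used in the discrete image method is compact enough to sketch; the weight formula below is the standard Stehfest recipe, not code taken from the paper:

```python
from math import factorial, log

def stehfest_coeffs(n):
    """Stehfest weights V_k for an even number of terms n."""
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * factorial(2 * j) /
                  (factorial(n // 2 - j) * factorial(j) * factorial(j - 1) *
                   factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + n // 2) * s)
    return V

def gaver_stehfest(F, t, n=12):
    """Approximate f(t) from its Laplace transform F(s) at a single t > 0:
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    V = stehfest_coeffs(n)
    a = log(2.0) / t
    return a * sum(V[k - 1] * F(k * a) for k in range(1, n + 1))
```

The method only needs real samples of F(s) and works well for smooth, non-oscillatory time functions such as diffusive transient fields; the large alternating weights mean n should stay modest (10-16) in double precision.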

  4. Tensile properties of latex paint films with TiO2 pigment

    NASA Astrophysics Data System (ADS)

    Hagan, Eric W. S.; Charalambides, Maria N.; Young, Christina T.; Learner, Thomas J. S.; Hackney, Stephen

    2009-05-01

    The tensile properties of latex paint films containing TiO2 pigment were studied with respect to temperature, strain-rate and moisture content. The purpose of performing these experiments was to assist museums in defining safe conditions for modern paintings held in collections. The glass transition temperature of latex paint binders is in close proximity to ambient temperature, resulting in high strain-rate dependence in typical exposure environments. Time dependence of modulus and failure strain is discussed in the context of time-temperature superposition, which was used to extend the experimental time scale. Nonlinear viscoelastic material models are also presented, which incorporate a Prony series with the Ogden or Neo-Hookean hyperelastic function for different TiO2 concentrations.

  5. A new approximation of Fermi-Dirac integrals of order 1/2 for degenerate semiconductor devices

    NASA Astrophysics Data System (ADS)

    AlQurashi, Ahmed; Selvakumar, C. R.

    2018-06-01

    There has been tremendous growth in the field of integrated circuits (ICs) over the past fifty years. Scaling laws have mandated reductions in both lateral and vertical dimensions together with a steady increase in doping densities, so most modern semiconductor devices invariably contain heavily doped regions where Fermi-Dirac integrals are required. Many attempts have been devoted to developing analytical approximations for Fermi-Dirac integrals, since direct numerical computation is difficult to use in semiconductor device work, although several highly accurate tabulated functions are available. Most of these analytical expressions are not sufficiently suitable for semiconductor device applications due to poor accuracy, the requirement of complicated calculations, or difficulties in differentiating and integrating them. A new approximation for the Fermi-Dirac integral of order 1/2, developed using Prony's method, is discussed in this paper. The approximation is accurate enough (mean absolute error (MAE) = 0.38%) and simple enough to be used in semiconductor device equations. The new approximation is applied to a more generalized Einstein relation, an important relation in semiconductor devices.
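
The record does not reproduce the paper's Prony-based closed form, but the target quantity is easy to state: the Fermi-Dirac integral of order 1/2, evaluated here by direct quadrature, is the kind of accurate baseline against which such approximations (and the quoted 0.38% MAE) are judged. A sketch, with the normalization convention stated explicitly since it varies between authors:

```python
from math import sqrt, exp, pi
from scipy.integrate import quad

def fd_half(eta):
    """Fermi-Dirac integral of order 1/2 (without the 2/sqrt(pi) prefactor
    some authors include): F(eta) = int_0^inf sqrt(x) / (1 + exp(x - eta)) dx."""
    def integrand(x):
        t = x - eta
        if t > 700.0:        # Fermi tail is negligible there; avoid exp() overflow
            return 0.0
        return sqrt(x) / (1.0 + exp(t))
    val, _ = quad(integrand, 0.0, float("inf"), limit=200)
    return val
```

Two standard limits serve as sanity checks: F(η) → (√π/2)·e^η in the non-degenerate limit η ≪ 0, and F(η) → (2/3)·η^(3/2) in the degenerate limit η ≫ 0.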

  6. Mineralizing Filamentous Bacteria from the Prony Bay Hydrothermal Field Give New Insights into the Functioning of Serpentinization-Based Subseafloor Ecosystems

    PubMed Central

    Pisapia, Céline; Gérard, Emmanuelle; Gérard, Martine; Lecourt, Léna; Lang, Susan Q.; Pelletier, Bernard; Payri, Claude E.; Monnin, Christophe; Guentas, Linda; Postec, Anne; Quéméneur, Marianne; Erauso, Gaël; Ménez, Bénédicte

    2017-01-01

    Despite their potential importance as analogs of primitive microbial metabolisms, knowledge of the structure and functioning of the deep ecosystems associated with serpentinizing environments is hampered by the lack of accessibility to relevant systems. These hyperalkaline environments are depleted in dissolved inorganic carbon (DIC), making the carbon sources and assimilation pathways in the associated ecosystems highly enigmatic. The Prony Bay Hydrothermal Field (PHF) is an active serpentinization site where, similar to Lost City (Mid-Atlantic Ridge), high-pH fluids rich in H2 and CH4 are discharged from carbonate chimneys at the seafloor, but in a shallower lagoonal environment. This study aimed to characterize the subsurface microbial ecology of this environment by focusing on the earliest stages of chimney construction, dominated by the discharge of hydrothermal fluids of subseafloor origin. By jointly examining the mineralogy and the microbial diversity of the conduits of juvenile edifices at the micrometric scale, we find a central role of uncultivated bacteria belonging to the Firmicutes in the ecology of the PHF. These bacteria, along with members of the phyla Acetothermia and Omnitrophica, are identified as the first chimney inhabitants, preceding archaeal Methanosarcinales. They are involved in the construction and early consolidation of the carbonate structures via organomineralization processes. Their predominance in the most juvenile and nascent hydrothermal chimneys, and their affiliation with environmental subsurface microorganisms, indicate that they are likely discharged with hydrothermal fluids from the subseafloor. They may thus be representative of endolithic serpentinization-based ecosystems in an environment where DIC is limited. In contrast, heterotrophic and fermentative microorganisms may consume organic compounds from the abiotic by-products of serpentinization processes and/or from life in the deeper subsurface. We thus propose that the Firmicutes identified at PHF may have a versatile metabolism with the capability to use diverse organic compounds of biological or abiotic origin. From this perspective, this study sheds new light on the structure of deep microbial communities living at the energetic edge in serpentinites and may provide an alternative model of the earliest metabolisms. PMID:28197130

  8. A viscoelastic fluid-structure interaction model for carotid arteries under pulsatile flow.

    PubMed

    Wang, Zhongjie; Wood, Nigel B; Xu, Xiao Yun

    2015-05-01

    In this study, a fluid-structure interaction (FSI) model incorporating viscoelastic wall behaviour is developed and applied to an idealized model of the carotid artery under pulsatile flow. The shear and bulk moduli of the arterial wall are described by Prony series, whose parameters can be derived from in vivo measurements. The aim is to develop a fully coupled FSI model that can be applied to realistic arterial geometries with normal or pathological viscoelastic wall behaviour. Comparisons between the numerical and analytical solutions for wall displacements demonstrate that the coupled model is capable of predicting the viscoelastic behaviour of carotid arteries. Comparisons are also made between the solid-only and FSI viscoelastic models, and the results suggest that the difference in radial displacement between the two models is negligible. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Acoustic waveform logging--Advances in theory and application

    USGS Publications Warehouse

    Paillet, F.L.; Cheng, C.H.; Pennington , W.D.

    1992-01-01

    Full-waveform acoustic logging has made significant advances in both theory and application in recent years, and these advances have greatly increased the capability of log analysts to measure the physical properties of formations. Advances in theory provide the analytical tools required to understand the properties of measured seismic waves, and to relate those properties to such quantities as shear and compressional velocity and attenuation, and primary and fracture porosity and permeability of potential reservoir rocks. The theory demonstrates that all parts of recorded waveforms are related to various modes of propagation, even in the case of dipole and quadrupole source logging. However, the theory also indicates that these mode properties can be used to design velocity and attenuation picking schemes, and shows how source frequency spectra can be selected to optimize results in specific applications. Synthetic microseismogram computations are an effective tool in waveform interpretation theory; they demonstrate how shear arrival picks and mode attenuation can be used to compute shear velocity and intrinsic attenuation, and formation permeability for monopole, dipole and quadrupole sources. Array processing of multi-receiver data offers the opportunity to apply even more sophisticated analysis techniques. Synthetic microseismogram data is used to illustrate the application of the maximum-likelihood method, semblance cross-correlation, and Prony's method analysis techniques to determine seismic velocities and attenuations. The interpretation of acoustic waveform logs is illustrated by reviews of various practical applications, including synthetic seismogram generation, lithology determination, estimation of geomechanical properties in situ, permeability estimation, and design of hydraulic fracture operations.

  10. Calm Multi-Baryon Operators

    NASA Astrophysics Data System (ADS)

    Berkowitz, Evan; Nicholson, Amy; Chang, Chia Cheng; Rinaldi, Enrico; Clark, M. A.; Joó, Bálint; Kurth, Thorsten; Vranas, Pavlos; Walker-Loud, André

    2018-03-01

    There are many outstanding problems in nuclear physics which require input and guidance from lattice QCD calculations of few-baryon systems. However, these calculations suffer from an exponentially bad signal-to-noise problem which has prevented a controlled extrapolation to the physical point. The variational method has been applied very successfully to two-meson systems, allowing for the extraction of the two-meson states very early in Euclidean time through the use of improved single-hadron operators. The sheer numerical cost of using the same techniques in two-baryon systems has so far been prohibitive. We present an alternate strategy which offers some of the same advantages as the variational method while being significantly less numerically expensive. We first use the Matrix Prony method to form an optimal linear combination of single-baryon interpolating fields generated from the same source and different sink interpolating fields. Very early in Euclidean time this optimal linear combination is numerically free of excited-state contamination, so we coin it a calm baryon. This calm baryon operator is then used in the construction of the two-baryon correlation functions. To test this method, we perform calculations on the WM/JLab iso-clover gauge configurations at the SU(3) flavor-symmetric point with mπ ≈ 800 MeV, the same configurations we have previously used for the calculation of two-nucleon correlation functions. We observe that the calm baryon significantly removes the excited-state contamination from the two-nucleon correlation function at times as early as those at which the single nucleon is improved, provided non-local (displaced nucleon) sources are used. For the local two-nucleon correlation function (where both nucleons are created from the same space-time location) there is still improvement, but significant excited-state contamination remains in the region where the single calm baryon displays none.
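
One common reading of the Matrix Prony step — forming the optimal operator combination from correlator matrices at two time slices via a generalized eigenvalue problem — can be sketched on synthetic data. This is an illustrative toy with invented overlap factors and energies, not the collaboration's production code:

```python
import numpy as np
from scipy.linalg import eig

def matrix_prony(C_t0, C_t1, dt):
    """Solve the generalized eigenvalue problem C(t0+dt) v = lam * C(t0) v.
    Eigenvalues give energies E = -ln(lam)/dt; eigenvectors give the optimal
    linear combinations of the interpolating fields."""
    lam, vecs = eig(C_t1, C_t0)
    order = np.argsort(-lam.real)      # slowest-decaying (ground) state first
    return -np.log(lam.real[order]) / dt, vecs[:, order]

# Synthetic two-state correlator matrix C(t)_ij = sum_n A_in A_jn exp(-E_n t)
# (overlap factors A and energies E_true are invented for the demonstration):
A = np.array([[1.0, 0.3],
              [0.4, 1.0]])
E_true = np.array([0.5, 1.2])
C = lambda t: (A * np.exp(-E_true * t)) @ A.T
E_fit, v = matrix_prony(C(2.0), C(3.0), dt=1.0)
```

With as many states as operators and no noise the reconstruction is exact; in practice the eigenvector belonging to the largest eigenvalue defines the "calm" combination whose correlator is free of the faster-decaying excited-state admixtures.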

  11. Mixed-mode fatigue fracture of adhesive joints in harsh environments and nonlinear viscoelastic modeling of the adhesive

    NASA Astrophysics Data System (ADS)

    Arzoumanidis, Alexis Gerasimos

    A four-point-bend, mixed-mode, reinforced, cracked lap shear specimen experimentally simulated adhesive joints between load-bearing composite parts in automotive components. The experiments accounted for fatigue, solvent and temperature effects on a swirled glass fiber composite adherend/urethane adhesive system. Crack-length measurements based on compliance facilitated determination of da/dN curves. A digital image processing technique was also utilized to monitor crack growth from in situ images of the side of the specimen. Linear elastic fracture mechanics and finite elements were used to determine energy release rate and mode-mix as a function of crack length for this specimen. Experiments were conducted in air and in a salt water bath at 10, 26 and 90°C. Joints tested in the solvent were fully saturated. In air, both increasing and decreasing temperature relative to 26°C accelerated crack growth rates. In salt water, crack growth rates increased with increasing temperature. The threshold energy release rate is shown to be the most appropriate design criterion for joints of this system. In addition, the path of the crack is discussed and fracture surfaces are examined on three length scales. Three linear viscoelastic properties were measured for the neat urethane adhesive. Dynamic tensile compliance (D*) was found using a novel extensometer, and the results were considerably more accurate and precise than standard DMTA testing. Dynamic shear compliance (J*) was determined using an Arcan specimen. Dynamic Poisson's ratio (nu*) was extracted from strain gage data analyzed to include gage reinforcement. Experiments spanned three frequency decades, and the isothermal data were shifted by time-temperature superposition to create master curves spanning thirty decades. Master curves were fit to time-domain Prony series.
Shear compliance inferred from D* and nu* compared well with measured J*, forming a basis for finding the complete time dependent material property matrix for this isotropic material. A constitutive model is introduced which replaces time with internal energy in time-temperature superposition. Internal energy for mechanical loading was calculated from stress history and time domain Prony series representation of compliance. The model also included pressure and volume effects. Ramp loading experiments conducted at strain rates spanning three decades were effectively predicted, but unloading predictions were poor.
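
    Once the retardation times of a time-domain Prony series are fixed on a logarithmic grid, fitting the series to a compliance master curve reduces to a linear least-squares problem. The following sketch illustrates this with synthetic creep-compliance data; all times, coefficients and units are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Synthetic creep-compliance "master curve" D(t) built from two known
# retardation processes (assumed illustrative values, arbitrary units).
t = np.logspace(-2, 4, 200)  # reduced time
D_data = 1.0 + 0.6 * (1 - np.exp(-t / 1.0)) + 0.9 * (1 - np.exp(-t / 100.0))

# Prony-series fit: D(t) = D0 + sum_i D_i * (1 - exp(-t / tau_i)),
# with the retardation times tau_i fixed on a logarithmic grid so that
# the unknowns (D0, D_i) enter linearly.
taus = np.logspace(-1, 3, 5)
M = np.column_stack([np.ones_like(t)] + [1 - np.exp(-t / tau) for tau in taus])
coef, *_ = np.linalg.lstsq(M, D_data, rcond=None)
D_fit = M @ coef
```

    Since the grid here happens to contain the two true retardation times, the fit is exact to numerical precision; with real data one would instead inspect the residual and the sign of the coefficients.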

  12. Inspiral, merger, and ringdown of unequal mass black hole binaries: A multipolar analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berti, Emanuele; Cardoso, Vitor; Gonzalez, Jose A.

    We study the inspiral, merger, and ringdown of unequal-mass black hole binaries by analyzing a catalogue of numerical simulations for seven different values of the mass ratio (from q = M2/M1 = 1 to q = 4). We compare numerical and post-Newtonian results by projecting the waveforms onto spin-weighted spherical harmonics, characterized by angular indices (l,m). We find that the post-Newtonian equations predict remarkably well the relation between the wave amplitude and the orbital frequency for each (l,m), and that the convergence of the post-Newtonian series to the numerical results is nonmonotonic. To leading order, the total energy emitted in the merger phase scales like η² and the spin of the final black hole scales like η, where η = q/(1+q)² is the symmetric mass ratio. We study the multipolar distribution of the radiation, finding that odd-l multipoles are suppressed in the equal-mass limit. Higher multipoles carry a larger fraction of the total energy as q increases. We introduce and compare three different definitions for the ringdown starting time. Applying linear-estimation methods (the so-called Prony methods) to the ringdown phase, we find resolution-dependent time variations in the fitted parameters of the final black hole. By cross-correlating information from different multipoles, we show that ringdown fits can be used to obtain precise estimates of the mass and spin of the final black hole, which are in remarkable agreement with energy and angular momentum balance calculations.
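
    Prony methods of the kind applied to the ringdown phase fit a signal as a sum of complex damped exponentials. A minimal NumPy sketch of the classical polynomial Prony method follows; the sampling interval, mode frequency and decay rate are assumed illustrative values, not parameters from the paper.

```python
import numpy as np

def prony(x, p, dt):
    """Fit p complex exponentials x[n] ~ sum_k h_k * exp(s_k * n * dt)
    with the classical polynomial Prony method (illustrative sketch)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # 1) Linear prediction: x[n] = -(a1*x[n-1] + ... + ap*x[n-p]) for n >= p
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
    # 2) The z-plane poles are the roots of the characteristic polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    s = np.log(z.astype(complex)) / dt  # complex frequencies: decay + i*2*pi*f
    # 3) Complex amplitudes from a Vandermonde least-squares fit
    V = np.vander(z, N, increasing=True).T  # V[n, k] = z_k**n
    h = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
    return s, h

# Synthetic "ringdown": one damped mode at 3 Hz with decay rate 0.5 1/s
dt = 0.01
n = np.arange(60)
x = np.exp(-0.5 * n * dt) * np.cos(2 * np.pi * 3.0 * n * dt)
s, h = prony(x, 2, dt)  # recovers s ~ -0.5 +/- 2*pi*3j
```

    On noiseless data the fit is exact; on real numerical-relativity waveforms the recovered parameters become resolution dependent, which is the effect the abstract discusses.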

  13. On the maximum-entropy/autoregressive modeling of time series

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex-conjugate pair of poles of the AR process in the z-plane (or z-domain) on the one hand to the complex frequency of one complex harmonic function in the time domain on the other. Thus the AR model of a time series models the series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is merely a convenient, though ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, while the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
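
    The correspondence the abstract describes can be written down directly: a z-plane pole at z = exp((α + 2πi f)·Δt) encodes one complex harmonic with decay rate α and frequency f, and both are recovered from the pole's radius and angle. A short sketch with assumed illustrative values:

```python
import numpy as np

dt = 0.01            # sampling interval (assumed)
alpha, f = -0.8, 5.0 # decaying 5 Hz component (assumed)

# Prony's relation: the pole of this component in the z-plane
z = np.exp((alpha + 2j * np.pi * f) * dt)  # lies inside the unit circle

# Recover the continuous-time parameters from the pole position
alpha_rec = np.log(np.abs(z)) / dt       # decay rate from the pole radius
f_rec = np.angle(z) / (2 * np.pi * dt)   # frequency from the pole angle
```

    A pure sinusoid corresponds to α = 0 (a pole on the unit circle) and a real exponential to f = 0 (a pole on the positive real axis), the two special cases mentioned above.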

  14. FEM simulation of single beard hair cutting with foil-blade-shaving system.

    PubMed

    Fang, Gang; Köppl, Alois

    2015-06-01

    The performance of dry shavers depends on the interaction of the shaving components, hair and skin. Finite element models on the ABAQUS/Explicit platform are established to simulate the process of beard-hair cutting. The skin is modelled as a three-layer structure with a single cylindrical hair inserted into it. The material properties of skin are treated as Neo-Hookean hyperelastic (epidermis) and Prony viscoelastic (dermis and hypodermis) with finite deformations. The hair is modelled as an elastic-plastic material with shear damage. The cutting system is composed of the shaver's blade and foil. The simulation results of the cutting processes are analyzed, including skin compression, hair bending, hair cutting and hair severance. Calculations of cutting loads, skin stress, and hair damage show the impact of clearance, skin-bulge height, and blade dimension and shape on the cutting results. The details show the build-up of the finite element models for hair cutting and highlight the challenges arising during model construction and numerical simulation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Development of pneumatic actuator with low-wave reflection characteristics

    NASA Astrophysics Data System (ADS)

    Chang, H.; Tsung, T. T.; Jwo, C. S.; Chiang, J. C.

    2010-08-01

    This study aims at the development of a pneumatic actuator with low electromagnetic reflectivity for use in anechoic chambers. Pneumatic actuators on the market are not appropriate for such chambers, because their metallic parts have a high dielectric constant and generate reflected electromagnetic waves that influence test parameters in the chamber. The newly developed pneumatic actuator is made from low-dielectric-constant plastics that reflect little electromagnetic energy. A turbine-type air motor is used to build the actuator, and a Prony brake tester is used to run the brake-power test for its performance evaluation. Test results indicate that at the minimum starting flow of 17 l/min the actuator generates a brake power of 48 mW, and at the maximum flow of 26 l/min it generates 108 mW. It therefore works with a torque between 0.24 N·m and 0.55 N·m, which is sufficient to drive the target button.
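
    A Prony brake measures shaft torque directly, and brake power then follows from P = τ·ω. A one-line helper shows the conversion; the example speed is assumed for illustration and is not reported in the abstract.

```python
import math

def brake_power_w(torque_nm, rpm):
    """Brake power in watts from Prony-brake torque (N*m) and shaft speed (rpm)."""
    return torque_nm * rpm * 2.0 * math.pi / 60.0

# Hypothetical example: 0.55 N*m measured at some low shaft speed
p = brake_power_w(0.55, 10.0)
```
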

  16. Identification of a thermo-elasto-viscoplastic behavior law for the simulation of thermoforming of high impact polystyrene

    NASA Astrophysics Data System (ADS)

    Atmani, O.; Abbès, B.; Abbès, F.; Li, Y. M.; Batkam, S.

    2018-05-01

    Thermoforming of high impact polystyrene (HIPS) sheets requires technical knowledge of material behavior, mold type, mold material, and process variables. Accurate thermoforming simulations are needed in the optimization process, and determining the behavior of the material under thermoforming conditions is one of the keys to an accurate simulation. The aim of this work is to identify the thermomechanical behavior of HIPS under thermoforming conditions. HIPS behavior is highly dependent on temperature and strain rate. In order to reproduce the behavior of such a material, a thermo-elasto-viscoplastic constitutive law was implemented in the finite element code ABAQUS. The proposed model parameters are considered thermo-dependent, and the strain-rate dependence is introduced using Prony series. Tensile tests were carried out at different temperatures and strain rates. The material parameters were then identified using a NSGA-II algorithm. To validate the rheological model, experimental blowing tests were carried out on a thermoforming pilot machine, and the thickness distribution and bubble shape were investigated to compare the numerical results with the experimental ones.

  17. NAFASS: Fluctuation spectroscopy and the Prony spectrum for description of multi-frequency signals in complex systems

    NASA Astrophysics Data System (ADS)

    Nigmatullin, R. R.; Gubaidullin, I. A.

    2018-03-01

    In this paper, we substantially modernize the NAFASS (Non-orthogonal Amplitude Frequency Analysis of the Smoothed Signals) approach suggested earlier. We solved two important problems: (a) a new and effective algorithm was proposed, and (b) we proved that a segment of the Prony spectrum can be used as the fitting function for description of the desired frequency spectrum. These two basic elements open an alternative route to fluctuation spectroscopy, in which a segment of the Fourier series can fit any random signal with trend, but the dispersion relation of the Fourier series, ω0·k (ω0 ≡ 2π/T), is replaced by a specific dispersion law Ωk (k = 0, 1, 2, …, K−1) calculated with the help of the original algorithm described below. This implies that any finite signal will have a compact amplitude-frequency response (AFR), in which the number of modes is much smaller than the number of data points (K << N). The NAFASS approach is applicable for quantitative description of a wide set of random signals/fluctuations and allows one to compare them with each other on one general platform. As a first example, we considered economic data: 30-year world prices for meat (beef, chicken, lamb and pork), the basic components of everyday food consumption, taken from the official site http://www.indexmundi.com/commodities/. We fitted these random functions with high accuracy and calculated the desired "amplitude-frequency" response for these random price fluctuations. The calculated distribution of the amplitudes (Ack, Ask) and the frequency spectrum Ωk (k = 0, 1, …, K−1) allow one to compress the initial data (K (number of modes) << N (number of data points), N/K ≅ 20-40) and obtain additional information for comparing the signals with each other. As a second example, we considered the description of transcendental/irrational numbers in the frame of the proposed NAFASS approach as well.
This possibility was demonstrated by the quantitative description of the transcendental number π = 3.1415926535897932…, taken to 6·10⁴ digits. The results obtained for this second type of data can be useful for cryptography purposes. We believe that the NAFASS approach can be widely used in the creation of new metrological standards based on comparison of test fluctuations with fluctuations registered from reference equipment. Apart from this obvious application, the NAFASS approach is applicable to the description of various nonlinear random signals containing hidden beatings in radioelectronics and acoustics.

  18. Parameter optimization for the visco-hyperelastic constitutive model of tendon using FEM.

    PubMed

    Tang, C Y; Ng, G Y F; Wang, Z W; Tsui, C P; Zhang, G

    2011-01-01

    Numerous constitutive models describing the mechanical properties of tendons have been proposed during the past few decades. However, few are widely used, owing to the lack of implementation in general finite element (FE) software, and very few systematic studies have been done on selecting the most appropriate parameters for these constitutive laws. In this work, a visco-hyperelastic constitutive model of the tendon, implemented through a three-parameter Mooney-Rivlin form and a sixty-four-parameter Prony series, was first analyzed using ANSYS FE software. An integrated optimization scheme was then developed by coupling the optimization toolboxes of ANSYS and MATLAB to estimate the unknown constitutive parameters of the tendon. Finally, a group of Sprague-Dawley rat tendons was used for experimental and numerical-simulation investigation. The simulated results showed good agreement with the experimental data. An important finding was that a large number of Maxwell elements is not necessary to assure the accuracy of the model, a point often neglected in the open literature. These results show that the constitutive-parameter optimization scheme is reliable and highly efficient. Furthermore, the approach can be extended to study other tendons or ligaments, as well as any visco-hyperelastic solid material.

  19. Finite element modeling of hyper-viscoelasticity of peripheral nerve ultrastructures.

    PubMed

    Chang, Cheng-Tao; Chen, Yu-Hsing; Lin, Chou-Ching K; Ju, Ming-Shaung

    2015-07-16

    The mechanical characteristics of the ultrastructures of rat sciatic nerves were investigated through animal experiments and finite element analyses. A custom-designed dynamic testing apparatus was used to conduct in vitro transverse compression experiments on the nerves. Optical coherence tomography (OCT) was utilized to record cross-sectional images of the nerve during dynamic testing. Two-dimensional finite element models of the nerves were built based on their OCT images. A hyper-viscoelastic model was employed to describe the elastic and stress-relaxation response of each ultrastructure of the nerve, namely the endoneurium, the perineurium and the epineurium. The first-order Ogden model was employed to describe the elasticity of each ultrastructure, and a generalized Maxwell model the relaxation. Inverse finite element analysis was used to estimate the material parameters of the ultrastructures. The results show that the instantaneous shear moduli of the ultrastructures, in decreasing order, are those of the perineurium, endoneurium, and epineurium. The FE model combining the first-order Ogden model and a second-order Prony series is adequate for describing the compress-and-hold response of the nerve ultrastructures. The integration of OCT and nonlinear finite element modeling may be applicable to studying the viscoelasticity of peripheral nerve down to the ultrastructural level. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Three-dimensional finite element models of the human pubic symphysis with viscohyperelastic soft tissues.

    PubMed

    Li, Zuoping; Alonso, Jorge E; Kim, Jong-Eun; Davidson, James S; Etheridge, Brandon S; Eberhardt, Alan W

    2006-09-01

    Three-dimensional finite element (FE) models of human pubic symphyses were constructed from computed tomography image data of one male and one female cadaver pelvis. The pubic bones, interpubic fibrocartilaginous disc and four pubic ligaments were segmented semi-automatically and meshed with hexahedral elements using automatic mesh generation schemes. A two-term viscoelastic Prony series, determined by curve fitting results of compressive creep experiments, was used to model the rate-dependent effects of the interpubic disc and the pubic ligaments. Three-parameter Mooney-Rivlin material coefficients were calculated for the discs using a heuristic FE approach based on average experimental joint compression data. Similarly, a transversely isotropic hyperelastic material model was applied to the ligaments to capture average tensile responses. Linear elastic isotropic properties were assigned to bone. The applicability of the resulting models was tested in bending simulations in four directions and in tensile tests of varying load rates. The model-predicted results correlated reasonably with the joint bending stiffnesses and rate-dependent tensile responses measured in experiments, supporting the validity of the estimated material coefficients and overall modeling approach. This study represents an important and necessary step in the eventual development of biofidelic pelvis models to investigate symphysis response under high-energy impact conditions, such as motor vehicle collisions.

  1. Structure-based modeling of head-related transfer functions towards interactive customization of binaural sound systems

    NASA Astrophysics Data System (ADS)

    Gupta, Navarun

    2003-10-01

    One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by the acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head and pinnae, or outer ears. As such, HRTFs are somewhat different for each listener. For a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. One can measure individual HRTFs using specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system. HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive and resonant phenomena, which determine the HRTF. Using signal processing tools, such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features, obtained from 3D images of the 15 experimental subjects modeled in the project. These equations were used to yield "model HRTFs" that can create elevation effects. Listening tests conducted on 10 subjects show that these model HRTFs are 5% more effective than generic HRTFs when it comes to localizing sounds in the frontal plane.
The number of reversals (perception of sound source above the horizontal plane when actually it is below the plane and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple, yet versatile because it relies on easy to measure parameters to create an individualized HRTF. This low-order parameterized model also reduces the computational and storage demands, while maintaining a sufficient number of perceptually relevant spectral cues.

  2. Measurements of shock-induced guided and surface acoustic waves along boreholes in poroelastic materials

    NASA Astrophysics Data System (ADS)

    Chao, Gabriel; Smeulders, D. M. J.; van Dongen, M. E. H.

    2006-05-01

    Acoustic experiments on the propagation of guided waves along water-filled boreholes in water-saturated porous materials are reported. The experiments were conducted using a shock tube technique. An acoustic funnel structure was placed inside the tube just above the sample in order to enhance the excitation of the surface modes. A fast Fourier transform-Prony-spectral ratio method is implemented to transform the data from the time-space domain to the frequency-wave-number domain. Frequency-dependent phase velocities and attenuation coefficients were measured using this technique. The results for a Berea sandstone material show a clear excitation of the fundamental surface mode, the pseudo-Stoneley wave. Comparison of the experimental results with numerical predictions based on Biot's theory of poromechanics [J. Acoust. Soc. Am. 28, 168 (1956)] shows that the oscillating fluid flow at the borehole wall is the dominant loss mechanism governing the pseudo-Stoneley wave and that it is properly described by Biot's model at frequencies below 40 kHz. At higher frequencies, a systematic underestimation by the theoretical predictions is found, which can be attributed to the existence of other loss mechanisms neglected in the Biot formulation. Higher-order guided modes associated with the compressional wave in the porous formation and the cylindrical geometry of the shock tube were excited, and detailed information was obtained on the frequency-dependent phase velocity and attenuation in highly porous and permeable materials. The measured attenuation of the guided wave associated with the compressional wave reveals the presence of regular oscillatory patterns that can be attributed to radial resonances. This oscillatory behavior is also numerically predicted, although the measured attenuation values are one order of magnitude higher than the corresponding theoretical values. The phase velocities of the higher-order modes are generally well predicted by theory.

  3. Characterization of structural relaxation in inorganic glasses using length dilatometry

    NASA Astrophysics Data System (ADS)

    Koontz, Erick

    The processes that govern how a glass relaxes towards its thermodynamic quasi-equilibrium state are major factors in understanding glass behavior near the glass transition region, as characterized by the glass transition temperature (Tg). Intrinsic glass properties such as specific volume, enthalpy, entropy and density are used to map the behavior of the glass network below and near the transition region. Whether a true thermodynamic second-order phase transition takes place in the glass transition region remains an open question. Linking viscosity behavior to entropy, or viewing the glass configuration as an energy landscape, are two of the most prevalent approaches to understanding the glass transition. The structural relaxation behavior of inorganic glasses is important for more than scientific reasons: many commercial glass processing operations, including glass melting and certain forms of optical fabrication, involve significant time spent in the glass transition region. For this reason, knowledge of structural relaxation processes can, at a minimum, provide information on the annealing duration of melt-quenched glasses. The development of a predictive model for prescribing annealing times has the potential to save glass manufacturers significant time and money while increasing volume throughput. In optical hot-forming processes such as precision glass molding, molded optical components can change shape significantly upon cooling through the glass transition. This change in shape is not yet scientifically predictable, though manufacturers typically use empirical rules developed in house. Classifying glass behavior in the glass transition region would allow molds to be designed accurately and save money for the producers. The work discussed in this dissertation comprises the development of a dilatometric measurement and characterization method for structural relaxation.
The measurement and characterization technique comprises three main components: experimental measurements, fitting of configurational length change, and description of glass behavior by analysis of the fitting parameters. N-BK7 optical glass from Schott was used as the proof-of-concept glass, but the main scientific interest was in three chalcogenide glasses: As40Se60, As20Se80, and Ge17.9As19.7Se62.4. The dilatometric experiments were carried out using a thermomechanical analyzer (TMA) on glass samples that were synthesized by the author, in all cases except N-BK7. Isothermal structural relaxation measurements were performed on beams (12 mm tall × 3 mm × 3 mm) placed vertically in the TMA. The samples were equilibrated at a starting temperature (T0) until structural equilibrium was reached; a temperature down-step was then initiated to the final temperature (T1) and held isothermally until relaxation concluded. The configurational part of the length relaxation, and therefore the volume relaxation, was extracted and fit with a Prony series. The Prony series parameters indicated a number of relaxation events occurring within the glass on timescales typically an order of magnitude apart. The data analysis showed as many as four discrete relaxation times at lower temperatures. The number of discrete relaxations decreased as the temperature increased, until a single relaxation remained in the temperature range at or above Tg. In the case of N-BK7 these trends were used to construct a simple model that could be applied to glass manufacturing in the areas of annealing or PGM. A future development of a rather simple finite element model (FEM) could easily use this model to predict the exponential-like, temperature- and time-dependent relaxation behavior of the glass. The predictive model was not extended to the chalcogenide glasses studied here, but could easily be applied to them in the future.
The relaxation-time trends versus temperature showed a definite region of transition between a low-temperature state with many relaxations and a high-temperature state with only a single relaxation. Evidence was found for the existence of a definitive transition of some kind in the range of Tg, possibly related to the idea of a percolation temperature (T*) as defined by Carmi. The results of the measurements showed substantial support for the Adam-Gibbs interpretation of decreasing entropy towards the Kauzmann temperature, while also displaying trends compatible with energy-landscape theory and the idea of broken ergodicity of the glass configuration below Tg. In addition, effective relaxation energies were calculated, and the energy needed for relaxation showed a definite upward trend with decreasing temperature, also supporting the idea of reduced entropy and configurational freedom at lower temperatures. The effective relaxation energies are not purely thermodynamic in nature, because they also capture the effects of viscosity and the kinetics of the relaxing material. (Abstract shortened by UMI.)

  4. Fractional calculus model of articular cartilage based on experimental stress-relaxation

    NASA Astrophysics Data System (ADS)

    Smyth, P. A.; Green, I.

    2015-05-01

    Articular cartilage is a unique substance that protects joints from damage and wear. Many decades of research have led to detailed biphasic and triphasic models for the intricate structure and behavior of cartilage. However, the models contain many assumptions on boundary conditions, permeability, viscosity, model size, loading, etc., that complicate the description of cartilage. For impact studies or biomimetic applications, cartilage can be studied phenomenologically to reduce modeling complexity. This work reports experimental results on the stress-relaxation of equine articular cartilage in unconfined loading. The response is described by a fractional calculus viscoelastic model, which gives storage and loss moduli as functions of frequency, rendering multiple advantages: (1) the fractional calculus model is robust, meaning that fewer constants are needed to accurately capture a wide spectrum of viscoelastic behavior compared to other viscoelastic models (e.g., Prony series), (2) in the special case where the fractional derivative is 1/2, it is shown that there is a straightforward time-domain representation, (3) the eigenvalue problem is simplified in subsequent dynamic studies, and (4) cartilage stress-relaxation can be described with as few as three constants, giving an advantage for large-scale dynamic studies that account for joint motion or impact. Moreover, the resulting storage and loss moduli can quantify healthy, damaged, or cultured cartilage, as well as artificial joints. The proposed characterization is suited for high-level analysis of multiphase materials, where the separate contribution of each phase is not desired. Potential uses of this analysis include biomimetic dampers and bearings, or artificial joints where the effective stiffness and damping are fundamental parameters.
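
    The economy the abstract claims for the fractional model can be illustrated with the simplest fractional element, a "spring-pot" with relaxation behavior governed by a derivative of order a. This sketch is not the paper's cartilage model; E, a and the frequency range are assumed illustrative values.

```python
import numpy as np

# Spring-pot element: sigma(t) = E * d^a eps / dt^a, whose complex modulus
# is G*(i*w) = E * (i*w)**a. Two constants describe storage and loss moduli
# over the whole frequency range (a Prony series would need a pair of
# constants per relaxation mode).
E, a = 2.0, 0.5                  # assumed illustrative values
w = np.logspace(-2, 2, 50)       # angular frequency grid
G_star = E * (1j * w) ** a
G_storage, G_loss = G_star.real, G_star.imag
# Special case a = 1/2: storage and loss moduli coincide at every
# frequency, since the loss tangent is tan(a*pi/2) = tan(pi/4) = 1.
```

    This also hints at why the a = 1/2 case admits a simple time-domain representation, as noted in point (2) of the abstract.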

  5. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received less attention, even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been pursued rigorously. Wind turbine blade modeling studies mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, a modeling study of wind turbine blades with visualization experiments is needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed based on the twist and chord distributions following Schmitz's formula. Forward and backward sweep is added to the rotating blades; the added sweep can enhance or diminish outward flow disturbance or stall-development propagation on the spanwise blade surfaces, giving a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force of the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing Prony's braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying the "tuft-visualization technique" to study the appearance of laminar, separated, and boundary-layer flow patterns surrounding the three-dimensional blade system.

  6. Influence of strain rate on indentation response of porcine brain.

    PubMed

    Qian, Long; Zhao, Hongwei; Guo, Yue; Li, Yuanshang; Zhou, Mingxing; Yang, Liguo; Wang, Zhiwei; Sun, Yifan

    2018-06-01

    Knowledge of the mechanical properties of brain tissue may be critical for formulating hypotheses about specific disease mechanisms and for accurate simulations of, e.g., traumatic brain injury (TBI) and tumor growth. Compared to traditional tests (e.g. tensile and compression), indentation is superior by virtue of its localized and nondestructive/quasi-nondestructive character. As a viscoelastic material, brain tissue by definition has strain-rate-dependent properties. However, most efforts in the field of brain indentation focus on velocity rather than strain rate, and the influence of strain rate on the indentation response of brain tissue has received little attention. Further, comparing different results from the literature also makes it obvious that strain rate, rather than velocity, is the more appropriate quantity for characterizing the mechanical properties of brain. In this paper, to systematically characterize the influence of strain rate, a series of indentation-relaxation tests (n = 210) is performed on the cortex of porcine brain using a custom-designed indentation device. The mechanical response, which correlates with indenter diameter, indentation depth and velocity, is revealed for the indentation portion, and the elastic behavior of brain tissue is analyzed as a function of strain rate. Similarly, a linear viscoelastic model with a Prony series is employed for the indentation-relaxation portion, wherein the brain tissue appears more viscous and responds more quickly with increasing strain rate. Understanding the effect of strain rate on the mechanical properties measured in brain indentation may be far-reaching for brain-injury biomechanics and accurate simulation, and is important for bridging indentation results across the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. New methodology for mechanical characterization of human superficial facial tissue anisotropic behaviour in vivo.

    PubMed

    Then, C; Stassen, B; Depta, K; Silber, G

    2017-07-01

    Mechanical characterization of human superficial facial tissue has important applications in biomedical science, computer-assisted forensics, graphics, and consumer goods development; the latter may specifically include facial hair removal devices. The predictive accuracy of numerical models and their ability to elucidate biomechanically relevant questions depend on the acquisition of experimental data and the representation of mechanical tissue behavior. Anisotropic viscoelastic characterization of human facial tissue deformed in vivo at finite strain, however, is sparse. Employing an experimental-numerical approach, a procedure is presented to evaluate multidirectional tensile properties of superficial tissue layers of the face in vivo. Specifically, in addition to stress relaxation, displacement-controlled multi-step ramp-and-hold protocols were performed to separate elastic from inelastic properties. For numerical representation, an anisotropic hyperelastic material model in conjunction with a time-domain linear viscoelasticity formulation with a Prony series was employed. Model parameters were inversely derived using finite element models and multi-criteria optimization. The methodology provides insight into the mechanical properties of superficial facial tissue. The experimental data show pronounced anisotropy, especially at large strain. The stress relaxation rate does not depend on the loading direction, but is strain-dependent. Preconditioning eliminates equilibrium hysteresis effects and leads to stress-strain repeatability. In the preconditioned state, the insensitivity of tissue stiffness and hysteresis to strain rate within the applied range is evident. The employed material model fits the nonlinear anisotropic elastic results, and the viscoelasticity model reasonably reproduces the time-dependent results.
    The inversely deduced maximum anisotropic long-term shear modulus of linear elasticity is G∞,max^aniso = 2.43 kPa, and the instantaneous initial shear modulus at the applied ramp loading rate is G0,max^aniso = 15.38 kPa. The derived mechanical model parameters constitute a basis for complex skin-interaction simulation. Copyright © 2017. Published by Elsevier Ltd.

  8. Characterizing viscoelastic properties of breast cancer tissue in a mouse model using indentation.

    PubMed

    Qiu, Suhao; Zhao, Xuefeng; Chen, Jiayao; Zeng, Jianfeng; Chen, Shuangqing; Chen, Lei; Meng, You; Liu, Biao; Shan, Hong; Gao, Mingyuan; Feng, Yuan

    2018-03-01

    Breast cancer is one of the leading cancers affecting females worldwide. Characterizing the mechanical properties of breast cancer tissue is important for diagnosis and for uncovering mechanobiological mechanisms. Although most studies have been based on human cancer tissue, an animal model is still desirable for preclinical analysis. Using a custom-built indentation device, we measured the viscoelastic properties of breast cancer tissue from the 4T1 and SKBR3 cell lines. A total of 7 samples were tested for each cancer tissue using a mouse model. We observed that a viscoelastic model with a 2-term Prony series could best describe the ramp and stress relaxation of the tissue. For long-term responses, the SKBR3 tissues were stiffer at strain levels of 4-10%, while no significant differences were found for the instantaneous elastic modulus. We also found that tissues from both cell lines appeared to be strain-independent for the instantaneous elastic modulus, and for the long-term elastic modulus at strain levels of 4-10%. In addition, by inspecting the cellular morphological structure of the two tissues, we found that SKBR3 tissues had a larger volume ratio of nuclei and a smaller volume ratio of extracellular matrix (ECM). Compared with prior cellular mechanics studies, our results indicated that the ECM could contribute to stiffening the tissue-level behavior. The viscoelastic characterization of breast cancer tissue contributes to the scarce animal model data and provides support for the linear viscoelastic model used in in vivo elastography studies. The results also supply helpful information for modeling breast cancer tissue at the tissue and cellular levels. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Characterization of viscoelastic response and damping of composite materials used in flywheel rotors

    NASA Astrophysics Data System (ADS)

    Chen, Jianmin

    The long-term goal of spacecraft flywheel systems with higher energy density at the system level requires new and innovative composite material concepts. Multi-Direction Composite (MDC) offers significant advantages over traditional filament-wound and multi-ring press-fit filament-wound wheels by providing higher energy density (i.e., less mass), better crack resistance, and enhanced safety. However, there is a lack of systematic characterization of the dynamic properties of MDC composite materials. In order to improve flywheel material reliability, durability and lifetime, it is very important to evaluate the time-dependent aging effects and damping properties of the MDC material, which are significant dynamic parameters for vibration and sound control, fatigue endurance, and impact resistance. The physical aging effects are quantified based on a set of creep curves measured at different aging times or different aging temperatures. A one-parameter (tau) curve fit was proposed to represent the relationship of aging time and aging temperature between different master curves. The long-term mechanical behavior was predicted from the obtained master curves. The time and temperature shift factors of the matrix were obtained from creep curves, and the aging-time shift rate was calculated. The aging effects on the composite were obtained from experiments and compared with predictions. The quasi-static mechanical behavior of the MDC composite was analyzed. The correspondence principle was used to relate quasi-static elastic properties of the composite material to the time-dependent properties of its constituent materials (i.e., fiber and matrix). The Prony series, combined with a multi-data fitting method, was applied to invert the Laplace transform and to calculate the time-dependent stiffness matrix efficiently. Accelerated time-dependent deformation of two flywheel rim designs was studied for a period equivalent to 31 years and compared with a hoop-reinforcement-only composite.
    Damping of pure resin and of T700/epoxy composite lamina and laminates in the longitudinal and transverse directions was investigated experimentally and analytically. The effect of aging on damping was also studied by placing samples at 60°C in an oven for extended periods. Damping master curves versus frequency were constructed from individual curves at different temperatures based on the Arrhenius equation. The damping response of the composite lamina was used to predict the response of laminated composites. Analytical results give numerical values close to experimental results for the damping of cantilever-beam laminated composite samples.

  10. Aeromechanical stability augmentation using semi-active friction-based lead-lag damper

    NASA Astrophysics Data System (ADS)

    Agarwal, Sandeep

    2005-11-01

    Lead-lag dampers are present in most rotors to provide the required level of damping in all flight conditions. These dampers are a critical component of the rotor system, but they also represent a major source of maintenance cost. In present rotor systems, both hydraulic and elastomeric lead-lag dampers have been used. Hydraulic dampers are complex mechanical components that require hydraulic fluids and have high associated maintenance costs. Elastomeric dampers are conceptually simpler and provide a "dry" rotor, but are rather costly. Furthermore, their damping characteristics can degrade with time without showing external signs of failure. Hence, the dampers must be replaced on a regular basis. A semi-active friction-based lead-lag damper is proposed as a replacement for hydraulic and elastomeric dampers. Damping is provided by optimized energy dissipation due to frictional forces in semi-active joints. An actuator in the joint modulates the normal force that controls energy dissipation at the frictional interfaces, resulting in large hysteretic loops. Various selective damping strategies are developed and tested for a simple system containing two different frequency modes in its response, one of which needs to be damped out. The system reflects the situation encountered in rotor response, where 1P excitation is present along with the potentially unstable regressive lag motion. Simulations of the system response are used to compare the strategies' effectiveness. Next, a control law governing the actuation in the lag damper is designed to generate the desired level of damping for performing adaptive selective damping of individual blade lag motion. Further, a conceptual design of a piezoelectric friction-based lag damper for a full-scale rotor is presented, and various factors affecting the size, design and maintenance cost, damping capacity, and power requirements of the damper are discussed.
    The selective semi-active damping strategy is then studied in the context of the classical ground resonance problem. In view of the inherent nonlinearity in the system due to friction phenomena, the multiblade transformation from the rotating frame to the nonrotating frame is not useful. Stability analysis of the system is performed in the rotating frame to gain an understanding of the dynamic characteristics of a rotor system with attached semi-active friction-based lag dampers. This investigation is extended to the ground resonance stability analysis of a comprehensive UH-60 model within the framework of finite element based multibody dynamics formulations. Simulations are conducted to study the performance of several integrated lag dampers ranging from passive to semi-active ones with varying levels of selectivity. Stability analysis is performed for a nominal range of rotor speeds using Prony's method.
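
    The Prony's method invoked above (and in several other records here) fits a sum of damped exponentials to uniformly sampled data, yielding each mode's damping factor and frequency. A minimal sketch of the classic two-step procedure (linear prediction, then polynomial rooting); this is an illustration, not the dissertation's implementation:

```python
import numpy as np

def prony_modes(y, dt, p):
    """Classic Prony method: model y[n] as a sum of p damped exponentials
    A_k * z_k**n and return per-mode damping factors (1/s) and frequencies (Hz)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Step 1: linear prediction, y[m] = -a1*y[m-1] - ... - ap*y[m-p].
    cols = [y[p - 1 - k: n - 1 - k] for k in range(p)]
    a, *_ = np.linalg.lstsq(np.column_stack(cols), -y[p:], rcond=None)
    # Step 2: roots of the characteristic polynomial give z_k = exp(lambda_k*dt).
    z = np.roots(np.concatenate(([1.0], a)))
    sigma = np.log(np.abs(z)) / dt            # damping factor (negative: decaying)
    freq = np.angle(z) / (2.0 * np.pi * dt)   # Hz (conjugate pairs give +/- f)
    return sigma, freq

# Sanity check: a single 2 Hz mode decaying at 0.3 1/s, sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
y = np.exp(-0.3 * t) * np.cos(2.0 * np.pi * 2.0 * t)
sigma, freq = prony_modes(y, dt, p=2)
```

    Applied to a ringdown response, the sign of each recovered damping factor indicates stability: negative values correspond to decaying (stable) modes.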

  11. Numerical simulation of a relaxation test designed to fit a quasi-linear viscoelastic model for temporomandibular joint discs.

    PubMed

    Commisso, Maria S; Martínez-Reina, Javier; Mayo, Juana; Domínguez, Jaime

    2013-02-01

    The main objectives of this work are: (a) to introduce an algorithm for adjusting the quasi-linear viscoelastic model to fit a material using a stress relaxation test and (b) to validate a protocol for performing such tests on temporomandibular joint discs. This algorithm fits the Prony series coefficients and the hyperelastic constants of the quasi-linear viscoelastic model while accounting for the fact that the relaxation test is performed with an initial ramp loading at a certain rate. The algorithm was validated before being applied to the second objective. Generally, the complete three-dimensional formulation of the quasi-linear viscoelastic model is very complex. Therefore, it is necessary to design an experimental test that ensures a simple stress state, such as uniaxial compression, to facilitate obtaining the viscoelastic properties. This work provides some recommendations about the experimental setup, which are important to follow, as an inadequate setup could produce a stress state far from uniaxial, thus distorting the material constants determined from the experiment. The test considered is a stress relaxation test using unconfined compression, performed on cylindrical specimens extracted from temporomandibular joint discs. To validate the experimental protocol, the test was numerically simulated using finite-element modelling. The disc was arbitrarily assigned a set of quasi-linear viscoelastic constants (c1) in the finite-element model. Another set of constants (c2) was obtained by fitting the results of the simulated test with the proposed algorithm. The deviation of constants c2 from constants c1 measures how far the stresses are from the uniaxial state.
The effects of the following features of the experimental setup on this deviation have been analysed: (a) the friction coefficient between the compression plates and the specimen (which should be as low as possible); (b) the portion of the specimen glued to the compression plates (smaller areas glued are better); and (c) the variation in the thickness of the specimen. The specimen's faces should be parallel to ensure a uniaxial stress state. However, this is not possible in real specimens, and a criterion must be defined to accept the specimen in terms of the specimen's thickness variation and the deviation of the fitted constants arising from such a variation.
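
    The point about the initial ramp can be made concrete for the linear case: stress follows the hereditary integral sigma(t) = integral of G(t - s) times the strain rate, so relaxation already occurs during the ramp and the peak stress falls below the instantaneous-modulus prediction. A small numerical sketch with a one-term Prony modulus (illustrative constants, not values fitted in the paper):

```python
import numpy as np

# One-term Prony relaxation modulus (illustrative constants, not from the paper).
G_INF, G1, TAU1 = 1.0, 0.5, 2.0   # kPa, kPa, s

def relax_modulus(t):
    return G_INF + G1 * np.exp(-t / TAU1)

def ramp_hold_stress(t, rate, t_ramp):
    """Stress for a strain ramp (rate * t up to t_ramp, then hold), from the
    hereditary integral sigma(t) = rate * int_0^min(t, t_ramp) G(t - s) ds."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.empty_like(t)
    for i, ti in enumerate(t):
        s = np.linspace(0.0, min(ti, t_ramp), 2000)
        g = relax_modulus(ti - s)
        out[i] = rate * np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(s))  # trapezoid rule
    return out

# Peak stress at the end of a 1 s ramp at 0.1/s: below the instantaneous
# prediction rate * t_ramp * (G_INF + G1) because relaxation occurs on the ramp.
sigma_peak = ramp_hold_stress(1.0, rate=0.1, t_ramp=1.0)[0]
```

    Ignoring the ramp and treating the loading as a step is exactly the error the fitting algorithm in this record is designed to avoid.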

  12. Comprehensive Understanding of the Shinkai Seep Field in the Southern Mariana Forearc Based On High-Resolution Bathymetry Data

    NASA Astrophysics Data System (ADS)

    Ohara, Y.; Okumura, T.; Stern, R. J.; Fujii, M.; Kasaya, T.; Martinez, F.; Michibayashi, K.

    2016-12-01

    The Shinkai Seep Field (SSF) is a serpentinite-hosted cold seep and associated ecosystem in the southern Mariana forearc near the Challenger Deep [Ohara et al., PNAS, 2012], discovered as a massive vesicomyid clam colony site by a DSV Shinkai 6500 dive in September 2010. Serpentinite-hosted alkaline seep systems are believed to be important for considering the habitats of the earliest life on Earth, as well as extraterrestrial life such as on Saturn's moon Enceladus. The SSF is the fourth known major location of such a serpentinite-hosted alkaline seep system, following the Lost City field on the Mid-Atlantic Ridge, South Chamorro Seamount in the Mariana forearc, and the Prony Bay field in New Caledonia. Following the SSF discovery, three JAMSTEC expeditions with DSV Shinkai 6500 and a single NSF-funded US expedition with the deep-towed side-scan sonar IMI-30 investigated the SSF. These follow-up expeditions further discovered brucite and carbonate chimney sites and additional vesicomyid clam colony sites [Okumura et al., submitted], locating the geographical positions of these sites. We now estimate that the areal extent of the SSF is approximately 500 m by 300 m. However, this estimate is based on the shipboard multibeam bathymetry of R/V Yokosuka, which has a grid size of approximately 50 m. Therefore, our understanding of the spatial relationships of chimneys and colonies is not as well constrained as it could be, hindering discussion of the subseafloor hydrological structure of the SSF. In order to advance our understanding of the SSF, we need to directly sample the fluid and understand the detailed spatial relationships between SSF chimneys. We will have an expedition using JAMSTEC's R/V Kairei and ROV Kaiko Mk-IV in early November (KR16-14 cruise) to obtain this information.
    Near-bottom high-resolution bathymetric data (submeter-scale) of the SSF and the forearc rift in its vicinity will be obtained with a multibeam SeaBat 7125 sonar system installed on the ROV Kaiko Mk-IV, keeping the ROV at an altitude of 80 m and a cruising speed of 2 knots. In this contribution, we will report the expedition results, discussing implications for the subseafloor hydrological structure of the SSF and its vicinity.

  13. Transient stability enhancement of modern power grid using predictive Wide-Area Monitoring and Control

    NASA Astrophysics Data System (ADS)

    Yousefian, Reza

    This dissertation presents a real-time Wide-Area Control (WAC) scheme designed using artificial intelligence for transient stability enhancement of large-scale modern power systems. The WAC, using measurements available from Phasor Measurement Units (PMUs) at generator buses, monitors the global oscillations in the system and optimally augments the local excitation systems of the synchronous generators. The complexity of the power system stability problem, along with uncertainties and nonlinearities, makes conventional modeling impractical or inaccurate. In this work, a Reinforcement Learning (RL) algorithm built on Neural Networks (NNs) is used to map the nonlinearities of the system in real-time. This method, unlike both centralized and decentralized control schemes, employs a number of semi-autonomous agents that collaborate with each other to perform optimal control, which is well-suited for WAC applications. Also, to handle the delays in Wide-Area Monitoring (WAM) and adapt the RL toward a robust control design, Temporal Difference (TD) learning is proposed as a solver for the RL problem, i.e., the optimal cost function. However, the main drawback of such a WAC design is that it is challenging to determine whether an offline-trained network remains valid for assessing the stability of the power system once the system has evolved to a different operating state or network topology. In order to address this generality issue of NNs, a value priority scheme is proposed in this work to design a hybrid of linear and nonlinear controllers. The algorithm, so-called supervised RL, is based on a mixture of experts: it is initialized with the linear controller and, as the performance and identification of the RL controller improve in real-time, it switches to the nonlinear controller. This work also focuses on transient stability and develops Lyapunov energy functions for synchronous generators to monitor the stability stress of the system.
    Using such energies as a cost function guarantees convergence toward optimal post-fault solutions. These energy functions are developed from the inter-area oscillations of the system, identified online with Prony analysis. Finally, this work investigates the impacts of renewable energy resources, specifically Doubly Fed Induction Generator (DFIG)-based wind turbines, on power system transient stability and control. As the penetration of such resources in the transmission power system increases, neglecting their impacts would make the WAC design unrealistic. An energy function is proposed for DFIGs based on their dynamic performance during transient disturbances. Further, this energy is added to the synchronous generators' energy as a global cost function, which is minimized by the WAC signals. We discuss the relative advantages and bottlenecks of each architecture and methodology using dynamic simulations of several test systems, including a 2-area 8-bus system, the IEEE 39-bus system, and the IEEE 68-bus system, in EMTP and real-time simulators. Being a nonlinear, fast, accurate, and non-model-based design, the proposed WAC system shows better transient and damping response when compared to conventional control schemes and local PSSs.

  14. Renewable source controls for grid stability.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond Harry; Elliott, Ryan Thomas; Neely, Jason C.

    2012-12-01

    The goal of this study was to evaluate the small signal and transient stability of the Western Electricity Coordinating Council (WECC) under high penetrations of renewable energy, and to identify control technologies that would improve the system performance. The WECC is the regional entity responsible for coordinating and promoting bulk electric system reliability in the Western Interconnection. Transient stability is the ability of the power system to maintain synchronism after a large disturbance, while small signal stability is the ability of the power system to maintain synchronism after a small disturbance. Transient stability analysis usually focuses on the relative rotor angle between synchronous machines compared to some stability margin. For this study we employed generator speed relative to system speed as a metric for assessing transient stability. In addition, we evaluated the system transient response using the system frequency nadir, which provides an assessment of the adequacy of the primary frequency control reserves. Small signal stability analysis typically identifies the eigenvalues or modes of the system in response to a disturbance. For this study we developed mode shape maps for the different scenarios. Prony analysis was applied to generator speed after a 1.4 GW, 0.5 second brake insertion at various locations. Six different WECC base cases were analyzed, including the 2022 light spring case, which meets the renewable portfolio standards. Because of the difficulty in identifying the cause and effect relationship in large power system models with different scenarios, several simulations were run on a 7-bus, 5-generator system to isolate the effects of different configurations. Based on the results of the study, for a large power system like the WECC, incorporating frequency droop into wind/solar systems provides a larger benefit to system transient response than replacing the lost inertia with synthetic inertia.
    From a small signal stability perspective, the increase in renewable penetration results in subtle changes to the system modes. In general, mode frequencies increase slightly, and mode shapes remain similar. The system frequency nadir for the 2022 light spring case was slightly lower than in the other cases, largely because of the reduced system inertia. However, the nadir is still well above the minimum load shedding frequency of 59.5 Hz. Finally, several discrepancies were identified between actual and reported wind penetration, and additional work on wind/solar modeling is required to increase the fidelity of the WECC models.
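
    The oscillatory modes identified by such Prony and small-signal analyses are conventionally summarized by frequency and damping ratio: an eigenvalue lambda = sigma +/- j*omega has damping ratio zeta = -sigma / sqrt(sigma^2 + omega^2) and frequency f = omega / (2*pi). A tiny sketch (the 0.4 Hz mode below is hypothetical, not a WECC result):

```python
import math

def mode_metrics(sigma, omega):
    """Damping ratio and frequency (Hz) of an oscillatory mode with
    eigenvalue lambda = sigma +/- j*omega from small-signal analysis."""
    zeta = -sigma / math.hypot(sigma, omega)
    freq_hz = omega / (2.0 * math.pi)
    return zeta, freq_hz

# A hypothetical 0.4 Hz inter-area mode decaying at sigma = -0.25 1/s:
# roughly 10% damping ratio.
zeta, f = mode_metrics(-0.25, 2.0 * math.pi * 0.4)
```

    Comparing such damping ratios across base cases is one way the "subtle changes to the system modes" mentioned above can be quantified.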

  15. Compressive mechanical characterization of non-human primate spinal cord white matter.

    PubMed

    Jannesar, Shervin; Allen, Mark; Mills, Sarah; Gibbons, Anne; Bresnahan, Jacqueline C; Salegio, Ernesto A; Sparrey, Carolyn J

    2018-05-02

    The goal of developing computational models of spinal cord injury (SCI) is to better understand the human injury condition. However, finite element models of human SCI have used rodent spinal cord tissue properties due to a lack of experimental data. Central nervous system tissues in non-human primates (NHPs) closely resemble those of humans; therefore, it is expected that material constitutive models obtained from NHPs will increase the fidelity and accuracy of human SCI models. Human SCI most often results from compressive loading, and spinal cord white matter properties affect FE-predicted patterns of injury; therefore, the objectives of this study were to characterize the unconfined compressive response of NHP spinal cord white matter and to present an experimentally derived, finite element tractable constitutive model for the tissue. Cervical spinal cords were harvested from nine male adult NHPs (Macaca mulatta). White matter biopsy samples (3 mm in diameter) were taken from both lateral columns of the spinal cord and divided into four strain rate groups for unconfined dynamic compression and stress relaxation (post-mortem time <1 hour). The NHP spinal cord white matter compressive response was sensitive to strain rate and showed substantial stress relaxation, confirming the viscoelastic behavior of the material. A 1st-order Ogden model in a quasi-linear viscoelastic formulation with a 4-term Prony series best captured the non-linear behavior of NHP white matter. This study is the first to characterize NHP spinal cord white matter at the high (>10/s) strain rates typical of traumatic injury. The material constitutive model derived in this study will increase the fidelity of SCI computational models and provide important insights for transferring pre-clinical findings to clinical treatments.
    Spinal cord injury (SCI) finite element (FE) models provide an important tool to bridge the gap between animal studies and human injury, to assess injury prevention technologies (e.g. helmets, seatbelts), and to provide insight into the mechanisms of injury. However, FE model outcomes depend on the assumed material constitutive model, and the limited experimental data for fresh spinal cords were all obtained from rodent, porcine or bovine tissues. Central nervous system tissues in non-human primates (NHPs) more closely resemble those of humans. This study characterizes fresh NHP spinal cord material properties at the high strain rates and large deformations typical of SCI for the first time. A constitutive model was defined that can be readily implemented in finite strain FE analysis of SCI. Copyright © 2018. Published by Elsevier Ltd.

  16. Crystallization Experiments in the MgO-CO2-H2O system: Role of Amorphous Magnesium Carbonate Precursors in Magnesium Carbonate Hydrated Phases and Morphologies in Low Temperature Hydrothermal Fluids

    NASA Astrophysics Data System (ADS)

    Giampouras, Manolis; Garcia-Ruiz, Juan Manuel; Garrido, Carlos J.

    2017-04-01

    Numerous forms of hydrated or basic magnesium carbonate occur in the complex MgO-CO2-H2O system. Mineral saturation states of low temperature hydrothermal fluids in the Semail Ophiolite (Oman), Prony Bay (New Caledonia) and the Lost City hydrothermal field (Mid-Atlantic Ridge) strongly indicate the presence of magnesium hydroxy-carbonate hydrates (e.g. hydromagnesite) and magnesium hydroxides (brucite). Study of the formation mechanisms and morphological features of minerals forming in the MgO-CO2-H2O system could give insights into serpentinization-driven hydrothermal alkaline environments, which are related to early Earth conditions. Temperature, degree of hydration, pH and fluid composition are crucial factors in the formation, coexistence and transformation of such mineral phases. The rate of supersaturation, on the other hand, is a fundamental parameter for understanding nucleation and crystal growth processes. All these parameters can be examined in solution using different crystallization techniques. In the present study, we applied different crystallization techniques to synthesize and monitor the crystallization of Mg-bearing carbonates and hydroxides under abiotic conditions. Various crystallization techniques (counter-diffusion, vapor diffusion and unseeded solution mixing) were used to screen the formation conditions of each phase, the transformation processes and the structural development. Mineral and textural characterization of the different synthesized phases was carried out by X-ray diffraction (XRD), Raman spectroscopy and scanning electron microscopy coupled with energy-dispersive spectroscopy (FE-SEM-EDS). Experimental investigation of the effect of pH level and silica content under variable reactant concentrations revealed the importance of Amorphous Magnesium Carbonate (AMC) in the formation of hydroxy-carbonate phases (hydromagnesite and dypingite).
    Micro-structural resemblance between AMC precursors and later-stage crystalline phases highlights the critical role of internal molecular re-organization in forming crystalline structures. Aggregation of AMC spherulites triggers biomimetic morphologies, forming curling laminar structures and rings. The size and number of nesquehonite (MgCO3·3H2O) crystals are controlled by pH and Mg2+ ions at pH < 9. As pH increases, nesquehonite transforms to spherical, rosette-like dypingite and/or hydromagnesite. Crystallization experiments within silica gel impede the normal growth of prismatic nesquehonite crystals and generate peculiar dendritic crystalline structures. Finally, vapor diffusion techniques resulted in the synthesis of NH4+-bearing hydrated compounds after ammonium incorporation when [NH4+]/[Mg2+] ≥ 1 and [NH4+] ≥ 0.5 M. Funding: We acknowledge funding from the People programme (Marie Curie Actions - ITN) of the European Union FP7 under REA Grant Agreement no. 608001.

  17. Time and Temperature Dependence of Viscoelastic Stress Relaxation in Gold and Gold Alloy Thin Films

    NASA Astrophysics Data System (ADS)

    Mongkolsuttirat, Kittisun

    Radio frequency (RF) switches based on capacitive MicroElectroMechanical System (MEMS) devices have been proposed as replacements for traditional solid-state field effect transistor (FET) devices. However, one of the limitations of existing capacitive switch designs is long-term reliability. Failure is generally attributed to electrical charging in the capacitor's dielectric layer, which creates an attractive electrostatic force between the moving upper capacitor plate (a metal membrane) and the dielectric. This acts as a stiction force between them that may cause the switch to stay permanently in the closed state. The force responsible for opening the switch is the elastic restoring force due to stress in the film membrane. If the restoring force decreases over time due to stress relaxation, the tendency toward stiction failure will increase. Au films have been shown to exhibit stress relaxation even at room temperature. The stress relaxation observed is a type of viscoelastic behavior that is more significant in thin metal films than in bulk materials. Metal films with a high relaxation resistance would have a lower probability of device failure due to stress relaxation. It has been shown that solid solution and oxide dispersion can strengthen a material without unacceptable decreases in electrical conductivity. In this study, the viscoelastic behavior of Au, AuV solid solution and AuV2O5 dispersion films created by DC magnetron sputtering is investigated using the gas-pressure bulge testing technique in the temperature range from 20 to 80°C. The effectiveness of the two strengthening approaches is compared with pure Au in terms of relaxation modulus and 3-hour modulus decay. The time-dependent relaxation curves can be fitted very well with a four-term Prony series model. From the temperature dependence of the terms of the series, activation energies have been deduced to identify the possible dominant relaxation mechanism.
    The measured modulus relaxation of the Au films also proves that the films exhibit linear viscoelastic behavior. From this, a linear viscoelastic model is shown to fit experimental steady-state stress relaxation data very well and can predict the time-dependent stress for complex loading histories, including the ability to predict stress-time behavior at other strain rates during loading. Two specific factors expected to influence the viscoelastic behavior, the degree of alloying and the grain size, are investigated by varying the V concentration in solid solution and the grain size of pure Au. It is found that the normalized modulus of Au films depends on both concentration (C) and grain size (D), with proportionalities of C^(1/3) and D^2, respectively. A quantitative model of the rate equation for dislocation glide plasticity based on Frost and Ashby is proposed and fits the steady-state anelastic stress relaxation data well. The activation volume and the density of mobile dislocations are determined using repeated stress relaxation tests in order to further understand the viscoelastic relaxation mechanism. A rapid decrease of mobile dislocation density is found at the beginning of relaxation, which correlates well with the large reduction of viscoelastic modulus in the early stage of relaxation. The extracted activation volume and dislocation mobility can be ascribed to mobile dislocation loops with double kinks generated at grain boundaries, consistent with the dislocation mechanism proposed for the low activation energy measured in this study.
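
    Activation energies like those deduced here typically come from an Arrhenius fit of a Prony term's relaxation time against temperature, tau = tau0 * exp(Ea / (R*T)), so the slope of ln(tau) versus 1/T gives Ea/R. A minimal sketch with synthetic data (the 60 kJ/mol value is made up, not a result of this study):

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def activation_energy_kj(T_kelvin, tau):
    """Arrhenius fit tau = tau0 * exp(Ea/(R*T)): the slope of ln(tau)
    versus 1/T equals Ea/R. Returns Ea in kJ/mol."""
    slope, _intercept = np.polyfit(1.0 / np.asarray(T_kelvin, dtype=float),
                                   np.log(np.asarray(tau, dtype=float)), 1)
    return slope * R_GAS / 1000.0

# Synthetic relaxation times for one Prony term with Ea = 60 kJ/mol (made up),
# over the same 20 to 80 degC window used in the study, expressed in kelvin.
T = np.array([293.0, 313.0, 333.0, 353.0])
tau = 1.0e-6 * np.exp(60.0e3 / (R_GAS * T))  # s
ea = activation_energy_kj(T, tau)
```

    The same fit applied to each term of a four-term Prony series gives one activation energy per relaxation process, which is how a dominant mechanism can be singled out.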

  18. Evaluation of Two New Smoothing Methods in Equating: The Cubic B-Spline Presmoothing Method and the Direct Presmoothing Method

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2009-01-01

    This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…

  19. Comparison of DNA extraction methods for meat analysis.

    PubMed

    Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum

    2017-04-15

    Preventing adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious, and health reasons but also for fair trade practices; therefore, several methods for identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA, modified cetyltrimethylammonium bromide (CTAB), alkaline, urea, salt, guanidinium isothiocyanate (GuSCN), Wizard, Qiagen, Zymogen, and Genespin methods, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the salt method is easy to perform, inexpensive, and environmentally friendly. Additionally, it has the highest yield among all the isolation methods tested. We suggest this method as an alternative for DNA isolation from meat and meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Study of New Method Combined Ultra-High Frequency (UHF) Method and Ultrasonic Method on PD Detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Zhang, Jiwei; Chen, Ning; Li, Xiaoqi; Gong, Xiaojing

    2017-09-01

    GIS (gas-insulated switchgear) is an important type of equipment in power systems, and partial discharge (PD) detection plays an important role in assessing its insulation performance. The UHF method and the ultrasonic method are frequently used for PD detection in GIS, but very few studies have examined combining the two. From the viewpoint of safety, a new PD detection method for GIS that combines the UHF and ultrasonic methods is proposed in order to greatly enhance the anti-interference ability of signal detection and the accuracy of fault localization. This paper presents a study aimed at clarifying the effectiveness of this combined method. Partial discharge tests were performed in a simulated laboratory environment. The obtained results demonstrate the anti-interference ability of signal detection and the accuracy of fault localization achieved by the new combined UHF-ultrasonic method.

  1. The multigrid preconditioned conjugate gradient method

    NASA Technical Reports Server (NTRS)

    Tatebe, Osamu

    1993-01-01

    A multigrid preconditioned conjugate gradient method (MGCG method), which uses the multigrid method as the preconditioner of the PCG method, is proposed. The multigrid method has inherent high parallelism and improves convergence of long-wavelength components, which is important in iterative methods. Using it as a preconditioner of the PCG method yields an efficient method with high parallelism and fast convergence. First, a necessary condition on the multigrid preconditioner is considered in order to satisfy the requirements of a preconditioner for the PCG method. Next, numerical experiments show the behavior of the MGCG method and that it is superior to both the ICCG method and the multigrid method in terms of fast convergence and high parallelism. The fast convergence is explained by an eigenvalue analysis of the preconditioned matrix. From this observation of the multigrid preconditioner, it is seen that the MGCG method converges in very few iterations and that the multigrid preconditioner is a desirable preconditioner for the conjugate gradient method.
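
    The PCG skeleton that the MGCG method builds on can be sketched as follows. A true multigrid V-cycle preconditioner is beyond a few lines, so a Jacobi preconditioner stands in for it here, and the 1-D Poisson test problem is illustrative, not from the paper:

```python
import numpy as np

def pcg(A, b, precond, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradient for SPD A x = b.
    `precond(r)` approximates A^{-1} r; in the MGCG method this
    would be one multigrid V-cycle (Jacobi is a stand-in here)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)                 # apply preconditioner to new residual
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # update conjugate search direction
        rz = rz_new
    return x

# 1-D Poisson matrix; Jacobi (diagonal) preconditioner as placeholder
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, precond=lambda r: r / np.diag(A))
print(np.linalg.norm(A @ x - b))
```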

  2. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson, gradient descent, conjugate gradient, proximal gradient, coordinate descent, and genetic algorithm-based methods, while the latter cover the graph cuts, belief propagation, tree-reweighted message passing, linear programming, maximum margin learning, simulated annealing, and iterated conditional modes methods. We also discuss the minimal surface method, the primal-dual method, and multi-objective optimization methods. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
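
    As a minimal concrete instance of the continuous-method family surveyed here, the sketch below runs gradient descent on a toy 1-D denoising energy (quadratic data term plus smoothness term); the energy weights and the signal are made up for illustration:

```python
import numpy as np

def denoise(f, lam=5.0, step=0.05, iters=1000):
    """Gradient descent on E(u) = ||u - f||^2 + lam * ||u'||^2,
    a toy energy-minimization problem of the kind the survey covers."""
    u = f.copy()
    for _ in range(iters):
        # gradient of data term plus gradient of smoothness term
        # (np.gradient applied twice approximates the second derivative)
        grad = 2 * (u - f) - 2 * lam * np.gradient(np.gradient(u))
        u -= step * grad
    return u

rng = np.random.default_rng(0)
f = np.sin(np.linspace(0, 2 * np.pi, 100)) + 0.3 * rng.standard_normal(100)
u = denoise(f)
print(np.var(np.diff(u)) < np.var(np.diff(f)))  # result is smoother than input
```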

  3. [Comparative study on four kinds of assessment methods of post-marketing safety of Danhong injection].

    PubMed

    Li, Xuelin; Tang, Jinfa; Meng, Fei; Li, Chunxiao; Xie, Yanming

    2011-10-01

    To study the adverse reactions of Danhong injection with four methods (central monitoring, chart review, literature study, and spontaneous reporting), to compare the differences between them, and to explore an appropriate method for post-marketing safety evaluation of traditional Chinese medicine injections. Adverse-reaction questionnaires were designed for the central monitoring, chart review, and literature study methods, and information on adverse reactions was collected over a defined period; adverse-reaction reports for Danhong injection were also collected from the Henan Province spontaneous reporting system. The data were summarized and analyzed descriptively. With the central monitoring, chart review, literature study, and spontaneous reporting methods, the rates of adverse events were 0.993%, 0.336%, 0.515%, and 0.067%, respectively. Cyanosis, arrhythmia, hypotension, sweating, erythema, hemorrhagic dermatitis, rash, irritability, bleeding gums, toothache, tinnitus, asthma, elevated aminotransferases, constipation, and pain were newly discovered adverse reactions. Central monitoring is the appropriate method for post-marketing safety evaluation of traditional Chinese medicine injections, as it can objectively reflect real-world clinical usage.

  4. Ensemble Methods for MiRNA Target Prediction from Expression Data.

    PubMed

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods often suffer from inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proven in theory to outperform each of their individual component methods. In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight popular miRNA target prediction methods to three cancer datasets, and compare their performance with that of the ensemble methods which integrate the results from each combination of the individual methods. Validation against experimentally confirmed databases shows that the results of the ensemble methods complement those obtained by the individual methods and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods of different types, namely a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis shows that this ensemble method can recover targets that could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials.
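
    Ensemble integration of target predictions can be illustrated with a Borda-style rank average; the scores and the averaging scheme below are illustrative assumptions, not the paper's exact Pearson+IDA+Lasso procedure:

```python
import numpy as np

def ensemble_ranks(score_lists):
    """Combine several methods' target scores by averaging ranks.
    Each row of score_lists scores the same candidate targets;
    a lower mean rank means a more consistently supported target."""
    scores = np.asarray(score_lists, dtype=float)
    # rank within each method: 0 = highest score
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
    return ranks.mean(axis=0)

# three hypothetical methods scoring four candidate miRNA targets
pearson_like = [0.9, 0.2, 0.5, 0.1]
ida_like     = [0.8, 0.1, 0.6, 0.3]
lasso_like   = [0.7, 0.3, 0.4, 0.2]
mean_rank = ensemble_ranks([pearson_like, ida_like, lasso_like])
print(mean_rank.argmin())  # index of the best-supported target
```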

  5. Ensemble Methods for MiRNA Target Prediction from Expression Data

    PubMed Central

    Le, Thuc Duy; Zhang, Junpeng; Liu, Lin; Li, Jiuyong

    2015-01-01

    Background microRNAs (miRNAs) are short regulatory RNAs that are involved in several diseases, including cancers. Identifying miRNA functions is very important in understanding disease mechanisms and determining the efficacy of drugs. An increasing number of computational methods have been developed to explore miRNA functions by inferring miRNA-mRNA regulatory relationships from data. Each of the methods is developed based on some assumptions and constraints, for instance, assuming linear relationships between variables. For such reasons, computational methods often suffer from inconsistent performance across different datasets. On the other hand, ensemble methods integrate the results from individual methods and have been proven in theory to outperform each of their individual component methods. Results In this paper, we investigate the performance of some ensemble methods over the commonly used miRNA target prediction methods. We apply eight popular miRNA target prediction methods to three cancer datasets, and compare their performance with that of the ensemble methods which integrate the results from each combination of the individual methods. Validation against experimentally confirmed databases shows that the results of the ensemble methods complement those obtained by the individual methods and that the ensemble methods perform better than the individual methods across different datasets. The ensemble method Pearson+IDA+Lasso, which combines methods of different types, namely a correlation method, a causal inference method, and a regression method, is the best-performing ensemble method in this study. Further analysis shows that this ensemble method can recover targets that could not be found by any of the single methods, and that the discovered targets are more statistically significant and functionally enriched.
The source codes, datasets, miRNA target predictions by all methods, and the ground truth for validation are available in the Supplementary materials. PMID:26114448

  6. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...

  7. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...

  8. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., Breaking of Woven Cloth; Grab Method. (ii) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (iii) Method 5134, Strength of Cloth, Tearing; Tongue Method. (iv) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (v) Method 5762, Mildew Resistance of Textile Materials...

  9. 46 CFR 160.077-5 - Incorporation by reference.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Elongation, Breaking of Woven Cloth; Grab Method. (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method. (3) Method 5134, Strength of Cloth, Tearing; Tongue Method. (4) Method 5804.1, Weathering Resistance of Cloth; Accelerated Weathering Method. (5) Method 5762, Mildew Resistance of Textile Materials...

  10. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Various analytical and numerical methods used to evaluate stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed, including classical exact solutions and many of the approximate methods used in 3-D crack analyses. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element method, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element methods, the stress difference method, the discretization-error method, the alternating method, the finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used. The BIE method requires modeling only the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.

  11. Development and validation of spectrophotometric methods for estimating amisulpride in pharmaceutical preparations.

    PubMed

    Sharma, Sangita; Neog, Madhurjya; Prajapati, Vipul; Patel, Hiren; Dabhi, Dipti

    2010-01-01

    Five simple, sensitive, accurate and rapid visible spectrophotometric methods (A, B, C, D and E) have been developed for estimating Amisulpride in pharmaceutical preparations. These are based on the diazotization of Amisulpride with sodium nitrite and hydrochloric acid, followed by coupling with N-(1-naphthyl)ethylenediamine dihydrochloride (Method A), diphenylamine (Method B), beta-naphthol in an alkaline medium (Method C), resorcinol in an alkaline medium (Method D) and chromotropic acid in an alkaline medium (Method E) to form a colored chromogen. The absorption maxima, lambda(max), are at 523 nm for Method A, 382 and 490 nm for Method B, 527 nm for Method C, 521 nm for Method D and 486 nm for Method E. Beer's law was obeyed in the concentration range of 2.5-12.5 microg mL(-1) in Method A, 5-25 and 10-50 microg mL(-1) in Method B, 4-20 microg mL(-1) in Method C, 2.5-12.5 microg mL(-1) in Method D and 5-15 microg mL(-1) in Method E. The results obtained for the proposed methods are in good agreement with labeled amounts, when marketed pharmaceutical preparations were analyzed.
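
    Beer's law underlies the linear ranges reported above; the calibration arithmetic can be sketched as follows (the slope, intercept, and absorbance values are synthetic, not the paper's data):

```python
import numpy as np

# Beer's law calibration sketch: absorbance A = slope * concentration + intercept,
# fitted over the linear range, then inverted to estimate an unknown sample.
conc = np.array([2.5, 5.0, 7.5, 10.0, 12.5])   # microgram/mL, Method A's range
absb = 0.052 * conc + 0.004                    # synthetic, noiseless absorbances
slope, intercept = np.polyfit(conc, absb, 1)   # linear least-squares fit

unknown_abs = 0.42
est_conc = (unknown_abs - intercept) / slope   # invert the calibration line
print(round(est_conc, 2))
```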

  12. Reconstruction of fluorescence molecular tomography with a cosinoidal level set method.

    PubMed

    Zhang, Xuanxuan; Cao, Xu; Zhu, Shouping

    2017-06-27

    Implicit shape-based reconstruction in fluorescence molecular tomography (FMT) is capable of achieving higher image clarity than image-based reconstruction. However, the implicit shape method suffers from a low convergence speed and performs unstably due to its use of gradient-based optimization methods. Moreover, the implicit shape method requires a priori information about the number of targets. A shape-based reconstruction scheme for FMT with a cosinoidal level set method is proposed in this paper. The Heaviside function in the classical implicit shape method is replaced with a cosine function, and the reconstruction can then be accomplished with the Levenberg-Marquardt method rather than gradient-based methods. As a result, a priori information about the number of targets is no longer required and the choice of step length is avoided. Numerical simulations and phantom experiments were carried out to validate the proposed method. Results of the proposed method show higher contrast-to-noise ratios and Pearson correlations than the implicit shape method and the image-based reconstruction method. Moreover, the number of iterations required by the proposed method is much smaller than for the implicit shape method. The proposed method performs more stably, provides a faster convergence speed than the implicit shape method, and achieves higher image clarity than image-based reconstruction.

  13. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of these methods are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method.
    It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.

  14. Method Engineering: A Service-Oriented Approach

    NASA Astrophysics Data System (ADS)

    Cauvet, Corine

    In the past, a large variety of methods have been published ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering. The meta modeling approach provides means for building methods by instantiation, the component-based approach aims at supporting the development of methods by using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed for processing a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of service dynamic composition support method construction and method adaptation to different development contexts.

  15. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods.

    PubMed

    Ramadan, Nesrin K; El-Ragehy, Nariman A; Ragab, Mona T; El-Zeany, Badr A

    2015-02-25

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method ((1)DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method ((3)D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Simultaneous determination of a binary mixture of pantoprazole sodium and itopride hydrochloride by four spectrophotometric methods

    NASA Astrophysics Data System (ADS)

    Ramadan, Nesrin K.; El-Ragehy, Nariman A.; Ragab, Mona T.; El-Zeany, Badr A.

    2015-02-01

    Four simple, sensitive, accurate and precise spectrophotometric methods were developed for the simultaneous determination of a binary mixture containing Pantoprazole Sodium Sesquihydrate (PAN) and Itopride Hydrochloride (ITH). Method (A) is the derivative ratio method (1DD), method (B) is the mean centering of ratio spectra method (MCR), method (C) is the ratio difference method (RD) and method (D) is the isoabsorptive point coupled with third derivative method (3D). Linear correlation was obtained in range 8-44 μg/mL for PAN by the four proposed methods, 8-40 μg/mL for ITH by methods A, B and C and 10-40 μg/mL for ITH by method D. The suggested methods were validated according to ICH guidelines. The obtained results were statistically compared with those obtained by the official and a reported method for PAN and ITH, respectively, showing no significant difference with respect to accuracy and precision.

  17. Evaluating the efficiency of spectral resolution of univariate methods manipulating ratio spectra and comparing to multivariate methods: An application to ternary mixture in common cold preparation

    NASA Astrophysics Data System (ADS)

    Moustafa, Azza Aziz; Salem, Hesham; Hegazy, Maha; Ali, Omnia

    2015-02-01

    Simple, accurate, and selective methods have been developed and validated for simultaneous determination of a ternary mixture of Chlorpheniramine maleate (CPM), Pseudoephedrine HCl (PSE) and Ibuprofen (IBF) in tablet dosage form. Four univariate methods manipulating ratio spectra were applied: method A is the double divisor-ratio difference spectrophotometric method (DD-RD); method B is the double divisor-derivative ratio spectrophotometric method (DD-DR); method C is the derivative ratio spectrum-zero crossing method (DRZC); and method D is mean centering of ratio spectra (MCR). Two multivariate methods were also developed and validated: methods E and F are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods have the advantage of simultaneous determination of the mentioned drugs without prior separation steps. They were successfully applied to laboratory-prepared mixtures and to a commercial pharmaceutical preparation without any interference from additives. The proposed methods were validated according to the ICH guidelines. The obtained results were statistically compared with the official methods, where no significant difference was observed regarding both accuracy and precision.

  18. Methods for elimination of dampness in building walls

    NASA Astrophysics Data System (ADS)

    Campian, Cristina; Pop, Maria

    2016-06-01

    Dampness elimination in building walls is a very sensitive problem with high costs. Many methods are used, such as the chemical method, the electro-osmotic method, and physical methods. The RECON method is a representative and sustainable method in Romania. Italy has the most radical method of all: the technology consists in cutting the brick walls, inserting special plastic sheeting, and injecting a pre-mixed anti-shrinkage mortar.

  19. A comparison of several methods of solving nonlinear regression groundwater flow problems

    USGS Publications Warehouse

    Cooley, Richard L.

    1985-01-01

    Computational efficiency and computer memory requirements for four methods of minimizing functions were compared for four test nonlinear-regression steady state groundwater flow problems. The fastest methods were the Marquardt and quasi-linearization methods, which required almost identical computer times and numbers of iterations; the next fastest was the quasi-Newton method, and last was the Fletcher-Reeves method, which did not converge in 100 iterations for two of the problems. The fastest method per iteration was the Fletcher-Reeves method, and this was followed closely by the quasi-Newton method. The Marquardt and quasi-linearization methods were slower. For all four methods the speed per iteration was directly related to the number of parameters in the model. However, this effect was much more pronounced for the Marquardt and quasi-linearization methods than for the other two. Hence the quasi-Newton (and perhaps Fletcher-Reeves) method might be more efficient than either the Marquardt or quasi-linearization methods if the number of parameters in a particular model were large, although this remains to be proven. The Marquardt method required somewhat less central memory than the quasi-linearization method for three of the four problems. For all four problems the quasi-Newton method required roughly two thirds to three quarters of the memory required by the Marquardt method, and the Fletcher-Reeves method required slightly less memory than the quasi-Newton method. Memory requirements were not excessive for any of the four methods.
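
    The Marquardt algorithm that performed well in this comparison can be sketched as a damped Gauss-Newton iteration; this generic version with a made-up exponential model is an illustration, not the study's groundwater code:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-3, iters=100):
    """Minimal Marquardt-style damped Gauss-Newton loop for
    nonlinear least squares: solve (J^T J + lam I) step = J^T r,
    accept the step if the sum of squared residuals decreases."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        p_new = p - step
        if residual(p_new) @ residual(p_new) < r @ r:
            p, lam = p_new, lam * 0.5   # accept: relax damping toward Gauss-Newton
        else:
            lam *= 2.0                  # reject: increase damping toward gradient step
    return p

# toy model y = a * exp(b * t), synthetic noiseless data with a=2, b=-1.5
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, p0=[1.0, -0.5])
print(np.round(p, 3))
```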

  20. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
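
    The DFP update formula named above can be sketched as follows; this is plain DFP quasi-Newton with a backtracking line search on a toy quadratic, not the paper's hybrid DFP-CG search direction:

```python
import numpy as np

def dfp(f, grad, x0, iters=50):
    """Quasi-Newton minimization with the DFP inverse-Hessian update:
    H <- H + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)               # inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        d = -H @ g                   # quasi-Newton search direction
        t = 1.0                      # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if s @ y > 1e-12:            # curvature condition keeps H positive definite
            Hy = H @ y
            H = H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
        if np.linalg.norm(g) < 1e-8:
            break
    return x

# convex quadratic f(x) = 0.5 x^T A x - b^T x with minimum at [1, 2]
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = A @ np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = dfp(f, grad, [0.0, 0.0])
print(np.round(x_star, 6))
```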

  1. Generalization of the Engineering Method to the UNIVERSAL METHOD.

    ERIC Educational Resources Information Center

    Koen, Billy Vaughn

    1987-01-01

    Proposes that there is a universal method for all realms of knowledge. Reviews Descartes's definition of the universal method, the engineering definition, and the philosophical basis for the universal method. Contends that the engineering method best represents the universal method. (ML)

  2. Colloidal Electrolytes and the Critical Micelle Concentration

    ERIC Educational Resources Information Center

    Knowlton, L. G.

    1970-01-01

    Describes methods for determining the Critical Micelle Concentration of Colloidal Electrolytes; methods described are: (1) methods based on Colligative Properties, (2) methods based on the Electrical Conductivity of Colloidal Electrolytic Solutions, (3) Dye Method, (4) Dye Solubilization Method, and (5) Surface Tension Method. (BR)

  3. Theoretical analysis of three methods for calculating thermal insulation of clothing from thermal manikin.

    PubMed

    Huang, Jianhua

    2012-07-01

    There are three methods for calculating thermal insulation of clothing measured with a thermal manikin, i.e. the global method, the serial method, and the parallel method. Under the condition of homogeneous clothing insulation, these three methods yield the same insulation values. If the local heat flux is uniform over the manikin body, the global and serial methods provide the same insulation value. In most cases, the serial method gives a higher insulation value than the global method. There is a possibility that the insulation value from the serial method is lower than the value from the global method. The serial method always gives higher insulation value than the parallel method. The insulation value from the parallel method is higher or lower than the value from the global method, depending on the relationship between the heat loss distribution and the surface temperatures. Under the circumstance of uniform surface temperature distribution over the manikin body, the global and parallel methods give the same insulation value. If the constant surface temperature mode is used in the manikin test, the parallel method can be used to calculate the thermal insulation of clothing. If the constant heat flux mode is used in the manikin test, the serial method can be used to calculate the thermal insulation of clothing. The global method should be used for calculating thermal insulation of clothing for all manikin control modes, especially for thermal comfort regulation mode. The global method should be chosen by clothing manufacturers for labelling their products. The serial and parallel methods provide more information with respect to the different parts of clothing.
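
    The three calculation methods can be sketched numerically. The area-weighted formulation below is one common version (an assumption; exact equations vary between manikin standards), and it reproduces the relations stated above: all three methods agree for uniform data, and in a typical non-uniform case serial exceeds global, which exceeds parallel:

```python
import numpy as np

def insulation(areas, t_surf, heat_flux, t_air):
    """Clothing insulation from segmental manikin data by the global,
    serial, and parallel methods (areas in m^2, temperatures in degC,
    heat flux in W/m^2; insulation returned in m^2.K/W)."""
    a = np.asarray(areas, float)
    ts = np.asarray(t_surf, float)
    h = np.asarray(heat_flux, float)
    f = a / a.sum()                            # area fractions
    i_local = (ts - t_air) / h                 # local insulation per segment
    i_global = (f @ ts - t_air) / (f @ h)      # average temperature and flux first
    i_serial = f @ i_local                     # area-weighted local insulations
    i_parallel = 1.0 / (f @ (1.0 / i_local))   # harmonic-style combination
    return i_global, i_serial, i_parallel

# uniform surface temperature and flux: all three methods agree
g, s, p = insulation([0.5, 0.5], [33.0, 33.0], [50.0, 50.0], t_air=20.0)
print(np.isclose(g, s) and np.isclose(s, p))

# non-uniform case: serial >= global >= parallel here
g2, s2, p2 = insulation([0.5, 0.5], [34.0, 32.0], [40.0, 60.0], t_air=20.0)
print(s2 >= g2 >= p2)
```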

  4. Comparison of five methods for the estimation of methane production from vented in vitro systems.

    PubMed

    Alvarez Hess, P S; Eckard, R J; Jacobs, J L; Hannah, M C; Moate, P J

    2018-05-23

    There are several methods for estimating methane production (MP) from feedstuffs in vented in vitro systems. One method (A; the "gold standard") measures methane proportions in the incubation bottle's head space (HS) and in the vented gas collected in gas bags. Four other methods (B, C, D and E) measure the methane proportion in a single gas sample from the HS. Method B assumes the same methane proportion in the vented gas as in the HS, method C assumes a constant methane to carbon dioxide ratio, method D was developed from empirical data, and method E assumes constant individual venting volumes. This study aimed to compare the MP predictions of these methods with those of the gold standard method under different incubation scenarios, in order to validate the methods by their concordance with the gold standard. Methods C, D and E had greater concordance (0.85, 0.88 and 0.81), lower root mean square error (RMSE) (0.80, 0.72 and 0.85) and lower mean bias (0.20, 0.35, -0.35) with respect to the gold standard than did method B (concordance 0.67, RMSE 1.49 and mean bias 1.26). Methods D and E were simpler to perform than method C, and method D was slightly more accurate than method E. Based on precision, accuracy and simplicity of implementation, it is recommended that, when method A cannot be used, methods D and E be preferred for estimating MP from vented in vitro systems. This article is protected by copyright. All rights reserved.
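
    The three agreement measures used in this comparison can be computed as below; the concordance formula is Lin's standard concordance correlation coefficient, and the numbers are made up for illustration:

```python
import numpy as np

def agreement_stats(pred, gold):
    """Concordance correlation coefficient (Lin's CCC), RMSE, and mean bias,
    the agreement measures used to compare each method with the gold standard."""
    pred = np.asarray(pred, float)
    gold = np.asarray(gold, float)
    mp, mg = pred.mean(), gold.mean()
    cov = ((pred - mp) * (gold - mg)).mean()
    ccc = 2 * cov / (pred.var() + gold.var() + (mp - mg) ** 2)
    rmse = np.sqrt(((pred - gold) ** 2).mean())
    bias = (pred - gold).mean()
    return ccc, rmse, bias

gold = np.array([10.0, 12.0, 15.0, 18.0, 20.0])   # hypothetical MP values
method_b = gold + 1.3                             # systematic over-estimate
ccc, rmse, bias = agreement_stats(method_b, gold)
print(round(bias, 2), round(rmse, 2))
```

A pure additive offset shows up fully in the bias and RMSE while still lowering the CCC below 1, which is why concordance, RMSE, and bias together characterize a method's agreement better than any one of them alone.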

  5. Kennard-Stone combined with least square support vector machine method for noncontact discriminating human blood species

    NASA Astrophysics Data System (ADS)

    Zhang, Linna; Li, Gang; Sun, Meixiu; Li, Hongxiao; Wang, Zhennan; Li, Yingxin; Lin, Ling

    2017-11-01

    Identifying whole blood as either human or nonhuman is an important responsibility for import-export ports and inspection and quarantine departments. Analytical methods and DNA testing methods are usually destructive. Previous studies demonstrated that the visible diffuse reflectance spectroscopy method can achieve noncontact discrimination of human and nonhuman blood. An appropriate method for calibration set selection is very important for a robust quantitative model. In this paper, the Random Selection (RS) method and the Kennard-Stone (KS) method were applied to select samples for the calibration set. Moreover, a proper chemometric method can greatly improve the performance of a classification or quantification model. The Partial Least Squares Discriminant Analysis (PLSDA) method is commonly used for identifying blood species with spectroscopic methods. The Least Squares Support Vector Machine (LSSVM) has proved well suited to discriminant analysis. In this research, both the PLSDA and LSSVM methods were used for human blood discrimination. Compared with the PLSDA results, the LSSVM method enhanced the performance of the identification models. The overall results showed that the LSSVM method was more feasible for distinguishing human from animal blood, and demonstrated that it is a reliable, robust, and more effective and accurate method for human blood identification.
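The Kennard-Stone selection the abstract mentions is a well-defined algorithm: seed the calibration set with the two most distant samples, then repeatedly add the sample whose nearest already-selected neighbour is farthest away. A minimal sketch (the data and function name are hypothetical; the paper applies this to diffuse reflectance spectra):

```python
def kennard_stone(X, k):
    """Pick k row indices from X (a list of feature vectors) that spread over the data space."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    n = len(X)
    # Seed with the two mutually most distant samples.
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: dist2(X[p[0]], X[p[1]]))
    selected = [i0, j0]
    while len(selected) < k:
        # Add the sample whose nearest selected neighbour is farthest away (max-min criterion).
        rest = [i for i in range(n) if i not in selected]
        nxt = max(rest, key=lambda i: min(dist2(X[i], X[j]) for j in selected))
        selected.append(nxt)
    return selected
```

Unlike random selection, this deterministic max-min rule guarantees the calibration set covers the extremes of the measured spectra, which is why it is often preferred for building robust calibration models.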

  6. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We proposed a combined method that merges existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and call this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from the different methods differed. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solves the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method combining four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.

  7. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  8. Study of comparison between Ultra-high Frequency (UHF) method and ultrasonic method on PD detection for GIS

    NASA Astrophysics Data System (ADS)

    Li, Yanran; Chen, Duo; Li, Li; Zhang, Jiwei; Li, Guang; Liu, Hongxia

    2017-11-01

    GIS (gas insulated switchgear) is an important type of equipment in power systems. Partial discharge (PD) detection plays an important role in assessing the insulation performance of GIS. The UHF method and the ultrasonic method are frequently used for PD detection in GIS; however, few studies have compared the two methods. From the viewpoint of safety, it is necessary to investigate both the UHF method and the ultrasonic method for partial discharge in GIS. This paper presents a study aimed at clarifying the effectiveness of the UHF method and the ultrasonic method for partial discharge caused by free metal particles in GIS. Partial discharge tests were performed in a laboratory-simulated environment. The results show the anti-interference capability of signal detection and the accuracy of fault localization for the UHF and ultrasonic methods. A new method combining the UHF and ultrasonic methods for PD detection in GIS is proposed in order to greatly enhance the anti-interference capability of signal detection and the accuracy of localization.

  9. Comparison of four extraction/methylation analytical methods to measure fatty acid composition by gas chromatography in meat.

    PubMed

    Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S

    2008-05-09

    Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (the in situ or one-step method, the saponification method, the classic method, and a combination of classic extraction with saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but its precision, repeatability and reproducibility were not acceptable. Therefore, the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However, the classic method would be the method of choice for the determination of the different lipid classes.

  10. Birth Control Methods

    MedlinePlus

    Birth control (contraception) is any method, medicine, or ...

  11. 26 CFR 1.381(c)(5)-1 - Inventories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...

  12. 26 CFR 1.381(c)(5)-1 - Inventories.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...

  13. 46 CFR 160.076-11 - Incorporation by reference.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... following methods: (1) Method 5100, Strength and Elongation, Breaking of Woven Cloth; Grab Method, 160.076-25; (2) Method 5132, Strength of Cloth, Tearing; Falling-Pendulum Method, 160.076-25; (3) Method 5134, Strength of Cloth, Tearing; Tongue Method, 160.076-25. Underwriters Laboratories (UL) Underwriters...

  14. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    PubMed Central

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). 
Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. PMID:28249833

  15. Interior-Point Methods for Linear Programming: A Review

    ERIC Educational Resources Information Center

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  16. The Relation of Finite Element and Finite Difference Methods

    NASA Technical Reports Server (NTRS)

    Vinokur, M.

    1976-01-01

    Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.

  17. [Baseflow separation methods in hydrological process research: a review].

    PubMed

    Xu, Lei-Lei; Liu, Jing-Lin; Jin, Chang-Jie; Wang, An-Zhi; Guan, De-Xin; Wu, Jia-Bing; Yuan, Feng-Hui

    2011-11-01

    Baseflow separation is regarded as one of the most important and difficult issues in hydrology and ecohydrology, but it lacks unified standards in both concepts and methods. This paper introduced the theory of baseflow separation based on the definitions of the baseflow components, and reviewed the development of the different baseflow separation methods. Among them, the graphical separation method is simple and applicable but arbitrary; the balance method accords with hydrological mechanisms but is difficult to apply; whereas the time-series separation method and the isotopic method can overcome the subjective and arbitrary defects of graphical separation and obtain the baseflow process quickly and efficiently. In recent years, hydrological modeling, digital filtering, and isotopic methods have been the main methods used for baseflow separation.
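As an illustration of the digital filtering approach the review mentions, here is one common variant of the widely used Lyne-Hollick recursive filter, constrained so that baseflow stays within [0, q]. The filter parameter value 0.925 is a commonly cited choice, an assumption here rather than something from the abstract:

```python
def lyne_hollick(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick filter; returns the baseflow series."""
    baseflow = [q[0]]   # starting assumption: all initial flow is baseflow
    qf_prev = 0.0       # quickflow (high-frequency, filtered) component
    for t in range(1, len(q)):
        # The filter passes the rapid changes in streamflow into the quickflow term.
        qf = alpha * qf_prev + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
        qf = min(max(qf, 0.0), q[t])   # constrain quickflow to [0, q]
        baseflow.append(q[t] - qf)
        qf_prev = qf
    return baseflow
```

In practice the filter is often run in multiple forward and backward passes; a single pass is shown here only to make the recursion explicit.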

  18. Semi top-down method combined with earth-bank, an effective method for basement construction.

    NASA Astrophysics Data System (ADS)

    Tuan, B. Q.; Tam, Ng M.

    2018-04-01

    Choosing an appropriate method of deep excavation plays a decisive role not only in the technical success but also in the economics of a construction project. At present, two key methods are mainly used: the "bottom-up" and "top-down" construction methods. This paper presents another method of construction, "semi top-down combined with earth-bank", in order to take advantage of and limit the weaknesses of the above methods. The bottom-up method was improved by using an earth-bank to stabilize the retaining walls instead of bracing steel struts. The top-down method was improved by using the open-cut method for half of the earthwork quantities.

  19. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method for a chain of rigid bodies is presented, interconnected in spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that accuracy for the chain-method is higher than the Veldpaus-method and similar to the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. 
With only three segments, the average number of required iterations with the chain-method is 3.0+/-0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method. The RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method compared with 59% for the chain-method.

  20. Novel two wavelength spectrophotometric methods for simultaneous determination of binary mixtures with severely overlapping spectra

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    2015-02-01

    This work presents the application of different spectrophotometric techniques based on two wavelengths for the determination of severely overlapped spectral components in a binary mixture without prior separation. Four novel spectrophotometric methods were developed, namely: the induced dual wavelength method (IDW), the dual wavelength resolution technique (DWRT), the advanced amplitude modulation method (AAM) and the induced amplitude modulation method (IAM). The results of the novel methods were compared with those of three well-established methods: the dual wavelength method (DW), Vierordt's method (VD) and the bivariate method (BV). The developed methods were applied to the analysis of the binary mixture of hydrocortisone acetate (HCA) and fusidic acid (FSA) formulated as a topical cream, together with the determination of the methyl paraben and propyl paraben present as preservatives. The specificity of the novel methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, and no significant difference was observed. No difference was observed when the results were compared with the reported HPLC method, which shows that the developed methods could be alternatives to HPLC techniques in quality control laboratories.
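The classical dual wavelength principle that these methods build on can be shown numerically: pick two wavelengths at which the interfering component Y absorbs equally, so the absorbance difference of the mixture depends only on component X (Beer-Lambert additivity, unit path length). All spectra and concentrations below are synthetic, not taken from the paper:

```python
eps_X = {240: 0.90, 260: 0.30}   # molar absorptivities of X at the two wavelengths
eps_Y = {240: 0.55, 260: 0.55}   # equal for Y by construction: the DW condition

c_X, c_Y = 0.40, 0.70            # "unknown" concentrations in the mixture

# Mixture absorbance at each wavelength (absorbances are additive).
A = {wl: c_X * eps_X[wl] + c_Y * eps_Y[wl] for wl in (240, 260)}

# The Y contribution cancels in the difference, leaving only component X.
c_X_estimated = (A[240] - A[260]) / (eps_X[240] - eps_X[260])
```

The cancellation of the interferent in the absorbance difference is exactly what fails when spectra overlap severely at every wavelength pair, which motivates the induced and amplitude-modulation variants the paper develops.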

  1. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

    In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is an approximate method, like the finite element method. This paper proposed a new way to determine the minimum slope safety factor: determining the safety factor with an analytical solution and searching for the critical slip surface with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random picks to implement mutation. A computer program performing the automatic search was developed for the Genetic-Traversal Random Method. Comparison with other methods, such as the slope/w software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods; however, the obtained minimum safety factor is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
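For context, the Fellenius (ordinary) method of slices used as the baseline above sums resisting and driving forces over the slices of an assumed circular slip surface. A minimal sketch, with assumed per-slice inputs (`W` = slice weight, `alpha` = base inclination in radians, `l` = base length) and homogeneous soil parameters (`c` = cohesion, `phi` = friction angle in radians):

```python
import math

def fellenius_fs(W, alpha, l, c, phi):
    """Safety factor = sum of resisting forces / sum of driving forces over all slices."""
    resisting = sum(c * li + Wi * math.cos(ai) * math.tan(phi)
                    for Wi, ai, li in zip(W, alpha, l))
    driving = sum(Wi * math.sin(ai) for Wi, ai in zip(W, alpha))
    return resisting / driving
```

Because interslice forces are neglected, this is only an approximate equilibrium, which is the limitation the paper's analytical solution is meant to avoid.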

  2. Enumeration of total aerobic microorganisms in foods by SimPlate Total Plate Count-Color Indicator methods and conventional culture methods: collaborative study.

    PubMed

    Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E

    2003-01-01

    The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.

  3. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    The 3D Poisson equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods for different grid sizes and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using the parallel Jacobi (PJ) method is examined relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge; the PJ method reduces the computational time to some extent for large grid sizes.
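The two sequential iterations can be sketched on a small 1D Poisson problem -u'' = f with zero boundary values (the 1D setting and names are our simplification; the paper solves the 3D case). Jacobi updates from the previous sweep only, which is what makes it embarrassingly parallel, while Gauss-Seidel reuses values updated within the current sweep and therefore converges in fewer iterations:

```python
def jacobi(f, h, iters):
    """Jacobi sweeps for -u'' = f on a uniform grid with u = 0 at both ends."""
    n = len(f)
    u = [0.0] * n
    for _ in range(iters):
        new = u[:]                       # all updates read the old iterate
        for i in range(1, n - 1):
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def gauss_seidel(f, h, iters):
    """Gauss-Seidel sweeps: same stencil, but in-place updates."""
    n = len(f)
    u = [0.0] * n
    for _ in range(iters):
        for i in range(1, n - 1):
            # Uses the already-updated left neighbour, hence the faster convergence.
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u
```

The trade-off the paper measures follows directly from the structure above: the Jacobi sweep has no loop-carried dependence and can be distributed across workers, whereas the Gauss-Seidel sweep is inherently sequential.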

  4. Completed Suicide with Violent and Non-Violent Methods in Rural Shandong, China: A Psychological Autopsy Study

    PubMed Central

    Sun, Shi-Hua; Jia, Cun-Xian

    2014-01-01

    Background This study aims to describe the specific characteristics of suicides completed by violent and by non-violent methods in a rural Chinese population, and to explore the factors related to the choice of method. Methods Data came from an investigation of 199 completed suicide cases and their paired controls in rural areas of three counties in Shandong, China, conducted by interviewing one informant for each subject using the Psychological Autopsy (PA) method. Results There were 78 (39.2%) suicides by violent methods and 121 (60.8%) suicides by non-violent methods. Ingesting pesticides, a non-violent method, was the most common suicide method (103, 51.8%). Hanging (73 cases, 36.7%) and drowning (5 cases, 2.5%) were the only violent methods observed. Storage of pesticides at home and a higher suicide intent score were significantly associated with the choice of violent methods. Risk factors related to suicide death included negative life events and hopelessness. Conclusions Suicide by violent methods has different associated factors than suicide by non-violent methods. Suicide methods should be considered in suicide prevention and intervention strategies. PMID:25111835

  5. A review of propeller noise prediction methodology: 1919-1994

    NASA Technical Reports Server (NTRS)

    Metzger, F. Bruce

    1995-01-01

    This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related, not to the methods themselves, but the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods to the data base; and (7) make the methods widely available and provide training in their use.

  6. A different approach to estimate nonlinear regression model using numerical methods

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns computational methods (the Gauss-Newton method and gradient algorithm methods: the Newton-Raphson method, the steepest descent or steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing) based on numerical analysis for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article instead takes an analytical approach to the gradient algorithm methods. The paper describes a new iterative technique, a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
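As a generic illustration of the Gauss-Newton technique discussed (not the paper's specific variant), here is the iteration for the model y ≈ a·exp(b·x): linearize the residuals with their Jacobian and solve the normal equations for the parameter step. The exponential model and all names are our assumption:

```python
import math

def gauss_newton_exp(x, y, a, b, iters=50):
    """Gauss-Newton fit of y = a*exp(b*x); returns the estimated (a, b)."""
    for _ in range(iters):
        r = [yi - a * math.exp(b * xi) for xi, yi in zip(x, y)]   # residuals
        # Jacobian of the residuals w.r.t. (a, b): columns -exp(bx) and -a*x*exp(bx).
        J = [(-math.exp(b * xi), -a * xi * math.exp(b * xi)) for xi in x]
        # Normal equations (J^T J) d = -J^T r, solved directly for the 2x2 case.
        g11 = sum(j0 * j0 for j0, _ in J)
        g12 = sum(j0 * j1 for j0, j1 in J)
        g22 = sum(j1 * j1 for _, j1 in J)
        b1 = -sum(j0 * ri for (j0, _), ri in zip(J, r))
        b2 = -sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (g22 * b1 - g12 * b2) / det
        db = (g11 * b2 - g12 * b1) / det
        a, b = a + da, b + db
    return a, b
```

By dropping the second-derivative terms of the full Hessian, Gauss-Newton needs only first derivatives, which is the key contrast with the Newton-Raphson method listed alongside it.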

  7. Sorting protein decoys by machine-learning-to-rank

    PubMed Central

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-01-01

    Much progress has been made in protein structure prediction during the last few decades. As predicted models span a broad accuracy spectrum, accurate quality estimation has become one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; they can be roughly divided into three categories: single-model methods, clustering-based methods and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method further enhances the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967

  8. Sorting protein decoys by machine-learning-to-rank.

    PubMed

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-08-17

    Much progress has been made in protein structure prediction during the last few decades. As predicted models span a broad accuracy spectrum, accurate quality estimation has become one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue; they can be roughly divided into three categories: single-model methods, clustering-based methods and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on the learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method further enhances the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset.

  9. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  10. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single-frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method that is little known or discussed in the literature, despite exhibiting great potential, is also presented. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more compliant with the ISO 10534-2 standard. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges, the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.
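
    For a homogeneous absorber layer, the transfer-matrix relationship underlying the four-microphone method mentioned above can be inverted in closed form. The sketch below is illustrative only (it is not the paper's alternative method, and the material values are invented): it builds the layer transfer matrix from an assumed complex wave number and characteristic impedance, then recovers both via k = arccos(T11)/d and Zc = sqrt(T12/T21).

```python
import numpy as np

def layer_transfer_matrix(k, Zc, d):
    """Transfer matrix of a homogeneous absorber layer of thickness d
    with complex wave number k and characteristic impedance Zc."""
    kd = k * d
    return np.array([[np.cos(kd), 1j * Zc * np.sin(kd)],
                     [1j * np.sin(kd) / Zc, np.cos(kd)]])

def wavenumber_impedance(T, d):
    """Invert the layer transfer matrix, four-microphone-method style:
    k = arccos(T11) / d,  Zc = sqrt(T12 / T21)."""
    k = np.arccos(T[0, 0]) / d          # valid while Re(k d) is in (0, pi)
    Zc = np.sqrt(T[0, 1] / T[1, 0])     # principal root, Re(Zc) > 0
    return k, Zc

d = 0.05                   # layer thickness in m (illustrative)
k_true = 30.0 - 2.0j       # complex wave number, rad/m (illustrative)
Zc_true = 800.0 + 120.0j   # characteristic impedance, rayl (illustrative)
T = layer_transfer_matrix(k_true, Zc_true, d)
k, Zc = wavenumber_impedance(T, d)
```

    The round trip recovers the assumed values exactly because the principal branch of the complex arccos inverts the cosine on the strip 0 < Re(kd) < π.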

  11. Methods for environmental change; an exploratory study.

    PubMed

    Kok, Gerjo; Gottlieb, Nell H; Panne, Robert; Smerecnik, Chris

    2012-11-28

    While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially about how these are composed of methods for individual change ('Bundling') and how within one environmental level, organizations, methods differ when directed at the management ('At') or applied by the management ('From'). The first part of this online survey dealt with examining the 'bundling' of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed 'at' an organization (for instance, by a health promoter) versus 'from' within an organization itself. All of the 20 respondents are experts in the field of health promotion. Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods.
Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method.

  12. A comparison theorem for the SOR iterative method

    NASA Astrophysics Data System (ADS)

    Sun, Li-Ying

    2005-09-01

    In 1997, Kohno et al. reported numerically that the improving modified Gauss-Seidel method, referred to as the IMGS method, is superior to the SOR iterative method. In this paper, we prove that the spectral radius of the IMGS method is smaller than those of the SOR and Gauss-Seidel methods if the relaxation parameter ω ∈ (0, 1]. As a result, we prove theoretically that this method succeeds in improving the convergence of some classical iterative methods. Some recent results are thereby improved.
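
    For reference, a minimal sketch of the classical SOR iteration discussed in this record follows, using a small diagonally dominant test system (the matrix and right-hand side are illustrative). With ω = 1 the update reduces to the Gauss-Seidel method.

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=500):
    """Successive over-relaxation: sweep through the unknowns, blending
    each Gauss-Seidel update with the previous value via omega."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Illustrative diagonally dominant system
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b, omega=1.0)   # omega = 1 is plain Gauss-Seidel
```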

  13. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches cover a large design space with few variables. Commonly used parametric methods are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the free-form deformation method. Methods extended from two-dimensional parametric methods have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
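
    As an illustration of one of the two-dimensional approaches listed above, the Class/Shape function transformation (CST) can be sketched in a few lines. The Bernstein weights below are invented for illustration; a real airfoil would use separate upper and lower surfaces plus a trailing-edge thickness term. The class-function exponents 0.5 and 1.0 give a round nose and a sharp trailing edge.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0):
    """CST surface: class function x**n1 * (1-x)**n2 multiplied by a
    Bernstein-polynomial shape function with the given weights."""
    x = np.asarray(x, dtype=float)
    n = len(weights) - 1
    class_fn = x ** n1 * (1.0 - x) ** n2
    shape_fn = sum(w * comb(n, i) * x ** i * (1.0 - x) ** (n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn

x = np.linspace(0.0, 1.0, 101)                    # chordwise stations
y = cst_surface(x, weights=[0.17, 0.16, 0.15])    # illustrative weights
```

    Three weights already give a smooth airfoil-like upper surface, which is the point of the method: a large shape space from very few design variables.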

  14. A Review and Comparison of Methods for Recreating Individual Patient Data from Published Kaplan-Meier Survival Curves for Economic Evaluations: A Simulation Study

    PubMed Central

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
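
    The traditional least-squares approach reviewed above can be sketched for a Weibull fit: linearize the survival function, ln(−ln S) = k·ln t − k·ln λ, and regress the digitized curve points. The data below are synthetic, generated from a known Weibull, not taken from the study.

```python
import numpy as np
from math import gamma

def fit_weibull_ls(t, S):
    """Least-squares recreation of a Weibull survival model from points
    digitised off a published Kaplan-Meier curve, via the linearised
    form ln(-ln S) = k*ln(t) - k*ln(lam)."""
    t, S = np.asarray(t, float), np.asarray(S, float)
    y = np.log(-np.log(S))
    k, c = np.polyfit(np.log(t), y, 1)   # slope k, intercept -k*ln(lam)
    lam = np.exp(-c / k)
    mean_survival = lam * gamma(1.0 + 1.0 / k)
    return k, lam, mean_survival

# Synthetic "digitised" points from a Weibull with shape 1.5, scale 10
t = np.array([2.0, 4.0, 6.0, 8.0, 12.0])
S = np.exp(-(t / 10.0) ** 1.5)
k, lam, mu = fit_weibull_ls(t, S)
```

    On exact Weibull data the regression recovers the parameters exactly; bias appears, as the simulation study above discusses, when the digitized points carry censoring and reading error.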

  15. Comparisons of Lagrangian and Eulerian PDF methods in simulations of non-premixed turbulent jet flames with moderate-to-strong turbulence-chemistry interactions

    NASA Astrophysics Data System (ADS)

    Jaishree, J.; Haworth, D. C.

    2012-06-01

    Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. 
In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.

  16. AN EULERIAN-LAGRANGIAN LOCALIZED ADJOINT METHOD FOR THE ADVECTION-DIFFUSION EQUATION

    EPA Science Inventory

    Many numerical methods use characteristic analysis to accommodate the advective component of transport. Such characteristic methods include Eulerian-Lagrangian methods (ELM), modified method of characteristics (MMOC), and operator splitting methods. A generalization of characteri...

  17. Capital investment analysis: three methods.

    PubMed

    Gapenski, L C

    1993-08-01

    Three cash flow/discount rate methods can be used when conducting capital budgeting financial analyses: the net operating cash flow method, the net cash flow to investors method, and the net cash flow to equity holders method. The three methods differ in how the financing mix and the benefits of debt financing are incorporated. This article explains the three methods, demonstrates that they are essentially equivalent, and recommends which method to use under specific circumstances.

  18. Effective description of a 3D object for photon transportation in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Suganuma, R.; Ogawa, K.

    2000-06-01

    Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that for conventional object description methods, a voxel-based (VB) method and an octree method, in simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method was about 50% of that with the VB method and about 70% of that with the octree method for a high-resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with that of the VB and octree methods.

  19. Region of influence regression for estimating the 50-year flood at ungaged sites

    USGS Publications Warehouse

    Tasker, Gary D.; Hodge, S.A.; Barks, C.S.

    1996-01-01

    Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.
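
    The region-of-influence idea of method E can be sketched as follows, assuming (purely for illustration) a single watershed characteristic, drainage area, and a synthetic power-law flood relation; the real method uses several characteristics and generalized-least-squares regression.

```python
import numpy as np

def roi_estimate(target, sites, floods, n_sites=3):
    """Region-of-influence sketch: pick the gaged sites most similar to
    the ungaged site in watershed-characteristic space, then fit a
    log-linear regression on that unique subregion only."""
    d = np.linalg.norm(sites - target, axis=1)   # similarity distance
    idx = np.argsort(d)[:n_sites]                # the region of influence
    X = np.column_stack([np.ones(n_sites), np.log(sites[idx, 0])])
    coef, *_ = np.linalg.lstsq(X, np.log(floods[idx]), rcond=None)
    return float(np.exp(coef[0] + coef[1] * np.log(target[0])))

# Synthetic gaged sites whose 50-year flood follows an exact power law
sites = np.array([[10.0], [20.0], [40.0], [80.0]])   # drainage areas
floods = 2.0 * sites[:, 0] ** 0.8
target = np.array([30.0])                            # ungaged site
q50 = roi_estimate(target, sites, floods)
```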

  20. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.
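
    The quasi-Newton idea referred to above, keeping an approximate Jacobian that is updated from successive residuals rather than re-formed and re-factorized at every step, can be sketched with a generic Broyden iteration. This illustrates the general technique only, not the authors' preconditioned domain-decomposition formulation; the test system is an invented mildly nonlinear one, not an elastic-plastic model.

```python
import numpy as np

def broyden_solve(F, x0, tol=1e-10, max_iter=100):
    """Generic quasi-Newton (Broyden) iteration: the Jacobian is only
    approximated, and the approximation is corrected by rank-one
    updates built from successive residuals."""
    x = np.asarray(x0, dtype=float)
    J = np.eye(len(x))                    # initial Jacobian guess
    f = F(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -f)
        x_new = x + dx
        f_new = F(x_new)
        if np.linalg.norm(f_new) < tol:
            return x_new
        # Broyden update: J <- J + (df - J dx) dx^T / (dx . dx)
        J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)
        x, f = x_new, f_new
    return x

# Illustrative mildly nonlinear system
F = lambda v: np.array([v[0] + 0.1 * v[0] ** 3 - 1.0,
                        v[1] + 0.1 * v[1] ** 3 - 2.0])
x = broyden_solve(F, np.zeros(2))
```

    The appeal for large problems is the same as in the paper: no tangent matrix is re-assembled or re-factorized inside the nonlinear loop, avoiding the double-loop cost of a full Newton-Raphson scheme.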

  1. Designing Class Methods from Dataflow Diagrams

    NASA Astrophysics Data System (ADS)

    Shoval, Peretz; Kabeli-Shani, Judith

    A method for designing the class methods of an information system is described. The method is part of FOOM - Functional and Object-Oriented Methodology. In the analysis phase of FOOM, two models defining the users' requirements are created: a conceptual data model - an initial class diagram; and a functional model - hierarchical OO-DFDs (object-oriented dataflow diagrams). Based on these models, a well-defined process of methods design is applied. First, the OO-DFDs are converted into transactions, i.e., system processes that support user tasks. The components and the process logic of each transaction are described in detail, using pseudocode. Then, each transaction is decomposed, according to well-defined rules, into class methods of various types: basic methods, application-specific methods and main transaction (control) methods. Each method is attached to a proper class; messages between methods express the process logic of each transaction. The methods are defined using pseudocode or message charts.

  2. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  3. Leapfrog variants of iterative methods for linear algebra equations

    NASA Technical Reports Server (NTRS)

    Saylor, Paul E.

    1988-01-01

    Two iterative methods are considered, Richardson's method and a general second order method. For both methods, a variant of the method is derived for which only even numbered iterates are computed. The variant is called a leapfrog method. Comparisons between the conventional form of the methods and the leapfrog form are made under the assumption that the number of unknowns is large. In the case of Richardson's method, it is possible to express the final iterate in terms of only the initial approximation, a variant of the iteration called the grand-leap method. In the case of the grand-leap variant, a set of parameters is required. An algorithm is presented to compute these parameters that is related to algorithms to compute the weights and abscissas for Gaussian quadrature. General algorithms to implement the leapfrog and grand-leap methods are presented. Algorithms for the important special case of the Chebyshev method are also given.
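
    The leapfrog idea for Richardson's method can be illustrated directly: two conventional steps x_{k+1} = x_k + ω r_k, with r_k = b − A x_k, combine algebraically into one update of the even-numbered iterate, x_{k+2} = x_k + ω(2 r_k − ω A r_k). The sketch below uses a fixed parameter ω and an invented SPD system for illustration; the methods in the paper use sequences of parameters.

```python
import numpy as np

def richardson_leapfrog(A, b, omega, passes=200):
    """Leapfrog variant of (stationary) Richardson's method: only the
    even-numbered iterates are formed; each pass advances two
    conventional Richardson iterations at once."""
    x = np.zeros_like(b)
    for _ in range(passes):
        r = b - A @ x
        x = x + omega * (2.0 * r - omega * (A @ r))
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])    # illustrative SPD system
b = np.array([1.0, 2.0])
x = richardson_leapfrog(A, b, omega=0.2)
```

    The combined step costs two matrix-vector products, the same work as two conventional iterations, but never materializes the odd-numbered iterate.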

  4. Development of a Coordinate Transformation method for direct georeferencing in map projection frames

    NASA Astrophysics Data System (ADS)

    Zhao, Haitao; Zhang, Bing; Wu, Changshan; Zuo, Zhengli; Chen, Zhengchao

    2013-03-01

    This paper develops a novel Coordinate Transformation method (CT-method), with which the orientation angles (roll, pitch, heading) of the local tangent frame of the GPS/INS system are transformed into those (omega, phi, kappa) of the map projection frame for direct georeferencing (DG). In particular, the orientation angles in the map projection frame are derived through a sequence of coordinate transformations. The effectiveness of the orientation angle transformation was verified by comparison with DG results obtained from conventional methods (the Legat method and the POSPac method) using empirical data. Moreover, the CT-method was also validated with simulated data. One advantage of the proposed method is that the orientation angles can be acquired simultaneously while calculating the position elements of the exterior orientation (EO) parameters and auxiliary point coordinates by coordinate transformation. The three methods were demonstrated and compared using empirical data. Empirical results show that the CT-method is as sound and effective as the Legat method. Compared with the POSPac method, the CT-method is more suitable for calculating EO parameters for DG in map projection frames. The DG accuracy of the CT-method and the Legat method is at the same level. DG results of all three methods have systematic errors in height due to inconsistent length projection distortion in the vertical and horizontal components, and these errors can be significantly reduced using the EO height correction technique in Legat's approach. Similar to the results obtained with empirical data, the effectiveness of the CT-method was also proved with simulated data. POSPac method: the method is presented in an Applanix POSPac software technical note (Hutton and Savina, 1997) and is implemented in the POSEO module of the POSPac software.
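
    The kind of angle bookkeeping involved can be illustrated with a small round trip: build a rotation matrix from (omega, phi, kappa) and recover the angles from it. Conventions vary between photogrammetric packages; this sketch assumes R = Rx(omega)·Ry(phi)·Rz(kappa) and is not the paper's derivation.

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Rotation matrix R = Rx(omega) @ Ry(phi) @ Rz(kappa), one common
    photogrammetric convention for exterior orientation angles."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, co, -so], [0.0, so, co]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rz = np.array([[ck, -sk, 0.0], [sk, ck, 0.0], [0.0, 0.0, 1.0]])
    return Rx @ Ry @ Rz

def extract_opk(R):
    """Recover (omega, phi, kappa) from R = Rx Ry Rz, away from the
    gimbal-lock case phi = +/- pi/2."""
    phi = np.arcsin(R[0, 2])
    omega = np.arctan2(-R[1, 2], R[2, 2])
    kappa = np.arctan2(-R[0, 1], R[0, 0])
    return omega, phi, kappa

R = rotation_opk(0.1, -0.2, 0.3)
omega, phi, kappa = extract_opk(R)
```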

  5. Comparison of four USEPA digestion methods for trace metal analysis using certified and Florida soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M.; Ma, L.Q.

    1998-11-01

    It is critical to compare existing sample digestion methods for evaluating soil contamination and remediation. USEPA Methods 3050, 3051, 3051a, and 3052 were used to digest standard reference materials and representative Florida surface soils. Fifteen trace metals (Ag, As, Ba, Be, Cd, Cr, Cu, Hg, Mn, Mo, Ni, Pb, Sb, Se, and Zn) and six macro elements (Al, Ca, Fe, K, Mg, and P) were analyzed. Precise analysis was achieved for all elements except for Cd, Mo, Se, and Sb in NIST SRMs 2704 and 2709 by USEPA Methods 3050 and 3051, and for all elements except for As, Mo, Sb, and Se in NIST SRM 2711 by USEPA Method 3052. No significant differences were observed for the three NIST SRMs between the microwave-assisted USEPA Methods 3051 and 3051a and the conventional USEPA Method 3050, except for Hg, Sb, and Se. USEPA Method 3051a provided comparable values for NIST SRMs certified using USEPA Method 3050. However, for method correlation coefficients and elemental recoveries in 40 Florida surface soils, USEPA Method 3051a was an overall better alternative to Method 3050 than was Method 3051. Among the four digestion methods, the microwave-assisted USEPA Method 3052 achieved satisfactory recoveries for all elements except As and Mg using NIST SRM 2711. This total-total digestion method provided greater recoveries for 12 elements (Ag, Be, Cr, Fe, K, Mn, Mo, Ni, Pb, Sb, Se, and Zn), but lower recoveries for Mg in Florida soils than did the total-recoverable digestion methods.

  6. [Comparative analysis between diatom nitric acid digestion method and plankton 16S rDNA PCR method].

    PubMed

    Han, Jun-ge; Wang, Cheng-bao; Li, Xing-biao; Fan, Yan-yan; Feng, Xiang-ping

    2013-10-01

    To compare and explore the application value of the diatom nitric acid digestion method and the plankton 16S rDNA PCR method for drowning identification. Forty drowning cases from 2010 to 2011 were collected from the Department of Forensic Medicine of Wenzhou Medical University. Samples including lung, kidney, liver and field water from each case were tested with the diatom nitric acid digestion method and the plankton 16S rDNA PCR method, respectively. The diatom nitric acid digestion method and the plankton 16S rDNA PCR method required 20 g and 2 g of each organ, and 15 mL and 1.5 mL of field water, respectively. The inspection time and detection rate were compared between the two methods. The diatom nitric acid digestion method mainly detected two types of diatoms, Centricae and Pennatae, while the plankton 16S rDNA PCR method amplified a 162 bp band. The average inspection time per case of the diatom nitric acid digestion method was (95.30 +/- 2.78) min, less than the (325.33 +/- 14.18) min of the plankton 16S rDNA PCR method (P < 0.05). The detection rates of the two methods for field water and lung were both 100%. For liver and kidney, the detection rate of the plankton 16S rDNA PCR method was 80% for both, higher than the 40% and 30%, respectively, of the diatom nitric acid digestion method (P < 0.05). The laboratory testing method should be selected appropriately according to the specific circumstances in the forensic identification of drowning. Compared with the diatom nitric acid digestion method, the plankton 16S rDNA PCR method has practical value, with such advantages as smaller sample quantities, richer information and higher specificity.

  7. Reliable clarity automatic-evaluation method for optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the sharpness at the edges of objects in an image, is an important quality evaluation index for optical remote sensing images. Much work has been done on the estimation of image clarity. At present, common clarity-estimation methods for digital images mainly include frequency-domain function methods, statistical parametric methods, gradient function methods and edge acutance methods. The frequency-domain function method is an accurate clarity measure, but its calculation process is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. The edge acutance method is an effective approach to clarity estimation, but it requires the edges to be picked out manually. Owing to these limits in accuracy, consistency or automation, the existing methods are not applicable to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically locate object edges in images. Moreover, the calculation algorithm for edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
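
    A gradient-function clarity measure of the kind surveyed above can be sketched in a few lines. This is a simplified central-difference variant for illustration, not the authors' improved algorithm: sharper edges produce larger mean squared gradients, so a crisp step edge scores higher than the same edge blurred into a ramp.

```python
import numpy as np

def gradient_sharpness(img):
    """Gradient-function clarity measure: mean squared magnitude of
    central-difference gradients over the image interior."""
    img = np.asarray(img, dtype=float)
    gx = img[1:-1, 2:] - img[1:-1, :-2]   # central differences, x
    gy = img[2:, 1:-1] - img[:-2, 1:-1]   # central differences, y
    return np.mean(gx ** 2 + gy ** 2)

# A sharp step edge versus the same edge smeared into a ramp
sharp = np.zeros((32, 32))
sharp[:, 16:] = 1.0
blurred = np.cumsum(sharp, axis=1)
blurred /= blurred.max()

score_sharp = gradient_sharpness(sharp)
score_blur = gradient_sharpness(blurred)
```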

  8. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 5 2013-04-01 2013-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  9. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 5 2012-04-01 2011-04-01 true Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  10. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 5 2014-04-01 2014-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  11. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 26 Internal Revenue 5 2011-04-01 2011-04-01 false Shortfall method. 1.412(c)(1)-2 Section 1.412(c... Shortfall method. (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's underlying funding method for purposes of section 412. As such, the use of the shortfall...

  12. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  13. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  14. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... sample will be representative of the material as applied in the affected facility. (2) Method 25 as the... by the Administrator. (3) Method 2, 2A, 2C, or 2D, as appropriate, as the reference method for...

  15. The Dramatic Methods of Hans van Dam.

    ERIC Educational Resources Information Center

    van de Water, Manon

    1994-01-01

    Interprets for the American reader the untranslated dramatic methods of Hans van Dam, a leading drama theorist in the Netherlands. Discusses the functions of drama as a method, closed dramatic methods, open dramatic methods, and applying van Dam's methods. (SR)

  16. Methods for environmental change; an exploratory study

    PubMed Central

    2012-01-01

    Background While the interest of health promotion researchers in change methods directed at the target population has a long tradition, interest in change methods directed at the environment is still developing. In this survey, the focus is on methods for environmental change; especially about how these are composed of methods for individual change (‘Bundling’) and how within one environmental level, organizations, methods differ when directed at the management (‘At’) or applied by the management (‘From’). Methods The first part of this online survey dealt with examining the ‘bundling’ of individual level methods to methods at the environmental level. The question asked was to what extent the use of an environmental level method would involve the use of certain individual level methods. In the second part of the survey the question was whether there are differences between applying methods directed ‘at’ an organization (for instance, by a health promoter) versus ‘from’ within an organization itself. All of the 20 respondents are experts in the field of health promotion. Results Methods at the individual level are frequently bundled together as part of a method at a higher ecological level. A number of individual level methods are popular as part of most of the environmental level methods, while others are not chosen very often. Interventions directed at environmental agents often have a strong focus on the motivational part of behavior change. There are different approaches targeting a level or being targeted from a level. The health promoter will use combinations of motivation and facilitation. The manager will use individual level change methods focusing on self-efficacy and skills. Respondents think that any method may be used under the right circumstances, although few endorsed coercive methods. 
Conclusions Taxonomies of theoretical change methods for environmental change should include combinations of individual level methods that may be bundled and separate suggestions for methods targeting a level or being targeted from a level. Future research needs to cover more methods to rate and to be rated. Qualitative data may explain some of the surprising outcomes, such as the lack of large differences and the avoidance of coercion. Taxonomies should include the theoretical parameters that limit the effectiveness of the method. PMID:23190712

  17. Implementation of an improved adaptive-implicit method in a thermal compositional simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, T.B.

    1988-11-01

    A multicomponent thermal simulator with an adaptive-implicit-method (AIM) formulation/inexact-adaptive-Newton (IAN) method is presented. The final coefficient matrix retains the original banded structure so that conventional iterative methods can be used. Various methods for selection of the eliminated unknowns are tested. AIM/IAN method has a lower work count per Newtonian iteration than fully implicit methods, but a wrong choice of unknowns will result in excessive Newtonian iterations. For the problems tested, the residual-error method described in the paper for selecting implicit unknowns, together with the IAN method, had an improvement of up to 28% of the CPU time over the fully implicit method.

  18. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    PubMed

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  19. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

    PubMed Central

    Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.

    2015-01-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814

  20. Bond additivity corrections for quantum chemistry methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    C. F. Melius; M. D. Allendorf

    1999-04-01

    In the 1980s, the authors developed a bond-additivity correction procedure for quantum chemical calculations called BAC-MP4, which has proven reliable in calculating the thermochemical properties of molecular species, including radicals as well as stable closed-shell species. New Bond Additivity Correction (BAC) methods have been developed for the G2 method, BAC-G2, as well as for a hybrid DFT/MP2 method, BAC-Hybrid. These BAC methods use a new form of BAC corrections, involving atomic, molecular, and bond-wise additive terms. These terms enable one to treat positive and negative ions as well as neutrals. The BAC-G2 method reduces errors in the G2 method due to nearest-neighbor bonds. The parameters within the BAC-G2 method depend only on atom types. Thus the BAC-G2 method can be used to determine the parameters needed by BAC methods involving lower levels of theory, such as BAC-Hybrid and BAC-MP4. The BAC-Hybrid method should scale well for large molecules. The BAC-Hybrid method uses the differences between DFT and MP2 as an indicator of the method's accuracy, while the BAC-G2 method uses its internal methods (G1 and G2MP2) to provide an indicator of its accuracy. Indications of the average error as well as worst cases are provided for each of the BAC methods.
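
    A minimal sketch of how an additive correction scheme of this kind operates. The atomic and bond-wise parameter values below are invented for illustration and are NOT the published BAC-G2 parameters:

```python
# Hypothetical illustration of a bond-additivity correction (BAC) scheme:
# a raw ab initio heat of formation is adjusted by atomic and bond-wise
# additive terms. All parameter values are invented for illustration.

ATOM_CORR = {"C": -0.5, "H": -0.1, "O": -0.8}   # kcal/mol per atom (hypothetical)
BOND_CORR = {("C", "H"): 0.3, ("C", "O"): 0.6}  # kcal/mol per bond (hypothetical)

def bac_corrected_hf(raw_hf, atoms, bonds, spin_term=0.0):
    """Apply additive corrections to a raw heat of formation (kcal/mol)."""
    atomic = sum(ATOM_CORR[a] for a in atoms)
    bondwise = sum(BOND_CORR[tuple(sorted(b))] for b in bonds)
    return raw_hf + atomic + bondwise + spin_term

# Methanol-like example: 1 C, 4 H, 1 O; 3 C-H bonds and 1 C-O bond
# (the O-H bond is omitted only to keep the toy parameter table small).
hf = bac_corrected_hf(-48.0, ["C", "H", "H", "H", "H", "O"],
                      [("C", "H")] * 3 + [("C", "O")])
```

    The point of the atomic plus bond-wise split, as the abstract notes, is that the same atom-type parameters can be reused across methods at different levels of theory.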

  1. Comparison of different methods to quantify fat classes in bakery products.

    PubMed

    Shin, Jae-Min; Hwang, Young-Ok; Tu, Ock-Ju; Jo, Han-Bin; Kim, Jung-Hun; Chae, Young-Zoo; Rhu, Kyung-Hun; Park, Seung-Kook

    2013-01-15

    The definition of fat differs between countries; thus how fat is listed on food labels depends on the country. Some countries list crude fat content in the 'Fat' section on the food label, whereas other countries list total fat. In this study, three methods were used for determining fat classes and content in bakery products: the Folch method, the automated Soxhlet method, and the AOAC 996.06 method. The results using these methods were compared. Fat (crude) extracted by the Folch and Soxhlet methods was gravimetrically determined and assessed by fat class using capillary gas chromatography (GC). In most samples, fat (total) content determined by the AOAC 996.06 method was lower than the fat (crude) content determined by the Folch or automated Soxhlet methods. Furthermore, monounsaturated fat or saturated fat content determined by the AOAC 996.06 method was lowest. Almost no difference was observed between fat (crude) content determined by the Folch method and that determined by the automated Soxhlet method for nearly all samples. In three samples (wheat biscuits, butter cookies-1, and chocolate chip cookies), monounsaturated fat, saturated fat, and trans fat content obtained by the automated Soxhlet method was higher than that obtained by the Folch method. The polyunsaturated fat content obtained by the automated Soxhlet method was not higher than that obtained by the Folch method in any sample. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. A CLASS OF RECONSTRUCTED DISCONTINUOUS GALERKIN METHODS IN COMPUTATIONAL FLUID DYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Yidong Xia; Robert Nourgaliev

    2011-05-01

    A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, thus allowing for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction is aimed to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness.
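
    As a toy illustration of the in-cell reconstruction idea, here is a one-dimensional sketch on uniform cells (not the paper's multidimensional formulation): a linear DG state on cell i is promoted to a quadratic whose curvature is chosen by least squares so that it best reproduces the neighbor cell means.

```python
def reconstruct_quadratic(ubar_m, ubar_i, ubar_p, slope_i, h):
    """1D in-cell reconstruction sketch: promote a linear DG state
    (cell mean ubar_i, slope slope_i) to the quadratic
        u(xi) = ubar_i + slope_i*xi + c*(xi**2 - h**2/12),
    whose cell average remains ubar_i. Matching the mean of this
    quadratic over the left/right neighbor cells (widths h, centers
    at -h and +h) gives ubar_{i-+1} = ubar_i -+ slope_i*h + c*h**2;
    least squares over the two equations cancels the slope terms and
    yields the second-difference curvature below."""
    c = (ubar_m - 2.0 * ubar_i + ubar_p) / (2.0 * h ** 2)
    return ubar_i, slope_i, c

# For u(x) = x**2 on unit cells centered at -1, 0, 1, the cell means
# are d**2 + 1/12, and the reconstruction recovers curvature c = 1.
coeffs = reconstruct_quadratic(13 / 12, 1 / 12, 13 / 12, 0.0, 1.0)
```

    The reconstruction is exact for quadratic data, which is the design goal: raise the polynomial order without carrying extra degrees of freedom.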

  3. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method

    PubMed Central

    2014-01-01

    Background The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
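
    The traditional mono-exponential back-extrapolation that this work improves upon can be sketched as follows (synthetic data; the function name is illustrative): fit ln C(t) = ln C0 - k t to the early post-injection samples, extrapolate to t = 0, and divide the dose by C0.

```python
import math

def plasma_volume_backextrap(times_min, conc_mg_per_l, dose_mg):
    """Traditional mono-exponential back-extrapolation: fit
    ln C(t) = ln C0 - k*t by ordinary least squares, extrapolate
    the concentration to t = 0, and return dose / C0 (litres)."""
    n = len(times_min)
    y = [math.log(c) for c in conc_mg_per_l]
    tbar = sum(times_min) / n
    ybar = sum(y) / n
    k = -(sum((t - tbar) * (yi - ybar) for t, yi in zip(times_min, y))
          / sum((t - tbar) ** 2 for t in times_min))
    c0 = math.exp(ybar + k * tbar)   # back-extrapolated intercept at t = 0
    return dose_mg / c0

# Synthetic example: C(t) = 10*exp(-0.2 t) mg/L and a 25 mg dose
# give C0 = 10 mg/L and a plasma volume of 2.5 L.
pv = plasma_volume_backextrap([2, 3, 4, 5],
                              [10 * math.exp(-0.2 * t) for t in [2, 3, 4, 5]],
                              25.0)
```

    The paper's argument is precisely that early indocyanine green kinetics are not mono-exponential, so this intercept, and hence the estimated plasma volume, is biased; its proposed optimal back-extrapolation replaces the fit above with one derived from a physiological kinetic model.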

  4. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    NASA Astrophysics Data System (ADS)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of first importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, boundary-layer problems, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal characterization of nanoparticles and nanofluids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of the weighted temperature function, and the integral method of boundary characteristics. It was established that the last two methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than that of numerical solutions.
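
    As a concrete illustration of the class of methods surveyed, the classical heat-balance integral method (Goodman's method) for the half-space problem can be sketched as follows; the quadratic profile is the textbook choice, shown here for orientation rather than as a result of this paper.

```latex
% Heat-balance integral method for T_t = \alpha T_{xx} on x > 0,
% surface temperature T(0,t) = T_s, penetration depth \delta(t):
\int_0^{\delta(t)} \frac{\partial T}{\partial t}\,dx
  = \alpha\left(\left.\frac{\partial T}{\partial x}\right|_{x=\delta}
              - \left.\frac{\partial T}{\partial x}\right|_{x=0}\right).
% Assume the quadratic profile
T(x,t) = T_s\left(1 - \frac{x}{\delta(t)}\right)^{2},
% which satisfies T(\delta,t) = 0 and T_x(\delta,t) = 0. Since
% \int_0^{\delta} T\,dx = T_s\,\delta/3, substitution reduces the
% PDE to an ODE for the penetration depth:
\frac{T_s}{3}\,\frac{d\delta}{dt} = \frac{2\,\alpha\,T_s}{\delta}
\quad\Longrightarrow\quad
\delta(t) = \sqrt{12\,\alpha\,t}.
```

    The other methods listed differ mainly in the assumed profile, the boundary conditions imposed on it, and which moments of the heat equation are enforced.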

  5. Method for producing smooth inner surfaces

    DOEpatents

    Cooper, Charles A.

    2016-05-17

    The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root-mean-square roughness over approximately a 1 mm² scan area. The invention also provides a method for preparing superconducting cavities comprising causing polishing media bound to a carrier to tumble within the cavities, and a method comprising causing polishing media in a slurry to tumble within the cavities.

  6. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require an initial contour located near the final object boundary, suffer from leakage into the tissues neighboring the pancreas. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the level set method's sensitivity to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcomings of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016

  7. Optimal back-extrapolation method for estimating plasma volume in humans using the indocyanine green dilution method.

    PubMed

    Polidori, David; Rowley, Clarence

    2014-07-22

    The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.

  8. Trends in the Contraceptive Method Mix in Low- and Middle-Income Countries: Analysis Using a New “Average Deviation” Measure

    PubMed Central

    Ross, John; Keesbury, Jill; Hardee, Karen

    2015-01-01

    ABSTRACT The method mix of contraceptive use is severely unbalanced in many countries, with over half of all use provided by just 1 or 2 methods. That tends to limit the range of user options and constrains the total prevalence of use, leading to unplanned pregnancies and births or abortions. Previous analyses of method mix distortions focused on countries where a single method accounted for more than half of all use (the 50% rule). We introduce a new measure that uses the average deviation (AD) of method shares around their own mean and apply that to a secondary analysis of method mix data for 8 contraceptive methods from 666 national surveys in 123 countries. A high AD value indicates a skewed method mix while a low AD value indicates a more uniform pattern across methods; the values can range from 0 to 21.9. Most AD values ranged from 6 to 19, with an interquartile range of 8.6 to 12.2. Using the AD measure, we identified 15 countries where the method mix has evolved from a distorted one to a better balanced one, with AD values declining, on average, by 35% over time. Countries show disparate paths in method gains and losses toward a balanced mix, but 4 patterns are suggested: (1) rise of one method partially offset by changes in other methods, (2) replacement of traditional with modern methods, (3) continued but declining domination by a single method, and (4) declines in dominant methods with increases in other methods toward a balanced mix. Regions differ markedly in their method mix profiles and preferences, raising the question of whether programmatic resources are best devoted to better provision of the well-accepted methods or to deploying neglected or new ones, or to a combination of both approaches. PMID:25745119
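
    The AD measure as defined here, the mean absolute deviation of the method shares around their own mean, can be computed directly; with 8 methods the stated 0 to 21.9 range follows from the two extreme cases:

```python
def average_deviation(method_shares):
    """Average deviation (AD) of contraceptive method shares: the mean
    absolute deviation of each method's share (percentage points of all
    use) around the mean share. With 8 methods the mean share is 12.5%,
    so AD runs from 0 (perfectly uniform mix) up to 21.875, i.e. the
    21.9 upper bound cited (all use concentrated in one method)."""
    mean = sum(method_shares) / len(method_shares)
    return sum(abs(s - mean) for s in method_shares) / len(method_shares)

uniform = average_deviation([12.5] * 8)      # perfectly balanced mix -> 0.0
skewed = average_deviation([100] + [0] * 7)  # single dominant method -> 21.875
```

    Unlike the 50% rule, the AD reflects the whole shape of the mix, so it also registers distortions spread across two or three dominant methods.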

  9. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods, 1) the least squares method and 2) the graphical method; and two recently proposed methods, 3) that of Hoyle and Henley and 4) that of Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were also applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more bias was identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty estimate compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for the fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.
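
    As a sketch of the simplest of the four approaches, the least squares method can be illustrated with an exponential fit to digitized survival points; the Hoyle-Henley and Guyot et al. IPD-reconstruction algorithms, and the Weibull/lognormal fits actually compared, are considerably more involved and are not shown.

```python
import math

def mean_survival_exponential(times, survival_probs):
    """Least-squares sketch: fit S(t) = exp(-lam*t) to digitized
    Kaplan-Meier points by ordinary least squares on ln S(t)
    (a regression through the origin, since ln S(0) = 0), then
    report the mean survival time 1/lam of the fitted model."""
    lam = -(sum(t * math.log(s) for t, s in zip(times, survival_probs))
            / sum(t * t for t in times))
    return 1.0 / lam

# Exact exponential data with lam = 0.5 recover a mean survival of 2.0.
ts = [1.0, 2.0, 3.0, 4.0]
ms = mean_survival_exponential(ts, [math.exp(-0.5 * t) for t in ts])
```

    The review's caution applies directly to this sketch: fitting curve coordinates alone ignores the numbers at risk, which is exactly the information the more recent methods exploit.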

  10. Achieving cost-neutrality with long-acting reversible contraceptive methods.

    PubMed

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2015-01-01

    This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting that long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. A three-state economic model was developed to estimate the relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the annual average cost per method and the minimum duration of LARC method usage needed to achieve cost-savings compared to SARC methods. The two least expensive methods were the copper IUD ($304 per woman, per year) and the LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy.
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
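
    The break-even logic can be sketched with a deliberately simplified cost model. All dollar figures below are hypothetical, and the toy ignores failure costs and discontinuation, both of which the published model does include:

```python
def breakeven_years(larc_upfront, larc_annual, sarc_annual, horizon=5):
    """Toy break-even sketch: the first whole year at which cumulative
    LARC cost (upfront device/insertion plus a small annual cost) drops
    to or below cumulative SARC cost; None if never within the horizon."""
    for year in range(1, horizon + 1):
        if larc_upfront + larc_annual * year <= sarc_annual * year:
            return year
    return None

# Hypothetical figures: $900 upfront IUD with $30/yr follow-up cost,
# versus a $450/yr short-acting method.
year = breakeven_years(900, 30, 450)
```

    The structure mirrors the study's argument: a LARC method's cost is front-loaded, so its relative cost falls every year it remains in use, and accounting for failure costs only shortens the break-even time.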

  11. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ with portable near-infrared spectroscopy, the modeling and analysis methods for in-situ detection were studied using 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in 3 data formats (reflectance, absorbance and K-M function) were acquired. Modeling and analysis experiments were performed with 4 different modeling-data optimization methods: principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD; with 2 modeling methods: partial least squares (PLS) and back-propagation artificial neural network (BPANN); and with the same data pre-processing, in order to determine the optimum analysis model and method. The results show that the data format, the modeling-data optimization method and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or K-M function is the proper spectrum format of the modeling database for the two modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions for the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD data optimization methods can improve the modeling precision of a database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD and PCA-MD + UVE data optimization methods can improve the modeling precision of a database using any of the 3 spectrum formats. Apart from the case of reflectance spectra with the PCA-MD data optimization method, modeling precision by the BPANN method is better than that by the PLS method.
    With reflectance spectra, the UVE optimization method and the BPANN modeling method, the model attains the highest analysis precision: its correlation coefficient (Rp) is 0.92 and its standard error of prediction (SEP) is 0.69%.
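
    The K-M (Kubelka-Munk) spectrum format referred to above is the standard transform used to linearize diffuse-reflectance spectra:

```python
def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)**2 / (2*R), applied to the
    diffuse reflectance R (0 < R <= 1) at each wavelength to obtain the
    'K-M function' data format used in the study."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

fr = kubelka_munk(0.5)   # (1 - 0.5)**2 / (2 * 0.5) = 0.25
```

    In Kubelka-Munk theory F(R) is proportional to the ratio of absorption to scattering, which is why it often correlates more linearly with analyte content than raw reflectance does.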

  12. Relative effectiveness of the Bacteriological Analytical Manual method for the recovery of Salmonella from whole cantaloupes and cantaloupe rinses with selected preenrichment media and rapid methods.

    PubMed

    Hammack, Thomas S; Valentin-Bon, Iris E; Jacobson, Andrew P; Andrews, Wallace H

    2004-05-01

    Soak and rinse methods were compared for the recovery of Salmonella from whole cantaloupes. Cantaloupes were surface inoculated with Salmonella cell suspensions and stored for 4 days at 2 to 6 degrees C. Cantaloupes were placed in sterile plastic bags with a nonselective preenrichment broth at a 1:1.5 cantaloupe weight-to-broth volume ratio. The cantaloupe broths were shaken for 5 min at 100 rpm after which 25-ml aliquots (rinse) were removed from the bags. The 25-ml rinses were preenriched in 225-ml portions of the same uninoculated broth type at 35 degrees C for 24 h (rinse method). The remaining cantaloupe broths were incubated at 35 degrees C for 24 h (soak method). The preenrichment broths used were buffered peptone water (BPW), modified BPW, lactose (LAC) broth, and Universal Preenrichment (UP) broth. The Bacteriological Analytical Manual Salmonella culture method was compared with the following rapid methods: the TECRA Unique Salmonella method, the VIDAS ICS/SLM method, and the VIDAS SLM method. The soak method detected significantly more Salmonella-positive cantaloupes (P < 0.05) than did the rinse method: 367 Salmonella-positive cantaloupes of 540 test cantaloupes by the soak method and 24 Salmonella-positive cantaloupes of 540 test cantaloupes by the rinse method. Overall, BPW, LAC, and UP broths were equivalent for the recovery of Salmonella from cantaloupes. Both the VIDAS ICS/SLM and TECRA Unique Salmonella methods detected significantly fewer Salmonella-positive cantaloupes than did the culture method: the VIDAS ICS/SLM method detected 23 of 50 Salmonella-positive cantaloupes (60 tested) and the TECRA Unique Salmonella method detected 16 of 29 Salmonella-positive cantaloupes (60 tested). The VIDAS SLM and culture methods were equivalent: both methods detected 37 of 37 Salmonella-positive cantaloupes (60 tested).

  13. Temperature Profiles of Different Cooling Methods in Porcine Pancreas Procurement

    PubMed Central

    Weegman, Brad P.; Suszynski, Thomas M.; Scott, William E.; Ferrer, Joana; Avgoustiniatos, Efstathios S.; Anazawa, Takayuki; O’Brien, Timothy D.; Rizzari, Michael D.; Karatzas, Theodore; Jie, Tun; Sutherland, David ER.; Hering, Bernhard J.; Papas, Klearchos K.

    2014-01-01

    Background Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. Methods This study examines the effect of 4 different cooling methods on core porcine pancreas temperature (n=24) and histopathology (n=16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all 3 cooling methods. Results Surface cooling alone (Method A) gradually decreased core pancreas temperature to < 10 °C after 30 minutes. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15–20 °C within the first 2 minutes of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT did not differ between methods (p=0.36). Histological scores differed between the cooling methods (p=0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (p=0.02) and Methods A and D (p=0.02), but not between Methods C and D (p=0.95), which may highlight the importance of early cooling using an intraductal infusion. Conclusions In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata.
    Additional cooling with an intravascular flush and intraductal infusion improves core porcine pancreas temperature profiles during procurement and histopathology scores. These data may also have implications for human pancreas procurement, since use of an intraductal infusion is not common practice. PMID:25040217

  14. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method

    PubMed Central

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing the Ki-67 LI, the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting the prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, a high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating the Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility. PMID:28187177
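
    On per-area labeling indices, the two counting schemes reduce to a simple computation (the values below are illustrative, not data from the study):

```python
def ki67_indices(area_lis):
    """Compare the two Ki-67 counting schemes on per-area labeling
    indices (% positive tumor cells in each representative area):
    the average method takes the mean across areas, the hot spot
    method takes the highest-labeling area. Also returns the
    difference (delta Ki-67) and the hot-spot/average (H/A) ratio
    used in the study."""
    avg = sum(area_lis) / len(area_lis)
    hot = max(area_lis)
    return {"average": avg, "hot_spot": hot,
            "delta": hot - avg, "ha_ratio": hot / avg}

r = ki67_indices([10.0, 20.0, 30.0])   # three representative areas
```

    Because the hot spot value is a maximum over areas, it is inherently more sensitive to which fields are chosen, which is consistent with the reproducibility concern the study raises.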

  15. A comparison of Ki-67 counting methods in luminal Breast Cancer: The Average Method vs. the Hot Spot Method.

    PubMed

    Jang, Min Hye; Kim, Hyun Jung; Chung, Yul Ri; Lee, Yangkyu; Park, So Yeon

    2017-01-01

    In spite of the usefulness of the Ki-67 labeling index (LI) as a prognostic and predictive marker in breast cancer, its clinical application remains limited due to variability in its measurement and the absence of a standard method of interpretation. This study was designed to compare the two methods of assessing the Ki-67 LI, the average method vs. the hot spot method, and thus to determine which method is more appropriate for predicting the prognosis of luminal/HER2-negative breast cancers. Ki-67 LIs were calculated by direct counting of three representative areas of 493 luminal/HER2-negative breast cancers using the two methods. We calculated the differences in the Ki-67 LIs (ΔKi-67) between the two methods and the ratio of the Ki-67 LIs (H/A ratio) of the two methods. In addition, we compared the performance of the Ki-67 LIs obtained by the two methods as prognostic markers. ΔKi-67 ranged from 0.01% to 33.3% and the H/A ratio ranged from 1.0 to 2.6. Based on the receiver operating characteristic curve method, the predictive powers of the Ki-67 LI measured by the two methods were similar (area under the curve: hot spot method, 0.711; average method, 0.700). In multivariate analysis, a high Ki-67 LI based on either method was an independent poor prognostic factor, along with high T stage and node metastasis. However, in repeated counts, the hot spot method did not consistently classify tumors into high vs. low Ki-67 LI groups. In conclusion, both the average and hot spot methods of evaluating the Ki-67 LI have good predictive performance for tumor recurrence in luminal/HER2-negative breast cancers. However, we recommend using the average method for the present because of its greater reproducibility.

  16. Estimating Tree Height-Diameter Models with the Bayesian Method

    PubMed Central

    Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinctive advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to those for the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733

  17. Estimating tree height-diameter models with the Bayesian method.

    PubMed

    Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei

    2014-01-01

    Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinctive advantage over the classical methods in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical method and the Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of the predicted values in comparison to those for the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.
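    The narrowing effect of informative priors that this abstract reports can be illustrated with the simplest conjugate case, a normal mean with known variance. This is a deliberately simplified toy model, not the authors' Weibull height-diameter fit:

```python
def posterior_normal_mean(prior_mean, prior_var, data_mean, data_var_over_n):
    """Conjugate update for a normal mean with known sampling variance.

    Posterior precision is the sum of prior and data precisions, so a
    more informative (lower-variance) prior yields a narrower posterior.
    """
    prec = 1.0 / prior_var + 1.0 / data_var_over_n
    post_var = 1.0 / prec
    post_mean = post_var * (prior_mean / prior_var + data_mean / data_var_over_n)
    return post_mean, post_var

# Same data, informative vs. vague prior (illustrative numbers)
m_inf, v_inf = posterior_normal_mean(20.0, 1.0, 22.0, 4.0)
m_vag, v_vag = posterior_normal_mean(20.0, 100.0, 22.0, 4.0)
print(v_inf < v_vag)  # True: informative prior -> narrower credible band
```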

  18. A comparison of treatment effectiveness between the CAD/CAM method and the manual method for managing adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Lo, K H

    2005-04-01

    The treatment effectiveness of the CAD/CAM method and the manual method in managing adolescent idiopathic scoliosis (AIS) was compared. Forty subjects were recruited, twenty for each method. The clinical parameters, namely Cobb's angle and apical vertebral rotation, were evaluated at the pre-brace and the immediate in-brace visits. The results demonstrated that orthotic treatments rendered by the CAD/CAM method and the conventional manual method were effective in providing initial control of Cobb's angle. Significant decreases (p < 0.05) were found between the pre-brace and immediate in-brace visits for both methods. The mean reductions of Cobb's angle were 12.8 degrees (41.9%) for the CAD/CAM method and 9.8 degrees (32.1%) for the manual method. An initial control of the apical vertebral rotation was not shown in this study. In the comparison between the CAD/CAM method and the manual method, no significant difference was found in the control of Cobb's angle and apical vertebral rotation. The current study demonstrated that the CAD/CAM method can provide similar results in the initial stage of treatment as compared with the manual method.

  19. A brief introduction to computer-intensive methods, with a view towards applications in spatial statistics and stereology.

    PubMed

    Mattfeldt, Torsten

    2011-04-01

    Computer-intensive methods may be defined as data analytical procedures involving a huge number of highly repetitive computations. We mention resampling methods with replacement (bootstrap methods), resampling methods without replacement (randomization tests) and simulation methods. The resampling methods are based on simple and robust principles and are largely free from distributional assumptions. Bootstrap methods may be used to compute confidence intervals for a scalar model parameter and for summary statistics from replicated planar point patterns, and for significance tests. For some simple models of planar point processes, point patterns can be simulated by elementary Monte Carlo methods. The simulation of models with more complex interaction properties usually requires more advanced computing methods. In this context, we mention simulation of Gibbs processes with Markov chain Monte Carlo methods using the Metropolis-Hastings algorithm. An alternative to simulations on the basis of a parametric model consists of stochastic reconstruction methods. The basic ideas behind the methods are briefly reviewed and illustrated by simple worked examples in order to encourage novices in the field to use computer-intensive methods. © 2010 The Authors Journal of Microscopy © 2010 Royal Microscopical Society.
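    A percentile bootstrap confidence interval of the kind described above can be sketched in a few lines of pure Python. The sample data are illustrative, not taken from any study:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a scalar statistic.

    Resamples the data with replacement, recomputes the statistic, and
    takes empirical quantiles of the bootstrap distribution.
    """
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(stat([rng.choice(data) for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
sample = [4.1, 5.0, 5.3, 4.7, 6.2, 5.8, 4.9, 5.5]
lo, hi = bootstrap_ci(sample, mean)
print(lo <= mean(sample) <= hi)  # True for this sample
```

    The same resampling loop works for any scalar statistic passed as `stat`, which is what makes the method largely free of distributional assumptions.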

  20. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.

    PubMed

    Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-03-01

    The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants enrolled). 
The average cost per recruited participant was also lower for online than for offline methods, although costs varied greatly among both online and offline recruitment methods. We observed a decrease in the efficiency of some online recruitment methods over time, suggesting that it may be optimal to adopt multiple online methods. ©Tina Christensen, Anders H Riis, Elizabeth E Hatch, Lauren A Wise, Marie G Nielsen, Kenneth J Rothman, Henrik Toft Sørensen, Ellen M Mikkelsen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 01.03.2017.

  1. A simple high performance liquid chromatography method for analyzing paraquat in soil solution samples.

    PubMed

    Ouyang, Ying; Mansell, Robert S; Nkedi-Kizza, Peter

    2004-01-01

    A high performance liquid chromatography (HPLC) method with UV detection was developed to analyze paraquat (1,1'-dimethyl-4,4'-dipyridinium dichloride) herbicide content in soil solution samples. The analytical method was compared with the liquid scintillation counting (LSC) method using 14C-paraquat. Agreement between the two methods was reasonable. However, the detection limit for paraquat analysis was 0.5 mg L(-1) by the HPLC method and 0.05 mg L(-1) by the LSC method. The LSC method was, therefore, 10 times more sensitive than the HPLC method for solution concentrations less than 1 mg L(-1). In spite of its higher detection limit, the UV (nonradioactive) HPLC method provides an inexpensive and environmentally safe means of determining paraquat concentration in soil solution compared with the 14C-LSC method.

  2. Hybrid finite element and Brownian dynamics method for diffusion-controlled reactions.

    PubMed

    Bauler, Patricia; Huber, Gary A; McCammon, J Andrew

    2012-04-28

    Diffusion is often the rate determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. This paper proposes a new hybrid diffusion method that couples the strengths of each of these two methods. The method is derived for a general multidimensional system, and is presented using a basic test case for 1D linear and radially symmetric diffusion systems.
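    The continuum side of such a hybrid scheme can be illustrated with a minimal explicit finite-difference (FTCS) sketch of 1D diffusion. This is a generic textbook discretization, not the authors' coupled finite element/Brownian dynamics method:

```python
def diffuse_ftcs(u, d, steps):
    """One-dimensional explicit (FTCS) diffusion on a grid with
    zero-flux ends; d = D*dt/dx**2 must be <= 0.5 for stability."""
    assert 0 < d <= 0.5
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + d * (u[i - 1] - 2 * u[i] + u[i + 1])
        # zero-flux (reflecting) boundaries
        nxt[0] = u[0] + d * (u[1] - u[0])
        nxt[-1] = u[-1] + d * (u[-2] - u[-1])
        u = nxt
    return u

u0 = [0.0] * 21
u0[10] = 1.0                  # point source in the middle of the domain
u = diffuse_ftcs(u0, 0.25, 200)
print(round(sum(u), 6))       # 1.0  (total mass is conserved)
```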

  3. Application of multiattribute decision-making methods for the determination of relative significance factor of impact categories.

    PubMed

    Noh, Jaesung; Lee, Kun Mo

    2003-05-01

    A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytic hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i). Thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it has an inherent weakness relative to the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use than the AHP and the fuzzy method.
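    The rank-order centroid weights mentioned above have a standard closed form: for the criterion ranked i-th out of n, w_i = (1/n) * sum_{k=i}^{n} 1/k, so the weights sum to 1. A small sketch using exact rational arithmetic:

```python
from fractions import Fraction

def rank_order_centroid(n):
    """Rank-order centroid weights for n criteria ranked 1..n:
    w_i = (1/n) * sum_{k=i}^{n} 1/k."""
    return [sum(Fraction(1, k) for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

w = rank_order_centroid(3)
print([float(x) for x in w])   # [0.611..., 0.277..., 0.111...]
print(sum(w))                  # 1
```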

  4. Utility of N-Bromosuccinimide for the Titrimetric and Spectrophotometric Determination of Famotidine in Pharmaceutical Formulations

    PubMed Central

    Zenita, O.; Basavaiah, K.

    2011-01-01

    Two titrimetric and two spectrophotometric methods are described for the assay of famotidine (FMT) in tablets using N-bromosuccinimide (NBS). The first titrimetric method is direct in which FMT is titrated directly with NBS in HCl medium using methyl orange as indicator (method A). The remaining three methods are indirect in which the unreacted NBS is determined after the complete reaction between FMT and NBS by iodometric back titration (method B) or by reacting with a fixed amount of either indigo carmine (method C) or neutral red (method D). The method A and method B are applicable over the range of 2–9 mg and 1–7 mg, respectively. In spectrophotometric methods, Beer's law is obeyed over the concentration ranges of 0.75–6.0 μg mL−1 (method C) and 0.3–3.0 μg mL−1 (method D). The applicability of the developed methods was demonstrated by the determination of FMT in pure drug as well as in tablets. PMID:21760785

  5. Twostep-by-twostep PIRK-type PC methods with continuous output formulas

    NASA Astrophysics Data System (ADS)

    Cong, Nguyen Huu; Xuan, Le Ngoc

    2008-11-01

    This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely used test problems reveal that the new PC methods are much more efficient than the well-known parallel-iterated RK methods (PIRK methods), the parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
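    The predictor-corrector idea underlying such schemes can be illustrated with the simplest sequential PECE pair (explicit Euler predictor, trapezoidal corrector). This toy example is not the parallel PIRKC method of the paper:

```python
import math

def pece_step(f, t, y, h):
    """One PECE step: explicit Euler predictor, trapezoidal corrector."""
    y_pred = y + h * f(t, y)                           # predict + evaluate
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # correct + evaluate

# Test problem y' = -y, y(0) = 1, exact solution exp(-t)
f = lambda t, y: -y
y, t, h = 1.0, 0.0, 0.1
while t < 1.0 - 1e-12:
    y = pece_step(f, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)) < 1e-3)  # True: second-order accuracy
```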

  6. Which method should be the reference method to evaluate the severity of rheumatic mitral stenosis? Gorlin's method versus 3D-echo.

    PubMed

    Pérez de Isla, Leopoldo; Casanova, Carlos; Almería, Carlos; Rodrigo, José Luis; Cordeiro, Pedro; Mataix, Luis; Aubele, Ada Lia; Lang, Roberto; Zamorano, José Luis

    2007-12-01

    Several studies have shown a wide variability among different methods to determine the valve area in patients with rheumatic mitral stenosis. Our aim was to evaluate if 3D-echo planimetry is more accurate than the Gorlin method to measure the valve area. Twenty-six patients with mitral stenosis underwent 2D and 3D-echo echocardiographic examinations and catheterization. Valve area was estimated by different methods. A median value of the mitral valve area, obtained from the measurements of three classical non-invasive methods (2D planimetry, pressure half-time and PISA method), was used as the reference method and it was compared with 3D-echo planimetry and Gorlin's method. Our results showed that the accuracy of 3D-echo planimetry is superior to the accuracy of the Gorlin method for the assessment of mitral valve area. We should keep in mind the fact that 3D-echo planimetry may be a better reference method than the Gorlin method to assess the severity of rheumatic mitral stenosis.
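    The Gorlin estimate referenced above follows the textbook hydraulic formula: diastolic flow rate divided by an empirical constant times the square root of the mean gradient. The sketch below uses illustrative hemodynamic values; the mitral constant 37.7 (the commonly quoted 44.3 x 0.85) is stated from general knowledge, not taken from this study:

```python
import math

def gorlin_mitral_area(cardiac_output_ml_min, dfp_s_per_beat, hr_bpm,
                       mean_gradient_mmhg, k=37.7):
    """Gorlin estimate of mitral valve area (cm^2).

    Flow per second of diastole divided by k * sqrt(mean gradient);
    k = 37.7 is the empirical mitral constant (44.3 * 0.85).
    """
    diastolic_flow = cardiac_output_ml_min / (dfp_s_per_beat * hr_bpm)
    return diastolic_flow / (k * math.sqrt(mean_gradient_mmhg))

# Illustrative values: CO 4250 mL/min, DFP 0.5 s/beat, HR 75 bpm, gradient 9 mmHg
area = gorlin_mitral_area(4250, 0.5, 75, 9.0)
print(round(area, 2))  # ~1.0 cm^2, i.e. severe mitral stenosis
```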

  7. Evaluation and comparison of Abbott Jaffe and enzymatic creatinine methods: Could the old method meet the new requirements?

    PubMed

    Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza

    2018-01-01

    The aim of this study is to evaluate and compare the analytical performance characteristics of two creatinine methods based on the Jaffe and enzymatic principles. The two original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer in terms of limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. The method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods and as 0.25 and 0.07 mg/dL for the Jaffe and the enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL, for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method, with values of 0.5 and 2 mg/dL, respectively. Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were under desirable levels for both methods. High correlations were found between the two methods in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements in routine use. However, the enzymatic method was found to have better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.

  8. Comparison of the lysis centrifugation method with the conventional blood culture method in cases of sepsis in a tertiary care hospital.

    PubMed

    Parikh, Harshal R; De, Anuradha S; Baveja, Sujata M

    2012-07-01

    Physicians and microbiologists have long recognized that the presence of living microorganisms in the blood of a patient carries considerable morbidity and mortality. Hence, blood cultures have become a critically important and frequently performed test in clinical microbiology laboratories for the diagnosis of sepsis. To compare the conventional blood culture method with the lysis centrifugation method in cases of sepsis, two hundred nonduplicate blood cultures from cases of sepsis were analyzed using the two blood culture methods concurrently for recovery of bacteria from patients diagnosed clinically with sepsis: the conventional blood culture method using trypticase soy broth and the lysis centrifugation method using saponin, with centrifugation at 3000 g for 30 minutes. Overall, bacteria were recovered from 17.5% of the 200 blood cultures. The conventional blood culture method had a higher yield of organisms, especially Gram positive cocci. The lysis centrifugation method was comparable with the former method with respect to Gram negative bacilli. The sensitivity of the lysis centrifugation method in comparison to the conventional blood culture method was 49.75% in this study, the specificity was 98.21% and the diagnostic accuracy was 89.5%. In almost every instance, growth was detected earlier by the lysis centrifugation method, and the difference in time to growth was highly significant (P value 0.000). Contamination by lysis centrifugation was minimal, while that by the conventional method was high. For the diagnosis of sepsis, a combination of the lysis centrifugation method and the conventional blood culture method with trypticase soy broth or biphasic media is advisable, in order to achieve faster recovery and a better yield of microorganisms.
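    Sensitivity, specificity, and diagnostic accuracy figures like those quoted above come from a standard 2x2 comparison against the reference method. A generic sketch with hypothetical counts, not the study's actual table:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity, specificity, and diagnostic accuracy from a 2x2 table
    comparing a test method against a reference method."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical 2x2 table for a new culture method vs. the reference method
sens, spec, acc = diagnostic_performance(tp=18, fp=2, fn=17, tn=163)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.514 0.988 0.905
```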

  9. Optimization and validation of spectrophotometric methods for determination of finasteride in dosage and biological forms

    PubMed Central

    Amin, Alaa S.; Kassem, Mohammed A.

    2012-01-01

    Aim and Background: Three simple, accurate and sensitive spectrophotometric methods for the determination of finasteride in pure, dosage and biological forms, and in the presence of its oxidative degradates, were developed. Materials and Methods: These methods are indirect: a known excess of oxidant, potassium permanganate for method A, ceric sulfate [Ce(SO4)2] for method B, or N-bromosuccinimide (NBS) for method C, is added in acid medium to finasteride, and the unreacted oxidant is determined by measuring the decrease in absorbance of methylene blue for method A, chromotrope 2R for method B, and amaranth for method C at the respective maximum wavelengths, λmax: 663, 528, and 520 nm. The reaction conditions for each method were optimized. Results: Regression analysis of the Beer plots showed good correlation in the concentration ranges of 0.12–3.84 μg mL–1 for method A, 0.12–3.28 μg mL–1 for method B and 0.14–3.56 μg mL–1 for method C. The apparent molar absorptivity, Sandell sensitivity, and detection and quantification limits were evaluated. The stoichiometric ratio between finasteride and the oxidant was estimated. The validity of the proposed methods was tested by analyzing dosage forms and biological samples containing finasteride, with relative standard deviation ≤ 0.95. Conclusion: The proposed methods could successfully determine the studied drug in the presence of varying excesses of its oxidative degradation products, with recoveries between 99.0 and 101.4, 99.2 and 101.6, and 99.6 and 101.0% for methods A, B, and C, respectively. PMID:21760785
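    Reading an unknown concentration off a Beer's-law calibration line is a linear least-squares fit followed by inversion. A sketch with invented calibration points, not the published data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration: concentration (ug/mL) vs. measured absorbance
conc = [0.5, 1.0, 2.0, 3.0]
absb = [0.11, 0.21, 0.41, 0.61]
slope, intercept = fit_line(conc, absb)
# Invert the calibration to read an unknown sample from its absorbance
unknown = (0.31 - intercept) / slope
print(round(unknown, 2))  # 1.5
```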

  10. John Butcher and hybrid methods

    NASA Astrophysics Data System (ADS)

    Mehdiyeva, Galina; Imanova, Mehriban; Ibrahimov, Vagif

    2017-07-01

    As is well known, there are mainly two classes of numerical methods for solving ODEs, commonly called one-step and multistep methods. Each of these classes has certain advantages and disadvantages. It is natural that a method combining the better properties of both should be constructed at their junction. In the middle of the XX century, Butcher and Gear constructed, at the junction of the Runge-Kutta and Adams methods, what is now called the hybrid method. Here we consider the construction of certain generalizations of hybrid methods with a high order of accuracy and explore their application to solving ordinary differential, Volterra integral and integro-differential equations. We have also constructed some specific hybrid methods of degree p ≤ 10.
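    A minimal instance of pairing a one-step starter with a multistep formula, in the spirit described above, is a Heun (RK2) startup step feeding a two-step Adams-Bashforth recursion. This is a textbook illustration, not one of the authors' degree p ≤ 10 methods:

```python
import math

def ab2_solve(f, t0, y0, h, n):
    """Two-step Adams-Bashforth with a single Heun (RK2) startup step:
    y_{k+1} = y_k + h*(3/2 f_k - 1/2 f_{k-1})."""
    ts = [t0, t0 + h]
    k1 = f(t0, y0)
    ys = [y0, y0 + 0.5 * h * (k1 + f(t0 + h, y0 + h * k1))]  # RK2 start
    for k in range(1, n):
        fk = f(ts[k], ys[k])
        fk1 = f(ts[k - 1], ys[k - 1])
        ys.append(ys[k] + h * (1.5 * fk - 0.5 * fk1))
        ts.append(ts[k] + h)
    return ts, ys

# Test problem y' = -y, y(0) = 1 on [0, 1]; exact solution exp(-t)
ts, ys = ab2_solve(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(abs(ys[10] - math.exp(-1.0)) < 5e-3)  # True: second-order method
```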

  11. Critical study of higher order numerical methods for solving the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Wornom, S. F.

    1978-01-01

    A fourth order box method is presented for calculating numerical solutions to parabolic, partial differential equations in two variables or ordinary differential equations. The method, which is the natural extension of the second order box scheme to fourth order, was demonstrated with application to the incompressible, laminar and turbulent, boundary layer equations. The efficiency of the present method is compared with two point and three point higher order methods, namely, the Keller box scheme with Richardson extrapolation, the method of deferred corrections, a three point spline method, and a modified finite element method. For equivalent accuracy, numerical results show the present method to be more efficient than higher order methods for both laminar and turbulent flows.
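    Richardson extrapolation, one of the higher-order techniques compared above, can be demonstrated on the trapezoid rule: combining results at step sizes h and h/2 cancels the leading O(h^2) error term. This is a generic quadrature sketch, not the boundary-layer solver itself:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals (O(h^2) accurate)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def richardson(f, a, b, n):
    """Richardson extrapolation of the trapezoid rule:
    (4*T(h/2) - T(h)) / 3 cancels the h^2 term, giving fourth order."""
    t_h, t_h2 = trapezoid(f, a, b, n), trapezoid(f, a, b, 2 * n)
    return (4 * t_h2 - t_h) / 3

exact = math.e - 1.0                      # integral of e^x over [0, 1]
err_trap = abs(trapezoid(math.exp, 0, 1, 8) - exact)
err_rich = abs(richardson(math.exp, 0, 1, 8) - exact)
print(err_rich < err_trap / 100)          # True: large accuracy gain
```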

  12. A temperature match based optimization method for daily load prediction considering DLC effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Z.

    This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.

  13. [Isolation and identification methods of enterobacteria group and its technological advancement].

    PubMed

    Furuta, Itaru

    2007-08-01

    In the last half-century, isolation and identification methods for enterobacteria have been markedly improved by technological advances. Clinical microbiology testing has shifted over time from tube methods to commercial identification kits and automated identification. Tube methods are the original method for the identification of enterobacteria, and they remain essential for understanding bacterial fermentation and biochemical principles. In this paper, traditional tube tests are discussed, such as the utilization of carbohydrates and the indole, methyl red, citrate and urease tests. Commercial identification kits and automated instruments with computer-based analysis are also discussed as current methods; these methods provide rapidity and accuracy. Nonculture techniques, such as nucleic acid typing methods using PCR analysis and immunochemical methods using monoclonal antibodies, can be further developed.

  14. Comparison of three commercially available fit-test methods.

    PubMed

    Janssen, Larry L; Luinenburg, D Michael; Mullins, Haskell E; Nelson, Thomas J

    2002-01-01

    American National Standards Institute (ANSI) standard Z88.10, Respirator Fit Testing Methods, includes criteria to evaluate new fit-tests. The standard allows generated aerosol, particle counting, or controlled negative pressure quantitative fit-tests to be used as the reference method to determine acceptability of a new test. This study examined (1) comparability of three Occupational Safety and Health Administration-accepted fit-test methods, all of which were validated using generated aerosol as the reference method; and (2) the effect of the reference method on the apparent performance of a fit-test method under evaluation. Sequential fit-tests were performed using the controlled negative pressure and particle counting quantitative fit-tests and the bitter aerosol qualitative fit-test. Of 75 fit-tests conducted with each method, the controlled negative pressure method identified 24 failures; bitter aerosol identified 22 failures; and the particle counting method identified 15 failures. The sensitivity of each method, that is, agreement with the reference method in identifying unacceptable fits, was calculated using each of the other two methods as the reference. None of the test methods met the ANSI sensitivity criterion of 0.95 or greater when compared with either of the other two methods. These results demonstrate that (1) the apparent performance of any fit-test depends on the reference method used, and (2) the fit-tests evaluated use different criteria to identify inadequately fitting respirators. Although "acceptable fit" cannot be defined in absolute terms at this time, the ability of existing fit-test methods to reject poor fits can be inferred from workplace protection factor studies.

  15. A Tale of Two Methods: Chart and Interview Methods for Identifying Delirium

    PubMed Central

    Saczynski, Jane S.; Kosar, Cyrus M.; Xu, Guoquan; Puelle, Margaret R.; Schmitt, Eva; Jones, Richard N.; Marcantonio, Edward R.; Wong, Bonnie; Isaza, Ilean; Inouye, Sharon K.

    2014-01-01

    Background Interview and chart-based methods for identifying delirium have been validated. However, the relative strengths and limitations of each method have not been described, nor has a combined approach (using both interview and chart) been systematically examined. Objectives To compare chart and interview-based methods for identification of delirium. Design, Setting and Participants Participants were 300 patients aged 70+ undergoing major elective surgery (the majority orthopedic surgery) who were interviewed daily during hospitalization for delirium using the Confusion Assessment Method (CAM; interview-based method) and whose medical charts were reviewed for delirium using a validated chart-review method (chart-based method). We examined the rate of agreement between the two methods and the patient characteristics of those identified using each approach. Predictive validity for clinical outcomes (length of stay, postoperative complications, discharge disposition) was compared. In the absence of a gold standard, predictive value could not be calculated. Results The cumulative incidence of delirium was 23% (n=68) by the interview-based method, 12% (n=35) by the chart-based method and 27% (n=82) by the combined approach. Overall agreement was 80%; kappa was 0.30. The methods differed in detection of psychomotor features and time of onset. The chart-based method missed delirium in CAM-identified patients lacking features of psychomotor agitation or inappropriate behavior. The CAM-based method missed chart-identified cases occurring during the night shift. The combined method had high predictive validity for all clinical outcomes. Conclusions Interview and chart-based methods have specific strengths for identification of delirium. A combined approach captures the largest number and the broadest range of delirium cases. PMID:24512042
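    Cohen's kappa, the agreement statistic reported above, can be recomputed from a 2x2 table. The counts below are reconstructed from the abstract's marginals (68 interview-positive, 35 chart-positive, 82 positive by either method, 300 patients) and are therefore inferred, not taken directly from the paper:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two raters on a 2x2 table:
    a = both positive, b = rater 1 only, c = rater 2 only, d = both negative."""
    n = a + b + c + d
    po = (a + d) / n                                        # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Reconstructed counts: interview+ = a+b = 68, chart+ = a+c = 35, either = 82
kappa = cohens_kappa(a=21, b=47, c=14, d=218)
print(round(kappa, 2))  # 0.3, consistent with the reported kappa of 0.30
```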

  16. Inventory Management for Irregular Shipment of Goods in Distribution Centre

    NASA Astrophysics Data System (ADS)

    Takeda, Hitoshi; Kitaoka, Masatoshi; Usuki, Jun

    2016-01-01

    The shipping amounts of commodity goods (foods, confectionery, dairy products, and cosmetic and pharmaceutical products) change irregularly at distribution centers dealing with general consumer goods. Because the shipment times and amounts are irregular, demand forecasting becomes very difficult, and inventory control becomes difficult as well. Conventional inventory control methods cannot be applied to the shipment of such commodities. This paper proposes a method for inventory control based on the cumulative flow curve, in which the order quantity is decided from the cumulative flow curve. Three forecasting methods are proposed: 1) the power method, 2) the polynomial method, and 3) the revised Holt's linear method, a trend-capable variant of exponential smoothing. This paper compares the economics of the conventional method, which is managed by experienced staff, with the three newly proposed methods, and the effectiveness of the proposed methods is verified through numerical calculations.
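    Holt's linear method mentioned above is double exponential smoothing with a trend term. A minimal sketch with one common initialization and illustrative shipment data (the authors' "revised" variant is not specified here):

```python
def holt_forecast(series, alpha, beta, horizon):
    """Holt's linear (double exponential smoothing) h-step forecast:
    level  l_t = alpha*y_t + (1-alpha)*(l_{t-1} + b_{t-1})
    trend  b_t = beta*(l_t - l_{t-1}) + (1-beta)*b_{t-1}
    forecast = l_T + horizon * b_T
    """
    level, trend = series[0], series[1] - series[0]  # common initialization
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# On perfectly trending data the forecast continues the line exactly
shipments = [100, 110, 120, 130, 140]
print(round(holt_forecast(shipments, alpha=0.5, beta=0.3, horizon=2), 6))  # 160.0
```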

  17. Computational Methods in Drug Discovery

    PubMed Central

    Sliwoski, Gregory; Kothiwale, Sandeepkumar; Meiler, Jens

    2014-01-01

    Computer-aided drug discovery/design methods have played a major role in the development of therapeutically important small molecules for over three decades. These methods are broadly classified as either structure-based or ligand-based methods. Structure-based methods are in principle analogous to high-throughput screening in that both target and ligand structure information is imperative. Structure-based approaches include ligand docking, pharmacophore, and ligand design methods. The article discusses theory behind the most important methods and recent successful applications. Ligand-based methods use only ligand information for predicting activity depending on its similarity/dissimilarity to previously known active ligands. We review widely used ligand-based methods such as ligand-based pharmacophores, molecular descriptors, and quantitative structure-activity relationships. In addition, important tools such as target/ligand data bases, homology modeling, ligand fingerprint methods, etc., necessary for successful implementation of various computer-aided drug discovery/design methods in a drug discovery campaign are discussed. Finally, computational methods for toxicity prediction and optimization for favorable physiologic properties are discussed with successful examples from literature. PMID:24381236
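    Ligand-based similarity searching of the kind described above is commonly scored with the Tanimoto coefficient on fingerprint bits. A minimal sketch with hypothetical fingerprints (on-bit indices invented for illustration):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) coefficient between two fingerprint bit sets:
    |A & B| / |A | B|, a standard ligand-based similarity measure."""
    a, b = set(fp_a), set(fp_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical on-bit indices of two molecular fingerprints
query = {3, 17, 42, 101, 256}
hit = {3, 17, 42, 256, 300}
print(round(tanimoto(query, hit), 3))  # 0.667: 4 shared bits of 6 total
```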

  18. [Primary culture of human normal epithelial cells].

    PubMed

    Tang, Yu; Xu, Wenji; Guo, Wanbei; Xie, Ming; Fang, Huilong; Chen, Chen; Zhou, Jun

    2017-11-28

    Traditional methods for the primary culture of human normal epithelial cells suffer from low cell viability, low culture success rates, and complicated procedures. To address these problems, researchers have studied the culture process of normal primary human epithelial cells extensively. In this paper, we mainly introduce methods used to separate and purify human normal epithelial cells, such as the tissue explant method, enzymatic digestion, mechanical brushing, red blood cell lysis, and Percoll density gradient separation. We also review methods used in culture and subculture, including serum-free medium combined with low-serum culture, mouse tail collagen coating, and glass culture bottles combined with plastic culture dishes. The biological characteristics of human normal epithelial cells and the methods of immunocytochemical staining and trypan blue exclusion are described. Moreover, the factors affecting culture success, including aseptic technique, the extracellular environment during culture, the number of differential adhesion steps, and the selection and dosage of additives, are summarized.

  19. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a ferromagnetic target localization method with high real-time capability. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and the inaccurate position estimate leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those of the traditional STAR method. PMID:27999322

  20. Comparison of three explicit multigrid methods for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.; Turkel, Eli; Schaffer, Steve

    1987-01-01

    Three explicit multigrid methods, Ni's method, Jameson's finite-volume method, and a finite-difference method based on Brandt's work, are described and compared for two model problems. All three methods use an explicit multistage Runge-Kutta scheme on the fine grid, and this scheme is also described. Convergence histories for inviscid flow over a bump in a channel for the fine-grid scheme alone show that convergence rate is proportional to Courant number and that implicit residual smoothing can significantly accelerate the scheme. Ni's method was slightly slower than the implicitly-smoothed scheme alone. Brandt's and Jameson's methods are shown to be equivalent in form but differ in their node versus cell-centered implementations. They are about 8.5 times faster than Ni's method in terms of CPU time. Results for an oblique shock/boundary layer interaction problem verify the accuracy of the finite-difference code. All methods slowed considerably on the stretched viscous grid but Brandt's method was still 2.1 times faster than Ni's method.
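
    The coarse-grid acceleration shared by all three methods can be sketched with a two-grid correction scheme for a 1-D Poisson model problem. This is a sketch only: weighted Jacobi stands in for the papers' multistage Runge-Kutta smoother, and the grid sizes and sweep counts are illustrative assumptions.

```python
import numpy as np

def jacobi(u, f, h, sweeps, omega=2.0/3.0):
    """Weighted Jacobi smoothing for the standard 3-point Laplacian."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2.0*u[1:-1])
    return u

def two_grid_cycle(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                                  # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0*u[1:-1] + u[2:]) / (h*h)     # residual
    rc = np.zeros((len(u) - 1) // 2 + 1)                           # restrict
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]       # (full weighting)
    hc, m = 2.0*h, len(rc) - 2
    Ac = (2.0*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (hc*hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])                       # coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                                    # prolong
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])                           # (linear interp.)
    return jacobi(u + e, f, h, sweeps=3)                           # post-smooth

n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)            # manufactured so u(x) = sin(pi*x)
u, h = np.zeros(n + 1), 1.0 / n
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))
```

    The smoother damps high-frequency error on the fine grid; the remaining smooth error is cheap to remove on the coarse grid, which is the mechanism behind the convergence accelerations reported above.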

  1. Robust numerical solution of the reservoir routing equation

    NASA Astrophysics Data System (ADS)

    Fiorentini, Marcello; Orlandini, Stefano

    2013-09-01

    The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
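
    The backstepping control described above can be sketched for a hypothetical linear reservoir; the storage function, outflow rating, and inflow hydrograph below are illustrative assumptions, not the paper's test case.

```python
import math

# Hypothetical linear reservoir: storage S in m^3, outflow Q(S) = S / K with
# K = 3600 s, and a decaying inflow hydrograph I(t) in m^3/s.
K = 3600.0
def inflow(t):  return 20.0 * math.exp(-t / 7200.0)
def outflow(S): return S / K

def rk4_step(S, t, dt):
    """One classical fourth-order Runge-Kutta step of dS/dt = I(t) - Q(S)."""
    f = lambda t, S: inflow(t) - outflow(S)
    k1 = f(t, S)
    k2 = f(t + dt/2, S + dt/2 * k1)
    k3 = f(t + dt/2, S + dt/2 * k2)
    k4 = f(t + dt, S + dt * k3)
    return S + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

def route(S0, t_end, dt):
    """RK4 routing with a simple backstepping control: halve the step whenever
    it would drive storage outside the domain of the storage function."""
    t, S = 0.0, S0
    while t < t_end:
        step = min(dt, t_end - t)
        S_new = rk4_step(S, t, step)
        while S_new < 0.0 and step > 1e-6:
            step /= 2.0                 # backstep: retry with a smaller dt
            S_new = rk4_step(S, t, step)
        t, S = t + step, S_new
    return S

print(route(S0=50000.0, t_end=7200.0, dt=600.0))
```

    For this linear case the exact solution is available in closed form, so the routed storage can be checked analytically; the backstepping branch only engages in the critical phases (e.g. the end of a recession limb) where a full step would leave the admissible domain.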

  2. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
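
    The LASSO recommended above can be sketched with cyclic coordinate descent and soft-thresholding. The data and penalty below are toy assumptions, and a real NTCP model would use a logistic likelihood rather than this squared-error form; the point is only how the L1 penalty zeroes out uninformative predictors.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO by cyclic coordinate descent with soft-thresholding.
    Minimizes 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # residual excluding feature j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

# Toy example: only the first two of five predictors carry signal, so a
# sufficiently large penalty should zero out the irrelevant coefficients.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
b = lasso_cd(X, y, lam=20.0)
print(np.round(b, 2))
```

    The surviving nonzero coefficients form the kind of sparse, interpretable model the abstract credits the LASSO with, in contrast to the averaged models produced by BMA.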

  3. Construction of exponentially fitted symplectic Runge-Kutta-Nyström methods from partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Theodore; Kalogiratou, Zacharoula; Simos, T. E.

    2014-10-01

    In this work we derive exponentially fitted symplectic Runge-Kutta-Nyström (RKN) methods from symplectic exponentially fitted partitioned Runge-Kutta (PRK) methods (for the approximate solution of general problems of this category see [18] - [40] and references therein). We construct RKN methods from PRK methods with up to five stages and fourth algebraic order.

  4. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study

    PubMed Central

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-01-01

    Background Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study – often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Methods Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. Results 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods – particularly surveys and individual interviews – but used methods in a wide range of roles. Conclusion Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. 
Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations. PMID:17570838

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, K.; Fondeur, F.; White, T.

    Savannah River National Laboratory (SRNL) was tasked with identifying and developing at least one, but preferably two, methods for quantifying the suppressor in the Next Generation Solvent (NGS) system. The suppressor is a guanidine derivative, N,N',N"-tris(3,7-dimethyloctyl)guanidine (TiDG). A list of 10 possible methods was generated, and screening experiments were performed for 8 of the 10 methods. After completion of the screening experiments, the non-aqueous acid-base titration was determined to be the most promising, and was selected for further development as the primary method. {sup 1}H NMR also showed promising results from the screening experiments, and this method was selected for further development as the secondary method. Other methods, including {sup 36}Cl radiocounting and ion chromatography, also showed promise; however, due to the similarity to the primary method (titration) and the inability to differentiate between TiDG and TOA (tri-n-octylamine) in the blended solvent, {sup 1}H NMR was selected over these methods. Analysis of radioactive samples obtained from real waste ESS (extraction, scrub, strip) testing using the titration method showed good results. Based on these results, the titration method was selected as the method of choice for TiDG measurement. {sup 1}H NMR has been selected as the secondary (back-up) method, and additional work is planned to further develop this method and to verify it using radioactive samples. Procedures for analyzing radioactive samples of both pure NGS and blended solvent were developed and issued for both methods.

  6. Novel atomic absorption spectrometric and rapid spectrophotometric methods for the quantitation of paracetamol in saliva: application to pharmacokinetic studies.

    PubMed

    Issa, M M; Nejem, R M; El-Abadla, N S; Al-Kholy, M; Saleh, Akila A

    2008-01-01

    A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) and the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form a Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions have been spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. High sensitivities are recorded for methods I and II and the atomic absorption spectrometric method: 0.05, 0.022 and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied to a pharmacokinetic study by means of salivary samples in normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%.

  7. Novel Atomic Absorption Spectrometric and Rapid Spectrophotometric Methods for the Quantitation of Paracetamol in Saliva: Application to Pharmacokinetic Studies

    PubMed Central

    Issa, M. M.; Nejem, R. M.; El-Abadla, N. S.; Al-Kholy, M.; Saleh, Akila. A.

    2008-01-01

    A novel atomic absorption spectrometric method and two highly sensitive spectrophotometric methods were developed for the determination of paracetamol. These techniques are based on the oxidation of paracetamol by iron (III) (method I) and the oxidation of p-aminophenol after the hydrolysis of paracetamol (method II). Iron (II) then reacts with potassium ferricyanide to form a Prussian blue color with a maximum absorbance at 700 nm. The atomic absorption method was accomplished by extracting the excess iron (III) in method II and aspirating the aqueous layer into an air-acetylene flame to measure the absorbance of iron (II) at 302.1 nm. The reactions have been spectrometrically evaluated to attain optimum experimental conditions. Linear responses were exhibited over the ranges 1.0-10, 0.2-2.0 and 0.1-1.0 μg/ml for method I, method II and the atomic absorption spectrometric method, respectively. High sensitivities are recorded for methods I and II and the atomic absorption spectrometric method: 0.05, 0.022 and 0.012 μg/ml, respectively. The limits of quantitation of paracetamol by method II and the atomic absorption spectrometric method were 0.20 and 0.10 μg/ml. Method II and the atomic absorption spectrometric method were applied to a pharmacokinetic study by means of salivary samples in normal volunteers who received 1.0 g paracetamol. Intra- and inter-day precision did not exceed 6.9%. PMID:20046743

  8. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.
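
    The core idea, estimating how an optimal solution shifts with a problem parameter by re-solving and differencing, can be sketched as follows. The objective and the crude gradient-descent inner solver are illustrative stand-ins for the RQP algorithm, chosen because the test problem's sensitivity is known analytically.

```python
def minimize(f, x0, lr=0.1, iters=500):
    """Crude gradient descent with a numerical derivative (stand-in for RQP)."""
    x, h = x0, 1e-6
    for _ in range(iters):
        g = (f(x + h) - f(x - h)) / (2 * h)
        x -= lr * g
    return x

def solution_sensitivity(obj, p, dp=1e-4):
    """Central-difference estimate of dx*/dp: re-solve at p ± dp and difference."""
    x_plus  = minimize(lambda x: obj(x, p + dp), x0=1.0)
    x_minus = minimize(lambda x: obj(x, p - dp), x0=1.0)
    return (x_plus - x_minus) / (2 * dp)

# Test problem with a known answer: min_x (x - p**2)**2 has x*(p) = p**2,
# so the sensitivity dx*/dp at p = 1.5 is 2p = 3.
obj = lambda x, p: (x - p ** 2) ** 2
print(solution_sensitivity(obj, p=1.5))  # ≈ 3.0 (analytic value 2*p)
```

    Each sensitivity costs two full re-solves here, which is exactly the expense the RQP-based approach in the abstract is designed to reduce by reusing information from the original optimization.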

  9. X-ray imaging using amorphous selenium: a photoinduced discharge readout method for digital mammography.

    PubMed

    Rowlands, J A; Hunter, D M; Araj, N

    1991-01-01

    A new digital image readout method for electrostatic charge images on photoconductive plates is described. The method can be used to read out images on selenium plates similar to those used in xeromammography. The readout method, called the air-gap photoinduced discharge method (PID), discharges the latent image pixel by pixel and measures the charge. The PID readout method, like electrometer methods, is linear. However, the PID method permits much better resolution than scanning electrometers while maintaining quantum limited performance at high radiation exposure levels. Thus the air-gap PID method appears to be uniquely superior for high-resolution digital imaging tasks such as mammography.

  10. Quantitative naturalistic methods for detecting change points in psychotherapy research: an illustration with alliance ruptures.

    PubMed

    Eubanks-Carter, Catherine; Gorman, Bernard S; Muran, J Christopher

    2012-01-01

    Analysis of change points in psychotherapy process could increase our understanding of mechanisms of change. In particular, naturalistic change point detection methods that identify turning points or breakpoints in time series data could enhance our ability to identify and study alliance ruptures and resolutions. This paper presents four categories of statistical methods for detecting change points in psychotherapy process: criterion-based methods, control chart methods, partitioning methods, and regression methods. Each method's utility for identifying shifts in the alliance is illustrated using a case example from the Beth Israel Psychotherapy Research program. Advantages and disadvantages of the various methods are discussed.
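
    A minimal example of a partitioning method of this kind locates a single mean-shift change point by minimizing within-segment squared error; the alliance-like scores below are invented purely for illustration.

```python
def find_change_point(series):
    """Single mean-shift change point: choose the split that minimizes the
    total within-segment sum of squared deviations (a partitioning method)."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_k, best_cost = None, float("inf")
    for k in range(1, len(series)):          # split so the 2nd segment starts at k
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Session scores that drop sharply mid-series (a rupture-like shift): the
# detected change point is the index where the second segment begins.
scores = [5.1, 4.9, 5.0, 5.2, 2.1, 1.9, 2.0, 2.2]
print(find_change_point(scores))  # → 4
```

    Applied recursively to each segment (binary segmentation), the same idea finds multiple shifts, which is how partitioning methods scale to a full course of therapy sessions.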

  11. A comparative study of interface reconstruction methods for multi-material ALE simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kucharik, Milan; Garimalla, Rao; Schofield, Samuel

    2009-01-01

    In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells, and the Moment-of-Fluid (MOF) method. We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two, while the solutions with VOF using the wrong material order are considerably worse.

  12. Digital photography and transparency-based methods for measuring wound surface area.

    PubMed

    Bhedi, Amul; Saxena, Atul K; Gadani, Ravi; Patel, Ritesh

    2013-04-01

    To compare and determine a credible method of measurement of wound surface area by linear, transparency, and photographic methods for monitoring progress of wound healing accurately, and to ascertain whether these methods are significantly different. From April 2005 to December 2006, 40 patients (30 men, 5 women, 5 children) admitted to the surgical ward of Shree Sayaji General Hospital, Baroda, had clean as well as infected wounds following trauma, debridement, pressure sores, venous ulcers, and incision and drainage. Wound surface areas were measured by these three methods (linear, transparency, and photographic) simultaneously on alternate days. The linear method is statistically significantly different from the transparency and photographic methods (P value <0.05), but there is no significant difference between the transparency and photographic methods (P value >0.05). The photographic and transparency methods provided measurements of wound surface area with equivalent results, and there was no statistically significant difference between these two methods.

  13. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
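
    The kernel idea, parameterizing the image as x = Kα and updating the kernel coefficients α with the ML-EM algorithm, can be sketched on a toy system. The system matrix, "anatomical" kernel features, noiseless data, and iteration count below are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_det, n_pix = 30, 10
A = rng.uniform(0.1, 1.0, size=(n_det, n_pix))      # toy system matrix (positive)
feats = rng.standard_normal((n_pix, 2))             # stand-in anatomical features
d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 2.0)                               # Gaussian kernel matrix

a_true = rng.uniform(0.5, 2.0, n_pix)               # ground-truth coefficients
x_true = K @ a_true                                 # image lives in the span of K
y = A @ x_true                                      # noiseless data for brevity

a = np.ones(n_pix)                                  # kernel coefficients
sens = K.T @ (A.T @ np.ones(n_det))                 # sensitivity term
for _ in range(500):
    ratio = y / (A @ (K @ a))                       # data / forward projection
    a *= (K.T @ (A.T @ ratio)) / sens               # kernelized ML-EM update
x_rec = K @ a
print(np.round(x_rec, 2))
```

    Because the update is just ML-EM applied to the composite matrix A·K, nonnegativity is preserved automatically and the anatomical information enters only through K, with no segmentation or penalty term, which is the simplicity the abstract emphasizes.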

  14. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  15. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. This paper proposes a method for automatic peak detection in LIBS spectra that enhances the ability to resolve overlapping peaks and improves adaptivity. We introduced the ridge peak detection method based on the continuous wavelet transform to LIBS, discussed the choice of the mother wavelet, and optimized the scale factor and the shift factor. We also improved the ridge peak detection method with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and it can be applied to data processing in LIBS.
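
    The scale-based separation underlying CWT peak detection can be sketched as follows. This simplified version sums the transform over a few scale factors and thresholds local maxima, rather than tracing ridge lines as the paper's method does; the synthetic spectrum, scales, and threshold are illustrative assumptions.

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet, a common choice of mother wavelet."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-t ** 2 / (2.0 * a ** 2))

def cwt_peaks(signal, scales=(1, 2, 4), rel_threshold=0.3):
    """Sum the CWT over several scale factors, then keep local maxima above a
    relative threshold. Real peaks respond at all scales, while the zero-mean
    wavelet suppresses a smooth, slowly varying background."""
    agg = np.zeros(len(signal))
    for a in scales:
        w = ricker(10 * a + 1, a)
        agg += np.convolve(signal, w, mode="same")
    is_max = np.zeros(len(signal), dtype=bool)
    is_max[1:-1] = (agg[1:-1] > agg[:-2]) & (agg[1:-1] > agg[2:])
    return np.flatnonzero(is_max & (agg > rel_threshold * agg.max()))

# Two Gaussian lines on a sloping background with mild noise.
x = np.arange(200)
rng = np.random.default_rng(2)
y = (np.exp(-(x - 60) ** 2 / 18.0) + np.exp(-(x - 140) ** 2 / 32.0)
     + 0.002 * x + 0.005 * rng.standard_normal(200))
print(cwt_peaks(y))
```

    Because the wavelet integrates to zero, the linear background contributes almost nothing to the transform, which is why CWT-based detection is robust to the baseline that defeats direct comparison and derivative methods.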

  16. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  17. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  18. [Theory, method and application of method R on estimation of (co)variance components].

    PubMed

    Liu, Wen-Zhong

    2004-07-01

    The theory, method and application of Method R for the estimation of (co)variance components are reviewed so that the method can be used appropriately. Estimation requires R values, which are regressions of predicted random effects calculated from the complete dataset on predicted random effects calculated from random subsets of the same data. By using a multivariate iteration algorithm based on a transformation matrix, combined with the preconditioned conjugate gradient method to solve the mixed model equations, the computational efficiency of Method R is much improved. Method R is computationally inexpensive, and the sampling errors and approximate credible intervals of estimates can be obtained. Disadvantages of Method R include a larger sampling variance than other methods for the same data, and biased estimates in small datasets. As an alternative method, Method R can be used on larger datasets. It is necessary to study its theoretical properties and broaden its application range further.

  19. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
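
    The G.C.D. idea can be sketched directly: gcd(p, p') contains exactly the repeated factors of p, so dividing it out leaves a square-free polynomial whose zeros are simple and safe for Newton's or Muller's method. The coefficient-list helpers below are an illustrative float implementation with a tolerance in place of exact arithmetic.

```python
def poly_div(num, den):
    """Divide polynomials given as coefficient lists, highest degree first."""
    num, q = num[:], []
    while len(num) >= len(den):
        c = num[0] / den[0]
        q.append(c)
        for i, d in enumerate(den):
            num[i] -= c * d
        num.pop(0)                     # leading coefficient is now zero
    return q, num                      # quotient, remainder

def poly_deriv(p):
    n = len(p) - 1
    return [c * (n - i) for i, c in enumerate(p[:-1])]

def poly_gcd(a, b, tol=1e-9):
    """Euclidean algorithm with a tolerance for floating-point coefficients."""
    while b and max(abs(c) for c in b) > tol:
        _, r = poly_div(a, b)
        a, b = b, r
    return [c / a[0] for c in a]       # make the result monic

# p(x) = (x - 1)^2 * (x - 3) = x^3 - 5x^2 + 7x - 3 has the double zero x = 1.
# gcd(p, p') isolates the repeated factor (x - 1).
p = [1.0, -5.0, 7.0, -3.0]
g = poly_gcd(p, poly_deriv(p))
print([round(c, 6) for c in g])  # → [1.0, -1.0], i.e. x - 1
```

    Dividing p by this gcd gives the square-free part (x - 1)(x - 3); repeating the process on the gcd itself (the "repeated G.C.D." variant) separates zeros by multiplicity.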

  20. Evaluation of the methods for enumerating coliform bacteria from water samples using precise reference standards.

    PubMed

    Wohlsen, T; Bates, J; Vesey, G; Robinson, W A; Katouli, M

    2006-04-01

    To use BioBall cultures as a precise reference standard to evaluate methods for enumeration of Escherichia coli and other coliform bacteria in water samples. Eight methods were evaluated including membrane filtration, standard plate count (pour and spread plate methods), defined substrate technology methods (Colilert and Colisure), the most probable number method and the Petrifilm disposable plate method. Escherichia coli and Enterobacter aerogenes BioBall cultures containing 30 organisms each were used. All tests were performed using 10 replicates. The mean recovery of both bacteria varied with the different methods employed. The best and most consistent results were obtained with Petrifilm and the pour plate method. Other methods either yielded a low recovery or showed significantly high variability between replicates. The BioBall is a very suitable quality control tool for evaluating the efficiency of methods for bacterial enumeration in water samples.

  1. Wilsonian methods of concept analysis: a critique.

    PubMed

    Hupcey, J E; Morse, J M; Lenz, E R; Tasón, M C

    1996-01-01

    Wilsonian methods of concept analysis--that is, the method proposed by Wilson and Wilson-derived methods in nursing (as described by Walker and Avant; Chinn and Kramer [Jacobs]; Schwartz-Barcott and Kim; and Rodgers)--are discussed and compared in this article. The evolution and modifications of Wilson's method in nursing are described and research that has used these methods, assessed. The transformation of Wilson's method is traced as each author has adopted his techniques and attempted to modify the method to correct for limitations. We suggest that these adaptations and modifications ultimately erode Wilson's method. Further, the Wilson-derived methods have been overly simplified and used by nurse researchers in a prescriptive manner, and the results often do not serve the purpose of expanding nursing knowledge. We conclude that, considering the significance of concept development for the nursing profession, the development of new methods and a means for evaluating conceptual inquiry must be given priority.

  2. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has an uncorrected response error, our method also works significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.

  3. Study report on a double isotope method of calcium absorption

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Some of the pros and cons of three methods to study gastrointestinal calcium absorption are briefly discussed. The methods are: (1) a balance study; (2) a single isotope method; and (3) a double isotope method. A procedure for the double isotope method is also included.

  4. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. 
In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
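    The scale adjustment and blending described above can be sketched numerically. The following Python sketch is illustrative, not the study's exact procedure: the genomic matrix G is rescaled so its mean diagonal and mean off-diagonal match the pedigree matrix A (a common moment-matching choice, assumed here), and the combined matrix is a weighted average with relative pedigree weight w.

```python
import numpy as np

def adjust_genomic_matrix(G, A):
    """Rescale G as a + b*G so that its average diagonal and average
    off-diagonal elements match those of the pedigree matrix A.
    Illustrative moment-matching adjustment (an assumption)."""
    n = G.shape[0]
    diag = np.eye(n, dtype=bool)
    b = (A[diag].mean() - A[~diag].mean()) / (G[diag].mean() - G[~diag].mean())
    a = A[~diag].mean() - b * G[~diag].mean()
    return a + b * G

def blend(G_adj, A, w):
    """Combined relationship matrix with relative pedigree weight w
    (the study sweeps w from 0.05 to 0.40)."""
    return (1.0 - w) * G_adj + w * A
```

    With w = 0.20, this reproduces the blending weight the abstract reports as giving the best reliabilities.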

  5. Roka Listeria detection method using transcription mediated amplification to detect Listeria species in select foods and surfaces. Performance Tested Method(SM) 011201.

    PubMed

    Hua, Yang; Kaplan, Shannon; Reshatoff, Michael; Hu, Ernie; Zukowski, Alexis; Schweis, Franz; Gin, Cristal; Maroni, Brett; Becker, Michael; Wisniewski, Michele

    2012-01-01

    The Roka Listeria Detection Assay was compared to the reference culture methods for nine select foods and three select surfaces. The Roka method used Half-Fraser Broth for enrichment at 35 +/- 2 degrees C for 24-28 h. Comparison of Roka's method to the reference methods requires an unpaired approach. Each method had a total of 545 samples inoculated with a Listeria strain. Each food and surface was inoculated with a different strain of Listeria at two different levels per method. For the dairy products (Brie cheese, whole milk, and ice cream), our method was compared to AOAC Official Method(SM) 993.12. For the ready-to-eat meats (deli chicken, cured ham, chicken salad, and hot dogs) and environmental surfaces (sealed concrete, stainless steel, and plastic), samples were compared to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG) method MLG 8.07. Cold-smoked salmon and romaine lettuce were compared to the U.S. Food and Drug Administration/Bacteriological Analytical Manual, Chapter 10 (FDA/BAM) method. Roka's method had 358 positives out of 545 total inoculated samples, compared to 332 positives for the reference methods. Overall, probability-of-detection analysis of the results showed better or equivalent performance compared to the reference methods.

  6. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but can be chosen freely. However, the mesh grid chosen on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed to accompany the introduced propagation method, allowing appropriate mesh grids on the target board to be calculated for satisfactory results. For complex initial wave fields or propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that simulations using the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel number conditions. That is, it can simulate propagation efficiently and correctly over distances from almost zero to infinity, and can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
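    For background, the classical fixed-grid angular spectrum propagator that the paper's adaptive-grid variant builds on can be written with FFTs. This is a hedged sketch (function name and sampling conventions are assumptions, not the authors' implementation):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled 2-D complex field a distance z (metres)
    using the classical angular spectrum method on a fixed grid."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Free-space transfer function; evanescent components are zeroed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0, np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

    Because |H| = 1 on propagating components, total power is conserved, which is a useful sanity check for any mesh choice.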

  7. Reliability and accuracy of real-time visualization techniques for measuring school cafeteria tray waste: validating the quarter-waste method.

    PubMed

    Hanks, Andrew S; Wansink, Brian; Just, David R

    2014-03-01

    Measuring food waste is essential to determine the impact of school interventions on what children eat. There are multiple methods used for measuring food waste, yet it is unclear which method is most appropriate in large-scale interventions with restricted resources. This study examines which of three visual tray waste measurement methods is most reliable, accurate, and cost-effective compared with the gold standard of individually weighing leftovers. School cafeteria researchers used the following three visual methods to capture tray waste in addition to actual food waste weights for 197 lunch trays: the quarter-waste method, the half-waste method, and the photograph method. Inter-rater and inter-method reliability were highest for on-site visual methods (0.90 for the quarter-waste method and 0.83 for the half-waste method) and lowest for the photograph method (0.48). This low reliability is partially due to the inability of photographs to determine whether packaged items (such as milk or yogurt) are empty or full. In sum, the quarter-waste method was the most appropriate for calculating accurate amounts of tray waste, and the photograph method might be appropriate if researchers only wish to detect significant differences in waste or consumption of selected, unpackaged food. Copyright © 2014 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  8. Modified flotation method with the use of Percoll for the detection of Isospora suis oocysts in suckling piglet faeces.

    PubMed

    Karamon, Jacek; Ziomko, Irena; Cencek, Tomasz; Sroka, Jacek

    2008-10-01

    A modified flotation method for the examination of diarrhoeic piglet faeces for the detection of Isospora suis oocysts was developed. The method is based on removing the fat fraction from the faecal sample by centrifugation with a 25% Percoll solution. The investigations were carried out in comparison with the McMaster method. Of five variants of the Percoll flotation method, the best results were obtained when 2 ml of flotation liquid per 1 g of faeces were used. The limit of detection of the Percoll flotation method was 160 oocysts per 1 g, better than that of the McMaster method. The efficacy of the modified method was confirmed by results obtained in the examination of I. suis-infected piglets. Across all faecal samples, the Percoll flotation method yielded twice as many positive samples as the routine method. Oocysts were first detected by the Percoll flotation method on day 4 post-infection, i.e. one day earlier than with the McMaster method. During the experiment (except for 3 days), the extent of I. suis infection in the litter examined by the Percoll flotation method was higher than that found with the McMaster method. The results show that the modified flotation method using Percoll could be applied in the diagnosis of suckling piglet isosporosis.

  9. Comparison of concentration methods for rapid detection of hookworm ova in wastewater matrices using quantitative PCR.

    PubMed

    Gyawali, P; Ahmed, W; Jagals, P; Sidhu, J P S; Toze, S

    2015-12-01

    Hookworm infection accounts for around 700 million infections worldwide, especially in developing nations, due in part to increased use of wastewater for crop production. Effective recovery of hookworm ova from wastewater matrices is difficult because of their low concentrations and heterogeneous distribution. In this study, we compared the recovery rates of (i) four rapid hookworm ova concentration methods from municipal wastewater, and (ii) two concentration methods from sludge samples. Ancylostoma caninum ova were used as a surrogate for human hookworm (Ancylostoma duodenale and Necator americanus). Known concentrations of A. caninum ova were seeded into wastewater (treated and raw) and sludge samples collected from two wastewater treatment plants (WWTPs) in Brisbane and Perth, Australia. The A. caninum ova were concentrated from treated and raw wastewater samples by centrifugation (Method A), hollow fiber ultrafiltration (HFUF) (Method B), filtration (Method C) and flotation (Method D). For sludge samples, flotation (Method E) and direct DNA extraction (Method F) were used. Among the four methods tested, the filtration method (Method C) consistently recovered the highest concentrations of A. caninum ova from treated wastewater (39-50%) and raw wastewater (7.1-12%) samples collected from both WWTPs. The remaining methods (Methods A, B and D) yielded variable recovery rates ranging from 0.2 to 40% for treated and raw wastewater samples. Recovery rates for sludge samples were poor (0.02-4.7%), although Method F (direct DNA extraction) provided a recovery rate 1-2 orders of magnitude higher than Method E (flotation). Based on our results, it can be concluded that recovery rates of hookworm ova from wastewater matrices, especially sludge samples, can be poor and highly variable. Therefore, the choice of concentration method is vital for sensitive detection of hookworm ova in wastewater matrices. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.

  10. Achieving cost-neutrality with long-acting reversible contraceptive methods

    PubMed Central

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2014-01-01

    Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking into consideration discontinuation. Study design A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were average annual cost per method and the minimum duration of LARC method usage needed to achieve cost-savings compared to SARC methods. Results The two least expensive methods were the copper IUD ($304 per woman per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch) per woman per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. 
This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161
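    The break-even logic can be illustrated with a deliberately simplified two-method cost comparison. The paper's model is a three-state model with discontinuation; the numbers and function names below are hypothetical:

```python
def cumulative_cost(upfront, annual_running, years):
    """Cumulative cost of a method: one-time acquisition/insertion
    cost plus yearly running costs (failure costs folded into the
    annual figure). All figures hypothetical."""
    return upfront + annual_running * years

def breakeven_years(larc_upfront, larc_annual, sarc_annual, max_years=5):
    """Smallest whole year at which the LARC total cost drops below
    the SARC total cost, or None within the horizon."""
    for y in range(1, max_years + 1):
        if cumulative_cost(larc_upfront, larc_annual, y) < cumulative_cost(0, sarc_annual, y):
            return y
    return None
```

    With a hypothetical $900 insertion cost, $50/year LARC running cost and $500/year SARC cost, the break-even falls at year 3, consistent in spirit with the abstract's "within 3 years" finding.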

  11. A method for addressing differences in concentrations of fipronil and three degradates obtained by two different laboratory methods

    USGS Publications Warehouse

    Crawford, Charles G.; Martin, Jeffrey D.

    2017-07-21

    In October 2012, the U.S. Geological Survey (USGS) began measuring the concentration of the pesticide fipronil and three of its degradates (desulfinylfipronil, fipronil sulfide, and fipronil sulfone) by a new laboratory method using direct aqueous-injection liquid chromatography tandem mass spectrometry (DAI LC–MS/MS). This method replaced the previous method, in use since 2002, that used gas chromatography/mass spectrometry (GC/MS). The performance of the two methods is not comparable for fipronil and the three degradates: concentrations of these four compounds determined by the DAI LC–MS/MS method are substantially lower than those determined by the GC/MS method. A procedure was therefore developed to correct for the difference in concentrations obtained by the two laboratory methods, based on a methods-comparison field study done in 2012. For this study, environmental and field matrix spike samples from 48 stream sites across the United States, each sampled approximately three times, were analyzed by both methods. These data were used to develop a relation between the two laboratory methods for each compound using regression analysis. The relations were used to calibrate data obtained by the older method to the new method in order to remove any biases attributable to differences in the methods. The coefficients of the regression equations were used to calibrate over 16,600 observations of fipronil and the three degradates determined by the GC/MS method, retrieved from the USGS National Water Information System. The calibrated values were then compared to over 7,800 observations of fipronil and the three degradates determined by the DAI LC–MS/MS method, also retrieved from the National Water Information System. The original and calibrated values from the GC/MS method, along with measures of uncertainty in the calibrated values and the original values from the DAI LC–MS/MS method, are provided in an accompanying data release.
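    The calibration step, fitting a regression between paired measurements and mapping old-method values onto the new method's scale, can be sketched as follows. This is plain ordinary least squares; the USGS regressions may differ in functional form and in their handling of censored values:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def calibrate(old_values, a, b):
    """Map GC/MS-era concentrations onto the DAI LC-MS/MS scale
    using the fitted relation."""
    return [a + b * v for v in old_values]
```

    In the study, one such relation is fitted per compound from the paired 2012 field-study data and then applied to the historical GC/MS record.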

  12. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  13. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  14. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  15. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  16. 77 FR 48733 - Transitional Program for Covered Business Method Patents-Definitions of Covered Business Method...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-14

    ... Office 37 CFR Part 42 Transitional Program for Covered Business Method Patents--Definitions of Covered Business Method Patent and Technological Invention; Final Rule. Federal Register / Vol. 77, No. 157... Business Method Patents-- Definitions of Covered Business Method Patent and Technological Invention AGENCY...

  17. 24 CFR 291.90 - Sales methods.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Sales methods. 291.90 Section 291....90 Sales methods. HUD will prescribe the terms and conditions for all methods of sale. HUD may, in... following methods of sale: (a) Future REO acquisition method. The Future Real Estate-Owned (REO) acquisition...

  18. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  19. A Review of Methods for Missing Data.

    ERIC Educational Resources Information Center

    Pigott, Therese D.

    2001-01-01

    Reviews methods for handling missing data in a research study. Model-based methods, such as maximum likelihood using the EM algorithm and multiple imputation, hold more promise than ad hoc methods. Although model-based methods require more specialized computer programs and assumptions about the nature of missing data, these methods are appropriate…
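    A minimal illustration of the model-based flavor the review favors is single-pass regression imputation, where missing values are filled from a model fitted to the complete cases. Multiple imputation would repeat this fill several times with added noise; the code is an illustrative sketch, not taken from the review:

```python
def regression_impute(pairs):
    """pairs: list of (x, y) where y may be None. Fit y ~ a + b*x on
    the complete cases, then fill each missing y from the fitted
    line. One deterministic pass for illustration only."""
    complete = [(x, y) for x, y in pairs if y is not None]
    n = len(complete)
    mx = sum(x for x, _ in complete) / n
    my = sum(y for _, y in complete) / n
    sxx = sum((x - mx) ** 2 for x, _ in complete)
    sxy = sum((x - mx) * (y - my) for x, y in complete)
    b = sxy / sxx
    a = my - b * mx
    return [(x, y if y is not None else a + b * x) for x, y in pairs]
```

    Unlike ad hoc deletion, this uses the observed relationship between variables, which is the core argument for model-based methods.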

  20. The Views of Turkish Pre-Service Teachers about Effectiveness of Cluster Method as a Teaching Writing Method

    ERIC Educational Resources Information Center

    Kitis, Emine; Türkel, Ali

    2017-01-01

    The aim of this study is to find out Turkish pre-service teachers' views on the effectiveness of the cluster method as a writing teaching method. The Cluster Method can be defined as a connotative creative writing method. The way the method works is that the person who brainstorms on connotations of a word or a concept in absence of any kind of…

  1. Assay of fluoxetine hydrochloride by titrimetric and HPLC methods.

    PubMed

    Bueno, F; Bergold, A M; Fröehlich, P E

    2000-01-01

    Two alternative methods are proposed for the assay of Fluoxetine Hydrochloride: a titrimetric method and an HPLC method using water (pH 3.5):acetonitrile (65:35) as the mobile phase. These methods were applied to the determination of Fluoxetine as such or in formulations (capsules). The titrimetric method is an alternative for pharmacies and small industries. Both methods showed accuracy and precision and are alternatives to the official methods.

  2. Thermophysical Properties of Matter - The TPRC Data Series. Volume 3. Thermal Conductivity - Nonmetallic Liquids and Gases

    DTIC Science & Technology

    1970-01-01

    design and experimentation. I. The Shock-Tube Method Smiley [546] introduced the use of shock waves...one of the greatest disadvantages of this technique. Both the unique adaptability of the shock-tube method for high-temperature measurement of...Line-Source Flow Method H. The Hot-Wire Thermal Diffusion Column Method I. The Shock-Tube Method J. The Arc Method K. The Ultrasonic Method.

  3. New methods for the numerical integration of ordinary differential equations and their application to the equations of motion of spacecraft

    NASA Technical Reports Server (NTRS)

    Banyukevich, A.; Ziolkovski, K.

    1975-01-01

    A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
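    One of the methods named above, Bulirsch-Stoer extrapolation, rests on Richardson extrapolation of the modified midpoint rule. A minimal two-level sketch (illustrative, with assumed names; not the authors' code) looks like this:

```python
def modified_midpoint(f, t0, y0, H, n):
    """Advance y' = f(t, y) from t0 by H using n midpoint substeps."""
    h = H / n
    z0, z1 = y0, y0 + h * f(t0, y0)
    for i in range(1, n):
        z0, z1 = z1, z0 + 2 * h * f(t0 + i * h, z1)
    return 0.5 * (z0 + z1 + h * f(t0 + H, z1))

def extrapolated_step(f, t0, y0, H):
    """One Richardson-extrapolated step combining two substep counts,
    the basic idea behind Bulirsch-Stoer extrapolation."""
    y2 = modified_midpoint(f, t0, y0, H, 2)
    y4 = modified_midpoint(f, t0, y0, H, 4)
    # Midpoint error expands in even powers of h, so this combination
    # cancels the leading h^2 term.
    return y4 + (y4 - y2) / 3.0
```

    The full Bulirsch-Stoer scheme extends this to a whole tableau of substep counts with adaptive order and step size; the minimum-computer-time criterion mentioned above governs such choices.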

  4. Comparison of measurement methods for capacitive tactile sensors and their implementation

    NASA Astrophysics Data System (ADS)

    Tarapata, Grzegorz; Sienkiewicz, Rafał

    2015-09-01

    This paper presents a review of ideas and implementations of measurement methods utilized for capacitance measurements in tactile sensors. The paper describes the technical method, the charge amplification method, and the generation and integration methods. Three selected methods were implemented in a dedicated measurement system and used for capacitance measurements of tactile sensors made in-house. The tactile sensors tested in this work were fully fabricated with inkjet printing technology. The test results are presented and summarised. The charge amplification method (CDC) was selected as the best method for the measurement of the tactile sensors.

  5. On time discretizations for spectral methods. [numerical integration of Fourier and Chebyshev methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Turkel, E.

    1980-01-01

    New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
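    The Fourier method with a Runge-Kutta time integrator, as discussed above, can be sketched on the 1-D periodic advection equation. This is a hedged illustration with assumed names, using numpy's FFT for the spectral derivative:

```python
import numpy as np

def spectral_dudx(u, L):
    """Differentiate a real periodic sample on [0, L) via the FFT
    (the Fourier method's spatial derivative)."""
    n = u.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

def rk4_step(u, dt, L, c=1.0):
    """One classical Runge-Kutta step for u_t + c*u_x = 0 with the
    spatial derivative evaluated spectrally."""
    f = lambda v: -c * spectral_dudx(v, L)
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

    For a single Fourier mode the spectral derivative is exact, so the only error is the Runge-Kutta time discretization, which is the trade-off the paper analyzes.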

  6. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results compared with the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
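    The RBF branch of the comparison can be illustrated with a Gaussian radial basis surrogate and its analytic derivative, the quantity of interest in derivative estimation. This is an illustrative 1-D sketch; the study's multivariate formulations and shape parameters will differ:

```python
import numpy as np

def rbf_fit(x, y, eps=1.0):
    """Solve for Gaussian RBF weights interpolating (x, y) exactly."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(A, y)

def rbf_eval(x_train, w, x_new, eps=1.0):
    """Evaluate the RBF surrogate at new points."""
    A = np.exp(-(eps * (x_new[:, None] - x_train[None, :])) ** 2)
    return A @ w

def rbf_deriv(x_train, w, x_new, eps=1.0):
    """Analytic first derivative of the Gaussian RBF surrogate,
    the kind of derivative estimate compared across methods."""
    d = x_new[:, None] - x_train[None, :]
    A = -2 * eps ** 2 * d * np.exp(-(eps * d) ** 2)
    return A @ w
```

    Because the surrogate is smooth and differentiable in closed form, derivative estimates come for free once the weights are fitted, one reason interpolating methods did well in the comparison.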

  7. Two smart spectrophotometric methods for the simultaneous estimation of Simvastatin and Ezetimibe in combined dosage form

    NASA Astrophysics Data System (ADS)

    Magdy, Nancy; Ayad, Miriam F.

    2015-02-01

    Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed-dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving-window polynomial least-squares fitting (Savitzky-Golay filters). The second method is based on a simple modification of the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.

  8. Application of LC/MS/MS Techniques to Development of US ...

    EPA Pesticide Factsheets

    This presentation will describe the U.S. EPA’s drinking water and ambient water method development program in relation to the process employed and the typical challenges encountered in developing standardized LC/MS/MS methods for chemicals of emerging concern. The EPA’s Drinking Water Contaminant Candidate List and Unregulated Contaminant Monitoring Regulations, which are the driving forces behind drinking water method development, will be introduced. Three drinking water LC/MS/MS methods (Methods 537, 544 and a new method for nonylphenol) and two ambient water LC/MS/MS methods for cyanotoxins will be described that highlight some of the challenges encountered during development of these methods. This presentation will provide the audience with basic understanding of EPA's drinking water method development program and an introduction to two new ambient water EPA methods.

  9. The Roche Immunoturbidimetric Albumin Method on Cobas c 501 Gives Higher Values Than the Abbott and Roche BCP Methods When Analyzing Patient Plasma Samples.

    PubMed

    Helmersson-Karlqvist, Johanna; Flodin, Mats; Havelka, Aleksandra Mandic; Xu, Xiao Yan; Larsson, Anders

    2016-09-01

    Serum/plasma albumin is an important and widely used laboratory marker and it is important that we measure albumin correctly without bias. We had indications that the immunoturbidimetric method on Cobas c 501 and the bromocresol purple (BCP) method on Architect 16000 differed, so we decided to study these methods more closely. A total of 1,951 patient requests with albumin measured with both the Architect BCP and Cobas immunoturbidimetric methods were extracted from the laboratory system. A comparison with fresh plasma samples was also performed that included immunoturbidimetric and BCP methods on Cobas c 501 and analysis of the international protein calibrator ERM-DA470k/IFCC. The median difference between the Abbott BCP and Roche immunoturbidimetric methods was 3.3 g/l and the Roche method overestimated ERM-DA470k/IFCC by 2.2 g/l. The Roche immunoturbidimetric method gave higher values than the Roche BCP method: y = 1.111x - 0.739, R² = 0.971. The Roche immunoturbidimetric albumin method gives clearly higher values than the Abbott and Roche BCP methods when analyzing fresh patient samples. The differences between the two methods were similar at normal and low albumin levels. © 2016 Wiley Periodicals, Inc.

  10. Manual tracing versus smartphone application (app) tracing: a comparative study.

    PubMed

    Sayar, Gülşilay; Kilinc, Delal Dara

    2017-11-01

    This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone application (app) cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were obtained based on 21 landmarks. The time required by each method was also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that of the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, cephalometric tracing proceeded faster with the app method than with the manual method.

  11. Contraceptive Method Choice Among Young Adults: Influence of Individual and Relationship Factors.

    PubMed

    Harvey, S Marie; Oakley, Lisa P; Washburn, Isaac; Agnew, Christopher R

    2018-01-26

    Because decisions related to contraceptive behavior are often made by young adults in the context of specific relationships, the relational context likely influences use of contraceptives. Data presented here are from in-person structured interviews with 536 Black, Hispanic, and White young adults from East Los Angeles, California. We collected partner-specific relational and contraceptive data on all sexual partnerships for each individual, on four occasions, over one year. Using three-level multinomial logistic regression models, we examined individual and relationship factors predictive of contraceptive use. Results indicated that both individual and relationship factors predicted contraceptive use, but factors varied by method. Participants reporting greater perceived partner exclusivity and relationship commitment were more likely to use hormonal/long-acting methods only or a less effective method/no method versus condoms only. Those with greater participation in sexual decision making were more likely to use any method over a less effective method/no method and were more likely to use condoms only or dual methods versus a hormonal/long-acting method only. In addition, for women only, those who reported greater relationship commitment were more likely to use hormonal/long-acting methods or a less effective method/no method versus a dual method. In summary, interactive relationship qualities and dynamics (commitment and sexual decision making) significantly predicted contraceptive use.

  12. [A study for testing the antifungal susceptibility of yeast by the Japanese Society for Medical Mycology (JSMM) method. The proposal of the modified JSMM method 2009].

    PubMed

    Nishiyama, Yayoi; Abe, Michiko; Ikeda, Reiko; Uno, Jun; Oguri, Toyoko; Shibuya, Kazutoshi; Maesaki, Shigefumi; Mohri, Shinobu; Yamada, Tsuyoshi; Ishibashi, Hiroko; Hasumi, Yayoi; Abe, Shigeru

    2010-01-01

    In the Japanese Society for Medical Mycology (JSMM) method for testing the antifungal susceptibility of yeasts, the MIC end point for azole antifungal agents is currently set at IC(80). It was recently shown, however, that there is an inconsistency in MIC values between the JSMM method and the CLSI M27-A2 (CLSI) method, in which the end point is read as IC(50). To resolve this discrepancy and reassess the JSMM method, MICs of three azoles (fluconazole, itraconazole and voriconazole) were compared for 5 strains of each of the following Candida species: C. albicans, C. glabrata, C. tropicalis, C. parapsilosis and C. krusei, for a total of 25 comparisons, using the JSMM method, a modified JSMM method, and the CLSI method. The results showed that when the MIC end-point criterion of the JSMM method was changed from IC(80) to IC(50) (the modified JSMM method), the MIC values were consistent and compatible with the CLSI method. Finally, it should be emphasized that the JSMM method, using a spectrophotometer for MIC measurement, was superior in both stability and reproducibility compared to the CLSI method, in which growth is assessed by visual observation.
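    Reading an end point at IC50 versus IC80 can be made concrete with a small helper over a dilution series: the MIC is the lowest concentration whose growth, relative to the drug-free control, falls to or below 100 minus the chosen inhibition percentage. This is a hedged sketch; the JSMM spectrophotometric protocol involves more detail than this:

```python
def mic_endpoint(concentrations, growth_pct, inhibition=50):
    """Return the lowest drug concentration whose growth (percent of
    the drug-free control) is reduced by at least `inhibition`
    percent, i.e. the IC50 (inhibition=50) or IC80 (inhibition=80)
    reading of a broth-dilution series. None if never reached."""
    threshold = 100 - inhibition
    for c, g in sorted(zip(concentrations, growth_pct)):
        if g <= threshold:
            return c
    return None
```

    Because the IC80 criterion demands more inhibition, its MIC reading is always greater than or equal to the IC50 reading, which is why relaxing the JSMM end point to IC50 brings it in line with CLSI.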

  13. Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale

    1997-01-01

    The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.

  14. Accuracy of two geocoding methods for geographic information system-based exposure assessment in epidemiological studies.

    PubMed

    Faure, Elodie; Danjou, Aurélie M N; Clavel-Chapelon, Françoise; Boutron-Ruault, Marie-Christine; Dossus, Laure; Fervers, Béatrice

    2017-02-24

    Environmental exposure assessment based on Geographic Information Systems (GIS) and study participants' residential proximity to environmental exposure sources relies on the positional accuracy of subjects' residences to avoid misclassification bias. Our study compared the positional accuracy of two automatic geocoding methods to a manual reference method. We geocoded 4,247 address records representing the residential history (1990-2008) of 1,685 women from the French national E3N cohort living in the Rhône-Alpes region. We compared two automatic geocoding methods, a free-online geocoding service (method A) and an in-house geocoder (method B), to a reference layer created by manually relocating addresses from method A (method R). For each automatic geocoding method, positional accuracy levels were compared according to the urban/rural status of addresses and time-periods (1990-2000, 2001-2008), using Chi Square tests. Kappa statistics were performed to assess agreement of positional accuracy of both methods A and B with the reference method, overall, by time-periods and by urban/rural status of addresses. Respectively 81.4% and 84.4% of addresses were geocoded to the exact address (65.1% and 61.4%) or to the street segment (16.3% and 23.0%) with methods A and B. In the reference layer, geocoding accuracy was higher in urban areas compared to rural areas (74.4% vs. 10.5% addresses geocoded to the address or interpolated address level, p < 0.0001); no difference was observed according to the period of residence. Compared to the reference method, median positional errors were 0.0 m (IQR = 0.0-37.2 m) and 26.5 m (8.0-134.8 m), with positional errors <100 m for 82.5% and 71.3% of addresses, for method A and method B respectively. Positional agreement of method A and method B with method R was 'substantial' for both methods, with kappa coefficients of 0.60 and 0.61 for methods A and B, respectively. 
Our study demonstrates the feasibility of geocoding residential addresses in epidemiological studies not initially recorded for environmental exposure assessment, for both recent addresses and residence locations more than 20 years ago. Accuracy of the two automatic geocoding methods was comparable. The in-house method (B) allowed a better control of the geocoding process and was less time consuming.
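
The agreement statistic reported above can be illustrated with a small sketch. This is not the study's code: the unweighted Cohen's kappa is computed from scratch, and the accuracy categories below are hypothetical labels, not the cohort's geocoding results.

```python
def cohen_kappa(a, b):
    # Unweighted Cohen's kappa: observed agreement corrected for the
    # agreement expected by chance from the marginal label frequencies.
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical positional-accuracy categories assigned by an automatic
# geocoder (method A) and the manual reference (method R).
method_a = ["address", "address", "street", "address"]
method_r = ["address", "street", "street", "address"]
kappa = cohen_kappa(method_a, method_r)
```

A kappa near 0.6, as in the study, would fall in the conventional 'substantial' agreement band.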

  15. Comparison of reproducibility of natural head position using two methods.

    PubMed

    Khan, Abdul Rahim; Rajesh, R N G; Dinesh, M R; Sanjay, N; Girish, K S; Venkataraghavan, Karthik

    2012-01-01

    Lateral cephalometric radiographs have become virtually indispensable to orthodontists in the treatment of patients. They are important in orthodontic growth analysis, diagnosis, treatment planning, monitoring of therapy and evaluation of the final treatment outcome. The purpose of this study was to evaluate and compare the reproducibility and variation of natural head position using two methods, i.e. the mirror method and the fluid level device method. The study included two sets of 40 lateral cephalograms taken using two methods of obtaining natural head position, (1) the mirror method and (2) the fluid level device method, with a time interval of 2 months. Inclusion criteria: • Subjects randomly selected, aged between 18 and 26 years. Exclusion criteria: • History of orthodontic treatment • Any history of respiratory tract problems or chronic mouth breathing • Any congenital deformity • History of traumatically-induced deformity • History of myofascial pain syndrome • Any previous history of head and neck surgery. The results showed that the two methods for obtaining natural head position were comparable, without any significant difference, but reproducibility was higher with the fluid level device method, as shown by Dahlberg's coefficient and the Bland-Altman plot, and variance was lower, as shown by precision and Pearson correlation.

  16. Comparing four non-invasive methods to determine the ventilatory anaerobic threshold during cardiopulmonary exercise testing in children with congenital heart or lung disease.

    PubMed

    Visschers, Naomi C A; Hulzebos, Erik H; van Brussel, Marco; Takken, Tim

    2015-11-01

    The ventilatory anaerobic threshold (VAT) is an important measure for assessing aerobic fitness in patients with cardiopulmonary disease. Several methods exist to determine the VAT; however, there is no consensus on which of these methods is the most accurate. To compare four different non-invasive methods for determining the VAT via respiratory gas exchange analysis during a cardiopulmonary exercise test (CPET). A secondary objective was to determine the interobserver reliability of the VAT. CPET data of 30 children diagnosed with either cystic fibrosis (CF; N = 15) or a surgically corrected dextro-transposition of the great arteries (asoTGA; N = 15) were included. No significant differences were found between conditions or among testers. The RER = 1 method differed the most from the other methods, showing significantly higher results in all six variables. The PET-O2 method differed significantly on five of six and four of six exercise variables from the V-slope method and the VentEq method, respectively. The V-slope and VentEq methods differed significantly on one of six exercise variables. Ten of thirteen ICCs that were >0.80 had a 95% CI > 0.70. The RER = 1 method and the V-slope method had the highest number of significant ICCs and 95% CIs. The V-slope method, the ventilatory equivalent method and the PET-O2 method are comparable and reliable methods for determining the VAT during CPET in children with CF or asoTGA. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
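
Of the thresholds compared above, the V-slope idea lends itself to a compact sketch: locate the point where the slope of VCO2 plotted against VO2 increases. The breakpoint scan below is a simplified stand-in for the clinical procedure, and the data are synthetic, not CPET measurements.

```python
import numpy as np

def v_slope_breakpoint(vo2, vco2):
    # Scan candidate split points; fit one line per side and keep the
    # split with the smallest total squared error.
    best_i, best_sse = None, np.inf
    for i in range(3, len(vo2) - 3):
        sse = 0.0
        for part in (slice(None, i), slice(i, None)):
            coef = np.polyfit(vo2[part], vco2[part], 1)
            sse += float(np.sum((np.polyval(coef, vo2[part]) - vco2[part]) ** 2))
        if sse < best_sse:
            best_i, best_sse = i, sse
    return vo2[best_i]

# Synthetic VCO2-vs-VO2 curve with a slope change at VO2 = 1.5 L/min.
vo2 = np.linspace(0.5, 2.5, 40)
vco2 = np.where(vo2 < 1.5, 0.9 * vo2, 0.9 * 1.5 + 1.3 * (vo2 - 1.5))
vat = v_slope_breakpoint(vo2, vco2)
```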

  17. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from the plantations in Heilongjiang Province, P.R. China. The sample trees were measured and the biomass and carbon stocks of the tree components (i.e., stem, branch, foliage and root) were calculated. Compatible biomass and carbon stock models were developed with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of tree components. The first method predicted carbon stocks directly by the compatible carbon stock models (Method 1). The other three methods predicted the carbon stocks indirectly in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors (i.e., a fixed carbon conversion factor of 0.5 (Method 2), the average carbon concentration of the sample trees (Method 3), and the average carbon concentration of each tree component (Method 4)). The prediction errors of the carbon stock estimates were compared and tested for differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, so that Method 1 was the best method for predicting the carbon stocks of the tree components and the total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for the stem and the total. Methods 3 and 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees is sufficient to obtain accurate carbon stock estimates if carbon stock models are not available. PMID:26659257
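
The direct and indirect routes described above reduce to a small amount of arithmetic. The sketch below is illustrative only: the power-law (allometric) coefficients and carbon concentrations are hypothetical placeholders, not the paper's fitted values.

```python
# Hypothetical allometric models of the form y = a * D^b, where D is
# tree diameter (cm); all coefficients below are placeholders.
def biomass(d, a=0.05, b=2.4):
    return a * d ** b

def carbon_direct(d, a=0.024, b=2.4):
    # Method 1: a carbon stock model fitted directly to the data.
    return a * d ** b

def carbon_fixed_factor(d):
    # Method 2: estimated biomass times the fixed factor 0.5.
    return 0.5 * biomass(d)

def carbon_mean_concentration(d, cc=0.48):
    # Methods 3/4: estimated biomass times a measured mean carbon
    # concentration (per tree or per component).
    return cc * biomass(d)
```

Because measured carbon concentrations in conifers tend to fall below 0.5, Method 2 systematically overestimates relative to Methods 3 and 4, matching the pattern the paper reports.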

  18. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    Developing a computationally efficient automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the performance of the proposed LSQR-type and MRM-based methods, in terms of reconstructed image quality, is similar, and superior to that of the L-curve and GCV-based methods. The proposed method's computational complexity is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the inherent computational expense of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
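
The damped-LSQR building block behind this approach can be sketched briefly. Here `damp` plays the role of the regularization parameter λ, so LSQR minimises ||Jx − y||² + λ²||x||². The matrix `J` and data `y` are a synthetic stand-in for the DOT Jacobian and measurements, and a plain grid scan replaces the paper's simplex search.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Synthetic linear inverse problem (not real DOT data).
rng = np.random.default_rng(0)
J = rng.standard_normal((60, 30))
x_true = rng.standard_normal(30)
y = J @ x_true + 0.01 * rng.standard_normal(60)

def reconstruct(lam):
    # lsqr's `damp` argument adds Tikhonov-style regularization.
    return lsqr(J, y, damp=lam)[0]

# Illustrative scan over candidate regularization parameters,
# scored against the known synthetic truth.
errors = {lam: np.linalg.norm(reconstruct(lam) - x_true)
          for lam in (1e-3, 1e-2, 1e-1, 1.0)}
best_lam = min(errors, key=errors.get)
```

In a real reconstruction the truth is unknown, which is exactly why criteria such as the L-curve, GCV, MRM, or the paper's residual-based simplex search are needed to pick λ.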

  19. A New Online Calibration Method Based on Lord's Bias-Correction.

    PubMed

    He, Yinhong; Chen, Ping; Li, Yong; Zhang, Shumei

    2017-09-01

    Online calibration techniques have been widely employed to calibrate new items due to their advantages. Method A is the simplest online calibration method and has attracted much attention from researchers recently. However, a key assumption of Method A is that it treats the person-parameter estimates θ̂_s (obtained by maximum likelihood estimation [MLE]) as their true values θ_s; thus, the deviation of the estimated θ̂_s from the true values might yield inaccurate item calibration when that deviation is nonignorable. To improve the performance of Method A, a new method, MLE-LBCI-Method A, is proposed. This new method combines a modified Lord's bias-correction method (named maximum likelihood estimation-Lord's bias-correction with iteration [MLE-LBCI]) with the original Method A in an effort to correct the deviation of θ̂_s, which may adversely affect item calibration precision. Two simulation studies were carried out to explore the performance of both MLE-LBCI and MLE-LBCI-Method A under several scenarios. Simulation results showed that MLE-LBCI could make a significant improvement over the ML ability estimates, and MLE-LBCI-Method A outperformed Method A in almost all experimental conditions.

  20. Qualitative versus quantitative methods in psychiatric research.

    PubMed

    Razafsha, Mahdi; Behforuzi, Hura; Azari, Hassan; Zhang, Zhiqun; Wang, Kevin K; Kobeissy, Firas H; Gold, Mark S

    2012-01-01

    Qualitative studies are gaining credibility after a period of being misinterpreted as "not being quantitative." Qualitative method is a broad umbrella term for research methodologies that describe and explain individuals' experiences, behaviors, interactions, and social contexts. In-depth interviews, focus groups, and participant observation are among the qualitative methods of inquiry commonly used in psychiatry. Researchers measure the frequency of occurring events using quantitative methods; however, qualitative methods provide a broader understanding and a more thorough reasoning behind the event. Hence, they are considered to be of special importance in psychiatry. Besides hypothesis generation in the earlier phases of research, qualitative methods can be employed in questionnaire design, the establishment of diagnostic criteria, feasibility studies, as well as studies of attitudes and beliefs. Animal models are another area in which qualitative methods can be employed, especially when naturalistic observation of animal behavior is important. However, since qualitative results can reflect the researcher's own view, they need to be statistically confirmed using quantitative methods. The tendency to combine qualitative and quantitative methods as complementary approaches has emerged over recent years. By applying both methods of research, scientists can take advantage of the interpretative characteristics of qualitative methods as well as the experimental dimensions of quantitative methods.

  1. Methods of Farm Guidance

    ERIC Educational Resources Information Center

    Vir, Dharm

    1971-01-01

    A survey of teaching methods for farm guidance workers in India, outlining some approaches developed by and used in other nations. Discusses mass educational methods, group educational methods, and the local leadership method. (JB)

  2. Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective.

    PubMed

    Bishop, Felicity L

    2015-02-01

    To outline some of the challenges of mixed methods research and illustrate how they can be addressed in health psychology research. This study critically reflects on the author's previously published mixed methods research and discusses the philosophical and technical challenges of mixed methods, grounding the discussion in a brief review of the methodological literature. Mixed methods research is characterized as having philosophical and technical challenges; the former can be addressed by drawing on pragmatism, the latter by considering formal mixed methods research designs proposed in a number of design typologies. There are important differences among the design typologies, which provide diverse examples of designs that health psychologists can adapt for their own mixed methods research. There are also similarities; in particular, many typologies explicitly orient to the technical challenges of deciding on the respective timing of qualitative and quantitative methods and the relative emphasis placed on each method. Characteristics, strengths, and limitations of different sequential and concurrent designs are identified by reviewing five mixed methods projects, each conducted for a different purpose. Adapting formal mixed methods designs can help health psychologists address the technical challenges of mixed methods research and identify the approach that best fits the research questions and purpose. This does not obviate the need to address the philosophical challenges of mixing qualitative and quantitative methods. Statement of contribution What is already known on this subject? Mixed methods research poses philosophical and technical challenges. Pragmatism is a popular approach to the philosophical challenges, while diverse typologies of mixed methods designs can help address the technical challenges. Examples of mixed methods research can be hard to locate when component studies from mixed methods projects are published separately. What does this study add? 
Critical reflections on the author's previously published mixed methods research illustrate how a range of different mixed methods designs can be adapted and applied to address health psychology research questions. The philosophical and technical challenges of mixed methods research should be considered together and in relation to the broader purpose of the research. © 2014 The British Psychological Society.

  3. Why, and how, mixed methods research is undertaken in health services research in England: a mixed methods study.

    PubMed

    O'Cathain, Alicia; Murphy, Elizabeth; Nicholl, Jon

    2007-06-14

    Recently, there has been a surge of international interest in combining qualitative and quantitative methods in a single study--often called mixed methods research. It is timely to consider why and how mixed methods research is used in health services research (HSR). Documentary analysis of proposals and reports of 75 mixed methods studies funded by a research commissioner of HSR in England between 1994 and 2004. Face-to-face semi-structured interviews with 20 researchers sampled from these studies. 18% (119/647) of HSR studies were classified as mixed methods research. In the documentation, comprehensiveness was the main driver for using mixed methods research, with researchers wanting to address a wider range of questions than quantitative methods alone would allow. Interviewees elaborated on this, identifying the need for qualitative research to engage with the complexity of health, health care interventions, and the environment in which studies took place. Motivations for adopting a mixed methods approach were not always based on the intrinsic value of mixed methods research for addressing the research question; they could be strategic, for example, to obtain funding. Mixed methods research was used in the context of evaluation, including randomised and non-randomised designs; survey and fieldwork exploratory studies; and instrument development. Studies drew on a limited number of methods--particularly surveys and individual interviews--but used methods in a wide range of roles. Mixed methods research is common in HSR in the UK. Its use is driven by pragmatism rather than principle, motivated by the perceived deficit of quantitative methods alone to address the complexity of research in health care, as well as other more strategic gains. 
Methods are combined in a range of contexts, yet the emerging methodological contributions from HSR to the field of mixed methods research are currently limited to the single context of combining qualitative methods and randomised controlled trials. Health services researchers could further contribute to the development of mixed methods research in the contexts of instrument development, survey and fieldwork, and non-randomised evaluations.

  4. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we form a linear combination, with parameter β_k, of the DY method and the HS method, and put forward a hybrid DY-HS method. We also propose a hybrid of FR and PRP by the same means. Additionally, to support the two hybrid methods, we generalize the Wolfe line search to compute the step size α_k for each. With the new Wolfe line search, the two hybrid methods possess the descent property, and their global convergence can be proved.
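
A minimal sketch of the hybrid idea, under two stated assumptions: the combination is taken as the convex form β_k = θ·β_k^DY + (1−θ)·β_k^HS, and SciPy's standard Wolfe line search stands in for the paper's generalized one. The quadratic test function is illustrative.

```python
import numpy as np
from scipy.optimize import line_search

def hybrid_dy_hs(f, grad, x0, theta=0.5, tol=1e-6, max_iter=200):
    # Conjugate gradient where beta_k is a convex combination of the
    # Dai-Yuan (DY) and Hestenes-Stiefel (HS) parameters.
    x, g = x0, grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # Wolfe-condition step size
        if alpha is None:
            alpha = 1e-4                       # fallback if the search fails
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        beta_dy = (g_new @ g_new) / denom
        beta_hs = (g_new @ y) / denom
        beta = theta * beta_dy + (1.0 - theta) * beta_hs
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

On a convex quadratic this recovers the usual CG behavior; the paper's contribution concerns descent and global convergence for general nonconvex objectives under its modified line search.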

  5. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating sphere standard luminance source method for calibrating the luminance parameter, and compares the calibration results of the two methods through principle analysis and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the results of the two methods are both reliable. The results show that the standard white plate method yields smaller display-value errors and better reproducibility, whereas the standard luminance source method is more convenient and suitable for on-site calibration; moreover, it has a wider range and can test the linear performance of the instruments.

  6. The change and development of statistical methods used in research articles in child development 1930-2010.

    PubMed

    Køppe, Simo; Dammeyer, Jesper

    2014-09-01

    The evolution of developmental psychology has been characterized by the use of different quantitative and qualitative methods and procedures. But how does the use of methods and procedures change over time? This study explores the change and development of statistical methods used in articles published in Child Development from 1930 to 2010. The methods used in every article in the first issue of every volume were categorized into four categories. Until 1980, relatively simple statistical methods were used. During the last 30 years there has been an explosive growth in the use of more advanced statistical methods. The absence of statistical methods, or the use of only simple methods, has been all but eliminated.

  7. Social network extraction based on Web: 1. Related superficial methods

    NASA Astrophysics Data System (ADS)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    The nature of a problem often shapes the methods used to resolve it. The same holds for methods that extract social networks from the Web, which involve different structured data types. This paper describes several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and related superficial methods. We derive complexity inequalities between the methods and their computations. In this case, we find that different results from the same tools order the methods from more complex to simpler: extraction of a social network involving co-occurrence is more complex than extraction using occurrences alone.
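
The co-occurrence idea mentioned at the end can be sketched in a few lines. This is not the paper's method: the snippets and actor names below are hypothetical, and a real extractor would query the Web and count page-level co-occurrences rather than scanning fixed strings.

```python
from itertools import combinations
from collections import Counter

# Hypothetical document snippets and actor names.
snippets = [
    "alice and bob coauthored a paper",
    "bob met carol at the workshop",
    "alice cited carol",
]
actors = ["alice", "bob", "carol"]

# An edge is added between two actors whenever they co-occur in the
# same snippet; edge weights count co-occurrences.
edges = Counter()
for text in snippets:
    present = [a for a in actors if a in text]
    for pair in combinations(sorted(present), 2):
        edges[pair] += 1
```

Occurrence-only extraction needs one query per actor, while co-occurrence needs one per pair, which is one concrete sense in which the latter is computationally more complex.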

  8. Performance of a proposed determinative method for p-TSA in rainbow trout fillet tissue and bridging the proposed method with a method for total chloramine-T residues in rainbow trout fillet tissue

    USGS Publications Warehouse

    Meinertz, J.R.; Stehly, G.R.; Gingerich, W.H.; Greseth, Shari L.

    2001-01-01

    Chloramine-T is an effective drug for controlling fish mortality caused by bacterial gill disease. As part of the data required for approval of chloramine-T use in aquaculture, depletion of the chloramine-T marker residue (para-toluenesulfonamide; p-TSA) from edible fillet tissue of fish must be characterized. Declaration of p-TSA as the marker residue for chloramine-T in rainbow trout was based on total residue depletion studies using a method that used time consuming and cumbersome techniques. A simple and robust method recently developed is being proposed as a determinative method for p-TSA in fish fillet tissue. The proposed determinative method was evaluated by comparing accuracy and precision data with U.S. Food and Drug Administration criteria and by bridging the method to the former method for chloramine-T residues. The method accuracy and precision fulfilled the criteria for determinative methods; accuracy was 92.6, 93.4, and 94.6% with samples fortified at 0.5X, 1X, and 2X the expected 1000 ng/g tolerance limit for p-TSA, respectively. Method precision with tissue containing incurred p-TSA at a nominal concentration of 1000 ng/g ranged from 0.80 to 8.4%. The proposed determinative method was successfully bridged with the former method. The concentrations of p-TSA developed with the proposed method were not statistically different at p < 0.05 from p-TSA concentrations developed with the former method.

  9. Standard setting: comparison of two methods.

    PubMed

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
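
The norm-reference rule applied above (cut score = mean minus one standard deviation) is a one-liner. The scores below are hypothetical, not the study's 78 candidate marks.

```python
import numpy as np

# Hypothetical 'raw' MCQ scores for a small cohort.
scores = np.array([62, 71, 55, 80, 67, 74, 59, 66, 70, 63], dtype=float)

# Norm-reference cut score: mean minus one (sample) standard deviation.
cut = scores.mean() - scores.std(ddof=1)
pass_rate = (scores >= cut).mean()
```

By construction this rule fails a roughly fixed fraction of any cohort, which is why it can disagree sharply with a criterion-referenced standard such as the Angoff method.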

  10. Women's Contraceptive Preference-Use Mismatch

    PubMed Central

    He, Katherine; Dalton, Vanessa K.; Zochowski, Melissa K.

    2017-01-01

    Abstract Background: Family planning research has not adequately addressed women's preferences for different contraceptive methods and whether women's contraceptive experiences match their preferences. Methods: Data were drawn from the Women's Healthcare Experiences and Preferences Study, an Internet survey of 1,078 women aged 18–55 randomly sampled from a national probability panel. Survey items assessed women's preferences for contraceptive methods, match between methods preferred and used, and perceived reasons for mismatch. We estimated predictors of contraceptive preference with multinomial logistic regression models. Results: Among women at risk for pregnancy who responded with their preferred method (n = 363), hormonal methods (non-LARC [long-acting reversible contraception]) were the most preferred method (34%), followed by no method (23%) and LARC (18%). Sociodemographic differences in contraception method preferences were noted (p-values <0.05), generally with minority, married, and older women having higher rates of preferring less effective methods, compared to their counterparts. Thirty-six percent of women reported preference-use mismatch, with the majority preferring more effective methods than those they were using. Rates of match between preferred and usual methods were highest for LARC (76%), hormonal (non-LARC) (65%), and no method (65%). The most common reasons for mismatch were cost/insurance (41%), lack of perceived/actual need (34%), and method-specific preference concerns (19%). Conclusion: While preference for effective contraception was common among this sample of women, we found substantial mismatch between preferred and usual methods, notably among women of lower socioeconomic status and women using less effective methods. Findings may have implications for patient-centered contraceptive interventions. PMID:27710196

  11. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin

    PubMed Central

    2013-01-01

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and some other parameters of the following adaptive threshold methods of segmentation are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized based on counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. The results of a thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB allow the selection of pairs (method and image type) for which a given method is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the results of the White, Bernsen and Sauvola methods are better than those of the remaining methods for all types of monochromatic images. The three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, when treated totally. However, the best results are achieved for the monochromatic image in which intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. 
According to the Bland-Altman plot, objects selected by the Sauvola method are segmented without undercutting the area of true positive objects, but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. Virtual Slides The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017. PMID:23531405
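
Of the thresholds compared, Sauvola's formula T(x, y) = m(x, y)·(1 + k·(s(x, y)/R − 1)), with m and s the local mean and standard deviation, is representative and easy to sketch. The implementation below computes the local statistics by uniform filtering; the window size, k and R values are conventional defaults, not the paper's settings, and the test image is synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(image, window=15, k=0.2, R=128.0):
    # Local Sauvola threshold: T = m * (1 + k * (s / R - 1)).
    img = image.astype(float)
    m = uniform_filter(img, window)            # local mean
    m2 = uniform_filter(img ** 2, window)      # local mean of squares
    s = np.sqrt(np.maximum(m2 - m ** 2, 0.0))  # local standard deviation
    return m * (1.0 + k * (s / R - 1.0))

def segment(image, **kw):
    # Pixels darker than the local threshold are treated as foreground
    # (stained nuclei) in this sketch.
    return image < sauvola_threshold(image, **kw)

# Synthetic image: bright background with one dark square "nucleus".
img = np.full((50, 50), 200.0)
img[10:20, 10:20] = 0.0
mask = segment(img)
```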

  12. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine&Haematoxylin.

    PubMed

    Korzynska, Anna; Roszkowiak, Lukasz; Lopez, Carlos; Bosch, Ramon; Witkowski, Lukasz; Lejeune, Marylene

    2013-03-25

    The comparative study of the results of various segmentation methods for digital images of follicular lymphoma cancer tissue sections is described in this paper. The sensitivity, specificity and other parameters of the following adaptive threshold segmentation methods are calculated: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extracting the brown colour information from artificial images synthesized from counterpart experimentally captured images. This paper presents the usefulness of the microscopic image synthesis method in evaluating and comparing image processing results. A thorough analysis of a broad range of adaptive threshold methods applied to (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select method/image-type pairs for which a method is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in the number of objects detected. The comparison shows that the White, Bernsen and Sauvola methods outperform the remaining methods for all types of monochromatic images. All three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944, respectively, taken overall. The best results, however, are achieved for the monochromatic image whose intensity encodes the brown colour map constructed by the colour deconvolution algorithm. The specificity of the Bernsen and White methods is 1, with sensitivities of 0.74 (White) and 0.91 (Bernsen), while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99. 
    According to the Bland-Altman plot, the Sauvola method segments the selected objects without undercutting the area of true positive objects but with extra false positive objects. The Sauvola and Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed. The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.
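    Several of the thresholds compared above have simple closed forms. As a minimal sketch (not the paper's implementation), the Sauvola threshold T = m * (1 + k * (s/R - 1)) can be computed from local window means m and standard deviations s; the window size, k and R values below are typical defaults, not values from the study:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Sauvola adaptive threshold: T = m * (1 + k * (s/R - 1)),
    where m and s are the mean and std over a local window."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean * (1.0 + k * (std / R - 1.0))
    return img > threshold  # True = above local threshold (background/bright)
```

The Niblack and White thresholds differ only in how m and s enter the formula, so the same windowed statistics can be reused.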

  13. Numerical Grid Generation and Potential Airfoil Analysis and Design

    DTIC Science & Technology

    1988-01-01

    Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values...preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute I in a Jacobi method, we have already...Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR
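    The snippet above distinguishes the Jacobi, Gauss-Seidel and SOR sweeps. As an illustrative sketch (not code from the report), the updates for a diagonally dominant system can be written as:

```python
import numpy as np

def jacobi(A, b, iters=100):
    # Each new value is computed entirely from the previous iteration.
    x = np.zeros_like(b)
    D = np.diag(A)
    R = A - np.diag(D)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, omega=1.0, iters=100):
    # Components updated earlier in the sweep are used immediately;
    # omega > 1 turns the sweep into successive over-relaxation (SOR).
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x_new = (b[i] - s) / A[i, i]
            x[i] = (1 - omega) * x[i] + omega * x_new
    return x
```

Diagonal dominance of A, as the snippet notes, guarantees convergence of the Gauss-Seidel sweep.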

  14. Evaluation of intrinsic respiratory signal determination methods for 4D CBCT adapted for mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Rachael; Pan, Tinsu, E-mail: tpan@mdanderson.org; Rubinstein, Ashley

    Purpose: 4D CT imaging in mice is important in a variety of areas including studies of lung function and tumor motion. A necessary step in 4D imaging is obtaining a respiratory signal, which can be done through an external system or intrinsically through the projection images. A number of methods have been developed that can successfully determine the respiratory signal from cone-beam projection images of humans; however, only a few have been utilized in a preclinical setting and most of these rely on step-and-shoot style imaging. The purpose of this work is to assess and make adaptations of several successful methods developed for humans for an image-guided preclinical radiation therapy system. Methods: Respiratory signals were determined from the projection images of free-breathing mice scanned on the X-RAD system using four methods: the so-called Amsterdam shroud method, a method based on the phase of the Fourier transform, a pixel intensity method, and a center of mass method. The Amsterdam shroud method was modified so the sharp inspiration peaks associated with anesthetized mouse breathing could be detected. Respiratory signals were used to sort projections into phase bins and 4D images were reconstructed. Error and standard deviation in the assignment of phase bins for the four methods compared to a manual method considered to be ground truth were calculated for a range of region of interest (ROI) sizes. Qualitative comparisons were additionally made between the 4D images obtained using each of the methods and the manual method. Results: 4D images were successfully created for all mice with each of the respiratory signal extraction methods. Only minimal qualitative differences were noted between each of the methods and the manual method. 
The average error (and standard deviation) in phase bin assignment was 0.24 ± 0.08 (0.49 ± 0.11) phase bins for the Fourier transform method, 0.09 ± 0.03 (0.31 ± 0.08) phase bins for the modified Amsterdam shroud method, 0.09 ± 0.02 (0.33 ± 0.07) phase bins for the intensity method, and 0.37 ± 0.10 (0.57 ± 0.08) phase bins for the center of mass method. Little dependence on ROI size was noted for the modified Amsterdam shroud and intensity methods while the Fourier transform and center of mass methods showed a noticeable dependence on the ROI size. Conclusions: The modified Amsterdam shroud, Fourier transform, and intensity respiratory signal methods are sufficiently accurate to be used for 4D imaging on the X-RAD system and show improvement over the existing center of mass method. The intensity and modified Amsterdam shroud methods are recommended due to their high accuracy and low dependence on ROI size.
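    Of the four signals, the pixel-intensity approach is the simplest to sketch. Assuming the projections are stacked as an array and an ROI has been chosen over a moving structure, a minimal illustration (not the X-RAD implementation) is:

```python
import numpy as np

def intensity_respiratory_signal(projections, roi):
    """Mean pixel intensity inside a fixed ROI for each projection.
    projections: array of shape (n_proj, rows, cols); roi: (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = roi
    trace = projections[:, r0:r1, c0:c1].mean(axis=(1, 2))
    # Subtract a moving-average trend so slow drifts (e.g. gantry-angle
    # dependence) are removed and only the quasi-periodic breathing
    # component remains.
    trend = np.convolve(trace, np.ones(21) / 21, mode="same")
    return trace - trend
```

The resulting trace can then be peak-detected and used to assign each projection to a phase bin.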

  15. 26 CFR 1.167(b)-2 - Declining balance method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Declining balance method. 1.167(b)-2 Section 1... Declining balance method. (a) Application of method. Under the declining balance method a uniform rate is.... While salvage is not taken into account in determining the annual allowances under this method, in no...
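    As a purely illustrative sketch of the mechanics described above (a uniform rate applied each year to the unrecovered basis, with salvage not deducted first); this is not the regulation's exact computation and ignores its salvage and switching rules:

```python
def declining_balance(cost, rate, years):
    """Annual depreciation allowances under a declining balance schedule:
    the same rate is applied each year to the remaining (unrecovered) basis."""
    basis = cost
    allowances = []
    for _ in range(years):
        allowance = basis * rate
        allowances.append(allowance)
        basis -= allowance
    return allowances
```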

  16. 77 FR 60985 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Three New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-05

    ... Methods: Designation of Three New Equivalent Methods AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of three new equivalent methods for monitoring ambient air quality. SUMMARY... equivalent methods, one for measuring concentrations of PM 2.5 , one for measuring concentrations of PM 10...

  17. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...

  18. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED..., App. A Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of...

  19. 78 FR 67360 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... Methods: Designation of Five New Equivalent Methods AGENCY: Office of Research and Development; Environmental Protection Agency (EPA). ACTION: Notice of the designation of five new equivalent methods for...) has designated, in accordance with 40 CFR Part 53, five new equivalent methods, one for measuring...

  20. 40 CFR Appendix A to Part 425 - Potassium Ferricyanide Titration Method

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Method A Appendix A to Part 425 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Appendix A to Part 425—Potassium Ferricyanide Titration Method Source The potassium ferricyanide titration method is based on method SLM 4/2 described in “Official Method of Analysis,” Society of Leather Trades...

  1. 78 FR 22540 - Notice of Public Meeting/Webinar: EPA Method Development Update on Drinking Water Testing Methods...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-16

    ...: EPA Method Development Update on Drinking Water Testing Methods for Contaminant Candidate List... Division will describe methods currently in development for many CCL contaminants, with an expectation that several of these methods will support future cycles of the Unregulated Contaminant Monitoring Rule (UCMR...

  2. Problems d'elaboration d'une methode locale: la methode "Paris-Khartoum" (Problems in Implementing a Local Method: the Paris-Khartoum Method)

    ERIC Educational Resources Information Center

    Penhoat, Loick; Sakow, Kostia

    1978-01-01

    A description of the development and implementation of a method introduced in the Sudan that attempts to relate to Sudanese culture and to motivate students. The relationship between language teaching methods and the total educational system is discussed. (AMH)

  3. Exponentially fitted symplectic Runge-Kutta-Nyström methods derived by partitioned Runge-Kutta methods

    NASA Astrophysics Data System (ADS)

    Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.

    2013-10-01

    In this work we derive symplectic EF/TF RKN methods from symplectic EF/TF PRK methods. EF/TF symplectic RKN methods are also constructed directly from classical symplectic RKN methods. Several numerical examples are given in order to decide which implementation is the most favourable.

  4. Standard methods for chemical analysis of steel, cast iron, open-hearth iron, and wrought iron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1973-01-01

    Methods are described for determining manganese, phosphorus, sulfur, selenium, copper, nickel, chromium, vanadium, tungsten, titanium, lead, boron, molybdenum (alpha-benzoin oxime method), zirconium (cupferron-phosphate method), niobium and tantalum (hydrolysis with perchloric and sulfurous acids (gravimetric, titrimetric, and photometric methods)), and beryllium (oxide method). (DHM)

  5. Detection of coupling delay: A problem not yet solved

    NASA Astrophysics Data System (ADS)

    Coufal, David; Jakubík, Jozef; Jajcay, Nikola; Hlinka, Jaroslav; Krakovská, Anna; Paluš, Milan

    2017-08-01

    Nonparametric detection of coupling delay in unidirectionally and bidirectionally coupled nonlinear dynamical systems is examined. Both continuous and discrete-time systems are considered. Two methods of detection are assessed: the method based on conditional mutual information (the CMI method, also known as the transfer entropy method) and the method of convergent cross mapping (the CCM method). Computer simulations show that neither method is generally reliable in the detection of coupling delays. For continuous-time chaotic systems, the CMI method appears to be more sensitive and applicable in a broader range of coupling parameters than the CCM method. In the case of the tested discrete-time dynamical systems, the CCM method has been found to be more sensitive, while the CMI method required much stronger coupling to yield correct results. However, when the studied systems contain a strong oscillatory component in their dynamics, the results of both methods become ambiguous. The presented study suggests that the results of the tested algorithms should be interpreted with utmost care and that the nonparametric detection of coupling delay, in general, is a problem not yet solved.
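    As a toy illustration of the delay-scan idea (using plain mutual information from a 2-D histogram rather than the conditional mutual information the study evaluates), the lag that maximizes dependence between driver and response estimates the coupling delay:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information between x and y."""
    pxy = np.histogram2d(x, y, bins=bins)[0]
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def delay_scan(x, y, max_lag=20):
    """MI between x(t) and y(t + lag); the argmax estimates the delay."""
    lags = np.arange(1, max_lag + 1)
    mi = [mutual_information(x[:-l], y[l:]) for l in lags]
    return int(lags[int(np.argmax(mi))])
```

Conditioning on the past of the response (as CMI/transfer entropy does) is what removes spurious lags; this plain-MI version only shows the scanning step.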

  6. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  7. Identifying Outliers of Non-Gaussian Groundwater State Data Based on Ensemble Estimation for Long-Term Trends

    NASA Astrophysics Data System (ADS)

    Park, E.; Jeong, J.; Choi, J.; Han, W. S.; Yun, S. T.

    2016-12-01

    Three modified outlier identification methods are proposed: the three sigma rule (3s), the interquartile range (IQR) and the median absolute deviation (MAD), all of which take advantage of the ensemble regression method. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method has a limitation in the false identification of excessive outliers, which may be remedied by joint application with the other methods (i.e., the 3s rule and MAD methods). The proposed methods can also be applied as a potential tool for future anomaly detection by model training based on currently available data.
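    The three rules have standard textbook forms; a minimal sketch of each (the ensemble-regression adaptation described in the abstract is not reproduced here, and the cutoffs are conventional defaults):

```python
import numpy as np

def outliers_3sigma(x):
    # Flag points more than 3 standard deviations from the mean.
    m, s = x.mean(), x.std()
    return np.abs(x - m) > 3 * s

def outliers_iqr(x, k=1.5):
    # Flag points beyond k * IQR outside the quartiles (Tukey's fences).
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def outliers_mad(x, k=3.5):
    # Flag points with large modified z-score based on the MAD;
    # 0.6745 makes the MAD consistent with sigma for normal data.
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    z = 0.6745 * (x - med) / mad
    return np.abs(z) > k
```

The MAD and IQR rules use robust location/scale estimates, which is why they tolerate higher outlier fractions than the 3-sigma rule.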

  8. Overview of paint removal methods

    NASA Astrophysics Data System (ADS)

    Foster, Terry

    1995-04-01

    With the introduction of strict environmental regulations governing the use and disposal of methylene chloride and phenols, major components of chemical paint strippers, many new environmentally safe and effective methods of paint removal have been developed. The new methods developed for removing coatings from aircraft and aircraft components include: mechanical methods using abrasive media such as plastic, wheat starch, walnut shells, ice and dry ice; environmentally safe chemical strippers and paint softeners; and optical methods such as lasers and flash lamps. Each method has its advantages and disadvantages, and some have unique applications. For example, mechanical and abrasive methods can damage sensitive surfaces such as composite materials, so strict control of blast parameters and conditions is required. Optical methods can be slow, leaving paint residues, and chemical methods may not remove all of the coating or may require special coating formulations to be effective. This paper is an overview of the various environmentally safe and effective paint removal methods available.

  9. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
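    The line search approach mentioned above can be sketched as damped Newton with Armijo backtracking (an illustrative implementation, not the paper's):

```python
import numpy as np

def newton_minimize(f, grad, hess, x0, tol=1e-8, max_iter=50):
    """Damped Newton for unconstrained minimization: solve H p = -g for
    the Newton direction, then backtrack (Armijo condition) on the step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)   # Newton direction
        t = 1.0
        # Halve the step until sufficient decrease (or the step is tiny).
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        x = x + t * p
    return x
```

On a convex quadratic the full Newton step is accepted and the method converges in one iteration, which is the behavior quasi-Newton methods try to approximate without forming the Hessian.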

  10. [Comparison of two nucleic acid extraction methods for norovirus in oysters].

    PubMed

    Yuan, Qiao; Li, Hui; Deng, Xiaoling; Mo, Yanling; Fang, Ling; Ke, Changwen

    2013-04-01

    To explore a convenient and effective method for norovirus nucleic acid extraction from oysters suitable for long-term viral surveillance. Two methods, namely method A (glycine washing and polyethylene glycol precipitation of the virus followed by silica gel centrifugal column) and method B (protease K digestion followed by application of paramagnetic silicon) were compared for their performance in norovirus nucleic acid extraction from oysters. Real-time RT-PCR was used to detect norovirus in naturally infected oysters and in oysters with induced infection. The two methods yielded comparable positive detection rates for the samples, but the recovery rate of the virus was higher with method B than with method A. Method B is a more convenient and rapid method for norovirus nucleic acid extraction from oysters and suitable for long-term surveillance of norovirus.

  11. On the Formulation of Weakly Singular Displacement/Traction Integral Equations; and Their Solution by the MLPG Method

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.; Shen, Shengping

    2002-01-01

    In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A Moving Least Squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.
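    The MLS approximation used for the trial functions can be illustrated in 1D (a sketch with an assumed Gaussian weight and a linear basis; the paper's setting is multidimensional elasticity). At each evaluation point a small weighted least-squares fit is solved:

```python
import numpy as np

def mls_fit(x_nodes, u_nodes, x_eval, h=0.3):
    """1D moving least squares with linear basis p = [1, x] and a
    Gaussian weight of support scale h centered at each evaluation point."""
    out = []
    for x in np.atleast_1d(x_eval):
        w = np.exp(-((x - x_nodes) / h) ** 2)          # weights at the nodes
        P = np.column_stack([np.ones_like(x_nodes), x_nodes])
        A = P.T @ (w[:, None] * P)                     # weighted normal matrix
        b = P.T @ (w * u_nodes)
        a = np.linalg.solve(A, b)                      # local coefficients
        out.append(a[0] + a[1] * x)
    return np.array(out)
```

With a linear basis, MLS reproduces linear fields exactly, which the test below exploits.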

  12. A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods

    NASA Astrophysics Data System (ADS)

    Parand, K.; Nikarya, M.

    2017-11-01

    In this paper a novel method is introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian-free Newton-generalized minimum residual (JFNGMRes) method with an adaptive preconditioner. In this work a nonlinear PDE is converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions, without any linearization, discretization, or assistance from any other method. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is obtained. To illustrate the reliability and efficiency of the proposed method, we solve several examples of the famous Fisher equation and compare our results with other methods.
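    As a rough illustration of the Jacobian-free Newton-Krylov step (here via SciPy's `newton_krylov`, with plain finite differences standing in for the paper's Bessel collocation basis), applied to a steady 1D analogue of the Fisher equation:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Steady 1D Fisher equation u'' + u(1 - u) = 0 on (0, 1) with u(0) = 1,
# u(1) = 0, discretized by central differences; the resulting nonlinear
# algebraic system is solved matrix-free by Newton-GMRES (no explicit
# Jacobian is ever assembled).
n = 64
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate(([1.0], u, [0.0]))           # Dirichlet values
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2
    return lap + u * (1.0 - u)

u = newton_krylov(residual, np.linspace(1.0, 0.0, n + 2)[1:-1], f_tol=1e-8)
```

The Krylov solver only needs residual evaluations (Jacobian-vector products are approximated by finite differences internally), which is the point of the Jacobian-free formulation.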

  13. Mending the Gap, An Effort to Aid the Transfer of Formal Methods Technology

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly

    2009-01-01

    Formal methods can be applied to many of the development and verification activities required for civil avionics software. RTCA/DO-178B, Software Considerations in Airborne Systems and Equipment Certification, gives a brief description of using formal methods as an alternate method of compliance with the objectives of that standard. Despite this, the avionics industry at large has been hesitant to adopt formal methods, and few developers have actually used formal methods for certification credit. Why is this so, given the volume of evidence of the benefits of formal methods? This presentation will explore some of the challenges to using formal methods in a certification context and describe the effort by the Formal Methods Subgroup of RTCA SC-205/EUROCAE WG-71 to develop guidance to make the use of formal methods a recognized approach.

  14. Methods for the calculation of axial wave numbers in lined ducts with mean flow

    NASA Technical Reports Server (NTRS)

    Eversman, W.

    1981-01-01

    A survey is made of the methods available for the calculation of axial wave numbers in lined ducts. Rectangular and circular ducts with both uniform and non-uniform flow are considered as are ducts with peripherally varying liners. A historical perspective is provided by a discussion of the classical methods for computing attenuation when no mean flow is present. When flow is present these techniques become either impractical or impossible. A number of direct eigenvalue determination schemes which have been used when flow is present are discussed. Methods described are extensions of the classical no-flow technique, perturbation methods based on the no-flow technique, direct integration methods for solution of the eigenvalue equation, an integration-iteration method based on the governing differential equation for acoustic transmission, Galerkin methods, finite difference methods, and finite element methods.

  15. Optimal projection method determination by Logdet Divergence and perturbed von-Neumann Divergence.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Qiu, Yushan; Cheng, Xiao-Qing

    2017-12-14

    Positive semi-definiteness is a critical property in kernel methods for Support Vector Machines (SVMs), by which efficient solutions can be guaranteed through convex quadratic programming. However, many similarity functions in applications do not produce positive semi-definite kernels. We propose a projection method that constructs a projection matrix for indefinite kernels. As a generalization of the spectrum methods (the denoising method and the flipping method), the projection method shows better or comparable performance compared to the corresponding indefinite kernel methods on a number of real-world data sets. Under Bregman matrix divergence theory, we can find a suggested optimal λ for the projection method using unconstrained optimization in kernel learning. In this paper we focus on determining the optimal λ precisely within an unconstrained optimization framework. We developed a perturbed von-Neumann divergence to measure kernel relationships. We compared optimal λ determination with the Logdet divergence and the perturbed von-Neumann divergence, aiming to find a better λ for the projection method. Results on a number of real-world data sets show that the projection method with the optimal λ from the Logdet divergence demonstrates near-optimal performance, and the perturbed von-Neumann divergence can help determine a relatively better optimal projection method. The projection method is easy to use for dealing with indefinite kernels, and the parameter embedded in the method can be determined through unconstrained optimization under Bregman matrix divergence theory. This may provide a new way in kernel SVMs for varied objectives.
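    The spectrum methods that the projection method generalizes are easy to sketch: given an indefinite (symmetric) kernel matrix, denoising clips negative eigenvalues to zero and flipping replaces them with their absolute values (an illustration only, not the paper's projection construction):

```python
import numpy as np

def spectrum_clip(K):
    # Denoising: negative eigenvalues set to zero -> the nearest PSD matrix
    # in Frobenius norm.
    w, V = np.linalg.eigh((K + K.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def spectrum_flip(K):
    # Flipping: eigenvalues replaced by their absolute values.
    w, V = np.linalg.eigh((K + K.T) / 2.0)
    return (V * np.abs(w)) @ V.T
```

Either transform yields a positive semi-definite kernel matrix that can be handed to a standard convex SVM solver.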

  16. Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.

    PubMed

    Strand, Jarle; Taxt, Torfinn

    2002-01-01

    The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
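    The derivative-then-integrate idea behind ADI is easiest to see in 1D, where wrapping the first differences and integrating them (Itoh's method) recovers the phase exactly whenever the true differences stay within (-pi, pi]. This 1D illustration omits the noise-robust estimation and adaptive 2D integration that the ADI method adds:

```python
import numpy as np

def unwrap_1d(phi_wrapped):
    """Itoh's method: wrap the first differences back into (-pi, pi],
    then integrate them from the first sample."""
    d = np.diff(phi_wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi        # re-wrapped differences
    return np.concatenate(([phi_wrapped[0]],
                           phi_wrapped[0] + np.cumsum(d)))
```

In 2D the integration path matters once noise or shears make the wrapped derivative field inconsistent, which is what the adaptive integration process addresses.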

  17. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
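    A minimal sketch of the pipeline described above, on hypothetical data. One common form of the generalized Mahalanobis distance between two fitted coefficient vectors uses the sum of the two species' estimated coefficient covariance matrices (the paper's exact form may differ):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def coef_mahalanobis_matrix(betas, covs):
    """Pairwise generalized Mahalanobis distances between species'
    habitat-model coefficient vectors betas[i], using covs[i] + covs[j]
    as the pooled covariance for the pair (i, j)."""
    n = len(betas)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = betas[i] - betas[j]
            D[i, j] = D[j, i] = np.sqrt(d @ np.linalg.solve(covs[i] + covs[j], d))
    return D

def group_species(betas, covs, n_groups):
    # Hierarchical clustering on the distance matrix, cut into n_groups.
    D = coef_mahalanobis_matrix(betas, covs)
    Z = linkage(squareform(D), method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")
```

In practice each beta and covariance would come from a fitted logistic regression for that species; here they are supplied directly.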

  18. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  19. Statistical methods used to test for agreement of medical instruments measuring continuous variables in method comparison studies: a systematic review.

    PubMed

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future.
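    For reference, the Bland-Altman quantities discussed above reduce to the mean difference (bias) and its 1.96-SD limits of agreement (a minimal sketch; the published method also includes plotting differences against means):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between
    paired measurements from two methods."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Unlike a correlation coefficient, these quantities describe how far apart the two methods' readings can be expected to fall, which is why Bland and Altman recommend them for agreement studies.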

  20. Statistical Methods Used to Test for Agreement of Medical Instruments Measuring Continuous Variables in Method Comparison Studies: A Systematic Review

    PubMed Central

    Zaki, Rafdzah; Bulgiba, Awang; Ismail, Roshidi; Ismail, Noor Azina

    2012-01-01

    Background Accurate values are a must in medicine. An important parameter in determining the quality of a medical instrument is agreement with a gold standard. Various statistical methods have been used to test for agreement. Some of these methods have been shown to be inappropriate. This can result in misleading conclusions about the validity of an instrument. The Bland-Altman method is the most popular method judging by the many citations of the article proposing this method. However, the number of citations does not necessarily mean that this method has been applied in agreement research. No previous study has been conducted to look into this. This is the first systematic review to identify statistical methods used to test for agreement of medical instruments. The proportion of various statistical methods found in this review will also reflect the proportion of medical instruments that have been validated using those particular methods in current clinical practice. Methodology/Findings Five electronic databases were searched between 2007 and 2009 to look for agreement studies. A total of 3,260 titles were initially identified. Only 412 titles were potentially related, and finally 210 fitted the inclusion criteria. The Bland-Altman method is the most popular method with 178 (85%) studies having used this method, followed by the correlation coefficient (27%) and means comparison (18%). Some of the inappropriate methods highlighted by Altman and Bland since the 1980s are still in use. Conclusions This study finds that the Bland-Altman method is the most popular method used in agreement research. There are still inappropriate applications of statistical methods in some studies. It is important for a clinician or medical researcher to be aware of this issue because misleading conclusions from inappropriate analyses will jeopardize the quality of the evidence, which in turn will influence quality of care given to patients in the future. PMID:22662248

  1. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.

  2. [The research and application of pretreatment method for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi].

    PubMed

    Huang, Y F; Chang, Z; Bai, J; Zhu, M; Zhang, M X; Wang, M; Zhang, G; Li, X Y; Tong, Y G; Wang, J L; Lu, X X

    2017-08-08

Objective: To establish a pretreatment method, developed by the laboratory, for matrix-assisted laser desorption ionization-time of flight mass spectrometry identification of filamentous fungi, and to evaluate its feasibility. Methods: Three hundred and eighty strains of filamentous fungi collected from January 2014 to December 2016 were recovered and cultured on sabouraud dextrose agar (SDA) plates at 28 ℃ until mature. In parallel, the fungi were cultured in liquid sabouraud medium both with the vertical rotation method recommended by Bruker and with a horizontal vibration method developed by the laboratory until an adequate amount of growth was observed. For strains cultured with each of the three methods, protein was extracted with a modified magnetic bead-based extraction method for mass spectrometric identification. Results: For the 380 strains, the SDA culture method took 3-10 d, with species- and genus-level identification rates of 47% and 81%, respectively; the vertical rotation method took 5-7 d, with rates of 76% and 94%; the horizontal vibration method took 1-2 d, with rates of 96% and 99%. The difference between the horizontal vibration method and the SDA culture method was statistically significant (χ(2)=39.026, P<0.01), as was the difference between the horizontal vibration method and the vertical rotation method recommended by Bruker (χ(2)=11.310, P<0.01). Conclusion: The horizontal vibration method combined with the modified magnetic bead-based extraction method developed by the laboratory is superior to the method recommended by Bruker and to the SDA culture method in identification capacity for filamentous fungi, and can be applied in clinical practice.

  3. Development of a practical costing method for hospitals.

    PubMed

    Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei

    2006-03-01

To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. Among traditional cost accounting systems, volume-based costing (VBC) is the most popular method. In VBC, indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator called a cost driver (e.g., labor hours, revenues, or the number of patients). However, this method often yields rough and inaccurate results. The activity-based costing (ABC) method, introduced in the mid-1990s, can provide more accurate results. With ABC, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost drivers. However, ABC is much more complex and costly than other traditional cost accounting methods because data collection for the cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method that reduces the workload of ABC costing by reducing the number of cost drivers. Using the S-ABC method, we estimated the cost of laboratory tests and obtained results similarly accurate to those of the ABC method (the largest difference was 2.64%), while reducing the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to certify the effectiveness of the new method. In conclusion, the S-ABC method provides two advantages over the VBC and ABC methods: (1) it obtains accurate results, and (2) it is simpler to perform. Once the number of cost drivers is reduced by applying the proposed S-ABC method to the data prepared for the ABC method, cost accounting can easily be performed with few cost drivers from the second round of costing onward.
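The allocation logic behind ABC (and the S-ABC variant, which simply uses fewer drivers) can be sketched in a few lines; all activity names, driver counts, and cost figures below are hypothetical, not data from the paper:

```python
# Minimal activity-based costing sketch: allocate each activity's cost to
# cost objects in proportion to their consumption of that activity's driver.
activity_costs = {"sampling": 3000.0, "analysis": 9000.0}

# driver units consumed by each cost object (e.g., a type of lab test)
driver_use = {
    "test_A": {"sampling": 10, "analysis": 40},
    "test_B": {"sampling": 30, "analysis": 20},
}

def abc_allocate(activity_costs, driver_use):
    # total driver units consumed per activity, across all cost objects
    totals = {act: sum(obj[act] for obj in driver_use.values())
              for act in activity_costs}
    # each object's cost = sum over activities of its proportional share
    return {obj: sum(activity_costs[act] * use[act] / totals[act]
                     for act in activity_costs)
            for obj, use in driver_use.items()}

costs = abc_allocate(activity_costs, driver_use)
```

Reducing the number of drivers, as S-ABC does, shrinks the `driver_use` table that must be collected, which is where the data-collection workload lies.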

  4. Comparative study between recent methods manipulating ratio spectra and classical methods based on two-wavelength selection for the determination of binary mixture of antazoline hydrochloride and tetryzoline hydrochloride

    NASA Astrophysics Data System (ADS)

    Abdel-Halim, Lamia M.; Abd-El Rahman, Mohamed K.; Ramadan, Nesrin K.; EL Sanabary, Hoda F. A.; Salem, Maissa Y.

    2016-04-01

A comparative study was carried out between two classical spectrophotometric methods (the dual wavelength method and Vierordt's method) and two recent methods manipulating ratio spectra (the ratio difference method and the first derivative of ratio spectra method) for the simultaneous determination of Antazoline hydrochloride (AN) and Tetryzoline hydrochloride (TZ) in their combined pharmaceutical formulation, in the presence of benzalkonium chloride as a preservative, without preliminary separation. The dual wavelength method depends on choosing two wavelengths for each drug such that the difference in absorbance at those two wavelengths is zero for the other drug. Vierordt's method is based on measuring the absorbance and absorptivity values of the two drugs at their λmax (248.0 and 219.0 nm for AN and TZ, respectively), followed by substitution into the corresponding Vierordt's equations. The recent methods manipulating ratio spectra depend on either measuring the difference in amplitudes of the ratio spectra between 255.5 and 269.5 nm for AN and between 220.0 and 273.0 nm for TZ (ratio difference method), or computing the first derivative of the ratio spectra for each drug and measuring the peak amplitude at 250.0 nm for AN and at 224.0 nm for TZ (first derivative of ratio spectrophotometry). The specificity of the developed methods was investigated by analyzing different laboratory-prepared mixtures of the two drugs. All methods were applied successfully to the determination of the selected drugs in their combined dosage form, showing that the classical spectrophotometric methods can still be used successfully for the analysis of a binary mixture with minimal data manipulation, whereas the recent methods require relatively more steps. Furthermore, validation of the proposed methods was performed according to ICH guidelines; accuracy, precision and repeatability were found to be within acceptable limits. 
Statistical studies showed that the methods can be competitively applied in quality control laboratories.
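Vierordt's two-wavelength method described above amounts to solving a 2×2 linear system from the additive Beer-Lambert law. In the sketch below only the wavelengths come from the abstract; the absorptivity values and concentrations are hypothetical:

```python
import numpy as np

# Vierordt's method in matrix form: at each wavelength the mixture absorbance
# is the sum of (absorptivity x concentration) over the two drugs.
E = np.array([[0.90, 0.15],   # absorptivities of AN, TZ at 248.0 nm (hypothetical)
              [0.20, 0.80]])  # absorptivities of AN, TZ at 219.0 nm (hypothetical)

c_true = np.array([2.0, 5.0])   # concentrations of AN, TZ (e.g. in ug/mL)
A = E @ c_true                  # simulated mixture absorbances at the two lambda-max
c_est = np.linalg.solve(E, A)   # substitute back to recover both concentrations
```

The dual wavelength method sidesteps this system entirely by picking wavelength pairs at which one drug's absorbance difference vanishes, so each drug is read off independently.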

  5. Lipidomic analysis of biological samples: Comparison of liquid chromatography, supercritical fluid chromatography and direct infusion mass spectrometry methods.

    PubMed

    Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal

    2017-11-24

Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, owing to the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples (tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients). The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomic analyses. Nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, although only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements. 
Copyright © 2017 Elsevier B.V. All rights reserved.

  6. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

Owing to fast or stepwise cuff deflation, or to measuring at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, namely two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods may not be sufficient to compensate for lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with that of the current validation methods (methods 1 and 2). The validation accuracy of the three methods was assessed in human participants [N=120, 45±15.3 years (mean±SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. Compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting higher reproducibility of validation. The SD1 by method 2 correlated significantly with the participant's BP (P=0.004), supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting higher interparticipant consistency of validation. Among the methods for validating the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and the highest interparticipant consistency, can be proposed as the most appropriate.

  7. [Significance of bacteria detection with filter paper method on diagnosis of diabetic foot wound infection].

    PubMed

    Zou, X H; Zhu, Y P; Ren, G Q; Li, G C; Zhang, J; Zou, L J; Feng, Z B; Li, B H

    2017-02-20

    Objective: To evaluate the significance of bacteria detection with filter paper method on diagnosis of diabetic foot wound infection. Methods: Eighteen patients with diabetic foot ulcer conforming to the study criteria were hospitalized in Liyuan Hospital Affiliated to Tongji Medical College of Huazhong University of Science and Technology from July 2014 to July 2015. Diabetic foot ulcer wounds were classified according to the University of Texas diabetic foot classification (hereinafter referred to as Texas grade) system, and general condition of patients with wounds in different Texas grade was compared. Exudate and tissue of wounds were obtained, and filter paper method and biopsy method were adopted to detect the bacteria of wounds of patients respectively. Filter paper method was regarded as the evaluation method, and biopsy method was regarded as the control method. The relevance, difference, and consistency of the detection results of two methods were tested. Sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of filter paper method in bacteria detection were calculated. Receiver operating characteristic (ROC) curve was drawn based on the specificity and sensitivity of filter paper method in bacteria detection of 18 patients to predict the detection effect of the method. Data were processed with one-way analysis of variance and Fisher's exact test. In patients tested positive for bacteria by biopsy method, the correlation between bacteria number detected by biopsy method and that by filter paper method was analyzed with Pearson correlation analysis. 
Results: (1) There were no statistically significant differences among patients with wounds of Texas grade 1, 2, and 3 in age, duration of diabetes, duration of wound, wound area, ankle brachial index, glycosylated hemoglobin, fasting blood sugar, blood platelet count, erythrocyte sedimentation rate, C-reactive protein, aspartate aminotransferase, serum creatinine, or urea nitrogen (with F values from 0.029 to 2.916, P values above 0.05), while there were statistically significant differences among them in white blood cell count and alanine aminotransferase (F=4.688 and 6.833, respectively, P<0.05 or P<0.01). (2) By the biopsy method, 6 patients tested negative for bacteria and 12 tested positive, of whom 10 had bacterial counts above 1×10(5)/g and 2 had counts below 1×10(5)/g. By the filter paper method, 8 patients tested negative and 10 tested positive, of whom 7 had bacterial counts above 1×10(5)/g and 3 had counts below 1×10(5)/g. Seven patients tested positive by both the biopsy and filter paper methods, 8 tested negative by both, and 3 tested positive by the biopsy method but negative by the filter paper method; no patient who tested negative by the biopsy method tested positive by the filter paper method. There was a directional association between the results of the two methods (P=0.004), i.e. if the biopsy result was positive, the filter paper result could also be positive. There was no significant difference between the results of the two methods (P=0.250), and their consistency was moderate (Kappa=0.68, P=0.002). 
(3) The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of the filter paper method in bacteria detection were 70%, 100%, 1.00, 0.73, and 83.3%, respectively. The total area under the ROC curve for bacteria detection by the filter paper method in the 18 patients was 0.919 (95% confidence interval: 0-1.000, P=0.030). (4) Thirteen strains of bacteria were detected by the biopsy method: 5 strains of Acinetobacter baumannii, 5 of Staphylococcus aureus, 1 of Pseudomonas aeruginosa, 1 of Streptococcus bovis, and 1 of bird Enterococcus. Eleven strains were detected by the filter paper method: 5 strains of Acinetobacter baumannii, 3 of Staphylococcus aureus, 1 of Pseudomonas aeruginosa, 1 of Streptococcus bovis, and 1 of bird Enterococcus. Except for Staphylococcus aureus, the sensitivity and specificity of the filter paper method in detecting the other 4 bacteria were all 100%. The consistency between the filter paper and biopsy methods in detecting Acinetobacter baumannii was good (Kappa=1.00, P<0.01), while that in detecting Staphylococcus aureus was moderate (Kappa=0.68, P<0.05). (5) There was no significant correlation overall between the bacterial counts of wounds detected by the filter paper method and those detected by the biopsy method (r=0.257, P=0.419). The counts detected by the two methods correlated significantly in wounds of Texas grade 1 and 2 (r=0.999, P=0.001), but not in wounds of Texas grade 3 (r=-0.053, P=0.947). Conclusions: The detection result of the filter paper method accords with that of the biopsy method in determining bacterial infection, and the filter paper method is of great value in the diagnosis of local infection of diabetic foot wounds.
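The diagnostic indices reported above follow directly from the 2×2 table implied by the abstract (7 concordant positives, 8 concordant negatives, 3 biopsy-positive/filter-negative, 0 biopsy-negative/filter-positive), with the biopsy method as the reference:

```python
# 2x2 counts from the abstract, biopsy method taken as the reference standard
tp, fn, tn, fp = 7, 3, 8, 0   # true pos., false neg., true neg., false pos.

sensitivity = tp / (tp + fn)                 # 7/10  = 0.70
specificity = tn / (tn + fp)                 # 8/8   = 1.00
ppv = tp / (tp + fp)                         # 7/7   = 1.00
npv = tn / (tn + fn)                         # 8/11  ~ 0.73
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 15/18 ~ 0.833
```

These reproduce the reported 70%, 100%, 1.00, 0.73, and 83.3%.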

  8. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) < or = c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. 
Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
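The defining ingredient of pseudospectral and k-space schemes, evaluating spatial derivatives via the FFT rather than by finite differencing, can be illustrated in one dimension. This is a generic sketch of spectral differentiation on a periodic domain, not the paper's k-t space propagator:

```python
import numpy as np

# Spectral differentiation: transform to k-space, multiply by i*k, transform
# back. For band-limited fields this is exact to machine precision, which is
# why k-space methods tolerate much coarser grids than finite differences.
n = 64
L = 2 * np.pi
x = np.arange(n) * L / n
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers

u = np.sin(3 * x)
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))   # spectral d/dx, ~ 3*cos(3x)
```

A full k-space scheme combines such FFT-based spatial operators with a temporal propagator that, as the abstract notes, is exact for homogeneous media.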

  9. The method of planning the energy consumption for electricity market

    NASA Astrophysics Data System (ADS)

    Russkov, O. V.; Saradgishvili, S. E.

    2017-10-01

The limitations of existing forecast models are defined. The proposed method is based on game theory, probability theory and forecasts of energy price relations, and forms the basis for planning the uneven energy consumption of an industrial enterprise. The ecological side of the method is discussed, and the program module implementing the method's algorithm is described. Successful tests of the method at an industrial enterprise are presented. The method allows optimizing the difference between planned and actual energy consumption for every hour of the day. A conclusion is drawn about the applicability of the method for addressing economic and ecological challenges.

  10. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on Legendre wavelets, in which Legendre polynomials are used. The mechanism of the method is to use collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate and close to the exact solution as well as to the results of other methods. The proposed method is computationally more effective and leads to more accurate results than other methods from the literature.
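The collocation mechanism described above, expanding the unknown in a basis and enforcing the differential equation at discrete points so that a BVP becomes an algebraic system, can be illustrated with a plain polynomial basis (a toy stand-in for the Legendre-wavelet basis of the paper) on a second-order problem with a known solution:

```python
import numpy as np

# Collocation for u'' = -2, u(0) = u(1) = 0, exact solution u(x) = x(1 - x).
# Ansatz u(x) = sum_j a_j x^j; enforce the ODE at interior collocation points
# and the boundary conditions, giving a square linear system for the a_j.
N = 4                                   # basis: 1, x, x^2, x^3, x^4
pts = np.linspace(0.1, 0.9, N - 1)      # interior collocation points

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for i, x in enumerate(pts):             # ODE rows: sum_j a_j * j(j-1) x^(j-2) = -2
    for j in range(2, N + 1):
        A[i, j] = j * (j - 1) * x ** (j - 2)
    b[i] = -2.0
A[N - 1, :] = [1.0] + [0.0] * N         # boundary row: u(0) = a_0 = 0
A[N, :] = 1.0                           # boundary row: u(1) = sum_j a_j = 0

a = np.linalg.solve(A, b)               # the algebraic system replacing the BVP
xx = np.linspace(0.0, 1.0, 11)
u = sum(a[j] * xx ** j for j in range(N + 1))
```

A sixth-order problem works the same way, with six boundary rows and sixth-derivative entries in the ODE rows; the wavelet basis improves conditioning and locality but does not change the mechanism.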

  11. Modifications of the PCPT method for HJB equations

    NASA Astrophysics Data System (ADS)

    Kossaczký, I.; Ehrhardt, M.; Günther, M.

    2016-10-01

    In this paper we will revisit the modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification is called piecewise predicted policy timestepping (PPPT) method and if properly used, it may be significantly faster. We will quickly recapitulate the algorithms of PCPT, PPPT methods and of the classical implicit method and apply them on a passport option pricing problem with non-standard payoff. We will present modifications needed to solve this problem effectively with the PPPT method and compare the performance with the PCPT method and the classical implicit method.

  12. Rapid Method for Sodium Hydroxide/Sodium Peroxide Fusion ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Plutonium-238 and plutonium-239 in water and air filters Method Selected for: SAM lists this method as a pre-treatment technique supporting analysis of refractory radioisotopic forms of plutonium in drinking water and air filters using the following qualitative techniques: • Rapid methods for acid or fusion digestion • Rapid Radiochemical Method for Plutonium-238 and Plutonium 239/240 in Building Materials for Environmental Remediation Following Radiological Incidents. Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  13. The Importance of Method Selection in Determining Product Integrity for Nutrition Research1234

    PubMed Central

    Mudge, Elizabeth M; Brown, Paula N

    2016-01-01

The American Herbal Products Association estimates that there are as many as 3,000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823

  14. Development of a Double Glass Mounting Method Using Formaldehyde Alcohol Azocarmine Lactophenol (FAAL) and its Evaluation for Permanent Mounting of Small Nematodes

    PubMed Central

    ZAHABIUN, Farzaneh; SADJJADI, Seyed Mahmoud; ESFANDIARI, Farideh

    2015-01-01

Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. To address this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by either double glass mounting or the classic dehydration method, using Canada balsam as the mounting medium. The slides were evaluated at different dates and times over more than four years, and photographs were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes mounted with the double glass method, which showed well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729

  15. An evaluation of the efficiency of cleaning methods in a bacon factory

    PubMed Central

    Dempster, J. F.

    1971-01-01

    The germicidal efficiencies of hot water (140-150° F.) under pressure (method 1), hot water + 2% (w/v) detergent solution (method 2) and hot water + detergent + 200 p.p.m. solution of available chlorine (method 3) were compared at six sites in a bacon factory. Results indicated that sites 1 and 2 (tiled walls) were satisfactorily cleaned by each method. It was therefore considered more economical to clean such surfaces routinely by method 1. However, this method was much less efficient (31% survival of micro-organisms) on site 3 (wooden surface) than methods 2 (7% survival) and 3 (1% survival). Likewise the remaining sites (dehairing machine, black scraper and table) were least efficiently cleaned by method 1. The most satisfactory results were obtained when these surfaces were treated by method 3. Pig carcasses were shown to be contaminated by an improperly cleaned black scraper. Repeated cleaning and sterilizing (method 3) of this equipment reduced the contamination on carcasses from about 70% to less than 10%. PMID:5291745

  16. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  17. Simplified adsorption method for detection of antibodies to Candida albicans germ tubes.

    PubMed Central

    Ponton, J; Quindos, G; Arilla, M C; Mackenzie, D W

    1994-01-01

Two modifications that simplify and shorten a method for adsorption of the antibodies against the antigens expressed on both blastospore and germ tube cell wall surfaces (methods 2 and 3) were compared with the original adsorption method (method 1) for detecting anti-Candida albicans germ tube antibodies in 154 serum specimens. Adsorption of the sera by both modified methods resulted in titers very similar to those obtained by the original method. Only 5.2% of serum specimens tested by method 2 and 5.8% of serum specimens tested by method 3 showed discrepancies of more than one dilution relative to the titers observed by method 1. When a test based on method 2 was evaluated with sera from patients with invasive candidiasis, the best discriminatory results (sensitivity, 84.6%; specificity, 87.9%; positive predictive value, 75.9%; negative predictive value, 92.7%; efficiency, 86.9%) were obtained when a titer of > or = 1:160 was considered positive. PMID:8126184

  18. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  19. A hybrid perturbation Galerkin technique with applications to slender body theory

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1987-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of applied mathematics problems which involve a small parameter is presented. The method consists of: (1) the use of a regular or singular perturbation method to determine the asymptotic expansion of the solution in terms of the small parameter; (2) construction of an approximate solution in the form of a sum of the perturbation coefficient functions multiplied by (unknown) amplitudes (gauge functions); and (3) the use of the classical Bubnov-Galerkin method to determine these amplitudes. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is applied to some singular perturbation problems in slender body theory. The results obtained from the hybrid method are compared with approximate solutions obtained by other methods, and the degree of applicability of the hybrid method to broader problem areas is discussed.

  20. Comparison of the convolution quadrature method and enhanced inverse FFT with application in elastodynamic boundary element method

    NASA Astrophysics Data System (ADS)

    Schanz, Martin; Ye, Wenjing; Xiao, Jinyou

    2016-04-01

    Transient problems can often be solved with transformation methods, where the inverse transformation is usually performed numerically. Here, the discrete Fourier transform in combination with the exponential window method is compared with the convolution quadrature method formulated as an inverse transformation. Both are inverse Laplace transforms, which are formally identical but use different complex frequencies. A numerical study is performed, first with simple convolution integrals and, second, with a boundary element method (BEM) for elastodynamics. Essentially, when combined with the BEM, the discrete Fourier transform needs fewer frequency evaluations, but a finer mesh than the convolution quadrature method, to obtain the same level of accuracy. If fast techniques such as the fast multipole method are additionally used to accelerate the boundary element method, the convolution quadrature method is better, because the iterative solver needs far fewer iterations to converge. This is caused by the larger real part of the complex frequencies required in the calculation, which improves the conditioning of the system matrix.
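    The exponential-window inversion compared above can be sketched in a few lines: the transform is sampled on the shifted line s = η + iω, inverted with an inverse FFT, and the window removed by multiplying with e^{ηt}. This is a minimal illustration with assumed parameter choices (N, T, η are ours), applied to F(s) = 1/(s+1), whose inverse transform is e^{-t}; it is not the elastodynamic BEM setting of the paper.

```python
import numpy as np

# Exponential window method: invert a Laplace transform F(s) by sampling
# it on the line s = eta + i*omega and applying an inverse FFT.
N, T = 2000, 40.0          # samples and record length (illustrative)
dt = T / N
eta = 6.0 / T              # shift: the windowed signal decays by e^-6 over T

t = np.arange(N) * dt
omega = 2.0 * np.pi * np.fft.fftfreq(N, d=dt)
F = 1.0 / (eta + 1j * omega + 1.0)      # F(s) = 1/(s+1), inverse is exp(-t)

# ifft computes (1/N) * sum_k F_k e^{i 2 pi k n / N}; dividing by dt turns it
# into the trapezoidal Bromwich integral, and exp(eta*t) removes the window.
f = np.real(np.fft.ifft(F)) / dt * np.exp(eta * t)

print(abs(f[50] - np.exp(-t[50])))  # small error at t = 1
```

The larger the shift η, the better damped the time-aliasing from frequency sampling, at the price of amplifying round-off at late times; this trade-off is what the exponential window method tunes.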

  1. Explicit methods in extended phase space for inseparable Hamiltonian problems

    NASA Astrophysics Data System (ADS)

    Pihajoki, Pauli

    2015-03-01

    We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long-term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to the original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to the general-purpose differential equation solver LSODE, and to the implicit midpoint method, a symplectic one-step method. We find that the extended phase space methods compare favourably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
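    The extended-phase-space idea can be sketched concretely: double (q, p) to (q, p, x, y) and split H̃(q, p, x, y) = H(q, y) + H(x, p); each term then generates an exactly integrable flow, and the two flows are composed in a leapfrog. The sketch below (without the coordinate mixing transformations the paper also uses, and with a hypothetical inseparable Hamiltonian of our choosing, not one of the paper's test problems) illustrates the splitting:

```python
# Inseparable example Hamiltonian H(q, p) = (1 + q^2)(1 + p^2) / 2 (assumed).
def H(q, p):
    return 0.5 * (1.0 + q * q) * (1.0 + p * p)

def dHdq(q, p):
    return q * (1.0 + p * p)

def dHdp(q, p):
    return p * (1.0 + q * q)

def leapfrog_step(q, p, x, y, h):
    """One step of the extended-phase-space leapfrog A(h/2) B(h) A(h/2).

    A-flow uses H(q, y): it kicks p and drifts x, leaving q, y fixed.
    B-flow uses H(x, p): it kicks y and drifts q, leaving x, p fixed.
    Both sub-flows are exact, so the composition is fully explicit.
    """
    # A(h/2)
    p -= 0.5 * h * dHdq(q, y)
    x += 0.5 * h * dHdp(q, y)
    # B(h)
    q += h * dHdp(x, p)
    y -= h * dHdq(x, p)
    # A(h/2)
    p -= 0.5 * h * dHdq(q, y)
    x += 0.5 * h * dHdp(q, y)
    return q, p, x, y

q, p = 1.0, 0.0
x, y = q, p            # the second copy starts on the same initial condition
h = 0.01
H0 = H(q, p)
for _ in range(1000):
    q, p, x, y = leapfrog_step(q, p, x, y, h)

print(abs(H(q, p) - H0))   # bounded energy error, O(h^2)
```

For this regular one-degree-of-freedom example the two copies stay close; the mixing maps discussed in the abstract matter when the copies would otherwise drift apart.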

  2. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.

  3. Comparison of gravimetric, creamatocrit and esterified fatty acid methods for determination of total fat content in human milk.

    PubMed

    Du, Jian; Gay, Melvin C L; Lai, Ching Tat; Trengove, Robert D; Hartmann, Peter E; Geddes, Donna T

    2017-02-15

    The gravimetric method is considered the gold standard for measuring the fat content of human milk. However, it is labor intensive and requires large volumes of human milk. Other methods, such as the creamatocrit and the esterified fatty acid (EFA) assay, have also been used widely in fat analysis. However, these methods have not been compared concurrently with the gravimetric method. A comparison of the three methods was conducted with human milk of varying fat content. Correlations between the methods were high (r² = 0.99). Statistical differences (P < 0.001) were observed in the overall fat measurements and within each group (low, medium and high fat milk) using the three methods. Overall, the creamatocrit method showed a stronger correlation with the gravimetric method, with a lower mean difference (4.73 g/L) and percentage difference (5.16%), than the EFA method. Furthermore, its ease of operation and real-time analysis make the creamatocrit method preferable. Copyright © 2016. Published by Elsevier Ltd.

  4. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques. However, the FE technique has meshing task problems and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to take both advantages of FE and EFG methods, the complete electrode model of the forward problem is solved, and an iterative regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is applied to compute Jacobian in the inverse problem. Utilizing 2D circular homogenous models, the numerical results are validated with analytical and experimental results and the performance of the hybrid FE-EFG method compared with the FE method is illustrated. Results of image reconstruction are presented for a human chest experimental phantom.
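    The inverse step named above, an iteratively regularized Gauss-Newton scheme driven by the Jacobian of the forward model, can be sketched generically. The toy forward model, starting point, and regularization parameter below are illustrative assumptions of ours, not the EIT formulation of the paper:

```python
import numpy as np

# Toy nonlinear forward model F(theta): exponential decay sampled at times t.
t = np.linspace(0.0, 2.0, 20)

def forward(theta):
    a, b = theta
    return a * np.exp(-b * t)

def jacobian(theta):
    a, b = theta
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

theta_true = np.array([2.0, 1.5])
data = forward(theta_true)          # noiseless synthetic measurements

# Regularized Gauss-Newton: solve (J^T J + lam*I) d = J^T r at each iterate.
theta = np.array([1.0, 1.0])
lam = 1e-3                          # Tikhonov damping (assumed value)
for _ in range(20):
    r = data - forward(theta)
    J = jacobian(theta)
    d = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    theta = theta + d

print(theta)   # converges toward [2.0, 1.5]
```

In EIT the residual r is the mismatch between measured and computed electrode voltages and J is the sensitivity (Jacobian) matrix computed from the forward solver; the regularization term is what keeps the ill-posed update stable.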

  5. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of land cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.
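    MARS builds its regression surface from piecewise-linear hinge functions max(0, x − t) and their products. As a minimal illustration of that building block (not the authors' classifier, and with a made-up one-dimensional dataset), an ordinary least-squares fit over a hinge basis recovers a piecewise-linear trend exactly:

```python
import numpy as np

# Toy data generated from a known piecewise-linear ("hinged") function.
x = np.linspace(0.0, 1.0, 101)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(0.0, x - 0.5)   # knot at t = 0.5

# MARS-style design matrix: intercept, linear term, and one hinge function.
X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - 0.5)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

residual = np.max(np.abs(X @ coef - y))
print(coef)        # recovers ~[1.0, 0.5, 2.0]
print(residual)    # essentially zero
```

Full MARS additionally searches over knot locations and variable interactions and prunes terms by generalized cross-validation; the hinge basis above is only the core ingredient.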

  6. Monitoring the chemical production of citrus-derived bioactive 5-demethylnobiletin using surface enhanced Raman spectroscopy

    PubMed Central

    Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili

    2013-01-01

    To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from the demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated very well with the HPLC method. The solution method produced a lower root mean square error of calibration and a higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demethylnobiletin based on their different binding affinities to the silver dendrites. The substrate method was found simpler and faster for collecting the SERS ‘fingerprint’ spectra of the samples, as no incubation between samples and silver was needed and only trace amounts of sample were required. Our results demonstrated that the SERS methods were superior to the HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986

  7. Flow “Fine” Synthesis: High Yielding and Selective Organic Synthesis by Flow Methods

    PubMed Central

    2015-01-01

    Abstract The concept of flow “fine” synthesis, that is, high yielding and selective organic synthesis by flow methods, is described. Some examples of flow “fine” synthesis of natural products and APIs are discussed. Flow methods have several advantages over batch methods in terms of environmental compatibility, efficiency, and safety. However, synthesis by flow methods is more difficult than synthesis by batch methods. Indeed, it has been considered that flow methods are applicable to the production of simple gases but difficult to apply to the synthesis of complex molecules such as natural products and APIs. Therefore, organic synthesis of such complex molecules has been conducted by batch methods. On the other hand, syntheses and reactions that attain high yields and high selectivities by flow methods are increasingly being reported. Flow methods are leading candidates for the next generation of manufacturing methods that can mitigate environmental concerns toward a sustainable society. PMID:26337828

  8. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang

    A melting simulation method, the shock melting (SM) method, is proposed and shown to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turns out that the SM method is an efficient alternative for calculating the melting curves of materials.

  9. The Importance of Method Selection in Determining Product Integrity for Nutrition Research.

    PubMed

    Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N

    2016-03-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.

  10. Student Preferences Regarding Teaching Methods in a Drug-Induced Diseases and Clinical Toxicology Course

    PubMed Central

    Gim, Suzanna

    2013-01-01

    Objectives. To determine which teaching method in a drug-induced diseases and clinical toxicology course was preferred by students and whether their preference correlated with their learning of drug-induced diseases. Design. Three teaching methods incorporating active-learning exercises were implemented. A survey instrument was developed to analyze students’ perceptions of the active-learning methods used and how they compared to the traditional teaching method (lecture). Examination performance was then correlated to students’ perceptions of various teaching methods. Assessment. The majority of the 107 students who responded to the survey found traditional lecture significantly more helpful than active-learning methods (p=0.01 for all comparisons). None of the 3 active-learning methods were preferred over the others. No significant correlations were found between students’ survey responses and examination performance. Conclusions. Students preferred traditional lecture to other instructional methods. Learning was not influenced by the teaching method or by preference for a teaching method. PMID:23966726

  11. A new sampling method for fibre length measurement

    NASA Astrophysics Data System (ADS)

    Wu, Hongyan; Li, Xianghong; Zhang, Junying

    2018-06-01

    This paper presents a new sampling method for fibre length measurement. The new method meets the three features of an effective sampling method, and it produces a beard with two symmetrical ends that can be scanned from the holding line to obtain two full fibrograms for each sample. The methodology is introduced, and experiments were performed to investigate the effectiveness of the new method. The results show that the new sampling method is effective.

  12. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    NASA Technical Reports Server (NTRS)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz. the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and their accuracy and numerical stability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  13. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  14. Sensitivity of Particle Size in Discrete Element Method to Particle Gas Method (DEM_PGM) Coupling in Underbody Blast Simulations

    DTIC Science & Technology

    2016-06-12

    Venkatesh Babu, Kumar Kulkarni, Sanjay... buried in soil, viz., (1) coupled discrete element and particle gas methods (DEM-PGM) and (2) Arbitrary Lagrangian-Eulerian (ALE), are investigated. The... DEM_PGM and identify the limitations/strengths compared to the ALE method. The Discrete Element Method (DEM) can model individual particles directly, and...

  15. Two Project Methods: Preliminary Observations on the Similarities and Differences between William Heard Kilpatrick's Project Method and John Dewey's Problem-Solving Method

    ERIC Educational Resources Information Center

    Sutinen, Ari

    2013-01-01

    The project method became a famous teaching method when William Heard Kilpatrick published his article "Project Method" in 1918. The key idea in Kilpatrick's project method is to try to explain how pupils learn things when they work in projects toward different common objects. The same idea of pupils learning by work or action in an…

  16. Using an Ordinal Outranking Method Supporting the Acquisition of Military Equipment

    DTIC Science & Technology

    2009-10-01

    ...will concentrate on the well-known ORESTE method ([10],[12]), which is complementary to the PROMETHEE methods. There are other methods belonging to... the PROMETHEE methods. This MCDM method is taught in the curriculum of the High Staff College for Military Administrators of the Belgian MoD... C(b,a), similar to the preference indicators π(a,b) and π(b,a) of the PROMETHEE methods (see [4] and SAS-080 14 and SAS-080 15). These...

  17. Review of Statistical Methods for Analysing Healthcare Resources and Costs

    PubMed Central

    Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G

    2011-01-01

    We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. Copyright © 2010 John Wiley & Sons, Ltd. PMID:20799344

  18. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    NASA Astrophysics Data System (ADS)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a main and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although several modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local property of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to present the advantage of this method and the disadvantage of the ITGM method. The comparisons of the responses are applied to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide parameter range.
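    For context, the basic POD step itself (independent of the IGM/ITGM interpolation machinery discussed above) reduces a snapshot matrix to a few dominant modes via the singular value decomposition. A minimal sketch with a synthetic snapshot matrix of our own construction:

```python
import numpy as np

# Synthetic snapshots: two spatial modes with time-varying amplitudes + noise.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, np.pi, 64)
ts = np.linspace(0.0, 10.0, 200)
snapshots = (np.outer(np.sin(xs), np.cos(ts))
             + 0.3 * np.outer(np.sin(2 * xs), np.sin(2 * ts))
             + 1e-3 * rng.standard_normal((64, 200)))

# POD: left singular vectors are the modes, singular values rank their energy.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                   # keep the two dominant modes
energy = np.sum(s[:r] ** 2) / np.sum(s ** 2)
reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)

print(energy)   # > 0.99: two modes capture nearly all the energy
print(err)      # small relative reconstruction error
```

In a reduced-order model the governing equations are then Galerkin-projected onto the retained modes U[:, :r]; the robustness issue the paper addresses is how those modes should vary as system parameters change.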

  19. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed Central

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-01-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to hydrostatic weighing (HW) error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596
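    Hydrostatic weighing rests on Archimedes' principle: body volume is the displaced water volume minus the gas in the lungs at the time of weighing (RV in the classical method, TLC in the method above). The standard textbook computation below, with illustrative values of our own (not the study's data) and the Siri equation, which the abstract does not itself cite, shows how body density and per cent fat are obtained:

```python
# Standard hydrostatic-weighing computation (illustrative values, not the
# study's data). Masses in kg, volumes in L, densities in kg/L.

def body_density(mass_air, mass_water, water_density, lung_volume):
    """Archimedes: body volume = displaced water volume minus lung gas.

    lung_volume is the gas volume in the lungs at weighing (RV for the
    classical method, TLC for the method described in the abstract).
    """
    body_volume = (mass_air - mass_water) / water_density - lung_volume
    return mass_air / body_volume

def siri_percent_fat(density):
    """Siri (1961) two-compartment equation for per cent body fat."""
    return 495.0 / density - 450.0

# Hypothetical subject weighed at residual volume: 70 kg in air,
# 3.1 kg underwater, water density 0.996 kg/L, RV = 1.2 L.
db = body_density(mass_air=70.0, mass_water=3.1,
                  water_density=0.996, lung_volume=1.2)
print(round(db, 4), round(siri_percent_fat(db), 1))   # ~1.061, ~16.5 % fat
```

Weighing at TLC simply substitutes the larger lung gas volume (vital capacity plus residual volume) for RV; the extra buoyancy is why subjects can be weighed prone in a small tank.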

  20. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-03-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to hydrostatic weighing (HW) error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute time intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing.

  1. A work study of the CAD/CAM method and conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis.

    PubMed

    Wong, M S; Cheng, J C Y; Wong, M W; So, S F

    2005-04-01

    A study was conducted to compare the CAD/CAM method with the conventional manual method in the fabrication of spinal orthoses for patients with adolescent idiopathic scoliosis. Ten subjects were recruited for this study. Efficiency analyses of the two methods were performed from the cast filling/digitization process to completion of cast/image rectification. The dimensional changes of the casts/models rectified by the two cast rectification methods were also investigated. The results demonstrated that the CAD/CAM method was faster than the conventional manual method in the studied processes. The mean rectification time of the CAD/CAM method was shorter than that of the conventional manual method by 108.3 min (63.5%), indicating that the CAD/CAM method took about one third of the time of the conventional manual method to finish cast rectification. In the comparison of cast/image dimensional differences between the conventional manual method and the CAD/CAM method, five major dimensions in each of the five rectified regions, namely the axilla, thoracic, lumbar, abdominal and pelvic regions, were involved. There were no significant dimensional differences at the 0.05 level in 19 out of the 25 studied dimensions. This study demonstrated that the CAD/CAM system could save time in the rectification process and offer a relatively high resemblance in cast rectification as compared with the conventional manual method.

  2. An Improved Newton's Method.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    Describes Newton's method to locate roots of an equation using the Newton-Raphson iteration formula. Develops an adaptive method overcoming limitations of the iteration method. Provides the algorithm and computer program of the adaptive Newton-Raphson method. (YP)
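    The Newton-Raphson formula the article starts from is x_{n+1} = x_n − f(x_n)/f'(x_n). A minimal sketch follows, with a simple step-halving safeguard standing in, hypothetically, for the kind of adaptation the article develops (the article's actual algorithm is not reproduced here):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration with a simple damping safeguard:
    if a full step does not reduce |f|, the step is halved."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = fx / fprime(x)
        # Safeguard: backtrack while the step increases the residual.
        t = 1.0
        while abs(f(x - t * step)) >= abs(fx) and t > 1e-8:
            t *= 0.5
        x -= t * step
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)   # ~1.4142135623730951
```

Near a simple root the safeguard never triggers and the iteration converges quadratically; the damping only intervenes when the raw Newton step would overshoot.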

  3. Symplectic test particle encounters: a comparison of methods

    NASA Astrophysics Data System (ADS)

    Wisdom, Jack

    2017-01-01

    A new symplectic method for handling encounters of test particles with massive bodies is presented. The new method is compared with several popular methods (RMVS3, SYMBA, and MERCURY). The new method compares favourably.

  4. The Tongue and Quill

    DTIC Science & Technology

    2004-08-01

    I. Qualitative Research Methods: A. The Historical Method; B. General Qualitative Methods — ethnography, phenomenological study, grounded theory study, and content analysis (… 3. Phenomenological Study, 4. Grounded Theory Study, 5. Content Analysis). II. Quantitative Research Methods…

  5. 26 CFR 1.412(c)(1)-2 - Shortfall method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    26 CFR 1.412(c)(1)-2 (Income Taxes; Pension, Profit-Sharing, Stock Bonus Plans, Etc.), Shortfall method: (a) In general—(1) Shortfall method. The shortfall method is a funding method that adapts a plan's...

  6. Comparisons of two methods of harvesting biomass for energy

    Treesearch

    W.F. Watson; B.J. Stokes; I.W. Savelle

    1986-01-01

    Two harvesting methods for utilization of understory biomass were tested against a conventional harvesting method to determine relative costs. The conventional harvesting method tested removed all pine 6 inches diameter at breast height (DBH) and larger and hardwood sawlogs as tree length logs. The two intensive harvesting methods were a one-pass and a two-pass method...

  7. Log sampling methods and software for stand and landscape analyses.

    Treesearch

    Lisa J. Bate; Torolf R. Torgersen; Michael J. Wisdom; Edward O. Garton; Shawn C. Clabough

    2008-01-01

    We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...
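    One of the methods named above, the line-intersect method, has a standard estimator (Van Wagner's formula) for down log volume per unit area: V = π² Σ dᵢ² / (8L), where dᵢ are the log diameters at the transect intersections and L is the transect length. A sketch with made-up field data (the formula is the standard one from the line-intersect literature, not taken from this report):

```python
import math

def line_intersect_volume(diameters_m, transect_length_m):
    """Van Wagner's line-intersect estimator: V = pi^2 * sum(d_i^2) / (8 L).

    Returns volume of down woody material per unit area (m^3 per m^2) when
    diameters and transect length are in metres.
    """
    return math.pi ** 2 * sum(d * d for d in diameters_m) / (8.0 * transect_length_m)

# Hypothetical transect: a 100 m line crossing five logs of 10-30 cm diameter.
diameters = [0.10, 0.15, 0.30, 0.12, 0.22]
v_per_m2 = line_intersect_volume(diameters, 100.0)
v_per_ha = v_per_m2 * 10_000.0       # convert m^3/m^2 to m^3/ha
print(round(v_per_ha, 1))            # volume in m^3/ha
```

Because the estimator depends on Σd², a few large logs dominate the estimate, which is one reason transect length must be matched to forest conditions as the abstract notes.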

  8. 77 FR 55832 - Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-11

    Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method. AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method for monitoring ambient air quality. SUMMARY: Notice is... part 53, a new equivalent method for measuring concentrations of PM 2.5 in the ambient air. FOR FURTHER...

  9. 26 CFR 1.446-2 - Method of accounting for interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    26 CFR 1.446-2 (Income Taxes; Methods of Accounting), Method of accounting for interest: (a)... account by a taxpayer under the taxpayer's regular method of accounting (e.g., an accrual method or the...

  10. Rapid Radiochemical Method for Radium-226 in Building ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Radium-226 in building materials Method Selected for: SAM lists this method for qualitative analysis of radium-226 in concrete or brick building materials Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  11. Rapid Radiochemical Method for Americium-241 in Building ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Alpha spectrometry Method Developed for: Americium-241 in building materials Method Selected for: SAM lists this method for qualitative analysis of americium-241 in concrete or brick building materials. Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  12. Draft Environmental Impact Statement: Peacekeeper Rail Garrison Program

    DTIC Science & Technology

    1988-06-01

    2-13 3.0 ENVIRONMENTAL ANALYSIS METHODS ................................ 3-1 3.1 Methods for Assessing Nationwide Impacts...3-2 3.1.1 Methods for Assessing National Economic Impacts ........... 3-2 3.1.2 Methods for Assessing Railroad Network...3.2.4 Methods for Assessing Existing and Future Baseline Conditions .......................................... 3-6 3.2.5 Methods for Assessing

  13. A Comparative Investigation of the Efficiency of Two Classroom Observational Methods.

    ERIC Educational Resources Information Center

    Kissel, Mary Ann

    The problem of this study was to determine whether Method A is a more efficient observational method for obtaining activity type behaviors in an individualized classroom than Method B. Method A requires the observer to record the activities of the entire class at given intervals while Method B requires only the activities of selected individuals…

  14. Improved methods of vibration analysis of pretwisted, airfoil blades

    NASA Technical Reports Server (NTRS)

    Subrahmanyam, K. B.; Kaza, K. R. V.

    1984-01-01

    Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.

  15. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    PubMed

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
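    The complex-acoustic-intensity idea that this method builds on can be sketched briefly: for a collocated pressure/particle-velocity sensor, the time-averaged active intensity components point along the arrival direction, so azimuth follows from an arctangent. The function name and plane-wave signal model below are our own illustration, not the paper's implementation.

```python
import math

def azimuth_from_avs(p, vx, vy):
    """Estimate source azimuth from collocated pressure (p) and particle
    velocity (vx, vy) time series via time-averaged acoustic intensity.
    Returns the azimuth in radians, measured from the x axis."""
    n = len(p)
    Ix = sum(p[i] * vx[i] for i in range(n)) / n  # x component of active intensity
    Iy = sum(p[i] * vy[i] for i in range(n)) / n  # y component of active intensity
    return math.atan2(Iy, Ix)

# Toy plane wave arriving from 30 degrees: velocity is aligned with the
# arrival direction and in phase with pressure.
theta = math.radians(30.0)
t = [0.001 * k for k in range(1000)]
p = [math.cos(2 * math.pi * 50 * tk) for tk in t]
vx = [math.cos(theta) * pk for pk in p]
vy = [math.sin(theta) * pk for pk in p]
est = math.degrees(azimuth_from_avs(p, vx, vy))
```

    For this noiseless model the estimate recovers the 30° bearing; with noise, the averaging over the record is what stabilizes the estimate.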

  16. Comparison of Instream and Laboratory Methods of Measuring Sediment Oxygen Demand

    USGS Publications Warehouse

    Hall, Dennis C.; Berkas, Wayne R.

    1988-01-01

    Sediment oxygen demand (SOD) was determined at three sites in a gravel-bottomed central Missouri stream by: (1) two variations of an instream method, and (2) a laboratory method. SOD generally was greatest by the instream methods, which are considered more accurate, and least by the laboratory method. Disturbing stream sediment did not significantly decrease SOD by the instream method. Temperature ranges of up to 12 degree Celsius had no significant effect on the SOD. In the gravel-bottomed stream, the placement of chambers was critical to obtain reliable measurements. SOD rates were dependent on the method; therefore, care should be taken in comparing SOD data obtained by different methods. There is a need for a carefully researched standardized method for SOD determinations.

  17. Echo movement and evolution from real-time processing.

    NASA Technical Reports Server (NTRS)

    Schaffner, M. R.

    1972-01-01

    Preliminary experimental data on the effectiveness of conventional radars in measuring the movement and evolution of meteorological echoes when the radar is connected to a programmable real-time processor are examined. In the processor, programming is accomplished by conceiving abstract machines, which constitute the actual programs used in the methods employed. An analysis of these methods, such as the center-of-gravity method, the contour-displacement method, the method of slope, the cross-section method, the contour cross-correlation method, the method of echo evolution at each point, and three-dimensional measurements, shows that the motions deduced from them may differ notably (since each method determines different quantities), but the plurality of measurements may give additional information on the characteristics of the precipitation.
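    The center-of-gravity method mentioned above can be sketched simply: track the reflectivity-weighted centroid of the echo field between two scans and divide the displacement by the scan interval. The function names and the toy grid below are our own illustration, not the paper's processor programs.

```python
def center_of_gravity(grid):
    """Reflectivity-weighted centroid (row, col) of a 2-D echo field."""
    total = rsum = csum = 0.0
    for i, row in enumerate(grid):
        for j, z in enumerate(row):
            total += z
            rsum += i * z
            csum += j * z
    return rsum / total, csum / total

def echo_velocity(frame_a, frame_b, dt):
    """Echo motion estimate: centroid displacement between scans over dt."""
    ra, ca = center_of_gravity(frame_a)
    rb, cb = center_of_gravity(frame_b)
    return (rb - ra) / dt, (cb - ca) / dt

# A small echo that moves one cell to the right between scans 60 s apart.
a = [[0, 0, 0, 0], [0, 5, 0, 0], [0, 0, 0, 0]]
b = [[0, 0, 0, 0], [0, 0, 5, 0], [0, 0, 0, 0]]
v = echo_velocity(a, b, 60.0)
```

    As the abstract notes, a contour-displacement or cross-correlation method applied to the same frames can yield a different motion, since each method measures a different quantity.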

  18. Comparison of methods for measuring cholinesterase inhibition by carbamates

    PubMed Central

    Wilhelm, K.; Vandekar, M.; Reiner, E.

    1973-01-01

    The Acholest and tintometric methods are used widely for measuring blood cholinesterase activity after exposure to organophosphorus compounds. However, if applied for measuring blood cholinesterase activity in persons exposed to carbamates, the accuracy of the methods requires verification since carbamylated cholinesterases are unstable. The spectrophotometric method was used as a reference method and the two field methods were employed under controlled conditions. Human blood cholinesterases were inhibited in vitro by four methylcarbamates that are used as insecticides. When plasma cholinesterase activity was measured by the Acholest and spectrophotometric methods, no difference was found. The enzyme activity in whole blood determined by the tintometric method was ≤ 11% higher than when the same sample was measured by the spectrophotometric method. PMID:4541147

  19. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
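    As a baseline for what the AMV method improves on, the mean-based, first-order second-moment approximation can be sketched: expand the performance function about the mean inputs and propagate variances through the gradient. This is a generic sketch of that baseline (function names ours), not the paper's AMV algorithm.

```python
def fosm(g, mu, sigma, h=1e-6):
    """Mean-based, first-order second-moment approximation for independent
    inputs: mean ~ g(mu); var ~ sum_i (dg/dx_i at mu)^2 * sigma_i^2."""
    g0 = g(mu)
    var = 0.0
    for i in range(len(mu)):
        x = list(mu)
        x[i] += h
        dg = (g(x) - g0) / h  # forward-difference gradient at the mean
        var += (dg * sigma[i]) ** 2
    return g0, var ** 0.5

# For a linear performance function g = 3*x1 - 2*x2 the approximation is exact:
# mean = -1.0, std = sqrt((3*0.1)^2 + (2*0.2)^2) = 0.5.
mean, std = fosm(lambda x: 3 * x[0] - 2 * x[1], [1.0, 2.0], [0.1, 0.2])
```

    The limitation the paper addresses is visible here: only two moments come out, so the shape of the response distribution is lost for nonlinear or non-monotonic g.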

  20. Formal methods technology transfer: Some lessons learned

    NASA Technical Reports Server (NTRS)

    Hamilton, David

    1992-01-01

    IBM has a long history in the application of formal methods to software development and verification. There have been many successes in the development of methods, tools and training to support formal methods. And formal methods have been very successful on several projects. However, the use of formal methods has not been as widespread as hoped. This presentation summarizes several approaches that have been taken to encourage more widespread use of formal methods, and discusses the results so far. The basic problem is one of technology transfer, which is a very difficult problem. It is even more difficult for formal methods. General problems of technology transfer, especially the transfer of formal methods technology, are also discussed. Finally, some prospects for the future are mentioned.

  1. Method Development in Forensic Toxicology.

    PubMed

    Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona

    2017-01-01

    In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  2. Implementation of Leak Test Methods for the International Space Station (ISS) Elements, Systems and Components

    NASA Technical Reports Server (NTRS)

    Underwood, Steve; Lvovsky, Oleg

    2007-01-01

    The International Space Station (ISS) has a Qualification and Acceptance Environmental Test Requirements document, SSP 41172, that includes many environmental tests such as Thermal Vacuum & Cycling, Depress/Repress, Sinusoidal, Random, and Acoustic Vibration, Pyro Shock, Acceleration, Humidity, Pressure, Electromagnetic Interference (EMI)/Electromagnetic Compatibility (EMC), etc. This document also includes thirteen (13) leak test methods for Pressure Integrity Verification of the ISS Elements, Systems, and Components. These leak test methods are well known; however, the test procedure for a specific leak test method shall be written and implemented paying attention to the important procedural steps/details that, if omitted or deviated from, could impact the quality of the final product and affect crew safety. Such procedural steps/details for the different methods include, but are not limited to: - Sequence of testing, for example, pressurization and submersion steps for Method I (Immersion); - Stabilization of the mass spectrometer leak detector outputs for Method II (Vacuum Chamber or Bell Jar); - Proper data processing and taking a conservative approach while making predictions for on-orbit leakage rate for Method III (Pressure Change); - Proper calibration of the mass spectrometer leak detector for all the tracer gas (mostly helium) methods such as Method V (Detector Probe), Method VI (Hood), Method VII (Tracer Probe), and Method VIII (Accumulation); - Usage of visibility aids for Method I (Immersion), Method IV (Chemical Indicator), Method XII (Foam/Liquid Application), and Method XIII (Hydrostatic/Visual Inspection). While some methods can be used for total leakage (either internal-to-external or external-to-internal) rate requirement verification (Vacuum Chamber, Pressure Decay, Hood, Accumulation), other methods shall be used only as a pass/fail test for individual joints (e.g., welds, fittings, and plugs) or for troubleshooting purposes (Chemical Indicator, Detector Probe, Tracer Probe, Local Vacuum Chamber, Foam/Liquid Application, and Hydrostatic/Visual Inspection). Any violation of SSP 41172 requirements has led to either retesting of hardware or accepting a risk associated with a potential system or component pressure integrity problem during flight.

  3. Temperature profiles of different cooling methods in porcine pancreas procurement.

    PubMed

    Weegman, Bradley P; Suszynski, Thomas M; Scott, William E; Ferrer Fábrega, Joana; Avgoustiniatos, Efstathios S; Anazawa, Takayuki; O'Brien, Timothy D; Rizzari, Michael D; Karatzas, Theodore; Jie, Tun; Sutherland, David E R; Hering, Bernhard J; Papas, Klearchos K

    2014-01-01

    Porcine islet xenotransplantation is a promising alternative to human islet allotransplantation. Porcine pancreas cooling needs to be optimized to reduce the warm ischemia time (WIT) following donation after cardiac death, which is associated with poorer islet isolation outcomes. This study examines the effect of four different cooling methods on core porcine pancreas temperature (n = 24) and histopathology (n = 16). All methods involved surface cooling with crushed ice and chilled irrigation. Method A, which is the standard for porcine pancreas procurement, used only surface cooling. Method B involved an intravascular flush with cold solution through the pancreas arterial system. Method C involved an intraductal infusion with cold solution through the major pancreatic duct, and Method D combined all three cooling approaches. Surface cooling alone (Method A) gradually decreased core pancreas temperature to <10 °C after 30 min. Using an intravascular flush (Method B) improved cooling during the entire duration of procurement, but incorporating an intraductal infusion (Method C) rapidly reduced core temperature by 15-20 °C within the first 2 min of cooling. Combining all methods (Method D) was the most effective at rapidly reducing temperature and providing sustained cooling throughout the duration of procurement, although the recorded WIT was not different between methods (P = 0.36). Histological scores differed between the cooling methods (P = 0.02) and were worst with Method A. There were differences in histological scores between Methods A and C (P = 0.02) and Methods A and D (P = 0.02), but not between Methods C and D (P = 0.95), which may highlight the importance of early cooling using an intraductal infusion. In conclusion, surface cooling alone cannot rapidly cool large (porcine or human) pancreata. Additional cooling with an intravascular flush and intraductal infusion results in improved core porcine pancreas temperature profiles and histopathology scores during procurement. These data may also have implications for human pancreas procurement, as use of an intraductal infusion is not common practice. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. The PneuCarriage Project: A Multi-Centre Comparative Study to Identify the Best Serotyping Methods for Examining Pneumococcal Carriage in Vaccine Evaluation Studies

    PubMed Central

    Satzke, Catherine; Dunne, Eileen M.; Porter, Barbara D.; Klugman, Keith P.; Mulholland, E. Kim

    2015-01-01

    Background The pneumococcus is a diverse pathogen whose primary niche is the nasopharynx. Over 90 different serotypes exist, and nasopharyngeal carriage of multiple serotypes is common. Understanding pneumococcal carriage is essential for evaluating the impact of pneumococcal vaccines. Traditional serotyping methods are cumbersome and insufficient for detecting multiple serotype carriage, and there are few data comparing the new methods that have been developed over the past decade. We established the PneuCarriage project, a large, international multi-centre study dedicated to the identification of the best pneumococcal serotyping methods for carriage studies. Methods and Findings Reference sample sets were distributed to 15 research groups for blinded testing. Twenty pneumococcal serotyping methods were used to test 81 laboratory-prepared (spiked) samples. The five top-performing methods were used to test 260 nasopharyngeal (field) samples collected from children in six high-burden countries. Sensitivity and positive predictive value (PPV) were determined for the test methods and the reference method (traditional serotyping of >100 colonies from each sample). For the alternate serotyping methods, the overall sensitivity ranged from 1% to 99% (reference method 98%), and PPV from 8% to 100% (reference method 100%), when testing the spiked samples. Fifteen methods had ≥70% sensitivity to detect the dominant (major) serotype, whilst only eight methods had ≥70% sensitivity to detect minor serotypes. For the field samples, the overall sensitivity ranged from 74.2% to 95.8% (reference method 93.8%), and PPV from 82.2% to 96.4% (reference method 99.6%). The microarray had the highest sensitivity (95.8%) and high PPV (93.7%). The major limitation of this study is that not all of the available alternative serotyping methods were included. 
Conclusions Most methods were able to detect the dominant serotype in a sample, but many performed poorly in detecting the minor serotype populations. Microarray with a culture amplification step was the top-performing method. Results from this comprehensive evaluation will inform future vaccine evaluation and impact studies, particularly in low-income settings, where pneumococcal disease burden remains high. PMID:26575033
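    The sensitivity and positive predictive value figures quoted above follow from standard count ratios over the reference-method calls. A minimal sketch with hypothetical counts (not the study's data):

```python
def sensitivity_ppv(true_pos, false_neg, false_pos):
    """Sensitivity = TP/(TP+FN); positive predictive value = TP/(TP+FP)."""
    sens = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return sens, ppv

# Hypothetical serotype calls for one method against the reference:
# 90 serotypes correctly detected, 10 missed, 6 falsely reported.
sens, ppv = sensitivity_ppv(true_pos=90, false_neg=10, false_pos=6)
```

    In the study's setting, sensitivity is computed separately for major and minor serotypes, which is why a method can score well overall yet miss minor serotype populations.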

  5. [The clinical value of sentinel lymph node detection in laryngeal and hypopharyngeal carcinoma patients with clinically negative neck by methylene blue method and radiolabeled tracer method].

    PubMed

    Zhao, Xin; Xiao, Dajiang; Ni, Jianming; Zhu, Guochen; Yuan, Yuan; Xu, Ting; Zhang, Yongsheng

    2014-11-01

    To investigate the clinical value of sentinel lymph node (SLN) detection in laryngeal and hypopharyngeal carcinoma patients with clinically negative necks (cN0) by the methylene blue method, the radiolabeled tracer method, and the combination of the two methods. Thirty-three patients with cN0 laryngeal carcinoma and six patients with cN0 hypopharyngeal carcinoma underwent SLN detection using both the methylene blue and radiolabeled tracer methods. All patients received an injection of the radioactive isotope 99Tc(m)-sulfur colloid (SC) and methylene blue into the carcinoma before surgery; all then underwent intraoperative lymphatic mapping with a handheld gamma-detecting probe and identification of blue-dyed SLNs. After the mapping of SLNs, selective neck dissections and tumor resections were performed. The results of SLN detection by the radiolabeled tracer, dye, and combined methods were compared. The detection rates of SLNs by the radiolabeled tracer, methylene blue, and combined methods were 89.7%, 79.5%, and 92.3% respectively. The number of detected SLNs was significantly different between the radiolabeled tracer method and the combined method, and also between the methylene blue method and the combined method. The detection rates of the methylene blue and radiolabeled tracer methods were significantly different from that of the combined method (P < 0.05). Nine patients were found to have lymph node metastasis on final pathological examination. The accuracy and false-negative rate of SLN detection with the combined method were 97.2% and 11.1%. The combined method using radiolabeled tracer and methylene blue can improve the detection rate and accuracy of sentinel lymph node detection. Furthermore, sentinel lymph node detection can accurately represent the cervical lymph node status in cN0 laryngeal and hypopharyngeal carcinoma.

  6. Slump sitting X-ray of the lumbar spine is superior to the conventional flexion view in assessing lumbar spine instability.

    PubMed

    Hey, Hwee Weng Dennis; Lau, Eugene Tze-Chun; Lim, Joel-Louis; Choong, Denise Ai-Wen; Tan, Chuen-Seng; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2017-03-01

    Flexion radiographs have been used to identify cases of spinal instability. However, current methods are not standardized and are not sufficiently sensitive or specific to identify instability. This study aimed to introduce a new slump sitting method for performing lumbar spine flexion radiographs and to compare the angular ranges of motion (ROMs) and displacements between the conventional method and this new method. This was a prospective study on radiological evaluation of lumbar spine flexion ROMs and displacements using dynamic radiographs. Sixty patients were recruited from a single tertiary spine center. Angular and displacement measurements of lumbar spine flexion were carried out. Participants were randomly allocated into two groups: those who did the new method first, followed by the conventional method, versus those who did the conventional method first, followed by the new method. A comparison of the angular and displacement measurements of lumbar spine flexion between the conventional method and the new method was performed and tested for superiority and non-inferiority. Measurements of global lumbar angular ROM were, on average, 17.3° larger (p<.0001) using the new slump sitting method compared with the conventional method. The differences were most significant at the levels of L3-L4, L4-L5, and L5-S1 (p<.0001, p<.0001, and p=.001, respectively). There was no significant difference between the methods when measuring lumbar displacements (p=.814). The new slump sitting dynamic radiograph method was shown to be superior to the conventional method in measuring angular ROM and non-inferior in the measurement of displacement. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Searching for transcription factor binding sites in vector spaces

    PubMed Central

    2012-01-01

    Background Computational approaches to transcription factor binding site identification have been actively researched in the past decade. Learning from known binding sites, new binding sites of a transcription factor in unannotated sequences can be identified. A number of search methods have been introduced over the years. However, one can rarely find one single method that performs the best on all the transcription factors. Instead, to identify the best method for a particular transcription factor, one usually has to compare a handful of methods. Hence, it is highly desirable for a method to perform automatic optimization for individual transcription factors. Results We proposed to search for transcription factor binding sites in vector spaces. This framework allows us to identify the best method for each individual transcription factor. We further introduced two novel methods, the negative-to-positive vector (NPV) and optimal discriminating vector (ODV) methods, to construct query vectors to search for binding sites in vector spaces. Extensive cross-validation experiments showed that the proposed methods significantly outperformed the ungapped likelihood under positional background method, a state-of-the-art method, and the widely-used position-specific scoring matrix method. We further demonstrated that motif subtypes of a TF can be readily identified in this framework and two variants called the k NPV and k ODV methods benefited significantly from motif subtype identification. Finally, independent validation on ChIP-seq data showed that the ODV and NPV methods significantly outperformed the other compared methods. Conclusions We conclude that the proposed framework is highly flexible. It enables the two novel methods to automatically identify a TF-specific subspace to search for binding sites. Implementations are available as source code at: http://biogrid.engr.uconn.edu/tfbs_search/. PMID:23244338
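    The vector-space idea can be illustrated with a toy sketch: embed sequences as k-mer frequency vectors, form a query as the difference between the mean positive and mean negative vectors (in the spirit of the NPV method, though the paper's exact construction may differ), and rank candidates by dot product. All sequences and names below are invented for illustration.

```python
from itertools import product

def kmer_vector(seq, k=2):
    """Map a DNA sequence to a normalized k-mer frequency vector."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    n = max(len(seq) - k + 1, 1)
    return [counts[km] / n for km in kmers]

def npv_query(positives, negatives):
    """NPV-style query: mean positive vector minus mean negative vector."""
    dim = len(positives[0])
    mp = [sum(v[i] for v in positives) / len(positives) for i in range(dim)]
    mn = [sum(v[i] for v in negatives) / len(negatives) for i in range(dim)]
    return [a - b for a, b in zip(mp, mn)]

def score(query, v):
    """Rank a candidate by its projection onto the query vector."""
    return sum(q * x for q, x in zip(query, v))

pos = [kmer_vector(s) for s in ["ACGTAC", "ACGTTT"]]  # toy known binding sites
neg = [kmer_vector(s) for s in ["GGGGGG", "GGGCGG"]]  # toy background
q = npv_query(pos, neg)
# A candidate sharing the ACGT core should outscore a GC-rich background window.
hit = score(q, kmer_vector("AACGTA")) > score(q, kmer_vector("GGGGCG"))
```

    The framework's appeal, per the abstract, is that the embedding and query construction can be tuned per transcription factor rather than fixed globally.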

  8. [Social aspects of natural methods (author's transl)].

    PubMed

    Linhard, J

    1981-01-01

    It is rather difficult to distinguish between "natural methods" and "non-natural" or "unnatural" methods. "Natural methods" should therefore be defined as those which are used without any additional product. Use and success depend on the motivation and control of the couple. These methods are: postcoital douching, prolonged lactation, the rhythm method according to Knaus or to Ogino by observing BBT, observation of cervical mucus according to Billings, coitus interruptus, and coitus reservatus. As far as we know, these methods have been used since primeval times and have been commented on during different periods and at different places, being used with the support of all three monotheistic religions until the era of Augustinus and Thomas of Aquinas. From then on the Christian, and later the Catholic, faith saw human procreation as the purpose of matrimony and therefore banned all methods with the exception of the rhythm method. It has been assumed that the decrease of fertility in Europe since the industrial revolution was a result of using these methods, primarily coitus interruptus, which still seems to be widespread. It is therefore unintelligible why so little is known about the impact of these methods in the medical and social sectors. As long as the ideal method is not available, the natural methods should be given a place in the development of a contraceptive methodology. Since the natural methods do not cost anything, they could help to carry forward family planning in countries with low-income populations. But before employing them for this purpose they have to be studied in view of their medico-biological as well as their social aspects, in order to learn more about these old and much used methods. (Author's)

  9. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, Norwood B.; Walker, J.F.

    1992-01-01

    Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of the results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
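    The discharge-ratio method named above is commonly applied by interpolating, between field visits, the ratio of measured ice-affected discharge to the open-water rating, then scaling the rating record by that ratio. A hedged sketch (function names and toy numbers are ours, not the report's procedure):

```python
def discharge_ratio_estimate(t, meas, rating):
    """Estimate ice-affected discharge at time t by linearly interpolating
    the ratio (measured Q / open-water rating Q) between field visits.
    meas: list of (time, measured_Q); rating: callable time -> open-water Q."""
    ratios = [(tm, q_meas / rating(tm)) for tm, q_meas in meas]
    for (t0, r0), (t1, r1) in zip(ratios, ratios[1:]):
        if t0 <= t <= t1:
            r = r0 + (r1 - r0) * (t - t0) / (t1 - t0)  # interpolated ratio
            return r * rating(t)
    raise ValueError("time outside the measured ice-affected period")

def open_water_rating(t):
    return 100.0  # constant open-water rating, for the sketch only

# Visits at t=0 (80 cfs measured) and t=10 days (60 cfs): midway the
# interpolated ratio is 0.7, so the estimate is 70.
q = discharge_ratio_estimate(5.0, [(0.0, 80.0), (10.0, 60.0)], open_water_rating)
```

    The subjectivity the report classifies this method under lies in choosing and adjusting the ratio curve between visits, which the linear interpolation here only approximates.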

  10. Fatigue properties of JIS H3300 C1220 copper for strain life prediction

    NASA Astrophysics Data System (ADS)

    Harun, Muhammad Faiz; Mohammad, Roslina

    2018-05-01

    The existing methods for estimating strain-life parameters are dependent on the material's monotonic tensile properties. However, a few of these methods yield quite complicated expressions for calculating fatigue parameters, and are specific to certain groups of materials only. The Universal Slopes method, Modified Universal Slopes method, Uniform Material Law, the Hardness method, and the Medians method are a few existing methods for predicting strain-life fatigue based on monotonic tensile material properties and hardness. In the present study, nine methods for estimating fatigue life and properties are applied to JIS H3300 C1220 copper to determine the best methods for strain-life estimation of this ductile material. Experimental strain-life curves are compared to estimations obtained using each method. Muralidharan-Manson's Modified Universal Slopes method and Bäumel-Seeger's method for unalloyed and low-alloy steels are found to yield better accuracy in estimating fatigue life, with a deviation of less than 25%. However, both methods yield much better accuracy only below 1000 cycles, i.e. for strain amplitudes of more than 1% and less than 6%. Manson's original Universal Slopes method and Ong's Modified Four-Point Correlation method are found to predict the strain-life fatigue of copper with better accuracy for high numbers of cycles, at strain amplitudes of less than 1%. The differences between mechanical behavior during monotonic and cyclic loading, and the complexity of determining the coefficients in an equation, are probably the reasons for the lack of a reliable method for estimating fatigue behavior from the monotonic properties of a group of materials. It is therefore suggested that a different approach and new expressions be developed to estimate the strain-life fatigue parameters for ductile materials such as copper.
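    Manson's original Universal Slopes method referenced above is often stated as an elastic term plus a plastic term with fixed exponents of -0.12 and -0.6 on cycles to failure. A sketch using that common form, with illustrative (not measured) copper-like properties:

```python
def universal_slopes(Nf, sigma_u, E, eps_f):
    """Manson's original Universal Slopes estimate of total strain range
    (one common statement): 3.5*(sigma_u/E)*Nf**-0.12 + eps_f**0.6 * Nf**-0.6.
    sigma_u: ultimate tensile strength [Pa]; E: Young's modulus [Pa];
    eps_f: true fracture ductility; Nf: cycles to failure."""
    elastic = 3.5 * (sigma_u / E) * Nf ** -0.12
    plastic = eps_f ** 0.6 * Nf ** -0.6
    return elastic + plastic

# Illustrative copper-like inputs (hypothetical, not the paper's data):
# sigma_u = 250 MPa, E = 115 GPa, eps_f = 0.5, at 10^4 cycles.
d = universal_slopes(Nf=1.0e4, sigma_u=250.0e6, E=115.0e9, eps_f=0.5)
```

    The plastic term dominates at short lives and the elastic term at long lives, which matches the abstract's finding that different estimation methods win in different cycle regimes.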

  11. Innovative application of the moisture analyzer for determination of dry mass content of processed cheese

    NASA Astrophysics Data System (ADS)

    Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena

    2018-04-01

    The aim of this work was the presentation of an alternative method for determination of the total dry mass content of processed cheese. The authors claim that the presented method can be used in industrial quality control laboratories for routine testing and for quick in-process control. For the test purposes, both the reference method of determination of dry mass in processed cheese and the moisture analyzer method were used. The tests were carried out on three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at a temperature of 102 °C for about 4 h. The moisture analyzer test required method validation with regard to the drying temperature range and the mass of the analyzed sample. An optimum drying temperature of 110 °C was determined experimentally. For the Hochland cream processed cheese sample, the total dry mass content obtained using the reference method was 38.92%, whereas using the moisture analyzer method it was 38.74%. The average analysis time for the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37%, and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic, the reference method gave a value of 36.88%, and the alternative method 37.02%. The average time of those determinations was 16 min. The results obtained confirmed that use of the moisture analyzer is effective: compliant values of dry mass content were obtained with both methods. According to the authors, the fact that the measurement takes incomparably less time with the moisture analyzer method is a key criterion for selecting it for in-process control and final quality control.
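    The dry mass figures above are simple mass ratios of the dried sample to the initial sample. A minimal sketch (the sample masses below are hypothetical, chosen only to reproduce a figure like the 38.92% reference result):

```python
def dry_mass_percent(m_initial, m_dry):
    """Total dry mass content as a percentage of the initial sample mass."""
    return 100.0 * m_dry / m_initial

# Hypothetical weigh-in: a 5.000 g cheese sample drying to a constant
# 1.946 g residue corresponds to a 38.92% dry mass content.
pct = dry_mass_percent(5.000, 1.946)
```

    A moisture analyzer automates exactly this ratio by weighing continuously while drying, which is why the determination finishes in minutes rather than hours.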

  12. Alternative microbial methods: An overview and selection criteria.

    PubMed

    Jasson, Vicky; Jacxsens, Liesbeth; Luning, Pieternel; Rajkovic, Andreja; Uyttendaele, Mieke

    2010-09-01

    This study provides an overview of, and criteria for, the selection of a method other than the reference method for microbial analysis of foods. In the first part, an overview of the general characteristics of available rapid methods, both for enumeration and detection, is given with reference to the relevant bibliography. Perspectives on future development and the potential of rapid methods for routine application in food diagnostics are discussed. As various alternative "rapid" methods in different formats are available on the market, it can be very difficult for a food business operator or a control authority to select the most appropriate method for its purpose. Validation of a method by a third party, according to an internationally accepted protocol based upon ISO 16140, may increase confidence in the performance of a method. A list of currently validated methods for enumeration of both utility indicators (aerobic plate count) and hygiene indicators (Enterobacteriaceae, Escherichia coli, coagulase-positive Staphylococcus), as well as for detection of the four major pathogens (Salmonella spp., Listeria monocytogenes, E. coli O157 and Campylobacter spp.), is included, with reference to relevant websites to check for updates. In the second part of this study, selection criteria are introduced to underpin the choice of the appropriate method(s) for a defined application. The selection criteria link the definition of the context in which the user of the method functions, and thus the prospective use of the microbial test results, with the technical information on the method and its operational requirements and sustainability. The selection criteria can help the end user of the method to obtain a systematic insight into all relevant factors to be taken into account when selecting a method for microbial analysis. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. Study on ABO and RhD blood grouping: Comparison between conventional tile method and a new solid phase method (InTec Blood Grouping Test Kit).

    PubMed

    Yousuf, R; Abdul Ghani, S A; Abdul Khalid, N; Leong, C F

    2018-04-01

    The 'InTec Blood Grouping Test Kit', which uses solid-phase technology, is a new method that may be used at outdoor blood donation sites or at the bedside as an alternative to the conventional tile method, in view of its stability at room temperature and its fulfilment of the criteria for a point-of-care test. This study aimed to compare the efficiency of this solid-phase method (InTec Blood Grouping Test Kit) with the conventional tile method in determining the ABO and RhD blood groups of healthy donors. A total of 760 voluntary donors who attended the Blood Bank, Penang Hospital, or offsite blood donation campaigns from April to May 2014 were recruited. The ABO and RhD blood groups were determined by the conventional tile method and the solid-phase method, with the tube method used as the gold standard. For ABO blood grouping, the tile method showed 100% concordance with the gold-standard tube method, whereas the solid-phase method showed concordant results for only 754/760 samples (99.2%). Therefore, for ABO grouping, the tile method has 100% sensitivity and specificity, while the solid-phase method has a slightly lower sensitivity of 97.7%; both have a specificity of 100%. For RhD grouping, the tile and solid-phase methods each grouped one RhD-positive specimen as negative, giving both methods a sensitivity of 99.9% and a specificity of 100%. The 'InTec Blood Grouping Test Kit' is suitable for offsite usage because of its simplicity and user-friendliness. However, the addition of an internal quality control may further improve the sensitivity and the validity of the test results.

  14. Knowledge, beliefs and use of nursing methods in preventing pressure sores in Dutch hospitals.

    PubMed

    Halfens, R J; Eggink, M

    1995-02-01

    Different methods have been developed in the past to prevent patients from developing pressure sores. The consensus guidelines developed in the Netherlands distinguish between preventive methods useful for all patients, methods useful only in individual cases, and methods which are not useful at all. This study explores the extent of use of the different methods within Dutch hospitals, and the knowledge and beliefs of nurses regarding the usefulness of these methods. A mail questionnaire was sent to a representative sample of nurses working within Dutch hospitals. A total of 373 questionnaires were returned and used for the analyses. The results showed that many methods judged by the consensus report as not useful, or useful only in individual cases, are still being used. Some methods which are judged as useful, like the use of a risk assessment scale, are used on only a few wards. The opinions of nurses regarding the usefulness of the methods differ from the guidelines of the consensus committee. Although there is agreement about most of the useful methods, there is less agreement about the methods which are useful in individual cases or not useful at all. In particular, massage and cream are, in the opinion of the nurses, useful in individual or even all cases.

  15. Automatic allograft bone selection through band registration and its application to distal femur.

    PubMed

    Zhang, Yu; Qiu, Lei; Li, Fengzan; Zhang, Qing; Zhang, Li; Niu, Xiaohui

    2017-09-01

    Clinical reports suggest that large bone defects can be effectively restored by allograft bone transplantation, in which allograft bone selection plays an important role. Moreover, there is a strong demand for automatic allograft bone selection methods, as automatic methods could greatly improve the management efficiency of large bone banks. Although several automatic methods have been presented to select the most suitable allograft bone from a massive allograft bone bank, these methods still suffer from inaccuracy. In this paper, we propose an effective allograft bone selection method that does not use the contralateral bones. Firstly, the allograft bone is globally aligned to the recipient bone by surface registration. Then, the global alignment is further refined through band registration. The band, defined as the recipient points within the lifted and lowered cutting planes, involves more of the local structure of the defect segment. Therefore, our method achieves robust alignment and high registration accuracy between the allograft and the recipient. Moreover, the existing contour method and surface method can be unified into one framework under our method by adjusting the lift and lower distances of the cutting planes. Finally, our method has been validated on a database of distal femurs. The experimental results indicate that our method outperforms the surface method and the contour method.

  16. Validation of a questionnaire method for estimating extent of menstrual blood loss in young adult women.

    PubMed

    Heath, A L; Skeaff, C M; Gibson, R S

    1999-04-01

    The objective of this study was to validate two indirect methods for estimating the extent of menstrual blood loss against a reference method, to determine which method would be most appropriate for use in a population of young adult women. Thirty-two women aged 18 to 29 years (mean +/- SD: 22.4 +/- 2.8) were recruited by poster in Dunedin (New Zealand). Data are presented for 29 women. A recall method and a record method for estimating the extent of menstrual loss were validated against a weighed reference method. The Spearman rank correlation coefficient between blood loss assessed by Weighed Menstrual Loss and the Menstrual Record was rs = 0.47 (p = 0.012), and between Weighed Menstrual Loss and the Menstrual Recall it was rs = 0.61 (p = 0.001). The Record method correctly classified 66% of participants into the same tertile, grossly misclassifying 14%. The Recall method correctly classified 59% of participants, grossly misclassifying 7%. Reference-method menstrual loss calculated for surrogate categories demonstrated a significant difference between the second and third tertiles for the Record method, and between the first and third tertiles for the Recall method. The Menstrual Recall method can differentiate between low and high levels of menstrual blood loss in young adult women, is quick to complete and analyse, and has a low participant burden.

  17. A comparative study of novel spectrophotometric methods based on isosbestic points; application on a pharmaceutical ternary mixture

    NASA Astrophysics Data System (ADS)

    Lotfy, Hayam M.; Saleh, Sarah S.; Hassan, Nagiba Y.; Salem, Hesham

    This work presents applications of the isosbestic points present in different absorption spectra. Three novel spectrophotometric methods were developed: the first is the absorption subtraction (AS) method, utilizing the isosbestic point in zero-order absorption spectra; the second is the amplitude modulation (AM) method, utilizing the isosbestic point in ratio spectra; and the third is the amplitude summation (A-Sum) method, utilizing the isosbestic point in derivative spectra. The three methods were applied to the analysis of the ternary mixture of chloramphenicol (CHL), dexamethasone sodium phosphate (DXM) and tetryzoline hydrochloride (TZH) in eye drops in the presence of benzalkonium chloride as a preservative. The components at the isosbestic point were determined using the corresponding unified regression equation at this point, with no need for a complementary method. The obtained results were statistically compared to each other and to those of the developed PLS model. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures and the combined dosage form. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, and no significant difference was observed.

  18. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that, for the immediate needs of the aircraft industry, the semi-empirical method based on recent high-quality acoustic databases is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably, and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and the propagation methods need to develop further for this method to become useful. Nonetheless, the authors propose that methods based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  19. A Reconstructed Discontinuous Galerkin Method for the Compressible Euler Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luquing Luo; Robert Nourgaliev

    2009-06-01

    A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of handily available and yet invaluable information, namely the derivatives, in the context of discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same nice features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy and efficiency. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than its second-order counterpart, and provides an increase in performance over the third-order DG method in terms of computing time and storage requirements.

  20. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data with different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data, under different missing-value mechanisms, from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood using the expectation-maximization (EM) algorithm, the regression method, mean imputation, the deletion method, and Markov chain Monte Carlo (MCMC) were each used to fill in the missing data. The results of the different methods were compared with respect to distribution characteristics, accuracy, and precision. Results: The HIV VL data could not be transformed into a normal distribution. All methods performed well on data that were missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the databases imputed by the different methods were all close to the original one. The EM, regression, mean-imputation, and deletion methods underestimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can be used as a reference for estimating the mean HIV VL in the investigated population.
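    The missing-data mechanisms compared above can be illustrated with a toy example. The sketch below is a generic demonstration, not the study's MCMC procedure: the lognormal "viral load" sample, its parameters, and the missingness rate are all invented. It deletes values completely at random (MCAR) and shows that complete-case analysis and mean imputation then agree on the mean, while mean imputation understates the variance.

```python
import random

# Invented lognormal "viral load" sample (skewed, like real VL data).
random.seed(42)
full = [random.lognormvariate(8.0, 1.0) for _ in range(10_000)]

# MCAR deletion: each value is dropped with probability 0.3, independently
# of its magnitude.
observed = [v for v in full if random.random() > 0.3]

true_mean = sum(full) / len(full)
cc_mean = sum(observed) / len(observed)          # complete-case estimate

# Mean imputation: fill every gap with the observed mean. The completed
# data set keeps the same mean but shrinks the variance.
n_missing = len(full) - len(observed)
imputed = observed + [cc_mean] * n_missing
imp_mean = sum(imputed) / len(imputed)
```

Under MCAR both estimates are unbiased for the population mean; under the other mechanisms simulated in the study (where missingness depends on the value) they would not be.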

  1. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation of the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate numerical solution is needed. There are two popular numerical approaches: Newton's method and quasi-Newton (QN) methods. Newton's method requires considerable computation time because it evaluates the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing the derivative computation with direct function evaluations. The QN approach uses a Hessian-matrix approximation such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method which shares the DFP formula's property of maintaining a positive-definite Hessian approximation. The BFGS method requires a large amount of memory when executing the program, so an algorithm with lower memory usage is needed, namely limited-memory BFGS (LBFGS). The purpose of this research is to assess the efficiency of the LBFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. We found that the BFGS and LBFGS methods have arithmetic-operation counts of O(n^2) and O(nm), respectively.
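    The O(nm) memory and operation count of LBFGS comes from the two-loop recursion, which applies the approximate inverse Hessian using only the m most recent curvature pairs (s_k, y_k) instead of storing an n-by-n matrix. The pure-Python sketch below is a generic illustration on a made-up 2-D quadratic, not the GWOLR estimator; the fixed backtracking line search is a simplification.

```python
# Minimal LBFGS sketch: minimise f(x, y) = (x-1)^2 + 10*(y+2)^2 keeping
# only the last m curvature pairs, so storage is O(n*m), not O(n^2).

def f(v):
    x, y = v
    return (x - 1.0) ** 2 + 10.0 * (y + 2.0) ** 2

def grad(v):
    x, y = v
    return [2.0 * (x - 1.0), 20.0 * (y + 2.0)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def lbfgs(x0, m=5, iters=100, tol=1e-10):
    x = list(x0)
    g = grad(x)
    pairs = []                      # stored (s_k, y_k) curvature pairs
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        # two-loop recursion: q approximates H_k^{-1} g without forming H_k
        q = list(g)
        alphas = []
        for s, yv in reversed(pairs):           # newest pair first
            a = dot(s, q) / dot(yv, s)
            alphas.append(a)
            q = [qi - a * yi for qi, yi in zip(q, yv)]
        if pairs:                               # initial Hessian scaling
            s, yv = pairs[-1]
            gamma = dot(s, yv) / dot(yv, yv)
            q = [gamma * qi for qi in q]
        for (s, yv), a in zip(pairs, reversed(alphas)):  # oldest first
            b = dot(yv, q) / dot(yv, s)
            q = [qi + (a - b) * si for qi, si in zip(q, s)]
        # crude backtracking line search along the descent direction -q
        step = 1.0
        x_new = [xi - step * qi for xi, qi in zip(x, q)]
        while f(x_new) > f(x) and step > 1e-8:
            step *= 0.5
            x_new = [xi - step * qi for xi, qi in zip(x, q)]
        g_new = grad(x_new)
        s = [xn - xi for xn, xi in zip(x_new, x)]
        yv = [gn - gi for gn, gi in zip(g_new, g)]
        if dot(s, yv) > 1e-12:      # keep only the m most recent pairs
            pairs.append((s, yv))
            if len(pairs) > m:
                pairs.pop(0)
        x, g = x_new, g_new
    return x

xmin = lbfgs([0.0, 0.0])            # converges near the minimiser (1, -2)
```

A production implementation would add a Wolfe-condition line search; the structure of the recursion is the point here.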

  2. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of CNC lathe reliability allocation, a comprehensive reliability allocation method based on cubic transformed functions of failure modes and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed functions are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as examples to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.

  3. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose a method for extracting the informative DOM node from a Web page, as preprocessing for Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser; its learning set consists of hundreds of Web pages together with annotations of their informative DOM nodes. Our method does not require large-scale crawling of the whole Web site to which the target page belongs. We design LM to use the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate combinations of an informative-DOM-node extraction method (the proposed method or the existing methods) with existing noise-elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also appear in other pages of the same Web site. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.

  4. Comparative study between the hand-wrist method and cervical vertebral maturation method for evaluating skeletal maturity in cleft patients.

    PubMed

    Manosudprasit, Montian; Wangsrimongkol, Tasanee; Pisek, Poonsak; Chantaramungkorn, Melissa

    2013-09-01

    To test the measure of agreement between the Skeletal Maturation Index (SMI) method of Fishman, using hand-wrist radiographs, and the Cervical Vertebral Maturation Index (CVMI) method for assessing the skeletal maturity of cleft patients. Hand-wrist and lateral cephalometric radiographs of 60 cleft subjects (35 females and 25 males, age range: 7-16 years) were used. Skeletal age was assessed using an adjustment to the SMI method of Fishman and compared with the CVMI method of Hassel and Farman. Agreement between skeletal age assessed by the two methods, and the intra- and inter-examiner reliability of both methods, were tested by weighted kappa analysis. There was good agreement between the two methods, with a kappa value of 0.80 (95% CI = 0.66-0.88, p-value <0.001). Intra- and inter-examiner reliability of both methods was very good, with kappa values ranging from 0.91 to 0.99. The CVMI method can be used as an alternative to the SMI method for skeletal age assessment in cleft patients, with the benefit that no additional radiograph, and hence no extra radiation exposure, is needed. Comparing the two methods, the present study found better agreement from the peak of adolescence onwards.

  5. Computer-aided analysis with Image J for quantitatively assessing psoriatic lesion area.

    PubMed

    Sun, Z; Wang, Y; Ji, S; Wang, K; Zhao, Y

    2015-11-01

    Body surface area is important in determining the severity of psoriasis. However, an objective, reliable, and practical method for this purpose is still needed. We performed computer image analysis (CIA) of psoriatic area using the ImageJ freeware to determine whether this method could be used for objective evaluation of psoriatic area. Fifteen psoriasis patients were randomized to be treated with adalimumab or placebo in a clinical trial. At each visit, the psoriasis area of each body site was estimated by two physicians (E-method), and standard photographs were taken. The psoriasis area in the pictures was assessed with CIA using semi-automatic threshold selection (T-method) or manual selection (M-method, gold standard). The results assessed by the three methods were analyzed, with reliability and affecting factors evaluated. Both the T- and E-methods correlated strongly with the M-method, with the T-method having a slightly stronger correlation. Both the T- and E-methods had good consistency between evaluators. All three methods were able to detect the change in psoriatic area after treatment, while the E-method tended to overestimate it. CIA with the ImageJ freeware is reliable and practicable for quantitatively assessing the lesional area of psoriasis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
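    The core idea behind the T-method, counting pixels whose intensity exceeds a threshold and dividing by the total pixel count, can be sketched on a synthetic image. The grid, intensities, and threshold below are invented for illustration; real use involves colour photographs and interactive threshold selection in ImageJ.

```python
# Hypothetical threshold-based area estimate on a synthetic grayscale
# "photograph": a bright rectangular lesion on a darker background.
WIDTH, HEIGHT = 100, 100
image = [[50] * WIDTH for _ in range(HEIGHT)]   # background intensity 50
for row in range(10, 30):                       # 20 x 30 pixel lesion
    for col in range(40, 70):
        image[row][col] = 200                   # lesion intensity 200

THRESHOLD = 128                                 # chosen between the modes
lesion_pixels = sum(1 for r in image for px in r if px > THRESHOLD)
area_fraction = lesion_pixels / (WIDTH * HEIGHT)   # lesion area fraction
```

For this synthetic image the fraction is exactly 600/10000 = 6% of the pictured surface; the practical difficulty the study addresses is choosing a threshold that separates lesion from skin in real photographs.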

  6. Mixed Methods in CAM Research: A Systematic Review of Studies Published in 2012

    PubMed Central

    Bishop, Felicity L.; Holmes, Michelle M.

    2013-01-01

    Background. Mixed methods research uses qualitative and quantitative methods together in a single study or a series of related studies. Objectives. To review the prevalence and quality of mixed methods studies in complementary medicine. Methods. All studies published in the top 10 integrative and complementary medicine journals in 2012 were screened. The quality of mixed methods studies was appraised using a published tool designed for mixed methods studies. Results. 4% of papers (95 out of 2349) reported mixed methods studies, 80 of which met criteria for applying the quality appraisal tool. The most popular formal mixed methods design was triangulation (used by 74% of studies), followed by embedded (14%), sequential explanatory (8%), and finally sequential exploratory (5%). Quantitative components were generally of higher quality than qualitative components; when quantitative components involved RCTs they were of particularly high quality. Common methodological limitations were identified. Most strikingly, none of the 80 mixed methods studies addressed the philosophical tensions inherent in mixing qualitative and quantitative methods. Conclusions and Implications. The quality of mixed methods research in CAM can be enhanced by addressing philosophical tensions and improving reporting of (a) analytic methods and reflexivity (in qualitative components) and (b) sampling and recruitment-related procedures (in all components). PMID:24454489

  7. Accuracy of tree diameter estimation from terrestrial laser scanning by circle-fitting methods

    NASA Astrophysics Data System (ADS)

    Koreň, Milan; Mokroš, Martin; Bucha, Tomáš

    2017-12-01

    This study compares the accuracies of diameter at breast height (DBH) estimation by three initial (minimum bounding box, centroid, and maximum distance) and two refining (Monte Carlo and optimal circle) circle-fitting methods. The circle-fitting algorithms were evaluated in multi-scan mode and a simulated single-scan mode on 157 European beech trees (Fagus sylvatica L.). DBH measured by a calliper was used as reference data. Most of the studied circle-fitting algorithms significantly underestimated the mean DBH in both scanning modes. Only the Monte Carlo method in single-scan mode significantly overestimated the mean DBH. The centroid method proved to be the least suitable and showed significantly different results from the other circle-fitting methods in both scanning modes. In multi-scan mode, the accuracy of the minimum bounding box method was not significantly different from the accuracies of the refining methods. The accuracy of the maximum distance method was significantly different from the accuracies of the refining methods in both scanning modes. The accuracy of the Monte Carlo method was significantly different from the accuracy of the optimal circle method only in single-scan mode. The optimal circle method proved to be the most accurate circle-fitting method for DBH estimation from point clouds in both scanning modes.
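    As a rough illustration of what such circle-fitting methods compute, the sketch below implements a generic algebraic (Kasa-style) least-squares circle fit. It is not any of the specific initial or refining methods evaluated in the study; the synthetic half-circle of stem points (as one scan position might see) and the 30 cm diameter are invented.

```python
import math

def fit_circle(points):
    """Fit x^2 + y^2 + D*x + E*y + F = 0 by linear least squares (Kasa)."""
    # Build the normal equations A^T A p = A^T b, rows [x, y, 1],
    # right-hand side b = -(x^2 + y^2).
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for x, y in points:
        row = [x, y, 1.0]
        b = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                ata[i][j] += row[i] * row[j]
            atb[i] += row[i] * b
    # Solve the 3x3 system by Gauss-Jordan elimination with pivoting.
    m = [ata[i] + [atb[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(3):
            if r != c:
                fac = m[r][c] / m[c][c]
                m[r] = [a - fac * b for a, b in zip(m[r], m[c])]
    d, e, f_ = (m[i][3] / m[i][i] for i in range(3))
    cx, cy = -d / 2.0, -e / 2.0                  # circle centre
    radius = math.sqrt(cx * cx + cy * cy - f_)   # circle radius
    return cx, cy, radius

# Synthetic stem cross-section: a half-circle of points at breast height.
true_r = 0.15                                    # 30 cm DBH, in metres
pts = [(1.0 + true_r * math.cos(a), 2.0 + true_r * math.sin(a))
       for a in [i * math.pi / 30 for i in range(31)]]
cx, cy, r = fit_circle(pts)
dbh_cm = 2 * r * 100                             # estimated DBH in cm
```

With noiseless points the fit is exact even from the one-sided (single-scan-like) arc; the differences the study measures arise from noise, occlusion, and non-circular stems.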

  8. Chemometric methods for the simultaneous determination of some water-soluble vitamins.

    PubMed

    Mohamed, Abdel-Maaboud I; Mohamed, Horria A; Mohamed, Niveen A; El-Zahery, Marwa R

    2011-01-01

    Two spectrophotometric approaches, derivative and multivariate methods, were applied for the determination of binary, ternary, and quaternary mixtures of the water-soluble vitamins thiamine HCl (I), pyridoxine HCl (II), riboflavin (III), and cyanocobalamin (IV). The first approach comprises the first derivative and first derivative of ratio spectra methods, and the second the classical least squares and principal components regression methods. Both are based on spectrophotometric measurements of the studied vitamins in 0.1 M HCl solution in the range of 200-500 nm for all components. Linear calibration curves were obtained over the range 2.5-90 microg/mL, and the correlation coefficients ranged from 0.9991 to 0.9999. These methods were applied to the analysis of the following mixtures: (I) and (II); (I), (II), and (III); (I), (II), and (IV); and (I), (II), (III), and (IV). The described methods were successfully applied to the determination of vitamin combinations in synthetic mixtures and dosage forms from different manufacturers. The recovery ranged from 96.1 +/- 1.2 to 101.2 +/- 1.0% for the derivative methods and from 97.0 +/- 0.5 to 101.9 +/- 1.3% for the multivariate methods. The results of the developed methods were compared with those of reported methods and showed good accuracy and precision.
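    The classical least squares idea behind such multivariate methods fits in a few lines: by Beer's law, a mixture's absorbance at each wavelength is a linear combination of the pure-component spectra, so concentrations follow from a least-squares solve. The two-component "spectra" and concentrations below are made-up numbers for illustration, not the vitamin data of the study.

```python
# Hypothetical pure-component absorptivities at five wavelengths.
pure = [
    [0.90, 0.60, 0.30, 0.10, 0.05],   # component I
    [0.10, 0.30, 0.55, 0.80, 0.95],   # component II
]
c_true = [2.0, 3.0]                    # concentrations (e.g. microg/mL)

# Simulated mixture spectrum: A_j = sum_i c_i * pure[i][j] (Beer's law).
mixture = [sum(c * p[j] for c, p in zip(c_true, pure)) for j in range(5)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Normal equations (K K^T) c = K A for the two-component system,
# solved by Cramer's rule for this 2x2 case.
g = [[dot(pure[i], pure[j]) for j in range(2)] for i in range(2)]
rhs = [dot(pure[i], mixture) for i in range(2)]
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
c_est = [(rhs[0] * g[1][1] - rhs[1] * g[0][1]) / det,
         (rhs[1] * g[0][0] - rhs[0] * g[1][0]) / det]
```

With noiseless synthetic data the solve recovers the concentrations exactly; with real spectra, overlapping bands and noise make the least-squares formulation (and PCR's dimension reduction) necessary.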

  9. Perceptions of rural women about contraceptive usage in district Khushab, Punjab.

    PubMed

    Tabassum, Aqeela; Manj, Yasir Nawaz; Gunjial, Tahira Rehman; Nazir, Salma

    2016-12-01

    To identify the perceptions of rural women about modern contraceptive methods and to ascertain the psycho-social and economic attitudes of women towards family planning methods. This cross-sectional study was conducted at the University of Sargodha, Sargodha, Pakistan, from December 2014 to March 2015, and comprised married women. The sample was selected using a multistage sampling technique through the Fitzgibbon table. The women were interviewed regarding their use of family planning methods. SPSS 16 was used for data analysis. Of the 500 women, 358 (71.6%) were never-users and 142 (28.4%) were past-users of family planning methods. Moreover, 52 (14.5%) of never-users did not know about a single modern contraceptive method. Of the past-users, 43 (30.3%) knew about 1-3 methods and 99 (69.7%) about 4 or more methods. Furthermore, 153 (30.6%) respondents graded condoms as good, 261 (55.2%) agreed that family planning helped greatly in improving one's standard of living, while 453 (90.6%) indicated that family planning methods were not expensive. Besides, 366 (71.2%) respondents believed that using a contraceptive method caused infertility. Dissatisfaction with methods, method failure, bad experiences with side effects, privacy concerns, and various myths associated with the methods were strongly related to the non-usage of modern contraceptive methods.

  10. Fast polarimetric dehazing method for visibility enhancement in HSI colour space

    NASA Astrophysics Data System (ADS)

    Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin

    2017-09-01

    Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in the hue, saturation and intensity colour space based on the polarimetric imaging technique. We implement polarimetric dehazing in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This method not only preserves the ability to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality, and moreover its implementation is much faster. The proposed method is promising for real-time image and video haze removal applications.

  11. Parameter estimation of Monod model by the Least-Squares method for microalgae Botryococcus Braunii sp

    NASA Astrophysics Data System (ADS)

    See, J. J.; Jamaian, S. S.; Salleh, R. M.; Nor, M. E.; Aman, F.

    2018-04-01

    This research aims to estimate the parameters of the Monod model of microalgae Botryococcus braunii sp. growth by the Least-Squares method. The Monod equation is a non-linear equation which can be transformed into a linear form and solved by the Least-Squares linear regression method. Alternatively, the Gauss-Newton method solves the non-linear Least-Squares problem directly, obtaining the parameter values of the Monod model by minimizing the sum of squared errors (SSE). As a result, the parameters of the Monod model for microalgae Botryococcus braunii sp. can be estimated by the Least-Squares method. However, the parameter values obtained by the non-linear Least-Squares method are more accurate than those of the linear Least-Squares method, since the SSE of the non-linear method is smaller than that of the linear method.
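    The linearization step can be sketched directly: taking reciprocals of the Monod equation mu = mu_max*S/(Ks + S) gives 1/mu = (Ks/mu_max)*(1/S) + 1/mu_max, an ordinary linear regression of 1/mu on 1/S. The substrate and growth-rate values below are synthetic, not the Botryococcus braunii measurements.

```python
# Generate noiseless synthetic data from known Monod parameters, then
# recover them by ordinary least squares on the linearised form.
mu_max_true, ks_true = 0.8, 2.5
substrate = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]              # S
growth = [mu_max_true * s / (ks_true + s) for s in substrate]  # mu(S)

x = [1.0 / s for s in substrate]           # 1/S
y = [1.0 / m for m in growth]              # 1/mu

# Ordinary least-squares line y = slope * x + intercept.
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

mu_max_est = 1.0 / intercept               # intercept = 1/mu_max
ks_est = slope * mu_max_est                # slope = Ks/mu_max
```

With noiseless data the linear fit is exact; with measurement noise, the reciprocal transform distorts the error structure (small mu values dominate), which is exactly why the non-linear SSE minimised by Gauss-Newton gives more accurate estimates.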

  12. A Mixed Prioritization Operators Strategy Using A Single Measurement Criterion For AHP Application Development

    NASA Astrophysics Data System (ADS)

    Yuen, Kevin Kam Fung

    2009-10-01

    The most appropriate prioritization method is still one of the unsettled issues of the Analytic Hierarchy Process, although many studies have been made and applied. Interestingly, many AHP applications apply only Saaty's Eigenvector method, even though many studies have found that this method may produce rank reversals and have proposed various prioritization methods as alternatives. Some methods have been proved to be better than the Eigenvector method; however, these methods seem not to have attracted the attention of researchers. In this paper, eight important prioritization methods are reviewed. A Mixed Prioritization Operators Strategy (MPOS) is developed to select a vector which is prioritized by the most appropriate prioritization operator. To verify this new method, a case study of high school selection is revisited using the proposed method. The contribution is that the MPOS is useful for solving prioritization problems in the AHP.

  13. On Multifunctional Collaborative Methods in Engineering Science

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2001-01-01

    Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented, with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as the most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized.

  14. Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.

    PubMed

    Arora, Naveen Kumar; Verma, Maya

    2017-12-01

    In this study, siderophore production by various bacteria amongst the plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were taken to estimate their siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as a 96-well microplate method. Production of siderophore was estimated in percent siderophore units by both methods. The data obtained by the two methods correlated positively with each other, supporting the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one go, saving time and chemicals; it is also far less tedious and cheaper than the method currently in use. The modified microtiter-plate method proposed here makes it much easier to screen plant-associated bacteria for this plant-growth-promoting character.

  15. What can Numerical Computation do for the History of Science? (Study of an Orbit Drawn by Newton on a Letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Stuchi, Teresa; Cardozo Dias, P.

    2013-05-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically that Hooke's method is a second-order symplectic area-preserving algorithm, and that the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke's method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton's drawing.

  16. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Cardozo Dias, Penha Maria; Stuchi, T. J.

    2013-11-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
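    The contrast the authors draw, a second-order symplectic scheme versus a plain first-order one, is easy to see numerically: symplectic integrators keep the energy error bounded, while a non-symplectic first-order step drifts. The sketch below compares a leapfrog-style symplectic step with explicit Euler under a constant-magnitude attracting central force, as in Newton's drawing; it illustrates the generic behavior and is not a reconstruction of Hooke's actual construction:

```python
import math

def accel(x, y, k=1.0):
    """Constant-magnitude attracting central force: a = -k * r / |r|."""
    r = math.hypot(x, y)
    return -k * x / r, -k * y / r

def energy(x, y, vx, vy, k=1.0):
    # For |F| = k the potential is U = k*|r|, so E = v^2/2 + k*|r|.
    return 0.5 * (vx * vx + vy * vy) + k * math.hypot(x, y)

def leapfrog(state, dt):
    """Kick-drift-kick: second order and symplectic."""
    x, y, vx, vy = state
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay      # half kick
    x += dt * vx; y += dt * vy                    # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay      # half kick
    return x, y, vx, vy

def euler(state, dt):
    """Explicit Euler: first order, not symplectic."""
    x, y, vx, vy = state
    ax, ay = accel(x, y)
    return x + dt * vx, y + dt * vy, vx + dt * ax, vy + dt * ay

s_lf = s_eu = (1.0, 0.0, 0.0, 0.8)   # off-circular initial conditions
e0 = energy(*s_lf)
for _ in range(2000):
    s_lf = leapfrog(s_lf, 0.01)
    s_eu = euler(s_eu, 0.01)
err_lf = abs(energy(*s_lf) - e0)
err_eu = abs(energy(*s_eu) - e0)
print(err_lf, err_eu)  # symplectic error stays bounded; Euler drifts
```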

  17. Adenosine Monophosphate-Based Detection of Bacterial Spores

    NASA Technical Reports Server (NTRS)

    Kern, Roger G.; Chen, Fei; Venkateswaran, Kasthuri; Hattori, Nori; Suzuki, Shigeya

    2009-01-01

    A method of rapid detection of bacterial spores is based on the discovery that a heat shock consisting of exposure to a temperature of 100 °C for 10 minutes causes the complete release of adenosine monophosphate (AMP) from the spores. This method could be an alternative to the method described in the immediately preceding article. Unlike that method and related prior methods, the present method does not involve germination and cultivation; this feature is an important advantage because in cases in which the spores are those of pathogens, delays involved in germination and cultivation could increase risks of infection. Also, in comparison with other prior methods that do not involve germination, the present method affords greater sensitivity. At present, the method is embodied in a laboratory procedure, though it would be desirable to implement the method by means of a miniaturized apparatus in order to make it convenient and economical enough to encourage widespread use.

  18. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  19. A variationally coupled FE-BE method for elasticity and fracture mechanics

    NASA Technical Reports Server (NTRS)

    Lu, Y. Y.; Belytschko, T.; Liu, W. K.

    1991-01-01

    A new method for coupling finite element and boundary element subdomains in elasticity and fracture mechanics problems is described. The essential feature of this new method is that a single variational statement is obtained for the entire domain, and in this process the terms associated with tractions on the interfaces between the subdomains are eliminated. This provides the additional advantage that the ambiguities associated with the matching of discontinuous tractions are circumvented. The method leads to a direct procedure for obtaining the discrete equations for the coupled problem without any intermediate steps. In order to evaluate this method and compare it with previous methods, a patch test for coupled procedures has been devised. Evaluation of this variationally coupled method and other methods, such as stiffness coupling and constraint traction matching coupling, shows that this method is substantially superior. Solutions for a series of fracture mechanics problems are also reported to illustrate the effectiveness of this method.

  20. Comparing strategies to assess multiple behavior change in behavioral intervention studies.

    PubMed

    Drake, Bettina F; Quintiliani, Lisa M; Sapp, Amy L; Li, Yi; Harley, Amy E; Emmons, Karen M; Sorensen, Glorian

    2013-03-01

    Alternatives to individual behavior change methods have been proposed; however, little has been done to investigate how these methods compare. Our aim was to explore four methods that quantify change in multiple risk behaviors, targeting four common behaviors. We utilized data from two cluster-randomized, multiple behavior change trials conducted in two settings: small businesses and health centers. The methods used were: (1) summative; (2) z-score; (3) optimal linear combination; and (4) impact score. In the Small Business study, methods 2 and 3 revealed similar outcomes; however, physical activity did not contribute to method 3. In the Health Centers study, similar results were found with each of the methods. Multivitamin intake contributed significantly more to each of the summary measures than the other behaviors. Selection of methods to assess multiple behavior change in intervention trials must consider the study design and the targeted population when determining the appropriate method(s) to use.
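    Of the four strategies listed, the z-score method is the simplest to state: standardize each behavior's change score across subjects, then average the standardized scores per subject. A minimal sketch of that idea (the behavior names and data are illustrative, not from the trials, and the exact weighting used in the study may differ):

```python
import statistics

def zscore_composite(behavior_changes):
    """behavior_changes: dict mapping behavior name -> list of per-subject
    change scores. Returns one composite score per subject: the mean of
    that subject's standardized (z-scored) changes across behaviors."""
    names = list(behavior_changes)
    n = len(next(iter(behavior_changes.values())))
    z = {}
    for b in names:
        xs = behavior_changes[b]
        mu, sd = statistics.fmean(xs), statistics.stdev(xs)
        z[b] = [(x - mu) / sd for x in xs]   # standardize each behavior
    # composite = mean of the standardized behaviors, per subject
    return [statistics.fmean(z[b][i] for b in names) for i in range(n)]

changes = {
    "fruit_veg_servings": [1.0, 0.0, 2.0, -1.0],
    "physical_activity":  [30.0, 10.0, 0.0, 20.0],
}
print(zscore_composite(changes))
```

    Because each behavior is put on a common (unitless) scale first, servings of vegetables and minutes of activity can be averaged without one dominating by sheer magnitude.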

  1. Fast multipole methods on a cluster of GPUs for the meshless simulation of turbulence

    NASA Astrophysics Data System (ADS)

    Yokota, R.; Narumi, T.; Sakamaki, R.; Kameoka, S.; Obi, S.; Yasuoka, K.

    2009-11-01

    Recent advances in the parallelizability of fast N-body algorithms, and the programmability of graphics processing units (GPUs) have opened a new path for particle based simulations. For the simulation of turbulence, vortex methods can now be considered as an interesting alternative to finite difference and spectral methods. The present study focuses on the efficient implementation of the fast multipole method and pseudo-particle method on a cluster of NVIDIA GeForce 8800 GT GPUs, and applies this to a vortex method calculation of homogeneous isotropic turbulence. The results of the present vortex method agree quantitatively with those of the reference calculation using a spectral method. We achieved a maximum speed of 7.48 TFlops using 64 GPUs, and the cost performance was near 9.4/GFlops. The calculation of the present vortex method on 64 GPUs took 4120 s, while the spectral method on 32 CPUs took 4910 s.

  2. Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.

    PubMed

    Li, Ben; Stenstrom, M K

    2014-01-01

    One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method, as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a solution similar to those of the previously developed Method G and the Engquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
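    The kind of reliable, conservative solver the abstract calls for can be illustrated on a much simpler scalar conservation law. The sketch below applies an Engquist-Osher-type numerical flux to Burgers' equation as a stand-in for a settling-flux PDE (none of the settling-tank physics is included); the first-order conservative update still captures the entropy shock at the correct speed:

```python
import numpy as np

def flux(u):
    # Burgers flux f(u) = u^2/2, standing in for a settling flux.
    return 0.5 * u * u

def eo_numflux(ul, ur):
    """Engquist-Osher numerical flux for the convex flux f(u) = u^2/2:
    F(ul, ur) = f(max(ul, 0)) + f(min(ur, 0))."""
    return flux(np.maximum(ul, 0.0)) + flux(np.minimum(ur, 0.0))

# Riemann problem: u = 1 on the left, u = 0 on the right. The entropy
# solution is a shock moving at speed s = (f(1) - f(0)) / (1 - 0) = 0.5.
nx = 400
dx, dt = 1.0 / nx, 0.4 / nx            # CFL = dt/dx * max|f'| = 0.4
x = (np.arange(nx) + 0.5) * dx         # cell centers
u = np.where(x < 0.25, 1.0, 0.0)
t = 0.0
for _ in range(250):
    F = eo_numflux(u[:-1], u[1:])            # interface fluxes
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])    # conservative update (interior)
    t += dt
shock_pos = x[np.argmin(np.abs(u - 0.5))]    # mid-value tracks the shock
print(t, shock_pos)  # shock should sit near 0.25 + 0.5*t
```

    The conservative form of the update is what guarantees the shock travels at the Rankine-Hugoniot speed, which is the reliability property the abstract is concerned with.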

  3. Simultaneous determination of binary mixture of amlodipine besylate and atenolol based on dual wavelengths

    NASA Astrophysics Data System (ADS)

    Lamie, Nesrine T.

    2015-10-01

    Four accurate, precise, and sensitive spectrophotometric methods are developed for the simultaneous determination of a binary mixture of amlodipine besylate (AM) and atenolol (AT). AM is determined at its λmax of 360 nm (0D), while atenolol can be determined by four different methods. Method (A) is the absorption factor method (AF). Method (B) is the new ratio difference method (RD), which measures the difference in amplitudes between 210 and 226 nm. Method (C) is the novel constant center spectrophotometric method (CC). Method (D) is mean centering of the ratio spectra (MCR) at 284 nm. The methods are tested by analyzing synthetic mixtures of the cited drugs and are applied to their commercial pharmaceutical preparation. The validity of the results is assessed by applying the standard addition technique. The results obtained are found to agree statistically with those obtained by official methods, showing no significant difference with respect to accuracy and precision.

  4. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms; a spatial regression voting method to integrate MSER and SIFT results; a Markov Random Field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to that of the resampling-based approach while avoiding the LUCC distortion incurred by resampling.

  5. Mixed methods research in mental health nursing.

    PubMed

    Kettles, A M; Creswell, J W; Zhang, W

    2011-08-01

    Mixed methods research is becoming more widely used in order to answer research questions and to investigate research problems in mental health and psychiatric nursing. However, two separate literature searches, one in Scotland and one in the USA, revealed that few mental health nursing studies identified mixed methods research in their titles. Many studies used the term 'embedded', but few studies identified in the literature were mixed methods embedded studies. The history, philosophical underpinnings, definition, types of mixed methods research and associated pragmatism are discussed, as well as the need for mixed methods research. Examples of mental health nursing mixed methods research are used to illustrate the different types of mixed methods: convergent parallel, embedded, explanatory and exploratory in their sequential and concurrent combinations. Implementing mixed methods research is also discussed briefly, and the problem of identifying mixed methods research in mental health and psychiatric nursing is discussed, with some possible solutions proposed. © 2011 Blackwell Publishing.

  6. Direct application of Padé approximant for solving nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario

    2014-01-01

    This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme for approximating a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximation method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool to obtain a power series solution for post-treatment with the Padé approximant.
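    The Padé construction the abstract builds on can be shown in miniature: given Taylor coefficients, the [L/M] denominator solves a small linear system and the numerator follows by convolution. A sketch for exp(x), which is not one of the paper's test problems:

```python
import numpy as np
from math import factorial, exp

def pade(coeffs, L, M):
    """[L/M] Padé approximant from Taylor coefficients c_0..c_{L+M}.
    Returns (a, b): numerator and denominator coefficients, with b[0] = 1,
    so that (sum a_k x^k) / (sum b_k x^k) matches the series to order L+M."""
    c = np.asarray(coeffs, dtype=float)
    # Denominator: solve sum_{j=1..M} c_{L+k-j} b_j = -c_{L+k}, k = 1..M
    A = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -c[L + 1: L + M + 1]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator by convolution: a_k = sum_{j=0..min(k,M)} c_{k-j} b_j
    a = np.array([sum(c[k - j] * b[j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

c = [1 / factorial(n) for n in range(5)]   # Taylor coefficients of e^x
a, b = pade(c, 2, 2)
x = 1.0
approx = np.polyval(a[::-1], x) / np.polyval(b[::-1], x)
print(approx, exp(1.0))  # [2/2] Padé of e^x at x = 1 is 19/7, off by ~4e-3
```

    For exp(x) this reproduces the classical [2/2] approximant (1 + x/2 + x²/12)/(1 − x/2 + x²/12), whose rational form extends the region of useful accuracy well beyond the truncated Taylor polynomial of the same data.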

  7. Methods of measuring soil moisture in the field

    USGS Publications Warehouse

    Johnson, A.I.

    1962-01-01

    For centuries, the amount of moisture in the soil has been of interest in agriculture. The subject of soil moisture is also of great importance to the hydrologist, forester, and soils engineer. Much equipment and many methods have been developed to measure soil moisture under field conditions. This report discusses and evaluates the various methods for measurement of soil moisture and describes the equipment needed for each method. The advantages and disadvantages of each method are discussed, and an extensive list of references is provided for those desiring to study the subject in more detail. The gravimetric method is concluded to be the most satisfactory method for most problems requiring one-time moisture-content data. The radioactive method is normally best for obtaining repeated measurements of soil moisture in place. It is concluded that all methods have some limitations and that the ideal method for measurement of soil moisture under field conditions has yet to be perfected.
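    The gravimetric method the report favors reduces to one ratio: moisture mass over oven-dry mass. A minimal sketch (the sample masses are invented; drying to constant weight at about 105 °C is the usual convention for soils):

```python
def gravimetric_water_content(wet_g: float, dry_g: float) -> float:
    """Gravimetric water content on a dry-mass basis, as a percentage:
    100 * (wet - dry) / dry, where dry_g is the mass after oven drying
    the sample to constant weight."""
    if dry_g <= 0:
        raise ValueError("oven-dry mass must be positive")
    return 100.0 * (wet_g - dry_g) / dry_g

# Example: 125 g field-moist sample dries to 100 g -> 25% water content
print(gravimetric_water_content(125.0, 100.0))
```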

  8. Computational methods for internal flows with emphasis on turbomachinery

    NASA Technical Reports Server (NTRS)

    Mcnally, W. D.; Sockol, P. M.

    1981-01-01

    Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. The viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.

  9. Digital signal processing methods for biosequence comparison.

    PubMed Central

    Benson, D C

    1990-01-01

    A method is discussed for DNA or protein sequence comparison using a finite field fast Fourier transform, a digital signal processing technique, and statistical methods are discussed for analyzing the output of this algorithm. This method compares two sequences of length N in computing time proportional to N log N, compared to N² for methods currently used. This makes it feasible to compare very long sequences. An example is given to show that the method correctly identifies sites of known homology. PMID:2349096
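    The N log N claim rests on the convolution theorem: encoding each base as a 0/1 indicator vector turns "count the matches at every alignment offset" into a cross-correlation, which FFTs compute in O(N log N). The sketch below uses ordinary floating-point FFTs rather than the paper's finite-field transform:

```python
import numpy as np

def match_counts(seq1: str, seq2: str) -> np.ndarray:
    """Number of position-wise matches between seq1 and seq2 at every
    alignment offset, via FFT cross-correlation of per-base indicator
    vectors: O(N log N) versus O(N^2) for direct comparison."""
    n = len(seq1) + len(seq2) - 1
    nfft = 1 << (n - 1).bit_length()          # pad to a power of two
    total = np.zeros(n)
    for base in "ACGT":
        a = np.array([c == base for c in seq1], dtype=float)
        b = np.array([c == base for c in seq2], dtype=float)
        # Convolving a with reversed b gives the correlation:
        # result[k] = sum_i a[i] * b[i - k + len(b) - 1]
        fa = np.fft.rfft(a, nfft)
        fb = np.fft.rfft(b[::-1], nfft)
        total += np.fft.irfft(fa * fb, nfft)[:n]
    return np.rint(total).astype(int)

counts = match_counts("ACGTACGT", "ACGT")
print(counts)  # peaks of 4 where the short sequence aligns perfectly
```

    Sites of homology show up as peaks in the correlation; the statistical analysis the abstract mentions then decides which peaks are significant.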

  10. Evaluation of methods for the assay of radium-228 in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noyce, J.R.

    1981-02-01

    The technical literature from 1967 to May 1980 was searched for methods for assaying radium-228 in water. These methods were evaluated for their suitability as potential EPA reference methods for drinking water assays. The authors suggest the present EPA reference method (Krieger, 1976) be retained but improved, and a second method (McCurdy and Mellor, 1979), which employs beta-gamma coincidence counting, be added. Included in this report is a table that lists the principal features of 17 methods for radium-228 assays.

  11. Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time

    DOE PAGES

    Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.

    2017-12-20

    In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.

  12. Comparing Methods for Assessing Reliability Uncertainty Based on Pass/Fail Data Collected Over Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abes, Jeff I.; Hamada, Michael S.; Hills, Charles R.

    In this paper, we compare statistical methods for analyzing pass/fail data collected over time; some methods are traditional and one (the RADAR or Rationale for Assessing Degradation Arriving at Random) was recently developed. These methods are used to provide uncertainty bounds on reliability. We make observations about the methods' assumptions and properties. Finally, we illustrate the differences between two traditional methods, logistic regression and Weibull failure time analysis, and the RADAR method using a numerical example.

  13. Remote air pollution measurement

    NASA Technical Reports Server (NTRS)

    Byer, R. L.

    1975-01-01

    This paper presents a discussion and comparison of the Raman method, the resonance and fluorescence backscatter method, long path absorption methods and the differential absorption method for remote air pollution measurement. A comparison of the above remote detection methods shows that the absorption methods offer the most sensitivity at the least required transmitted energy. Topographical absorption provides the advantage of a single ended measurement, and differential absorption offers the additional advantage of a fully depth resolved absorption measurement. Recent experimental results confirming the range and sensitivity of the methods are presented.

  14. Conservation properties of numerical integration methods for systems of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Rosenbaum, J. S.

    1976-01-01

    If a system of ordinary differential equations represents a property conserving system that can be expressed linearly (e.g., conservation of mass), it is then desirable that the numerical integration method used conserve the same quantity. It is shown that both linear multistep methods and Runge-Kutta methods are 'conservative' and that Newton-type methods used to solve the implicit equations preserve the inherent conservation of the numerical method. It is further shown that a method used by several authors is not conservative.
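    The conservation property described above can be checked numerically: when the right-hand side satisfies a linear invariant (the components of f sum to zero), every Runge-Kutta step preserves the corresponding sum exactly, since the update is a linear combination of values of f. A sketch with classical RK4 on a small compartment model (the rate constants are invented for illustration):

```python
def f(x):
    """Three-compartment mass-exchange model. Each column of the rate
    matrix sums to zero, so total mass sum(x) is conserved: 1^T f(x) = 0."""
    a, b, c = x
    return [-0.5 * a + 0.1 * b,
             0.5 * a - 0.3 * b + 0.2 * c,
                       0.2 * b - 0.2 * c]

def rk4_step(x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k1)])
    k3 = f([xi + 0.5 * h * ki for xi, ki in zip(x, k2)])
    k4 = f([xi + h * ki for xi, ki in zip(x, k3)])
    return [xi + h / 6.0 * (p + 2 * q + 2 * r + s)
            for xi, p, q, r, s in zip(x, k1, k2, k3, k4)]

x = [1.0, 2.0, 3.0]
for _ in range(1000):
    x = rk4_step(x, 0.01)
print(sum(x))  # total mass stays 6.0 up to roundoff, though x changes
```

    Since every stage increment is itself a value of f, and 1ᵀf = 0 everywhere, the invariant is preserved to machine precision rather than merely to the order of the method.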

  15. The application of generalized, cyclic, and modified numerical integration algorithms to problems of satellite orbit computation

    NASA Technical Reports Server (NTRS)

    Chesler, L.; Pierce, S.

    1971-01-01

    Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.

  16. A comparison of modifications of the McMaster method for the enumeration of Ascaris suum eggs in pig faecal samples.

    PubMed

    Pereckiene, A; Kaziūnaite, V; Vysniauskas, A; Petkevicius, S; Malakauskas, A; Sarkūnas, M; Taylor, M A

    2007-10-21

    The comparative efficacies of seven published McMaster method modifications for faecal egg counting were evaluated on pig faecal samples containing Ascaris suum eggs. Comparisons were made as to the number of samples found to be positive by each of the methods, the total egg counts per gram (EPG) of faeces, the variations in EPG obtained in the samples examined, and the ease of use of each of the methods. Each method was evaluated after the examination of 30 samples of faeces. The positive samples were identified by counting A. suum eggs in one, two and three sections of a newly designed McMaster chamber. The methods compared in the present study were reported by: I-Henriksen and Aagaard [Henriksen, S.A., Aagaard, K.A., 1976. A simple flotation and McMaster method. Nord. Vet. Med. 28, 392-397]; II-Kassai [Kassai, T., 1999. Veterinary Helminthology. Butterworth-Heinemann, Oxford, 260 pp.]; III and IV-Urquhart et al. [Urquhart, G.M., Armour, J., Duncan, J.L., Dunn, A.M., Jennings, F.W., 1996. Veterinary Parasitology, 2nd ed. Blackwell Science Ltd., Oxford, UK, 307 pp.] (centrifugation and non-centrifugation methods); V and VI-Grønvold [Grønvold, J., 1991. Laboratory diagnoses of helminths common routine methods used in Denmark. In: Nansen, P., Grønvold, J., Bjørn, H. (Eds.), Seminars on Parasitic Problems in Farm Animals Related to Fodder Production and Management. The Estonian Academy of Sciences, Tartu, Estonia, pp. 47-48] (salt solution, and salt and glucose solution); VII-Thienpont et al. [Thienpont, D., Rochette, F., Vanparijs, O.F.J., 1986. Diagnosing Helminthiasis by Coprological Examination. Coprological Examination, 2nd ed. Janssen Research Foundation, Beerse, Belgium, 205 pp.]. The proportion of samples found positive by examining a single section ranged from 98.9% (method I) to 51.1% (method VII).
    Only with methods I and II was there 100% positivity in two out of three of the chambers examined, and the FECs obtained using these methods were significantly (p<0.01) higher than those of the remaining methods. Mean FEC varied between 243 EPG (method I) and 82 EPG (method IV). Examination of all three chambers resulted in four methods (I, II, V and VI) having 100% sensitivity, while method VII had the lowest sensitivity (83.3%). Mean FEC in this case varied between 239 EPG (method I) and 81 EPG (method IV). Based on the mean FEC for two chambers, an efficiency coefficient (EF) was calculated and equated to 1 for the highest egg count (method I) and to 0.87, 0.57, 0.34, 0.53, 0.49 and 0.50 for the remaining methods (II-VII), respectively. Efficiency coefficients make it possible not only to recalculate and unify the results of faecal examinations obtained by any method, but also to interpret coproscopical examinations by other authors. Method VII was the easiest and quickest but least sensitive, and method I the most complex but most sensitive. Examining two or three sections of the McMaster chamber increased sensitivity for all methods.
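    The EF correction described above is simple arithmetic: dividing a method's eggs-per-gram result by its efficiency coefficient rescales it to the reference (method I) scale. A minimal sketch using the EF values quoted in the abstract (the `epg` helper and its multiplication factor are illustrative, not from the paper):

```python
# Efficiency coefficients from the abstract (method I = reference).
EF = {"I": 1.00, "II": 0.87, "III": 0.57, "IV": 0.34,
      "V": 0.53, "VI": 0.49, "VII": 0.50}

def epg(eggs_counted: int, multiplication_factor: float) -> float:
    """Raw McMaster chamber count -> eggs per gram of faeces. The
    multiplication factor depends on the modification used (dilution
    and chamber volume); the value passed in is illustrative."""
    return eggs_counted * multiplication_factor

def unify(epg_value: float, method: str) -> float:
    """Rescale a method's EPG to the method-I scale via its EF."""
    return epg_value / EF[method]

# Example: method IV's mean of 82 EPG rescales to ~241 EPG on the
# method-I scale, close to method I's own mean of 243 EPG.
print(unify(82.0, "IV"))
```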

  17. Effect of preparation methods on dispersion stability and electrochemical performance of graphene sheets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li, E-mail: chenli1981@lut.cn; Li, Na; Zhang, Mingxia

    Chemical exfoliation is one of the most important strategies for preparing graphene. The aggregation of graphene sheets severely prevents graphene from exhibiting its excellent properties. However, there have been no attempts to investigate the effect of preparation methods on the dispersity of graphene sheets. In this study, three chemical exfoliation methods, the Hummers method, a modified Hummers method, and an improved method, were used to prepare graphene sheets. The influence of the preparation method on the structure, dispersion stability in organic solvents, and electrochemical properties of the graphene sheets was investigated. Fourier transform infrared microscopy, Raman spectra, transmission electron microscopy, and UV–vis spectrophotometry were employed to analyze the structure of the as-prepared graphene sheets. The results showed that graphene prepared by the improved method exhibits excellent dispersity and stability in organic solvents without any additional stabilizer or modifier, which is attributed to the complete exfoliation and regular structure. Moreover, cyclic voltammetric and electrochemical impedance spectroscopy measurements showed that graphene prepared by the improved method exhibits electrochemical properties superior to those of graphene prepared by the other two methods. - Graphical abstract: Graphene oxides with different oxidation degrees were obtained via three methods, and then graphenes with different crystal structures were created by chemical reduction of the exfoliated graphene oxides. - Highlights: • Graphene oxides with different oxidation degrees were obtained via three oxidation methods. • The influence of the oxidation method on the microstructure of graphene was investigated. • The effect of the oxidation method on the dispersion stability of graphene was investigated. • The effect of the oxidation method on the electrochemical properties of graphene was discussed.

  18. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells

    PubMed Central

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has long been pursued by islet researchers. In this study, we compared the advantages and disadvantages of our patented method with those of commonly used conventional methods (the Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, which was significantly shorter than the time required by the Ficoll-400, 1077, and HPU methods (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to those of the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. It overcomes some of the shortcomings of conventional methods, giving a relatively higher quality and yield of islets within a shorter duration at a lower cost. Therefore, the current method provides researchers with an alternative option for islet isolation and should be widely generalized. PMID:28207765

  19. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing it with two variants of the Laplace approximation method and three MC methods, including the nested sampling method recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in accuracy, convergence, and consistency. The method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is thus mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
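    The path-sampling idea can be reduced to a toy case where the marginal likelihood is known in closed form: a Gaussian likelihood with a conjugate Gaussian prior. Along the power-posterior path p_β ∝ prior × likelihood^β, the log marginal likelihood equals ∫₀¹ E_β[log L] dβ; in this toy model the tempered posterior is itself Gaussian, so the expectation is exact and only the β integral is numerical (the paper's MCMC path sampling is replaced here by these closed forms):

```python
import math

# Toy model: y_i ~ N(theta, 1) with conjugate prior theta ~ N(0, 1).
y = [0.3, -1.2, 0.8, 1.5, -0.4]
n, sy, syy = len(y), sum(y), sum(v * v for v in y)

def expected_loglik(beta):
    """E_beta[log L(theta)] under the power posterior, which here is
    Gaussian: p_beta(theta) = N(m, 1/tau), tau = 1 + beta*n,
    m = beta*sy/tau."""
    tau = 1.0 + beta * n
    m = beta * sy / tau
    # E[(y_i - theta)^2] = (y_i - m)^2 + Var(theta) = (y_i - m)^2 + 1/tau
    ss = sum((v - m) ** 2 for v in y) + n / tau
    return -0.5 * n * math.log(2 * math.pi) - 0.5 * ss

# Thermodynamic integration: log Z = integral over beta in [0, 1] of
# E_beta[log L], here via the trapezoidal rule on a fine grid.
K = 1000
g = [expected_loglik(k / K) for k in range(K + 1)]
ti = (sum(g) - 0.5 * (g[0] + g[-1])) / K

# Exact log marginal likelihood: y ~ N(0, I + 1 1^T), so the
# determinant is 1 + n and the quadratic form is y'y - (sum y)^2/(1+n).
exact = (-0.5 * n * math.log(2 * math.pi) - 0.5 * math.log(1 + n)
         - 0.5 * (syy - sy * sy / (1 + n)))
print(ti, exact)  # the two estimates agree closely
```

    In realistic models E_β[log L] must be estimated by MCMC at each β, which is where the path sampling and power-coefficient schedule discussed in the abstract come in.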

  20. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789

  1. A rapid, efficient, and economic device and method for the isolation and purification of mouse islet cells.

    PubMed

    Zongyi, Yin; Funian, Zou; Hao, Li; Ying, Cheng; Jialin, Zhang; Baifeng, Li

    2017-01-01

    A rapid, efficient, and economical method for the isolation and purification of islets has been pursued by numerous islet-related researchers. In this study, we compared the advantages and disadvantages of our patented method with those of commonly used conventional methods (the Ficoll-400, 1077, and handpicking methods). Cell viability was assayed using Trypan blue, cell purity and yield were assayed using diphenylthiocarbazone, and islet function was assayed using acridine orange/ethidium bromide staining and enzyme-linked immunosorbent assay-glucose stimulation testing 4 days after cultivation. The results showed that our islet isolation and purification method required 12 ± 3 min, significantly shorter than the time required in the Ficoll-400, 1077, and HPU groups (34 ± 3, 41 ± 4, and 30 ± 4 min, respectively; P < 0.05). There was no significant difference in islet viability among the four groups. The islet purity, function, yield, and cost of our method were superior to those of the Ficoll-400 and 1077 methods, but inferior to the handpicking method. However, the handpicking method may cause wrist injury and visual impairment in researchers during large-scale islet isolation (>1000 islets). In summary, the MCT method is a rapid, efficient, and economical method for isolating and purifying murine islet cell clumps. It overcomes some of the shortcomings of conventional methods, yielding islets of relatively higher quality and in greater numbers within a shorter duration at a lower cost, and therefore provides researchers with a worthwhile alternative for islet isolation.

  2. A new cation-exchange method for accurate field speciation of hexavalent chromium

    USGS Publications Warehouse

    Ball, J.W.; McCleskey, R. Blaine

    2003-01-01

    A new method for field speciation of Cr(VI) has been developed to meet present stringent regulatory standards and to overcome the limitations of existing methods. The method consists of passing a water sample through strong acid cation-exchange resin at the field site, where Cr(III) is retained while Cr(VI) passes into the effluent and is preserved for later determination. The method is simple, rapid, portable, and accurate, and makes use of readily available, inexpensive materials. Cr(VI) concentrations are determined later in the laboratory using any elemental analysis instrument sufficiently sensitive to measure the Cr(VI) concentrations of interest. The new method allows measurement of Cr(VI) concentrations as low as 0.05 µg L-1, storage of samples for at least several weeks prior to analysis, and use of readily available analytical instrumentation. Cr(VI) can be separated from Cr(III) between pH 2 and 11 at Cr(III)/Cr(VI) concentration ratios as high as 1000. The new method has demonstrated excellent comparability with two commonly used methods, the Hach Company direct colorimetric method and USEPA method 218.6. The new method is superior to the Hach direct colorimetric method owing to its relative sensitivity and simplicity. The new method is superior to USEPA method 218.6 in the presence of Fe(II) concentrations up to 1 mg L-1 and Fe(III) concentrations up to 10 mg L-1. Time stability of preserved samples is a significant advantage over the 24-h time constraint specified for USEPA method 218.6.

  3. Whole-Body Computed Tomography-Based Body Mass and Body Fat Quantification: A Comparison to Hydrostatic Weighing and Air Displacement Plethysmography.

    PubMed

    Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steve T; Heiny, Eric L; Creer, Andrew R; Gibby, Wendell A

    We correlate and evaluate the accuracy of accepted anthropometric methods of percent body fat (%BF) quantification, namely, hydrostatic weighing (HW) and air displacement plethysmography (ADP), to 2 automatic adipose tissue quantification methods using computed tomography (CT). Twenty volunteer subjects (14 men, 6 women) received head-to-toe CT scans. Hydrostatic weighing and ADP were obtained from 17 and 12 subjects, respectively. The CT data underwent conversion using 2 separate algorithms, namely, the Schneider method and the Beam method, to convert Hounsfield units to their respective tissue densities. The overall mass and %BF of both methods were compared with HW and ADP. When comparing ADP to CT data using the Schneider method and Beam method, correlations were r = 0.9806 and 0.9804, respectively. Paired t tests indicated there were no statistically significant biases. Additionally, observed average differences in %BF between ADP and the Schneider method and the Beam method were 0.38% and 0.77%, respectively. The %BF measured from ADP, the Schneider method, and the Beam method all had significantly higher mean differences when compared with HW (3.05%, 2.32%, and 1.94%, respectively). We have shown that total body mass correlates remarkably well with both the Schneider method and Beam method of mass quantification. Furthermore, %BF calculated with the Schneider method and Beam method CT algorithms correlates remarkably well with ADP. The application of these CT algorithms has utility in further research to accurately stratify risk factors with periorgan, visceral, and subcutaneous types of adipose tissue, and has the potential for significant clinical application.

  4. Method for determination of aflatoxin M₁ in cheese and butter by HPLC using an immunoaffinity column.

    PubMed

    Sakuma, Hisako; Kamata, Yoichi; Sugita-Konishi, Yoshiko; Kawakami, Hiroshi

    2011-01-01

    A rapid, sensitive, and convenient method for determination of aflatoxin M₁ (AFM₁) in cheese and butter by HPLC was developed and validated. The method employs a safe extraction solution (a mixture of acetonitrile, methanol and water) and an immunoaffinity column (IAC) for clean-up. Compared with the widely used method employing chloroform and a Florisil column, the IAC method has a short analytical time and there are no interference peaks. The limits of quantification (LOQ) of the IAC method were 0.12 and 0.14 µg/kg, while those of the Florisil column method were 0.47 and 0.23 µg/kg in cheese and butter, respectively. The recovery and relative standard deviation (RSD) for cheese (spiked at 0.5 µg/kg) in the IAC method were 92% and 7%, respectively, while for the Florisil column method the corresponding values were 76% and 10%. The recovery and RSD for butter (spiked at 0.5 µg/kg) in the IAC method were 97% and 9%, and those in the Florisil method were 74% and 9%, respectively. In the IAC method, the values of in-house precision (n=2, day=5) of cheese and butter (spiked at 0.5 µg/kg) were 9% and 13%, respectively. The IAC method is superior to the Florisil column method in terms of safety, ease of handling, sensitivity and reliability. A survey of AFM₁ contamination in imported cheese and butter in Japan was conducted by the IAC method. AFM₁ was not detected in 60 samples of cheese and 30 samples of butter.

  5. 26 CFR 1.412(c)(1)-3 - Applying the minimum funding requirements to restored plans.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...) In general—(1) Restoration method. The restoration method is a funding method that adapts the... spread gain method that maintains an unfunded liability. A plan may adopt any cost method that satisfies...

  6. Comparison of modal superposition methods for the analytical solution to moving load problems.

    DOT National Transportation Integrated Search

    1994-01-01

    The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...

  7. Turbulent boundary layers over nonstationary plane boundaries

    NASA Technical Reports Server (NTRS)

    Roper, A. T.; Gentry, G. L., Jr.

    1978-01-01

    Methods of predicting integral parameters and skin friction coefficients of turbulent boundary layers developing over moving ground planes were evaluated. The three methods evaluated were: relative integral parameter method; relative power law method; and modified law of the wall method.

  8. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Treesearch

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  9. Comparative Evaluation of Two Methods to Estimate Natural Gas Production in Texas

    EIA Publications

    2003-01-01

    This report describes an evaluation conducted by the Energy Information Administration (EIA) in August 2003 of two methods that estimate natural gas production in Texas. The first method (parametric method) was used by EIA from February through August 2003 and the second method (multinomial method) replaced it starting in September 2003, based on the results of this evaluation.

  10. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…

  11. Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design

    Treesearch

    Peter H. Cafferata; Leslie M. Reid

    2017-01-01

    Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...

  12. Slip and Slide Method of Factoring Trinomials with Integer Coefficients over the Integers

    ERIC Educational Resources Information Center

    Donnell, William A.

    2012-01-01

    In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss…

  13. Evaluating IRT- and CTT-Based Methods of Estimating Classification Consistency and Accuracy Indices from Single Administrations

    ERIC Educational Resources Information Center

    Deng, Nina

    2011-01-01

    Three decision consistency and accuracy (DC/DA) methods, the Livingston and Lewis (LL) method, LEE method, and the Hambleton and Han (HH) method, were evaluated. The purposes of the study were: (1) to evaluate the accuracy and robustness of these methods, especially when their assumptions were not well satisfied, (2) to investigate the "true"…

  14. Rapid Radiochemical Method for Total Radiostrontium (Sr-90) ...

    EPA Pesticide Factsheets

    Technical Fact Sheet Analysis Purpose: Qualitative analysis Technique: Beta counting Method Developed for: Strontium-89 and strontium-90 in building materials Method Selected for: SAM lists this method for qualitative analysis of strontium-89 and strontium-90 in concrete or brick building materials Summary of subject analytical method which will be posted to the SAM website to allow access to the method.

  15. 26 CFR 1.472-2 - Requirements incident to adoption and use of LIFO inventory method.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... inventory method. (ii) Any method of establishing pools for inventory under the dollar-value LIFO inventory method. (iii) Any method of determining the LIFO value of a dollar-value inventory pool, such as the... selecting a price index to be used with the index or link chain method of valuing inventory pools under the...

  16. Attitudes of Teachers of Arabic as a Foreign Language toward Methods of Foreign Language Teaching

    ERIC Educational Resources Information Center

    Seraj, Sami A.

    2010-01-01

    This study examined the attitude of teachers of Arabic as a foreign language toward some of the most well known teaching methods. For this reason the following eight methods were selected: (1) the Grammar-Translation Method (GTM), (2) the Direct Method (DM), (3) the Audio-Lingual Method (ALM), (4) Total Physical Response (TPR), (5) Community…

  17. Effects of Anchor Item Methods on the Detection of Differential Item Functioning within the Family of Rasch Models

    ERIC Educational Resources Information Center

    Wang, Wen-Chung

    2004-01-01

    Scale indeterminacy in analysis of differential item functioning (DIF) within the framework of item response theory can be resolved by imposing 3 anchor item methods: the equal-mean-difficulty method, the all-other anchor item method, and the constant anchor item method. In this article, applicability and limitations of these 3 methods are…

  18. A Comparison of Cut Scores Using Multiple Standard Setting Methods.

    ERIC Educational Resources Information Center

    Impara, James C.; Plake, Barbara S.

    This paper reports the results of using several alternative methods of setting cut scores. The methods used were: (1) a variation of the Angoff method (1971); (2) a variation of the borderline group method; and (3) an advanced impact method (G. Dillon, 1996). The results discussed are from studies undertaken to set the cut scores for fourth grade…

  19. On finite element methods for the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Aziz, A. K.; Werschulz, A. G.

    1979-01-01

    The numerical solution of the Helmholtz equation is considered via finite element methods. A two-stage method which gives the same accuracy in the computed gradient as in the computed solution is discussed. Error estimates for the method, based on a newly developed proof, are given, and computational considerations showing this method to be computationally superior to previous methods are presented.
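
By way of illustration only (a plain P1 discretization, not the paper's two-stage method), a 1D Helmholtz model problem with a known exact solution can be assembled and checked in a few lines; the wavenumber and mesh size are arbitrary choices:

```python
import numpy as np

# 1D Helmholtz model problem: u'' + k^2 u = 0 on (0, 1), u(0) = 0,
# u(1) = sin(k); exact solution u(x) = sin(k x).  Linear (P1) elements.
kk = 2.0                      # wavenumber (below the first Dirichlet resonance pi)
n = 200                       # number of elements (hypothetical choice)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Element assembly gives the tridiagonal system (K - k^2 M) u = 0 with
# stiffness K = tridiag(-1, 2, -1)/h and consistent mass M = (h/6) tridiag(1, 4, 1).
main = 2.0 / h - kk**2 * 4.0 * h / 6.0
off = -1.0 / h - kk**2 * h / 6.0
A = (np.diag(np.full(n - 1, main))
     + np.diag(np.full(n - 2, off), 1)
     + np.diag(np.full(n - 2, off), -1))
b = np.zeros(n - 1)
b[-1] -= off * np.sin(kk)     # lift the Dirichlet value u(1) = sin(k)

u = np.zeros(n + 1)
u[-1] = np.sin(kk)
u[1:-1] = np.linalg.solve(A, b)
err = float(np.abs(u - np.sin(kk * x)).max())   # O(h^2) for P1 elements
```

The nodal error decays as O(h²), which is the baseline accuracy that gradient-recovery schemes like the paper's two-stage method aim to match in the computed gradient.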

  20. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

    The restricted random search method is proposed as a simple Monte Carlo sampling method for rapidly locating minima in multiple-minima problems. The method is based on taboo search, which has recently been applied to continuous test functions. The concept of a taboo region is used instead of a taboo list, so sampling in a region near an old configuration is restricted. The method is applied to 2-dimensional test functions and to argon clusters, and is found to be a practical and efficient way to locate near-global configurations of both.
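
A minimal sketch of the restricted sampling idea, using the 2D Rastrigin function as a stand-in for the paper's test functions; the taboo radius, search domain, and iteration count are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # 2D Rastrigin function: many local minima, global minimum f = 0 at the origin.
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

radius = 0.3                      # taboo radius (hypothetical choice)
taboo = np.empty((0, 2))          # accepted configurations define taboo regions
best_x, best_f = None, np.inf
for _ in range(3000):
    x = rng.uniform(-5.12, 5.12, size=2)
    # Restricted sampling: reject trial points inside any existing taboo region.
    if taboo.size and np.min(np.sum((taboo - x)**2, axis=1)) < radius**2:
        continue
    taboo = np.vstack([taboo, x])
    fx = f(x)
    if fx < best_f:
        best_x, best_f = x, fx
```

Because already-visited neighborhoods are excluded, the accepted samples spread across the domain instead of re-exploring the same basins, which is the efficiency gain the abstract describes.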

  1. The Split Coefficient Matrix method for hyperbolic systems of gasdynamic equations

    NASA Technical Reports Server (NTRS)

    Chakravarthy, S. R.; Anderson, D. A.; Salas, M. D.

    1980-01-01

    The Split Coefficient Matrix (SCM) finite difference method for solving hyperbolic systems of equations is presented. This new method is based on the mathematical theory of characteristics. The development of the method from characteristic theory is presented. Boundary point calculation procedures consistent with the SCM method used at interior points are explained. The split coefficient matrices that define the method for steady supersonic and unsteady inviscid flows are given for several examples. The SCM method is used to compute several flow fields to demonstrate its accuracy and versatility. The similarities and differences between the SCM method and the lambda-scheme are discussed.

  2. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  3. An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes

    NASA Astrophysics Data System (ADS)

    Ding, Deng; Chong U, Sio

    2010-05-01

    An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan Fast Fourier Transform method. Theoretical analysis shows that the overall complexity of the new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the proposed algorithm is accurate and stable for small strike prices K, which develops and improves the Carr-Madan method.
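
The Carr-Madan representation that the method builds on can be sketched for the Black-Scholes model, the simplest exp-Lévy process; here the inversion integral is evaluated by plain trapezoidal quadrature rather than the FFT grid or the authors' combined scheme, and all parameter values are hypothetical:

```python
import numpy as np
from math import erf, exp, log, pi, sqrt

# Hypothetical Black-Scholes parameters.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
alpha = 1.5                       # Carr-Madan damping factor

def phi(u):
    # Characteristic function of log S_T under Black-Scholes.
    mu = log(S0) + (r - 0.5 * sigma**2) * T
    return np.exp(1j * u * mu - 0.5 * sigma**2 * u**2 * T)

# Fourier transform of the damped call price and its inversion along v >= 0:
# C(k) = exp(-alpha*k)/pi * Re integral_0^inf exp(-i v k) psi(v) dv.
k = log(K)
v = np.linspace(0.0, 100.0, 200001)
psi = exp(-r * T) * phi(v - (alpha + 1.0) * 1j) / (
    alpha**2 + alpha - v**2 + 1j * (2.0 * alpha + 1.0) * v)
g = np.real(np.exp(-1j * v * k) * psi)
integral = float(np.sum((g[1:] + g[:-1]) * np.diff(v)) / 2.0)   # trapezoid
call_cm = exp(-alpha * k) / pi * integral

# Black-Scholes closed form for comparison.
N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
call_bs = S0 * N(d1) - K * exp(-r * T) * N(d2)
```

For the Black-Scholes choice the two prices agree closely; an FFT evaluates the same integrand on a regular grid for many strikes at once, which is where the O(N log N) cost comes from.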

  4. Methods for producing complex films, and films produced thereby

    DOEpatents

    Duty, Chad E.; Bennett, Charlee J. C.; Moon, Ji -Won; Phelps, Tommy J.; Blue, Craig A.; Dai, Quanqin; Hu, Michael Z.; Ivanov, Ilia N.; Jellison, Jr., Gerald E.; Love, Lonnie J.; Ott, Ronald D.; Parish, Chad M.; Walker, Steven

    2015-11-24

    A method for producing a film, the method comprising melting a layer of precursor particles on a substrate until at least a portion of the melted particles are planarized and merged to produce the film. The invention is also directed to a method for producing a photovoltaic film, the method comprising depositing particles having a photovoltaic or other property onto a substrate, and affixing the particles to the substrate, wherein the particles may or may not be subsequently melted. Also described herein are films produced by these methods, methods for producing a patterned film on a substrate, and methods for producing a multilayer structure.

  5. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. By employing mixed polyhedral cells (hexahedral, tetrahedral, etc.), these methods allow more accurate modeling of non-rectangular structures and objects, because the traditional stair-stepped boundary approximations associated with orthogonal-grid finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  6. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonics function system error correction methods. The accuracy of the neural network approach depends mainly on the network architecture. Analysis and simulation show that both the BP and RBF neural network system error correction methods achieve high correction accuracy; considering training speed and network scale, the RBF network method is preferable to the BP network method when the training sample set is small.

  7. Research progress of nano self-cleaning anti-fouling coatings

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zhao, Y. J.; Teng, J. L.; Wang, J. H.; Wu, L. S.; Zheng, Y. L.

    2018-01-01

    There are many methods of evaluating the performance of nano self-cleaning anti-fouling coatings, such as the carbon-blacking method, the coating reflection coefficient method, the glass microbead method, the film method, the contact angle and rolling angle method, and the organic degradation method, and these performance evaluation methods have been applied to self-cleaning antifouling coatings. Furthermore, the types of nano self-cleaning anti-fouling coatings based on aqueous media are described, such as photocatalytic self-cleaning coatings, silicone coatings, organic fluorine coatings, fluorosilicone coatings, fluorocarbon coatings, and polysilazane self-cleaning coatings. The research and application of the different kinds of nano self-cleaning antifouling coatings are analyzed, and the latest research results are summarized.

  8. Analysis of a turbulent boundary layer over a moving ground plane

    NASA Technical Reports Server (NTRS)

    Roper, A. T.; Gentry, G. L., Jr.

    1972-01-01

    Four methods of predicting the integral and friction parameters for a turbulent boundary layer over a moving ground plane were evaluated using test data obtained in a 76.2- by 50.8-centimeter tunnel operated in the open-sidewall configuration. The methods are (1) the relative integral parameter method, (2) the modified power law method, (3) the relative power law method, and (4) the modified law of the wall method. The modified law of the wall method predicts a more rapid decrease in skin friction with increasing ratio of belt velocity to free-stream velocity than do methods (1) and (3).

  9. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximate solution of nonlinear jerk equations (third-order differential equations) is obtained using a modified harmonic balance method. The method is simpler and easier to carry out because fewer nonlinear algebraic equations must be solved than in the classical harmonic balance method. The results obtained from this method are compared with those obtained from other analytical methods available in the literature and with the numerical method. The solution shows good agreement with the numerical solution as well as with the analytical methods of the available literature.

  10. [Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].

    PubMed

    Murase, Kenya

    2015-01-01

    In this issue, symbolic methods for solving differential equations were first introduced. Of these, the Laplace transform method was presented together with some examples in which it was applied to the differential equations derived from a two-compartment kinetic model and from an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations were introduced, together with examples in which these methods were used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced, together with examples from medical physics.

  11. The Robin Hood method - A novel numerical method for electrostatic problems based on a non-local charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje

    2006-03-20

    We introduce a novel numerical method, named the Robin Hood method, for solving electrostatic problems. The approach is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each conducting surface, non-local charge transfers are performed between the surface elements that differ most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics: for a problem with N surface elements, the computational complexity essentially scales as N^α with α < 2, the required computer memory scales with N, and the error of the potential decreases exponentially with the number of iterations over many orders of magnitude, without Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some ideas for future development toward possible applications outside electrostatics are addressed.
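
A toy sketch of the charge-transfer iteration for a 1D "wire" of N surface elements; the influence matrix, and in particular its diagonal self-interaction term, are rough hypothetical choices rather than the paper's boundary-element discretization:

```python
import numpy as np

# Toy conductor: N charge-carrying elements on the segment [-1, 1].
N = 50
x = np.linspace(-1.0, 1.0, N)
h = x[1] - x[0]
d = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(d, 1.0)                # placeholder; diagonal replaced below
M = 1.0 / d                             # Coulomb-like influence matrix, V = M q
np.fill_diagonal(M, 2.0 / h)            # hypothetical self-interaction term

q = np.full(N, 1.0 / N)                 # fixed total charge, uniform start
V = M @ q
spread0 = float(V.max() - V.min())
for _ in range(5000):
    i, j = int(np.argmax(V)), int(np.argmin(V))
    if i == j:
        break                           # already equipotential
    # Non-local transfer sized to equalize the two most extreme potentials.
    dq = (V[i] - V[j]) / (M[i, i] + M[j, j] - 2.0 * M[i, j])
    q[i] -= dq
    q[j] += dq
    V += dq * (M[:, j] - M[:, i])       # rank-one potential update
spread = float(V.max() - V.min())
```

Each transfer conserves total charge exactly and strictly lowers the electrostatic energy, so the potential spread across the surface shrinks toward equipotentiality; the converged charges pile up toward the wire's ends, as physics suggests.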

  12. Methods to control for unmeasured confounding in pharmacoepidemiology: an overview.

    PubMed

    Uddin, Md Jamal; Groenwold, Rolf H H; Ali, Mohammed Sanni; de Boer, Anthonius; Roes, Kit C B; Chowdhury, Muhammad A B; Klungel, Olaf H

    2016-06-01

    Background: Unmeasured confounding is one of the principal problems in pharmacoepidemiologic studies. Several methods have been proposed to detect or control for unmeasured confounding, either at the study design phase or at the data analysis phase. Aim of the Review: To provide an overview of commonly used methods to detect or control for unmeasured confounding and to provide recommendations for their proper application in pharmacoepidemiology. Methods/Results: Methods to control for unmeasured confounding in the design phase of a study are case-only designs (e.g., case-crossover, case-time-control, self-controlled case series) and the prior event rate ratio adjustment method. Methods that can be applied in the data analysis phase include the negative control method, the perturbation variable method, instrumental variable methods, sensitivity analysis, and ecological analysis. A separate group of methods are those in which additional information on confounders is collected from a substudy; this group includes external adjustment, propensity score calibration, two-stage sampling, and multiple imputation. Conclusion: As the performance and application of the methods to handle unmeasured confounding may differ across studies and across databases, we stress the importance of using both statistical evidence and substantial clinical knowledge for the interpretation of study results.

  13. A hybrid perturbation-Galerkin method for differential equations containing a parameter

    NASA Technical Reports Server (NTRS)

    Geer, James F.; Andersen, Carl M.

    1989-01-01

    A two-step hybrid perturbation-Galerkin method to solve a variety of differential equations which involve a parameter is presented and discussed. The method consists of: (1) the use of a perturbation method to determine the asymptotic expansion of the solution about one or more values of the parameter; and (2) the use of some of the perturbation coefficient functions as trial functions in the classical Bubnov-Galerkin method. This hybrid method has the potential of overcoming some of the drawbacks of the perturbation method and the Bubnov-Galerkin method when they are applied by themselves, while combining some of the good features of both. The proposed method is illustrated first with a simple linear two-point boundary value problem and is then applied to a nonlinear two-point boundary value problem in lubrication theory. The results obtained from the hybrid method are compared with approximate solutions obtained by purely numerical methods. Some general features of the method, as well as some special tips for its implementation, are discussed. A survey of some current research application areas is presented and its degree of applicability to broader problem areas is discussed.

  14. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater

    NASA Astrophysics Data System (ADS)

    Riad, Safaa M.; Salem, Hesham; Elbalkiny, Heba T.; Khattab, Fatma I.

    2015-04-01

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites (either production or livestock wastewater) after solid phase extraction on OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax of 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT), which starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results were statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05.
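
The principal component regression step can be illustrated on simulated three-component mixture spectra; the Gaussian band shapes, noise level, and Beer-Lambert mixing below are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(0.0, 1.0, 50)
# Hypothetical pure-component spectra: one Gaussian band per analyte.
pure = np.stack([np.exp(-((wl - c) / 0.08) ** 2) for c in (0.3, 0.5, 0.7)])

# Beer-Lambert mixing: absorbance = concentrations @ pure spectra + noise.
C_train = rng.uniform(0.1, 1.0, (30, 3))
A_train = C_train @ pure + rng.normal(0.0, 1e-3, (30, 50))

# Principal component regression: project the spectra onto the leading
# principal components, then ordinary least squares from scores to concentrations.
mean_A = A_train.mean(axis=0)
U, s, Vt = np.linalg.svd(A_train - mean_A, full_matrices=False)
n_pc = 3
scores = (A_train - mean_A) @ Vt[:n_pc].T
X = np.column_stack([np.ones(len(scores)), scores])
coef, *_ = np.linalg.lstsq(X, C_train, rcond=None)

def predict(A):
    t = (A - mean_A) @ Vt[:n_pc].T
    return np.column_stack([np.ones(len(t)), t]) @ coef

C_test = rng.uniform(0.1, 1.0, (5, 3))
A_test = C_test @ pure + rng.normal(0.0, 1e-3, (5, 50))
err = float(np.abs(predict(A_test) - C_test).max())
```

With three well-separated bands, three principal components capture the mixing subspace and the regression recovers the test concentrations to within the noise level; PLS differs only in choosing components that also maximize covariance with the concentrations.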

  15. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon

    2017-05-01

    A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three sigma rule (3σ), interquartile range (IQR), and median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed by considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to the other two methods. However, the IQR method shows a limitation in identifying excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
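
The three rules can be sketched directly on residual-like data; the planted outliers, sample size, and cutoff constants below are hypothetical choices (1.4826 is the usual Gaussian consistency factor for the MAD):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200)             # stand-in for detrended residuals
x[:5] = [8.0, -9.0, 10.0, 7.5, -8.5]      # planted outliers

def three_sigma(r):
    # 3-sigma rule: flag points more than 3 standard deviations from the mean.
    return np.abs(r - r.mean()) > 3.0 * r.std()

def iqr_rule(r, k=1.5):
    # IQR rule: flag points beyond k * IQR outside the quartiles.
    q1, q3 = np.percentile(r, [25, 75])
    return (r < q1 - k * (q3 - q1)) | (r > q3 + k * (q3 - q1))

def mad_rule(r, k=3.5):
    # MAD rule: robust z-score against the median absolute deviation.
    med = np.median(r)
    mad = np.median(np.abs(r - med))
    return np.abs(r - med) > k * 1.4826 * mad
```

Because the median and quartiles are barely moved by the planted values, the IQR and MAD rules keep their cutoffs tight, while the 3σ rule's mean and standard deviation are inflated by the outliers themselves, which is why it degrades at high contamination levels.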

  16. Validated univariate and multivariate spectrophotometric methods for the determination of pharmaceuticals mixture in complex wastewater.

    PubMed

    Riad, Safaa M; Salem, Hesham; Elbalkiny, Heba T; Khattab, Fatma I

    2015-04-05

    Five accurate, precise, and sensitive univariate and multivariate spectrophotometric methods were developed for the simultaneous determination of a ternary mixture containing Trimethoprim (TMP), Sulphamethoxazole (SMZ) and Oxytetracycline (OTC) in wastewater samples collected from different sites (either production or livestock wastewater) after solid phase extraction using OASIS HLB cartridges. In the univariate methods, OTC was determined at its λmax 355.7 nm (0D), while TMP and SMZ were determined by three different univariate methods. Method (A) is based on the successive spectrophotometric resolution technique (SSRT), which starts with the ratio subtraction method followed by the ratio difference method for the determination of TMP and SMZ. Method (B) is the successive derivative ratio technique (SDR). Method (C) is mean centering of the ratio spectra (MCR). The developed multivariate methods are principal component regression (PCR) and partial least squares (PLS). The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the three drugs. The obtained results were statistically compared with those obtained by the official methods, showing no significant difference with respect to accuracy and precision at p = 0.05. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Investigation of the low-depression velocity layer in desert area by multichannel analysis of surface-wave method

    USGS Publications Warehouse

    Cheng, S.; Tian, G.; Xia, J.; He, H.; Shi, Z.; ,

    2004-01-01

    The multichannel analysis of surface waves (MASW) is a newly developed method that has been employed in various environmental and engineering geophysics applications overseas. However, only a few case studies can be found in China, and most importantly, there has been no application of MASW in desert areas in China or abroad. We present a case study of investigating the low-depression velocity layer in the Temple of North Taba area of the Erdos Basin. The MASW method successfully defined the low-depression velocity layer in the desert area. Comparing results obtained by the MASW method with those from the refraction seismic method, we discuss the efficiency and simplicity of applying MASW in the desert area. It is shown that the maximum investigation depth can reach 60 m in the study area when the acquisition and processing parameters are carefully chosen. The MASW method can remedy the shortcomings of the refraction method and the micro-seismograph log method in investigating the low-depression velocity layer. It is also a powerful tool for investigating complicated near-surface materials and possesses many unique advantages.

  18. Dual domain material point method for multiphase flows

    NASA Astrophysics Data System (ADS)

    Zhang, Duan

    2017-11-01

    Although the particle-in-cell method was first invented in the 1960s for fluid computations, one of its later versions, the material point method, is mostly used for solid calculations. Recent development of multi-velocity formulations for multiphase flows and fluid-structure interactions requires that the Lagrangian capability of the method be combined with Eulerian calculations for fluids. Because of the different numerical representations of the materials, additional numerical schemes are needed to ensure continuity of the materials. New applications of the method to compute fluid motions have revealed numerical difficulties in various versions of the method. To resolve these difficulties, the dual domain material point method is introduced and improved. Unlike other particle-based methods, the material point method uses both Lagrangian particles and an Eulerian mesh; it therefore avoids direct communication between particles. With this unique property and the Lagrangian capability of the method, it is shown that a multiscale numerical scheme can be efficiently built on the dual domain material point method. In this talk, the theoretical foundation of the method will be introduced and numerical examples will be shown. Work sponsored by the next generation code project of LANL.

  19. What makes a contraceptive acceptable?

    PubMed

    Berer, M

    1995-01-01

    The women's health movement is developing an increasing number of negative campaigns against various contraceptive methods based on three assumptions: 1) user-controlled methods are better for women than provider-controlled methods, 2) long-acting methods are undesirable because of their susceptibility to abuse, and 3) systemic methods carry unacceptable health risks to women. While these objections have sparked helpful debate, criticizing an overreliance on such methods is one thing and calling for bans on the provision of injectables and implants and on the development of vaccine contraceptives is another. Examination of the terms "provider-controlled," "user-controlled," and "long-acting" reveals that their definitions are not as clear-cut as opponents would have us believe. Some women's health advocates find the methods that are long-acting and provider-controlled to be the most problematic. They also criticize the near 100% contraceptive effectiveness of the long-acting methods despite the fact that the goal of contraception is to prevent pregnancy. It is wrong to condemn these methods because of their link to population control policies of the 1960s, and it is important to understand that long-acting, effective methods are often beneficial to women who require contraception for 20-22 years of their lives. Arguments against systemic methods (including RU-486 for early abortion and contraceptive vaccines) revolve around issues of safety. Feminists have gone so far as to create an intolerable situation by publishing books that criticize these methods based on erroneous conclusions and faulty scientific analysis. While women's health advocates have always rightly called for bans on abuse of various methods, they have not extended this ban to the methods themselves. In settings where other methods are not available, bans can lead to harm or maternal deaths.
Another perspective can be used to consider methods in terms of their relationship with the user (repeated application). While feminists have called for more barrier and natural methods, most people in the world today refuse to use condoms even though they are the best protection from infection. Instead science should pursue promising new methods as well as continue to improve existing methods and to fill important gaps. Feminists should be advocates for women and their diverse needs rather than advocates against specific contraceptive methods.

  20. A new method for water quality assessment: by harmony degree equation.

    PubMed

    Zuo, Qiting; Han, Chunhui; Liu, Jing; Ma, Junxia

    2018-02-22

    Water quality assessment is an important basic task in the development, utilization, management, and protection of water resources, and also a prerequisite for water safety. In this paper, the harmony degree equation (HDE) was introduced into water quality assessment research, and a new method based on the HDE was proposed: water quality assessment by harmony degree equation (WQA-HDE). First, the calculation steps and ideas of this method were described in detail; then, this method and several other important water quality assessment methods (the single factor assessment method, the mean-type comprehensive index assessment method, and the multi-level gray correlation assessment method) were used to assess the water quality of the Shaying River (the largest tributary of the Huaihe in China). For this purpose, a 2-year (2013-2014) dataset of nine water quality variables covering seven monitoring sites, approximately 189 observations in all, was used to compare and analyze the characteristics and advantages of the new method. The results showed that the calculation steps of WQA-HDE are similar to those of the comprehensive assessment method, and WQA-HDE is more operational compared with other water quality assessment methods. In addition, the new method shows good flexibility in the setting of the judgment criterion value HD0 of water quality: when HD0 = 0.8, the results are closer to reality, and more realistic and reliable. In particular, when HD0 = 1, the results of WQA-HDE are consistent with the single factor assessment method; both methods are then subject to the most stringent "one vote veto" judgment condition. Thus, WQA-HDE is a composite method that combines single factor assessment and comprehensive assessment. This research not only broadens the theoretical method system of harmony theory but also promotes the unification of water quality assessment methods, and can serve as a reference for other comprehensive assessments.

  1. Timing of nest vegetation measurement may obscure adaptive significance of nest-site characteristics: A simulation study.

    PubMed

    McConnell, Mark D; Monroe, Adrian P; Burger, Loren Wes; Martin, James A

    2017-02-01

    Advances in understanding avian nesting ecology are hindered by a prevalent lack of agreement between nest-site characteristics and fitness metrics such as nest success. We posit this is a result of inconsistent and improper timing of nest-site vegetation measurements. Therefore, we evaluated how the timing of nest vegetation measurement influences the estimated effects of vegetation structure on nest survival. We simulated phenological changes in nest-site vegetation growth over a typical nesting season and modeled how the timing of measuring that vegetation, relative to nest fate, creates bias in conclusions regarding its influence on nest survival. We modeled the bias associated with four methods of measuring nest-site vegetation: Method 1, measuring at nest initiation; Method 2, measuring at nest termination regardless of fate; Method 3, measuring at nest termination for successful nests and at estimated completion for unsuccessful nests; and Method 4, measuring at nest termination regardless of fate while also accounting for initiation date. We quantified and compared bias for each method for varying simulated effects, ranked models for each method using AIC, and calculated the proportion of simulations in which each model (measurement method) was selected as the best model. Our results indicate that the risk of drawing an erroneous or spurious conclusion was present in all methods but greatest with Method 2, which is the most common method reported in the literature. Methods 1 and 3 were similarly less biased. Method 4 provided no additional value, as its bias was similar to that of Method 2 for all scenarios. While Method 1 measurements are seldom practical to collect in the field, Method 3 is logistically practical and minimizes inherent bias. Implementation of Method 3 will facilitate estimating the effect of nest-site vegetation on survival in the least biased way and allow reliable conclusions to be drawn.

  2. Psychological traits underlying different killing methods among Malaysian male murderers.

    PubMed

    Kamaluddin, Mohammad Rahim; Shariff, Nadiah Syariani; Nurfarliza, Siti; Othman, Azizah; Ismail, Khaidzir H; Mat Saat, Geshina Ayu

    2014-04-01

    Murder is the most notorious crime, violating religious, social and cultural norms. Examining the types and number of killing methods used is pivotal in a murder case. However, the psychological traits underlying specific and multiple killing methods are still understudied. The present study attempts to fill this gap in knowledge by identifying the underlying psychological traits of different killing methods among Malaysian murderers. The study adopted an observational cross-sectional methodology using a guided self-administered questionnaire for data collection. The sampling frame consisted of 71 Malaysian male murderers from 11 Malaysian prisons who were selected using a purposive sampling method. The participants were also asked to report the types and number of killing methods used to kill their respective victims. An independent-sample t-test was performed to establish the mean score difference in psychological traits between murderers who used single and multiple types of killing methods. Kruskal-Wallis tests were carried out to ascertain psychological trait differences between specific types of killing methods. The results suggest that specific psychological traits underlie the type and number of killing methods used during murder. The majority (88.7%) of murderers used a single method of killing. Multiple methods of killing were evident in 'premeditated' murder compared with 'passion' murder, and revenge was a common motive. Examples of multiple methods are combinations of stabbing and strangulation or slashing and physical force. An exception was premeditated murder committed with shooting, which was usually a single method, attributable to the high lethality of firearms. Shooting was also notable when the motive was financial gain or related to drug dealing. Murderers who used multiple killing methods were more aggressive and sadistic than those who used a single killing method. Those who used multiple methods or slashing also displayed a higher level of minimisation traits. Despite its limitations, this study has shed some light on the underlying psychological traits of different killing methods, which is useful in the field of criminology.

  3. Technical note: Comparison of metal-on-metal hip simulator wear measured by gravimetric, CMM and optical profiling methods

    NASA Astrophysics Data System (ADS)

    Alberts, L. Russell; Martinez-Nogues, Vanesa; Baker Cook, Richard; Maul, Christian; Bills, Paul; Racasan, R.; Stolz, Martin; Wood, Robert J. K.

    2018-03-01

    Simulation of wear in artificial joint implants is critical for evaluating implant designs and materials. Traditional protocols employ the gravimetric method to determine the loss of material by weighing the implant components before and after various test intervals and after the completed test. However, the gravimetric method cannot identify the location, area coverage or maximum depth of the wear, and it has difficulty resolving proportionally small weight changes in relatively heavy implants. In this study, we compare the gravimetric method with two geometric surface methods: an optical method (RedLux) and a coordinate measuring method (CMM). We tested ten Adept hips in a simulator for 2 million cycles (MC). Gravimetric and optical measurements were performed at 0.33, 0.66, 1.00, 1.33 and 2 MC. CMM measurements were done before and after the test. A high correlation was found between the gravimetric and optical methods for both heads (R² = 0.997) and cups (R² = 0.96). Both geometric methods (optical and CMM) measured more volume loss than the gravimetric method (for the heads, p = 0.004 (optical) and p = 0.08 (CMM); for the cups, p = 0.01 (optical) and p = 0.003 (CMM)). Two cups recorded negative wear at 2 MC by the gravimetric method, but none did by either the optical method or CMM. The geometric methods were prone to confounding factors such as surface deformation, while the gravimetric method could be confounded by protein absorption and backside wear. Both geometric methods were able to show the location, area covered and depth of the wear on the bearing surfaces, and to track their changes during the test, providing significant advantages over solely using the gravimetric method.

  4. A Review of the Extraction and Determination Methods of Thirteen Essential Vitamins to the Human Body: An Update from 2010.

    PubMed

    Zhang, Yuan; Zhou, Wei-E; Yan, Jia-Qing; Liu, Min; Zhou, Yu; Shen, Xin; Ma, Ying-Lin; Feng, Xue-Song; Yang, Jun; Li, Guo-Hui

    2018-06-19

    Vitamins are a class of essential nutrients that play important roles in human health. They are involved in many physiological functions, and both their deficiency and excess can put health at risk. Therefore, establishing methods for monitoring vitamin concentrations in different matrices is necessary. In this review, an updated overview of the main pretreatment and determination methods in use since 2010 is given. Ultrasonic-assisted extraction, liquid-liquid extraction, solid phase extraction and dispersive liquid-liquid microextraction are the most common pretreatment methods, while the determination methods involve chromatographic methods, electrophoretic methods, microbiological assays, immunoassays, biosensors and several other methods. The different pretreatments and determination methods are discussed.

  5. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    PubMed Central

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-01-01

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Building on the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can estimate the azimuth angle with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, it does not require complex frequency-domain operations, reducing computational complexity. PMID:28230763
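    As a point of reference, the conventional intensity-based bearing estimator that this work builds on can be sketched as below: the time-averaged intensity components I_x = ⟨p·v_x⟩ and I_y = ⟨p·v_y⟩ give the bearing as atan2(I_y, I_x). The plane-wave signal, noise level, and 40° arrival angle are illustrative assumptions; this is the passive baseline, not the proposed matched-filtering method:

```python
import numpy as np

def azimuth_from_avs(p, vx, vy):
    """Estimate source azimuth (degrees) from one acoustic vector
    sensor via time-averaged acoustic intensity components."""
    ix = np.mean(p * vx)   # I_x = <p * v_x>
    iy = np.mean(p * vy)   # I_y = <p * v_y>
    return np.degrees(np.arctan2(iy, ix))

# Synthetic plane wave arriving from 40 degrees, with additive noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 4000)
theta = np.radians(40.0)
p = np.sin(2 * np.pi * 300 * t)
vx = np.cos(theta) * p + 0.05 * rng.standard_normal(t.size)
vy = np.sin(theta) * p + 0.05 * rng.standard_normal(t.size)

est = azimuth_from_avs(p, vx, vy)
```

    Averaging over many samples suppresses the noise terms, which is why a single sensor suffices for a bearing estimate.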

  6. Hybrid finite element and Brownian dynamics method for charged particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huber, Gary A., E-mail: ghuber@ucsd.edu; Miao, Yinglong; Zhou, Shenggao

    2016-04-28

    Diffusion is often the rate-determining step in many biological processes. Currently, the two main computational methods for studying diffusion are stochastic methods, such as Brownian dynamics, and continuum methods, such as the finite element method. A previous study introduced a hybrid diffusion method that couples the strengths of these two methods, but it was limited by the lack of interactions among the particles; the force on each particle had to come from an external field. This study further develops the method to allow charged particles. The method is derived for a general multidimensional system and is presented using a basic test case for a one-dimensional linear system with one charged species and a radially symmetric system with three charged species.

  7. A robust direct-integration method for rotorcraft maneuver and periodic response

    NASA Technical Reports Server (NTRS)

    Panda, Brahmananda

    1992-01-01

    The Newmark-Beta method and the Newton-Raphson iteration scheme are combined to develop a direct-integration method for evaluating the maneuver and periodic-response expressions for rotorcraft. The method requires the generation of Jacobians and includes higher derivatives in the formulation of the geometric stiffness matrix to enhance the convergence of the system. The method leads to effective convergence with nonlinear structural dynamics and aerodynamic terms. Singularities in the matrices can be addressed with the method as they arise from a Lagrange multiplier approach for coupling equations with nonlinear constraints. The method is also shown to be general enough to handle singularities from quasisteady control-system models. The method is shown to be more general and robust than the similar 2GCHAS method for analyzing rotorcraft dynamics.

  8. Sources of method bias in social science research and recommendations on how to control it.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P

    2012-01-01

    Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.

  9. Projection methods for the numerical solution of Markov chain models

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    Projection methods for computing stationary probability distributions of Markov chain models are presented. A general projection method seeks an approximation from a subspace of small dimension to the original problem. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span(v, Av, ..., A^(m-1)v). These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or incomplete factorization methods to enhance convergence.
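    The projection idea for the stationary-distribution problem can be sketched as follows: build an orthonormal basis of the Krylov subspace with Arnoldi's process on A = Pᵀ, then take the Ritz vector whose Ritz value is closest to 1 (since π satisfies Pᵀπ = π). The 3-state chain is a made-up example, and this is a textbook projection, not the paper's specific algorithms:

```python
import numpy as np

def arnoldi(A, v0, m):
    """Orthonormal basis V of span{v0, A v0, ..., A^(m-1) v0}
    plus the (m+1) x m Hessenberg matrix H (Arnoldi process)."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # breakdown: invariant subspace found
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def stationary_by_projection(P, m=5):
    """Approximate the stationary distribution pi (pi P = pi) from a
    Krylov subspace of A = P.T: Ritz vector with Ritz value nearest 1."""
    A = P.T
    V, H = arnoldi(A, np.ones(A.shape[0]), m)
    Hm = H[:-1, :]                       # square m x m projection
    vals, vecs = np.linalg.eig(Hm)
    k = np.argmin(np.abs(vals - 1.0))
    pi = np.real(V[:, : Hm.shape[0]] @ vecs[:, k])
    return pi / pi.sum()

# 3-state transition matrix (rows sum to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = stationary_by_projection(P, m=3)
```

    For this tiny chain the Krylov space spans the whole state space, so the projection is exact; the point of the method is that m can stay small when N is large.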

  10. Kinematic Distances: A Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Wenger, Trey V.; Balser, Dana S.; Anderson, L. D.; Bania, T. M.

    2018-03-01

    Distances to high-mass star-forming regions (HMSFRs) in the Milky Way are a crucial constraint on the structure of the Galaxy. Only kinematic distances are available for a majority of the HMSFRs in the Milky Way. Here, we compare the kinematic and parallax distances of 75 Galactic HMSFRs to assess the accuracy of kinematic distances. We derive the kinematic distances using three different methods: the traditional method using the Brand & Blitz rotation curve (Method A), the traditional method using the Reid et al. rotation curve and updated solar motion parameters (Method B), and a Monte Carlo technique (Method C). Methods B and C produce kinematic distances closest to the parallax distances, with median differences of 13% (0.43 kpc) and 17% (0.42 kpc), respectively. Except in the vicinity of the tangent point, the kinematic distance uncertainties derived by Method C are smaller than those of Methods A and B. In a large region of the Galaxy, the Method C kinematic distances constrain both the distances and the Galactocentric positions of HMSFRs more accurately than parallax distances. Beyond the tangent point along ℓ = 30°, for example, the Method C kinematic distance uncertainties reach a minimum of 10% of the parallax distance uncertainty at a distance of 14 kpc. We develop a prescription for deriving and applying the Method C kinematic distances and distance uncertainties. The code to generate the Method C kinematic distances is publicly available and may be utilized through an online tool.
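    The Monte Carlo idea behind Method C can be illustrated with a deliberately simplified sketch: sample the LSR velocity from a Gaussian, invert a flat rotation curve for the Galactocentric radius R, and convert R to a heliocentric distance d = R₀ cos ℓ ± sqrt(R² − R₀² sin² ℓ). The R₀, Θ₀, and 7 km/s velocity uncertainty below are assumed values for illustration; the paper itself uses the Reid et al. rotation curve and a fuller treatment:

```python
import numpy as np

R0 = 8.34       # kpc, solar Galactocentric radius (assumed)
THETA0 = 240.0  # km/s, flat rotation speed (assumed)

def kinematic_distance_mc(glong_deg, vlsr, sigma_v=7.0, n=10000,
                          far=True, seed=1):
    """Monte Carlo kinematic distance for a flat rotation curve.

    Samples v ~ N(vlsr, sigma_v), inverts
    v = R0 sin(l) (THETA0/R - THETA0/R0) for R, and resolves the
    near/far distance ambiguity via the `far` flag.
    Returns the median distance and a 68% interval (kpc).
    """
    rng = np.random.default_rng(seed)
    l = np.radians(glong_deg)
    v = rng.normal(vlsr, sigma_v, n)
    # invert the flat rotation curve for Galactocentric radius R
    R = THETA0 * R0 * np.sin(l) / (THETA0 * np.sin(l) + v)
    disc = R**2 - (R0 * np.sin(l))**2
    root = np.sqrt(disc[disc > 0])       # keep physical solutions
    d = R0 * np.cos(l) + root if far else R0 * np.cos(l) - root
    lo, med, hi = np.percentile(d, [16, 50, 84])
    return med, (lo, hi)

med, (lo, hi) = kinematic_distance_mc(30.0, 60.0, far=True)
```

    The distance uncertainty falls out of the spread of the resampled distances rather than from error propagation, which is what lets the method handle the strongly non-Gaussian errors near the tangent point.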

  11. Estimating dietary costs of low-income women in California: a comparison of 2 approaches.

    PubMed

    Aaron, Grant J; Keim, Nancy L; Drewnowski, Adam; Townsend, Marilyn S

    2013-04-01

    Currently, no simplified approach to estimating food costs exists for a large, nationally representative sample. The objective was to compare two approaches for estimating individual daily diet costs in a population of low-income women in California. Cost estimates based on the time-intensive method 1 (three 24-h recalls and associated food prices on receipts) were compared with estimates made using the less intensive method 2 [a food-frequency questionnaire (FFQ) and store prices]. Low-income participants (n = 121) of USDA nutrition programs were recruited. Mean daily diet costs, both unadjusted and adjusted for energy, were compared using Pearson correlation coefficients and the Bland-Altman 95% limits of agreement between methods. Energy and nutrient intakes derived by the two methods were comparable; where differences occurred, the FFQ (method 2) provided higher nutrient values than the 24-h recall (method 1). The crude daily diet cost was $6.32 by the 24-h recall method and $5.93 by the FFQ method (P = 0.221). The energy-adjusted diet cost was $6.65 by the 24-h recall method and $5.98 by the FFQ method (P < 0.001). Although the agreement between methods was weaker than expected, both approaches may be useful. Additional research is needed to further refine a large national survey approach (method 2) to estimate daily dietary costs in a way that is minimally time-intensive for the participant and moderately time-intensive for the researcher.
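    The Bland-Altman comparison used here can be sketched as follows; the paired cost values below are hypothetical, for illustration only:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman comparison of two measurement methods.

    Returns the mean difference (bias) and the 95% limits of
    agreement, bias +/- 1.96 * SD of the paired differences.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical daily diet costs (USD) by 24-h recall vs. FFQ
recall = np.array([6.1, 7.0, 5.4, 6.8, 6.3, 5.9, 7.2, 6.0])
ffq    = np.array([5.8, 6.5, 5.6, 6.2, 5.9, 5.7, 6.8, 5.5])

bias, (lo, hi) = bland_altman(recall, ffq)
```

    Unlike a correlation coefficient, the limits of agreement expose a systematic bias between methods even when the two series are highly correlated, which is why both statistics are reported in the study.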

  12. Modified Extraction-Free Ion-Pair Methods for the Determination of Flunarizine Dihydrochloride in Bulk Drug, Tablets, and Human Urine

    NASA Astrophysics Data System (ADS)

    Prashanth, K. N.; Basavaiah, K.

    2018-01-01

    Two simple and sensitive extraction-free spectrophotometric methods are described for the determination of flunarizine dihydrochloride. The methods are based on ion-pair complex formation between the nitrogenous compound flunarizine (FNZ), converted from flunarizine dihydrochloride (FNH), and the acidic dye phenol red (PR), in which the extraction step is circumvented. The first method (method A) is based on the formation of a yellow-colored ion-pair complex (1:1 drug:dye) between FNZ and PR in chloroform, which is measured at 415 nm. In the second method (method B), the formed drug-dye ion-pair complex is treated with ethanolic potassium hydroxide, and the resulting base form of the dye is measured at 580 nm. The 1:1 stoichiometry of the drug-dye ion-pair complex is determined by Job's continuous variations method, and the stability constant of the complex is also calculated. The methods quantify FNZ over the concentration ranges 5.0-70.0 μg/mL in method A and 0.5-7.0 μg/mL in method B. The calculated molar absorptivities are 6.17 × 10³ and 5.5 × 10⁴ L·mol⁻¹·cm⁻¹ for methods A and B, respectively, with corresponding Sandell sensitivity values of 0.0655 and 0.0074 μg/cm². The methods are applied to the determination of FNZ in the pure drug and in human urine.

  13. Development of a novel and highly efficient method of isolating bacteriophages from water.

    PubMed

    Liu, Weili; Li, Chao; Qiu, Zhi-Gang; Jin, Min; Wang, Jing-Feng; Yang, Dong; Xiao, Zhong-Hai; Yuan, Zhao-Kang; Li, Jun-Wen; Xu, Qun-Ying; Shen, Zhi-Qiang

    2017-08-01

    Bacteriophages are widely used in the treatment of drug-resistant bacteria and the improvement of food safety through bacterial lysis. However, limited investigation of bacteriophages restricts their further application. In this study, a novel and highly efficient method was developed for isolating bacteriophages from water based on electropositive silica gel particles (ESPs). To optimize the ESPs method, we evaluated the eluent type, flow rate, pH, temperature, and inoculation concentration using bacteriophage f2. Quantitative detection showed that the recovery of the ESPs method reached over 90%. Qualitative detection demonstrated that the ESPs method effectively isolated 70% of extremely low-concentration bacteriophage (10⁰ PFU/100 L). Based on host bacteria comprising 33 standard strains and 10 isolated strains, the bacteriophages in 18 water samples collected from three sites in the Tianjin Haihe River Basin were isolated by the ESPs and traditional methods. Results showed that the ESPs method was significantly superior to the traditional method: it isolated 32 strains of bacteriophage, whereas the traditional method isolated 15 strains. The sample isolation efficiency and bacteriophage isolation efficiency of the ESPs method were 3.28 and 2.13 times higher than those of the traditional method. The developed ESPs method is characterized by high isolation efficiency, efficient handling of large water sample volumes and low requirements on water quality. Copyright © 2017. Published by Elsevier B.V.

  14. Condition number estimation of preconditioned matrices.

    PubMed

    Kushida, Noriyuki

    2015-01-01

    The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, while the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations using information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed-memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies were carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in its results even for a simple problem, while the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated by the finite element method.
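    Hager's method, on which the new estimator is based, needs only operator access (products with A and Aᵀ), which is what makes it usable when the preconditioned matrix is never formed explicitly. Below is a textbook sketch on a small explicit matrix, not the paper's parallel implementation; note the estimate is a lower bound on the true 1-norm:

```python
import numpy as np

def hager_norm1(matvec, rmatvec, n, itmax=10):
    """Hager's estimator for the 1-norm of an operator A available
    only through products A @ x (matvec) and A.T @ x (rmatvec).

    Maximizes ||A x||_1 over the unit 1-norm ball by moving between
    its vertices e_j; returns a lower-bound estimate of ||A||_1.
    """
    x = np.full(n, 1.0 / n)            # start at the ball's centroid
    est = 0.0
    for _ in range(itmax):
        y = matvec(x)
        new_est = np.abs(y).sum()      # current value of ||A x||_1
        z = rmatvec(np.sign(y))        # subgradient direction
        j = np.argmax(np.abs(z))
        if np.abs(z[j]) <= z @ x:      # no vertex improves: stop
            return max(est, new_est)
        est = new_est
        x = np.zeros(n)
        x[j] = 1.0                     # move to vertex e_j
    return est

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
est = hager_norm1(lambda v: A @ v, lambda v: A.T @ v, 3)
true = np.abs(A).sum(axis=0).max()     # exact 1-norm: max column sum
```

    To estimate the condition number of a preconditioned system M⁻¹A, one would pass closures that apply the preconditioner solve after the matrix product, so the dense product is never assembled.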

  15. Computational Methods for Configurational Entropy Using Internal and Cartesian Coordinates.

    PubMed

    Hikiri, Simon; Yoshidome, Takashi; Ikeguchi, Mitsunori

    2016-12-13

    The configurational entropy of solute molecules is a crucially important quantity to study various biophysical processes. Consequently, it is necessary to establish an efficient quantitative computational method to calculate configurational entropy as accurately as possible. In the present paper, we investigate the quantitative performance of the quasi-harmonic and related computational methods, including widely used methods implemented in popular molecular dynamics (MD) software packages, compared with the Clausius method, which is capable of accurately computing the change of the configurational entropy upon temperature change. Notably, we focused on the choice of the coordinate systems (i.e., internal or Cartesian coordinates). The Boltzmann-quasi-harmonic (BQH) method using internal coordinates outperformed all the six methods examined here. The introduction of improper torsions in the BQH method improves its performance, and anharmonicity of proper torsions in proteins is identified to be the origin of the superior performance of the BQH method. In contrast, widely used methods implemented in MD packages show rather poor performance. In addition, the enhanced sampling of replica-exchange MD simulations was found to be efficient for the convergent behavior of entropy calculations. Also in folding/unfolding transitions of a small protein, Chignolin, the BQH method was reasonably accurate. However, the independent term without the correlation term in the BQH method was most accurate for the folding entropy among the methods considered in this study, because the QH approximation of the correlation term in the BQH method was no longer valid for the divergent unfolded structures.

  16. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.

    PubMed

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-10-01

    Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both the direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for the 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was 59.6 and 78.2 per cent by the direct and indirect methods, respectively. The sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for the indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating the true haemoglobin level. More studies should be undertaken to establish the agreement and correction factor between the direct and indirect cyanmethaemoglobin methods.
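    The correction the study describes amounts to regressing direct readings on indirect ones and using the fitted line as a prediction equation. The sketch below illustrates that idea on synthetic data generated to match the reported means and mean difference; the data, noise level, and sample size are assumptions of this sketch, not the study's measurements.

    ```python
    import numpy as np

    # Synthetic paired readings (g/l): the indirect method systematically
    # underestimates the direct method by roughly the reported 5.67 g/l.
    rng = np.random.default_rng(42)
    direct = rng.normal(116.1, 12.7, size=200)
    indirect = direct - 5.67 + rng.normal(0.0, 3.0, size=200)

    # Ordinary least squares: direct ≈ slope * indirect + intercept
    slope, intercept = np.polyfit(indirect, direct, 1)
    corrected = slope * indirect + intercept

    bias_before = np.mean(direct - indirect)   # systematic underestimate
    bias_after = np.mean(direct - corrected)   # ~0 by construction of OLS
    ```

    With an intercept in the model, the mean residual of the fit is zero, which is exactly why a regression-based correction removes the systematic bias between the two methods.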

  17. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhakal, Tilak Raj

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations at high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points communicate only with mesh nodes, not among themselves; therefore, the MD simulations for the material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where the stress at each material point is calculated by an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation that calculates the stress at each material point is performed on GPUs using CUDA to accelerate the computation. The numerical properties of the multi-scale method are investigated, and the results of the multi-scale calculation are compared with direct MD simulation results to demonstrate the feasibility of the method. Also, the multi-scale method is applied to a two-dimensional problem of jet formation around a copper notch under a strong impact.
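    The parallelism the abstract exploits comes from the structure of the material point method itself: points scatter mass and momentum to mesh nodes through shape functions and never interact directly. A minimal one-dimensional sketch of that particle-to-grid step is shown below; the grid size, particle data, and linear hat shape functions are illustrative assumptions, not the DDMP shape functions used in the dissertation.

    ```python
    import numpy as np

    dx = 0.1
    nodes = np.arange(0.0, 1.0 + dx / 2, dx)     # 11 mesh nodes on [0, 1]
    xp = np.array([0.23, 0.47, 0.81])            # material point positions
    mp = np.array([1.0, 1.0, 1.0])               # material point masses
    vp = np.array([0.5, -0.2, 0.1])              # material point velocities

    def shape(xi, xn):
        """Linear hat function of node xn evaluated at position xi."""
        return max(0.0, 1.0 - abs(xi - xn) / dx)

    mass_g = np.zeros_like(nodes)
    mom_g = np.zeros_like(nodes)
    # Each pass of this loop touches only one point's data, so a per-point
    # stress closure (e.g. an MD sub-simulation) parallelizes trivially.
    for p in range(len(xp)):
        for n in range(len(nodes)):
            N = shape(xp[p], nodes[n])
            mass_g[n] += N * mp[p]
            mom_g[n] += N * mp[p] * vp[p]

    # Nodal velocities where the lumped nodal mass is nonzero
    vel_g = np.divide(mom_g, mass_g,
                      out=np.zeros_like(mom_g), where=mass_g > 1e-12)
    ```

    Because the linear shape functions form a partition of unity on the interior of the grid, total mass and momentum are conserved exactly by the transfer.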

  18. Teaching Fashion Illustration to University Students: Experiential and Expository Methods.

    ERIC Educational Resources Information Center

    Dragoo, Sheri; Martin, Ruth E.; Horridge, Patricia

    1998-01-01

    In a fashion illustration course, 24 students were taught using expository methods and 28 with experiential methods. Each method involved 20 lessons over eight weeks. Pre/posttest results indicated that both methods were equally effective in improving scores. (SK)

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, J R

    A total of 75 papers were presented on nuclear methods for analysis of environmental and biological samples. Sessions were devoted to software and mathematical methods; nuclear methods in atmospheric and water research; nuclear and atomic methodology; nuclear methods in biology and medicine; and nuclear methods in energy research.

  20. 40 CFR 440.50 - Applicability; description of the titanium ore subcategory.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) mills beneficiating titanium ores by electrostatic methods, magnetic and physical methods, or flotation methods; and (c) mines engaged in the dredge mining of placer deposits of sands containing rutile... methods in conjunction with electrostatic or magnetic methods). ...
