Sample records for standard curve method

  1. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artefacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  2. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film.

    PubMed

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.

  3. Simplified method for creating a density-absorbed dose calibration curve for the low dose range from Gafchromic EBT3 film

    PubMed Central

    Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki

    2016-01-01

    Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were −32.336 and −33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range. PMID:28144120

  4. Mathematics of quantitative kinetic PCR and the application of standard curves.

    PubMed

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBER Green I fluorescence. Based upon this analysis the application of a single, well- constructed standard curve could provide an estimated precision of +/-6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.

  5. Use of lignin extracted from different plant sources as standards in the spectrophotometric acetyl bromide lignin method.

    PubMed

    Fukushima, Romualdo S; Kerley, Monty S

    2011-04-27

    A nongravimetric acetyl bromide lignin (ABL) method was evaluated to quantify lignin concentration in a variety of plant materials. The traditional approach to lignin quantification required extraction of lignin with acidic dioxane and its isolation from each plant sample to construct a standard curve via spectrophotometric analysis. Lignin concentration was then measured in pre-extracted plant cell walls. However, this presented a methodological complexity because extraction and isolation procedures are lengthy and tedious, particularly if there are many samples involved. This work was targeted to simplify lignin quantification. Our hypothesis was that any lignin, regardless of its botanical origin, could be used to construct a standard curve for the purpose of determining lignin concentration in a variety of plants. To test our hypothesis, lignins were isolated from a range of diverse plants and, along with three commercial lignins, standard curves were built and compared among them. Slopes and intercepts derived from these standard curves were close enough to allow utilization of a mean extinction coefficient in the regression equation to estimate lignin concentration in any plant, independent of its botanical origin. Lignin quantification by use of a common regression equation obviates the steps of lignin extraction, isolation, and standard curve construction, which substantially expedites the ABL method. Acetyl bromide lignin method is a fast, convenient analytical procedure that may routinely be used to quantify lignin.

  6. A BAYESIAN METHOD FOR CALCULATING REAL-TIME QUANTITATIVE PCR CALIBRATION CURVES USING ABSOLUTE PLASMID DNA STANDARDS

    EPA Science Inventory

    In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...

  7. An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.

    PubMed

    Obuchowski, Nancy A

    2006-02-15

    ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.

  8. An Improved Quantitative Real-Time PCR Assay for the Enumeration of Heterosigma akashiwo (Raphidophyceae) Cysts Using a DNA Debris Removal Method and a Cyst-Based Standard Curve.

    PubMed

    Kim, Joo-Hwan; Kim, Jin Ho; Wang, Pengbin; Park, Bum Soo; Han, Myung-Soo

    2016-01-01

    The identification and quantification of Heterosigma akashiwo cysts in sediments by light microscopy can be difficult due to the small size and morphology of the cysts, which are often indistinguishable from those of other types of algae. Quantitative real-time PCR (qPCR) based assays represent a potentially efficient method for quantifying the abundance of H. akashiwo cysts, although standard curves must be based on cyst DNA rather than on vegetative cell DNA due to differences in gene copy number and DNA extraction yield between these two cell types. Furthermore, qPCR on sediment samples can be complicated by the presence of extracellular DNA debris. To solve these problems, we constructed a cyst-based standard curve and developed a simple method for removing DNA debris from sediment samples. This cyst-based standard curve was compared with a standard curve based on vegetative cells, as vegetative cells may have twice the gene copy number of cysts. To remove DNA debris from the sediment, we developed a simple method involving dilution with distilled water and heating at 75°C. A total of 18 sediment samples were used to evaluate this method. Cyst abundance determined using the qPCR assay without DNA debris removal yielded results up to 51-fold greater than with direct counting. By contrast, a highly significant correlation was observed between cyst abundance determined by direct counting and the qPCR assay in conjunction with DNA debris removal (r2 = 0.72, slope = 1.07, p < 0.001). Therefore, this improved qPCR method should be a powerful tool for the accurate quantification of H. akashiwo cysts in sediment samples.

  9. Research on Standard and Automatic Judgment of Press-fit Curve of Locomotive Wheel-set Based on AAR Standard

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Xiao, Jun; Gao, Dong Jun; Zong, Shu Yu; Li, Zhu

    2018-03-01

    In the production of the Association of American Railroads (AAR) locomotive wheel-set, the press-fit curve is the most important basis for the reliability of wheel-set assembly. In the past, Most of production enterprises mainly use artificial detection methods to determine the quality of assembly. There are cases of miscarriage of justice appear. For this reason, the research on the standard is carried out. And the automatic judgment of press-fit curve is analysed and designed, so as to provide guidance for the locomotive wheel-set production based on AAR standard.

  10. A sensitive chemiluminescence enzyme immunoassay based on molecularly imprinted polymers solid-phase extraction of parathion.

    PubMed

    Chen, Ge; Jin, Maojun; Du, Pengfei; Zhang, Chan; Cui, Xueyan; Zhang, Yudan; She, Yongxin; Shao, Hua; Jin, Fen; Wang, Shanshan; Zheng, Lufei; Wang, Jing

    2017-08-01

    The chemiluminescence enzyme immunoassay (CLEIA) method responds differently to various sample matrices because of the matrix effect. In this work, the CLEIA method was coupled with molecularly imprinted polymers (MIPs) synthesized by precipitation polymerization to study the matrix effect. The sample recoveries ranged from 72.62% to 121.89%, with a relative standard deviation (RSD) of 3.74-18.14%.The ratio of the sample matrix-matched standard curve slope rate to the solvent standard curve slope was 1.21, 1.12, 1.17, and 0.85 for apple, rice, orange and cabbage in samples pretreated with the mixture of PSA and C 18 . However, the ratio of sample (apple, rice, orange, and cabbage) matrix-matched standard-MIPs curve slope rate to the solvent standard curve was 1.05, 0.92, 1.09, and 1.05 in samples pretreated with MIPs, respectively. The results demonstrated that the matrices of the samples greatly interfered with the detection of parathion residues by CLEIA. The MIPs bound specifically to the parathion in the samples and eliminated the matrix interference effect. Therefore, the CLEIA method have successfully applied MIPs in sample pretreatment to eliminate matrix interference effects and provided a new sensitive assay for agro-products. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow late through the inlet boundary condition while the second method utilizes a series of meshes representing the physical damper blade at various angles. The generated performance curves from both methods are compared with an experiment setup in accordance with the AMCA fan performance testing standard.

  12. Quantifying stream nutrient uptake from ambient to saturation with instantaneous tracer additions

    NASA Astrophysics Data System (ADS)

    Covino, T. P.; McGlynn, B. L.; McNamara, R.

    2009-12-01

    Stream nutrient tracer additions and spiraling metrics are frequently used to quantify stream ecosystem behavior. However, standard approaches limit our understanding of aquatic biogeochemistry. Specifically, the relationship between in-stream nutrient concentration and stream nutrient spiraling has not been characterized. The standard constant rate (steady-state) approach to stream spiraling parameter estimation, either through elevating nutrient concentration or adding isotopically labeled tracers (e.g. 15N), provides little information regarding the stream kinetic curve that represents the uptake-concentration relationship analogous to the Michaelis-Menten curve. These standard approaches provide single or a few data points and often focus on estimating ambient uptake under the conditions at the time of the experiment. Here we outline and demonstrate a new method using instantaneous nutrient additions and dynamic analyses of breakthrough curve (BTC) data to characterize the full relationship between spiraling metrics and nutrient concentration. We compare the results from these dynamic analyses to BTC-integrated, and standard steady-state approaches. Our results indicate good agreement between these three approaches but we highlight the advantages of our dynamic method. Specifically, our new dynamic method provides a cost-effective and efficient approach to: 1) characterize full concentration-spiraling metric curves; 2) estimate ambient spiraling metrics; 3) estimate Michaelis-Menten parameters maximum uptake (Umax) and the half-saturation constant (Km) from developed uptake-concentration kinetic curves, and; 4) measure dynamic nutrient spiraling in larger rivers where steady-state approaches are impractical.

  13. Limitation of the Cavitron technique by conifer pit aspiration.

    PubMed

    Beikircher, B; Ameglio, T; Cochard, H; Mayr, S

    2010-07-01

    The Cavitron technique facilitates time and material saving for vulnerability analysis. The use of rotors with small diameters leads to high water pressure gradients (DeltaP) across samples, which may cause pit aspiration in conifers. In this study, the effect of pit aspiration on Cavitron measurements was analysed and a modified 'conifer method' was tested which avoids critical (i.e. pit aspiration inducing) DeltaP. Four conifer species were used (Juniperus communis, Picea abies, Pinus sylvestris, and Larix decidua) for vulnerability analysis based on the standard Cavitron technique and the conifer method. In addition, DeltaP thresholds for pit aspiration were determined and water extraction curves were constructed. Vulnerability curves obtained with the standard method showed generally a less negative P for the induction of embolism than curves of the conifer method. Differences were species-specific with the smallest effects in Juniperus. Larix showed the most pronounced shifts in P(50) (pressure at 50% loss of conductivity) between the standard (-1.5 MPa) and the conifer (-3.5 MPa) methods. Pit aspiration occurred at the lowest DeltaP in Larix and at the highest in Juniperus. Accordingly, at a spinning velocity inducing P(50), DeltaP caused only a 4% loss of conductivity induced by pit aspiration in Juniperus, but about 60% in Larix. Water extraction curves were similar to vulnerability curves indicating that spinning itself did not affect pits. Conifer pit aspiration can have major influences on Cavitron measurements and lead to an overestimation of vulnerability thresholds when a small rotor is used. Thus, the conifer method presented here enables correct vulnerability analysis by avoiding artificial conductivity losses.

  14. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO/sub 3/ standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on unconstrained minimization subroutine, VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities withmore » a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 s.« less

  15. Using FTIR-ATR Spectroscopy to Teach the Internal Standard Method

    ERIC Educational Resources Information Center

    Bellamy, Michael K.

    2010-01-01

    The internal standard method is widely applied in quantitative analyses. However, most analytical chemistry textbooks either omit this topic or only provide examples of a single-point internal standardization. An experiment designed to teach students how to prepare an internal standard calibration curve is described. The experiment is a modified…

  16. MASW on the standard seismic prospective scale using full spread recording

    NASA Astrophysics Data System (ADS)

    Białas, Sebastian; Majdański, Mariusz; Trzeciak, Maciej; Gałczyński, Edward; Maksym, Andrzej

    2015-04-01

    The Multichannel Analysis of Surface Waves (MASW) is one of seismic survey methods that use the dispersion curve of surface waves in order to describe the stiffness of the surface. Is is used mainly for geotechnical engineering scale with total length of spread between 5 - 450 m and spread offset between 1 - 100 m, the hummer is the seismic source on this surveys. The standard procedure of MASW survey is: data acquisition, dispersion analysis and inversion of extracting dispersion curve to obtain the closest theoretical curve. The final result includes share-wave velocity (Vs) values at different depth along the surveyed lines. The main goal of this work is to expand this engineering method to the bigger scale with the length of standard prospecting spread of 20 km using 4.5 Hz version of vertical component geophones. The standard vibroseis and explosive method are used as the seismic source. The acquisition were conducted on the full spread all the time during each single shoot. The seismic data acquisition used for this analysis were carried out on the Braniewo 2014 project in north of Poland. The results achieved during standard MASW procedure says that this method can be used on much bigger scale as well. The different methodology of this analysis requires only much stronger seismic source.

  17. [Growth standardized values and curves based on weight, length/height and head circumference for Chinese children under 7 years of age].

    PubMed

    Li, Hui

    2009-03-01

    To construct the growth standardized data and curves based on weight, length/height, head circumference for Chinese children under 7 years of age. Random cluster sampling was used. The fourth national growth survey of children under 7 years in the nine cities (Beijing, Harbin, Xi'an, Shanghai, Nanjing, Wuhan, Fuzhou, Guangzhou and Kunming) of China was performed in 2005 and from this survey, data of 69 760 urban healthy boys and girls were used to set up the database for weight-for-age, height-for-age (length was measured for children under 3 years) and head circumference-for-age. Anthropometric data were ascribed to rigorous methods of data collection and standardized procedures across study sites. LMS method based on BOX-COX normal transformation and cubic splines smoothing technique was chosen for fitting the raw data according to study design and data features, and standardized values of any percentile and standard deviation were obtained by the special formulation of L, M and S parameters. Length-for-age and height-for-age standards were constructed by fitting the same model but the final curves reflected the 0.7 cm average difference between these two measurements. A set of systematic diagnostic tools was used to detect possible biases in estimated percentiles or standard deviation curves, including chi2 test, which was used for reference to evaluate to the goodness of fit. The 3rd, 10th, 25th, 50th, 75th, 90th, 97th smoothed percentiles and -3, -2, -1, 0, +1, +2, +3 SD values and curves of weight-for-age, length/height-for-age and head circumference-for-age for boys and girls aged 0-7 years were made out respectively. The Chinese child growth charts was slightly higher than the WHO child growth standards. The newly established growth charts represented the growth level of healthy and well-nourished Chinese children. The sample size was very large and national, the data were high-quality and the smoothing method was internationally accepted. The new Chinese growth charts are recommended as the Chinese child growth standards in 21st century used in China.

  18. Marine Structural Steel Toughness Data Bank. Volume 3

    DTIC Science & Technology

    1991-08-28

    Headings: Break? Did specimen fracture completely? CODIc Critical COD CODi Initial COD CVN Energy Charpy V Energy Crack lgth Crack Length Curve Curve...BS5762 -Standard Year Test Temp CODIc degC mm -30 0.57 -30 0.68 -30 . 1.26 not rporw(continued) Main Stutua To n ssDta:an Material BS4360 Gr50D Page...Initial JI. . . .. ._I. . . Maximum 1, ]max * Tearing Modulus ......... Standard Method ~P S5762 -Standard Year_______________ Test Tcmp CODIc degC mm

  19. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.

  20. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated non-linear multiparameter fitting program has been used to produce a best fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze dried, 0.2% accurate, gravimetric standards. The program is based on unconstrained minimization subroutine, VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantitiesmore » with a known error. Error estimates for the calibration curve parameters can be obtined from the curvature of the Chi-Squared Matrix or from error relaxation techniques. It has been shown that non-dispersive x-ray fluorescence analysis of 0.1 to 1 mg freeze-dried UNO/sub 3/ can have an accuracy of 0.2% in 1000 sec.« less

  1. Using quasars as standard clocks for measuring cosmological redshift.

    PubMed

    Dai, De-Chang; Starkman, Glenn D; Stojkovic, Branislav; Stojkovic, Dejan; Weltman, Amanda

    2012-06-08

    We report hitherto unnoticed patterns in quasar light curves. We characterize segments of the quasar's light curves with the slopes of the straight lines fit through them. These slopes appear to be directly related to the quasars' redshifts. Alternatively, using only global shifts in time and flux, we are able to find significant overlaps between the light curves of different pairs of quasars by fitting the ratio of their redshifts. We are then able to reliably determine the redshift of one quasar from another. This implies that one can use quasars as standard clocks, as we explicitly demonstrate by constructing two independent methods of finding the redshift of a quasar from its light curve.

  2. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements per each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even if the international standard for opacimeter calibration requires that the calibration curve is to be obtained using 3 x 3 points, a calibration curve derived using 3 points could be, at times, acceptable in statistical terms, provided that the amplitude of individual measurements is low.

  3. Use of laser ablation-inductively coupled plasma-time of flight-mass spectrometry to identify the elemental composition of vanilla and determine the geographic origin by discriminant function analysis.

    PubMed

    Hondrogiannis, Ellen M; Ehrlinger, Erin; Poplaski, Alyssa; Lisle, Meredith

    2013-11-27

    A total of 11 elements found in 25 vanilla samples from Uganda, Madagascar, Indonesia, and Papua New Guinea were measured by laser ablation-inductively coupled plasma-time-of-flight-mass spectrometry (LA-ICP-TOF-MS) for the purpose of collecting data that could be used to discriminate among the origins. Pellets were prepared of the samples, and elemental concentrations were obtained on the basis of external calibration curves created using five National Institute of Standards and Technology (NIST) standards and one Chinese standard with (13)C internal standardization. These curves were validated using NIST 1573a (tomato leaves) as a check standard. Discriminant analysis was used to successfully classify the vanilla samples by their origin. Our method illustrates the feasibility of using LA-ICP-TOF-MS with an external calibration curve for high-throughput screening of spice screening analysis.

  4. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding (B) psychometric curve fitting (C) smoothing and interpolation and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility with a standard error in threshold contrast of 18.1 +/- 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4% reduced to 2.9 % by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 +/- .04 (sem) at 0.1mm and 1.82 +/- .06 at 0.25mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.

  5. Validation of GC and HPLC systems for residue studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M.

    1995-12-01

    For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facia evidence of system performance beginning at the auto sampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation ofmore » individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.« less

  6. In situ method for estimating cell survival in a solid tumor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alfieri, A.A.; Hahn, E.W.

    1978-09-01

    The response of the murine Meth-A fibrosarcoma to single and fractionated doses of x-irradiation, actinomycin D chemotherapy, and/or concomitant local tumor hyperthermia was assayed with the use of an in situ method for estimating cell kill within a solid tumor. The cell survival assay was based on a standard curve plotting number of inoculated viable cells with and without radiation-inactivated homologous tumor cells versus the time required for i.m. tumors to grow to 1.0 cu cm. The time for post-treatment tumors to grow to 1.0 cu cm was cross-referenced to the standard curve, and the number of surviving cells contributingmore » to tumor regrowth was estimated. The resulting surviving fraction curves closely resemble those obtained with in vitro systems.« less

  7. An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method

    ERIC Educational Resources Information Center

    Wyse, Adam E.; Babcock, Ben

    2017-01-01

    This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…

  8. Application of standard addition for the determination of carboxypeptidase activity in Actinomucor elegans bran koji.

    PubMed

    Fu, J; Li, L; Yang, X Q; Zhu, M J

    2011-01-01

    Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after stained by Cd-nihydrin solution, with calibration curve A, which was made by a set of known concentration standard leucine, calibration B, which was made by three sets of known concentration standard leucine solutions with the addition of three concentrations inactive crude enzyme extract, and calibration C, which was made by three sets of known concentration standard leucine solutions with the addition of three concentrations crude enzyme extract. The results indicated that application of pure amino acid standard curve was not a suitable way to determine carboxypeptidase in complicate mixture, and it probably led to overestimated carboxypeptidase activity. It was found that addition of crude exact into pure amino acid standard curve had a significant difference from pure amino acid standard curve method (p < 0.05). There was no significant enzyme activity difference (p > 0.05) between addition of active crude exact and addition of inactive crude kind, when the proper dilute multiple was used. It was concluded that the addition of crude enzyme extract to the calibration was needed to eliminate the interference of free amino acids and related compounds presented in crude enzyme extract.

  9. An approach for mapping the number and distribution of Salmonella contamination on the poultry carcass.

    PubMed

    Oscar, T P

    2008-09-01

    Mapping the number and distribution of Salmonella on poultry carcasses will help guide better design of processing procedures to reduce or eliminate this human pathogen from poultry. A selective plating media with multiple antibiotics (xylose-lysine agar medium [XL] containing N-(2-hydroxyethyl)piperazine-N'-(2-ethanesulfonic acid) and the antibiotics chloramphenicol, ampicillin, tetracycline, and streptomycin [XLH-CATS]) and a multiple-antibiotic-resistant strain (ATCC 700408) of Salmonella Typhimurium definitive phage type 104 (DT104) were used to develop an enumeration method for mapping the number and distribution of Salmonella Typhimurium DT104 on the carcasses of young chickens in the Cornish game hen class. The enumeration method was based on the concept that the time to detection by drop plating on XLH-CATS during incubation of whole chicken parts in buffered peptone water would be inversely related to the initial log number (N0) of Salmonella Typhimurium DT104 on the chicken part. The sampling plan for mapping involved dividing the chicken into 12 parts, which ranged in average size from 36 to 80 g. To develop the enumeration method, whole parts were spot inoculated with 0 to 6 log Salmonella Typhimurium DT104, incubated in 300 ml of buffered peptone water, and detected on XLH-CATS by drop plating. An inverse relationship between detection time on XLH-CATS and N0 was found (r = -0.984). The standard curve was similar for the individual chicken parts and therefore, a single standard curve for all 12 chicken parts was developed. The final standard curve, which contained a 95% prediction interval for providing stochastic results for N0, had high goodness of fit (r2 = 0.968) and was N0 (log) = 7.78 +/- 0.61 - (0.995 x detention time). Ninety-five percent of N0 were within +/- 0.61 log of the standard curve. The enumeration method and sampling plan will be used in future studies to map changes in the number and distribution of Salmonella on carcasses of young chickens fed the DT104 strain used in standard curve development and subjected to different processing procedures.

  10. The application of laser-Raman light scattering to the determination of sulfate in sea and estuarine waters

    NASA Technical Reports Server (NTRS)

    Bandy, A. R.

    1973-01-01

    Laser-Raman light scattering is a technique for determining sulfate concentrations in sea and estuarine waters with apparently none of the interferences inherent in the gravimetric and titrametric methods. The Raman measurement involved the ratioing of the peak heights of an unknown sulfate concentration and a nitrate internal standard. This ratio was used to calculate the unknown sulfate concentration from a standard curve. The standard curve was derived from the Raman data on prepared nitrate-sulfate solutions. At the 99.7% confidence level, the accuracy of the Raman technique was 7 to 8.6 percent over the concentration range of the standard curve. The sulfate analyses of water samples collected at the mouth of the James River, Hampton, Virginia, demonstrated that in most cases sulfate had a constant concentration relative to salinity in this area.

  11. Master curve characterization of the fracture toughness behavior in SA508 Gr.4N low alloy steels

    NASA Astrophysics Data System (ADS)

    Lee, Ki-Hyoung; Kim, Min-Chul; Lee, Bong-Sang; Wee, Dang-Moon

    2010-08-01

    The fracture toughness properties of the tempered martensitic SA508 Gr.4N Ni-Mo-Cr low alloy steel for reactor pressure vessels were investigated by using the master curve concept. These results were compared to those of the bainitic SA508 Gr.3 Mn-Mo-Ni low alloy steel, which is a commercial RPV material. The fracture toughness tests were conducted by 3-point bending with pre-cracked charpy (PCVN) specimens according to the ASTM E1921-09c standard method. The temperature dependency of the fracture toughness was steeper than those predicted by the standard master curve, while the bainitic SA508 Gr.3 steel fitted well with the standard prediction. In order to properly evaluate the fracture toughness of the Gr.4N steels, the exponential coefficient of the master curve equation was changed and the modified curve was applied to the fracture toughness test results of model alloys that have various chemical compositions. It was found that the modified curve provided a better description for the overall fracture toughness behavior and adequate T0 determination for the tempered martensitic SA508 Gr.4N steels.

  12. Rethinking non-inferiority: a practical trial design for optimising treatment duration.

    PubMed

    Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb

    2018-06-01

    Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results A total sample size of ~ 500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.

  13. Curve fitting air sample filter decay curves to estimate transuranic content.

    PubMed

    Hayes, Robert B; Chiou, Hung Cheng

    2004-01-01

    By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to not look for specific radon progeny values but rather transuranic activity. By using a method that will provide reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity taken over a 10-mo period, an optimization of the fitting function and quality tests for this purpose was attained.

  14. Interaction Analysis of Longevity Interventions Using Survival Curves.

    PubMed

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-06

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans . We find that interactions are generally weak even when the standard analysis indicates otherwise.

  15. Interaction Analysis of Longevity Interventions Using Survival Curves

    PubMed Central

    Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G.; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim

    2018-01-01

    A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise. PMID:29316622

  16. [Difference of three standard curves of real-time reverse-transcriptase PCR in viable Vibrio parahaemolyticus quantification].

    PubMed

    Jin, Mengtong; Sun, Wenshuo; Li, Qin; Sun, Xiaohong; Pan, Yingjie; Zhao, Yong

    2014-04-04

    We evaluated the difference of three standard curves in quantifying viable Vibrio parahaemolyticus in samples by real-time reverse-transcriptase PCR (Real-time RT-PCR). The standard curve A was established by 10-fold diluted cDNA. The cDNA was reverse transcripted after RNA synthesized in vitro. The standard curve B and C were established by 10-fold diluted cDNA. The cDNA was synthesized after RNA isolated from Vibrio parahaemolyticus in pure cultures (10(8) CFU/mL) and shrimp samples (10(6) CFU/g) (Standard curve A and C were proposed for the first time). Three standard curves were performed to quantitatively detect V. parahaemolyticus in six samples, respectively (Two pure cultured V. parahaemolyticus samples, two artificially contaminated cooked Litopenaeus vannamei samples and two artificially contaminated Litopenaeus vannamei samples). Then we evaluated the quantitative results of standard curve and the plate counting results and then analysed the differences. The three standard curves all show a strong linear relationship between the fractional cycle number and V. parahaemolyticus concentration (R2 > 0.99); The quantitative results of Real-time PCR were significantly (p < 0.05) lower than the results of plate counting. The relative errors compared with the results of plate counting ranked standard curve A (30.0%) > standard curve C (18.8%) > standard curve B (6.9%); The average differences between standard curve A and standard curve B and C were - 2.25 Lg CFU/mL and - 0.75 Lg CFU/mL, respectively, and the mean relative errors were 48.2% and 15.9%, respectively; The average difference between standard curve B and C was among (1.47 -1.53) Lg CFU/mL and the average relative errors were among 19.0% - 23.8%. Standard curve B could be applied to Real-time RT-PCR when quantify the number of viable microorganisms in samples.

  17. Learning micro incision surgery without the learning curve

    PubMed Central

    Navin, Shoba; Parikh, Rajul

    2008-01-01

    We describe a method of learning micro incision cataract surgery painlessly with the minimum of learning curves. A large-bore or standard anterior chamber maintainer (ACM) facilitates learning without change of machine or preferred surgical technique. Experience with the use of an ACM during phacoemulsification is desirable. PMID:18292624

  18. Inter-Labeler and Intra-Labeler Variability of Condition Severity Classification Models Using Active and Passive Learning Methods

    PubMed Central

    Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2018-01-01

    Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how to best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the Intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classifications models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. Results The AL methods produced, for the models induced from each labeler, smoother Intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]), was significantly lower (p = 0.049) than the Intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275 to 0.0724). Using the AL methods resulted in a lower mean Inter-labeler AUC standard deviation among the AUC values of the labelers’ different models during the training phase, compared to the variance of the induced models’ AUC values when using passive learning. The Inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the Inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an Inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods. The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p = 0.042). The difference between the SVM-Margin and Exploitation method was insignificant (p = 0.29), as was the difference between the Combination_XA and Exploitation methods (p = 0.67). Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p = 0.014), but not when using any of the three AL methods. Conclusions The use of AL methods, (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces Inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group’s individual labelers. Finally, using the AL methods when provided by the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. PMID:28456512

  19. Methods for Performing Survival Curve Quality-of-Life Assessments.

    PubMed

    Sumner, Walton; Ding, Eric; Fischer, Irene D; Hagen, Michael D

    2014-08-01

    Many medical decisions involve an implied choice between alternative survival curves, typically with differing quality of life. Common preference assessment methods neglect this structure, creating some risk of distortions. Survival curve quality-of-life assessments (SQLA) were developed from Gompertz survival curves fitting the general population's survival. An algorithm was developed to generate relative discount rate-utility (DRU) functions from a standard survival curve and health state and an equally attractive alternative curve and state. A least means squared distance algorithm was developed to describe how nearly 3 or more DRU functions intersect. These techniques were implemented in a program called X-Trade and tested. SQLA scenarios can portray realistic treatment choices. A side effect scenario portrays one prototypical choice, to extend life while experiencing some loss, such as an amputation. A risky treatment scenario portrays procedures with an initial mortality risk. A time trade scenario mimics conventional time tradeoffs. Each SQLA scenario yields DRU functions with distinctive shapes, such as sigmoid curves or vertical lines. One SQLA can imply a discount rate or utility if the other value is known and both values are temporally stable. Two SQLA exercises imply a unique discount rate and utility if the inferred DRU functions intersect. Three or more SQLA results can quantify uncertainty or inconsistency in discount rate and utility estimates. Pilot studies suggested that many subjects could learn to interpret survival curves and do SQLA. SQLA confuse some people. Compared with SQLA, standard gambles quantify very low utilities more easily, and time tradeoffs are simpler for high utilities. When discount rates approach zero, time tradeoffs are as informative and easier to do than SQLA. SQLA may complement conventional utility assessment methods. © The Author(s) 2014.

  20. Inter-labeler and intra-labeler variability of condition severity classification models using active and passive learning methods.

    PubMed

    Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2017-09-01

    Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how to best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the Intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classifications models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by using the labels provided by seven labelers. We also compared the performance of the passive and active learning models when using the consensus label. The AL methods: produced, for the models induced from each labeler, smoother Intra-labeler learning curves during the training phase, compared to the models produced when using the passive learning method. The mean standard deviation of the learning curves of the three AL methods over all labelers (mean: 0.0379; range: [0.0182 to 0.0496]), was significantly lower (p=0.049) than the Intra-labeler standard deviation when using the passive learning method (mean: 0.0484; range: [0.0275-0.0724). Using the AL methods resulted in a lower mean Inter-labeler AUC standard deviation among the AUC values of the labelers' different models during the training phase, compared to the variance of the induced models' AUC values when using passive learning. The Inter-labeler AUC standard deviation, using the passive learning method (0.039), was almost twice as high as the Inter-labeler standard deviation using our two new AL methods (0.02 and 0.019, respectively). The SVM-Margin AL method resulted in an Inter-labeler standard deviation (0.029) that was higher by almost 50% than that of our two AL methods The difference in the inter-labeler standard deviation between the passive learning method and the SVM-Margin learning method was significant (p=0.042). The difference between the SVM-Margin and Exploitation method was insignificant (p=0.29), as was the difference between the Combination_XA and Exploitation methods (p=0.67). Finally, using the consensus label led to a learning curve that had a higher mean intra-labeler variance, but resulted eventually in an AUC that was at least as high as the AUC achieved using the gold standard label and that was always higher than the expected mean AUC of a randomly selected labeler, regardless of the choice of learning method (including a passive learning method). Using a paired t-test, the difference between the intra-labeler AUC standard deviation when using the consensus label, versus that value when using the other two labeling strategies, was significant only when using the passive learning method (p=0.014), but not when using any of the three AL methods. The use of AL methods, (a) reduces intra-labeler variability in the performance of the induced models during the training phase, and thus reduces the risk of halting the process at a local minimum that is significantly different in performance from the rest of the learned models; and (b) reduces Inter-labeler performance variance, and thus reduces the dependence on the use of a particular labeler. In addition, the use of a consensus label, agreed upon by a rather uneven group of labelers, might be at least as good as using the gold standard labeler, who might not be available, and certainly better than randomly selecting one of the group's individual labelers. Finally, using the AL methods: when provided by the consensus label reduced the intra-labeler AUC variance during the learning phase, compared to using passive learning. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Adapting Shape Parameters for Cubic Bezier Curves

    NASA Technical Reports Server (NTRS)

    Isacoff, D.; Bailey, M. J.

    1985-01-01

    Bezier curves are an established tool in Computer Aided Geometric Design. One of the drawbacks of the Bezier method is that the curves often bear little resemblance to their control polygons. As a result, it becomes increasingly difficult to obtain anything but a rough outline of the desired shape. One possible solution is tomanipulate the curve itself instead of the control polygon. The standard cubic Bezier curve form has introduced into it two shape parameters, gamma 1 and 2. These parameters give the user the ability to manipulate the curve while the control polygon retains its original form, thereby providing a more intuitive feel for the necessary changes to the curve in order to achieve the desired shape.

  2. Can anthropometry measure gender discrimination? An analysis using WHO standards to assess the growth of Bangladeshi children.

    PubMed

    Moestue, Helen

    2009-08-01

    To examine the potential of anthropometry as a tool to measure gender discrimination, with particular attention to the WHO growth standards. Surveillance data collected from 1990 to 1999 were analysed. Height-for-age Z-scores were calculated using three norms: the WHO standards, the 1978 National Center for Health Statistics (NCHS) reference and the 1990 British growth reference (UK90). Bangladesh. Boys and girls aged 6-59 months (n 504 358). The three sets of growth curves provided conflicting pictures of the relative growth of girls and boys by age and over time. Conclusions on sex differences in growth depended also on the method used to analyse the curves, be it according to the shape or the relative position of the sex-specific curves. The shapes of the WHO-generated curves uniquely implied that Bangladeshi girls faltered faster or caught up slower than boys throughout their pre-school years, a finding consistent with the literature. In contrast, analysis of the relative position of the curves suggested that girls had higher WHO Z-scores than boys below 24 months of age. Further research is needed to help establish whether and how the WHO international standards can measure gender discrimination in practice, which continues to be a serious problem in many parts of the world.

  3. Analysis of variation in calibration curves for Kodak XV radiographic film using model-based parameters.

    PubMed

    Hsu, Shu-Hui; Kulasekere, Ravi; Roberson, Peter L

    2010-08-05

    Film calibration is time-consuming work when dose accuracy is essential while working in a range of photon scatter environments. This study uses the single-target single-hit model of film response to fit the calibration curves as a function of calibration method, processor condition, field size and depth. Kodak XV film was irradiated perpendicular to the beam axis in a solid water phantom. Standard calibration films (one dose point per film) were irradiated at 90 cm source-to-surface distance (SSD) for various doses (16-128 cGy), depths (0.2, 0.5, 1.5, 5, 10 cm) and field sizes (5 × 5, 10 × 10 and 20 × 20 cm²). The 8-field calibration method (eight dose points per film) was used as a reference for each experiment, taken at 95 cm SSD and 5 cm depth. The delivered doses were measured using an Attix parallel plate chamber for improved accuracy of dose estimation in the buildup region. Three fitting methods with one to three dose points per calibration curve were investigated for the field sizes of 5 × 5, 10 × 10 and 20 × 20 cm². The inter-day variation of model parameters (background, saturation and slope) were 1.8%, 5.7%, and 7.7% (1 σ) using the 8-field method. The saturation parameter ratio of standard to 8-field curves was 1.083 ± 0.005. The slope parameter ratio of standard to 8-field curves ranged from 0.99 to 1.05, depending on field size and depth. The slope parameter ratio decreases with increasing depth below 0.5 cm for the three field sizes. It increases with increasing depths above 0.5 cm. A calibration curve with one to three dose points fitted with the model is possible with 2% accuracy in film dosimetry for various irradiation conditions. The proposed fitting methods may reduce workload while providing energy dependence correction in radiographic film dosimetry. This study is limited to radiographic XV film with a Lumisys scanner.

  4. Proficiency Standards and Cut-Scores for Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…

  5. A novel Python program for implementation of quality control in the ELISA.

    PubMed

    Wetzel, Hanna N; Cohen, Cinder; Norman, Andrew B; Webster, Rose P

    2017-09-01

    The use of semi-quantitative assays such as the enzyme-linked immunosorbent assay (ELISA) requires stringent quality control of the data. However, such quality control is often lacking in academic settings due to unavailability of software and knowledge. Therefore, our aim was to develop methods to easily implement Levey-Jennings quality control methods. For this purpose, we created a program written in Python (a programming language with an open-source license) and tested it using a training set of ELISA standard curves quantifying the Fab fragment of an anti-cocaine monoclonal antibody in mouse blood. A colorimetric ELISA was developed using a goat anti-human anti-Fab capture method. Mouse blood samples spiked with the Fab fragment were tested against a standard curve of known concentrations of Fab fragment in buffer over a period of 133days stored at 4°C to assess stability of the Fab fragment and to generate a test dataset to assess the program. All standard curves were analyzed using our program to batch process the data and to generate Levey-Jennings control charts and statistics regarding the datasets. The program was able to identify values outside of two standard deviations, and this identification of outliers was consistent with the results of a two-way ANOVA. This program is freely available, which will help laboratories implement quality control methods, thus improving reproducibility within and between labs. We report here successful testing of the program with our training set and development of a method for quantification of the Fab fragment in mouse blood. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. A novel quantitative reverse-transcription PCR (qRT-PCR) for the enumeration of total bacteria, using meat micro-flora as a model.

    PubMed

    Dolan, Anthony; Burgess, Catherine M; Barry, Thomas B; Fanning, Seamus; Duffy, Geraldine

    2009-04-01

    A sensitive quantitative reverse-transcription PCR (qRT-PCR) method was developed for enumeration of total bacteria. Using two sets of primers separately to target the ribonuclease-P (RNase P) RNA transcripts of gram positive and gram negative bacteria. Standard curves were generated using SYBR Green I kits for the LightCycler 2.0 instrument (Roche Diagnostics) to allow quantification of mixed microflora in liquid media. RNA standards were used and extracted from known cell equivalents and subsequently converted to cDNA for the construction of standard curves. The number of mixed bacteria in culture was determined by qRT-PCR, and the results correlated (r(2)=0.88, rsd=0.466) with the total viable count over the range from approx. Log(10) 3 to approx. Log(10) 7 CFU ml(-1). The rapid nature of this assay (8 h) and its potential as an alternative method to the standard plate count method to predict total viable counts and shelf life are discussed.

  7. Growth Charts for Prader-Willi Syndrome During Growth Hormone Treatment

    PubMed Central

    Butler, Merlin G.; Lee, Jaehoon; Cox, Devin M.; Manzardo, Ann M.; Gold, June-Anne; Miller, Jennifer L.; Roof, Elizabeth; Dykens, Elisabeth; Kimonis, Virginia; Driscoll, Daniel J.

    2018-01-01

    The purpose of the current study was to develop syndrome-specific standardized growth curves for growth hormone–treated Prader-Willi syndrome (PWS) individuals aged 0 to 18 years. Anthropometric growth-related measures were obtained on 171 subjects with PWS who were treated with growth hormone for at least 40% of their lifespan. They had no history of scoliosis. PWS standardized growth curves were developed for 7 percentile ranges using the LMS method for weight, height, head circumference, weight/length, and BMI along with normative 3rd, 50th, and 97th percentiles plotted using control data from the literature and growth databases. Percentiles were plotted on growth charts for comparison purposes. Growth hormone treatment appears to normalize stature and markedly improves weight in PWS compared with standardized curves for non–growth hormone–treated PWS individuals. Growth chart implications and recommended usage are discussed. PMID:26842920

  8. A Method for Molecular Dynamics on Curved Surfaces

    PubMed Central

    Paquay, Stefan; Kusters, Remy

    2016-01-01

    Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focused on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to take such interactions into account by combining standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates, allowing for the reuse of many other standard tools without modifications, including parallelization through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, we obtain confined Brownian motion, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: 1) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes; and 2) the self-assembly of a coarse-grained virus capsid protein model. PMID:27028633

  9. A Method for Molecular Dynamics on Curved Surfaces

    NASA Astrophysics Data System (ADS)

    Paquay, Stefan; Kusters, Remy

    2016-03-01

    Dynamics simulations of constrained particles can greatly aid in understanding the temporal and spatial evolution of biological processes such as lateral transport along membranes and self-assembly of viruses. Most theoretical efforts in the field of diffusive transport have focussed on solving the diffusion equation on curved surfaces, for which it is not tractable to incorporate particle interactions even though these play a crucial role in crowded systems. We show here that it is possible to combine standard constraint algorithms with the classical velocity Verlet scheme to perform molecular dynamics simulations of particles constrained to an arbitrarily curved surface, in which such interactions can be taken into account. Furthermore, unlike Brownian dynamics schemes in local coordinates, our method is based on Cartesian coordinates allowing for the reuse of many other standard tools without modifications, including parallelisation through domain decomposition. We show that by applying the schemes to the Langevin equation for various surfaces, confined Brownian motion is obtained, which has direct applications to many biological and physical problems. Finally we present two practical examples that highlight the applicability of the method: (i) the influence of crowding and shape on the lateral diffusion of proteins in curved membranes and (ii) the self-assembly of a coarse-grained virus capsid protein model.

  10. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine reactive group, triazine ester, are cost effective because of their synthetic simplicity, and have increased throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking in an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).

  11. Growth curves and the international standard: How children's growth reflects challenging conditions in rural Timor-Leste.

    PubMed

    Spencer, Phoebe R; Sanders, Katherine A; Judge, Debra S

    2018-02-01

    Population-specific growth references are important in understanding local growth variation, especially in developing countries where child growth is poor and the need for effective health interventions is high. In this article, we use mixed longitudinal data to calculate the first growth curves for rural East Timorese children to identify where, during development, deviation from the international standards occurs. Over an eight-year period, 1,245 children from two ecologically distinct rural areas of Timor-Leste were measured a total of 4,904 times. We compared growth to the World Health Organization (WHO) standards using z-scores, and modeled height and weight velocity using the SuperImposition by Translation And Rotation (SITAR) method. Using the Generalized Additive Model for Location, Scale and Shape (GAMLSS) method, we created the first growth curves for rural Timorese children for height, weight and body mass index (BMI). Relative to the WHO standards, children show early-life growth faltering, and stunting throughout childhood and adolescence. The median height and weight for this population tracks below the WHO fifth centile. Males have poorer growth than females in both z-BMI (p = .001) and z-height-for-age (p = .018) and, unlike females, continue to grow into adulthood. This is the most comprehensive investigation to date of rural Timorese children's growth, and the growth curves created may potentially be used to identify future secular trends in growth as the country develops. We show significant deviation from the international standard that becomes most pronounced at adolescence, similar to the growth of other Asian populations. Males and females show different growth responses to challenging conditions in this population. © 2017 Wiley Periodicals, Inc.

  12. Creation of three-dimensional craniofacial standards from CBCT images

    NASA Astrophysics Data System (ADS)

    Subramanyan, Krishna; Palomo, Martin; Hans, Mark

    2006-03-01

    Low-dose three-dimensional Cone Beam Computed Tomography (CBCT) is becoming increasingly popular in the clinical practice of dental medicine. Two-dimensional Bolton Standards of dentofacial development are routinely used to identify deviations from normal craniofacial anatomy. With the advent of CBCT three dimensional imaging, we propose a set of methods to extend these 2D Bolton Standards to anatomically correct surface based 3D standards to allow analysis of morphometric changes seen in craniofacial complex. To create 3D surface standards, we have implemented series of steps. 1) Converting bi-plane 2D tracings into set of splines 2) Converting the 2D splines curves from bi-plane projection into 3D space curves 3) Creating labeled template of facial and skeletal shapes and 4) Creating 3D average surface Bolton standards. We have used datasets from patients scanned with Hitachi MercuRay CBCT scanner providing high resolution and isotropic CT volume images, digitized Bolton Standards from age 3 to 18 years of lateral and frontal male, female and average tracings and converted them into facial and skeletal 3D space curves. This new 3D standard will help in assessing shape variations due to aging in young population and provide reference to correct facial anomalies in dental medicine.

  13. Using Peano Curves to Construct Laplacians on Fractals

    NASA Astrophysics Data System (ADS)

    Molitor, Denali; Ott, Nadia; Strichartz, Robert

    2015-12-01

    We describe a new method to construct Laplacians on fractals using a Peano curve from the circle onto the fractal, extending an idea that has been used in the case of certain Julia sets. The Peano curve allows us to visualize eigenfunctions of the Laplacian by graphing the pullback to the circle. We study in detail three fractals: the pentagasket, the octagasket and the magic carpet. We also use the method for two nonfractal self-similar sets, the torus and the equilateral triangle, obtaining appealing new visualizations of eigenfunctions on the triangle. In contrast to the many familiar pictures of approximations to standard Peano curves, that do no show self-intersections, our descriptions of approximations to the Peano curves have self-intersections that play a vital role in constructing graph approximations to the fractal with explicit graph Laplacians that give the fractal Laplacian in the limit.

  14. Schroth Physiotherapeutic Scoliosis-Specific Exercises Added to the Standard of Care Lead to Better Cobb Angle Outcomes in Adolescents with Idiopathic Scoliosis – an Assessor and Statistician Blinded Randomized Controlled Trial

    PubMed Central

    Parent, Eric C.; Khodayari Moez, Elham; Hedden, Douglas M.; Hill, Douglas L.; Moreau, Marc; Lou, Edmond; Watkins, Elise M.; Southon, Sarah C.

    2016-01-01

    Background The North American non-surgical standard of care for adolescent idiopathic scoliosis (AIS) includes observation and bracing, but not exercises. Schroth physiotherapeutic scoliosis-specific exercises (PSSE) showed promise in several studies of suboptimal methodology. The Scoliosis Research Society calls for rigorous studies supporting the role of exercises before including it as a treatment recommendation for scoliosis. Objectives To determine the effect of a six-month Schroth PSSE intervention added to standard of care (Experimental group) on the Cobb angle compared to standard of care alone (Control group) in patients with AIS. Methods Fifty patients with AIS aged 10–18 years, with curves of 10°-45° and Risser grade 0–5 were recruited from a single pediatric scoliosis clinic and randomized to the Experimental or Control group. Outcomes included the change in the Cobb angles of the Largest Curve and Sum of Curves from baseline to six months. The intervention consisted of a 30–45 minute daily home program and weekly supervised sessions. Intention-to-treat and per protocol linear mixed effects model analyses are reported. Results In the intention-to-treat analysis, after six months, the Schroth group had significantly smaller Largest Curve than controls (-3.5°, 95% CI -1.1° to -5.9°, p = 0.006). Likewise, the between-group difference in the square root of the Sum of Curves was -0.40°, (95% CI -0.03° to -0.8°, p = 0.046), suggesting that an average patient with 51.2° at baseline, will have a 49.3° Sum of Curves at six months in the Schroth group, and 55.1° in the control group with the difference between groups increasing with severity. Per protocol analyses produced similar, but larger differences: Largest Curve = -4.1° (95% CI -1.7° to -6.5°, p = 0.002) and Sum of Curves=−0.5° (95% CI -0.8 to 0.2, p = 0.006). Conclusion Schroth PSSE added to the standard of care were superior compared to standard of care alone for reducing the curve severity in patients with AIS. Trial Registration NCT01610908 PMID:28033399

  15. Is It Time to Change Our Reference Curve for Femur Length? Using the Z-Score to Select the Best Chart in a Chinese Population

    PubMed Central

    Yang, Huixia; Wei, Yumei; Su, Rina; Wang, Chen; Meng, Wenying; Wang, Yongqing; Shang, Lixin; Cai, Zhenyu; Ji, Liping; Wang, Yunfeng; Sun, Ying; Liu, Jiaxiu; Wei, Li; Sun, Yufeng; Zhang, Xueying; Luo, Tianxia; Chen, Haixia; Yu, Lijun

    2016-01-01

    Objective To use Z-scores to compare different charts of femur length (FL) applied to our population with the aim of identifying the most appropriate chart. Methods A retrospective study was conducted in Beijing. Fifteen hospitals in Beijing were chosen as clusters using a systemic cluster sampling method, in which 15,194 pregnant women delivered from June 20th to November 30th, 2013. The measurements of FL in the second and third trimester were recorded, as well as the last measurement obtained before delivery. Based on the inclusion and exclusion criteria, we identified FL measurements from 19996 ultrasounds from 7194 patients between 11 and 42 weeks gestation. The FL data were then transformed into Z-scores that were calculated using three series of reference equations obtained from three reports: Leung TN, Pang MW et al (2008); Chitty LS, Altman DG et al (1994); and Papageorghiou AT et al (2014). Each Z-score distribution was presented as the mean and standard deviation (SD). Skewness and kurtosis and were compared with the standard normal distribution using the Kolmogorov-Smirnov test. The histogram of their distributions was superimposed on the non-skewed standard normal curve (mean = 0, SD = 1) to provide a direct visual impression. Finally, the sensitivity and specificity of each reference chart for identifying fetuses <5th or >95th percentile (based on the observed distribution of Z-scores) were calculated. The Youden index was also listed. A scatter diagram with the 5th, 50th, and 95th percentile curves calculated from and superimposed on each reference chart was presented to provide a visual impression. Results The three Z-score distribution curves appeared to be normal, but none of them matched the expected standard normal distribution. In our study, the Papageorghiou reference curve provided the best results, with a sensitivity of 100% for identifying fetuses with measurements < 5th and > 95th percentile, and specificities of 99.9% and 81.5%, respectively. Conclusions It is important to choose an appropriate reference curve when defining what is normal. The Papageorghiou reference curve for FL seems to be the best fit for our population. Perhaps it is time to change our reference curve for femur length. PMID:27458922

  16. An adaptive-binning method for generating constant-uncertainty/constant-significance light curves with Fermi -LAT data

    DOE PAGES

    Lott, B.; Escande, L.; Larsson, S.; ...

    2012-07-19

    Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi-Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any sources. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in the interval are calculated via the standard LATmore » analysis during a second step. In the absence of major caveats associated with this method Monte-Carlo simulations have been established. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.« less

  17. Fracture toughness testing of Linde 1092 reactor vessel welds in the transition range using Charpy-sized specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavinich, W.A.; Yoon, K.K.; Hour, K.Y.

    1999-10-01

    The present reference toughness method for predicting the change in fracture toughness can provide over estimates of these values because of uncertainties in initial RT{sub NDT} and shift correlations. It would be preferable to directly measure fracture toughness. However, until recently, no standard method was available to characterize fracture toughness in the transition range. ASTM E08 has developed a draft standard that shows promise for providing lower bound transition range fracture toughness using the master curve approach. This method has been successfully implemented using 1T compact fracture specimens. Combustion Engineering reactor vessel surveillance programs do not have compact fracture specimens.more » Therefore, the CE Owners Group developed a program to validate the master curve method for Charpy-sized and reconstituted Charpy-sized specimens for future application on irradiated specimens. This method was validated for Linde 1092 welds using unirradiated Charpy-sized and reconstituted Charpy-sized specimens by comparison of results with those from compact fracture specimens.« less

  18. Quantitation of spatially-localized proteins in tissue samples using MALDI-MRM imaging.

    PubMed

    Clemis, Elizabeth J; Smith, Derek S; Camenzind, Alexander G; Danell, Ryan M; Parker, Carol E; Borchers, Christoph H

    2012-04-17

    MALDI imaging allows the creation of a "molecular image" of a tissue slice. This image is reconstructed from the ion abundances in spectra obtained while rastering the laser over the tissue. These images can then be correlated with tissue histology to detect potential biomarkers of, for example, aberrant cell types. MALDI, however, is known to have problems with ion suppression, making it difficult to correlate measured ion abundance with concentration. It would be advantageous to have a method which could provide more accurate protein concentration measurements, particularly for screening applications or for precise comparisons between samples. In this paper, we report the development of a novel MALDI imaging method for the localization and accurate quantitation of proteins in tissues. This method involves optimization of in situ tryptic digestion, followed by reproducible and uniform deposition of an isotopically labeled standard peptide from a target protein onto the tissue, using an aerosol-generating device. Data is acquired by MALDI multiple reaction monitoring (MRM) mass spectrometry (MS), and accurate peptide quantitation is determined from the ratio of MRM transitions for the endogenous unlabeled proteolytic peptides to the corresponding transitions from the applied isotopically labeled standard peptides. In a parallel experiment, the quantity of the labeled peptide applied to the tissue was determined using a standard curve generated from MALDI time-of-flight (TOF) MS data. This external calibration curve was then used to determine the quantity of endogenous peptide in a given area. All standard curves generate by this method had coefficients of determination greater than 0.97. These proof-of-concept experiments using MALDI MRM-based imaging show the feasibility for the precise and accurate quantitation of tissue protein concentrations over 2 orders of magnitude, while maintaining the spatial localization information for the proteins.

  19. Percentile curves for skinfold thickness for Canadian children and youth.

    PubMed

    Kuhle, Stefan; Ashley-Martin, Jillian; Maguire, Bryan; Hamilton, David C

    2016-01-01

    Background. Skinfold thickness (SFT) measurements are a reliable and feasible method for assessing body fat in children but their use and interpretation is hindered by the scarcity of reference values in representative populations of children. The objective of the present study was to develop age- and sex-specific percentile curves for five SFT measures (biceps, triceps, subscapular, suprailiac, medial calf) in a representative population of Canadian children and youth. Methods. We analyzed data from 3,938 children and adolescents between 6 and 19 years of age who participated in the Canadian Health Measures Survey cycles 1 (2007/2009) and 2 (2009/2011). Standardized procedures were used to measure SFT. Age- and sex-specific centiles for SFT were calculated using the GAMLSS method. Results. Percentile curves were materially different in absolute value and shape for boys and girls. Percentile girls in girls steadily increased with age whereas percentile curves in boys were characterized by a pubertal centered peak. Conclusions. The current study has presented for the first time percentile curves for five SFT measures in a representative sample of Canadian children and youth.

  20. Percentile curves for skinfold thickness for Canadian children and youth

    PubMed Central

    Ashley-Martin, Jillian; Maguire, Bryan; Hamilton, David C.

    2016-01-01

    Background. Skinfold thickness (SFT) measurements are a reliable and feasible method for assessing body fat in children but their use and interpretation is hindered by the scarcity of reference values in representative populations of children. The objective of the present study was to develop age- and sex-specific percentile curves for five SFT measures (biceps, triceps, subscapular, suprailiac, medial calf) in a representative population of Canadian children and youth. Methods. We analyzed data from 3,938 children and adolescents between 6 and 19 years of age who participated in the Canadian Health Measures Survey cycles 1 (2007/2009) and 2 (2009/2011). Standardized procedures were used to measure SFT. Age- and sex-specific centiles for SFT were calculated using the GAMLSS method. Results. Percentile curves were materially different in absolute value and shape for boys and girls. Percentile girls in girls steadily increased with age whereas percentile curves in boys were characterized by a pubertal centered peak. Conclusions. The current study has presented for the first time percentile curves for five SFT measures in a representative sample of Canadian children and youth. PMID:27547554

  1. FY17 Status Report on the Initial EPP Finite Element Analysis of Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Sham, T. -L.

    This report describes a modification to the elastic-perfectly plastic (EPP) strain limits design method to account for cyclic softening in Gr. 91 steel. The report demonstrates that the unmodified EPP strain limits method described in current ASME code case is not conservative for materials with substantial cyclic softening behavior like Gr. 91 steel. However, the EPP strain limits method can be modified to be conservative for softening materials by using softened isochronous stress-strain curves in place of the standard curves developed from unsoftened creep experiments. The report provides softened curves derived from inelastic material simulations and factors describing the transformationmore » of unsoftened curves to a softened state. Furthermore, the report outlines a method for deriving these factors directly from creep/fatigue tests. If the material softening saturates the proposed EPP strain limits method can be further simplified, providing a methodology based on temperature-dependent softening factors that could be implemented in an ASME code case allowing the use of the EPP strain limits method with Gr. 91. Finally, the report demonstrates the conservatism of the modified method when applied to inelastic simulation results and two bar experiments.« less

  2. Detection of climate signal in dendrochronological data analysis: a comparison of tree-ring standardization methods

    NASA Astrophysics Data System (ADS)

    Helama, S.; Lindholm, M.; Timonen, M.; Eronen, M.

    2004-12-01

    Tree-ring standardization methods were compared. Traditional methods along with the recently introduced approaches of regional curve standardization (RCS) and power-transformation (PT) were included. The difficulty in removing non-climatic variation (noise) while simultaneously preserving the low-frequency variability in the tree-ring series was emphasized. The potential risk of obtaining inflated index values was analysed by comparing methods to extract tree-ring indices from the standardization curve. The material for the tree-ring series, previously used in several palaeoclimate predictions, came from living and dead wood of high-latitude Scots pine in northernmost Europe. This material provided a useful example of a long composite tree-ring chronology with the typical strengths and weaknesses of such data, particularly in the context of standardization. PT stabilized the heteroscedastic variation in the original tree-ring series more efficiently than any other standardization practice expected to preserve the low-frequency variability. RCS showed great potential in preserving variability in tree-ring series at centennial time scales; however, this method requires a homogeneous sample for reliable signal estimation. It is not recommended to derive indices by subtraction without first stabilizing the variance in the case of series of forest-limit tree-ring data. Index calculation by division did not seem to produce inflated chronology values for the past one and a half centuries of the chronology (where mean sample cambial age is high). On the other hand, potential bias of high RCS chronology values was observed during the period of anomalously low mean sample cambial age. An alternative technique for chronology construction was proposed based on series age decomposition, where indices in the young vigorously behaving part of each series are extracted from the curve by division and in the mature part by subtraction. Because of their specific nature, the dendrochronological data here should not be generalized to all tree-ring records. The examples presented should be used as guidelines for detecting potential sources of bias and as illustrations of the usefulness of tree-ring records as palaeoclimate indicators.

  3. A new method for separating the climatic and biological trend components from tree ring series, with implications for paleoclimate reconstructions

    NASA Astrophysics Data System (ADS)

    Bouldin, J.

    2010-12-01

    In the reconstruction of past climates from tree rings multi-decadal to multi-centennial periods, one longstanding problem is the confounding of the natural biological growth trend of the tree with any existing long term trends in the climate. No existing analytical method is capable of resolving these two change components, so it remains unclear how accurate existing ring series standardizations are, and by implication, climate reconstructions based upon them. For example, dendrochronological at the ITRDB are typically standardized by detrending, at each site, each individual tree core, using a relatively stiff deterministic function such as a negative exponential curve or smoothing spline. Another approach, referred to as RCS (Regional Curve Standardization) attempts to solve some problems of the individual series detrending, by constructing a single growth curve from the aggregated cambial ages of the rings of the cores at a site (or collection of sites). This curve is presumed to represent the “ideal” or expected growth of the trees from which it is derived. Although an improvement in some respects, this method will be degraded in direct proportion to the lack of a mixture of tree sizes or ages throughout the span of the chronology. I present a new method of removing the biological curve from tree ring series, such that temporal changes better represent the environmental variation captured by the tree rings. The method institutes several new approaches, such as the correction for the estimated number of missed rings near the pith, and the use of tree size and ring area relationships instead of the traditional tree ages and ring widths. The most important innovation is a careful extraction of the existing information on the relationship between tree size (basal area) and ring area that exists within each single year of the chronology. This information is, by definition, not contaminated by temporal climatic changes, and so when removed, leaves the climatically caused, and random error components of the chronology. A sophisticated algorithm, based on pair-wise ring comparisons in which tree size is standardized both within and between years, forms the basis of the method. Evaluations of the method are underway with both simulated and actual (ITRDB) data, to evaluate the potentials and drawbacks of the method relative to existing methods. The ITRDB test data consists of a set of about 50 primarily high elevation sites from across western North America. Most of these sites show a pronounced 20th Century warming relative to earlier centuries, in accordance with current understanding, albeit at a non-global scale. A relative minority show cooling, occasionally strongly. Current and future work emphasizes evaluation of the method with varying, simulated data, and more thorough empirical evaluations of the method in situations where the type, and intensity, of the primary environmentally limiting factor varies (e.g temperature versus soil moisture limited sites).

  4. Cloned plasmid DNA fragments as calibrators for controlling GMOs: different real-time duplex quantitative PCR methods.

    PubMed

    Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc

    2004-03-01

    Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.

  5. The 124Sb activity standardization by gamma spectrometry for medical applications

    NASA Astrophysics Data System (ADS)

    de Almeida, M. C. M.; Iwahara, A.; Delgado, J. U.; Poledna, R.; da Silva, R. L.

    2010-07-01

    This work describes a metrological activity determination of 124Sb, which can be used as radiotracer, applying gamma spectrometry methods with hyper pure germanium detector and efficiency curves. This isotope with good activity and high radionuclidic purity is employed in the form of meglumine antimoniate (Glucantime) or sodium stibogluconate (Pentostam) to treat leishmaniasis. 124Sb is also applied in animal organ distribution studies to solve some questions in pharmacology. 124Sb decays by β-emission and it produces several photons (X and gamma rays) with energy varying from 27 to 2700 keV. Efficiency curves to measure point 124Sb solid sources were obtained from a 166mHo standard that is a multi-gamma reference source. These curves depend on radiation energy, sample geometry, photon attenuation, dead time and sample-detector position. Results for activity determination of 124Sb samples using efficiency curves and a high purity coaxial germanium detector were consistent in different counting geometries. Also uncertainties of about 2% ( k=2) were obtained.

  6. Spectrophotometric determination of irrigant extrusion using passive ultrasonic irrigation, EndoActivator, or syringe irrigation.

    PubMed

    Rodríguez-Figueroa, Carolina; McClanahan, Scott B; Bowles, Walter R

    2014-10-01

    Sodium hypochlorite (NaOCl) irrigation is critical to endodontic success, and several new methods have been developed to improve irrigation efficacy (eg, passive ultrasonic irrigation [PUI] and EndoActivator [EA]). Using a novel spectrophotometric method, this study evaluated NaOCl irrigant extrusion during canal irrigation. One hundred fourteen single-rooted extracted teeth were decoronated to leave 15 mm of the root length for each tooth. Cleaning and shaping of the teeth were completed using standardized hand and rotary instrumentation to an apical file size #40/0.04 taper. Roots were sealed (not apex), and 54 straight roots (n = 18/group) and 60 curved roots (>20° curvature, n = 20/group) were included. Teeth were irrigated with 5.25% NaOCl by 1 of 3 methods: passive irrigation with needle, PUI, or EA irrigation. Extrusion of NaOCl was evaluated using a pH indicator and a spectrophotometer. Standard curves were prepared with known amounts of irrigant to quantify amounts in unknown samples. Irrigant extrusion was minimal with all methods, with most teeth showing no NaOCl extrusion in straight or curved roots. Minor NaOCl extrusion (1-3 μL) in straight roots or curved roots occurred in 10%-11% of teeth in all 3 irrigant methods. Two teeth in both the syringe irrigation and the EA group extruded 3-10 μL of NaOCl. The spectrophotometric method used in this study proved to be very sensitive while providing quantification of the irrigant levels extruded. Using the PUI or EA tip to within 1 mm of the working length appears to be fairly safe, but apical anatomy can vary in teeth to allow extrusion of irrigant. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  7. Zirconium determination by cooling curve analysis during the pyroprocessing of used nuclear fuel

    NASA Astrophysics Data System (ADS)

    Westphal, B. R.; Price, J. C.; Bateman, K. J.; Marsden, K. C.

    2015-02-01

    An alternative method to sampling and chemical analyses has been developed to monitor the concentration of zirconium in real-time during the casting of uranium products from the pyroprocessing of used nuclear fuel. The method utilizes the solidification characteristics of the uranium products to determine zirconium levels based on standard cooling curve analyses and established binary phase diagram data. Numerous uranium products have been analyzed for their zirconium content and compared against measured zirconium data. From this data, the following equation was derived for the zirconium content of uranium products:

  8. Computerized measurement and analysis of scoliosis: a more accurate representation of the shape of the curve.

    PubMed

    Jeffries, B F; Tarlton, M; De Smet, A A; Dwyer, S J; Brower, A C

    1980-02-01

    A computer program was created to identify and accept spatial data regarding the location of the thoracic and lumbar vertebral bodies on scoliosis films. With this information, the spine can be mathematically reconstructed and a scoliotic angle calculated. There was a 0.968 positive correlation between the computer and manual methods of measuring scoliosis. The computer method was more reproducible with a standard deviation of only 1.3 degrees. Computerized measurement of scoliosis also provides better evaluation of the true shape of the curve.

  9. Absolute method of measuring magnetic susceptibility

    USGS Publications Warehouse

    Thorpe, A.; Senftle, F.E.

    1959-01-01

    An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus distance of the magnet from the sample, offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. ?? 1959 The American Institute of Physics.

  10. Comparative study of some robust statistical methods: weighted, parametric, and nonparametric linear regression of HPLC convoluted peak responses using internal standard method in drug bioavailability studies.

    PubMed

    Korany, Mohamed A; Maher, Hadir M; Galal, Shereen M; Ragab, Marwa A A

    2013-05-01

    This manuscript discusses the application and the comparison between three statistical regression methods for handling data: parametric, nonparametric, and weighted regression (WR). These data were obtained from different chemometric methods applied to the high-performance liquid chromatography response data using the internal standard method. This was performed on a model drug Acyclovir which was analyzed in human plasma with the use of ganciclovir as internal standard. In vivo study was also performed. Derivative treatment of chromatographic response ratio data was followed by convolution of the resulting derivative curves using 8-points sin x i polynomials (discrete Fourier functions). This work studies and also compares the application of WR method and Theil's method, a nonparametric regression (NPR) method with the least squares parametric regression (LSPR) method, which is considered the de facto standard method used for regression. When the assumption of homoscedasticity is not met for analytical data, a simple and effective way to counteract the great influence of the high concentrations on the fitted regression line is to use WR method. WR was found to be superior to the method of LSPR as the former assumes that the y-direction error in the calibration curve will increase as x increases. Theil's NPR method was also found to be superior to the method of LSPR as the former assumes that errors could occur in both x- and y-directions and that might not be normally distributed. Most of the results showed a significant improvement in the precision and accuracy on applying WR and NPR methods relative to LSPR.

  11. Fully automated spectrometric protocols for determination of antioxidant activity: advantages and disadvantages.

    PubMed

    Sochor, Jiri; Ryvolova, Marketa; Krystofova, Olga; Salas, Petr; Hubalek, Jaromir; Adam, Vojtech; Trnkova, Libuse; Havel, Ladislav; Beklova, Miroslava; Zehnalek, Josef; Provaznik, Ivo; Kizek, Rene

    2010-11-29

    The aim of this study was to describe behaviour, kinetics, time courses and limitations of the six different fully automated spectrometric methods--DPPH, TEAC, FRAP, DMPD, Free Radicals and Blue CrO5. Absorption curves were measured and absorbance maxima were found. All methods were calibrated using the standard compounds Trolox® and/or gallic acid. Calibration curves were determined (relative standard deviation was within the range from 1.5 to 2.5%). The obtained characteristics were compared and discussed. Moreover, the data obtained were applied to optimize and to automate all mentioned protocols. Automatic analyzer allowed us to analyse simultaneously larger set of samples, to decrease the measurement time, to eliminate the errors and to provide data of higher quality in comparison to manual analysis. The total time of analysis for one sample was decreased to 10 min for all six methods. In contrary, the total time of manual spectrometric determination was approximately 120 min. The obtained data provided good correlations between studied methods (R=0.97-0.99).

  12. Pulmonary vessel segmentation utilizing curved planar reformation and optimal path finding (CROP) in computed tomographic pulmonary angiography (CTPA) for CAD applications

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Kuriakose, Jean W.; Chughtai, Aamer; Wei, Jun; Hadjiiski, Lubomir M.; Guo, Yanhui; Patel, Smita; Kazerooni, Ella A.

    2012-03-01

    Vessel segmentation is a fundamental step in an automated pulmonary embolism (PE) detection system. The purpose of this study is to improve the segmentation scheme for pulmonary vessels affected by PE and other lung diseases. We have developed a multiscale hierarchical vessel enhancement and segmentation (MHES) method for pulmonary vessel tree extraction based on the analysis of eigenvalues of Hessian matrices. However, it is difficult to segment the pulmonary vessels accurately under suboptimal conditions, such as vessels occluded by PEs, surrounded by lymphoid tissues or lung diseases, and crossing with other vessels. In this study, we developed a new vessel refinement method utilizing curved planar reformation (CPR) technique combined with optimal path finding method (MHES-CROP). The MHES segmented vessels straightened in the CPR volume was refined using adaptive gray level thresholding where the local threshold was obtained from least-square estimation of a spline curve fitted to the gray levels of the vessel along the straightened volume. An optimal path finding method based on Dijkstra's algorithm was finally used to trace the correct path for the vessel of interest. Two and eight CTPA scans were randomly selected as training and test data sets, respectively. Forty volumes of interest (VOIs) containing "representative" vessels were manually segmented by a radiologist experienced in CTPA interpretation and used as reference standard. The results show that, for the 32 test VOIs, the average percentage volume error relative to the reference standard was improved from 32.9+/-10.2% using the MHES method to 9.9+/-7.9% using the MHES-CROP method. The accuracy of vessel segmentation was improved significantly (p<0.05). The intraclass correlation coefficient (ICC) of the segmented vessel volume between the automated segmentation and the reference standard was improved from 0.919 to 0.988. Quantitative comparison of the MHES method and the MHES-CROP method with the reference standard was also evaluated by the Bland-Altman plot. This preliminary study indicates that the MHES-CROP method has the potential to improve PE detection.

  13. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the the MCMC procedure of SAS are provided.

  14. High Performance Liquid Chromatography of Vitamin A: A Quantitative Determination.

    ERIC Educational Resources Information Center

    Bohman, Ove; And Others

    1982-01-01

    Experimental procedures are provided for the quantitative determination of Vitamin A (retinol) in food products by analytical liquid chromatography. Standard addition and calibration curve extraction methods are outlined. (SK)

  15. Construction of a New Growth References for China Based on Urban Chinese Children: Comparison with the WHO Growth Standards

    PubMed Central

    Zong, Xin-Nan; Li, Hui

    2013-01-01

    Introduction Growth references for Chinese children should be updated due to the positive secular growth trends and the progress of the smoothing techniques. Human growth differs among the various ethnic groups, so comparison of the China references with the WHO standards helps to understand such differences. Methods The China references, including weight, length/height, head circumference, weight-for-length/height and body mass index (BMI) aged 0–18 years, were constructed based on 69,760 urban infants and preschool children under 7 years and 24,542 urban school children aged 6–20 years derived from two cross-sectional national surveys. The Cole’s LMS method is employed for smoothing the growth curves. Results The merged data sets resulted in a smooth transition at age 6–7 years and continuity of curves from 0 to 18 years. Varying differences were found on the empirical standard deviation (SD) curves in each indicator at nearly all ages between China and WHO. The most noticeable differences occurred in genders, final height and boundary centiles curves. Chinese boys’ weight is strikingly heavier than that of the WHO at age 6–10 years. The height is taller than that of the WHO for boys below 15 years and for girls below 13, but is significantly lower when boys over 15 years and girls over 13. BMI is generally higher than that of the WHO for boys at age 6–16 years but appreciably lower for girls at 3–18 years. Conclusions The differences between China and WHO are mainly caused by the reference populations of different ethnic backgrounds. For practitioners, the choices of the standards/references depend on the population to be assessed and the purpose of the study. The new China references could be applied to facilitate the standardization assessment of growth and nutrition for Chinese children and adolescents in clinical pediatric and public health. PMID:23527219

  16. Customized versus population-based growth curves: prediction of low body fat percent at term corrected gestational age following preterm birth.

    PubMed

    Law, Tameeka L; Katikaneni, Lakshmi D; Taylor, Sarah N; Korte, Jeffrey E; Ebeling, Myla D; Wagner, Carol L; Newman, Roger B

    2012-07-01

    Compare customized versus population-based growth curves for identification of small-for-gestational-age (SGA) and body fat percent (BF%) among preterm infants. Prospective cohort study of 204 preterm infants classified as SGA or appropriate-for-gestational-age (AGA) by population-based and customized growth curves. BF% was determined by air-displacement plethysmography. Differences between groups were compared using bivariable and multivariable linear and logistic regression analyses. Customized curves reclassified 30% of the preterm infants as SGA. SGA infants identified by customized method only had significantly lower BF% (13.8 ± 6.0) than the AGA (16.2 ± 6.3, p = 0.02) infants and similar to the SGA infants classified by both methods (14.6 ± 6.7, p = 0.51). Customized growth curves were a significant predictor of BF% (p = 0.02), whereas population-based growth curves were not a significant independent predictor of BF% (p = 0.50) at term corrected gestational age. Customized growth potential improves the differentiation of SGA infants and low BF% compared with a standard population-based growth curve among a cohort of preterm infants.

  17. Gene Scanning of an Internalin B Gene Fragment Using High-Resolution Melting Curve Analysis as a Tool for Rapid Typing of Listeria monocytogenes

    PubMed Central

    Pietzka, Ariane T.; Stöger, Anna; Huhulescu, Steliana; Allerberger, Franz; Ruppitsch, Werner

    2011-01-01

    The ability to accurately track Listeria monocytogenes strains involved in outbreaks is essential for control and prevention of listeriosis. Because current typing techniques are time-consuming, cost-intensive, technically demanding, and difficult to standardize, we developed a rapid and cost-effective method for typing of L. monocytogenes. In all, 172 clinical L. monocytogenes isolates and 20 isolates from culture collections were typed by high-resolution melting (HRM) curve analysis of a specific locus of the internalin B gene (inlB). All obtained HRM curve profiles were verified by sequence analysis. The 192 tested L. monocytogenes isolates yielded 15 specific HRM curve profiles. Sequence analysis revealed that these 15 HRM curve profiles correspond to 18 distinct inlB sequence types. The HRM curve profiles obtained correlated with the five phylogenetic groups I.1, I.2, II.1, II.2, and III. Thus, HRM curve analysis constitutes an inexpensive assay and represents an improvement in typing relative to classical serotyping or multiplex PCR typing protocols. This method provides a rapid and powerful screening tool for simultaneous preliminary typing of up to 384 samples in approximately 2 hours. PMID:21227395

  18. Determination of arsenic and cadmium in crude oil by direct sampling graphite furnace atomic absorption spectrometry

    NASA Astrophysics Data System (ADS)

    de Jesus, Alexandre; Zmozinski, Ariane Vanessa; Damin, Isabel Cristina Ferreira; Silva, Márcia Messias; Vale, Maria Goreti Rodrigues

    2012-05-01

    In this work, a direct sampling graphite furnace atomic absorption spectrometry method has been developed for the determination of arsenic and cadmium in crude oil samples. The samples were weighed directly on the solid sampling platforms and introduced into the graphite tube for analysis. The chemical modifier used for both analytes was a mixture of 0.1% Pd + 0.06% Mg + 0.06% Triton X-100. Pyrolysis and atomization curves were obtained for both analytes using standards and samples. Calibration curves with aqueous standards could be used for both analytes. The limits of detection obtained were 5.1 μg kg- 1 for arsenic and 0.2 μg kg- 1 for cadmium, calculated for the maximum amount of sample that can be analyzed (8 mg and 10 mg) for arsenic and cadmium, respectively. Relative standard deviations lower than 20% were obtained. For validation purposes, a calibration curve was constructed with the SRM 1634c and aqueous standards for arsenic and the results obtained for several crude oil samples were in agreement according to paired t-test. The result obtained for the determination of arsenic in the SRM against aqueous standards was also in agreement with the certificate value. As there is no crude oil or similar reference material available with a certified value for cadmium, a digestion in an open vessel under reflux using a "cold finger" was adopted for validation purposes. The use of paired t-test showed that the results obtained by direct sampling and digestion were in agreement at a 95% confidence level. Recovery tests were carried out with inorganic and organic standards and the results were between 88% and 109%. The proposed method is simple, fast and reliable, being appropriated for routine analysis.

  19. A new IRT-based standard setting method: application to eCat-listening.

    PubMed

    García, Pablo Eduardo; Abad, Francisco José; Olea, Julio; Aguado, David

    2013-01-01

    Criterion-referenced interpretations of tests are highly necessary, which usually involves the difficult task of establishing cut scores. Contrasting with other Item Response Theory (IRT)-based standard setting methods, a non-judgmental approach is proposed in this study, in which Item Characteristic Curve (ICC) transformations lead to the final cut scores. eCat-Listening, a computerized adaptive test for the evaluation of English Listening, was administered to 1,576 participants, and the proposed standard setting method was applied to classify them into the performance standards of the Common European Framework of Reference for Languages (CEFR). The results showed a classification closely related to relevant external measures of the English language domain, according to the CEFR. It is concluded that the proposed method is a practical and valid standard setting alternative for IRT-based tests interpretations.

  20. Estimated damage from the Cascadia Subduction Zone tsunami: A model comparisons using fragility curves

    NASA Astrophysics Data System (ADS)

    Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.

    2012-12-01

    Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Canon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7 were used to estimate the probability of damage to structures located within the inundations zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced-concrete) and age, which was used as a performance measure when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods, and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method discussed. On-going work includes coupling the results of building damage and vulnerability to an economic input output model. This model assesses trade between business sectors located inside and outside the induction zone, and is used to measure the impact to the regional economy. Results highlight critical businesses sectors and infrastructure critical to the economic recovery effort, which could be retrofitted or relocated to survive the event. The results of this study improve community understanding of the tsunami hazard for civil buildings.

  1. Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, R.

    1973-01-01

    Using the data of RIA (radioimmunoassay), statistical procedures for dealing with two problems of the linearization of dose response curve and calculation of relative potency were described. There were three methods for linearization of dose response curve of RIA. In each method, the following parameters were shown on the horizontal and vertical axis: dose x, (B/T)/sup -1/; c/x + c, B/T (C: dose which makes B/T 50%); log x, logit B/T. Among them, the last method seems to be most practical. The statistical procedures for bioassay were employed for calculating the relative potency of unknown samples compared to the standardmore » samples from dose response curves of standand and unknown samples using regression coefficient. It is desirable that relative potency is calculated by plotting more than 5 points in the standard curve and plotting more than 2 points in unknow samples. For examining the statistical limit of precision of measuremert, LH activity of gonadotropin in urine was measured and relative potency, precision coefficient and the upper and lower limits of relative potency at 95% confidence limit were calculated. On the other hand, bioassay (by the ovarian ascorbic acid reduction method and anteriol lobe of prostate weighing method) was done in the same samples, and the precision was compared with that of RIA. In these examinations, the upper and lower limits of the relative potency at 95% confidence limit were near each other, while in bioassay, a considerable difference was observed between the upper and lower limits. The necessity of standardization and systematization of the statistical procedures for increasing the precision of RIA was pointed out. (JA)« less

  2. Solid-state curved focal plane arrays

    NASA Technical Reports Server (NTRS)

    Jones, Todd (Inventor); Nikzad, Shouleh (Inventor); Hoenk, Michael (Inventor)

    2010-01-01

    The present invention relates to curved focal plane arrays. More specifically, the present invention relates to a system and method for making solid-state curved focal plane arrays from standard and high-purity devices that may be matched to a given optical system. There are two ways to make a curved focal plane arrays starting with the fully fabricated device. One way, is to thin the device and conform it to a curvature. A second way, is to back-illuminate a thick device without making a thinned membrane. The thick device is a special class of devices; for example devices fabricated with high purity silicon. One surface of the device (the non VLSI fabricated surface, also referred to as the back surface) can be polished to form a curved surface.

  3. Comparison of salivary collection and processing methods for quantitative HHV-8 detection.

    PubMed

    Speicher, D J; Johnson, N W

    2014-10-01

    Saliva is a proved diagnostic fluid for the qualitative detection of infectious agents, but the accuracy of viral load determinations is unknown. Stabilising fluids impede nucleic acid degradation, compared with collection onto ice and then freezing, and we have shown that the DNA Genotek P-021 prototype kit (P-021) can produce high-quality DNA after 14 months of storage at room temperature. Here we evaluate the quantitative capability of 10 collection/processing methods. Unstimulated whole mouth fluid was spiked with a mixture of HHV-8 cloned constructs, 10-fold serial dilutions were produced, and samples were extracted and then examined with quantitative PCR (qPCR). Calibration curves were compared by linear regression and qPCR dynamics. All methods extracted with commercial spin columns produced linear calibration curves with large dynamic range and gave accurate viral loads. Ethanol precipitation of the P-021 does not produce a linear standard curve, and virus is lost in the cell pellet. DNA extractions from the P-021 using commercial spin columns produced linear standard curves with wide dynamic range and excellent limit of detection. When extracted with spin columns, the P-021 enables accurate viral loads down to 23 copies μl(-1) DNA. The quantitative and long-term storage capability of this system makes it ideal for study of salivary DNA viruses in resource-poor settings. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. A Three-Dimensional Receiver Operator Characteristic Surface Diagnostic Metric

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.

    2011-01-01

    Receiver Operator Characteristic (ROC) curves are commonly applied as metrics for quantifying the performance of binary fault detection systems. An ROC curve provides a visual representation of a detection system s True Positive Rate versus False Positive Rate sensitivity as the detection threshold is varied. The area under the curve provides a measure of fault detection performance independent of the applied detection threshold. While the standard ROC curve is well suited for quantifying binary fault detection performance, it is not suitable for quantifying the classification performance of multi-fault classification problems. Furthermore, it does not provide a measure of diagnostic latency. To address these shortcomings, a novel three-dimensional receiver operator characteristic (3D ROC) surface metric has been developed. This is done by generating and applying two separate curves: the standard ROC curve reflecting fault detection performance, and a second curve reflecting fault classification performance. A third dimension, diagnostic latency, is added giving rise to 3D ROC surfaces. Applying numerical integration techniques, the volumes under and between the surfaces are calculated to produce metrics of the diagnostic system s detection and classification performance. This paper will describe the 3D ROC surface metric in detail, and present an example of its application for quantifying the performance of aircraft engine gas path diagnostic methods. Metric limitations and potential enhancements are also discussed

  5. Do placebo based validation standards mimic real batch products behaviour? Case studies.

    PubMed

    Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E

    2011-06-01

    Analytical methods validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns that can be made with this approach is that it may lack an important source of variability that come from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batch samples (tablets and syrups). The results showed that, depending on the type of response function used as calibration curve, there were various degrees of differences in the results accuracy obtained with the two types of samples. Nonetheless the use of spiked placebo validation standards was showed to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit for purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Spectral characterization of near-infrared acousto-optic tunable filter (AOTF) hyperspectral imaging systems using standard calibration materials.

    PubMed

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2011-04-01

    In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems. © 2011 Society for Applied Spectroscopy

  7. A photometric method for the estimation of the oil yield of oil shale

    USGS Publications Warehouse

    Cuttitta, Frank

    1951-01-01

    A method is presented for the distillation and photometric estimation of the oil yield of oil-bearing shales. The oil shale is distilled in a closed test tube and the oil extracted with toluene. The optical density of the toluene extract is used in the estimation of oil content and is converted to percentage of oil by reference to a standard curve. This curve is obtained by relating the oil yields determined by the Fischer assay method to the optical density of the toluene extract of the oil evolved by the new procedure. The new method gives results similar to those obtained by the Fischer assay method in a much shorter time. The applicability of the new method to oil-bearing shale and phosphatic shale has been tested.

  8. Establishment of analysis method for methane detection by gas chromatography

    NASA Astrophysics Data System (ADS)

    Liu, Xinyuan; Yang, Jie; Ye, Tianyi; Han, Zeyu

    2018-02-01

    The study focused on the establishment of analysis method for methane determination by gas chromatography. Methane was detected by hydrogen flame ionization detector, and the quantitative relationship was determined by working curve of y=2041.2x+2187 with correlation coefficient of 0.9979. The relative standard deviation of 2.60-6.33% and the recovery rate of 96.36%∼105.89% were obtained during the parallel determination of standard gas. This method was not quite suitable for biogas content analysis because methane content in biogas would be over the measurement range in this method.

  9. Marine Structural Steel Toughness Data Bank (Abridged Edition)

    DTIC Science & Technology

    1990-08-31

    for Table Column Headings: Break? Did specimen fracture completely? CODIc Critical COD CODi Initial COD CVN Energy Charpy V Energy Crack lgth Crack...Standard Y ear ........ .................. * Onen Test Temp KQ CODi CODIc Curve 3l degF ksi*in**0.5 mils mils in-lb/in2 L-T -166...Standard Method .... BS5762 -Standard Year . Test Temp CODIc degC mm -30 0.57 -30 0.68 -30 11.26 (continued) -not reported

  10. Ion-selective electrodes in potentiometric titrations; a new method for processing and evaluating titration data.

    PubMed

    Granholm, Kim; Sokalski, Tomasz; Lewenstam, Andrzej; Ivaska, Ari

    2015-08-12

    A new method to convert the potential of an ion-selective electrode to concentration or activity in potentiometric titration is proposed. The advantage of this method is that the electrode standard potential and the slope of the calibration curve do not have to be known. Instead two activities on the titration curve have to be estimated e.g. the starting activity before the titration begins and the activity at the end of the titration in the presence of large excess of titrant. This new method is beneficial when the analyte is in a complexed matrix or in a harsh environment which affects the properties of the electrode and the traditional calibration procedure with standard solutions cannot be used. The new method was implemented both in a method of linearization based on the Grans's plot and in determination of the stability constant of a complex and the concentration of the complexing ligand in the sample. The new method gave accurate results when using titrations data from experiments with samples of known composition and with real industrial harsh black liquor sample. A complexometric titration model was also developed. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.

    PubMed

    Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro

    2010-01-01

    The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. The quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. Reproducibility standard deviation (SR) and repeatability standard deviation (Sr) of the genetically modified organism (GMO) amount (%) was more or less independent of GM crops (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and evaluation procedure steps. Some studies evaluated whole steps consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, SR and Sr for GMO amount (%) are functions only of concentration similar to the Horwitz curve. We proposed S(R) = 0.1971C 0.8685 and S(r) = 0.1478C 0.8424, where C is the GMO amount (%). We also proposed a method performance index in GMO quantitative methods that is analogous to the Horwitz Ratio.

  12. Devising a method towards development of early warning tool for detection of malaria outbreak.

    PubMed

    Verma, Preeti; Sarkar, Soma; Singh, Poonam; Dhiman, Ramesh C

    2017-11-01

    Uncertainty often arises in differentiating seasonal variation from outbreaks of malaria. The present study was aimed to generalize the theoretical structure of sine curve for detecting an outbreak so that a tool for early warning of malaria may be developed. A 'case/mean-ratio scale' system was devised for labelling the outbreak in respect of two diverse districts of Assam and Rajasthan. A curve-based method of analysis was developed for determining outbreak and using the properties of sine curve. It could be used as an early warning tool for Plasmodium falciparum malaria outbreaks. In the present method of analysis, the critical C max (peak value of sine curve) value of seasonally adjusted curve for P. falciparum malaria outbreak was 2.3 for Karbi Anglong and 2.2 for Jaisalmer districts. On case/mean-ratio scale, the C max value of malaria curve between C max and 3.5, the outbreak could be labelled as minor while >3.5 may be labelled as major. In epidemic years, with mean of case/mean ratio of ≥1.00 and root mean square (RMS) ≥1.504 of case/mean ratio, outbreaks can be predicted 1-2 months in advance. The present study showed that in P. falciparum cases in Karbi Anglong (Assam) and Jaisalmer (Rajasthan) districts, the rise in C max value of curve was always followed by rise in average/RMS or both and hence could be used as an early warning tool. The present method provides better detection of outbreaks than the conventional method of mean plus two standard deviation (mean+2 SD). The identified tools are simple and may be adopted for preparedness of malaria outbreaks.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, D.; Fertitta, E.; Paulus, B.

    Due to the importance of both static and dynamical correlation in the bond formation, low-dimensional beryllium systems constitute interesting case studies to test correlation methods. Aiming to describe the whole dissociation curve of extended Be systems we chose to apply the method of increments (MoI) in its multireference (MR) formalism. To gain insight into the main characteristics of the wave function, we started by focusing on the description of small Be chains using standard quantum chemical methods. In a next step we applied the MoI to larger beryllium systems, starting from the Be{sub 6} ring. The complete active space formalismmore » was employed and the results were used as reference for local MR calculations of the whole dissociation curve. Although this is a well-established approach for systems with limited multireference character, its application regarding the description of whole dissociation curves requires further testing. Subsequent to the discussion of the role of the basis set, the method was finally applied to larger rings and extrapolated to an infinite chain.« less

  14. A comparative physical evaluation of four X-ray films.

    PubMed

    Egyed, M; Shearer, D R

    1981-09-01

    In this study, four general purpose radiographic films (Agfa Gevaert Curix RP-1, duPont Cronex 4, Fuji RX, and Kodak XRP-1) were compared using three independent techniques. By examining the characteristic curves for the four films, film speed and contrast were compared over the diagnostically useful density range. These curves were generated using three methods: (1) irradiation of a standard film cassette lined with high-speed screens, covered by a twelve-step aluminum wedge; (2) direct exposure of film strips to an electro-luminescent sensitometer; and (3) direct irradiation of a standard film cassette lined with high-speed screens. The latter technique provided quantitative values for film speed and relative contrast. All three techniques provided virtually properly identical results and indicate that under properly controlled conditions simplified methods of film testing can give results equivalent to those obtained by more sophisticated techniques.

  15. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data pointsmore » can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.« less

  16. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data pointsmore » can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.« less

  17. New well testing applications of the pressure derivative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onur, M.

    1989-01-01

    This work presents new derivative type curves based on a new derivative group which is equal to the dimensionless pressure group divided by its logarithmic derivative with respect to dimensionless time group. One major advantage of these type curves is that the type-curve match of field pressure/pressure-derivative data with the new derivative type curves is accomplished by moving the field data plot in only the horizontal direction. This type-curve match fixes time match-point values. The pressure change versus time data is then matched with the dimensionless pressure solution to determine match-point values. Well/reservoir parameters can then be estimated in themore » standard way. This two step type-curve matching procedure increases the likelihood of obtaining a unique match. Moreover, the unique correspondence between the ordinate of the field data plot and the new derivative type curves should prove useful in determining whether given field data actually represents the well/reservoir model assumed by a selected type curve solution. It is also shown that the basic idea used in construction the type curves can be used to ensure that proper semilog straight lines are chosen when analyzing pressure data by semilog methods. Analysis of both drawdown and buildup data is considered and actual field cases are analyzed using the new derivative type curves and the semilog identification method. This work also presents new methods based on the pressure derivative to analyze buildup data obtained at a well (fracture or unfractured) produced to pseudosteady-state prior to shut-in. By using a method of analysis based on the pressure derivative, it is shown that a well's drainage area at the instant of shut-in and the flow capacity can be computed directly from buildup data even in cases where conventional semilog straight lines are not well-defined.« less

  18. Influence analysis in quantitative trait loci detection.

    PubMed

    Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko

    2014-07-01

    This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods-the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of the individuals. These methods are shown useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Analysis of fast and slow responses in AC conductance curves for p-type SiC MOS capacitors

    NASA Astrophysics Data System (ADS)

    Karamoto, Yuki; Zhang, Xufang; Okamoto, Dai; Sometani, Mitsuru; Hatakeyama, Tetsuo; Harada, Shinsuke; Iwamuro, Noriyuki; Yano, Hiroshi

    2018-06-01

    We used a conductance method to investigate the interface characteristics of a SiO2/p-type 4H-SiC MOS structure fabricated by dry oxidation. It was found that the measured equivalent parallel conductance–frequency (G p/ω–f) curves were not symmetric, showing that there existed both high- and low-frequency signals. We attributed high-frequency responses to fast interface states and low-frequency responses to near-interface oxide traps. To analyze the fast interface states, Nicollian’s standard conductance method was applied in the high-frequency range. By extracting the high-frequency responses from the measured G p/ω–f curves, the characteristics of the low-frequency responses were reproduced by Cooper’s model, which considers the effect of near-interface traps on the G p/ω–f curves. The corresponding density distribution of slow traps as a function of energy level was estimated.

  20. Analytical evaluation of current starch methods used in the international sugar industry: Part I.

    PubMed

    Cole, Marsha; Eggleston, Gillian; Triplett, Alexa

    2017-08-01

    Several analytical starch methods exist in the international sugar industry to mitigate starch-related processing challenges and assess the quality of traded end-products. These methods use iodometric chemistry, mostly potato starch standards, and utilize similar solubilization strategies, but had not been comprehensively compared. In this study, industrial starch methods were compared to the USDA Starch Research method using simulated raw sugars. Type of starch standard, solubilization approach, iodometric reagents, and wavelength detection affected total starch determination in simulated raw sugars. Simulated sugars containing potato starch were more accurately detected by the industrial methods, whereas those containing corn starch, a better model for sugarcane starch, were only accurately measured by the USDA Starch Research method. Use of a potato starch standard curve over-estimated starch concentrations. Among the variables studied, starch standard, solubilization approach, and wavelength detection affected the sensitivity, accuracy/precision, and limited the detection/quantification of the current industry starch methods the most. Published by Elsevier Ltd.

  1. Association between routine and standardized blood pressure measurements and left ventricular hypertrophy among patients on hemodialysis.

    PubMed

    Khangura, Jaspreet; Culleton, Bruce F; Manns, Braden J; Zhang, Jianguo; Barnieh, Lianne; Walsh, Michael; Klarenbach, Scott W; Tonelli, Marcello; Sarna, Magdalena; Hemmelgarn, Brenda R

    2010-06-24

    Left ventricular (LV) hypertrophy is common among patients on hemodialysis. While a relationship between blood pressure (BP) and LV hypertrophy has been established, it is unclear which BP measurement method is the strongest correlate of LV hypertrophy. We sought to determine agreement between various blood pressure measurement methods, as well as identify which method was the strongest correlate of LV hypertrophy among patients on hemodialysis. This was a post-hoc analysis of data from a randomized controlled trial. We evaluated the agreement between seven BP measurement methods: standardized measurement at baseline; single pre- and post-dialysis, as well as mean intra-dialytic measurement at baseline; and cumulative pre-, intra- and post-dialysis readings (an average of 12 monthly readings based on a single day per month). Agreement was assessed using Lin's concordance correlation coefficient (CCC) and the Bland Altman method. Association between BP measurement method and LV hypertrophy on baseline cardiac MRI was determined using receiver operating characteristic curves and area under the curve (AUC). Agreement between BP measurement methods in the 39 patients on hemodialysis varied considerably, from a CCC of 0.35 to 0.94, with overlapping 95% confidence intervals. Pre-dialysis measurements were the weakest predictors of LV hypertrophy while standardized, post- and inter-dialytic measurements had similar and strong (AUC 0.79 to 0.80) predictive power for LV hypertrophy. A single standardized BP has strong predictive power for LV hypertrophy and performs just as well as more resource intensive cumulative measurements, whereas pre-dialysis blood pressure measurements have the weakest predictive power for LV hypertrophy. Current guidelines, which recommend using pre-dialysis measurements, should be revisited to confirm these results.

  2. Fluorescence spectroscopy for diagnosis of squamous intraepithelial lesions of the cervix.

    PubMed

    Mitchell, M F; Cantor, S B; Ramanujam, N; Tortolero-Luna, G; Richards-Kortum, R

    1999-03-01

    To calculate receiver operating characteristic (ROC) curves for fluorescence spectroscopy in order to measure its performance in the diagnosis of squamous intraepithelial lesions (SILs) and to compare these curves with those for other diagnostic methods: colposcopy, cervicography, speculoscopy, Papanicolaou smear screening, and human papillomavirus (HPV) testing. Data from our previous clinical study were used to calculate ROC curves for fluorescence spectroscopy. Curves for other techniques were calculated from other investigators' reports. To identify these, a MEDLINE search for articles published from 1966 to 1996 was carried out, using the search terms "colposcopy," "cervicoscopy," "cervicography," "speculoscopy," "Papanicolaou smear," "HPV testing," "fluorescence spectroscopy," and "polar probe" in conjunction with the terms "diagnosis," "positive predictive value," "negative predictive value," and "receiver operating characteristic curve." We found 270 articles, from which articles were selected if they reported results of studies involving high-disease-prevalence populations, reported findings of studies in which colposcopically directed biopsy was the criterion standard, and included sufficient data for recalculation of the reported sensitivities and specificities. We calculated ROC curves for fluorescence spectroscopy using Bayesian and neural net algorithms. A meta-analytic approach was used to calculate ROC curves for the other techniques. Areas under the curves were calculated. Fluorescence spectroscopy using the neural net algorithm had the highest area under the ROC curve, followed by fluorescence spectroscopy using the Bayesian algorithm, followed by colposcopy, the standard diagnostic technique. Cervicography, Papanicolaou smear screening, and HPV testing performed comparably with each other but not as well as fluorescence spectroscopy and colposcopy. Fluorescence spectroscopy performs better than colposcopy and other techniques in the diagnosis of SILs. Because it also permits real-time diagnosis and has the potential of being used by inexperienced health care personnel, this technology holds bright promise.

  3. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (ie directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error-rule was more likely to choose the correct model than were the other tree-selection rules 1) with weaker relationships and equally important explanatory variables; and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.

  4. A synthetic method of solar spectrum based on LED

    NASA Astrophysics Data System (ADS)

    Wang, Ji-qiang; Su, Shi; Zhang, Guo-yu; Zhang, Jian

    2017-10-01

    A synthetic method of solar spectrum which based on the spectral characteristics of the solar spectrum and LED, and the principle of arbitrary spectral synthesis was studied by using 14 kinds of LED with different central wavelengths.The LED and solar spectrum data were selected by Origin Software firstly, then calculated the total number of LED for each center band by the transformation relation between brightness and illumination and Least Squares Curve Fit in Matlab.Finally, the spectrum curve of AM1.5 standard solar spectrum was obtained. The results met the technical indexes of the solar spectrum matching with ±20% and the solar constant with >0.5.

  5. Applied Polarography for Analysis of Ordnance Materials. Part 1. Determination and Monitoring for 1,2-Propyleneglycoldinitrate in Effluent Water by Single-Sweep Polarography

    DTIC Science & Technology

    1976-06-01

    ie irom Ott - Ae wastewatt:r. Data obtained by the NWC-developed method of analyds and field equipment ccupare favorably with data obtained by a vapor...5, curve &. A microaliquot of standard PIM solution is then added to the cell solution and the procedure is repeated. This is known as the standard

  6. Estimation of Mechanical Properties of Stainless Steel AISI 410 by Small-Punch Testing (Erickson Test)

    NASA Astrophysics Data System (ADS)

    Hassan, A.-P.

    2014-07-01

    The small-punch testing (SPT) method is used for determining the mechanical properties of AISI 410 (0.14% C, 12% Cr) stainless steel. A thin disc-shaped specimen with known mechanical properties is pressed with a small ball until the appearance of cracks in the former. The load - displacement curves are recorded. Computation of the yield strength and fracture energy by the curve obtained and by known formulas shows good convergence with the characteristics obtained by standard testing.

  7. Droplet Digital PCR for Minimal Residual Disease Detection in Mature Lymphoproliferative Disorders.

    PubMed

    Drandi, Daniela; Ferrero, Simone; Ladetto, Marco

    2018-01-01

    Minimal residual disease (MRD) detection has a powerful prognostic relevance for response evaluation and prediction of relapse in hematological malignancies. Real-time quantitative PCR (qPCR) has become the settled and standardized method for MRD assessment in lymphoid disorders. However, qPCR is a relative quantification approach, since it requires a reference standard curve. Droplet digital TM PCR (ddPCR TM ) allows a reliable absolute tumor burden quantification withdrawing the need for preparing, for each experiment, a tumor-specific standard curve. We have recently shown that ddPCR has a good concordance with qPCR and could be a feasible and reliable tool for MRD monitoring in mature lymphoproliferative disorders. In this chapter we describe the experimental workflow, from the detection of the clonal molecular marker to the MRD monitoring by ddPCR, in patients affected by multiple myeloma, mantle cell lymphoma and follicular lymphoma. However, standardization programs among different laboratories are needed in order to ensure the reliability and reproducibility of ddPCR-based MRD results.

  8. Mitigation methods for temporary concrete traffic barrier effects on flood water flows.

    DOT National Transportation Integrated Search

    2011-07-01

    A combined experimental and analytical approach was put together to evaluate the hydraulic performance and : stability of TxDOT standard and modified temporary concrete traffic barriers (TCTBs) in extreme flood. : Rating curves are developed for diff...

  9. A Method for Evaluating Insecticide Efficacy against Bed Bug, Cimex lectularius, Eggs and First Instars.

    PubMed

    Campbell, Brittany E; Miller, Dini M

    2017-03-15

    Standard toxicity evaluations of insecticides against insect pests are primarily conducted on adult insects. Evaluations are based on a dose-response or concentration-response curve, where mortality increases as the dose or concentration of an insecticide is increased. Standard lethal concentration (LC50) and lethal dose (LD50) tests that result in 50% mortality of a test population can be challenging for evaluating toxicity of insecticides against non-adult insect life stages, such as eggs and early instar or nymphal stages. However, this information is essential for understanding insecticide efficacy in all bed bug life stages, which affects control and treatment efforts. This protocol uses a standard dipping bioassay modified for bed bug eggs and a contact insecticidal assay for treating nymphal first instars. These assays produce a concentration-response curve to further quantify LC50 values for insecticide evaluations.

  10. Quantification Bias Caused by Plasmid DNA Conformation in Quantitative Real-Time PCR Assay

    PubMed Central

    Lin, Chih-Hui; Chen, Yu-Chieh; Pan, Tzu-Ming

    2011-01-01

    Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification. PMID:22194997

  11. Activity measurements of 55Fe by two different methods

    NASA Astrophysics Data System (ADS)

    da Cruz, Paulo A. L.; Iwahara, Akira; da Silva, Carlos J.; Poledna, Roberto; Loureiro, Jamir S.; da Silva, Monica A. L.; Ruzzarin, Anelise

    2018-03-01

    A calibrated germanium detector and CIEMAT/NIST liquid scintillation method were used in the standardization of solution of 55Fe coming from a key-comparison BIPM. Commercial cocktails were used in source preparation for activity measurements in CIEMAT/NIST method. Measurements were performed in Liquid Scintillation Counter. In the germanium counting method standard point sources were prepared for obtaining atomic number versus efficiency curve of the detector in order to obtain the efficiency of 5.9 keV KX-ray of 55Fe by interpolation. The activity concentrations obtained were 508.17 ± 3.56 and 509.95 ± 16.20 kBq/g for CIEMAT/NIST and germanium methods, respectively.

  12. Determination of volume-time curves for the right ventricle and its outflow tract for functional analyses.

    PubMed

    Gabbert, Dominik D; Entenmann, Andreas; Jerosch-Herold, Michael; Frettlöh, Felicitas; Hart, Christopher; Voges, Inga; Pham, Minh; Andrade, Ana; Pardun, Eileen; Wegner, P; Hansen, Traudel; Kramer, Hans-Heiner; Rickers, Carsten

    2013-12-01

    The determination of right ventricular volumes and function is of increasing interest for the postoperative care of patients with congenital heart defects. The presentation of volumetry data in terms of volume-time curves allows a comprehensive functional assessment. By using manual contour tracing, the generation of volume-time curves is exceedingly time-consuming. This study describes a fast and precise method for determining volume-time curves for the right ventricle and for the right ventricular outflow tract. The method applies contour detection and includes a feature for identifying the right ventricular outflow tract volume. The segregation of the outflow tract is performed by four-dimensional curved smooth boundary surfaces defined by prespecified anatomical landmarks. The comparison with manual contour tracing demonstrates that the method is accurate and improves the precision of the measurement. Compared to manual contour tracing the bias is <0.1% ± 4.1% (right ventricle) and -2.6% ± 20.0% (right ventricular outflow tract). The standard deviations of inter- and intraobserver variabilities for determining the volume of the right ventricular outflow tract are reduced to less than half the values of manual contour tracing. The time consumption per patient is reduced from 341 ± 80 min (right ventricle) and 56 ± 11 min (right ventricular outflow tract) using manual contour tracing to 46 ± 9 min for a combined analysis of right ventricle and right ventricular outflow tract. The analysis of volume-time curves for the right ventricle and its outflow tract discloses new evaluation methods in clinical routine and science. Copyright © 2013 Wiley Periodicals, Inc.

  13. Protecting Location Privacy for Outsourced Spatial Data in Cloud Storage

    PubMed Central

    Gui, Xiaolin; An, Jian; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so the research on privacy protection for outsourced spatial data gets increasing attention from academia and industry. As a kind of spatial transformation method, Hilbert curve is widely used to protect the location privacy for spatial data. But sufficient security analysis for standard Hilbert curve (SHC) is seldom proceeded. In this paper, we propose an index modification method for SHC (SHC∗) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC∗ and DSC are more secure than SHC, and DSC achieves the best index generation performance. PMID:25097865

  14. Protecting location privacy for outsourced spatial data in cloud storage.

    PubMed

    Tian, Feng; Gui, Xiaolin; An, Jian; Yang, Pan; Zhao, Jianqiang; Zhang, Xuejun

    2014-01-01

    As cloud computing services and location-aware devices are fully developed, a large amount of spatial data needs to be outsourced to the cloud storage provider, so the research on privacy protection for outsourced spatial data gets increasing attention from academia and industry. As a kind of spatial transformation method, Hilbert curve is widely used to protect the location privacy for spatial data. But sufficient security analysis for standard Hilbert curve (SHC) is seldom proceeded. In this paper, we propose an index modification method for SHC (SHC(∗)) and a density-based space filling curve (DSC) to improve the security of SHC; they can partially violate the distance-preserving property of SHC, so as to achieve better security. We formally define the indistinguishability and attack model for measuring the privacy disclosure risk of spatial transformation methods. The evaluation results indicate that SHC(∗) and DSC are more secure than SHC, and DSC achieves the best index generation performance.

  15. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

    A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting pixel (HPC) detectors. This approach is based on the Photon Transfer Curve (PTC) corresponding to the measurement of the standard deviation of the signal in flat field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors in flat fielding techniques. The analytical expression of the signal to noise ratio curve is developed for HPC and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The quantitative evaluation of the FPN, described by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat fielding technique) and is demonstrated to be used in order to evaluate the best setting for having the best image quality from a commercial or a R&D detector.

  16. Liquid-liquid extraction of strongly protein bound BMS-299897 from human plasma and cerebrospinal fluid, followed by high-performance liquid chromatography/tandem mass spectrometry.

    PubMed

    Xue, Y J; Pursley, Janice; Arnold, Mark

    2007-04-11

    BMS-299897 is a gamma-secretase inhibitor that is being developed for the treatment of Alzheimer's disease. Liquid-liquid extraction (LLE), chromatographic/tandem mass spectrometry (LC/MS/MS) methods have been developed and validated for the quantitation of BMS-299897 in human plasma and cerebrospinal fluid (CSF). Both methods utilized (13)C6-BMS-299897, the stable label isotope analog, as the internal standard. For the human plasma extraction method, two incubation steps were required after the addition of 5 mM ammonium acetate and the internal standard in acetonitrile to release the analyte bound to proteins prior to LLE with toluene. For the human CSF extraction method, after the addition of 0.5 N HCl and the internal standard, CSF samples were extracted with toluene and no incubation was required. The organic layers obtained from both extraction methods were removed and evaporated to dryness. The residues were reconstituted and injected into the LC/MS/MS system. Chromatographic separation was achieved isocratically on a MetaChem C18 Hypersil BDS column (2.0 mm x 50 mm, 3 microm). The mobile phase contained 10 mM ammonium acetate pH 5 and acetonitrile. Detection was by negative ion electrospray tandem mass spectrometry. The standard curves ranged from 1 to 1000 ng/ml for human plasma and 0.25-100 ng/ml for human CSF. Both standard curves were fitted to a 1/x weighted quadratic regression model. For both methods, the intra-assay precision was within 8.2% CV, the inter-assay precision was within 5.4% CV, and assay accuracy was within +/-7.4% of the nominal values. The validation and sample analysis results demonstrated that both methods had acceptable precision and accuracy across the calibration ranges.

  17. Bayesian modeling and inference for diagnostic accuracy and probability of disease based on multiple diagnostic biomarkers with and without a perfect reference standard.

    PubMed

    Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A

    2016-03-15

    The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.

  18. Information Management Systems in the Undergraduate Instrumental Analysis Laboratory.

    ERIC Educational Resources Information Center

    Merrer, Robert J.

    1985-01-01

    Discusses two applications of Laboratory Information Management Systems (LIMS) in the undergraduate laboratory. They are the coulometric titration of thiosulfate with electrogenerated triiodide ion and the atomic absorption determination of calcium using both analytical calibration curve and standard addition methods. (JN)

  19. Visualizing excipient composition and homogeneity of Compound Liquorice Tablets by near-infrared chemical imaging

    NASA Astrophysics Data System (ADS)

    Wu, Zhisheng; Tao, Ou; Cheng, Wei; Yu, Lu; Shi, Xinyuan; Qiao, Yanjiang

    2012-02-01

    This study demonstrated that near-infrared chemical imaging (NIR-CI) was a promising technology for visualizing the spatial distribution and homogeneity of Compound Liquorice Tablets. The starch distribution (indirectly, plant extraction) could be spatially determined using basic analysis of correlation between analytes (BACRA) method. The correlation coefficients between starch spectrum and spectrum of each sample were greater than 0.95. Depending on the accurate determination of starch distribution, a method to determine homogeneous distribution was proposed by histogram graph. The result demonstrated that starch distribution in sample 3 was relatively heterogeneous according to four statistical parameters. Furthermore, the agglomerates domain in each tablet was detected using score image layers of principal component analysis (PCA) method. Finally, a novel method named Standard Deviation of Macropixel Texture (SDMT) was introduced to detect agglomerates and heterogeneity based on binary image. Every binary image was divided into different sizes length of macropixel and the number of zero values in each macropixel was counted to calculate standard deviation. Additionally, a curve fitting graph was plotted on the relationship between standard deviation and the size length of macropixel. The result demonstrated the inter-tablet heterogeneity of both starch and total compounds distribution, simultaneously, the similarity of starch distribution and the inconsistency of total compounds distribution among intra-tablet were signified according to the value of slope and intercept parameters in the curve.

  20. Accurate quantification of PGE2 in the polyposis in rat colon (Pirc) model by surrogate analyte-based UPLC-MS/MS.

    PubMed

    Yun, Changhong; Dashwood, Wan-Mohaiza; Kwong, Lawrence N; Gao, Song; Yin, Taijun; Ling, Qinglan; Singh, Rashim; Dashwood, Roderick H; Hu, Ming

    2018-01-30

    An accurate and reliable UPLC-MS/MS method is reported for the quantification of endogenous Prostaglandin E2 (PGE 2 ) in rat colonic mucosa and polyps. This method adopted the "surrogate analyte plus authentic bio-matrix" approach, using two different stable isotopic labeled analogs - PGE 2 -d9 as the surrogate analyte and PGE 2 -d4 as the internal standard. A quantitative standard curve was constructed with the surrogate analyte in colonic mucosa homogenate, and the method was successfully validated with the authentic bio-matrix. Concentrations of endogenous PGE 2 in both normal and inflammatory tissue homogenates were back-calculated based on the regression equation. Because of no endogenous interference on the surrogate analyte determination, the specificity was particularly good. By using authentic bio-matrix for validation, the matrix effect and exaction recovery are identically same for the quantitative standard curve and actual samples - this notably increased the assay accuracy. The method is easy, fast, robust and reliable for colon PGE 2 determination. This "surrogate analyte" approach was applied to measure the Pirc (an Apc-mutant rat kindred that models human FAP) mucosa and polyps PGE 2 , one of the strong biomarkers of colorectal cancer. A similar concept could be applied to endogenous biomarkers in other tissues. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A Simple and Rapid Method for Standard Preparation of Gas Phase Extract of Cigarette Smoke

    PubMed Central

    Higashi, Tsunehito; Mai, Yosuke; Noya, Yoichi; Horinouchi, Takahiro; Terada, Koji; Hoshi, Akimasa; Nepal, Prabha; Harada, Takuya; Horiguchi, Mika; Hatate, Chizuru; Kuge, Yuji; Miwa, Soichi

    2014-01-01

    Cigarette smoke consists of tar and gas phase: the latter is toxicologically important because it can pass through lung alveolar epithelium to enter the circulation. Here we attempt to establish a standard method for preparation of gas phase extract of cigarette smoke (CSE). CSE was prepared by continuously sucking cigarette smoke through a Cambridge filter to remove tar, followed by bubbling it into phosphate-buffered saline (PBS). An increase in dry weight of the filter was defined as tar weight. Characteristically, concentrations of CSEs were represented as virtual tar concentrations, assuming that tar on the filter was dissolved in PBS. CSEs prepared from smaller numbers of cigarettes (original tar concentrations ≤15 mg/ml) showed similar concentration-response curves for cytotoxicity versus virtual tar concentrations, but with CSEs from larger numbers (tar ≥20 mg/ml), the curves were shifted rightward. Accordingly, the cytotoxic activity was detected in PBS of the second reservoir downstream of the first one with larger numbers of cigarettes. CSEs prepared from various cigarette brands showed comparable concentration-response curves for cytotoxicity. Two types of CSEs prepared by continuous and puff smoking protocols were similar regarding concentration-response curves for cytotoxicity, pharmacology of their cytotoxicity, and concentrations of cytotoxic compounds. These data show that concentrations of CSEs expressed by virtual tar concentrations can be a reference value to normalize their cytotoxicity, irrespective of numbers of combusted cigarettes, cigarette brands and smoking protocols, if original tar concentrations are ≤15 mg/ml. PMID:25229830

  2. On a framework for generating PoD curves assisted by numerical simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subair, S. Mohamed, E-mail: prajagopal@iitm.ac.in; Agrawal, Shweta, E-mail: prajagopal@iitm.ac.in; Balasubramaniam, Krishnan, E-mail: prajagopal@iitm.ac.in

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here wemore » develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.« less

  3. On a framework for generating PoD curves assisted by numerical simulations

    NASA Astrophysics Data System (ADS)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar

    2015-03-01

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.

  4. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  5. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  6. Color-coded automated signal intensity curves for detection and characterization of breast lesions: preliminary evaluation of a new software package for integrated magnetic resonance-based breast imaging.

    PubMed

    Pediconi, Federica; Catalano, Carlo; Venditti, Fiammetta; Ercolani, Mauro; Carotenuto, Luigi; Padula, Simona; Moriconi, Enrica; Roselli, Antonella; Giacomelli, Laura; Kirchin, Miles A; Passariello, Roberto

    2005-07-01

    The objective of this study was to evaluate the value of a color-coded automated signal intensity curve software package for contrast-enhanced magnetic resonance mammography (CE-MRM) in patients with suspected breast cancer. Thirty-six women with suspected breast cancer based on mammographic and sonographic examinations were preoperatively evaluated on CE-MRM. CE-MRM was performed on a 1.5-T magnet using a 2D Flash dynamic T1-weighted sequence. A dosage of 0.1 mmol/kg of Gd-BOPTA was administered at a flow rate of 2 mL/s followed by 10 mL of saline. Images were analyzed with the new software package and separately with a standard display method. Statistical comparison was performed of the confidence for lesion detection and characterization with the 2 methods and of the diagnostic accuracy for characterization compared with histopathologic findings. At pathology, 54 malignant lesions and 14 benign lesions were evaluated. All 68 (100%) lesions were detected with both methods and good correlation with histopathologic specimens was obtained. Confidence for both detection and characterization was significantly (P < or = 0.025) better with the color-coded method, although no difference (P > 0.05) between the methods was noted in terms of the sensitivity, specificity, and overall accuracy for lesion characterization. Excellent agreement between the 2 methods was noted for both the determination of lesion size (kappa = 0.77) and determination of SI/T curves (kappa = 0.85). The novel color-coded signal intensity curve software allows lesions to be visualized as false color maps that correspond to conventional signal intensity time curves. Detection and characterization of breast lesions with this method is quick and easily interpretable.

  7. Alternative standardization approaches to improving streamflow reconstructions with ring-width indices of riparian trees

    USGS Publications Warehouse

    Meko, David M.; Friedman, Jonathan M.; Touchan, Ramzi; Edmondson, Jesse R.; Griffin, Eleanor R.; Scott, Julian A.

    2015-01-01

    Old, multi-aged populations of riparian trees provide an opportunity to improve reconstructions of streamflow. Here, ring widths of 394 plains cottonwood (Populus deltoids, ssp. monilifera) trees in the North Unit of Theodore Roosevelt National Park, North Dakota, are used to reconstruct streamflow along the Little Missouri River (LMR), North Dakota, US. Different versions of the cottonwood chronology are developed by (1) age-curve standardization (ACS), using age-stratified samples and a single estimated curve of ring width against estimated ring age, and (2) time-curve standardization (TCS), using a subset of longer ring-width series individually detrended with cubic smoothing splines of width against year. The cottonwood chronologies are combined with the first principal component of four upland conifer chronologies developed by conventional methods to investigate the possible value of riparian tree-ring chronologies for streamflow reconstruction of the LMR. Regression modeling indicates that the statistical signal for flow is stronger in the riparian cottonwood than in the upland chronologies. The flow signal from cottonwood complements rather than repeats the signal from upland conifers and is especially strong in young trees (e.g. 5–35 years). Reconstructions using a combination of cottonwoods and upland conifers are found to explain more than 50% of the variance of LMR flow over a 1935–1990 calibration period and to yield reconstruction of flow to 1658. The low-frequency component of reconstructed flow is sensitive to the choice of standardization method for the cottonwood. In contrast to the TCS version, the ACS reconstruction features persistent low flows in the 19th century. Results demonstrate the value to streamflow reconstruction of riparian cottonwood and suggest that more studies are needed to exploit the low-frequency streamflow signal in densely sampled age-stratified stands of riparian trees.

  8. Relation between Birth Weight and Intraoperative Hemorrhage during Cesarean Section in Pregnancy with Placenta Previa

    PubMed Central

    Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi

    2016-01-01

    Background Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. Materials and Methods We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value of intraoperative massive hemorrhage. Results Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver-operating characteristic curves revealed that the area under the curve of the combination variables between the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value with a sensitivity of 81.3% and specificity of 55.6% was −0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >−0.33 (odds ratio, 5.88; 95% confidence interval, 3.04–12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79–6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85–92.13) were independent risk of intraoperative massive hemorrhage. Conclusion In patients with placenta previa, a birth weight >−0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placental previa. PMID:27902772

  9. Estimating site index of ponderosa pine in Northern California...standard curves, soil series, stem analysis

    Treesearch

    Robert F. Powers

    1972-01-01

    Four sets of standard site index curves based on statewide or regionwide averages were compared with data on natural growth from nine young stands of ponderosa pine in northern California. The curves tested were by Meyer; Dunning; Dunning and Reineke; and Arvanitis, Lindquist, and Palley. The effects of soils on height growth were also studied. Among the curves tested...

  10. 40 CFR Appendix C to Part 425 - Definition and Procedure for the Determination of the Method Detection Limit 1

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Determination of the Method Detection Limit 1 C Appendix C to Part 425 Protection of Environment ENVIRONMENTAL... CATEGORY Pt. 425, App. C Appendix C to Part 425—Definition and Procedure for the Determination of the... reagent water. (c) The concentration value that corresponds to the region of the standard curve where...

  11. How far is it? Distance measurements and their consequences

    NASA Astrophysics Data System (ADS)

    Krełowski, Jacek

    2017-08-01

    Methods of measuring distances to objects in our Milky Way are briefly discussed. They generally base on three principles: of using a standard rod, of standard candle and of column density of interstellar matter. Weak and strong points of these methods are presented. The presence of gray extinction towards some objects is suggested which makes the most universal method of standard candle (spectroscopic parallax) very uncertain. Hard to say whether gray extinc-tion appears only in the form of circumstellar debris discs or is present also in the general interstellar medium. The application of the method of measuring column densities of interstellar gases suggests that the rotation curve of our Milky Way system is rather Keplerian than flat which creates doubts as to whether any Dark Matter halo is present around our Galaxy. It is emphasized that the most universal method, i.e. that of standard candle, used to estimate distances to cosmological objects, may suffer serious errors because of improper subtraction of extinction effects.

  12. High performance liquid chromatographic assay for the quantitation of total glutathione in plasma

    NASA Technical Reports Server (NTRS)

    Abukhalaf, Imad K.; Silvestrov, Natalia A.; Menter, Julian M.; von Deutsch, Daniel A.; Bayorh, Mohamed A.; Socci, Robin R.; Ganafa, Agaba A.

    2002-01-01

    A simple and widely used homocysteine HPLC procedure was applied for the HPLC identification and quantitation of glutathione in plasma. The method, which utilizes SBDF as a derivatizing agent utilizes only 50 microl of sample volume. Linear quantitative response curve was generated for glutathione over a concentration range of 0.3125-62.50 micromol/l. Linear regression analysis of the standard curve exhibited correlation coefficient of 0.999. Limit of detection (LOD) and limit of quantitation (LOQ) values were 5.0 and 15 pmol, respectively. Glutathione recovery using this method was nearly complete (above 96%). Intra-assay and inter-assay precision studies reflected a high level of reliability and reproducibility of the method. The applicability of the method for the quantitation of glutathione was demonstrated successfully using human and rat plasma samples.

  13. Standardized Percentile Curves of Body Mass Index of Northeast Iranian Children Aged 25 to 60 Months

    PubMed Central

    Emdadi, Maryam; Safarian, Mohammad; Doosti, Hassan

    2011-01-01

    Objective Growth charts are widely used to assess children's growth status and can provide a trajectory of growth during early important months of life. Racial differences necessitate using local growth charts. This study aimed to provide standardized growth curves of body mass index (BMI) for children living in northeast Iran. Methods A total of 23730 apparently healthy boys and girls aged 25 to 60 months recruited for 20 days from those attending community clinics for routine health checks. Anthropometric measurements were done by trained health staff using WHO methodology. The LMSP method with maximum penalized likelihood, the Generalized Additive Models, the Box-Cox power exponential distribution distribution, Akaike Information Criteria and Generalized Akaike Criteria with penalty equal to 3 [GAIC(3)], and Worm plot and Q-tests as goodness of fit tests were used to construct the centile reference charts. Findings The BMI centile curves for boys and girls aged 25 to 60 months were drawn utilizing a population of children living in northeast Iran. Conclusion The results of the current study demonstrate the possibility of preparation of local growth charts and their importance in evaluating children's growth. Also their differences, relative to those prepared by global references, reflect the necessity of preparing local charts in future studies using longitudinal data. PMID:23056770

  14. Historical Cost Curves for Hydrogen Masers and Cesium Beam Frequency and Timing Standards

    NASA Technical Reports Server (NTRS)

    Remer, D. S.; Moore, R. C.

    1985-01-01

    Historical cost curves were developed for hydrogen masers and cesium beam standards used for frequency and timing calibration in the Deep Space Network. These curves may be used to calculate the cost of future hydrogen masers or cesium beam standards in either future or current dollars. The cesium beam standards are decreasing in cost by about 2.3% per year since 1966, and hydrogen masers are decreasing by about 0.8% per year since 1978 relative to the National Aeronautics and Space Administration inflation index.

  15. Noncontact spirometry with a webcam

    NASA Astrophysics Data System (ADS)

    Liu, Chenbin; Yang, Yuting; Tsow, Francis; Shao, Dangdang; Tao, Nongjian

    2017-05-01

    We present an imaging-based method for noncontact spirometry. The method tracks the subtle respiratory-induced shoulder movement of a subject, builds a calibration curve, and determines the flow-volume spirometry curve and vital respiratory parameters, including forced expiratory volume in the first second, forced vital capacity, and peak expiratory flow rate. We validate the accuracy of the method by comparing the data with those simultaneously recorded with a gold standard reference method and examine the reliability of the noncontact spirometry with a pilot study including 16 subjects. This work demonstrates that the noncontact method can provide accurate and reliable spirometry tests with a webcam. Compared to the traditional spirometers, the present noncontact spirometry does not require using a spirometer, breathing into a mouthpiece, or wearing a nose clip, thus making spirometry test more easily accessible for the growing population of asthma and chronic obstructive pulmonary diseases.

  16. Noncontact spirometry with a webcam.

    PubMed

    Liu, Chenbin; Yang, Yuting; Tsow, Francis; Shao, Dangdang; Tao, Nongjian

    2017-05-01

    We present an imaging-based method for noncontact spirometry. The method tracks the subtle respiratory-induced shoulder movement of a subject, builds a calibration curve, and determines the flow-volume spirometry curve and vital respiratory parameters, including forced expiratory volume in the first second, forced vital capacity, and peak expiratory flow rate. We validate the accuracy of the method by comparing the data with those simultaneously recorded with a gold standard reference method and examine the reliability of the noncontact spirometry with a pilot study including 16 subjects. This work demonstrates that the noncontact method can provide accurate and reliable spirometry tests with a webcam. Compared to the traditional spirometers, the present noncontact spirometry does not require using a spirometer, breathing into a mouthpiece, or wearing a nose clip, thus making spirometry test more easily accessible for the growing population of asthma and chronic obstructive pulmonary diseases.

  17. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures.

    PubMed

    Yehia, Ali M

    2013-05-15

    New, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) was managed successfully to resolve the spectral overlap in itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as, mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration range of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.

    2013-05-01

    New, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) was managed successfully to resolve the spectral overlap in itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as, mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration range of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.

  19. Accurate determination of arsenic in arsenobetaine standard solutions of BCR-626 and NMIJ CRM 7901-a by neutron activation analysis coupled with internal standard method.

    PubMed

    Miura, Tsutomu; Chiba, Koichi; Kuroiwa, Takayoshi; Narukawa, Tomohiro; Hioki, Akiharu; Matsue, Hideaki

    2010-09-15

    Neutron activation analysis (NAA) coupled with an internal standard method was applied for the determination of As in the certified reference material (CRM) of arsenobetaine (AB) standard solutions to verify their certified values. Gold was used as an internal standard to compensate for the difference of the neutron exposure in an irradiation capsule and to improve the sample-to-sample repeatability. Application of the internal standard method significantly improved linearity of the calibration curve up to 1 microg of As, too. The analytical reliability of the proposed method was evaluated by k(0)-standardization NAA. The analytical results of As in AB standard solutions of BCR-626 and NMIJ CRM 7901-a were (499+/-55)mgkg(-1) (k=2) and (10.16+/-0.15)mgkg(-1) (k=2), respectively. These values were found to be 15-20% higher than the certified values. The between-bottle variation of BCR-626 was much larger than the expanded uncertainty of the certified value, although that of NMIJ CRM 7901-a was almost negligible. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  20. Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.

    PubMed

    Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J

    1996-07-01

    We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62 Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot) with the data for each subject fitted by a straight regression line suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from three-compartment model and graphical analyses were compared for the validation of the model. A comparison of rCBF values obtained from 62Cu-PTSM and 150-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.

  1. Comparison of dynamic contrast-enhanced MRI parameters of breast lesions at 1.5 and 3.0 T: a pilot study

    PubMed Central

    Pineda, F D; Medved, M; Fan, X; Ivancevic, M K; Abe, H; Shimauchi, A; Newstead, G M

    2015-01-01

    Objective: To compare dynamic contrast-enhanced (DCE) MRI parameters from scans of breast lesions at 1.5 and 3.0 T. Methods: 11 patients underwent paired MRI examinations in both Philips 1.5 and 3.0 T systems (Best, Netherlands) using a standard clinical fat-suppressed, T1 weighted DCE-MRI protocol, with 70–76 s temporal resolution. Signal intensity vs time curves were fit with an empirical mathematical model to obtain semi-quantitative measures of uptake and washout rates as well as time-to-peak enhancement (TTP). Maximum percent enhancement and signal enhancement ratio (SER) were also measured for each lesion. Percent differences between parameters measured at the two field strengths were compared. Results: TTP and SER parameters measured at 1.5 and 3.0 T were similar; with mean absolute differences of 19% and 22%, respectively. Maximum percent signal enhancement was significantly higher at 3 T than at 1.5 T (p = 0.006). Qualitative assessment showed that image quality was significantly higher at 3 T (p = 0.005). Conclusion: Our results suggest that TTP and SER are more robust to field strength change than other measured kinetic parameters, and therefore measurements of these parameters can be more easily standardized than measurements of other parameters derived from DCE-MRI. Semi-quantitative measures of overall kinetic curve shape showed higher reproducibility than do discrete classification of kinetic curve early and delayed phases in a majority of the cases studied. Advances in knowledge: Qualitative measures of curve shape are not consistent across field strength even when acquisition parameters are standardized. Quantitative measures of overall kinetic curve shape, by contrast, have higher reproducibility. PMID:25785918

  2. Quality Control Method for a Micro-Nano-Channel Microfabricated Device

    NASA Technical Reports Server (NTRS)

    Grattoni, Alessandro; Ferrari, Mauro; Li, Xuewu

    2012-01-01

    A variety of silicon-fabricated devices is used in medical applications such as drug and cell delivery, and DNA and protein separation and analysis. When a fluidic device inlet is connected to a compressed gas reservoir, and the outlet is at a lower pressure, a gas flow occurs through the membrane toward the outside. The method relies on the measurement of the gas pressure over the elapsed time inside the upstream and downstream environments. By knowing the volume of the upstream reservoir, the gas flow rate through the membrane over the pressure drop can be calculated. This quality control method consists of measuring the gas flow through a device and comparing the results with a standard curve, which can be obtained by testing standard devices. Standard devices can be selected through a variety of techniques, both destructive and nondestructive, such as SEM, AFM, and standard particle filtration.

  3. Utility of mass spectrometry in the diagnosis of prion diseases

    USDA-ARS?s Scientific Manuscript database

    We developed a sensitive mass spectrometry-based method of quantitating the prions present in a variety of mammalian species. Calibration curves relating the area ratios of the selected analyte peptides and their oxidized analogs to their homologous stable isotope labeled internal standards were pre...

  4. Pre-Analytical Conditions in Non-Invasive Prenatal Testing of Cell-Free Fetal RHD

    PubMed Central

    Rieneck, Klaus; Krog, Grethe Risum; Nielsen, Leif Kofoed; Tabor, Ann; Dziegiel, Morten Hanefeld

    2013-01-01

    Background Non-invasive prenatal testing of cell-free fetal DNA (cffDNA) in maternal plasma can predict the fetal RhD type in D negative pregnant women. In Denmark, routine antenatal screening for the fetal RhD gene (RHD) directs the administration of antenatal anti-D prophylaxis only to women who carry an RhD positive fetus. Prophylaxis reduces the risk of immunization that may lead to hemolytic disease of the fetus and the newborn. The reliability of predicting the fetal RhD type depends on pre-analytical factors and assay sensitivity. We evaluated the testing setup in the Capital Region of Denmark, based on data from routine antenatal RHD screening. Methods Blood samples were drawn at gestational age 25 weeks. DNA extracted from 1 mL of plasma was analyzed for fetal RHD using a duplex method for exon 7/10. We investigated the effect of blood sample transportation time (n = 110) and ambient outdoor temperatures (n = 1539) on the levels of cffDNA and total DNA. We compared two different quantification methods, the delta Ct method and a universal standard curve. PCR pipetting was compared on two systems (n = 104). Results The cffDNA level was unaffected by blood sample transportation for up to 9 days and by ambient outdoor temperatures ranging from -10°C to 28°C during transport. The universal standard curve was applicable for cffDNA quantification. Identical levels of cffDNA were observed using the two automated PCR pipetting systems. We detected a mean of 100 fetal DNA copies/mL at a median gestational age of 25 weeks (range 10–39, n = 1317). Conclusion The setup for real-time PCR-based, non-invasive prenatal testing of cffDNA in the Capital Region of Denmark is very robust. Our findings regarding the transportation of blood samples demonstrate the high stability of cffDNA. The applicability of a universal standard curve facilitates easy cffDNA quantification. PMID:24204719

  5. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimum.

  6. Orthogonal Comparison of GC-MS and 1H NMR Spectroscopy for Short Chain Fatty Acid Quantitation.

    PubMed

    Cai, Jingwei; Zhang, Jingtao; Tian, Yuan; Zhang, Limin; Hatzakis, Emmanuel; Krausz, Kristopher W; Smith, Philip B; Gonzalez, Frank J; Patterson, Andrew D

    2017-08-01

    Short chain fatty acids (SCFAs) are important regulators of host physiology and metabolism and may contribute to obesity and associated metabolic diseases. Interest in SCFAs has increased in part due to the recognized importance of how production of SCFAs by the microbiota may signal to the host. Therefore, reliable, reproducible, and affordable methods for SCFA profiling are required for accurate identification and quantitation. In the current study, four different methods for SCFA (acetic acid, propionic acid, and butyric acid) extraction and quantitation were compared using two independent platforms including gas chromatography coupled with mass spectrometry (GC-MS) and 1 H nuclear magnetic resonance (NMR) spectroscopy. Sensitivity, recovery, repeatability, matrix effect, and validation using mouse fecal samples were determined across all methods. The GC-MS propyl esterification method exhibited superior sensitivity for acetic acid and butyric acid measurement (LOD < 0.01 μg mL -1 , LOQ < 0.1 μg mL -1 ) and recovery accuracy (99.4%-108.3% recovery rate for 100 μg mL -1 SCFA mixed standard spike in and 97.8%-101.8% recovery rate for 250 μg mL -1 SCFAs mixed standard spike in). NMR methods by either quantitation relative to an internal standard or quantitation using a calibration curve yielded better repeatability and minimal matrix effects compared to GC-MS methods. All methods generated good calibration curve linearity (R 2 > 0.99) and comparable measurement of fecal SCFA concentration. Lastly, these methods were used to quantitate fecal SCFAs obtained from conventionally raised (CONV-R) and germ free (GF) mice. Results from global metabolomic analysis of feces generated by 1 H NMR and bomb calorimetry were used to further validate these approaches.

  7. Thermal neutron macroscopic absorption cross section measurement (theory, experiment and results) for small environmental samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czubek, J.A.; Drozdowicz, K.; Gabanska, B.

    Czubek`s method of measurement of the thermal neutron macroscopic absorption cross section of small samples has been developed at the Henryk Niewodniczanski Institute of Nuclear Physics in Krakow, Poland. Theoretical principles of the method have been elaborated in the one-velocity diffusion approach in which the thermal neutron parameters used have been averaged over a modified Maxwellian. In consecutive measurements the investigated sample is enveloped in shells of a known moderator of varying thickness and irradiated with a pulsed beam of fast neutrons. The neutrons are slowed-down in the system and a die-away rate of escaping thermal neutrons is measured. Themore » decay constant vs. thickness of the moderator creates the experimental curve. The absorption cross section of the unknown sample is found from the intersection of this curve with the theoretical one. The theoretical curve is calculated for the case when the dynamic material buckling of the inner sample is zero. The method does not use any reference absorption standard and is independent of the transport cross section of the measured sample. The volume of the sample is form of fluid or crushed material is about 170 cm{sup 3}. The standard deviation for the measured mass absorption cross section of rock samples is in the range of 4 divided by 20% of the measured value and for brines is of the order of 0.5%.« less

  8. Development and validation of a bioanalytical LC-MS method for the quantification of GHRP-6 in human plasma.

    PubMed

    Gil, Jeovanis; Cabrales, Ania; Reyes, Osvaldo; Morera, Vivian; Betancourt, Lázaro; Sánchez, Aniel; García, Gerardo; Moya, Galina; Padrón, Gabriel; Besada, Vladimir; González, Luis Javier

    2012-02-23

    Growth hormone-releasing peptide 6 (GHRP-6, His-(DTrp)-Ala-Trp-(DPhe)-Lys-NH₂, MW=872.44 Da) is a potent growth hormone secretagogue that exhibits a cytoprotective effect, maintaining tissue viability during acute ischemia/reperfusion episodes in different organs like small bowel, liver and kidneys. In the present work a quantitative method to analyze GHRP-6 in human plasma was developed and fully validated following FDA guidelines. The method uses an internal standard (IS) of GHRP-6 with ¹³C-labeled Alanine for quantification. Sample processing includes a precipitation step with cold acetone to remove the most abundant plasma proteins, recovering the GHRP-6 peptide with a high yield. Quantification was achieved by LC-MS in positive full scan mode in a Q-Tof mass spectrometer. The sensitivity of the method was evaluated, establishing the lower limit of quantification at 5 ng/mL and a range for the calibration curve from 5 ng/mL to 50 ng/mL. A dilution integrity test was performed to analyze samples at higher concentration of GHRP-6. The validation process involved five calibration curves and the analysis of quality control samples to determine accuracy and precision. The calibration curves showed R² higher than 0.988. The stability of the analyte and its internal standard (IS) was demonstrated in all conditions the samples would experience in a real time analyses. This method was applied to the quantification of GHRP-6 in plasma from nine healthy volunteers participating in a phase I clinical trial. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Oxidation of methionine 216 in sheep and elk prion protein is highly dependent upon the amino acid at position 218 but is not important for prion propagation

    USDA-ARS?s Scientific Manuscript database

    We developed a sensitive mass spectrometry-based method of quantitating the prions present in elk and sheep. Calibration curves relating the area ratios of the selected analyte peptides and their homologous stable isotope labeled internal standards were prepared. This method was compared to the ELIS...

  10. The effect of customization and use of a fetal growth standard on the association between birthweight percentile and adverse perinatal outcome.

    PubMed

    Sovio, Ulla; Smith, Gordon C S

    2018-02-01

    It has been proposed that correction of offspring weight percentiles (customization) might improve the prediction of adverse pregnancy outcome; however, the approach is not accepted universally. A complication in the interpretation of the data is that the main method for calculation of customized percentiles uses a fetal growth standard, and multiple analyses have compared the results with birthweight-based standards. First, we aimed to determine whether women who deliver small-for-gestational-age infants using a customized standard differed from other women. Second, we aimed to compare the association between birthweight percentile and adverse outcome using 3 different methods for percentile calculation: (1) a noncustomized actual birthweight standard, (2) a noncustomized fetal growth standard, and (3) a fully customized fetal growth standard. We analyzed data from the Pregnancy Outcome Prediction study, a prospective cohort study of nulliparous women who delivered in Cambridge, UK, between 2008 and 2013. We used a composite adverse outcome, namely, perinatal morbidity or preeclampsia. Receiver operating characteristic curve analysis was used to compare the 3 methods of calculating birthweight percentiles in relation to the composite adverse outcome. We confirmed previous observations that delivering an infant who was small for gestational age (<10th percentile) with the use of a fully customized fetal growth standard but who was appropriate for gestational age with the use of a noncustomized actual birthweight standard was associated with higher rates of adverse outcomes. However, we also observed that the mothers of these infants were 3-4 times more likely to be obese and to deliver preterm. When we compared the risk of adverse outcome from logistic regression models that were fitted to the birthweight percentiles that were derived by each of the 3 predefined methods, the areas under the receiver operating characteristic curves were similar for all 3 methods: 0.56 (95% confidence interval, 0.54-0.59) fully customized, 0.56 (95% confidence interval, 0.53-0.59) noncustomized fetal weight standard, and 0.55 (95% confidence interval, 0.53-0.58) noncustomized actual birthweight standard. When we classified the top 5% of predicted risk as high risk, the methods that used a fetal growth standard showed attenuation after adjustment for gestational age, whereas the birthweight standard did not. Further adjustment for the maternal characteristics, which included weight, attenuated the association with the customized standard, but not the other 2 methods. The associations after full adjustment were similar when we compared the 3 approaches. The independent association between birthweight percentile and adverse outcome was similar when we compared actual birthweight standards and fetal growth standards and compared customized and noncustomized standards. Use of fetal weight standards and customized percentiles for maternal characteristics could lead to stronger associations with adverse outcome through confounding by preterm birth and maternal obesity. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Poor interoperability of the Adams-Harbertson method for analysis of anthocyanins: comparison with AOAC pH differential method.

    PubMed

    Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo

    2013-01-01

    The poor interoperability of anthocyanin glycosides measurements by two pH differential methods is documented. Adams-Harbertson, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found mean precision of Adams-Harbertson winery versus reference measurements to be 77 +/- 20%. Maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon from reproducibility RSD. Range of measurements was actually 30 to 91% for Pinot Noir. An interoperability study (n=30) found Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. Large analytical chemistry differences are: AOAC method uses Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, proposed a priori by Flueki and Francis; whereas Adams-Harbertson uses "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC are produced by Adams-Harbertson standard curve over Beer-Lambert and pH 1.8 over pH 1.0. The study recommends using AOAC Official Method 2005.02 for analysis of wine anthocyanin glycosides.

  12. Intra and interrater reliability of spinal sagittal curves and mobility using pocket goniometer IncliMed® in healthy subjects.

    PubMed

    Alderighi, Marzia; Ferrari, Raffaello; Maghini, Irene; Del Felice, Alessandra; Masiero, Stefano

    2016-11-21

    Radiographic examination is the gold standard to evaluate spine curves, but ionising radiations limit routine use. Non-invasive methods, such as skin-surface goniometer (IncliMed®) should be used instead. To evaluate intra- and interrater reliability to assess sagittal curves and mobility of the spine with IncliMed®. a reliability study on agonistic football players. Thoracic kyphosis, lumbar lordosis and mobility of the spine were assessed by IncliMed®. Measurements were repeated twice by each examiner during the same session with between-rater blinding. Intrarater and interrater reliability were measured by Intraclass Correlation Coefficient (ICC), 95% Confidence Interval (CI 95%) and Standard Error of Measurement (SEM). Thirty-four healthy female football players (19.17 ± 4.52 years) were enrolled. Statistical results showed high intrarater (0.805-0.923) and interrater (0.701-0.886) reliability (ICC > 0.8). The obtained intra- and interrater SEM were low, with overall absolute intrarater values between 1.39° and 2.76° and overall interrater values between 1.71° and 4.25°. IncliMed® provides high intra- and interrater reliability in healthy subjects, with limited Standard Error of Measurement. These results encourage its use in clinical practice and scientific research.

  13. Estimating Time to Event From Longitudinal Categorical Data: An Analysis of Multiple Sclerosis Progression.

    PubMed

    Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A

    2007-12-01

    The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching EDSS of a certain level (absolute progression) or increasing of one point of EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.

  14. Measurement of Antioxidant Capacity by Electron Spin Resonance Spectroscopy Based on Copper(II) Reduction.

    PubMed

    Li, Dan; Jiang, Jia; Han, Dandan; Yu, Xinyu; Wang, Kun; Zang, Shuang; Lu, Dayong; Yu, Aimin; Zhang, Ziwei

    2016-04-05

    A new method is proposed for measuring the antioxidant capacity by electron spin resonance spectroscopy based on the loss of electron spin resonance signal after Cu(2+) is reduced to Cu(+) with antioxidant. Cu(+) was removed by precipitation in the presence of SCN(-). The remaining Cu(2+) was coordinated with diethyldithiocarbamate, extracted into n-butanol and determined by electron spin resonance spectrometry. Eight standards widely used in antioxidant capacity determination, including Trolox, ascorbic acid, ferulic acid, rutin, caffeic acid, quercetin, chlorogenic acid, and gallic acid were investigated. The standard curves for determining the eight standards were plotted, and results showed that the linear regression correlation coefficients were all high enough (r > 0.99). Trolox equivalent antioxidant capacity values for the antioxidant standards were calculated, and a good correlation (r > 0.94) between the values obtained by the present method and cupric reducing antioxidant capacity method was observed. The present method was applied to the analysis of real fruit samples and the evaluation of the antioxidant capacity of these fruits.

  15. Selecting Items for Criterion-Referenced Tests.

    ERIC Educational Resources Information Center

    Mellenbergh, Gideon J.; van der Linden, Wim J.

    1982-01-01

    Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)

  16. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements.

    PubMed

    McJimpsey, Erica L

    2016-02-25

    The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves.

  17. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements

    NASA Astrophysics Data System (ADS)

    McJimpsey, Erica L.

    2016-02-01

    The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves.

  18. High-performance liquid chromatography determination of ketone bodies in human plasma by precolumn derivatization with p-nitrobenzene diazonium fluoroborate.

    PubMed

    Yamato, Susumu; Shinohara, Kumiko; Nakagawa, Saori; Kubota, Ai; Inamura, Katsushi; Watanabe, Gen; Hirayama, Satoshi; Miida, Takashi; Ohta, Shin

    2009-01-01

    We developed and validated a sensitive and convenient high-performance liquid chromatography (HPLC) method for the specific determination of ketone bodies (acetoacetate and D-3-hydroxybutyrate) in human plasma. p-Nitrobenzene diazonium fluoroborate (diazo reagent) was used as a precolumn derivatization agent, and 3-(2-hydroxyphenyl) propionic acid was used as an internal standard. After the reaction, excess diazo reagent and plasma proteins were removed by passing through a solid-phase cartridge (C(18)). The derivatives retained on the cartridge were eluted with methanol, introduced into the HPLC system, and then detected with UV at 380 nm. A calibration curve for acetoacetate standard solution with a 20-microl injection volume showed good linearity in the range of 1 to 400 microM with a 0.9997 correlation coefficient. For the determination of D-3-hydroxybutyrate, it was converted to acetoacetate before reaction with the diazo reagent by an enzymatic coupling method using D-3-hydroxybutyrate dehydrogenase and lactate dehydrogenase. A calibration curve for D-3-hydroxybutyrate standard solution also showed good linearity in the range of 1.5 to 2000 microM with a 0.9988 correlation coefficient. Analytical recoveries of acetoacetate and D-3-hydroxybutyrate in human plasma were satisfactory. The method was successfully applied to samples from diabetic patients, and results were consistent with those obtained using the thio-NAD enzymatic cycling method used in clinical laboratories.

  19. Potency Determination of Antidandruff Shampoos in Nystatin International Unit Equivalents

    PubMed Central

    Anusha Hewage, D. B. G.; Pathirana, W.; Pinnawela, Amara

    2008-01-01

    A convenient standard microbiological potency determination test for the antidandruff shampoos was developed by adopting the pharmacopoeial microbiological assay procedure of the drug nystatin. A standard curve was drawn consisting of the inhibition zone diameters vs. logarithm of nystatin concentrations in international units using the fungus Saccharomyces cerevisiae (yeast) strain National Collection of Type Culture (NCTC) 1071606 as the test organism. From the standard curve the yeast inhibitory potencies of the shampoos in nystatin international unit equivalents were determined from the respective inhibition zones of the test samples of the shampoos. Under test conditions four shampoo samples showed remarkable fungal inhibitory potencies of 10227, 10731, 12396 and 18211 nystatin international unit equivalents/ml while two shampoo samples had extremely feeble inhibitory potencies 4.07 and 4.37 nystatin international unit equivalents/ml although the latter two products claimed antifungal activity. The potency determination method could be applied to any antidandruff shampoo with any one or a combination of active ingredients. PMID:21394271

  20. Brief Report: Investigating Uncertainty in the Minimum Mortality Temperature: Methods and Application to 52 Spanish Cities.

    PubMed

    Tobías, Aurelio; Armstrong, Ben; Gasparrini, Antonio

    2017-01-01

    The minimum mortality temperature from J- or U-shaped curves varies across cities with different climates. This variation conveys information on adaptation, but ability to characterize is limited by the absence of a method to describe uncertainty in estimated minimum mortality temperatures. We propose an approximate parametric bootstrap estimator of confidence interval (CI) and standard error (SE) for the minimum mortality temperature from a temperature-mortality shape estimated by splines. The coverage of the estimated CIs was close to nominal value (95%) in the datasets simulated, although SEs were slightly high. Applying the method to 52 Spanish provincial capital cities showed larger minimum mortality temperatures in hotter cities, rising almost exactly at the same rate as annual mean temperature. The method proposed for computing CIs and SEs for minimums from spline curves allows comparing minimum mortality temperatures in different cities and investigating their associations with climate properly, allowing for estimation uncertainty.

  1. [Determination of heavy metals in four traditional Chinese medicines by ICP-MS].

    PubMed

    Wen, Hui-Min; Chen, Xiao-Hui; Dong, Ting-Xia; Zhan, Hua-Qiang; Bi, Kai-Shun

    2006-08-01

    To establish a ICP-MS method for the determination of heavy metals, including As, Hg, Pb, Cd, in four traditional Chinese medicines. The samples were digested by closed-versel microwave. The four heavy metals were directly analyzed by ICP-MS. Select internal standard element in for the method by which the analyse signal drife is corrected by the signal of another element (internal standard elements) added to both the standard solution and sample. For all of the analyzed heary methals, the correlative coefficient of the calibration curves was over 0.999 2. The recovery rates of the procedure were 97.5%-108.0%, and its RSD was lower than 11.6%. This method was convenient, quick-acquired, accurate and highly sensitive. The method can be used for the quality control of trace elements in traditional Chinese medicines and for the contents determination of traditional Chinese medicines from different habitats and species.

  2. Microbioassay of Antimicrobial Agents

    PubMed Central

    Simon, Harold J.; Yin, E. Jong

    1970-01-01

    A previously described agar-diffusion technique for microbioassay of antimicrobial agents has been modified to increase sensitivity of the technique and to extend the range of antimicrobial agents to which it is applicable. This microtechnique requires only 0.02 ml of an unknown test sample for assay, and is capable of measuring minute concentrations of antibiotics in buffer, serum, and urine. In some cases, up to a 20-fold increase in sensitivity is gained relative to other published standardized methods and the error of this method is less than ±5%. Buffer standard curves have been established for this technique, concurrently with serum standard curves, yielding information on antimicrobial serum-binding and demonstrating linearity of the data points compared to the estimated regression line for the microconcentration ranges covered by this technique. This microassay technique is particularly well suited for pediatric research and for other investigations where sample volumes are small and quantitative accuracy is desired. Dilution of clinical samples to attain concentrations falling with the range of this assay makes the technique readily adaptable and suitable for general clinical pharmacological studies. The microassay technique has been standardized in buffer solutions and in normal human serum pools for the following antimicrobials: ampicillin, methicillin, penicillin G, oxacillin, cloxacillin, dicloxacillin, cephaloglycin, cephalexin, cephaloridine, cephalothin, erythromycin, rifamycin amino methyl piperazine, kanamycin, neomycin, streptomycin, colistin, polymyxin B, doxycycline, minocycline, oxytetracycline, tetracycline, and chloramphenicol. PMID:4986725

  3. Revisiting the finite temperature string method for the calculation of reaction tubes and free energies

    NASA Astrophysics Data System (ADS)

    Vanden-Eijnden, Eric; Venturoli, Maddalena

    2009-05-01

    An improved and simplified version of the finite temperature string (FTS) method [W. E, W. Ren, and E. Vanden-Eijnden, J. Phys. Chem. B 109, 6688 (2005)] is proposed. Like the original approach, the new method is a scheme to calculate the principal curves associated with the Boltzmann-Gibbs probability distribution of the system, i.e., the curves which are such that their intersection with the hyperplanes perpendicular to themselves coincides with the expected position of the system in these planes (where perpendicular is understood with respect to the appropriate metric). Unlike more standard paths such as the minimum energy path or the minimum free energy path, the location of the principal curve depends on global features of the energy or the free energy landscapes and thereby may remain appropriate in situations where the landscape is rough on the thermal energy scale and/or entropic effects related to the width of the reaction channels matter. Instead of using constrained sampling in hyperplanes as in the original FTS, the new method calculates the principal curve via sampling in the Voronoi tessellation whose generating points are the discretization points along this curve. As shown here, this modification results in greater algorithmic simplicity. As a by-product, it also gives the free energy associated with the Voronoi tessellation. The new method can be applied both in the original Cartesian space of the system or in a set of collective variables. We illustrate FTS on test-case examples and apply it to the study of conformational transitions of the nitrogen regulatory protein C receiver domain using an elastic network model and to the isomerization of solvated alanine dipeptide.

  4. Growth charts for non-growth hormone treated Prader-Willi syndrome.

    PubMed

    Butler, Merlin G; Lee, Jaehoon; Manzardo, Ann M; Gold, June-Anne; Miller, Jennifer L; Kimonis, Virginia; Driscoll, Daniel J

    2015-01-01

    The goal of this study was to generate and report standardized growth curves for weight, height, head circumference, and BMI for non-growth hormone-treated white male and female US subjects with Prader-Willi syndrome (PWS) between 3 and 18 years of age and develop standardized growth charts. Anthropometric measures (N = 133) were obtained according to standard methods from 120 non-growth hormone-treated white subjects (63 males and 57 females) with PWS between 3 and 18 years of age. Standardized growth curves were developed for the third, 10th, 25th, 50th, 75th, 90th, and 97th percentiles by using the LMS method for weight, height, head circumference, and BMI for PWS subjects along with the normative third, 50th, and 97th percentiles from national and international growth data. The LMS smoothing procedure summarized the distribution of the anthropometric variables at each age using three parameters: power of the Box-Cox transformation λ (L), median μ (M) and coefficient of variation δ (S). Weight, height, head circumference, and BMI standardized growth charts representing 7 percentile ranges were developed from 120 non-growth hormone-treated white male and female US subjects with PWS (age range: 3-18 years) and normative third, 50th, and 97th percentiles from national and international data. We encourage the use of syndrome-specific growth standards to examine and evaluate subjects with PWS when monitoring growth patterns and determining nutritional and obesity status. These variables can be influenced by culture, individual medical care, diet intervention, and physical activity plans. Copyright © 2015 by the American Academy of Pediatrics.

  5. [Determination of six main components in compound theophylline tablet by convolution curve method after prior separation by column partition chromatography

    NASA Technical Reports Server (NTRS)

    Zhang, S. Y.; Wang, G. F.; Wu, Y. T.; Baldwin, K. M. (Principal Investigator)

    1993-01-01

    On a partition chromatographic column in which the support is Kieselguhr and the stationary phase is sulfuric acid solution (2 mol/L), three components of compound theophylline tablet were simultaneously eluted by chloroform and three other components were simultaneously eluted by ammonia-saturated chloroform. The two mixtures were determined by computer-aided convolution curve method separately. The corresponding average recovery and relative standard deviation of the six components were as follows: 101.6, 1.46% for caffeine; 99.7, 0.10% for phenacetin; 100.9, 1.31% for phenobarbitone; 100.2, 0.81% for theophylline; 99.9, 0.81% for theobromine and 100.8, 0.48% for aminopyrine.

  6. Standards for the Analysis and Processing of Surface-Water Data and Information Using Electronic Methods

    USGS Publications Warehouse

    Sauer, Vernon B.

    2002-01-01

    Surface-water computation methods and procedures are described in this report to provide standards from which a completely automated electronic processing system can be developed. To the greatest extent possible, the traditional U. S. Geological Survey (USGS) methodology and standards for streamflow data collection and analysis have been incorporated into these standards. Although USGS methodology and standards are the basis for this report, the report is applicable to other organizations doing similar work. The proposed electronic processing system allows field measurement data, including data stored on automatic field recording devices and data recorded by the field hydrographer (a person who collects streamflow and other surface-water data) in electronic field notebooks, to be input easily and automatically. A user of the electronic processing system easily can monitor the incoming data and verify and edit the data, if necessary. Input of the computational procedures, rating curves, shift requirements, and other special methods are interactive processes between the user and the electronic processing system, with much of this processing being automatic. Special computation procedures are provided for complex stations such as velocity-index, slope, control structures, and unsteady-flow models, such as the Branch-Network Dynamic Flow Model (BRANCH). Navigation paths are designed to lead the user through the computational steps for each type of gaging station (stage-only, stagedischarge, velocity-index, slope, rate-of-change in stage, reservoir, tide, structure, and hydraulic model stations). The proposed electronic processing system emphasizes the use of interactive graphics to provide good visual tools for unit values editing, rating curve and shift analysis, hydrograph comparisons, data-estimation procedures, data review, and other needs. Documentation, review, finalization, and publication of records are provided for with the electronic processing system, as well as archiving, quality assurance, and quality control.

  7. Incorporating Experience Curves in Appliance Standards Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery

    2011-10-31

    The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as experience and modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics, and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners,more » clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the assumption-based approach of constant prices. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level demonstrates a benefit when incorporating experience. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.« less

  8. Waist Circumferences of Chilean Students: Comparison of the CDC-2012 Standard and Proposed Percentile Curves

    PubMed Central

    Gómez-Campos, Rossana; Lee Andruske, Cinthya; Hespanhol, Jefferson; Sulla Torres, Jose; Arruda, Miguel; Luarte-Rocha, Cristian; Cossio-Bolaños, Marco Antonio

    2015-01-01

    The measurement of waist circumference (WC) is considered to be an important means to control overweight and obesity in children and adolescents. The objectives of the study were to (a) compare the WC measurements of Chilean students with the international CDC-2012 standard and other international standards, and (b) propose a specific measurement value for the WC of Chilean students based on age and sex. A total of 3892 students (6 to 18 years old) were assessed. Weight, height, body mass index (BMI), and WC were measured. WC was compared with the CDC-2012 international standard. Percentiles were constructed based on the LMS method. Chilean males had a greater WC during infancy. Subsequently, in late adolescence, males showed values lower than those of the international standards. Chilean females demonstrated values similar to the standards until the age of 12. Subsequently, females showed lower values. The 85th and 95th percentiles were adopted as cutoff points for evaluating overweight and obesity based on age and sex. The WC of Chilean students differs from the CDC-2012 curves. The regional norms proposed are a means to identify children and adolescents with a high risk of suffering from overweight and obesity disorders. PMID:26184250

  9. Cardiac arrest risk standardization using administrative data compared to registry data

    PubMed Central

    Gaieski, David F.; Donnino, Michael W.; Nelson, Joshua I. M.; Mutter, Eric L.; Carr, Brendan G.; Abella, Benjamin S.; Wiebe, Douglas J.

    2017-01-01

    Background Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Methods and results Two risk standardization logistic regression models were developed using 2453 patients treated from 2000–2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the “gold standard” with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876–0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895–0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799–0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788–0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Conclusions Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA. PMID:28783754

  10. 7 CFR 42.142 - Curve for obtaining Operating Characteristic (OC) curve information for skip lot sampling and...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... CONDITION OF FOOD CONTAINERS Miscellaneous § 42.142 Curve for obtaining Operating Characteristic (OC) curve...

  11. Lamb wave dispersion and anisotropy profiling of composite plates via non-contact air-coupled and laser ultrasound

    NASA Astrophysics Data System (ADS)

    Harb, M. S.; Yuan, F. G.

    2015-03-01

    Conventional ultrasound inspection has been a standard non-destructive testing method for providing an in-service evaluation and noninvasive means of probing the interior of a structure. In particular, measurement of the propagation characteristics of Lamb waves allows inspection of plates that are typical components in aerospace industry. A rapid, complete non-contact hybrid approach for excitation and detection of Lamb waves is presented and applied for non-destructive evaluation of composites. An air-coupled transducer (ACT) excites ultrasonic waves on the surface of a composite plate, generating different propagating Lamb wave modes and a laser Doppler vibrometer (LDV) is used to measure the out-of-plane velocity of the plate. This technology, based on direct waveform imaging, focuses on measuring dispersive curves for A0 mode in a composite laminate and its anisotropy. A two-dimensional fast Fourier transform (2D-FFT) is applied to out-of-plane velocity data captured experimentally using LDV to go from the time-spatial domain to frequency-wavenumber domain. The result is a 2D array of amplitudes at discrete frequencies and wavenumbers for A0 mode in a given propagation direction along the composite. The peak values of the curve are then used to construct frequency wavenumber and phase velocity dispersion curves, which are also obtained directly using Snell's law and the incident angle of the excited ultrasonic waves. A high resolution and strong correlation between numerical and experimental results are observed for dispersive curves with Snell's law method in comparison to 2D-FFT method. Dispersion curves as well as velocity curves for the composite plate along different directions of wave propagation are measured. The visual read-out of the dispersion curves at different propagation directions as well as the phase velocity curves provide profiling and measurements of the composite anisotropy. The results proved a high sensitivity of the air-coupled and laser ultrasound technique in non-contact characterization of Lamb wave dispersion and material anisotropy of composite plates using simple Snell's law method.

  12. Using Floquet periodicity to easily calculate dispersion curves and wave structures of homogeneous waveguides

    NASA Astrophysics Data System (ADS)

    Hakoda, Christopher; Rose, Joseph; Shokouhi, Parisa; Lissenden, Clifford

    2018-04-01

    Dispersion curves are essential to any guided-wave-related project. The Semi-Analytical Finite Element (SAFE) method has become the conventional way to compute dispersion curves for homogeneous waveguides. However, only recently has a general SAFE formulation for commercial and open-source software become available, meaning that until now SAFE analyses have been variable and more time consuming than desirable. Likewise, the Floquet boundary conditions enable analysis of waveguides with periodicity and have been an integral part of the development of metamaterials. In fact, we have found the use of Floquet boundary conditions to be an extremely powerful tool for homogeneous waveguides, too. The nuances of using periodic boundary conditions for homogeneous waveguides that do not exhibit periodicity are discussed. Comparisons between this method and SAFE are made for selected homogeneous waveguide applications. The COMSOL Multiphysics software is used for the results shown, but any standard finite element software that can implement Floquet periodicity (user-defined or built-in) should suffice. Finally, we identify a number of complex waveguides for which dispersion curves can be found with relative ease by using the periodicity inherent to the Floquet boundary conditions.

  13. Managing the uncertainties of the streamflow data produced by the French national hydrological services

    NASA Astrophysics Data System (ADS)

    Puechberty, Rachel; Bechon, Pierre-Marie; Le Coz, Jérôme; Renard, Benjamin

    2015-04-01

    The French national hydrological services (NHS) manage the production of streamflow time series throughout the national territory. The hydrological data are made available to end-users through different web applications and the national hydrological archive (Banque Hydro). Providing end-users with qualitative and quantitative information on the uncertainty of the hydrological data is key to allow them drawing relevant conclusions and making appropriate decisions. Due to technical and organisational issues that are specific to the field of hydrometry, quantifying the uncertainty of hydrological measurements is still challenging and not yet standardized. The French NHS have made progress on building a consistent strategy to assess the uncertainty of their streamflow data. The strategy consists of addressing the uncertainties produced and propagated at each step of the data production with uncertainty analysis tools that are compatible with each other and compliant with international uncertainty guidance and standards. Beyond the necessary research and methodological developments, operational software tools and procedures are absolutely necessary to the data management and uncertainty analysis by field hydrologists. A first challenge is to assess, and if possible reduce, the uncertainty of streamgauging data, i.e. direct stage-discharge measurements. Interlaboratory experiments proved to be a very efficient way to empirically measure the uncertainty of a given streamgauging technique in given measurement conditions. The Q+ method (Le Coz et al., 2012) was developed to improve the uncertainty propagation method proposed in the ISO748 standard for velocity-area gaugings. Both empirical or computed (with Q+) uncertainty values can now be assigned in BAREME, which is the software used by the French NHS for managing streamgauging measurements. A second pivotal step is to quantify the uncertainty related to stage-discharge rating curves and their application to water level records to produce continuous discharge time series. The management of rating curves is also done using BAREME. The BaRatin method (Le Coz et al., 2014) was developed as a Bayesian approach of rating curve development and uncertainty analysis. Since BaRatin accounts for the individual uncertainties of gauging data used to build the rating curve, it was coupled with BAREME. The BaRatin method is still undergoing development and research, in particular to address non univocal or time-varying stage-discharge relations, due to hysteresis, variable backwater, rating shifts, etc. A new interface including new options is under development. The next steps are now to propagate the uncertainties of water level records, through uncertain rating curves, up to discharge time series and derived variables (e.g. annual mean flow) and statistics (e.g. flood quantiles). Bayesian tools are already available for both tasks but further validation and development is necessary for their integration in the operational data workflow of the French NHS. References Le Coz, J., Camenen, B., Peyrard, X., Dramais, G., 2012. Uncertainty in open-channel discharges measured with the velocity-area method. Flow Measurement and Instrumentation 26, 18-29. Le Coz, J., Renard, B., Bonnifait, L., Branger, F., Le Boursicaud, R., 2014. Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.

  14. Quantitative PCR for human herpesviruses 6 and 7.

    PubMed Central

    Secchiero, P; Zella, D; Crowley, R W; Gallo, R C; Lusso, P

    1995-01-01

    A quantitative PCR assay for the detection of human herpesvirus 6 (HHV-6) (variants A and B) and HHV-7 DNAs in clinical samples was developed. The assay uses a nonhomologous internal standard (IS) for each virus that is coamplified with the wild-type target sequence in the same vial and with the same pair of primers. This method allows for a correction of the variability of efficiency of the PCR technique. A standard curve is constructed for each experiment by coamplification of known quantities of the cloned HHV-6 or HHV-7 target templates with the respective IS. Absolute quantitation of the test samples is then achieved by determining the viral target/IS ratio of the hybridization signals of the amplification products and plotting this value against the standard curve. Using this assay, we quantitated the amount of HHV-6 or HHV-7 DNA in infected cell cultures and demonstrated an inhibitory effect of phosphonoformic acid on the replication of HHV-6 and HHV-7 in vitro. As the first clinical application of this procedure, we performed preliminary measurements of the loads of HHV-6 and HHV-7 in lymph nodes from patients with Hodgkin's disease and AIDS. Application of this quantitative PCR method should be helpful for elucidating the pathogenic roles of HHV-6 and HHV-7. PMID:7559960

  15. Identification of Reliable Components in Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS): a Data-Driven Approach across Metabolic Processes.

    PubMed

    Motegi, Hiromi; Tsuboi, Yuuri; Saga, Ayako; Kagami, Tomoko; Inoue, Maki; Toki, Hideaki; Minowa, Osamu; Noda, Tetsuo; Kikuchi, Jun

    2015-11-04

    There is an increasing need to use multivariate statistical methods for understanding biological functions, identifying the mechanisms of diseases, and exploring biomarkers. In addition to classical analyses such as hierarchical cluster analysis, principal component analysis, and partial least squares discriminant analysis, various multivariate strategies, including independent component analysis, non-negative matrix factorization, and multivariate curve resolution, have recently been proposed. However, determining the number of components is problematic. Despite the proposal of several different methods, no satisfactory approach has yet been reported. To resolve this problem, we implemented a new idea: classifying a component as "reliable" or "unreliable" based on the reproducibility of its appearance, regardless of the number of components in the calculation. Using the clustering method for classification, we applied this idea to multivariate curve resolution-alternating least squares (MCR-ALS). Comparisons between conventional and modified methods applied to proton nuclear magnetic resonance ((1)H-NMR) spectral datasets derived from known standard mixtures and biological mixtures (urine and feces of mice) revealed that more plausible results are obtained by the modified method. In particular, clusters containing little information were detected with reliability. This strategy, named "cluster-aided MCR-ALS," will facilitate the attainment of more reliable results in the metabolomics datasets.

  16. 76 FR 9696 - Equipment Price Forecasting in Energy Conservation Standards Analysis

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ... for particular efficiency design options, an empirical experience curve fit to the available data may be used to forecast future costs of such design option technologies. If a statistical evaluation indicates a low level of confidence in estimates of the design option cost trend, this method should not be...

  17. Teaching Keynes's Principle of Effective Demand Using the Aggregate Labor Market Diagram.

    ERIC Educational Resources Information Center

    Dalziel, Paul; Lavoie, Marc

    2003-01-01

    Suggests a method to teach John Keynes's principle of effective demand using a standard aggregate labor market diagram familiar to students taking advanced undergraduate macroeconomics courses. States the analysis incorporates Michal Kalecki's version to show Keynesian unemployment as a point on the aggregate labor demand curve inside the…

  18. Development of SYBR Green I Based Real-Time RT-PCR Assay for Specific Detection of Watermelon silver mottle Virus.

    PubMed

    Rao, Xueqin; Sun, Jie

    2015-09-01

    Watermelon silver mottle virus (WSMoV), which belongs to the genus Tospovirus , causes significant loss in Cucurbitaceae plants. Development of a highly sensitive and reliable detection method for WSMoV. Recombinant plasmids for targeting the sequence of nucleocapsid protein gene of WSMoV were constructed. SYBR Green I real-time PCR was established and evaluated with standard recombinant plasmids and 27 watermelon samples showing WSMoV infection symptoms. The recombinant plasmid was used as template for SYBR Green I real-time PCR to generate standard and melting curves. Melting curve analysis indicated no primer-dimers and non-specific products in the assay. No cross-reaction was observed with Capsicum chlorosis virus (genus Tospovirus ) and Cucumber mosaic virus (genus Cucumovirus). Repeatability tests indicated that inter-assay variability of the Ct values was 1.6%. A highly sensitive, reliable and rapid detection method of SYBR Green I real-time PCR for timely detection of WSMoV plants and vector thrips was established, which will facilitate disease forecast and control.

  19. [Determination of eight pesticide residues in tea by liquid chromatography-tandem mass spectrometry and its uncertainty evaluation].

    PubMed

    Hu, Beizhen; Cai, Haijiang; Song, Weihua

    2012-09-01

    A method was developed for the determination of eight pesticide residues (fipronil, imidacloprid, acetamiprid, buprofezin, triadimefon, triadimenol, profenofos, pyridaben) in tea by liquid chromatography-tandem mass spectrometry. The sample was extracted by accelerated solvent extraction with acetone-dichloromethane (1:1, v/v) as solvent, and the extract was then cleaned-up with a Carb/NH2 solid phase extraction (SPE) column. The separation was performed on a Hypersil Gold C, column (150 mm x 2. 1 mm, 5 microm) and with the gradient elution of acetonitrile and 0. 1% formic acid. The eight pesticides were determined in the modes of electrospray ionization (ESI) and multiple reaction monitoring (MRM). The analytes were quantified by matrix-matched internal standard method for imidacloprid and acetamiprid, by matrix-matched external standard method for the other pesticides. The calibration curves showed good linearity in 1 - 100 microg/L for fipronil, and in 5 -200 microg/L for the other pesticides. The limits of quantification (LOQs, S/N> 10) were 2 p.g/kg for fipronil and 10 microg/kg for the other pesticides. The average recoveries ranged from 75. 5% to 115.0% with the relative standard deviations of 2.7% - 7.7% at the spiked levels of 2, 5, 50 microg/kg for fipronil and 10, 50, 100 microg/kg for the other pesticides. The uncertainty evaluation for the results was carried out according to JJF 1059-1999 "Evaluation and Expression of Uncertainty in Measurement". Items constituting measurement uncertainty involved standard solution, weighing of sample, sample pre-treatment, and the measurement repeatability of the equipment were evaluated. The results showed that the measurement uncertainty is mainly due to sample pre-treatment, standard curves and measurement repeatability of the equipment. The method developed is suitable for the conformation and quantification of the pesticides in tea.

  20. More complicated than it looks: The vagaries of calculating intra-abdominal pressure

    PubMed Central

    Hamad, Nadia M.; Shaw, Janet M.; Nygaard, Ingrid E.; Coleman, Tanner J.; Hsu, Yvonne; Egger, Marlene; Hitchcock, Robert W.

    2013-01-01

    Activities thought to induce high intra-abdominal pressure (IAP), such as lifting weights, are restricted in women with pelvic floor disorders. Standardized procedures to assess IAP during activity are lacking and typically only focus on maximal IAP, variably defined. Our intent in this methods paper is to establish the best strategies for calculating maximal IAP and to add area under the curve and first moment of the area as potentially useful measures in understanding biologic effects of IAP. Thirteen women completed a range of activities while wearing an intra-vaginal pressure transducer. We first analyzed various strategies heuristically using data from 3 women. The measure that appeared to best represent maximal IAP was an average of the three, five or ten highest values, depending on activity, determined using a top down approach, with peaks at least 1 second apart using algorithms written for Matlab computer software, we then compared this strategy with others commonly reported in the literature quantitatively using data from 10 additional volunteers. Maximal IAP calculated using the top down approach differed for some, but not all, activities compared to the single highest peak or to averaging all peaks. We also calculated area under the curve, which allows for a time component, and first moment of the area, which maintains the time component while weighting pressure amplitude. We validated methods of assessing IAP using computer-generated sine waves. We offer standardized methods for assessing maximal, area under the curve and first moment of the area for IAP to improve future reporting and application of this clinically relevant measure in exercise science. PMID:23439349

  1. LC-MS/MS-based approach for obtaining exposure estimates of metabolites in early clinical trials using radioactive metabolites as reference standards.

    PubMed

    Zhang, Donglu; Raghavan, Nirmala; Chando, Theodore; Gambardella, Janice; Fu, Yunlin; Zhang, Duxi; Unger, Steve E; Humphreys, W Griffith

    2007-12-01

    An LC-MS/MS-based approach that employs authentic radioactive metabolites as reference standards was developed to estimate metabolite exposures in early drug development studies. This method is useful to estimate metabolite levels in studies done with non-radiolabeled compounds where metabolite standards are not available to allow standard LC-MS/MS assay development. A metabolite mixture obtained from an in vivo source treated with a radiolabeled compound was partially purified, quantified, and spiked into human plasma to provide metabolite standard curves. Metabolites were analyzed by LC-MS/MS using the specific mass transitions and an internal standard. The metabolite concentrations determined by this approach were found to be comparable to those determined by valid LC-MS/MS assays. This approach does not requires synthesis of authentic metabolites or the knowledge of exact structures of metabolites, and therefore should provide a useful method to obtain early estimates of circulating metabolites in early clinical or toxicological studies.

  2. Measurement of macrocyclic trichothecene in floor dust of water-damaged buildings using gas chromatography/tandem mass spectrometry—dust matrix effects

    PubMed Central

    Saito, Rena; Park, Ju-Hyeong; LeBouf, Ryan; Green, Brett J.; Park, Yeonmi

    2017-01-01

    Gas chromatography-tandem mass spectrometry (GC-MS/MS) was used to detect fungal secondary metabolites. Detection of verrucarol, the hydrolysis product of Stachybotrys chartarum macrocyclic trichothecene (MCT), was confounded by matrix effects associated with heterogeneous indoor environmental samples. In this study, we examined the role of dust matrix effects associated with GC-MS/ MS to better quantify verrucarol in dust as a measure of total MCT. The efficiency of the internal standard (ISTD, 1,12-dodecanediol), and application of a matrix-matched standard correction method in measuring MCT in floor dust of water-damaged buildings was additionally examined. Compared to verrucarol, ISTD had substantially higher matrix effects in the dust extracts. The results of the ISTD evaluation showed that without ISTD adjustment, there was a 280% ion enhancement in the dust extracts compared to neat solvent. The recovery of verrucarol was 94% when the matrix-matched standard curve without the ISTD was used. Using traditional calibration curves with ISTD adjustment, none of the 21 dust samples collected from water damaged buildings were detectable. In contrast, when the matrix-matched calibration curves without ISTD adjustment were used, 38% of samples were detectable. The study results suggest that floor dust of water-damaged buildings may contain MCT. However, the measured levels of MCT in dust using the GC-MS/MS method could be significantly under- or overestimated, depending on the matrix effects, the inappropriate ISTD, or combination of the two. Our study further shows that the routine application of matrix-matched calibration may prove useful in obtaining accurate measurements of MCT in dust derived from damp indoor environments, while no isotopically labeled verrucarol is available. PMID:26853932

  3. Measurement of macrocyclic trichothecene in floor dust of water-damaged buildings using gas chromatography/tandem mass spectrometry-dust matrix effects.

    PubMed

    Saito, Rena; Park, Ju-Hyeong; LeBouf, Ryan; Green, Brett J; Park, Yeonmi

    2016-01-01

    Gas chromatography-tandem mass spectrometry (GC-MS/MS) was used to detect fungal secondary metabolites. Detection of verrucarol, the hydrolysis product of Stachybotrys chartarum macrocyclic trichothecene (MCT), was confounded by matrix effects associated with heterogeneous indoor environmental samples. In this study, we examined the role of dust matrix effects associated with GC-MS/MS to better quantify verrucarol in dust as a measure of total MCT. The efficiency of the internal standard (ISTD, 1,12-dodecanediol), and application of a matrix-matched standard correction method in measuring MCT in floor dust of water-damaged buildings was additionally examined. Compared to verrucarol, ISTD had substantially higher matrix effects in the dust extracts. The results of the ISTD evaluation showed that without ISTD adjustment, there was a 280% ion enhancement in the dust extracts compared to neat solvent. The recovery of verrucarol was 94% when the matrix-matched standard curve without the ISTD was used. Using traditional calibration curves with ISTD adjustment, none of the 21 dust samples collected from water damaged buildings were detectable. In contrast, when the matrix-matched calibration curves without ISTD adjustment were used, 38% of samples were detectable. The study results suggest that floor dust of water-damaged buildings may contain MCT. However, the measured levels of MCT in dust using the GC-MS/MS method could be significantly under- or overestimated, depending on the matrix effects, the inappropriate ISTD, or combination of the two. Our study further shows that the routine application of matrix-matched calibration may prove useful in obtaining accurate measurements of MCT in dust derived from damp indoor environments, while no isotopically labeled verrucarol is available.

  4. Pre-analytical conditions in non-invasive prenatal testing of cell-free fetal RHD.

    PubMed

    Clausen, Frederik Banch; Jakobsen, Tanja Roien; Rieneck, Klaus; Krog, Grethe Risum; Nielsen, Leif Kofoed; Tabor, Ann; Dziegiel, Morten Hanefeld

    2013-01-01

    Non-invasive prenatal testing of cell-free fetal DNA (cffDNA) in maternal plasma can predict the fetal RhD type in D negative pregnant women. In Denmark, routine antenatal screening for the fetal RhD gene (RHD) directs the administration of antenatal anti-D prophylaxis only to women who carry an RhD positive fetus. Prophylaxis reduces the risk of immunization that may lead to hemolytic disease of the fetus and the newborn. The reliability of predicting the fetal RhD type depends on pre-analytical factors and assay sensitivity. We evaluated the testing setup in the Capital Region of Denmark, based on data from routine antenatal RHD screening. Blood samples were drawn at gestational age 25 weeks. DNA extracted from 1 mL of plasma was analyzed for fetal RHD using a duplex method for exon 7/10. We investigated the effect of blood sample transportation time (n = 110) and ambient outdoor temperatures (n = 1539) on the levels of cffDNA and total DNA. We compared two different quantification methods, the delta Ct method and a universal standard curve. PCR pipetting was compared on two systems (n = 104). The cffDNA level was unaffected by blood sample transportation for up to 9 days and by ambient outdoor temperatures ranging from -10 °C to 28 °C during transport. The universal standard curve was applicable for cffDNA quantification. Identical levels of cffDNA were observed using the two automated PCR pipetting systems. We detected a mean of 100 fetal DNA copies/mL at a median gestational age of 25 weeks (range 10-39, n = 1317). The setup for real-time PCR-based, non-invasive prenatal testing of cffDNA in the Capital Region of Denmark is very robust. Our findings regarding the transportation of blood samples demonstrate the high stability of cffDNA. The applicability of a universal standard curve facilitates easy cffDNA quantification.

  5. Method of estimating flood-frequency parameters for streams in Idaho

    USGS Publications Warehouse

    Kjelstrom, L.C.; Moffatt, R.L.

    1981-01-01

    Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are domiant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual miximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different than the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)

  6. A new form of the calibration curve in radiochromic dosimetry. Properties and results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamponi, Matteo, E-mail: mtamponi@aslsassari.it; B

    Purpose: This work describes a new form of the calibration curve for radiochromic dosimetry that depends on one fit parameter. Some results are reported to show that the new curve performs as well as those previously used and, more importantly, significantly reduces the dependence on the lot of films, the film orientation on the scanner, and the time after exposure. Methods: The form of the response curve makes use of the net optical densities ratio against the dose and has been studied by means of the Beer–Lambert law and a simple modeling of the film. The new calibration curve hasmore » been applied to EBT3 films exposed at 6 and 15 MV energy beams of linear accelerators and read-out in transmission mode by means of a flatbed color scanner. Its performance has been compared to that of two established forms of the calibration curve, which use the optical density and the net optical density against the dose. Four series of measurements with four lots of EBT3 films were used to evaluate the precision, accuracy, and dependence on the time after exposure, orientation on the scanner and lot of films. Results: The new calibration curve is roughly subject to the same dose uncertainty, about 2% (1 standard deviation), and has the same accuracy, about 1.5% (dose values between 50 and 450 cGy), as the other calibration curves when films of the same lot are used. Moreover, the new calibration curve, albeit obtained from only one lot of film, shows a good agreement with experimental data from all other lots of EBT3 films used, with an accuracy of about 2% and a relative dose precision of 2.4% (1 standard deviation). The agreement also holds for changes of the film orientation and of the time after exposure. Conclusions: The dose accuracy of this new form of the calibration curve is always equal to or better than those obtained from the two types of curves previously used. The use of the net optical densities ratio considerably reduces the dependence on the lot of films, the landscape/portrait orientation, and the time after exposure. This form of the calibration curve could become even more useful with new optical digital devices using monochromatic light.« less

  7. Rapid and high-precision measurement of sulfur isotope and sulfur concentration in sediment pore water by multi-collector inductively coupled plasma mass spectrometry.

    PubMed

    Bian, Xiao-Peng; Yang, Tao; Lin, An-Jun; Jiang, Shao-Yong

    2015-01-01

    We have developed a technique for the rapid, precise and accurate determination of sulfur isotopes (δ(34)S) by MC-ICP-MS applicable to a range of sulfur-bearing solutions of different sulfur content. The 10 ppm Alfa-S solution (ammonium sulfate solution, working standard of the lab of the authors) was used to bracket other Alfa-S solutions of different concentrations and the measured δ(34)SV-CDT values of Alfa-S solutions deviate from the reference value to varying degrees (concentration effect). The stability of concentration effect has been verified and a correction curve has been constructed based on Alfa-S solutions to correct measured δ(34)SV-CDT values. The curve has been applied to AS solutions (dissolved ammonium sulfate from the lab of the authors) and pore water samples successfully, validating the reliability of our analytical method. This method also enables us to measure the sulfur concentration simultaneously when analyzing the sulfur isotope composition. There is a strong linear correlation (R(2)>0.999) between the sulfur concentrations and the intensity ratios of samples and the standard. We have constructed a regression curve based on Alfa-S solutions and this curve has been successfully used to determine sulfur concentrations of AS solutions and pore water samples. The analytical technique presented here enable rapid, precise and accurate S isotope measurement for a wide range of sulfur-bearing solutions - in particular for pore water samples with complex matrix and varying sulfur concentrations. Also, simultaneous measurement of sulfur concentrations is available. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Screening method based on walking plantar impulse for detecting musculoskeletal senescence and injury.

    PubMed

    Fan, Yifang; Fan, Yubo; Li, Zhiyu; Newman, Tony; Lv, Changsheng; Zhou, Yi

    2013-01-01

    No consensus has been reached on how musculoskeletal system injuries or aging can be explained by a walking plantar impulse. We standardize the plantar impulse by defining a principal axis of plantar impulse. Based upon this standardized plantar impulse, two indexes are presented: plantar pressure record time series and plantar-impulse distribution along the principal axis of plantar impulse. These indexes are applied to analyze the plantar impulse collected by plantar pressure plates from three sources: Achilles tendon ruptures; elderly people (ages 62-71); and young people (ages 19-23). Our findings reveal that plantar impulse distribution curves for Achilles tendon ruptures change irregularly with subjects' walking speed changes. When comparing distribution curves of the young, we see a significant difference in the elderly subjects' phalanges plantar pressure record time series. This verifies our hypothesis that a plantar impulse can function as a means to assess and evaluate musculoskeletal system injuries and aging.

  9. Tracing Personalized Health Curves during Infections

    PubMed Central

    Schneider, David S.

    2011-01-01

    It is difficult to describe host–microbe interactions in a manner that deals well with both pathogens and mutualists. Perhaps a way can be found using an ecological definition of tolerance, where tolerance is defined as the dose response curve of health versus parasite load. To plot tolerance, individual infections are summarized by reporting the maximum parasite load and the minimum health for a population of infected individuals and the slope of the resulting curve defines the tolerance of the population. We can borrow this method of plotting health versus microbe load in a population and make it apply to individuals; instead of plotting just one point that summarizes an infection in an individual, we can plot the values at many time points over the course of an infection for one individual. This produces curves that trace the course of an infection through phase space rather than over a more typical timeline. These curves highlight relationships like recovery and point out bifurcations that are difficult to visualize with standard plotting techniques. Only nine archetypical curves are needed to describe most pathogenic and mutualistic host–microbe interactions. The technique holds promise as both a qualitative and quantitative approach to dissect host–microbe interactions of all kinds. PMID:21957398

  10. Analysis of the variation of atmospheric electric field during solar events

    NASA Astrophysics Data System (ADS)

    Tacza, J.; Raulin, J. P.

    2016-12-01

    We present the capability of a new network of electric field mill sensors to monitor the atmospheric electric field at various locations in South America. The first task is to obtain a diurnal curve of atmospheric electric field variations under fair weather conditions, which we will consider as a reference curve. To accomplish this, we made daily, monthly, seasonal and annual averages. For all sensor location, the results show significant similarities with the Carnegie curve. The Carnegie curve is the characteristic curve in universal time of atmospheric electric field in fair weather and one thinks it is related to the currents flowing in the global atmospheric electric circuit. Ultimately, we pretend to study departures of the daily observations from the standard curve. This difference can be caused by solar, geophysical and atmospheric phenomena such as the solar activity cycle, solar flares and energetic charged particles, galactic cosmic rays, seismic activity and/or specific meteorological events. As an illustration we investigate solar effects on the atmospheric electric field observed at CASLEO (Lat. 31.798°S, Long. 69.295°W, Altitude: 2552 masl) by the method of superposed epoch analysis, between January 2010 and December 2015.

  11. The Prevalence of Idiopathic Scoliosis in Eleven Year-Old Korean Adolescents: A 3 Year Epidemiological Study

    PubMed Central

    Lee, Jin-Young; Moon, Seong-Hwan; Kim, Han Jo; Suh, Bo-Kyung; Nam, Ji Hoon; Jung, Jae Kyun; Lee, Hwan-Mo

    2014-01-01

    Purpose School screening allows for early detection and early treatment of scoliosis, with the purpose of reducing the number of patients requiring surgical treatment. Children between 10 and 14 years old are considered as good candidates for school screening tests of scoliosis. The purpose of the present study was to assess the epidemiological findings of idiopathic scoliosis in 11-year-old Korean adolescents. Materials and Methods A total of 37856 11-year-old adolescents were screened for scoliosis. There were 17110 girls and 20746 boys. Adolescents who were abnormal by Moiré topography were subsequently assessed by standardized clinical and radiological examinations. A scoliotic curve was defined as 10° or more. Results The prevalence of scoliosis was 0.19% and most of the curves were small (10° to 19°). The ratio of boys to girls was 1:5.5 overall. Sixty adolescents (84.5%) exhibited single curvature. Thoracolumbar curves were the most common type of curve identified, followed by thoracic and lumbar curves. Conclusion The prevalence of idiopathic scoliosis among 11-year-old Korean adolescents was 0.19%. PMID:24719147

  12. Continuous relaxation and retardation spectrum method for viscoelastic characterization of asphalt concrete

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.

    2012-08-01

    This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from the complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is directly obtained from the master curves which are readily available from the standard characterization tests of linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers efficient alternative to the numerical computation of discrete spectra and can be easily used for modeling viscoelastic behavior. In this research, asphalt concrete specimens have been tested for linearly viscoelastic characterization. The linearly viscoelastic test data have been used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra matched very well with the approximate solutions. It is observed that the shape of the spectra is dependent on the master curve parameters. The continuous spectra thus obtained can easily be implemented in material mix design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analysis such as finite element analysis.

  13. STACCATO: a novel solution to supernova photometric classification with biased training sets

    NASA Astrophysics Data System (ADS)

    Revsbech, E. A.; Trotta, R.; van Dyk, D. A.

    2018-01-01

    We present a new solution to the problem of classifying Type Ia supernovae from their light curves alone given a spectroscopically confirmed but biased training set, circumventing the need to obtain an observationally expensive unbiased training set. We use Gaussian processes (GPs) to model the supernovae's (SN's) light curves, and demonstrate that the choice of covariance function has only a small influence on the GPs ability to accurately classify SNe. We extend and improve the approach of Richards et al. - a diffusion map combined with a random forest classifier - to deal specifically with the case of biased training sets. We propose a novel method called Synthetically Augmented Light Curve Classification (STACCATO) that synthetically augments a biased training set by generating additional training data from the fitted GPs. Key to the success of the method is the partitioning of the observations into subgroups based on their propensity score of being included in the training set. Using simulated light curve data, we show that STACCATO increases performance, as measured by the area under the Receiver Operating Characteristic curve (AUC), from 0.93 to 0.96, close to the AUC of 0.977 obtained using the 'gold standard' of an unbiased training set and significantly improving on the previous best result of 0.88. STACCATO also increases the true positive rate for SNIa classification by up to a factor of 50 for high-redshift/low-brightness SNe.

  14. Molecular Form Differences Between Prostate-Specific Antigen (PSA) Standards Create Quantitative Discordances in PSA ELISA Measurements

    PubMed Central

    McJimpsey, Erica L.

    2016-01-01

    The prostate-specific antigen (PSA) assays currently employed for the detection of prostate cancer (PCa) lack the specificity needed to differentiate PCa from benign prostatic hyperplasia and have high false positive rates. The PSA calibrants used to create calibration curves in these assays are typically purified from seminal plasma and contain many molecular forms (intact PSA and cleaved subforms). The purpose of this study was to determine if the composition of the PSA molecular forms found in these PSA standards contribute to the lack of PSA test reliability. To this end, seminal plasma purified PSA standards from different commercial sources were investigated by western blot (WB) and in multiple research grade PSA ELISAs. The WB results revealed that all of the PSA standards contained different mass concentrations of intact and cleaved molecular forms. Increased mass concentrations of intact PSA yielded higher immunoassay absorbance values, even between lots from the same manufacturer. Standardization of seminal plasma derived PSA calibrant molecular form mass concentrations and purification methods will assist in closing the gaps in PCa testing measurements that require the use of PSA values, such as the % free PSA and Prostate Health Index by increasing the accuracy of the calibration curves. PMID:26911983

  15. Qualis-SIS: automated standard curve generation and quality assessment for multiplexed targeted quantitative proteomic experiments with labeled standards.

    PubMed

    Mohammed, Yassene; Percy, Andrew J; Chambers, Andrew G; Borchers, Christoph H

    2015-02-06

    Multiplexed targeted quantitative proteomics typically utilizes multiple reaction monitoring and allows the optimized quantification of a large number of proteins. One challenge, however, is the large amount of data that needs to be reviewed, analyzed, and interpreted. Different vendors provide software for their instruments, which determine the recorded responses of the heavy and endogenous peptides and perform the response-curve integration. Bringing multiplexed data together and generating standard curves is often an off-line step accomplished, for example, with spreadsheet software. This can be laborious, as it requires determining the concentration levels that meet the required accuracy and precision criteria in an iterative process. We present here a computer program, Qualis-SIS, that generates standard curves from multiplexed MRM experiments and determines analyte concentrations in biological samples. Multiple level-removal algorithms and acceptance criteria for concentration levels are implemented. When used to apply the standard curve to new samples, the software flags each measurement according to its quality. From the user's perspective, the data processing is instantaneous due to the reactivity paradigm used, and the user can download the results of the stepwise calculations for further processing, if necessary. This allows for more consistent data analysis and can dramatically accelerate the downstream data analysis.

  16. X-ray fluorescence determination of Sn, Sb, Pb in lead-based bearing alloys using a solution technique

    NASA Astrophysics Data System (ADS)

    Tian, Lunfu; Wang, Lili; Gao, Wei; Weng, Xiaodong; Liu, Jianhui; Zou, Deshuang; Dai, Yichun; Huang, Shuke

    2018-03-01

    For the quantitative analysis of the principal elements in lead-antimony-tin alloys, directly X-ray fluorescence (XRF) method using solid metal disks introduces considerable errors due to the microstructure inhomogeneity. To solve this problem, an aqueous solution XRF method is proposed for determining major amounts of Sb, Sn, Pb in lead-based bearing alloys. The alloy samples were dissolved by a mixture of nitric acid and tartaric acid to eliminated the effects of microstructure of these alloys on the XRF analysis. Rh Compton scattering was used as internal standard for Sb and Sn, and Bi was added as internal standard for Pb, to correct for matrix effects, instrumental and operational variations. High-purity lead, antimony and tin were used to prepare synthetic standards. Using these standards, calibration curves were constructed for the three elements after optimizing the spectrometer parameters. The method has been successfully applied to the analysis of lead-based bearing alloys and is more rapid than classical titration methods normally used. The determination results are consistent with certified values or those obtained by titrations.

  17. [Immuno-affinity chromatographic purification: the study of methods to test citrinin in monascus products by high performance liquid chromatography].

    PubMed

    Qiu, Wen-qian; Liu, Xiao-xia; Zheng, Kui-cheng; Fu, Wu-sheng

    2012-08-01

    To establish a method to test citrinin (CIT) in monascus products by immuno-affinity chromatography (IAC)-high performance liquid chromatography (HPLC), and to detect the content of CIT in monascus products in Fujian province. IAC-HPLC was applied to detect the CIT content in monascus products. The conditions to use HPLC were as follows: C(18) reversed-phase chromatographic column, 150.0 mm×4.6mm×3 µm; mobile phase: the volume ratio of acetonitrile and 0.1% phosphoric acid solution at 65:35; isocratic elution; column temperature: 28°C; flow velocity: 0.8 ml/min; fluorescence detector, excitation wavelength (λ(ex)) was 331 nm and emission wavelength (λ(em)) was 500 nm. The standard curved was established by the linear regression of peak area (Y) to CIT content (X, ng/ml). The accuracy and precision of the method would then be verified. And 32 kinds of monascus products were determined and their color values were compared by this method. The standard curve established in this study was Y = 4634.8X-136.42, r = 1.000; whose limits of detection was 20 µg/kg and the limits of qualification was 64 µg/kg. In the range between 200 and 800 µg/kg, the standard recovery rate was 98.9% - 110.0% (n = 3), and the relative standard deviation (RSD) was 0.51% - 1.76%. Out of the 32 samples, CIT was detected from 11 samples of monascus rice, 9 samples of monascus powder and 5 samples of monascus pigments, the content was around 0.212 - 14.500 mg/kg. 4 out of 7 functional monascus samples were detected out CIT, whose content at 0.142 - 0.275 mg/kg. The method to detect CIT in monascus products by IAC-HPLC has been established.

  18. Soil conservation service curve number: How to take into account spatial and temporal variability

    NASA Astrophysics Data System (ADS)

    Rianna, M.; Orlando, D.; Montesarchio, V.; Russo, F.; Napolitano, F.

    2012-09-01

    The most commonly used method to evaluate rainfall excess, is the Soil Conservation Service (SCS) runoff curve number model. This method is based on the determination of the CN valuethat is linked with a hydrological soil group, cover type, treatment, hydrologic condition and antecedent runoff condition. To calculate the antecedent runoff condition the standard procedure needs to calculate the rainfall over the entire basin during the five days previous to the beginning of the event in order to simulate and then to use that volume of rainfall to calculate the antecedent moisture condition (AMC). This is necessary in order to obtain the correct curve number value. The value of the modified parameter is then kept constant throughout the whole event. The aim of this work is to evaluate the possibility of improving the curve number method. The various assumptions are focused on modifying those related to rainfall and the determination of an AMC condition and their role in the determination of the value of the curve number parameter. In order to consider the spatial variability we assumed that the rainfall which influences the AMC and the CN value does not account for the rainfall over the entire basin, but for the rainfall within a single cell where the basin domain is discretized. Furthermore, in order to consider the temporal variability of rainfall we assumed that the value of the CN of the single cell is not maintained constant during the whole event, but instead varies throughout it according to the time interval used to define the AMC conditions.

  19. Age dependence of Olympic weightlifting ability.

    PubMed

    Meltzer, D E

    1994-08-01

    There is increasing interest among Masters athletes in standards for comparing performances of competitors of different ages. The goal of this study was to develop one such age-comparison method by examining the age dependence of ability in Olympic-style weightlifting. Previous research on the deterioration of muscular strength and power with increasing age offers only limited guidance toward this goal; therefore, analysis of performance data was required. The variation of weightlifting ability as a function of age was examined by two different methods. First, cross-sectional data corresponding to two separate populations of Masters weightlifters were analyzed in detail. Then, a longitudinal study of 64 U.S. male Masters weightlifters was carried out, with performance versus age curves resulting from the two methods were very similar, reflecting approximately 1.0-1.5% x yr-1 deterioration rates. These curves were characterized by common features regarding the rate of decline of muscular power with increasing age, in apparent agreement with published data regarding Masters sprinters and jumpers. We tentatively conclude that Olympic weightlifting ability in trained subjects undergoes a nonlinear decline with age, in which the second derivative of the performance versus age curve repeatedly changes sign.

  20. PubMed Central

    Weichert, Alexander; Hagen, Andreas; Tchirikov, Michael; Fuchs, Ilka B.; Henrich, Wolfgang; Entezami, Michael

    2017-01-01

    Introduction Doppler sonography of the uterine artery (UA) is done to monitor pregnancies, because the detected flow patterns are useful to draw inferences about possible disorders of trophoblast invasion. Increased resistance in the UA is associated with an increased risk of preeclampsia and/or intrauterine growth restriction (IUGR) and perinatal mortality. In the absence of standardized figures, the normal ranges of the various available reference curves sometimes differ quite substantially from one another. The causes for this are differences in the flow patterns of the UA depending on the position of the pulsed Doppler gates as well as branching of the UA. Because of the discrepancies between the different reference curves and the practical problems this poses for guideline recommendations, we thought it would be useful to create our own reference curves for Doppler measurements of the UA obtained from a singleton cohort under standardized conditions. Material and Methods This retrospective cohort study was carried out in the Department of Obstetrics of the Charité – Universitätsmedizin Berlin, the Department for Obstetrics and Prenatal Medicine of the University Hospital Halle (Saale) and the Center for Prenatal Diagnostics and Human Genetics Kurfürstendamm 199. Available datasets from the three study locations were identified and reference curves were generated using the LMS method. Measured values were correlated with age of gestation, and a cubic model and Box-Cox power transformation (L), the median (M) and the coefficient of variation (S) were used to smooth the curves. Results 103 720 Doppler examinations of the UA carried out in singleton pregnancies from the 11th week of gestation (10 + 1 GW) were analyzed. The mean pulsatility index (Mean PI) showed a continuous decline over the course of pregnancy, dropping to a plateau of around 0.84 between the 23rd and 27th GW, after which it decreased again. Conclusion Age of gestation, placental position, position of pulsed Doppler gates and branching of the UA can all change the flow pattern. The mean pulsatility index (Mean PI) showed a continuous decrease over time. There were significant differences between our data and alternative reference curves. A system of classifying Doppler studies and a reference curve adapted to the current technology are urgently required to differentiate better between physiological and pathological findings. PMID:28579623

  1. A comparison of small-field tissue phantom ratio data generation methods for an Elekta Agility 6 MV photon beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richmond, Neil, E-mail: neil.richmond@stees.nhs.uk; Brackenridge, Robert

    2014-04-01

    Tissue-phantom ratios (TPRs) are a common dosimetric quantity used to describe the change in dose with depth in tissue. These can be challenging and time consuming to measure. The conversion of percentage depth dose (PDD) data using standard formulae is widely employed as an alternative method in generating TPR. However, the applicability of these formulae for small fields has been questioned in the literature. Functional representation has also been proposed for small-field TPR production. This article compares measured TPR data for small 6 MV photon fields against that generated by conversion of PDD using standard formulae to assess the efficacymore » of the conversion data. By functionally fitting the measured TPR data for square fields greater than 4 cm in length, the TPR curves for smaller fields are generated and compared with measurements. TPRs and PDDs were measured in a water tank for a range of square field sizes. The PDDs were converted to TPRs using standard formulae. TPRs for fields of 4 × 4 cm{sup 2} and larger were used to create functional fits. The parameterization coefficients were used to construct extrapolated TPR curves for 1 × 1 cm{sup 2}, 2 × 2-cm{sup 2}, and 3 × 3-cm{sup 2} fields. The TPR data generated using standard formulae were in excellent agreement with direct TPR measurements. The TPR data for 1 × 1-cm{sup 2}, 2 × 2-cm{sup 2}, and 3 × 3-cm{sup 2} fields created by extrapolation of the larger field functional fits gave inaccurate initial results. The corresponding mean differences for the 3 fields were 4.0%, 2.0%, and 0.9%. Generation of TPR data using a standard PDD-conversion methodology has been shown to give good agreement with our directly measured data for small fields. However, extrapolation of TPR data using the functional fit to fields of 4 × 4 cm{sup 2} or larger resulted in generation of TPR curves that did not compare well with the measured data.« less

  2. How Does One Assess the Accuracy of Academic Success Predictors? ROC Analysis Applied to University Entrance Factors

    ERIC Educational Resources Information Center

    Vivo, Juana-Maria; Franco, Manuel

    2008-01-01

    This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…

  3. Acid-base titrations for polyacids: Significance of the pK sub a and parameters in the Kern equation

    NASA Technical Reports Server (NTRS)

    Meites, L.

    1978-01-01

    A new method is suggested for calculating the dissociation constants of polyvalent acids, especially polymeric acids. In qualitative form the most significant characteristics of the titration curves are demonstrated and identified which are obtained when titrating the solutions of such acids with a standard base potentiometrically.

  4. Standard curves of placental weight and fetal/placental weight ratio in Japanese population: difference according to the delivery mode, fetal sex, or maternal parity.

    PubMed

    Ogawa, Masaki; Matsuda, Yoshio; Nakai, Akihito; Hayashi, Masako; Sato, Shoji; Matsubara, Shigeki

    2016-11-01

    Placental weight (PW) and fetal/placental weight ratio (F/P) have been considered to be useful parameters for understanding the pathophysiology of fetal growth. However, there have been no standard data on PW and F/P in Asian populations. This study was conducted to establish nomograms of PW and F/P in the Japanese population and to clarify characteristics of PW and F/P in this population. Included in the study were 79,590 Japanese cases: 58,871 vaginal and 20,719 cesarean deliveries at obstetrical facilities (2001-2002) and registered to the Japan Society of Obstetrics and Gynecology Database. Multiple pregnancies, stillbirths, and fetal anomalies were excluded. Nomograms of PW and F/P were created by spline methods in groups categorized by fetal sex (male or female) and maternal parity (primipara or multipara). Standard curves of PW and F/P were established, which indicated that PW and F/P were lower in cesarean deliveries than vaginal deliveries, especially during preterm period. PW differed depending on fetal sex and maternal parity. F/P differed according to fetal sex. We for the first time established standard curves of PW and F/P in the Japanese population with statistically sufficient data, which showed that PW and F/P were lower in cesarean deliveries. PW and F/P were also affected by fetal sex. These data might be useful to understand the pathophysiology between the fetus and placenta in utero. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys

    NASA Astrophysics Data System (ADS)

    Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.

    2016-08-01

    Highly reliable, fast and cost effective Spectro-photometric methods have been developed for the determination of Mn, Fe & Cu in aluminum master alloys, based on the development of calibration curves being prepared via laboratory standards. The calibration curves are designed so as to induce maximum sensitivity and minimum instrumental error (Mn 1mg/100ml-2mg/100ml, Fe 0.01mg/100ml-0.2mg/100ml and Cu 2mg/100ml-10mg/ 100ml). The developed Spectro-photometric methods produce accurate results while analyzing Mn, Fe and Cu in certified reference materials. Particularly, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50% etc. master alloys).Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represent the whole lot of a particular master alloy. Successive dilution technique was utilized to meet the calibration curve range. Furthermore, the workout methods were also found suitable for the analysis of said elements in ordinary aluminum alloys. However, it was observed that Cush owed a considerable interference with Fe, the later one may not be accurately measured in the presence of Cu greater than 0.01 %.

  6. Technical Note: Image filtering to make computer-aided detection robust to image reconstruction kernel choice in lung cancer CT screening

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohkubo, Masaki, E-mail: mook@clg.niigata-u.ac.jp

    Purpose: In lung cancer computed tomography (CT) screening, the performance of a computer-aided detection (CAD) system depends on the selection of the image reconstruction kernel. To reduce this dependence on reconstruction kernels, the authors propose a novel application of an image filtering method previously proposed by their group. Methods: The proposed filtering process uses the ratio of modulation transfer functions (MTFs) of two reconstruction kernels as a filtering function in the spatial-frequency domain. This method is referred to as MTF{sub ratio} filtering. Test image data were obtained from CT screening scans of 67 subjects who each had one nodule. Imagesmore » were reconstructed using two kernels: f{sub STD} (for standard lung imaging) and f{sub SHARP} (for sharp edge-enhancement lung imaging). The MTF{sub ratio} filtering was implemented using the MTFs measured for those kernels and was applied to the reconstructed f{sub SHARP} images to obtain images that were similar to the f{sub STD} images. A mean filter and a median filter were applied (separately) for comparison. All reconstructed and filtered images were processed using their prototype CAD system. Results: The MTF{sub ratio} filtered images showed excellent agreement with the f{sub STD} images. The standard deviation for the difference between these images was very small, ∼6.0 Hounsfield units (HU). However, the mean and median filtered images showed larger differences of ∼48.1 and ∼57.9 HU from the f{sub STD} images, respectively. The free-response receiver operating characteristic (FROC) curve for the f{sub SHARP} images indicated poorer performance compared with the FROC curve for the f{sub STD} images. The FROC curve for the MTF{sub ratio} filtered images was equivalent to the curve for the f{sub STD} images. However, this similarity was not achieved by using the mean filter or median filter. Conclusions: The accuracy of MTF{sub ratio} image filtering was verified and the method was demonstrated to be effective for reducing the kernel dependence of CAD performance.« less

  7. Peripheral Venous Waveform Analysis for Detecting Hemorrhage and Iatrogenic Volume Overload in a Porcine Model.

    PubMed

    Hocking, Kyle M; Sileshi, Ban; Baudenbacher, Franz J; Boyer, Richard B; Kohorst, Kelly L; Brophy, Colleen M; Eagle, Susan S

    2016-10-01

    Unrecognized hemorrhage and unguided resuscitation is associated with increased perioperative morbidity and mortality. The authors investigated peripheral venous waveform analysis (PIVA) as a method for quantitating hemorrhage as well as iatrogenic fluid overload during resuscitation. The authors conducted a prospective study on Yorkshire Pigs (n = 8) undergoing hemorrhage, autologous blood return, and administration of balanced crystalloid solution beyond euvolemia. Intra-arterial blood pressure, electrocardiogram, and pulse oximetry were applied to each subject. Peripheral venous pressure was measured continuously through an upper extremity standard peripheral IV catheter and analyzed with LabChart. The primary outcome was comparison of change in the first fundamental frequency (f1) of PIVA with standard and invasive monitoring and shock index (SI). Hemorrhage, return to euvolemia, and iatrogenic fluid overload resulted in significantly non-zero slopes of f1 amplitude. There were no significant differences in heart rate or mean arterial pressure, and a late change in SI. For the detection of hypovolemia the PIVA f1 amplitude change generated an receiver operator curves (ROC) curve with an area under the curve (AUC) of 0.93; heart rate AUC = 0.61; mean arterial pressure AUC = 0.48, and SI AUC = 0.72. For hypervolemia the f1 amplitude generated an ROC curve with an AUC of 0.85, heart rate AUC = 0.62, mean arterial pressure AUC = 0.63, and SI AUC = 0.65. In this study, PIVA demonstrated a greater sensitivity for detecting acute hemorrhage, return to euvolemia, and iatrogenic fluid overload compared with standard monitoring and SI. PIVA may provide a low-cost, minimally invasive monitoring solution for monitoring and resuscitating patients with perioperative hemorrhage.

  8. Validated method for quantification of genetically modified organisms in samples of maize flour.

    PubMed

    Kunert, Renate; Gach, Johannes S; Vorauer-Uhl, Karola; Engel, Edwin; Katinger, Hermann

    2006-02-08

    Sensitive and accurate testing for trace amounts of biotechnology-derived DNA from plant material is the prerequisite for detection of 1% or 0.5% genetically modified ingredients in food products or raw materials thereof. Compared to ELISA detection of expressed proteins, real-time PCR (RT-PCR) amplification has easier sample preparation and detection limits are lower. Of the different methods of DNA preparation CTAB method with high flexibility in starting material and generation of sufficient DNA with relevant quality was chosen. Previous RT-PCR data generated with the SYBR green detection method showed that the method is highly sensitive to sample matrices and genomic DNA content influencing the interpretation of results. Therefore, this paper describes a real-time DNA quantification based on the TaqMan probe method, indicating high accuracy and sensitivity with detection limits of lower than 18 copies per sample applicable and comparable to highly purified plasmid standards as well as complex matrices of genomic DNA samples. The results were evaluated with ValiData for homology of variance, linearity, accuracy of the standard curve, and standard deviation.

  9. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    EPA Science Inventory

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  10. What's the point? Hole-ography in Poincaré AdS

    NASA Astrophysics Data System (ADS)

    Espíndola, Ricardo; Güijosa, Alberto; Landetta, Alberto; Pedraza, Juan F.

    2018-01-01

    In the context of the AdS/CFT correspondence, we study bulk reconstruction of the Poincaré wedge of AdS_3 via hole-ography, i.e., in terms of differential entropy of the dual CFT_2. Previous work had considered the reconstruction of closed or open spacelike curves in global AdS, and of infinitely extended spacelike curves in Poincaré AdS that are subject to a periodicity condition at infinity. Working first at constant time, we find that a closed curve in Poincaré is described in the CFT by a family of intervals that covers the spatial axis at least twice. We also show how to reconstruct open curves, points and distances, and obtain a CFT action whose extremization leads to bulk points. We then generalize all of these results to the case of curves that vary in time, and discover that generic curves have segments that cannot be reconstructed using the standard hole-ographic construction. This happens because, for the nonreconstructible segments, the tangent geodesics fail to be fully contained within the Poincaré wedge. We show that a previously discovered variant of the hole-ographic method allows us to overcome this challenge, by reorienting the geodesics touching the bulk curve to ensure that they all remain within the wedge. Our conclusion is that all spacelike curves in Poincaré AdS can be completely reconstructed with CFT data, and each curve has in fact an infinite number of representations within the CFT.

  11. Fuels characterization studies. [jet fuels

    NASA Technical Reports Server (NTRS)

    Seng, G. T.; Antoine, A. C.; Flores, F. J.

    1980-01-01

    Current analytical techniques used in the characterization of broadened properties fuels are briefly described. Included are liquid chromatography, gas chromatography, and nuclear magnetic resonance spectroscopy. High performance liquid chromatographic ground-type methods development is being approached from several directions, including aromatic fraction standards development and the elimination of standards through removal or partial removal of the alkene and aromatic fractions or through the use of whole fuel refractive index values. More sensitive methods for alkene determinations using an ultraviolet-visible detector are also being pursued. Some of the more successful gas chromatographic physical property determinations for petroleum derived fuels are the distillation curve (simulated distillation), heat of combustion, hydrogen content, API gravity, viscosity, flash point, and (to a lesser extent) freezing point.

  12. A validated method for the quantitation of 1,1-difluoroethane using a gas in equilibrium method of calibration.

    PubMed

    Avella, Joseph; Lehrer, Michael; Zito, S William

    2008-10-01

    1,1-Difluoroethane (DFE), also known as Freon 152A, is a member of a class of compounds known as halogenated hydrocarbons. A number of these compounds have gained notoriety because of their ability to induce rapid onset of intoxication after inhalation exposure. Abuse of DFE has necessitated development of methods for its detection and quantitation in postmortem and human performance specimens. Furthermore, methodologies applicable to research studies are required as there have been limited toxicokinetic and toxicodynamic reports published on DFE. This paper describes a method for the quantitation of DFE using a gas chromatography-flame-ionization headspace technique that employs solventless standards for calibration. Two calibration curves using 0.5 mL whole blood calibrators which ranged from A: 0.225-1.350 to B: 9.0-180.0 mg/L were developed. These were evaluated for linearity (0.9992 and 0.9995), limit of detection of 0.018 mg/L, limit of quantitation of 0.099 mg/L (recovery 111.9%, CV 9.92%), and upper limit of linearity of 27,000.0 mg/L. Combined curve recovery results of a 98.0 mg/L DFE control that was prepared using an alternate technique was 102.2% with CV of 3.09%. No matrix interference was observed in DFE enriched blood, urine or brain specimens nor did analysis of variance detect any significant differences (alpha = 0.01) in the area under the curve of blood, urine or brain specimens at three identical DFE concentrations. The method is suitable for use in forensic laboratories because validation was performed on instrumentation routinely used in forensic labs and due to the ease with which the calibration range can be adjusted. Perhaps more importantly it is also useful for research oriented studies because the removal of solvent from standard preparation eliminates the possibility for solvent induced changes to the gas/liquid partitioning of DFE or chromatographic interference due to the presence of solvent in specimens.

  13. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    USGS Publications Warehouse

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB). Additional metrics of comparison can easily be incorporated into this type of analysis. By considering such a multifaceted approach, the top-performing models can easily be identified and considered for further research. The top-performing models can then provide a basis for future applications and explorations by scientists, engineers, managers, and practitioners to suit their own needs.

  14. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.

    PubMed

    Glover, Jack L; Hudson, Lawrence T

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.

  15. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    PubMed Central

    Glover, Jack L.; Hudson, Lawrence T.

    2016-01-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586

  16. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Glover, Jack L.; Hudson, Lawrence T.

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.

  17. Evidence for Periodicity in 43 year-long Monitoring of NGC 5548

    NASA Astrophysics Data System (ADS)

    Bon, E.; Zucker, S.; Netzer, H.; Marziani, P.; Bon, N.; Jovanović, P.; Shapovalova, A. I.; Komossa, S.; Gaskell, C. M.; Popović, L. Č.; Britzen, S.; Chavushyan, V. H.; Burenkov, A. N.; Sergeev, S.; La Mura, G.; Valdés, J. R.; Stalevski, M.

    2016-08-01

    We present an analysis of 43 years (1972 to 2015) of spectroscopic observations of the Seyfert 1 galaxy NGC 5548. This includes 12 years of new unpublished observations (2003 to 2015). We compiled about 1600 Hβ spectra and analyzed the long-term spectral variations of the 5100 Å continuum and the Hβ line. Our analysis is based on standard procedures, including the Lomb-Scargle method, which is known to be rather limited to such heterogeneous data sets, and a new method developed specifically for this project that is more robust and reveals a ˜5700 day periodicity in the continuum light curve, the Hβ light curve, and the radial velocity curve of the red wing of the Hβ line. The data are consistent with orbital motion inside the broad emission line region of the source. We discuss several possible mechanisms that can explain this periodicity, including orbiting dusty and dust-free clouds, a binary black hole system, tidal disruption events, and the effect of an orbiting star periodically passing through an accretion disk.

  18. Evaluation of Delamination Onset and Growth Characterization Methods under Mode I Fatigue Loading

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2013-01-01

    Double-cantilevered beam specimens of IM7/8552 graphite/epoxy from two different manufacturers were tested in static and fatigue to compare the material characterization data and to evaluate a proposed ASTM standard for generating Paris Law equations for delamination growth. Static results were used to generate compliance calibration constants for reducing the fatigue data, and a delamination resistance curve, GIR, for each material. Specimens were tested in fatigue at different initial cyclic GImax levels to determine a delamination onset curve and the delamination growth rate. The delamination onset curve equations were similar for the two sources. Delamination growth rate was calculated by plotting da/dN versus GImax on a log-log scale and fitting a Paris Law. Two different data reduction methods were used to calculate da/dN. To determine the effects of fiber-bridging, growth results were normalized by the delamination resistance curves. Paris Law exponents decreased by 31% to 37% after normalizing the data. Visual data records from the fatigue tests were used to calculate individual compliance constants from the fatigue data. The resulting da/dN versus GImax plots showed improved repeatability for each source, compared to using averaged static data. The Paris Law expressions for the two sources showed the closest agreement using the individually fit compliance data.

  19. Foot-ankle complex injury risk curves using calcaneus bone mineral density data.

    PubMed

    Yoganandan, Narayan; Chirvi, Sajal; Voo, Liming; DeVogel, Nicholas; Pintar, Frank A; Banerjee, Anjishnu

    2017-08-01

    Biomechanical data from post mortem human subject (PMHS) experiments are used to derive human injury probability curves and develop injury criteria. This process has been used in previous and current automotive crashworthiness studies, Federal safety standards, and dummy design and development. Human bone strength decreases as the individuals reach their elderly age. Injury risk curves using the primary predictor variable (e.g., force) should therefore account for such strength reduction when the test data are collected from PMHS specimens of different ages (age at the time of death). This demographic variable is meant to be a surrogate for fracture, often representing bone strength as other parameters have not been routinely gathered in previous experiments. However, bone mineral densities (BMD) can be gathered from tested specimens (presented in this manuscript). The objective of this study is to investigate different approaches of accounting for BMD in the development of human injury risk curves. Using simulated underbody blast (UBB) loading experiments conducted with the PMHS lower leg-foot-ankle complexes, a comparison is made between the two methods: treating BMD as a covariate and pre-scaling test data based on BMD. Twelve PMHS lower leg-foot-ankle specimens were subjected to UBB loads. Calcaneus BMD was obtained from quantitative computed tomography (QCT) images. Fracture forces were recorded using a load cell. They were treated as uncensored data in the survival analysis model which used the Weibull distribution in both methods. The width of the normalized confidence interval (NCIS) was obtained using the mean and ± 95% confidence limit curves. The mean peak forces of 3.9kN and 8.6kN were associated with the 5% and 50% probability of injury for the covariate method of deriving the risk curve for the reference age of 45 years. The mean forces of 5.4 kN and 9.2kN were associated with the 5% and 50% probability of injury for the pre-scaled method. The NCIS magnitudes were greater in the covariate-based risk curves (0.52-1.00) than in the risk curves based on the pre-scaled method (0.24-0.66). The pre-scaling method resulted in a generally greater injury force and a tighter injury risk curve confidence interval. Although not directly applicable to the foot-ankle fractures, when compared with the use of spine BMD from QCT scans to pre-scale the force, the calcaneus BMD scaled data produced greater force at the same risk level in general. Pre-scaling the force data using BMD is an alternate, and likely a more accurate, method instead of using covariate to account for the age-related bone strength change in deriving risk curves from biomechanical experiments using PMHS. Because of the proximity of the calcaneus bone to the impacting load, it is suggested to use and determine the BMD of the foot-ankle bone in future UBB and other loading conditions to derive human injury probability curves for the foot-ankle complex. Copyright © 2017. Published by Elsevier Ltd.

  20. Trainee competence in thoracoscopic esophagectomy in the prone position: evaluation using cumulative sum techniques.

    PubMed

    Oshikiri, Taro; Yasuda, Takashi; Yamamoto, Masashi; Kanaji, Shingo; Yamashita, Kimihiro; Matsuda, Takeru; Sumi, Yasuo; Nakamura, Tetsu; Fujino, Yasuhiro; Tominaga, Masahiro; Suzuki, Satoshi; Kakeji, Yoshihiro

    2016-09-01

    Minimally invasive esophagectomy (MIE) has less morbidity than the open approach. In particular, thoracoscopic esophagectomy in the prone position (TEP) has been performed worldwide. Using the cumulative sum control chart (CUSUM) method, this study aimed to confirm whether a trainee surgeon who learned established standards would become skilled in TEP with a shorter learning curve than that of the mentoring surgeon. Surgeon A performed TEP in 100 patients; the first 22 patients comprised period 1. His learning curve, defined based on the operation time (OT) of the thoracic procedure, was evaluated using the CUSUM method, and short-term outcomes were assessed. Another 22 patients underwent TEP performed by surgeon B, with outcomes compared to those of surgeon A's period 1. Using the CUSUM chart, the peak point of the thoracic procedure OT occurred at the 44th case in surgeon A's experience of 100 cases. With surgeon A's first 22 cases (period 1), the peak point of the thoracic procedure OT could not be confirmed and graph is expanding soaring at CUSUM chart. The CUSUM chart of surgeon B's experience of 22 cases clearly indicated that the peak point of the thoracic procedure OT occurred at the 17th case. The rate of recurrent laryngeal nerve palsy for surgeon B (9 %) was significantly lower than for surgeon A in period 1 (36 %) (p = 0.0266). There is some possibility for a trainee surgeon to attain the required basic skills to perform TEP in a relatively short period of time using a standardized procedure developed by a mentoring surgeon. The CUSUM method should be useful in evaluating trainee competence during an initial series of procedures, by assessing the learning curve defined by OT.

  1. 2D Potential Theory using Complex Algebra: New Perspectives for Interpretation of Marine Magnetic Anomaly

    NASA Astrophysics Data System (ADS)

    Le Maire, P.; Munschy, M.

    2017-12-01

    Interpretation of marine magnetic anomalies enable to perform accurate global kinematic models. Several methods have been proposed to compute the paleo-latitude of the oceanic crust as its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the estimation of the apparent inclination is qualitative, with the fit between magnetic data and forward models. We propose to apply a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called CMA (complex magnetic anomaly) is easier to use for forward modelling and inversion of the magnetic data. This complex equation allows to visualize the data in the complex plane (Argand diagram) and offers a new way to interpret data (curves to the right of the figure (B), while the curves to the left represent the standard display of magnetic anomalies (A) for the model displayed (C) at the bottom of the figure). In the complex plane, the effect of the apparent inclination is to rotate the curves, while on the standard display the evolution of the shape of the anomaly is more complicated (figure). This innovative method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma and the apparent inclination of the magnetization is computed.

  2. Numerical integration of discontinuous functions: moment fitting and smart octree

    NASA Astrophysics Data System (ADS)

    Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander

    2017-11-01

    A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in less integration points and in high accuracy.

  3. Quantitation of sugar content in pyrolysis liquids after acid hydrolysis using high-performance liquid chromatography without neutralization.

    PubMed

    Johnston, Patrick A; Brown, Robert C

    2014-08-13

    A rapid method for the quantitation of total sugars in pyrolysis liquids using high-performance liquid chromatography (HPLC) was developed. The method avoids the tedious and time-consuming sample preparation required by current analytical methods. It is possible to directly analyze hydrolyzed pyrolysis liquids, bypassing the neutralization step usually required in determination of total sugars. A comparison with traditional methods was used to determine the validity of the results. The calibration curve coefficient of determination on all standard compounds was >0.999 using a refractive index detector. The relative standard deviation for the new method was 1.13%. The spiked sugar recoveries on the pyrolysis liquid samples were between 104 and 105%. The research demonstrates that it is possible to obtain excellent accuracy and efficiency using HPLC to quantitate glucose after acid hydrolysis of polymeric and oligomeric sugars found in fast pyrolysis bio-oils without neutralization.

  4. High-throughput real-time quantitative reverse transcription PCR.

    PubMed

    Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F

    2006-02-01

    Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.

  5. An update on 'dose calibrator' settings for nuclides used in nuclear medicine.

    PubMed

    Bergeron, Denis E; Cessna, Jeffrey T

    2018-06-01

    Most clinical measurements of radioactivity, whether for therapeutic or imaging nuclides, rely on commercial re-entrant ionization chambers ('dose calibrators'). The National Institute of Standards and Technology (NIST) maintains a battery of representative calibrators and works to link calibration settings ('dial settings') to primary radioactivity standards. Here, we provide a summary of NIST-determined dial settings for 22 radionuclides. We collected previously published dial settings and determined some new ones using either the calibration curve method or the dialing-in approach. The dial settings with their uncertainties are collected in a comprehensive table. In general, current manufacturer-provided calibration settings give activities that agree with National Institute of Standards and Technology standards to within a few percent.

  6. Detection of Alicyclobacillus spp. in Fruit Juice by Combination of Immunomagnetic Separation and a SYBR Green I Real-Time PCR Assay

    PubMed Central

    Yuan, Yahong; Liu, Bin; Wang, Ling; Yue, Tianli

    2015-01-01

    An approach based on immunomagnetic separation (IMS) and SYBR Green I real-time PCR (real-time PCR) with species-specific primers and melting curve analysis was proposed as a rapid and effective method for detecting Alicyclobacillus spp. in fruit juices. Specific primers targeting the 16S rDNA sequences of Alicyclobacillus spp. were designed and then confirmed by the amplification of DNA extracted from standard strains and isolates. Spiked samples containing known amounts of target bacteria were used to obtain standard curves; the correlation coefficient was greater than 0.986 and the real-time PCR amplification efficiencies were 98.9%- 101.8%. The detection limit of the testing system was 2.8×101 CFU/mL. The coefficient of variation for intra-assay and inter-assay variability were all within the acceptable limit of 5%. Besides, the performance of the IMS-real-time PCR assay was further investigated by detecting naturally contaminated kiwi fruit juice; the sensitivity, specificity and accuracy were 91.7%, 95.9% and 95.3%, respectively. The established IMS-real-time PCR procedure provides a new method for identification and quantitative detection of Alicyclobacillus spp. in fruit juice. PMID:26488469

  7. [Determination of LF-VD refining furnace slag by X ray fluorescence spectrometry].

    PubMed

    Kan, Bin; Cheng, Jian-ping; Song, Zu-feng

    2004-10-01

    Eight components, i.e. TFe, CaO, MgO, Al2O3, SiO2, TiO2, MnO and P2O5 in refining furnace slag were determined by X ray fluorescence spectrometer. Because the content of CaO was high, the authors selected 12 national and departmental grade slag standard samples and prepared a series of synthetic standard samples by adding spectrally pure reagents to them. The calibration curve is suitable to the sample analysis of CaO, MgO and SiO2 with widely varying range. Meanwhile, the points on the curve are even. The samples were prepared at high temperature by adding Li2B4O7 as flux. The experiments for the selection of the sample preparation conditions about strip reagents, melting temperature and dulition ratio were carried out. The matrix effects on absorption and enhancement were corrected by means of PH model and theoretical alpha coefficient. Moreover, the precision and accuracy experiments were performed. In comparison with chemical analysis method, the quantitative analytical results for each component are satisfactory. The method has proven rapid, precise and simple.

  8. Beyond the SCS curve number: A new stochastic spatial runoff approach

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill and spill runoff behavior from areas of partial saturation. To facilitate event based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple runoff mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.

  9. Defining overweight and obesity among Greek children living in Thessaloniki: International versus local reference standards

    PubMed Central

    Christoforidis, A; Dimitriadou, M; Papadopolou, E; Stilpnopoulou, D; Katzos, G; Athanassiou-Metaxa, M

    2011-01-01

    Background: Body Mass Index (BMI) offers a simple and reasonable measure of obesity that, with the use of the appropriate reference, can help in the early detection of children with weight problems. Our aim was to compare the two most commonly used international BMI references and the national Greek BMI reference in identifying Greek children being overweight and obese. Methods: A group of 1557 children (820 girls and 737 boys, mean age: 11.42 ± 3.51 years) were studied. Weight and height was measured using standard methods, and BMI was calculated. Overweight and obesity were determined using the International Obesity Task Force (IOTF) standards, the Centers for Disease Control and Prevention (CDC) BMI-forage curves and the most recent Greek BMI-for-age curves. Results: Results showed that the IOTF's cut-off limits identifies a significantly higher prevalence of overweight (22.4%) compared with both the CDC's (11.8%, p=0.03) and the Greek's (7.4%, p=0.002) cut-off limits. However, the prevalence of obesity was generally increased when it was determined using the CDC's cut-off limits (13.9%) compared to the prevalence calculated with both the IOTF's (6.5%, p=0.05) and the Greek's (6.9%, n.s.) cut off limits. Conclusions: The use of the national Greek reference standards for BMI underestimates the true prevalence of overweight and obesity. On the contrary, both the IOTF and the CDC standards, although independently, detect an increased number of overweight and obese children and thus they should be adopted in the clinical practice for an earlier identification and a timelier intervention. PMID:22110296

  10. Bayesian analysis of stage-fall-discharge rating curves and their uncertainties

    NASA Astrophysics Data System (ADS)

    Mansanarez, V.; Le Coz, J.; Renard, B.; Lang, M.; Pierrefeu, G.; Vauchel, P.

    2016-09-01

    Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. We introduce a model with hydraulically interpretable parameters for estimating SFD rating curves and their uncertainties. Conventional power functions for channel and section controls are used. The transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The practical use of the method is demonstrated with two real twin-gauge stations, the Rhône River at Valence, France, and the Guthusbekken stream at station 0003ṡ0033, Norway. Those stations are typical of a channel control and a section control, respectively, when backwater-unaffected conditions apply. The performance of the method is investigated through sensitivity analysis to prior information on controls and to observations (i.e., available gaugings) for the station of Valence. These analyses suggest that precisely identifying SFD rating curves requires adapted gauging strategy and/or informative priors. The Madeira River, one of the largest tributaries of the Amazon, provides a challenging case typical of large, flat, tropical river networks where bed roughness can also be variable in addition to slope. In this case, the difference in staff gauge reference levels must be estimated as another uncertain parameter of the SFD model. The proposed Bayesian method is a valuable alternative solution to the graphical and empirical techniques still proposed in hydrometry guidance and standards.

  11. Anatomical curve identification

    PubMed Central

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  12. Evaluation of methods for characterizing the melting curves of a high temperature cobalt-carbon fixed point to define and determine its melting temperature

    NASA Astrophysics Data System (ADS)

    Lowe, David; Machin, Graham

    2012-06-01

    The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve should be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting takes place over typically some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches are or could be used; some purely practical, determining the point of inflection (POI) of the melting curve, some attempting to extrapolate to the liquidus temperature just at the end of melting, and a method that claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification that has not previously been applied to eutectic melting. The different methods have been applied to cobalt-carbon melting curves that were obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study it is recommended that the POI continue to be used as a pragmatic measure of temperature but where required a specified limits approach should be used to define and determine the melting temperature.

  13. Diagnostic accuracy of stress perfusion CMR in comparison with quantitative coronary angiography: fully quantitative, semiquantitative, and qualitative assessment.

    PubMed

    Mordini, Federico E; Haddad, Tariq; Hsu, Li-Yueh; Kellman, Peter; Lowrey, Tracy B; Aletras, Anthony H; Bandettini, W Patricia; Arai, Andrew E

    2014-01-01

    This study's primary objective was to determine the sensitivity, specificity, and accuracy of fully quantitative stress perfusion cardiac magnetic resonance (CMR) versus a reference standard of quantitative coronary angiography. We hypothesized that fully quantitative analysis of stress perfusion CMR would have high diagnostic accuracy for identifying significant coronary artery stenosis and exceed the accuracy of semiquantitative measures of perfusion and qualitative interpretation. Relatively few studies apply fully quantitative CMR perfusion measures to patients with coronary disease and comparisons to semiquantitative and qualitative methods are limited. Dual bolus dipyridamole stress perfusion CMR exams were performed in 67 patients with clinical indications for assessment of myocardial ischemia. Stress perfusion images alone were analyzed with a fully quantitative perfusion (QP) method and 3 semiquantitative methods including contrast enhancement ratio, upslope index, and upslope integral. Comprehensive exams (cine imaging, stress/rest perfusion, late gadolinium enhancement) were analyzed qualitatively with 2 methods including the Duke algorithm and standard clinical interpretation. A 70% or greater stenosis by quantitative coronary angiography was considered abnormal. The optimum diagnostic threshold for QP determined by receiver-operating characteristic curve occurred when endocardial flow decreased to <50% of mean epicardial flow, which yielded a sensitivity of 87% and specificity of 93%. The area under the curve for QP was 92%, which was superior to semiquantitative methods: contrast enhancement ratio: 78%; upslope index: 82%; and upslope integral: 75% (p = 0.011, p = 0.019, p = 0.004 vs. QP, respectively). Area under the curve for QP was also superior to qualitative methods: Duke algorithm: 70%; and clinical interpretation: 78% (p < 0.001 and p < 0.001 vs. QP, respectively). Fully quantitative stress perfusion CMR has high diagnostic accuracy for detecting obstructive coronary artery disease. QP outperforms semiquantitative measures of perfusion and qualitative methods that incorporate a combination of cine, perfusion, and late gadolinium enhancement imaging. These findings suggest a potential clinical role for quantitative stress perfusion CMR. Copyright © 2014 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  14. Principal Curves on Riemannian Manifolds.

    PubMed

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criteria of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  15. Quantifying the safety effects of horizontal curves on two-way, two-lane rural roads.

    PubMed

    Gooch, Jeffrey P; Gayah, Vikash V; Donnell, Eric T

    2016-07-01

    The objective of this study is to quantify the safety performance of horizontal curves on two-way, two-lane rural roads relative to tangent segments. Past research is limited by small samples sizes, outdated statistical evaluation methods, and unreported standard errors. This study overcomes these drawbacks by using the propensity scores-potential outcomes framework. The impact of adjacent curves on horizontal curve safety is also explored using a cross-sectional regression model of only horizontal curves. The models estimated in the present study used eight years of crash data (2005-2012) obtained from over 10,000 miles of state-owned two-lane rural roads in Pennsylvania. These data included information on roadway geometry (e.g., horizontal curvature, lane width, and shoulder width), traffic volume, roadside hazard rating, and the presence of various low-cost safety countermeasures (e.g., centerline and shoulder rumble strips, curve and intersection warning pavement markings, and aggressive driving pavement dots). Crash prediction is performed by means of mixed effects negative binomial regression using the explanatory variables noted previously, as well as attributes of adjacent horizontal curves. The results indicate that both the presence of a horizontal curve and its degree of curvature must be considered when predicting the frequency of total crashes on horizontal curves. Both are associated with an increase in crash frequency, which is consistent with previous findings in the literature. Mixed effects negative binomial regression models for total crash frequency on horizontal curves indicate that the distance to adjacent curves is not statistically significant. However, the degree of curvature of adjacent curves in close proximity (within 0.75 miles) was found to be statistically significant and negatively correlated with crash frequency on the subject curve. This is logical, as drivers exiting a sharp curve are likely to be driving slower and with more awareness as they approach the next horizontal curve. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Determination of Glyphosate, its Degradation Product Aminomethylphosphonic Acid, and Glufosinate, in Water by Isotope Dilution and Online Solid-Phase Extraction and Liquid Chromatography/Tandem Mass Spectrometry

    USGS Publications Warehouse

    Meyer, Michael T.; Loftin, Keith A.; Lee, Edward A.; Hinshaw, Gary H.; Dietze, Julie E.; Scribner, Elisabeth A.

    2009-01-01

    The U.S. Geological Survey method (0-2141-09) presented is approved for the determination of glyphosate, its degradation product aminomethylphosphonic acid (AMPA), and glufosinate in water. It was was validated to demonstrate the method detection levels (MDL), compare isotope dilution to standard addition, and evaluate method and compound stability. The original method USGS analytical method 0-2136-01 was developed using liquid chromatography/mass spectrometry and quantitation by standard addition. Lower method detection levels and increased specificity were achieved in the modified method, 0-2141-09, by using liquid chromatography/tandem mass spectrometry (LC/MS/MS). The use of isotope dilution for glyphosate and AMPA and pseudo isotope dilution of glufosinate in place of standard addition was evaluated. Stable-isotope labeled AMPA and glyphosate were used as the isotope dilution standards. In addition, the stability of glyphosate and AMPA was studied in raw filtered and derivatized water samples. The stable-isotope labeled glyphosate and AMPA standards were added to each water sample and the samples then derivatized with 9-fluorenylmethylchloroformate. After derivatization, samples were concentrated using automated online solid-phase extraction (SPE) followed by elution in-line with the LC mobile phase; the compounds separated and then were analyzed by LC/MS/MS using electrospray ionization in negative-ion mode with multiple-reaction monitoring. The deprotonated derivatized parent molecule and two daughter-ion transition pairs were identified and optimized for glyphosate, AMPA, glufosinate, and the glyphosate and AMPA stable-isotope labeled internal standards. Quantitative comparison between standard addition and isotope dilution was conducted using 473 samples analyzed between April 2004 and June 2006. The mean percent difference and relative standard deviation between the two quantitation methods was 7.6 plus or minus 6.30 (n = 179), AMPA 9.6 plus or minus 8.35 (n = 206), and glufosinate 9.3 plus or minus 9.16 (n = 16). The analytical variation of the method, comparison of quantitation by isotope dilution and multipoint linear regressed standard curves, and method detection levels were evaluated by analyzing six sets of distilled-water, groundwater, and surface-water samples spiked in duplicate at 0.0, 0.05, 0.10 and 0.50 microgram per liter and analyzed on 6 different days during 1 month. The grand means of the normalized concentration percentage recovery for glyphosate, AMPA, and glufosinate among all three matrices and spiked concentrations ranged from 99 to 114 plus or minus 2 to 7 percent of the expected spiked concentration. The grand mean of the percentage difference between concentrations calculated by standard addition and linear regressed multipoint standard curves ranged from 8 to 15 plus or minus 2 to 9 percent for the three compounds. The method reporting levels calculated from all the 0.05- microgram per liter spiked samples were 0.02 microgram per liter for all three compounds. Compound stability experiments were conducted on 10 samples derivatized four times for periods between 136 to 269 days. The glyphosate and AMPA concentrations remained relatively constant in samples held up to 136 days before derivatization. The half life of glyphosate varied from 169 to 223 days in the underivatized samples. Derivatized samples were analyzed the day after derivitization, and again 54 and 64 days after derivatization. The derivatized samples analyzed at days 52 and 64 were within 20 percent of the concentrations of the derivatized samples analyzed the day after derivatization.

  17. Improved modeling of in vivo confocal Raman data using multivariate curve resolution (MCR) augmentation of ordinary least squares models.

    PubMed

    Hancewicz, Thomas M; Xiao, Chunhong; Zhang, Shuliang; Misra, Manoj

    2013-12-01

    In vivo confocal Raman spectroscopy has become the measurement technique of choice for skin health and skin care related communities as a way of measuring functional chemistry aspects of skin that are key indicators for care and treatment of various skin conditions. Chief among these techniques are stratum corneum water content, a critical health indicator for severe skin condition related to dryness, and natural moisturizing factor components that are associated with skin protection and barrier health. In addition, in vivo Raman spectroscopy has proven to be a rapid and effective method for quantifying component penetration in skin for topically applied skin care formulations. The benefit of such a capability is that noninvasive analytical chemistry can be performed in vivo in a clinical setting, significantly simplifying studies aimed at evaluating product performance. This presumes, however, that the data and analysis methods used are compatible and appropriate for the intended purpose. The standard analysis method used by most researchers for in vivo Raman data is ordinary least squares (OLS) regression. The focus of work described in this paper is the applicability of OLS for in vivo Raman analysis with particular attention given to use for non-ideal data that often violate the inherent limitations and deficiencies associated with proper application of OLS. We then describe a newly developed in vivo Raman spectroscopic analysis methodology called multivariate curve resolution-augmented ordinary least squares (MCR-OLS), a relatively simple route to addressing many of the issues with OLS. The method is compared with the standard OLS method using the same in vivo Raman data set and using both qualitative and quantitative comparisons based on model fit error, adherence to known data constraints, and performance against calibration samples. A clear improvement is shown in each comparison for MCR-OLS over standard OLS, thus supporting the premise that the MCR-OLS method is better suited for general-purpose multicomponent analysis of in vivo Raman spectral data. This suggests that the methodology is more readily adaptable to a wide range of component systems and is thus more generally applicable than standard OLS.

  18. Quantification of almond skin polyphenols by liquid chromatography-mass spectrometry.

    PubMed

    Bolling, Bradley W; Dolnikowski, Gregory; Blumberg, Jeffrey B; Oliver Chen, C Y

    2009-01-01

    Reverse phase HPLC coupled to negative mode electrospray ionization (ESI) mass spectrometry (MS) was used to quantify 16 flavonoids and 2 phenolic acids from almond skin extracts. Calibration curves of standard compounds were run daily and daidzein was used as an internal standard. The inter-day relative standard deviation (RSD) of standard curve slopes ranged from 13% to 25% of the mean. On column (OC) limits of detection (LOD) for polyphenols ranged from 0.013 to 1.4 pmol, and flavonoid glycosides had a 7-fold greater sensitivity than aglycones. Limits of quantification were 0.043 to 2.7 pmol OC, with a mean of 0.58 pmol flavonoid OC. Mean inter-day RSD of polyphenols in almond skin extract was 6.8% with a range of 4% to 11%, and intra-day RSD was 2.4%. Liquid nitrogen (LN(2)) or hot water (HW) blanching was used to facilitate removal of the almond skins prior to extraction using assisted solvent extraction (ASE) or steeping with acidified aqueous methanol. Recovery of polyphenols was greatest in HW blanched almond extracts with a mean value of 2.1 mg/g skin. ASE and steeping extracted equivalent polyphenols, although ASE of LN(2) blanched skins yielded 52% more aglycones and 23% less flavonoid glycosides. However, the extraction methods did not alter flavonoid profile of HW blanched almond skins. The recovery of polyphenolic components that were spiked into almond skins before the steeping extraction was 97% on a mass basis. This LC-MS method presents a reliable means of quantifying almond polyphenols.

  19. Quantification of Almond Skin Polyphenols by Liquid Chromatography-Mass Spectrometry

    PubMed Central

    Bolling, Bradley W.; Dolnikowski, Gregory; Blumberg, Jeffrey B.; Oliver Chen, C.Y.

    2014-01-01

    Reverse phase HPLC coupled to negative mode electrospray ionization (ESI) mass spectrometry (MS) was used to quantify 16 flavonoids and 2 phenolic acids from almond skin extracts. Calibration curves of standard compounds were run daily and daidzein was used as an internal standard. The inter-day relative standard deviation (RSD) of standard curve slopes ranged from 13% to 25% of the mean. On column (OC) limits of detection (LOD) for polyphenols ranged from 0.013 to 1.4 pmol, and flavonoid glycosides had a 7-fold greater sensitivity than aglycones. Limits of quantification were 0.043 to 2.7 pmol OC, with a mean of 0.58 pmol flavonoid OC. Mean inter-day RSD of polyphenols in almond skin extract was 6.8% with a range of 4% to 11%, and intra-day RSD was 2.4%. Liquid nitrogen (LN2) or hot water (HW) blanching was used to facilitate removal of the almond skins prior to extraction using assisted solvent extraction (ASE) or steeping with acidified aqueous methanol. Recovery of polyphenols was greatest in HW blanched almond extracts with a mean value of 2.1 mg/g skin. ASE and steeping extracted equivalent polyphenols, although ASE of LN2 blanched skins yielded 52% more aglycones and 23% less flavonoid glycosides. However, the extraction methods did not alter flavonoid profile of HW blanched almond skins. The recovery of polyphenolic components that were spiked into almond skins before the steeping extraction was 97% on a mass basis. This LC-MS method presents a reliable means of quantifying almond polyphenols. PMID:19490319

  20. Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.

    PubMed

    Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C

    2014-05-01

    The work accomplished within the participation at the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time in IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracer. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracer method was a standard (60)Co solution. The sources were prepared from the mixture (60)Co+(99)Tc solution and a general extrapolation curve, type: N(βTc-99)/(M)(Tc-99)=f [1-ε(Co-60)], was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.

  1. Determination of PCBs in fish using enzyme-linked immunosorbent assay (ELISA)

    USGS Publications Warehouse

    Lasrado, J.A.; Santerre, C.R.; Zajicek, J.L.; Stahl, J.R.; Tillitt, D.E.; Deardorff, D.

    2003-01-01

    Polychlorinated biphenyls (PCBs) were determined in fish tissue using an enzyme-linked immunosorbent assay (ELISA). Standard curves for Aroclor 1248, 1254, and 1260 in catfish tissue were developed with ranges from 0.05 to 0.5 ppm and 0.5 to 5.0 ppm. Wild fish were initially analyzed using gas chromatography/electron-capture detection (GC/ECD) and those having residues within the standard curve ranges were analyzed with ELISA. Results obtained using ELISA and GC/ECD were not significantly different (p < 0.05) from 0.05 to 0.5 ppm. From 0.5 to 5.0 ppm, the standard curve for Aroclor 1254 was the best predictor of total PCB in wild fish samples.

  2. Reference intervals and percentiles for carotid-femoral pulse wave velocity in a healthy population aged between 9 and 87 years.

    PubMed

    Diaz, Alejandro; Zócalo, Yanina; Bia, Daniel; Wray, Sandra; Fischer, Edmundo Cabrera

    2018-04-01

    There is little information regarding age-related reference intervals (RIs) of carotid-femoral pulse wave velocity (cfPWV) for large healthy populations in South America. The aims of this study were to determine cfPWV RIs and percentiles in a cohort of healthy children, adolescents, and adults and to generate year-to-year percentile curves and body-height percentile curves for children and adolescents. cfPWV was measured in 1722 healthy participants with no cardiovascular risk factors (9-87 years, 60% men). First, RIs were evaluated for males and females through correlation and covariate analysis. Then, mean and standard deviation age-related equations were obtained for cfPWV using parametric regression methods based on fractional polynomials and age-specific (year-to-year) percentile curves that were defined using the standard normal distribution. Age-specific first, 2.5th, 5th, 10th, 25th, 50th, 75th, 90th, 95th, 97.5th, and 99th percentile curves were calculated. Finally, height-related cfPWV percentile curves for children and adolescents (<21 years) were established. After adjusting for age and blood pressure differences with respect to females, males showed higher cfPWV levels (6.60 vs 6.45 m/s; P < .01). Thus, specific RIs for males and females were reported. The study provides the largest database to date concerning cfPWV in healthy people from Argentina. Specific RIs and percentiles of cfPWV are now available according to age and sex. Specific percentiles of cfPWV according to body height were reported for people younger than 21 years. ©2018 Wiley Periodicals, Inc.

  3. Development and Evaluation of a Novel Curved Biopsy Device for CT-Guided Biopsy of Lesions Unreachable Using Standard Straight Needle Trajectories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Hagen, Maximilian Franz, E-mail: mschulze@ukaachen.de; Pfeffer, Jochen; Zimmermann, Markus

    PurposeTo evaluate the feasibility of a novel curved CT-guided biopsy needle prototype with shape memory to access otherwise not accessible biopsy targets.Methods and MaterialsA biopsy needle curved by 90° with specific radius was designed. It was manufactured using nitinol to acquire shape memory, encased in a straight guiding trocar to be driven out for access of otherwise inaccessible targets. Fifty CT-guided punctures were conducted in a biopsy phantom and 10 CT-guided punctures in a swine corpse. Biposies from porcine liver and muscle tissue were separately gained using the biopsy device, and histological examination was performed subsequently.ResultsMean time for placement ofmore » the trocar and deployment of the inner biopsy needle was ~205 ± 69 and ~93 ± 58 s, respectively, with a mean of ~4.5 ± 1.3 steps to reach adequate biopsy position. Mean distance from the tip of the needle to the target was ~0.7 ± 0.8 mm. CT-guided punctures in the swine corpse took relatively longer and required more biopsy steps (~574 ± 107 and ~380 ± 148 s, 8 ± 2.6 steps). Histology demonstrated appropriate tissue samples in nine out of ten cases (90%).ConclusionsTargets that were otherwise inaccessible via standard straight needle trajectories could be successfully reached with the curved biopsy needle prototype. Shape memory and preformed size with specific radius of the curved needle simplify the target accessibility with a low risk of injuring adjacent structures.« less

  4. DICOM image quantification secondary capture (DICOM IQSC) integrated with numeric results, regions, and curves: implementation and applications in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan

    2017-03-01

    In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit of DICOM IQSC has been developed to implement the SC-centered information integration of quantitative analysis for routine practice of nuclear medicine. Primary experiments show that the DICOM IQSC method is simple and easy to implement seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinic research are also discussed.

  5. Validation of an Analytical Method for Determination of Benzo[a]pyrene Bread using QuEChERS Method by GC-MS

    PubMed Central

    Eslamizad, Samira; Yazdanpanah, Hassan; Javidnia, Katayon; Sadeghi, Ramezan; Bayat, Mitra; Shahabipour, Sara; Khalighian, Najmeh; Kobarfard, Farzad

    2016-01-01

    A fast and simple modified QuEChERS (quick, easy, cheap, rugged and safe) extraction method based on spiked calibration curves and direct sample introduction was developed for determination of Benzo [a] pyrene (BaP) in bread by gas chromatography-mass spectrometry single quadrupole selected ion monitoring (GC/MS-SQ-SIM). Sample preparation includes: extraction of BaP into acetone followed by cleanup with dispersive solid phase extraction. The use of spiked samples for constructing the calibration curve substantially reduced adverse matrix-related effects. The average recovery of BaP at 6 concentration levels was in range of 95-120%. The method was proved to be reproducible with relative standard deviation less than 14.5% for all of the concentration levels. The limit of detection and limit of quantification were 0.3 ng/g and 0.5 ng/g, respectively. Correlation coefficient of 0.997 was obtained for spiked calibration standards over the concentration range of 0.5-20 ng/g. To the best of our knowledge, this is the first time that a QuEChERS method is used for the analysis of BaP in breads. The developed method was used for determination of BaP in 29 traditional (Sangak) and industrial (Senan) bread samples collected from Tehran in 2014. These results showed that two Sangak samples were contaminated with BaP. Therefore, a comprehensive survey for monitoring of BaP in Sangak bread samples seems to be needed. This is the first report concerning contamination of bread samples with BaP in Iran. PMID:27642317

  6. J{sub IC} evaluation of the smooth and the side-grooved CT specimen in the reactor pressure vessel steel (SA508-3)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oh, S.W.; Lim, M.B.; Kim, T.H.

    1993-12-31

    The elastic-plastic fracture toughness J{sub IC} of SA508-3 forging steel was investigated by using CT-type specimens. The thickness of the smooth specimen is B{sub 0} = 25.4 mm and the side groove specimen is B{sub N} = 20.4 mm and the side groove deep is S{center_dot}G = [(B{sub 0} {minus} B{sub N})/B{sub 0}] {times} 100 = 19.7% and the groove angle is 90{degree}. The J{sub IC} tests estimated according to the method proposed in the ASTM E813-81 and JSME S001-81. The side-grooved specimen have the advantage of J{sub IC} estimation, it is much easier to determine the onset of ductilemore » tearing by the R-curve method and it improved accuracy and scatter of the toughness values thus determined, provided all the size-requirements for the specimen prescribed in the JSME method were satisfied. But it is difficult to find by the ASTM method. The critical stretched zone width (SZW{sub C}) of the side-grooved specimens found to be smaller than that previously determined for the standard CT specimens without side-grooves. This was attributed to higher triaxiality produced by the side-grooves. The stretched zone width method gave slightly larger J{sub IC} values than those by the R-curve method for SA508-3, as has been observed for the standard specimen without side-groove.« less

  7. Measurement of trace elements in tree rings using the PIXE method

    NASA Astrophysics Data System (ADS)

    Aoki, Toru; Katayama, Yukio; Kagawa, Akira; Koh, Susumu; Yoshida, Kohji

    1998-03-01

    Standard materials were prepared in order to calculate element concentrations in tree samples using the particle induced X-ray emission (PIXE) method. Five standard solutions (1) Ti, Fe, Cu, As, Rb, Sr; (2) Ca, V, Co, Zn, As, Rb; (3) Ti, Mn, Ni, As, Sr; (4) K, Mn, Co, As, Rb, Sr; and (5) Ca, Mn, Cu, As, Rb, Sr, were added to filter papers. The dried filter papers were used as standard samples. Pellets of Pepperbush leaves (National Institute for Environmental Studies (NIES)) and Peach leaves (National Institute of Standards and Technology (NIST)) were used as references. The peak counts of Ca, Mn, Cu, Zn, Rb, and Sr in samples taken from a kaki ( Diospros kaki Thunb.) were measured and the concentrations (ppm) of the elements were calculated using the yield curve obtained from the standard filter papers. The concentrations of Mn, Zn, Rb, and Ca were compared with the data obtained from a separate INAA analysis. Concentrations of Mn, Zn, and Ca obtained by both methods were almost the same, but the concentrations of Rb differed slightly. The amounts of trace elements in samples taken from a sugi ( Cryptomeria japonica D. Don) were also measured.

  8. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    NASA Astrophysics Data System (ADS)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc.. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of and especially searching these 3D city models, will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its clustering in 2D.

  9. Simultaneous identification and quantification of tetrodotoxin in fresh pufferfish and pufferfish-based products using immunoaffinity columns and liquid chromatography/quadrupole-linear ion trap mass spectrometry

    NASA Astrophysics Data System (ADS)

    Guo, Mengmeng; Wu, Haiyan; Jiang, Tao; Tan, Zhijun; Zhao, Chunxia; Zheng, Guanchao; Li, Zhaoxin; Zhai, Yuxiu

    2017-07-01

    In this study, we established a comprehensive method for simultaneous identification and quantification of tetrodotoxin (TTX) in fresh pufferfish tissues and pufferfish-based products using liquid chromatography/quadrupole-linear ion trap mass spectrometry (LC-QqLIT-MS). TTX was extracted by 1% acetic acid-methanol, and most of the lipids were then removed by freezing lipid precipitation, followed by purification and concentration using immunoaffinity columns (IACs). Matrix effects were substantially reduced due to the high specificity of the IACs, and thus, background interference was avoided. Quantitation analysis was therefore performed using an external calibration curve with standards prepared in mobile phase. The method was evaluated by fortifying samples at 1, 10, and 100 ng/g, respectively, and the recoveries ranged from 75.8%-107%, with a relative standard deviation of less than 15%. The TTX calibration curves were linear over the range of 1-1 000 μg/L, with a detection limit of 0.3 ng/g and a quantification limit of 1 ng/g. Using this method, samples can be further analyzed using an information-dependent acquisition (IDA) experiment, in the positive mode, from a single liquid chromatography-tandem mass spectrometry injection, which can provide an extra level of confirmation by matching the full product ion spectra acquired for a standard sample with those from an enhanced product ion (EPI) library. The scheduled multiple reaction monitoring method enabled TTX to be screened for, and TTX was positively identified using the IDA and EPI spectra. This method was successfully applied to analyze a total of 206 samples of fresh pufferfish tissues and pufferfish-based products. The results from this study show that the proposed method can be used to quantify and identify TTX in a single run with excellent sensitivity and reproducibility, and is suitable for the analysis of complex matrix pufferfish samples.

  10. Schroth Physiotherapeutic Scoliosis-Specific Exercises Added to the Standard of Care Lead to Better Cobb Angle Outcomes in Adolescents with Idiopathic Scoliosis - an Assessor and Statistician Blinded Randomized Controlled Trial.

    PubMed

    Schreiber, Sanja; Parent, Eric C; Khodayari Moez, Elham; Hedden, Douglas M; Hill, Douglas L; Moreau, Marc; Lou, Edmond; Watkins, Elise M; Southon, Sarah C

    2016-01-01

    The North American non-surgical standard of care for adolescent idiopathic scoliosis (AIS) includes observation and bracing, but not exercises. Schroth physiotherapeutic scoliosis-specific exercises (PSSE) showed promise in several studies of suboptimal methodology. The Scoliosis Research Society calls for rigorous studies supporting the role of exercises before including it as a treatment recommendation for scoliosis. To determine the effect of a six-month Schroth PSSE intervention added to standard of care (Experimental group) on the Cobb angle compared to standard of care alone (Control group) in patients with AIS. Fifty patients with AIS aged 10-18 years, with curves of 10°-45° and Risser grade 0-5 were recruited from a single pediatric scoliosis clinic and randomized to the Experimental or Control group. Outcomes included the change in the Cobb angles of the Largest Curve and Sum of Curves from baseline to six months. The intervention consisted of a 30-45 minute daily home program and weekly supervised sessions. Intention-to-treat and per protocol linear mixed effects model analyses are reported. In the intention-to-treat analysis, after six months, the Schroth group had significantly smaller Largest Curve than controls (-3.5°, 95% CI -1.1° to -5.9°, p = 0.006). Likewise, the between-group difference in the square root of the Sum of Curves was -0.40°, (95% CI -0.03° to -0.8°, p = 0.046), suggesting that an average patient with 51.2° at baseline, will have a 49.3° Sum of Curves at six months in the Schroth group, and 55.1° in the control group with the difference between groups increasing with severity. Per protocol analyses produced similar, but larger differences: Largest Curve = -4.1° (95% CI -1.7° to -6.5°, p = 0.002) and [Formula: see text] (95% CI -0.8 to 0.2, p = 0.006). Schroth PSSE added to the standard of care were superior compared to standard of care alone for reducing the curve severity in patients with AIS. NCT01610908.

  11. Changes in the Flow-Volume Curve According to the Degree of Stenosis in Patients With Unilateral Main Bronchial Stenosis

    PubMed Central

    Yoo, Jung-Geun; Yi, Chin A; Lee, Kyung Soo; Jeon, Kyeongman; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O Jung

    2015-01-01

    Objectives The shape of the flow-volume (F-V) curve is known to change to showing a prominent plateau as stenosis progresses in patients with tracheal stenosis. However, no study has evaluated changes in the F-V curve according to the degree of bronchial stenosis in patients with unilateral main bronchial stenosis. Methods We performed an analysis of F-V curves in 29 patients with unilateral bronchial stenosis with the aid of a graphic digitizer between January 2005 and December 2011. Results The primary diseases causing unilateral main bronchial stenosis were endobronchial tuberculosis (86%), followed by benign bronchial tumor (10%), and carcinoid (3%). All unilateral main bronchial stenoses were classified into one of five grades (I, ≤25%; II, 26%-50%; III, 51%-75%; IV, 76%-90%; V, >90% to near-complete obstruction without ipsilateral lung collapse). A monophasic F-V curve was observed in patients with grade I stenosis and biphasic curves were observed for grade II-IV stenosis. Both monophasic (81%) and biphasic shapes (18%) were observed in grade V stenosis. After standardization of the biphasic shape of the F-V curve, the breakpoints of the biphasic curve moved in the direction of high volume (x-axis) and low flow (y-axis) according to the progression of stenosis. Conclusion In unilateral bronchial stenosis, a biphasic F-V curve appeared when bronchial stenosis was >25% and disappeared when obstruction was near complete. In addition, the breakpoint moved in the direction of high volume and low flow with the progression of stenosis. PMID:26045916

  12. Sensitive and selective liquid chromatography-tandem mass spectrometry method for the quantification of aniracetam in human plasma.

    PubMed

    Zhang, Jingjing; Liang, Jiabi; Tian, Yuan; Zhang, Zunjian; Chen, Yun

    2007-10-15

    A rapid, sensitive and selective LC-MS/MS method was developed and validated for the quantification of aniracetam in human plasma using estazolam as internal standard (IS). Following liquid-liquid extraction, the analytes were separated using a mobile phase of methanol-water (60:40, v/v) on a reverse phase C18 column and analyzed by a triple-quadrupole mass spectrometer in the selected reaction monitoring (SRM) mode using the respective [M+H]+ ions, m/z 220-->135 for aniracetam and m/z 295-->205 for the IS. The assay exhibited a linear dynamic range of 0.2-100 ng/mL for aniracetam in human plasma. The lower limit of quantification (LLOQ) was 0.2 ng/mL with a relative standard deviation of less than 15%. Acceptable precision and accuracy were obtained for concentrations over the standard curve range. The validated LC-MS/MS method has been successfully applied to study the pharmacokinetics of aniracetam in healthy male Chinese volunteers.

  13. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. This technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.

  14. Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard

    PubMed Central

    Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu

    2011-01-01

    Our research is motivated by 2 methodological problems in assessing diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown—imperfect gold standard bias and ordinal scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status had an ordered multiple class. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy and alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example on assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155

  15. Quality assessment of SPR sensor chips; case study on L1 chips.

    PubMed

    Olaru, Andreea; Gheorghiu, Mihaela; David, Sorin; Polonschii, Cristina; Gheorghiu, Eugen

    2013-07-15

    Surface quality of the Surface Plasmon Resonance (SPR) chips is a major limiting issue in most SPR analyses, even more for supported lipid membranes experiments, where both the organization of the lipid matrix and the subsequent incorporation of the target molecule depend on the surface quality. A novel quantitative method to characterize the quality of SPR sensors chips is described for L1 chips subject to formation of lipid films, injection of membrane disrupting compounds, followed by appropriate regeneration procedures. The method consists in analysis of the SPR reflectivity curves for several standard solutions (e.g. PBS, HEPES or deionized water). This analysis reveals the decline of sensor surface as a function of the number of experimental cycles (consisting in biosensing assay and regeneration step) and enables active control of surface regeneration for enhanced reproducibility. We demonstrate that quantitative evaluation of the changes in reflectivity curves (shape of the SPR dip) and of the slope of the calibration curve provides a rapid and effective procedure for surface quality assessment. Whereas the method was tested on L1 SPR sensors chips, we stress on its amenability to assess the quality of other types of SPR chips, as well. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Minimizing thermal degradation in gas chromatographic quantitation of pentaerythritol tetranitrate.

    PubMed

    Lubrano, Adam L; Field, Christopher R; Newsome, G Asher; Rogers, Duane A; Giordano, Braden C; Johnson, Kevin J

    2015-05-15

    An analytical method for establishing calibration curves for the quantitation of pentaerythriol tetranitrate (PETN) from sorbent-filled thermal desorption tubes by gas chromatography with electron capture detection (TDS-GC-ECD) was developed. As PETN has been demonstrated to thermally degrade under typical GC instrument conditions, peaks corresponding to both PETN degradants and molecular PETN are observed. The retention time corresponding to intact PETN was verified by high-resolution mass spectrometry with a flowing atmospheric pressure afterglow (FAPA) ionization source, which enabled soft ionization of intact PETN eluting the GC and subsequent accurate-mass identification. The GC separation parameters were transferred to a conventional GC-ECD instrument where analytical method-induced PETN degradation was further characterized and minimized. A method calibration curve was established by direct liquid deposition of PETN standard solutions onto the glass frit at the head of sorbent-filled thermal desorption tubes. Two local, linear relationships between detector response and PETN concentration were observed, with a total dynamic range of 0.25-25ng. Published by Elsevier B.V.

  17. [Study on analysis of copy paper by Fourier transform infrared spectroscopy].

    PubMed

    Li, Ji-Min; Wang, Yan-Ji; Wang, Jing-Han; Yao, Li-Juan; Zhang, Biao

    2009-06-01

    A new method of fast identification of copy papers by Fourier transform infrared spectroscopy (FTIR) was developed. The kinds of filler and the cellulosic degree of crystallinity were analyzed by FTIR, and the ageing curves of cellulosic paper were studied with heating and ultraviolet light. The cellulosic degree of crystallinity was showed by the ratio of absorbance at 1 429 cm(-1) to that at 893 cm(-1), the standard deviation of different brands of copy papers was 0.010 7-0.016 0, and the standard deviation of the same brands of copy papers was 0.014 8. The kinds of filler and the cellulosic degree of crystallinity were different in copy papers from different brands of different manufacturing plants, different brands of same manufacturing plants and different manufacturing times of the same brands from the same manufacturing plants, and the curves of ageing were different with heating and ultraviolet light. The results of fast identification of copy papers by FTIR are satisfactory.

  18. Assessing item fit for unidimensional item response theory models using residuals from estimated item response functions.

    PubMed

    Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee

    2013-07-01

    Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large sample distribution of the residual is proved to be standardized normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.

  19. Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.

    PubMed

    Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D

    2017-01-17

    In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.

  20. Screening Method Based on Walking Plantar Impulse for Detecting Musculoskeletal Senescence and Injury

    PubMed Central

    Fan, Yifang; Fan, Yubo; Li, Zhiyu; Newman, Tony; Lv, Changsheng; Zhou, Yi

    2013-01-01

    No consensus has been reached on how musculoskeletal system injuries or aging can be explained by a walking plantar impulse. We standardize the plantar impulse by defining a principal axis of plantar impulse. Based upon this standardized plantar impulse, two indexes are presented: plantar pressure record time series and plantar-impulse distribution along the principal axis of plantar impulse. These indexes are applied to analyze the plantar impulse collected by plantar pressure plates from three sources: Achilles tendon ruptures; elderly people (ages 62–71); and young people (ages 19–23). Our findings reveal that plantar impulse distribution curves for Achilles tendon ruptures change irregularly with subjects’ walking speed changes. When comparing distribution curves of the young, we see a significant difference in the elderly subjects’ phalanges plantar pressure record time series. This verifies our hypothesis that a plantar impulse can function as a means to assess and evaluate musculoskeletal system injuries and aging. PMID:24386288

  1. EXTRACTING PERIODIC TRANSIT SIGNALS FROM NOISY LIGHT CURVES USING FOURIER SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samsing, Johan

    We present a simple and powerful method for extracting transit signals associated with a known transiting planet from noisy light curves. Assuming the orbital period of the planet is known and the signal is periodic, we illustrate that systematic noise can be removed in Fourier space at all frequencies by only using data within a fixed time frame with a width equal to an integer number of orbital periods. This results in a reconstruction of the full transit signal, which on average is unbiased despite no prior knowledge of either the noise or the transit signal itself being used inmore » the analysis. The method therefore has clear advantages over standard phase folding, which normally requires external input such as nearby stars or noise models for removing systematic components. In addition, we can extract the full orbital transit signal (360°) simultaneously, and Kepler-like data can be analyzed in just a few seconds. We illustrate the performance of our method by applying it to a dataset composed of light curves from Kepler with a fake injected signal emulating a planet with rings. For extracting periodic transit signals, our presented method is in general the optimal and least biased estimator and could therefore lead the way toward the first detections of, e.g., planet rings and exo-trojan asteroids.« less

  2. Micro-scale method for liquid-chromatographic determination of chloramphenicol in serum.

    PubMed

    Petersdorf, S H; Raisys, V A; Opheim, K E

    1979-07-01

    We describe the use of "high-performance" liquid chromatography to measure chloramphenicol in as little as 25 microL of serum. Serum is treated to precipitate proteins with acetonitrile containing p-nitroacetanilide as an internal standard. Chloramphenicol is eluted with a mobile phase of methanol in pH 7.0 phosphate buffer (35/65 by vol). The drug is measured at 278 nm and simultaneously monitored at 254 nm; interfering substances are detected by examining the 278 nm/254 absorbance ratios. This method is sensitive to less than 0.5 mg/L and the standard curve is linear to at least 50 mg/L. Inter-day precision ranged between 3--6%. We encountered no interference from endogenous compounds or from other drugs we tested.

  3. Overall Memory Impairment Identification with Mathematical Modeling of the CVLT-II Learning Curve in Multiple Sclerosis

    PubMed Central

    Stepanov, Igor I.; Abramson, Charles I.; Hoogs, Marietta; Benedict, Ralph H. B.

    2012-01-01

    The CVLT-II provides standardized scores for each of the List A five learning trials, so that the clinician can compare the patient's raw trials 1–5 scores with standardized ones. However, frequently, a patient's raw scores fluctuate making a proper interpretation difficult. The CVLT-II does not offer any other methods for classifying a patient's learning and memory status on the background of the learning curve. The main objective of this research is to illustrate that discriminant analysis provides an accurate assessment of the learning curve, if suitable predictor variables are selected. Normal controls were ninety-eight healthy volunteers (78 females and 20 males). A group of MS patients included 365 patients (266 females and 99 males) with clinically defined multiple sclerosis. We show that the best predictor variables are coefficients B3 and B4 of our mathematical model B3 ∗ exp(−B2  ∗  (X − 1)) + B4  ∗  (1 − exp(−B2  ∗  (X − 1))) because discriminant functions, calculated separately for B3 and B4, allow nearly 100% correct classification. These predictors allow identification of separate impairment of readiness to learn or ability to learn, or both. PMID:22745911

  4. Overall Memory Impairment Identification with Mathematical Modeling of the CVLT-II Learning Curve in Multiple Sclerosis.

    PubMed

    Stepanov, Igor I; Abramson, Charles I; Hoogs, Marietta; Benedict, Ralph H B

    2012-01-01

    The CVLT-II provides standardized scores for each of the List A five learning trials, so that the clinician can compare the patient's raw trials 1-5 scores with standardized ones. However, frequently, a patient's raw scores fluctuate making a proper interpretation difficult. The CVLT-II does not offer any other methods for classifying a patient's learning and memory status on the background of the learning curve. The main objective of this research is to illustrate that discriminant analysis provides an accurate assessment of the learning curve, if suitable predictor variables are selected. Normal controls were ninety-eight healthy volunteers (78 females and 20 males). A group of MS patients included 365 patients (266 females and 99 males) with clinically defined multiple sclerosis. We show that the best predictor variables are coefficients B3 and B4 of our mathematical model B3 ∗ exp(-B2  ∗  (X - 1)) + B4  ∗  (1 - exp(-B2  ∗  (X - 1))) because discriminant functions, calculated separately for B3 and B4, allow nearly 100% correct classification. These predictors allow identification of separate impairment of readiness to learn or ability to learn, or both.

  5. [Evaluation of uncertainty for determination of tin and its compounds in air of workplace by flame atomic absorption spectrometry].

    PubMed

    Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei

    2015-10-01

    To investigate the method for uncertainty evaluation of determination of tin and its compounds in the air of workplace by flame atomic absorption spectrometry. The national occupational health standards, GBZ/T160.28-2004 and JJF1059-1999, were used to build a mathematical model of determination of tin and its compounds in the air of workplace and to calculate the components of uncertainty. In determination of tin and its compounds in the air of workplace using flame atomic absorption spectrometry, the uncertainty for the concentration of the standard solution, atomic absorption spectrophotometer, sample digestion, parallel determination, least square fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%.The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (K=2). The dominant uncertainty for determination of tin and its compounds in the air of workplace comes from least squares fitting of the calibration curve and sample collection. Quality control should be improved in the process of calibration curve fitting and sample collection.

  6. The Next-Generation PCR-Based Quantification Method for Ambient Waters: Digital PCR.

    PubMed

    Cao, Yiping; Griffith, John F; Weisberg, Stephen B

    2016-01-01

    Real-time quantitative PCR (qPCR) is increasingly being used for ambient water monitoring, but development of digital polymerase chain reaction (digital PCR) has the potential to further advance the use of molecular techniques in such applications. Digital PCR refines qPCR by partitioning the sample into thousands to millions of miniature reactions that are examined individually for binary endpoint results, with DNA density calculated from the fraction of positives using Poisson statistics. This direct quantification removes the need for standard curves, eliminating the labor and materials associated with creating and running standards with each batch, and removing biases associated with standard variability and mismatching amplification efficiency between standards and samples. Confining reactions and binary endpoint measurements to small partitions also leads to other performance advantages, including reduced susceptibility to inhibition, increased repeatability and reproducibility, and increased capacity to measure multiple targets in one analysis. As such, digital PCR is well suited for ambient water monitoring applications and is particularly advantageous as molecular methods move toward autonomous field application.

  7. General Theory of the Steady Motion of an Airplane

    NASA Technical Reports Server (NTRS)

    De Bothezat, George

    1921-01-01

    The writer points out briefly the history of the method proposed for the study of steady motion of an airplane, which is different from other methods now used. M. Paul Painleve has shown how convenient the drag-lift curve was for the study of airplane steady motion. The author later added to the drift-lift curve the curve called the "speed curve" which permits a direct checking of the speed of the airplane under all flying conditions. But the speed curve was plotted in the same quadrant as the drag-lift curve. Later, with the progressive development of aeronautical science, and with the continually increasing knowledge concerning engines and propellers, the author was brought to add the three other quadrants to the original quadrant, and thus was obtained the steady motion chart which is described in detail in this report. This charts permits one to read directly for a given airplane its horizontal speed at any altitude, its rate of climb at any altitude, its apparent inclination to the horizon at any moment, its ceiling, its propeller thrust, revolutions, efficiency, and power absorbed, that is the complete set of quantities involved in the subject, and to follow the variations of all these quantities both for variable altitude and for variable throttle. The chart also permits one to follow the variation of all of the above in flight as a function of the lift coefficient and of the speed. The author also discusses the interaction of the airplane and propeller through the slipstream and the question of the properties of the engine-propeller system and its dependence upon the properties of the engine considered alone and of the propeller considered alone. There is also a discussion of a standard atmosphere.

  8. Greenhouse Gas Analysis by GC/MS

    NASA Astrophysics Data System (ADS)

    Bock, E. M.; Easton, Z. M.; Macek, P.

    2015-12-01

    Current methods to analyze greenhouse gases rely on designated complex, multiple-column, multiple-detector gas chromatographs. A novel method was developed in partnership with Shimadzu for simultaneous quantification of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) in environmental gas samples. Gas bulbs were used to make custom standard mixtures by injecting small volumes of pure analyte into the nitrogen-filled bulb. Resulting calibration curves were validated using a certified gas standard. The use of GC/MS systems to perform this analysis has the potential to move the analysis of greenhouse gasses from expensive, custom GC systems to standard single-quadrupole GC/MS systems that are available in most laboratories, which wide variety of applications beyond greenhouse gas analysis. Additionally, use of mass spectrometry can provide confirmation of identity of target analytes, and will assist in the identification of unknown peaks should they be present in the chromatogram.

  9. Possibilities And Influencing Parameters For The Early Detection Of Sheet Metal Failure In Press Shop Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, Joerg; Kessler, Lutz; Paul, Udo

    2007-05-17

    The concept of forming limit curves (FLC) is widely used in industrial practice. The required data should be delivered for typical material properties (measured on coils with properties in a range of +/- of the standard deviation from the mean production values) by the material suppliers. In particular it should be noted that its use for the validation of forming robustness providing forming limit curves for the variety of scattering in the mechanical properties is impossible. Therefore a forecast of the expected limit strains without expensive cost and time-consuming experiments is necessary. In the paper the quality of a regressionmore » analysis for determining forming limit curves based on tensile test results is presented and discussed.Owing to the specific definition of limit strains with FLCs following linear strain paths, the significance of this failure definition is limited. To consider nonlinear strain path effects, different methods are given in literature. One simple method is the concept of limit stresses. It should be noted that the determined value of the critical stress is dependent on the extrapolation of the tensile test curve. When the yield curve extrapolation is very similar to an exponential function, the definition of the critical stress value is very complicated due to the low slope of the hardening function at large strains.A new method to determine general failure behavior in sheet metal forming is the common use and interpretation of three criteria: onset on material instability (comparable with FLC concept), value of critical shear fracture and the value of ductile fracture. This method seems to be particularly successful for newly developed high strength steel grades in connection with more complex strain paths for some specific material elements. Nevertheless the identification of the different failure material parameters or functions will increase and the user has to learn with the interpretation of the numerical results.« less

  10. Use of armored RNA as a standard to construct a calibration curve for real-time RT-PCR.

    PubMed

    Donia, D; Divizia, M; Pana', A

    2005-06-01

    Armored Enterovirus RNA was used to standardize a real-time reverse transcription (RT)-PCR for environmental testing. Armored technology is a system to produce a robust and stable RNA standard, trapped into phage proteins, to be used as internal control. The Armored Enterovirus RNA protected sequence includes 263 bp of highly conserved sequences in 5' UTR region. During these tests, Armored RNA has been used to produce a calibration curve, comparing three different fluorogenic chemistry: TaqMan system, Syber Green I and Lux-primers. The effective evaluation of three amplifying commercial reagent kits, in use to carry out real-time RT-PCR, and several extraction procedures of protected viral RNA have been carried out. The highest Armored RNA recovery was obtained by heat treatment while chemical extraction may decrease the quantity of RNA. The best sensitivity and specificity was obtained using the Syber Green I technique since it is a reproducible test, easy to use and the cheapest one. TaqMan and Lux-primer assays provide good RT-PCR efficiency in relationship to the several extraction methods used, since labelled probe or primer request in these chemistry strategies, increases the cost of testing.

  11. Anthocyanin Concentration of “Assaria” Pomegranate Fruits During Different Cold Storage Conditions

    PubMed Central

    Antunes, Dulce

    2004-01-01

    The concentration of anthocyanins in fruits of “Assaria” pomegranate, a sweet Portuguese cultivar typically grown in Algarve (south Portugal), was monitored during storage under different conditions. The fruits were exposed to cold storage (5°C) after the following treatments: spraying with wax; spraying with 1.5% CaCl2; spraying with wax and 1.5% CaCl2; covering boxes with 25 μc thickness low-density polyethylene film. Untreated fruits were used as a control. The anthocyanin levels were quantified by either comparison with an external standard of cyanidin 3-rutinoside (based on the peak area) or individual calculation from the peak areas based on standard curves of each anthocyanin type. The storage time as well as the fruit treatment prior to storage influenced total anthocyanin content. The highest levels were observed at the end of the first month of storage, except for the fruits treated with CaCl2, where the maximal values were achieved at the end of the second month. The anthocyanin quantification method influenced the final result. When total anthocyanin was calculated as a sum of individual pigments quantified based on standard curves of each anthocyanin type, lower values were obtained. PMID:15577199

  12. Light curves for bump Cepheids computed with a dynamically zoned pulsation code

    NASA Technical Reports Server (NTRS)

    Adams, T. F.; Castor, J. I.; Davis, C. G.

    1980-01-01

    The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.

  13. Effect of blood sampling schedule and method of calculating the area under the curve on validity and precision of glycaemic index values.

    PubMed

    Wolever, Thomas M S

    2004-02-01

    To evaluate the suitability for glycaemic index (GI) calculations of using blood sampling schedules and methods of calculating area under the curve (AUC) different from those recommended, the GI values of five foods were determined by recommended methods (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects and different calculations performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC) and iAUC including area only before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of less than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculating GI. The standard deviation of GI values calculating using iAUC (20.4), was lower than six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or less than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yields more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (ie. within +/-5 %) was AUCcut.

  14. Effect of Data Reduction and Fiber-Bridging on Mode I Delamination Characterization of Unidirectional Composites

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2011-01-01

    Reliable delamination characterization data for laminated composites are needed for input in analytical models of structures to predict delamination onset and growth. The double-cantilevered beam (DCB) specimen is used to measure fracture toughness, GIc, and strain energy release rate, GImax, for delamination onset and growth in laminated composites under mode I loading. The current study was conducted as part of an ASTM Round Robin activity to evaluate a proposed testing standard for Mode I fatigue delamination propagation. Static and fatigue tests were conducted on specimens of IM7/977-3 and G40-800/5276-1 graphite/epoxies, and S2/5216 glass/epoxy DCB specimens to evaluate the draft standard "Standard Test Method for Mode I Fatigue Delamination Propagation of Unidirectional Fiber-Reinforced Polymer Matrix Composites." Static results were used to generate a delamination resistance curve, GIR, for each material, which was used to determine the effects of fiber-bridging on the delamination growth data. All three materials were tested in fatigue at a cyclic GImax level equal to 90% of the fracture toughness, GIc, to determine the delamination growth rate. Two different data reduction methods, a 2-point and a 7-point fit, were used and the resulting Paris Law equations were compared. Growth rate results were normalized by the delamination resistance curve for each material and compared to the nonnormalized results. Paris Law exponents were found to decrease by 5.4% to 46.2% due to normalizing the growth data. Additional specimens of the IM7/977-3 material were tested at 3 lower cyclic GImax levels to compare the effect of loading level on delamination growth rates. The IM7/977-3 tests were also used to determine the delamination threshold curve for that material. The results show that tests at a range of loading levels are necessary to describe the complete delamination behavior of this material.

  15. LAMOST Spectrograph Response Curves: Stability and Application to Flux Calibration

    NASA Astrophysics Data System (ADS)

    Du, Bing; Luo, A.-Li; Kong, Xiao; Zhang, Jian-Nan; Guo, Yan-Xin; Cook, Neil James; Hou, Wen; Yang, Hai-Feng; Li, Yin-Bi; Song, Yi-Han; Chen, Jian-Jun; Zuo, Fang; Wu, Ke-Fei; Wang, Meng-Xin; Wu, Yue; Wang, You-Fen; Zhao, Yong-Heng

    2016-12-01

    The task of flux calibration for Large sky Area Multi-Object Spectroscopic Telescope (LAMOST) spectra is difficult due to many factors, such as the lack of standard stars, flat-fielding for large field of view, and variation of reddening between different stars, especially at low Galactic latitudes. Poor selection, bad spectral quality, or extinction uncertainty of standard stars not only might induce errors to the calculated spectral response curve (SRC) but also might lead to failures in producing final 1D spectra. In this paper, we inspected spectra with Galactic latitude | b| ≥slant 60^\\circ and reliable stellar parameters, determined through the LAMOST Stellar Parameter Pipeline (LASP), to study the stability of the spectrograph. To guarantee that the selected stars had been observed by each fiber, we selected 37,931 high-quality exposures of 29,000 stars from LAMOST DR2, and more than seven exposures for each fiber. We calculated the SRCs for each fiber for each exposure and calculated the statistics of SRCs for spectrographs with both the fiber variations and time variations. The result shows that the average response curve of each spectrograph (henceforth ASPSRC) is relatively stable, with statistical errors ≤10%. From the comparison between each ASPSRC and the SRCs for the same spectrograph obtained by the 2D pipeline, we find that the ASPSRCs are good enough to use for the calibration. The ASPSRCs have been applied to spectra that were abandoned by the LAMOST 2D pipeline due to the lack of standard stars, increasing the number of LAMOST spectra by 52,181 in DR2. Comparing those same targets with the Sloan Digital Sky Survey (SDSS), the relative flux differences between SDSS spectra and LAMOST spectra with the ASPSRC method are less than 10%, which underlines that the ASPSRC method is feasible for LAMOST flux calibration.

  16. Growth rate measurement in free jet experiments

    NASA Astrophysics Data System (ADS)

    Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent

    2017-07-01

    An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadow-graph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles, obtained in the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian and a viscoelastic one. Growth rate measurements are found in good agreement with linear stability theory in the Rayleigh's regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads on a string structure on the jet. This last result serves to demonstrate the capability of the present technique to explore in the future the dynamics of viscoelastic liquid jets.

  17. Growth standard charts for monitoring bodyweight in dogs of different sizes

    PubMed Central

    Salt, Carina; Morris, Penelope J.; Wilson, Derek; Lund, Elizabeth M.; Cole, Tim J.; Butterwick, Richard F.

    2017-01-01

    Limited information is available on what constitutes optimal growth in dogs. The primary aim of this study was to develop evidence-based growth standards for dogs, using retrospective analysis of bodyweight and age data from >6 million young dogs attending a large corporate network of primary care veterinary hospitals across the USA. Electronic medical records were used to generate bodyweight data from immature client-owned dogs, that were healthy and had remained in ideal body condition throughout the first 3 years of life. Growth centile curves were constructed using Generalised Additive Models for Location, Shape and Scale. Curves were displayed graphically as centile charts covering the age range 12 weeks to 2 years. Over 100 growth charts were modelled, specific to different combinations of breed, sex and neuter status. Neutering before 37 weeks was associated with a slight upward shift in growth trajectory, whilst neutering after 37 weeks was associated with a slight downward shift in growth trajectory. However, these shifts were small in comparison to inter-individual variability amongst dogs, suggesting that separate curves for neutered dogs were not needed. Five bodyweight categories were created to cover breeds up to 40kg, using both visual assessment and hierarchical cluster analysis of breed-specific growth curves. For 20/24 of the individual breed centile curves, agreement with curves for the corresponding bodyweight categories was good. For the remaining 4 breed curves, occasional deviation across centile lines was observed, but overall agreement was acceptable. This suggested that growth could be described using size categories rather than requiring curves for specific breeds. In the current study, a series of evidence-based growth standards have been developed to facilitate charting of bodyweight in healthy dogs. Additional studies are required to validate these standards and create a clinical tool for growth monitoring in pet dogs. PMID:28873413

  18. Growth standard charts for monitoring bodyweight in dogs of different sizes.

    PubMed

    Salt, Carina; Morris, Penelope J; German, Alexander J; Wilson, Derek; Lund, Elizabeth M; Cole, Tim J; Butterwick, Richard F

    2017-01-01

    Limited information is available on what constitutes optimal growth in dogs. The primary aim of this study was to develop evidence-based growth standards for dogs, using retrospective analysis of bodyweight and age data from >6 million young dogs attending a large corporate network of primary care veterinary hospitals across the USA. Electronic medical records were used to generate bodyweight data from immature client-owned dogs, that were healthy and had remained in ideal body condition throughout the first 3 years of life. Growth centile curves were constructed using Generalised Additive Models for Location, Shape and Scale. Curves were displayed graphically as centile charts covering the age range 12 weeks to 2 years. Over 100 growth charts were modelled, specific to different combinations of breed, sex and neuter status. Neutering before 37 weeks was associated with a slight upward shift in growth trajectory, whilst neutering after 37 weeks was associated with a slight downward shift in growth trajectory. However, these shifts were small in comparison to inter-individual variability amongst dogs, suggesting that separate curves for neutered dogs were not needed. Five bodyweight categories were created to cover breeds up to 40kg, using both visual assessment and hierarchical cluster analysis of breed-specific growth curves. For 20/24 of the individual breed centile curves, agreement with curves for the corresponding bodyweight categories was good. For the remaining 4 breed curves, occasional deviation across centile lines was observed, but overall agreement was acceptable. This suggested that growth could be described using size categories rather than requiring curves for specific breeds. In the current study, a series of evidence-based growth standards have been developed to facilitate charting of bodyweight in healthy dogs. Additional studies are required to validate these standards and create a clinical tool for growth monitoring in pet dogs.

  19. Prediction of seismic collapse risk of steel moment frame mid-rise structures by meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Jough, Fooad Karimi Ghaleh; Şensoy, Serhan

    2016-12-01

    Different performance levels may be obtained for sideway collapse evaluation of steel moment frames depending on the evaluation procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record to record (RTR) variations and cognitive uncertainties for moment resisting steel frames of various heights is discussed in detail. RTR uncertainty is used by incremental dynamic analysis (IDA), modelling uncertainties are considered through backbone curves and hysteresis loops of component, and cognitive uncertainty is presented in three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time saving appeal. Analytical equations of the Response Surface Method are obtained through IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves with the various sources of uncertainties mentioned are derived through a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak, is to account for cognitive uncertainties in fragility curves and the mean annual frequency.

  20. Evaluation of predictive capacities of biomarkers based on research synthesis.

    PubMed

    Hattori, Satoshi; Zhou, Xiao-Hua

    2016-11-10

    The objective of diagnostic studies or prognostic studies is to evaluate and compare predictive capacities of biomarkers. Suppose we are interested in evaluation and comparison of predictive capacities of continuous biomarkers for a binary outcome based on research synthesis. In analysis of each study, subjects are often classified into two groups of the high-expression and low-expression groups according to a cut-off value, and statistical analysis is based on a 2 × 2 table defined by the response and the high expression or low expression of the biomarker. Because the cut-off is study specific, it is difficult to interpret a combined summary measure such as an odds ratio based on the standard meta-analysis techniques. The summary receiver operating characteristic curve is a useful method for meta-analysis of diagnostic studies in the presence of heterogeneity of cut-off values to examine discriminative capacities of biomarkers. We develop a method to estimate positive or negative predictive curves, which are alternative to the receiver operating characteristic curve based on information reported in published papers of each study. These predictive curves provide a useful graphical presentation of pairs of positive and negative predictive values and allow us to compare predictive capacities of biomarkers of different scales in the presence of heterogeneity in cut-off values among studies. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.

  1. A prospective, observational study comparing the PK/PD relationships of generic Meropenem (Mercide®) to the innovator brand in critically ill patients

    PubMed Central

    Mer, Mervyn; Snyman, Jacques Rene; van Rensburg, Constance Elizabeth Jansen; van Tonder, Jacob John; Laurens, Ilze

    2016-01-01

    Introduction Clinicians’ skepticism, fueled by evidence of inferiority of some multisource generic antimicrobial products, results in the underutilization of more cost-effective generics, especially in critically ill patients. The aim of this observational study was to demonstrate equivalence between the generic or comparator brand of meropenem (Mercide®) and the leading innovator brand (Meronem®) by means of an ex vivo technique whereby antimicrobial activity is used to estimate plasma concentration of the active moiety. Methods Patients from different high care and intensive care units were recruited for observation when prescribed either of the meropenem brands under investigation. Blood samples were collected over 6 hours after a 30 minute infusion of the different brands. Meropenem concentration curves were established against United States Pharmacopeia standard meropenem (Sigma-Aldrich) by using standard laboratory techniques for culture of Klebsiella pneumoniae. Patients’ plasma samples were tested ex vivo, using a disc diffusion assay, to confirm antimicrobial activity and estimate plasma concentrations of the two brands. Results Both brands of meropenem demonstrated similar curves in donor plasma when concentrations in vials were confirmed. Patient-specific serum concentrations were determined from zones of inhibition against a standard laboratory Klebsiella strain ex vivo, confirming at least similar in vivo concentrations as the concentration curves (90% confidence interval) overlapped; however, the upper limit of the area under the curve for the ratio comparator/innovator exceeded the 1.25-point estimate, i.e., 4% higher for comparator meropenem. Conclusion This observational, in-practice study demonstrates similar ex vivo activity and in vivo plasma concentration time curves for the products under observation. Assay sensitivity is also confirmed. Current registration status of generic small molecules is in place. The products are therefore clinically interchangeable based on registration status as well as bioassay results, demonstrating sufficient overlap for clinical comfort. The slightly higher observed comparator meropenem concentration (4%) is still clinically acceptable due to the large therapeutic index and should ally fears of inferiority. PMID:27895516

  2. Round Robin Test of Residual Resistance Ratio of Nb$$_3$$Sn Composite Superconductors

    DOE PAGES

    Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho; ...

    2017-12-07

    A round robin test of residual resistance ratio (RRR) was performed for Nb 3Sn composite superconductors prepared by internal tin method by six institutes with the international standard test method described in IEC 61788-4. It was found that uncertainty mainly resulted from determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with expanded uncertainty not larger than 5% with the coverage factor 2 by using this test method.

  3. Round Robin Test of Residual Resistance Ratio of Nb$$_3$$Sn Composite Superconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsushita, Teruo; Otabe, Edmund Soji; Kim, Dong Ho

    A round robin test of residual resistance ratio (RRR) was performed for Nb 3Sn composite superconductors prepared by internal tin method by six institutes with the international standard test method described in IEC 61788-4. It was found that uncertainty mainly resulted from determination of the cryogenic resistance from the intersection of two straight lines drawn to fit the voltage vs. temperature curve around the resistive transition. As a result, the measurement clarified that RRR can be measured with expanded uncertainty not larger than 5% with the coverage factor 2 by using this test method.

  4. Standard UBV Observations at the Çanakkale University Observatory (ÇUO)

    NASA Astrophysics Data System (ADS)

    Bakis, Hicran; Bakis, Volkan; Demircan, Osman; Budding, Edwin

    2005-07-01

    By using standard and comparison star observations carried out at different times of the year, at Çanakkale Onsekiz Mart University Observatory, we obtained the atmospheric extinction coefficients at the observatory. We also obtained transformation coefficients and zero-point constants for the transformation to the standard Johnson UBV system, of observations in the local system carried out with the SSP5A photometer and T40 telescope. The transmission curves and the mean wavelengths of the UBV filters as measured in the laboratory appear not much different from those of the standard Johnson system and found inside the transmission curve of the standard mean atmosphere.

  5. 49 CFR 213.59 - Elevation of curved track; runoff.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Elevation of curved track; runoff. 213.59 Section 213.59 Transportation Other Regulations Relating to Transportation (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION TRACK SAFETY STANDARDS Track Geometry § 213.59 Elevation of curved...

  6. Development and Validation of a Reversed Phase HPLC Method for Determination of Anacardic Acids in Cashew (Anacardium occidentale) Nut Shell Liquid.

    PubMed

    Oiram Filho, Francisco; Alcântra, Daniel Barbosa; Rodrigues, Tigressa Helena Soares; Alexandre E Silva, Lorena Mara; de Oliveira Silva, Ebenezer; Zocolo, Guilherme Julião; de Brito, Edy Sousa

    2018-04-01

    Cashew nut shell liquid (CNSL) contains phenolic lipids with aliphatic chains that are of commercial interest. In this work, a chromatographic method was developed to monitor and quantify anacardic acids (AnAc) in CNSL. Samples containing AnAc were analyzed on a high-performance liquid chromatograph coupled to a diode array detector, equipped with a reversed phase C18 (150 × 4.6 mm × 5 μm) column using acetonitrile and water as the mobile phase both acidified with acetic acid to pH 3.0 in an isocratic mode (80:20:1). The chromatographic method showed adequate selectivity, as it could clearly separate the different AnAc. To validate this method, AnAc triene was used as an external standard at seven different concentrations varying from 50 to 1,000 μg mL-1. The Student's t-test and F-test were applied to ensure high confidence for the obtained data from the analytical calibration curve. The results were satisfactory with respect to intra-day (relative standard deviation (RSD) = 0.60%) and inter-day (RSD = 0.67%) precision, linearity (y = 2,670.8x - 26,949, r2 > 0.9998), system suitability for retention time (RSD = 1.02%), area under the curve (RSD = 0.24%), selectivity and limits of detection (19.8 μg mg-1) and quantification (60.2 μg mg-1). The developed chromatographic method was applied for the analysis of different CNSL samples, and it was deemed suitable for the quantification of AnAc.

  7. Analysis of the Magnetic Field Influence on the Rheological Properties of Healthy Persons Blood

    PubMed Central

    Nawrocka-Bogusz, Honorata

    2013-01-01

    The influence of magnetic field on whole blood rheological properties remains a weakly known phenomenon. An in vitro analysis of the magnetic field influence on the rheological properties of healthy persons blood is presented in this work. The study was performed on blood samples taken from 25 healthy nonsmoking persons and included comparative analysis of the results of both the standard rotary method (flow curve measurement) and the oscillatory method known also as the mechanical dynamic analysis, performed before and after exposition of blood samples to magnetic field. The principle of the oscillatory technique lies in determining the amplitude and phase of the oscillations of the studied sample subjected to action of a harmonic force of controlled amplitude and frequency. The flow curve measurement involved determining the shear rate dependence of blood viscosity. The viscoelastic properties of the blood samples were analyzed in terms of complex blood viscosity. All the measurements have been performed by means of the Contraves LS40 rheometer. The data obtained from the flow curve measurements complemented by hematocrit and plasma viscosity measurements have been analyzed using the rheological model of Quemada. No significant changes of the studied rheological parameters have been found. PMID:24078918

  8. Finding Exoplanets Using Point Spread Function Photometry on Kepler Data

    NASA Astrophysics Data System (ADS)

    Amaro, Rachael Christina; Scolnic, Daniel; Montet, Ben

    2018-01-01

    The Kepler Mission has been able to identify over 5,000 exoplanet candidates using mostly aperture photometry. Despite the impressive number of discoveries, a large portion of Kepler’s data set is neglected due to limitations using aperture photometry on faint sources in crowded fields. We present an alternate method that overcomes those restrictions — Point Spread Function (PSF) photometry. This powerful tool, which is already used in supernova astronomy, was used for the first time on Kepler Full Frame Images, rather than just looking at the standard light curves. We present light curves for stars in our data set and demonstrate that PSF photometry can at least get down to the same photometric precision as aperture photometry. As a check for the robustness of this method, we change small variables (stamp size, interpolation amount, and noise correction) and show that the PSF light curves maintain the same repeatability across all combinations for one of our models. We also present our progress in the next steps of this project, including the creation of a PSF model from the data itself and applying the model across the entire data set at once.

  9. 7 CFR 42.140 - Operating Characteristic (OC) curves for on-line sampling and inspection.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS...

  10. In-silico prediction of concentration-dependent viscosity curves for monoclonal antibody solutions

    PubMed Central

    Tomar, Dheeraj S.; Li, Li; Broulidakis, Matthew P.; Luksha, Nicholas G.; Burns, Christopher T.; Singh, Satish K.; Kumar, Sandeep

    2017-01-01

    ABSTRACT Early stage developability assessments of monoclonal antibody (mAb) candidates can help reduce risks and costs associated with their product development. Forecasting viscosity of highly concentrated mAb solutions is an important aspect of such developability assessments. Reliable predictions of concentration-dependent viscosity behaviors for mAb solutions in platform formulations can help screen or optimize drug candidates for flexible manufacturing and drug delivery options. Here, we present a computational method to predict concentration-dependent viscosity curves for mAbs solely from their sequence—structural attributes. This method was developed using experimental data on 16 different mAbs whose concentration-dependent viscosity curves were experimentally obtained under standardized conditions. Each concentration-dependent viscosity curve was fitted with a straight line, via logarithmic manipulations, and the values for intercept and slope were obtained. Intercept, which relates to antibody diffusivity, was found to be nearly constant. In contrast, slope, the rate of increase in solution viscosity with solute concentration, varied significantly across different mAbs, demonstrating the importance of intermolecular interactions toward viscosity. Next, several molecular descriptors for electrostatic and hydrophobic properties of the 16 mAbs derived using their full-length homology models were examined for potential correlations with the slope. An equation consisting of hydrophobic surface area of full-length antibody and charges on VH, VL, and hinge regions was found to be capable of predicting the concentration-dependent viscosity curves of the antibody solutions. Availability of this computational tool may facilitate material-free high-throughput screening of antibody candidates during early stages of drug discovery and development. PMID:28125318

  11. In-silico prediction of concentration-dependent viscosity curves for monoclonal antibody solutions.

    PubMed

    Tomar, Dheeraj S; Li, Li; Broulidakis, Matthew P; Luksha, Nicholas G; Burns, Christopher T; Singh, Satish K; Kumar, Sandeep

    2017-04-01

    Early stage developability assessments of monoclonal antibody (mAb) candidates can help reduce risks and costs associated with their product development. Forecasting viscosity of highly concentrated mAb solutions is an important aspect of such developability assessments. Reliable predictions of concentration-dependent viscosity behaviors for mAb solutions in platform formulations can help screen or optimize drug candidates for flexible manufacturing and drug delivery options. Here, we present a computational method to predict concentration-dependent viscosity curves for mAbs solely from their sequence-structural attributes. This method was developed using experimental data on 16 different mAbs whose concentration-dependent viscosity curves were experimentally obtained under standardized conditions. Each concentration-dependent viscosity curve was fitted with a straight line, via logarithmic manipulations, and the values for intercept and slope were obtained. Intercept, which relates to antibody diffusivity, was found to be nearly constant. In contrast, slope, the rate of increase in solution viscosity with solute concentration, varied significantly across different mAbs, demonstrating the importance of intermolecular interactions toward viscosity. Next, several molecular descriptors for electrostatic and hydrophobic properties of the 16 mAbs derived using their full-length homology models were examined for potential correlations with the slope. An equation consisting of hydrophobic surface area of full-length antibody and charges on V H , V L , and hinge regions was found to be capable of predicting the concentration-dependent viscosity curves of the antibody solutions. Availability of this computational tool may facilitate material-free high-throughput screening of antibody candidates during early stages of drug discovery and development.

  12. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881

  13. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  14. A Comparison of a Machine Learning Model with EuroSCORE II in Predicting Mortality after Elective Cardiac Surgery: A Decision Curve Analysis

    PubMed Central

    Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril

    2017-01-01

    Background The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. Methods and finding We conducted a retrospective cohort study using a prospective collected database from December 2005 to December 2012, from a cardiac surgical center at University Hospital. The different models of prediction of mortality in-hospital after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8) %. The area under ROC curve (IC95%) for the machine learning model (0.795 (0.755–0.834)) was significantly higher than EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691–0.783) and 0.742 (0.698–0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold. Conclusions According to ROC and DCA, machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the use of machine learning methods in the field of medical prediction. PMID:28060903

  15. Cardiac arrest risk standardization using administrative data compared to registry data.

    PubMed

    Grossestreuer, Anne V; Gaieski, David F; Donnino, Michael W; Nelson, Joshua I M; Mutter, Eric L; Carr, Brendan G; Abella, Benjamin S; Wiebe, Douglas J

    2017-01-01

    Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Two risk standardization logistic regression models were developed using 2453 patients treated from 2000-2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the "gold standard" with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876-0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895-0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799-0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788-0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA.

  16. Photometric properties of intermediate-redshift Type Ia supernovae observed by the Sloan Digital Sky Survey-II Supernova Survey

    DOE PAGES

    Takanashi, N.; Doi, M.; Yasuda, N.; ...

    2016-12-06

    We have analyzed multi-band light curves of 328 intermediate redshift (0.05 <= z < 0.24) type Ia supernovae (SNe Ia) observed by the Sloan Digital Sky Survey-II Supernova Survey (SDSS-II SN Survey). The multi-band light curves were parameterized by using the Multi-band Stretch Method, which can simply parameterize light curve shapes and peak brightness without dust extinction models. We found that most of the SNe Ia which appeared in red host galaxies (u - r > 2.5) don't have a broad light curve width and the SNe Ia which appeared in blue host galaxies (u - r < 2.0) havemore » a variety of light curve widths. The Kolmogorov-Smirnov test shows that the colour distribution of SNe Ia appeared in red / blue host galaxies is different (significance level of 99.9%). We also investigate the extinction law of host galaxy dust. As a result, we find the value of Rv derived from SNe Ia with medium light curve width is consistent with the standard Galactic value. On the other hand, the value of Rv derived from SNe Ia that appeared in red host galaxies becomes significantly smaller. Furthermore, these results indicate that there may be two types of SNe Ia with different intrinsic colours, and they are obscured by host galaxy dust with two different properties.« less

  17. Photometric properties of intermediate-redshift Type Ia supernovae observed by the Sloan Digital Sky Survey-II Supernova Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takanashi, N.; Doi, M.; Yasuda, N.

    We have analyzed multi-band light curves of 328 intermediate redshift (0.05 <= z < 0.24) type Ia supernovae (SNe Ia) observed by the Sloan Digital Sky Survey-II Supernova Survey (SDSS-II SN Survey). The multi-band light curves were parameterized by using the Multi-band Stretch Method, which can simply parameterize light curve shapes and peak brightness without dust extinction models. We found that most of the SNe Ia which appeared in red host galaxies (u - r > 2.5) don't have a broad light curve width and the SNe Ia which appeared in blue host galaxies (u - r < 2.0) havemore » a variety of light curve widths. The Kolmogorov-Smirnov test shows that the colour distribution of SNe Ia appeared in red / blue host galaxies is different (significance level of 99.9%). We also investigate the extinction law of host galaxy dust. As a result, we find the value of Rv derived from SNe Ia with medium light curve width is consistent with the standard Galactic value. On the other hand, the value of Rv derived from SNe Ia that appeared in red host galaxies becomes significantly smaller. Furthermore, these results indicate that there may be two types of SNe Ia with different intrinsic colours, and they are obscured by host galaxy dust with two different properties.« less

  18. Pre-Clinical Testing of a Real-Time PCR Assay for Diahhreal Disease Agent Cryptosporidium

    DTIC Science & Technology

    2014-05-16

    ETEC, Shigella , and CR assays are shown in Table 4. Standard curves are shown in Figures 1 - 5. Limit of detection estimation derived from standard...curve are shown in Table 5. These data include results from both ‘JBAIDS ETEC/ Shigella ’ and ‘JBAIDS Cryptospordium’ projects. Standard cuve for... Shigella -ipaH 0.088 1.5 × 10 8 Isolates of Cryptosporidium parvum from Waterborne Inc., using Qiagen extraction kit Parasite NanoDrop

  19. Rainfall and runoff Intensity-Duration-Frequency Curves for Washington State considering the change and uncertainty of observed and anticipated extreme rainfall and snow events

    NASA Astrophysics Data System (ADS)

    Demissie, Y. K.; Mortuza, M. R.; Li, H. Y.

    2015-12-01

    The observed and anticipated increasing trends in extreme storm magnitude and frequency, as well as the associated flooding risk in the Pacific Northwest highlighted the need for revising and updating the local intensity-duration-frequency (IDF) curves, which are commonly used for designing critical water infrastructure. In Washington State, much of the drainage system installed in the last several decades uses IDF curves that are outdated by as much as half a century, making the system inadequate and vulnerable for flooding as seen more frequently in recent years. In this study, we have developed new and forward looking rainfall and runoff IDF curves for each county in Washington State using recently observed and projected precipitation data. Regional frequency analysis coupled with Bayesian uncertainty quantification and model averaging methods were used to developed and update the rainfall IDF curves, which were then used in watershed and snow models to develop the runoff IDF curves that explicitly account for effects of snow and drainage characteristic into the IDF curves and related designs. The resulted rainfall and runoff IDF curves provide more reliable, forward looking, and spatially resolved characteristics of storm events that can assist local decision makers and engineers to thoroughly review and/or update the current design standards for urban and rural storm water management infrastructure in order to reduce the potential ramifications of increasing severe storms and resulting floods on existing and planned storm drainage and flood management systems in the state.

  20. Simultaneous multielement atomic absorption spectrometry with graphite furnace atomization

    NASA Astrophysics Data System (ADS)

    Harnly, James M.; Miller-Ihli, Nancy J.; O'Haver, Thomas C.

    The extended analytical range capability of a simultaneous multielement atomic absorption continuum source spectrometer (SIMAAC) was tested for furnace atomization with respect to the signal measurement mode (peak height and area), the atomization mode (from the wall or from a platform), and the temperature program mode (stepped or ramped atomization). These parameters were evaluated with respect to the shapes of the analytical curves, the detection limits, carry-over contamination and accuracy. Peak area measurements gave more linear calibration curves. Methods for slowing the atomization step heating rate, the use of a ramped temperature program or a platform, produced similar calibration curves and longer linear ranges than atomization with a stepped temperature program. Peak height detection limits were best using stepped atomization from the wall. Peak area detection limits for all atomization modes were similar. Carry-over contamination was worse for peak area than peak height, worse for ramped atomization than stepped atomization, and worse for atomization from a platform than from the wall. Accurate determinations (100 ± 12% for Ca, Cu, Fe, Mn, and Zn in National Bureau of Standards' Standard Reference Materials Bovine Liver 1577 and Rice Flour 1568 were obtained using peak area measurements with ramped atomization from the wall and stepped atomization from a platform. Only stepped atomization from a platform gave accurate recoveries for K. Accurate recoveries, 100 ± 10%, with precisions ranging from 1 to 36 % (standard deviation), were obtained for the determination of Al, Co, Cr, Fe, Mn, Mo, Ni. Pb, V and Zn in Acidified Waters (NBS SRM 1643 and 1643a) using stepped atomization from a platform.

  1. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    PubMed Central

    Luglio, Gaetano; De Palma, Giovanni Domenico; Tarquini, Rachele; Giglio, Mariano Cesare; Sollazzo, Viviana; Esposito, Emanuela; Spadarella, Emanuela; Peltrini, Roberto; Liccardo, Filomena; Bucci, Luigi

    2015-01-01

    Background Despite the proven benefits, laparoscopic colorectal surgery is still under utilized among surgeons. A steep learning is one of the causes of its limited adoption. Aim of the study is to determine the feasibility and morbidity rate after laparoscopic colorectal surgery in a single institution, “learning curve” experience, implementing a well standardized operative technique and recovery protocol. Methods The first 50 patients treated laparoscopically were included. All the procedures were performed by a trainee surgeon, supervised by a consultant surgeon, according to the principle of complete mesocolic excision with central vascular ligation or TME. Patients underwent a fast track recovery programme. Recovery parameters, short-term outcomes, morbidity and mortality have been assessed. Results Type of resections: 20 left side resections, 8 right side resections, 14 low anterior resection/TME, 5 total colectomy and IRA, 3 total panproctocolectomy and pouch. Mean operative time: 227 min; mean number of lymph-nodes: 18.7. Conversion rate: 8%. Mean time to flatus: 1.3 days; Mean time to solid stool: 2.3 days. Mean length of hospital stay: 7.2 days. Overall morbidity: 24%; major morbidity (Dindo–Clavien III): 4%. No anastomotic leak, no mortality, no 30-days readmission. Conclusion Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon. PMID:25859386

  2. Respiratory motion management using audio-visual biofeedback for respiratory-gated radiotherapy of synchrotron-based pulsed heavy-ion beam delivery

    PubMed Central

    He, Pengbo; Li, Qiang; Liu, Xinguo; Dai, Zhongying; Zhao, Ting; Fu, Tingyan; Shen, Guosheng; Ma, Yuanyuan; Huang, Qiyan; Yan, Yuanlin

    2014-01-01

    Purpose: To efficiently deliver respiratory-gated radiation during synchrotron-based pulsed heavy-ion radiotherapy, a novel respiratory guidance method combining a personalized audio-visual biofeedback (BFB) system, breath hold (BH), and synchrotron-based gating was designed to help patients synchronize their respiratory patterns with synchrotron pulses and to overcome typical limitations such as low efficiency, residual motion, and discomfort. Methods: In-house software was developed to acquire body surface marker positions and display BFB, gating signals, and real-time beam profiles on a LED screen. Patients were prompted to perform short BHs or short deep breath holds (SDBH) with the aid of BFB following a personalized standard BH/SDBH (stBH/stSDBH) guiding curve or their own representative BH/SDBH (reBH/reSDBH) guiding curve. A practical simulation was performed for a group of 15 volunteers to evaluate the feasibility and effectiveness of this method. Effective dose rates (EDRs), mean absolute errors between the guiding curves and the measured curves, and mean absolute deviations of the measured curves were obtained within 10%–50% duty cycles (DCs) that were synchronized with the synchrotron’s flat-top phase. Results: All maneuvers for an individual volunteer took approximately half an hour, and no one experienced discomfort during the maneuvers. Using the respiratory guidance methods, the magnitude of residual motion was almost ten times less than during nongated irradiation, and increases in the average effective dose rate by factors of 2.39–4.65, 2.39–4.59, 1.73–3.50, and 1.73–3.55 for the stBH, reBH, stSDBH, and reSDBH guiding maneuvers, respectively, were observed in contrast with conventional free breathing-based gated irradiation, depending on the respiratory-gated duty cycle settings. Conclusions: The proposed respiratory guidance method with personalized BFB was confirmed to be feasible in a group of volunteers. Increased effective dose rate and improved overall treatment precision were observed compared to conventional free breathing-based, respiratory-gated irradiation. Because breathing guidance curves could be established based on the respective average respiratory period and amplitude for each patient, it may be easier for patients to cooperate using this technique. PMID:25370622

  3. TU-EF-304-06: A Comparison of CT Number to Relative Linear Stopping Power Conversion Curves Used by Proton Therapy Centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, P; Lowenstein, J; Kry, S

    Purpose: To compare the CT Number (CTN) to Relative Linear Stopping Power (RLSP) conversion curves used by 14 proton institutions in their dose calculations. Methods: The proton institution’s CTN to RLSP conversion curves were collected by the Imaging and Radiation Oncology Core (IROC) Houston QA Center during its on-site dosimetry review audits. The CTN values were converted to scaled CT Numbers. The scaling assigns a CTN of 0 to air and 1000 to water to allow intercomparison. The conversion curves were compared and the mean curve was calculated based on institutions’ predicted RLSP values for air (CTN 0), lung (CTNmore » 250), fat (CTN 950), water (1000), liver (CTN 1050), and bone (CTN 2000) points. Results: One institution’s curve was found to have a unique curve shape between the scaled CTN of 1025 to 1225. This institution modified its curve based on the findings. Another institution had higher RLSP values than expected for both low and high CTNs. This institution recalibrated their two CT scanners and the new data placed their curve closer to the mean of all institutions. After corrections were made to several conversion curves, four institutions still fall outside 2 standard deviations at very low CTNs (100–200), and two institutions fall outside between CTN 850–900. The largest percent difference in RLSP values between institutions for the specific tissues reviewed was 22% for the lung point. Conclusion: The review and comparison of CTN to RLSP conversion curves allows IROC Houston to identify any outliers and make recommendations for improvement. Several institutions improved their clinical dose calculation accuracy as a Result of this review. There is still area for improvement, particularly in the lung area of the curve. The IROC Houston QA Center is supported by NCI grant CA180803.« less

  4. Development and initial validation of the Classification of Early-Onset Scoliosis (C-EOS).

    PubMed

    Williams, Brendan A; Matsumoto, Hiroko; McCalla, Daren J; Akbarnia, Behrooz A; Blakemore, Laurel C; Betz, Randal R; Flynn, John M; Johnston, Charles E; McCarthy, Richard E; Roye, David P; Skaggs, David L; Smith, John T; Snyder, Brian D; Sponseller, Paul D; Sturm, Peter F; Thompson, George H; Yazici, Muharrem; Vitale, Michael G

    2014-08-20

    Early-onset scoliosis is a heterogeneous condition, with highly variable manifestations and natural history. No standardized classification system exists to describe and group patients, to guide optimal care, or to prognosticate outcomes within this population. A classification system for early-onset scoliosis is thus a necessary prerequisite to the timely evolution of care of these patients. Fifteen experienced surgeons participated in a nominal group technique designed to achieve a consensus-based classification system for early-onset scoliosis. A comprehensive list of factors important in managing early-onset scoliosis was generated using a standardized literature review, semi-structured interviews, and open forum discussion. Three group meetings and two rounds of surveying guided the selection of classification components, subgroupings, and cut-points. Initial validation of the system was conducted using an interobserver reliability assessment based on the classification of a series of thirty cases. Nominal group technique was used to identify three core variables (major curve angle, etiology, and kyphosis) with high group content validity scores. Age and curve progression ranked slightly lower. Participants evaluated the cases of thirty patients with early-onset scoliosis for reliability testing. The mean kappa value for etiology (0.64) was substantial, while the mean kappa values for major curve angle (0.95) and kyphosis (0.93) indicated almost perfect agreement. The final classification consisted of a continuous age prefix, etiology (congenital or structural, neuromuscular, syndromic, and idiopathic), major curve angle (1, 2, 3, or 4), and kyphosis (-, N, or +) variables, and an optional progression modifier (P0, P1, or P2). Utilizing formal consensus-building methods in a large group of surgeons experienced in treating early-onset scoliosis, a novel classification system for early-onset scoliosis was developed with all core components demonstrating substantial to excellent interobserver reliability. This classification system will serve as a foundation to guide ongoing research efforts and standardize communication in the clinical setting. Copyright © 2014 by The Journal of Bone and Joint Surgery, Incorporated.

  5. Computer-Aided Recognition of Facial Attributes for Fetal Alcohol Spectrum Disorders.

    PubMed

    Valentine, Matthew; Bihm, Dustin C J; Wolf, Lior; Hoyme, H Eugene; May, Philip A; Buckley, David; Kalberg, Wendy; Abdul-Rahman, Omar A

    2017-12-01

    To compare the detection of facial attributes by computer-based facial recognition software of 2-D images against standard, manual examination in fetal alcohol spectrum disorders (FASD). Participants were gathered from the Fetal Alcohol Syndrome Epidemiology Research database. Standard frontal and oblique photographs of children were obtained during a manual, in-person dysmorphology assessment. Images were submitted for facial analysis conducted by the facial dysmorphology novel analysis technology (an automated system), which assesses ratios of measurements between various facial landmarks to determine the presence of dysmorphic features. Manual blinded dysmorphology assessments were compared with those obtained via the computer-aided system. Areas under the curve values for individual receiver-operating characteristic curves revealed the computer-aided system (0.88 ± 0.02) to be comparable to the manual method (0.86 ± 0.03) in detecting patients with FASD. Interestingly, cases of alcohol-related neurodevelopmental disorder (ARND) were identified more efficiently by the computer-aided system (0.84 ± 0.07) in comparison to the manual method (0.74 ± 0.04). A facial gestalt analysis of patients with ARND also identified more generalized facial findings compared to the cardinal facial features seen in more severe forms of FASD. We found there was an increased diagnostic accuracy for ARND via our computer-aided method. As this category has been historically difficult to diagnose, we believe our experiment demonstrates that facial dysmorphology novel analysis technology can potentially improve ARND diagnosis by introducing a standardized metric for recognizing FASD-associated facial anomalies. Earlier recognition of these patients will lead to earlier intervention with improved patient outcomes. Copyright © 2017 by the American Academy of Pediatrics.

  6. TYPE Ia SUPERNOVA DISTANCE MODULUS BIAS AND DISPERSION FROM K-CORRECTION ERRORS: A DIRECT MEASUREMENT USING LIGHT CURVE FITS TO OBSERVED SPECTRAL TIME SERIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, C.; Aldering, G.; Aragon, C.

    2015-02-10

    We estimate systematic errors due to K-corrections in standard photometric analyses of high-redshift Type Ia supernovae. Errors due to K-correction occur when the spectral template model underlying the light curve fitter poorly represents the actual supernova spectral energy distribution, meaning that the distance modulus cannot be recovered accurately. In order to quantify this effect, synthetic photometry is performed on artificially redshifted spectrophotometric data from 119 low-redshift supernovae from the Nearby Supernova Factory, and the resulting light curves are fit with a conventional light curve fitter. We measure the variation in the standardized magnitude that would be fit for a givenmore » supernova if located at a range of redshifts and observed with various filter sets corresponding to current and future supernova surveys. We find significant variation in the measurements of the same supernovae placed at different redshifts regardless of filters used, which causes dispersion greater than ∼0.05 mag for measurements of photometry using the Sloan-like filters and a bias that corresponds to a 0.03 shift in w when applied to an outside data set. To test the result of a shift in supernova population or environment at higher redshifts, we repeat our calculations with the addition of a reweighting of the supernovae as a function of redshift and find that this strongly affects the results and would have repercussions for cosmology. We discuss possible methods to reduce the contribution of the K-correction bias and uncertainty.« less

  7. National Health and Nutrition Examination Survey whole-body dual-energy X-ray absorptiometry reference data for GE Lunar systems.

    PubMed

    Fan, Bo; Shepherd, John A; Levine, Michael A; Steinberg, Dee; Wacker, Wynn; Barden, Howard S; Ergun, David; Wu, Xin P

    2014-01-01

    The National Health and Nutrition Examination Survey (NHANES 1999-2004) includes adult and pediatric comparisons for total body bone and body composition results. Because dual-energy x-ray absorptiometry (DXA) measurements from different manufacturers are not standardized, NHANES reference values currently are applicable only to a single make and model of Hologic DXA system. The purpose of this study was to derive body composition reference curves for GE Healthcare Lunar DXA systems. Published values from the NHANES 1999-2004 survey were acquired from the Centers for Disease Control and Prevention website. Using previously reported cross-calibration equations between Hologic and GE-Lunar, we converted the total body and regional bone and soft-tissue measurements from NHANES 1999-2004 to GE-Lunar values. The LMS (LmsChartMaker Pro Version 3.5) curve fitting method was used to generate GE-Lunar reference curves. Separate curves were generated for each sex and ethnicity. The reference curves were also divided into pediatric (≤20 years old) and adult (>20 years old) groups. Adult reference curves were derived as a function of age. Additional relationships of pediatric DXA values were derived as a function of height, lean mass, and bone area. Robustness was tested between Hologic and GE-Lunar Z-score values. The NHANES 1999-2004 survey included a sample of 20,672 participants' (9630 female) DXA scans. A total of 8056 participants were younger than 20 yr and were included in the pediatric reference data set. Participants enrolled in the study who weighed more than 136 kg (over scanner table limit) were excluded. The average Z-scores comparing the new GE-Lunar reference curves are close to zero, and the standard deviation of the Z-scores are close to one for all variables. As expected, all measurements on the GE-Lunar reference curves for participants younger than 20 yr increase monotonically with age. In the adult population, most of the curves are constant at younger age and drop moderately as age increases. We have presented NHANES reference curves applicable to DXA whole-body scans acquired on GE Healthcare Lunar systems by age, sex and ethnicity. Users of GE Healthcare GE-Lunar DXA systems can now benefit from the large body composition reference data set collected in the NHANES 1999-2004 study. Copyright © 2014 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  8. Fracture Toughness of Advanced Ceramics at Room Temperature

    PubMed Central

    Quinn, George D.; Salem, Jonathan; Bar-on, Isa; Cho, Kyu; Foley, Michael; Fang, Ho

    1992-01-01

    This report presents the results obtained by the five U.S. participating laboratories in the Versailles Advanced Materials and Standards (VAMAS) round-robin for fracture toughness of advanced ceramics. Three test methods were used: indentation fracture, indentation strength, and single-edge pre-cracked beam. Two materials were tested: a gas-pressure sintered silicon nitride and a zirconia toughened alumina. Consistent results were obtained with the latter two test methods. Interpretation of fracture toughness in the zirconia alumina composite was complicated by R-curve and environmentally-assisted crack growth phenomena. PMID:28053447

  9. Antioxidant activity and phytochemical compounds of snake fruit (Salacca Zalacca)

    NASA Astrophysics Data System (ADS)

    Suica-Bunghez, I. R.; Teodorescu, S.; Dulama, I. D.; Voinea, O. C.; imionescu, S.; Ion, R. M.

    2016-06-01

    Snake fruit (Salacca zalacca) is a palm tree species, which is found in Malaysia and Indonesia. This study was conducted to investigate and compare the composition, total phenolic, flavonoid, tanins and monoterpenoids contents in the core and shell fruits. Concentration values of extracts were obtained from standard curves obtained. Antioxidant activity was determined using DPPH method. For all methods it was used the UV-VIS Specord M40, using different wavelength. The infrared spectral analysis was carried out to caracterized the type of functional group existent in snake fruit parts (shell and core).

  10. Comparison of geometric morphometric outline methods in the discrimination of age-related differences in feather shape

    PubMed Central

    Sheets, H David; Covino, Kristen M; Panasiewicz, Joanna M; Morris, Sara R

    2006-01-01

    Background Geometric morphometric methods of capturing information about curves or outlines of organismal structures may be used in conjunction with canonical variates analysis (CVA) to assign specimens to groups or populations based on their shapes. This methodological paper examines approaches to optimizing the classification of specimens based on their outlines. This study examines the performance of four approaches to the mathematical representation of outlines and two different approaches to curve measurement as applied to a collection of feather outlines. A new approach to the dimension reduction necessary to carry out a CVA on this type of outline data with modest sample sizes is also presented, and its performance is compared to two other approaches to dimension reduction. Results Two semi-landmark-based methods, bending energy alignment and perpendicular projection, are shown to produce roughly equal rates of classification, as do elliptical Fourier methods and the extended eigenshape method of outline measurement. Rates of classification were not highly dependent on the number of points used to represent a curve or the manner in which those points were acquired. The new approach to dimensionality reduction, which utilizes a variable number of principal component (PC) axes, produced higher cross-validation assignment rates than either the standard approach of using a fixed number of PC axes or a partial least squares method. Conclusion Classification of specimens based on feather shape was not highly dependent of the details of the method used to capture shape information. The choice of dimensionality reduction approach was more of a factor, and the cross validation rate of assignment may be optimized using the variable number of PC axes method presented herein. PMID:16978414

  11. Development of a Thiolysis HPLC Method for the Analysis of Procyanidins in Cranberry Products.

    PubMed

    Gao, Chi; Cunningham, David G; Liu, Haiyan; Khoo, Christina; Gu, Liwei

    2018-03-07

    The objective of this study was to develop a thiolysis HPLC method to quantify total procyanidins, the ratio of A-type linkages, and A-type procyanidin equivalents in cranberry products. Cysteamine was utilized as a low-odor substitute of toluene-α-thiol for thiolysis depolymerization. A reaction temperature of 70 °C and reaction time of 20 min, in 0.3 M of HCl, were determined to be optimum depolymerization conditions. Thiolytic products of cranberry procyanidins were separated by RP-HPLC and identified using high-resolution mass spectrometry. Standards curves of good linearity were obtained on thiolyzed procyanidin dimer A2 and B2 external standards. The detection and quantification limits, recovery, and precision of this method were validated. The new method was applied to quantitate total procyanidins, average degree of polymerization, ratio of A-type linkages, and A-type procyanidin equivalents in cranberry products. Results showed that the method was suitable for quantitative and qualitative analysis of procyanidins in cranberry products.

  12. A Validation of an Intelligent Decision-Making Support System for the Nutrition Diagnosis of Bariatric Surgery Patients

    PubMed Central

    Martins, Cristina; Dias, João; Pinto, José S

    2014-01-01

    Background Bariatric surgery is an important method for treatment of morbid obesity. It is known that significant nutritional deficiencies might occur after surgery, such as, calorie-protein malnutrition, iron deficiency anemia, and lack of vitamin B12, thiamine, and folic acid. Objective The objective of our study was to validate a computerized intelligent decision support system that suggests nutritional diagnoses of patients submitted to bariatric surgery. Methods There were fifteen clinical cases that were developed and sent to three dietitians in order to evaluate and define a nutritional diagnosis. After this step, the cases were sent to four bariatric surgery expert dietitians who were aiming to collaborate on a gold standard. The nutritional diagnosis was to be defined individually, and any disagreements were solved through a consensus. The final result was used as the gold standard. Bayesian networks were used to implement the system, and database training was done with Shell Netica. For the system validation, a similar answer rate was calculated, as well as the specificity and sensibility. Receiver operating characteristic (ROC) curves were projected to each nutritional diagnosis. Results Among the four experts, the rate of similar answers found was 80% (48/60) to 93% (56/60), depending on the nutritional diagnosis. The rate of similar answers of the system, compared to the gold standard, was 100% (60/60). The system sensibility and specificity were 95.0%. The ROC curves projection showed that the system was able to represent the expert knowledge (gold standard), and to help them in their daily tasks. Conclusions The system that was developed was validated to be used by health care professionals for decision-making support in their nutritional diagnosis of patients submitted to bariatric surgery. PMID:25601419

  13. Determination of campesterol, stigmasterol, and beta-sitosterol in saw palmetto raw materials and dietary supplements by gas chromatography: single-laboratory validation.

    PubMed

    Sorenson, Wendy R; Sullivan, Darryl

    2006-01-01

    In conjunction with an AOAC Presidential Task Force on Dietary Supplements, a method was validated for measurement of 3 plant sterols (phytosterols) in saw palmetto raw materials, extracts, and dietary supplements. AOAC Official Method 994.10, "Cholesterol in Foods," was modified for purposes of this validation. Test samples were saponified at high temperature with ethanolic potassium hydroxide solution. The unsaponifiable fraction containing phytosterols (campesterol, stigmasterol, and beta-sitosterol) was extracted with toluene. Phytosterols were derivatized to trimethylsilyl ethers and then quantified by gas chromatography with a hydrogen flame ionization detector. The presence of the phytosterols was detected at concentrations greater than or equal to 1.00 mg/100 g based on 2-3 g of sample. The standard curve range for this assay was 0.00250 to 0.200 mg/mL. The calibration curves for all phytosterols had correlation coefficients greater than or equal to 0.995. Precision studies produced relative standard deviation values of 1.52 to 7.27% for campesterol, 1.62 to 6.48% for stigmasterol, and 1.39 to 10.5% for beta-sitosterol. Recoveries for samples fortified at 100% of the inherent values averaged 98.5 to 105% for campesterol, 95.0 to 108% for stigmasterol, and 85.0 to 103% for beta-sitosterol.

  14. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    DOE PAGES

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    2016-09-17

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North Americanmore » datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.« less

  15. A retrospective analysis of compact fluorescent lamp experience curves and their correlations to deployment programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Sarah Josephine; Wei, Max; Sohn, Michael D.

    Experience curves are useful for understanding technology development and can aid in the design and analysis of market transformation programs. Here, we employ a novel approach to create experience curves, to examine both global and North American compact fluorescent lamp (CFL) data for the years 1990–2007. We move away from the prevailing method of fitting a single, constant, exponential curve to data and instead search for break points where changes in the learning rate may have occurred. Our analysis suggests a learning rate of approximately 21% for the period of 1990–1997, and 51% and 79% in global and North Americanmore » datasets, respectively, after 1998. We use price data for this analysis; therefore our learning rates encompass developments beyond typical “learning by doing”, including supply chain impacts such as market competition. We examine correlations between North American learning rates and the initiation of new programs, abrupt technological advances, and economic and political events, and find an increased learning rate associated with design advancements and federal standards programs. Our findings support the use of segmented experience curves for retrospective and prospective technology analysis, and may imply that investments in technology programs have contributed to an increase of the CFL learning rate.« less

  16. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.

  17. Calculating Potential Energy Curves with Quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2014-06-01

    Quantum Monte Carlo (QMC) is a computational technique that can be applied to the electronic Schrödinger equation for molecules. QMC methods such as Variational Monte Carlo (VMC) and Diffusion Monte Carlo (DMC) have demonstrated the capability of capturing large fractions of the correlation energy, thus suggesting their possible use for high-accuracy quantum chemistry calculations. QMC methods scale particularly well with respect to parallelization making them an attractive consideration in anticipation of next-generation computing architectures which will involve massive parallelization with millions of cores. Due to the statistical nature of the approach, in contrast to standard quantum chemistry methods, uncertainties (error-bars) are associated with each calculated energy. This study focuses on the cost, feasibility and practical application of calculating potential energy curves for small molecules with QMC methods. Trial wave functions were constructed with the multi-configurational self-consistent field (MCSCF) method from GAMESS-US.[1] The CASINO Monte Carlo quantum chemistry package [2] was used for all of the DMC calculations. An overview of our progress in this direction will be given. References: M. W. Schmidt et al. J. Comput. Chem. 14, 1347 (1993). R. J. Needs et al. J. Phys.: Condensed Matter 22, 023201 (2010).

  18. Measurement of Menadione in Urine by HPLC

    PubMed Central

    Rajabi, Ala Al; Peterson, James; Choi, Sang Woon; Suttie, John; Barakat, Susan; Booth, Sarah L

    2010-01-01

    Menadione is a metabolite of vitamin K that is excreted in urine. A high performance liquid chromatography (HPLC) method using a C30 column, post-column zinc reduction and fluorescence detection was developed to measure urinary menadione. The mobile phase was composed of 95% methanol with 0.55% aqueous solution and 5% DI H2O. Menaquinone-2 (MK-2) was used as an internal standard. The standard calibration curve was linear with a correlation coefficient (R2) of 0.999 for both menadione and MK-2. The lower limit of quantification (LLOQ) was 0.3 pmole menadione/mL urine. Sample preparation involved hydrolysis of menadiol conjugates and oxidizing the released menadiol to menadione. Using this method, urinary menadione was shown to increase in response to 3 years of phylloquinone supplementation. This HPLC method is a sensitive and reproducible way to detect menadione in urine. Research support: USDA ARS Cooperative Agreement 58-1950-7-707. PMID:20719580

  19. Measurement of menadione in urine by HPLC.

    PubMed

    Al Rajabi, Ala; Peterson, James; Choi, Sang-Woon; Suttie, John; Barakat, Susan; Booth, Sarah L

    2010-09-15

    Menadione is a metabolite of vitamin K that is excreted in urine. A high performance liquid chromatography (HPLC) method using a C(30) column, post-column zinc reduction and fluorescence detection was developed to measure urinary menadione. The mobile phase was composed of 95% methanol with 0.55% aqueous solution and 5% DI H(2)O. Menaquinone-2 (MK-2) was used as an internal standard. The standard calibration curve was linear with a correlation coefficient (R(2)) of 0.999 for both menadione and MK-2. The lower limit of quantification (LLOQ) was 0.3pmole menadione/mL urine. Sample preparation involved hydrolysis of menadiol conjugates and oxidizing the released menadiol to menadione. Using this method, urinary menadione was shown to increase in response to 3 years of phylloquinone supplementation. This HPLC method is a sensitive and reproducible way to detect menadione in urine. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  20. A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies.

    PubMed

    Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán

    2012-01-01

    The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and amenable to be implemented in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences among the AUC values obtained by both procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of 95% confidence interval. In this way, the new proposed method demonstrates to be as useful as WinNonlin® software when it was applicable. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    PubMed

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  2. Reference Curve for the Mean Uterine Artery Pulsatility Index in Singleton Pregnancies.

    PubMed

    Weichert, Alexander; Hagen, Andreas; Tchirikov, Michael; Fuchs, Ilka B; Henrich, Wolfgang; Entezami, Michael

    2017-05-01

    Doppler sonography of the uterine artery (UA) is done to monitor pregnancies, because the detected flow patterns are useful to draw inferences about possible disorders of trophoblast invasion. Increased resistance in the UA is associated with an increased risk of preeclampsia and/or intrauterine growth restriction (IUGR) and perinatal mortality. In the absence of standardized figures, the normal ranges of the various available reference curves sometimes differ quite substantially from one another. The causes for this are differences in the flow patterns of the UA depending on the position of the pulsed Doppler gates as well as branching of the UA. Because of the discrepancies between the different reference curves and the practical problems this poses for guideline recommendations, we thought it would be useful to create our own reference curves for Doppler measurements of the UA obtained from a singleton cohort under standardized conditions. This retrospective cohort study was carried out in the Department of Obstetrics of the Charité - Universitätsmedizin Berlin, the Department for Obstetrics and Prenatal Medicine of the University Hospital Halle (Saale) and the Center for Prenatal Diagnostics and Human Genetics Kurfürstendamm 199. Available datasets from the three study locations were identified and reference curves were generated using the LMS method. Measured values were correlated with age of gestation, and a cubic model and Box-Cox power transformation (L), the median (M) and the coefficient of variation (S) were used to smooth the curves. 103 720 Doppler examinations of the UA carried out in singleton pregnancies from the 11th week of gestation (10 + 1 GW) were analyzed. The mean pulsatility index (Mean PI) showed a continuous decline over the course of pregnancy, dropping to a plateau of around 0.84 between the 23rd and 27th GW, after which it decreased again. Age of gestation, placental position, position of pulsed Doppler gates and branching of the UA can all change the flow pattern. The mean pulsatility index (Mean PI) showed a continuous decrease over time. There were significant differences between our data and alternative reference curves. A system of classifying Doppler studies and a reference curve adapted to the current technology are urgently required to differentiate better between physiological and pathological findings.

  3. A new responder criterion (relative effect per patient (REPP) > 0.2) externally validated in a large total hip replacement multicenter cohort (EUROHIP).

    PubMed

    Huber, J; Hüsler, J; Dieppe, P; Günther, K P; Dreinhöfer, K; Judge, A

    2016-03-01

    To validate a new method to identify responders (relative effect per patient (REPP) >0.2) using the OMERACT-OARSI criteria as gold standard in a large multicentre sample. The REPP ([score before - after treatment]/score before treatment) was calculated for 845 patients of a large multicenter European cohort study for THR. The patients with a REPP >0.2 were defined as responders. The responder rate was compared to the gold standard (OMERACT-OARSI criteria) using receiver operator characteristic (ROC) curve analysis for sensitivity, specificity and percentage of appropriately classified patients. With the criterion REPP>0.2 85.4% of the patients were classified as responders, applying the OARSI-OMERACT criteria 85.7%. The new method had 98.8% sensitivity, 94.2% specificity and 98.1% of the patients were correctly classified compared to the gold standard. The external validation showed a high sensitivity and also specificity of a new criterion to identify a responder compared to the gold standard method. It is simple and has no uncertainties due to a single classification criterion. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.

  5. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360

  6. The average receiver operating characteristic curve in multireader multicase imaging studies

    PubMed Central

    Samuelson, F W

    2014-01-01

    Objective: In multireader, multicase (MRMC) receiver operating characteristic (ROC) studies for evaluating medical imaging systems, the area under the ROC curve (AUC) is often used as a summary metric. Owing to the limitations of AUC, plotting the average ROC curve to accompany the rigorous statistical inference on AUC is recommended. The objective of this article is to investigate methods for generating the average ROC curve from ROC curves of individual readers. Methods: We present both a non-parametric method and a parametric method for averaging ROC curves that produce a ROC curve, the area under which is equal to the average AUC of individual readers (a property we call area preserving). We use hypothetical examples, simulated data and a real-world imaging data set to illustrate these methods and their properties. Results: We show that our proposed methods are area preserving. We also show that the method of averaging the ROC parameters, either the conventional bi-normal parameters (a, b) or the proper bi-normal parameters (c, da), is generally not area preserving and may produce a ROC curve that is intuitively not an average of multiple curves. Conclusion: Our proposed methods are useful for making plots of average ROC curves in MRMC studies as a companion to the rigorous statistical inference on the AUC end point. The software implementing these methods is freely available from the authors. Advances in knowledge: Methods for generating the average ROC curve in MRMC ROC studies are formally investigated. The area-preserving criterion we defined is useful to evaluate such methods. PMID:24884728

  7. Estimating time-dependent ROC curves using data under prevalent sampling.

    PubMed

    Li, Shanshan

    2017-04-15

    Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Electrical characterization of gold-DNA-gold structures in presence of an external magnetic field by means of I-V curve analysis.

    PubMed

    Khatir, Nadia Mahmoudi; Banihashemian, Seyedeh Maryam; Periasamy, Vengadesh; Ritikos, Richard; Abd Majid, Wan Haliza; Abdul Rahman, Saadah

    2012-01-01

    This work presents an experimental study of gold-DNA-gold structures in the presence and absence of external magnetic fields with strengths less than 1,200.00 mT. The DNA strands, extracted by standard method were used to fabricate a Metal-DNA-Metal (MDM) structure. Its electric behavior when subjected to a magnetic field was studied through its current-voltage (I-V) curve. Acquisition of the I-V curve demonstrated that DNA as a semiconductor exhibits diode behavior in the MDM structure. The current versus magnetic field strength followed a decreasing trend because of a diminished mobility in the presence of a low magnetic field. This made clear that an externally imposed magnetic field would boost resistance of the MDM structure up to 1,000.00 mT and for higher magnetic field strengths we can observe an increase in potential barrier in MDM junction. The magnetic sensitivity indicates the promise of using MDM structures as potential magnetic sensors.

  9. [Computer aided design for fixed partial denture framework based on reverse engineering technology].

    PubMed

    Sun, Yu-chun; Lü, Pei-jun; Wang, Yong

    2006-03-01

    To explore a computer aided design (CAD) route for the framework of domestic fixed partial denture (FPD) and confirm the suitable method of 3-D CAD. The working area of a dentition model was scanned with a 3-D mechanical scanner. Using the reverse engineering (RE) software, margin and border curves were extracted and several reference curves were created to ensure the dimension and location of pontic framework that was taken from the standard database. The shoulder parts of the retainers were created after axial surfaces constructed. The connecting areas, axial line and curving surface of the framework connector were finally created. The framework of a three-unit FPD was designed with RE technology, which showed smooth surfaces and continuous contours. The design route is practical. The result of this study is significant in theory and practice, which will provide a reference for establishing the computer aided design/computer aided manufacture (CAD/CAM) system of domestic FPD.

  10. Electrical Characterization of Gold-DNA-Gold Structures in Presence of an External Magnetic Field by Means of I–V Curve Analysis

    PubMed Central

    Khatir, Nadia Mahmoudi; Banihashemian, Seyedeh Maryam; Periasamy, Vengadesh; Ritikos, Richard; Majid, Wan Haliza Abd; Rahman, Saadah Abdul

    2012-01-01

    This work presents an experimental study of gold-DNA-gold structures in the presence and absence of external magnetic fields with strengths less than 1,200.00 mT. The DNA strands, extracted by standard method were used to fabricate a Metal-DNA-Metal (MDM) structure. Its electric behavior when subjected to a magnetic field was studied through its current-voltage (I–V) curve. Acquisition of the I–V curve demonstrated that DNA as a semiconductor exhibits diode behavior in the MDM structure. The current versus magnetic field strength followed a decreasing trend because of a diminished mobility in the presence of a low magnetic field. This made clear that an externally imposed magnetic field would boost resistance of the MDM structure up to 1,000.00 mT and for higher magnetic field strengths we can observe an increase in potential barrier in MDM junction. The magnetic sensitivity indicates the promise of using MDM structures as potential magnetic sensors. PMID:22737025

  11. Electrochemical Skin Conductance May Be Used to Screen for Diabetic Cardiac Autonomic Neuropathy in a Chinese Population with Diabetes

    PubMed Central

    He, Tianyi; Wang, Chuan; Zuo, Anju; Liu, Pan; Li, Wenjuan

    2017-01-01

    Aims. This study aimed to assess whether the electrochemical skin conductance (ESC) could be used to screen for diabetic cardiac autonomic neuropathy (DCAN) in a Chinese population with diabetes. Methods. We recruited 75 patients with type 2 diabetes mellitus (T2DM) and 45 controls without diabetes. DCAN was diagnosed by the cardiovascular autonomic reflex tests (CARTs) as gold standard. In all subjects ESCs of hands and feet were also detected by SUDOSCAN™ as a new screening method. The efficacy was assessed by receiver operating characteristic (ROC) curve analysis. Results. The ESCs of both hands and feet were significantly lower in T2DM patients with DCAN than those without DCAN (67.33 ± 15.37 versus 78.03 ± 13.73, P = 0.002, and 57.77 ± 20.99 versus 75.03 ± 11.41, P < 0.001). The ROC curve analysis showed the areas under the ROC curve were both 0.75 for ESCs of hands and feet in screening DCAN. And the optimal cut-off values of ESCs, sensitivities, and specificities were 76 μS, 76.7%, and 75.6% for hands and 75 μS, 80.0%, and 60.0% for feet, respectively. Conclusions. ESC measurement is a reliable and feasible method to screen DCAN in the Chinese population with diabetes before further diagnosis with CARTs. PMID:28280746

  12. Bioanalytical method development and validation for the determination of glycine in human cerebrospinal fluid by ion-pair reversed-phase liquid chromatography-tandem mass spectrometry.

    PubMed

    Jiang, Jian; James, Christopher A; Wong, Philip

    2016-09-05

    A LC-MS/MS method has been developed and validated for the determination of glycine in human cerebrospinal fluid (CSF). The validated method used artificial cerebrospinal fluid as a surrogate matrix for calibration standards. The calibration curve range for the assay was 100-10,000ng/mL and (13)C2, (15)N-glycine was used as an internal standard (IS). Pre-validation experiments were performed to demonstrate parallelism with surrogate matrix and standard addition methods. The mean endogenous glycine concentration in a pooled human CSF determined on three days by using artificial CSF as a surrogate matrix and the method of standard addition was found to be 748±30.6 and 768±18.1ng/mL, respectively. A percentage difference of -2.6% indicated that artificial CSF could be used as a surrogate calibration matrix for the determination of glycine in human CSF. Quality control (QC) samples, except the lower limit of quantitation (LLOQ) QC and low QC samples, were prepared by spiking glycine into aliquots of pooled human CSF sample. The low QC sample was prepared from a separate pooled human CSF sample containing low endogenous glycine concentrations, while the LLOQ QC sample was prepared in artificial CSF. Standard addition was used extensively to evaluate matrix effects during validation. The validated method was used to determine the endogenous glycine concentrations in human CSF samples. Incurred sample reanalysis demonstrated reproducibility of the method. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. A Novel Database to Rank and Display Archeomagnetic Intensity Data

    NASA Astrophysics Data System (ADS)

    Donadini, F.; Korhonen, K.; Riisager, P.; Pesonen, L. J.; Kahma, K.

    2005-12-01

    To understand the content and the causes of the changes in the Earth's magnetic field beyond the observatory records one has to rely on archeomagnetic and lake sediment paleomagnetic data. The regional archeointensity curves are often of different quality and temporally variable which hampers the global analysis of the data in terms of dipole vs non-dipole field. We have developed a novel archeointensity database application utilizing MySQL, PHP (PHP Hypertext Preprocessor), and the Generic Mapping Tools (GMT) for ranking and displaying geomagnetic intensity data from the last 12000 years. Our application has the advantage that no specific software is required to query the database and view the results. Querying the database is performed using any Web browser; a fill-out form is used to enter the site location and a minimum ranking value to select the data points to be displayed. The form also features the possibility to select plotting of the data as an archeointensity curve with error bars, and a Virtual Axial Dipole Moment (VADM) or ancient field value (Ba) curve calculated using the CALS7K model (Continuous Archaeomagnetic and Lake Sediment geomagnetic model) of (Korte and Constable, 2005). The results of a query are displayed on a Web page containing a table summarizing the query parameters, a table showing the archeointensity values satisfying the query parameters, and a plot of VADM or Ba as a function of sample age. The database consists of eight related tables. The main one, INTENSITIES, stores the 3704 archeointensity measurements collected from 159 publications as VADM (and VDM when available) and Ba values, including their standard deviations and sampling locations. It also contains the number of samples and specimens measured from each site. The REFS table stores the references to a particular study. The names, latitudes, and longitudes of the regions where the samples were collected are stored in the SITES table. The MATERIALS, METHODS, SPECIMEN_TYPES and DATING_METHODS tables store information about the sample materials, intensity determination methods, specimen types and age determination methods. The SIGMA_COUNT table is used indirectly for ranking data according to the number of samples measured and their standard deviations. Each intensity measurement is assigned a score (0--2) depending on the number of specimens measured and their standard deviations, the intensity determination method, the type of specimens measured and materials. The ranking of each data point is calculated as the sum of the four scores and varies between 0 and 8. Additionally, users can select the parameters that will be included in the ranking.

  14. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    NASA Astrophysics Data System (ADS)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) have been constructed, each cell using metal sourced from a different supplier. The five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectroscopy (GDMS) technique have been obtained from three separate laboratories. In addition a series of high quality, long duration freezing curves have been obtained for each cell, using three different high quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves were then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.

  16. Automatic Detection and Recognition of Craters Based on the Spectral Features of Lunar Rocks and Minerals

    NASA Astrophysics Data System (ADS)

    Ye, L.; Xu, X.; Luan, D.; Jiang, W.; Kang, Z.

    2017-07-01

    Crater-detection approaches can be divided into four categories: manual recognition, shape-profile fitting algorithms, machine-learning methods and geological information-based analysis using terrain and spectral data. The mainstream method is Shape-profile fitting algorithms. Many scholars throughout the world use the illumination gradient information to fit standard circles by least square method. Although this method has achieved good results, it is difficult to identify the craters with poor "visibility", complex structure and composition. Moreover, the accuracy of recognition is difficult to be improved due to the multiple solutions and noise interference. Aiming at the problem, we propose a method for the automatic extraction of impact craters based on spectral characteristics of the moon rocks and minerals: 1) Under the condition of sunlight, the impact craters are extracted from MI by condition matching and the positions as well as diameters of the craters are obtained. 2) Regolith is spilled while lunar is impacted and one of the elements of lunar regolith is iron. Therefore, incorrectly extracted impact craters can be removed by judging whether the crater contains "non iron" element. 3) Craters which are extracted correctly, are divided into two types: simple type and complex type according to their diameters. 4) Get the information of titanium and match the titanium distribution of the complex craters with normal distribution curve, then calculate the goodness of fit and set the threshold. The complex craters can be divided into two types: normal distribution curve type of titanium and non normal distribution curve type of titanium. We validated our proposed method with MI acquired by SELENE. Experimental results demonstrate that the proposed method has good performance in the test area.

  17. Effect of Cooking Process on the Residues of Three Carbamate Pesticides in Rice

    PubMed Central

    Shoeibi, Shahram; Amirahmadi, Maryam; Yazdanpanah, Hassan; Pirali-Hamedani, Morteza; Pakzad, Saied Reza; Kobarfard, Farzad

    2011-01-01

    A gas chromatography mass spectrometry with spike calibration curve method was used to quantify three carbamate pesticides residue in cooked white rice and to estimate the reduction percentage of the cooking process duration. The selected pesticides are three carbamate pesticides including carbaryl and pirimicarb that their MRL is issued by “The Institute of Standards of Iran” and propoxur which is used as a widely consumed pesticide in rice. The analytical method entailed the following steps: 1- Blending 15 g cooked sample with 120 mL acetonitrile for 1 min in solvent proof blender, 2- Adding 6 g NaCl and blending for 1 min, 3- Filtering upper layer through 25 g anhydrous Na2SO4, 4- Cleaning up with PSA and MgSO4, 5- Centrifuging for 7 min, 6- Evaporating about 0.3 mL and reconstituting in toluene till 1 mL, 7- Injecting 2 μL extract into GC/MS and analyzing by single quadruple selected ion monitoring GC/MS-SQ-SIM. The concentration of pesticides and the percentage of pesticides amounts after the cooking were determined by gas chromatography mass-spectrometry (GC/MS) using with interpolation of the relative peak areas for each pesticide to internal standard peak area in the sample on the spiked calibration curve. Calibration curve was linear over the range of 25 to 1000 ng/g, and LOQ was 25 ng/g for all three pesticides. The percent of loss for the three pesticides were 78%, 55% and 35% for carbaryl, propoxur and pirimicarb respectively. Different parameters such as vapor pressure, boiling point, and suspect ability of the compound to hydrolysis, could be responsible for the losing of pesticides during the cooking process. PMID:24363690

  18. Relating oxygen partial pressure, saturation and content: the haemoglobin-oxygen dissociation curve.

    PubMed

    Collins, Julie-Ann; Rudenski, Aram; Gibson, John; Howard, Luke; O'Driscoll, Ronan

    2015-09-01

    The delivery of oxygen by arterial blood to the tissues of the body has a number of critical determinants including blood oxygen concentration (content), saturation (S O2 ) and partial pressure, haemoglobin concentration and cardiac output, including its distribution. The haemoglobin-oxygen dissociation curve, a graphical representation of the relationship between oxygen satur-ation and oxygen partial pressure helps us to understand some of the principles underpinning this process. Historically this curve was derived from very limited data based on blood samples from small numbers of healthy subjects which were manipulated in vitro and ultimately determined by equations such as those described by Severinghaus in 1979. In a study of 3524 clinical specimens, we found that this equation estimated the S O2 in blood from patients with normal pH and S O2 >70% with remarkable accuracy and, to our knowledge, this is the first large-scale validation of this equation using clinical samples. Oxygen saturation by pulse oximetry (S pO2 ) is nowadays the standard clinical method for assessing arterial oxygen saturation, providing a convenient, pain-free means of continuously assessing oxygenation, provided the interpreting clinician is aware of important limitations. The use of pulse oximetry reduces the need for arterial blood gas analysis (S aO2 ) as many patients who are not at risk of hypercapnic respiratory failure or metabolic acidosis and have acceptable S pO2 do not necessarily require blood gas analysis. While arterial sampling remains the gold-standard method of assessing ventilation and oxygenation, in those patients in whom blood gas analysis is indicated, arterialised capillary samples also have a valuable role in patient care. The clinical role of venous blood gases however remains less well defined.

  19. Factors related to curved femur in elderly Japanese women

    PubMed Central

    Tsuchie, Hiroyuki; Miyakoshi, Naohisa; Kasukawa, Yuji; Senma, Seietsu; Narita, Yuichiro; Miyamoto, Seiya; Hatakeyama, Yuji; Sasaki, Kana; Shimada, Yoichi

    2016-01-01

    Background Multiple factors are involved in the development of atypical femoral fractures, and excessive curvature of the femur is thought to be one of them. However, the pathogenesis of femoral curvature is unknown. We evaluated the influence of factors related to bone metabolism and posture on the development of femoral curvature. Methods A total of 139 women participated in the present study. Curvatures were measured using antero-posterior and lateral radiography of the femur. We evaluated some bone and vitamin D metabolism markers in serum, the bone mineral density (BMD), lumbar spine alignment, and pelvic tilt. Results We divided the women into two groups, curved and non-curved groups, based on the average plus standard deviation as the cut-off between the groups. When univariate logistic regression analysis was performed to detect factors affecting femoral curvature, the following were identified as indices significantly affecting the curvature: age of the patients, serum concentrations of calcium, intact parathyroid hormone, pentosidine, homocysteine and 25-hydroxyvitamin D (25(OH)D), and BMD of the proximal femur (P < 0.05) both in the lateral and anterior curvatures. When we used multivariate analyses to assess these factors, only 25(OH)D and age (lateral and anterior standardized odds ratio: 0.776 and 0.385, and 2.312 and 4.472, respectively) affected the femoral curvature (P < 0.05). Conclusion Femoral curvature is strongly influenced by age and serum vitamin D. PMID:27228191

  20. Development and characterization of a dynamic lesion phantom for the quantitative evaluation of dynamic contrast-enhanced MRI

    PubMed Central

    Freed, Melanie; de Zwart, Jacco A.; Hariharan, Prasanna; R. Myers, Matthew; Badano, Aldo

    2011-01-01

    Purpose: To develop a dynamic lesion phantom that is capable of producing physiological kinetic curves representative of those seen in human dynamic contrast-enhanced MRI (DCE-MRI) data. The objective of this phantom is to provide a platform for the quantitative comparison of DCE-MRI protocols to aid in the standardization and optimization of breast DCE-MRI. Methods: The dynamic lesion consists of a hollow, plastic mold with inlet and outlet tubes to allow flow of a contrast agent solution through the lesion over time. Border shape of the lesion can be controlled using the lesion mold production method. The configuration of the inlet and outlet tubes was determined using fluid transfer simulations. The total fluid flow rate was determined using x-ray images of the lesion for four different flow rates (0.25, 0.5, 1.0, and 1.5 ml∕s) to evaluate the resultant kinetic curve shape and homogeneity of the contrast agent distribution in the dynamic lesion. High spatial and temporal resolution x-ray measurements were used to estimate the true kinetic curve behavior in the dynamic lesion for benign and malignant example curves. DCE-MRI example data were acquired of the dynamic phantom using a clinical protocol. Results: The optimal inlet and outlet tube configuration for the lesion molds was two inlet molds separated by 30° and a single outlet tube directly between the two inlet tubes. X-ray measurements indicated that 1.0 ml∕s was an appropriate total fluid flow rate and provided truth for comparison with MRI data of kinetic curves representative of benign and malignant lesions. DCE-MRI data demonstrated the ability of the phantom to produce realistic kinetic curves. Conclusions: The authors have constructed a dynamic lesion phantom, demonstrated its ability to produce physiological kinetic curves, and provided estimations of its true kinetic curve behavior. This lesion phantom provides a tool for the quantitative evaluation of DCE-MRI protocols, which may lead to improved discrimination of breast cancer lesions. PMID:21992378

  1. Assessment of spinal flexibility in adolescent idiopathic scoliosis: suspension versus side-bending radiography.

    PubMed

    Lamarre, Marie-Eve; Parent, Stefan; Labelle, Hubert; Aubin, Carl-Eric; Joncas, Julie; Cabral, Anne; Petit, Yvan

    2009-03-15

    Prospective evaluation of a new suspension test to determine curve flexibility in adolescent idiopathic scoliosis (AIS) in comparison with erect side-bending. To verify whether the suspension is a better method than side-bending to estimate curve reducibility and to assess spine flexibility. Spinal flexibility is a decisive biomechanical parameter for the planning of AIS surgery. Side-bending is often referred as the gold standard, but it has a low reproducibility and there is no agreement amongst surgeons about the most advantageous method to use. Even more, every technique evaluates reducibility instead of flexibility since the forces involved in the change in shape of the spine are not considered. Eighteen patients scheduled for AIS surgery were studied. Preoperative radiological evaluation consisted of 4 radiographs: standing posteroanterior, left and right erect side-bending, and suspension. The side-bending and the suspension tests were compared on the basis of the apical vertebrae derotation and the scoliosis curve reduction. Frontal and axial flexibility indices, expressed as the ratio between the moment induced by the body weight and the reduction, were calculated from the suspension data. The average scoliosis curve reduction and apical vertebra derotation were 21 degrees (37%) and 3 degrees (12%), respectively for erect side-bending and 26 degrees (39%) and 7 degrees (28%), respectively for suspension. The erect side-bending test generated a larger curve reduction (P = 0.05) when considering the moderate curves only and the suspension test (P = 0.02) when considering the severe curves. The suspension test produced a larger axial derotation (P = 0.007) when considering all the curves. The average traction force during suspension was 306 N (187 N-377 N). The average estimation for the frontal flexibility index was 1.64 degrees/Nm (0.84-2.82) and 0.51 degrees/Nm (0.01-1.39) for the axial flexibility index. Results of this study demonstrate the feasibility to really evaluate the spine flexibility with the suspension test. The estimated flexibility values are realistic and similar to those reported in vitro. Suspension should be used in the future for spine flexibility assessment.

  2. Comparison of the acetyl bromide spectrophotometric method with other analytical lignin methods for determining lignin concentration in forage samples.

    PubMed

    Fukushima, Romualdo S; Hatfield, Ronald D

    2004-06-16

    Present analytical methods to quantify lignin in herbaceous plants are not totally satisfactory. A spectrophotometric method, acetyl bromide soluble lignin (ABSL), has been employed to determine lignin concentration in a range of plant materials. In this work, lignin extracted with acidic dioxane was used to develop standard curves and to calculate the derived linear regression equation (slope equals absorptivity value or extinction coefficient) for determining the lignin concentration of respective cell wall samples. This procedure yielded lignin values that were different from those obtained with Klason lignin, acid detergent acid insoluble lignin, or permanganate lignin procedures. Correlations with in vitro dry matter or cell wall digestibility of samples were highest with data from the spectrophotometric technique. The ABSL method employing as standard lignin extracted with acidic dioxane has the potential to be employed as an analytical method to determine lignin concentration in a range of forage materials. It may be useful in developing a quick and easy method to predict in vitro digestibility on the basis of the total lignin content of a sample.

  3. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    PubMed

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  4. Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.

    PubMed

    Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin

    2017-12-01

    Despite the rapid developments of X-ray cone-beam CT (CBCT), image noise still remains a major issue for the low dose CBCT. To suppress the noise effectively while retain the structures well for low dose CBCT image, in this paper, a sparse constraint based on the 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate our 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real data experiments are conducted for performance validation. Based on these results, we found: 1) the 3-D dictionary-based sparse coefficients have three orders narrower Laplacian distribution compared with the 2-D dictionary, suggesting the higher representation efficiencies of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z-shape, and hence referred to as Z-curve, in this paper; 3) the parameter associated with the maximum curvature point of the Z-curve suggests a nice parameter choice, which could be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method could deliver reconstructions with the lowest root mean squared errors and the highest structural similarity index compared with the competing methods; 5) similar noise performance as the regular dose FDK reconstruction regarding the standard deviation metric could be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose level projections. The contrast-noise ratio is improved by ~2.5/3.5 times with respect to two different cases under the (1/8) dose level compared with the low dose FDK reconstruction. The proposed method is expected to reduce the radiation dose by a factor of 8 for CBCT, considering the voted strongly discriminated low contrast tissues.

  5. The use of kernel density estimators in breakthrough curve reconstruction and advantages in risk analysis

    NASA Astrophysics Data System (ADS)

    Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.

    2014-12-01

    Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothening in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given a relatively few number of particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (102-108) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 102. With the KDE, several orders of magnitude less np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of peak of averaged-BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥105, but vary between the ANA and PT methods, when np is lower. For fewer np, the ANA solution provides a lower error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data is scarce.

  6. 134Cs emission probabilities determination by gamma spectrometry

    NASA Astrophysics Data System (ADS)

    de Almeida, M. C. M.; Poledna, R.; Delgado, J. U.; Silva, R. L.; Araujo, M. T. F.; da Silva, C. J.

    2018-03-01

    The National Laboratory for Ionizing Radiation Metrology (LNMRI/IRD/CNEN) of Rio de Janeiro performed primary and secondary standardization of different radionuclides reaching satisfactory uncertainties. A solution of 134Cs radionuclide was purchased from commercial supplier to emission probabilities determination of some of its energies. 134Cs is a beta gamma emitter with 754 days of half-life. This radionuclide is used as standard in environmental, water and food control. It is also important to germanium detector calibration. The gamma emission probabilities (Pγ) were determined mainly for some energies of the 134Cs by efficiency curve method and the Pγ absolute uncertainties obtained were below 1% (k=1).

  7. The Short- and Long-Run Marginal Cost Curve: A Pedagogical Note.

    ERIC Educational Resources Information Center

    Sexton, Robert L.; And Others

    1993-01-01

    Contends that the standard description of the relationship between the long-run marginal cost curve and the short-run marginal cost curve is often misleading and imprecise. Asserts that a sampling of college-level textbooks confirms this confusion. Provides a definition and instructional strategy that can be used to promote student understanding…

  8. Electrothermal atomic absorption spectrometric determination of copper in nickel-base alloys with various chemical modifiers*1

    NASA Astrophysics Data System (ADS)

    Tsai, Suh-Jen Jane; Shiue, Chia-Chann; Chang, Shiow-Ing

    1997-07-01

    The analytical characteristics of copper in nickel-base alloys have been investigated with electrothermal atomic absorption spectrometry. Deuterium background correction was employed. The effects of various chemical modifiers on the analysis of copper were investigated. Organic modifiers which included 2-(5-bromo-2-pyridylazo)-5-(diethylamino-phenol) (Br-PADAP), ammonium citrate, 1-(2-pyridylazo)-naphthol, 4-(2-pyridylazo)resorcinol, ethylenediaminetetraacetic acid and Triton X-100 were studied. Inorganic modifiers palladium nitrate, magnesium nitrate, aluminum chloride, ammonium dihydrogen phosphate, hydrogen peroxide and potassium nitrate were also applied in this work. In addition, zirconium hydroxide and ammonium hydroxide precipitation methods have also been studied. Interference effects were effectively reduced with Br-PADAP modifier. Aqueous standards were used to construct the calibration curves. The detection limit was 1.9 pg. Standard reference materials of nickel-base alloys were used to evaluate the accuracy of the proposed method. The copper contents determined with the proposed method agreed closely with the certified values of the reference materials. The recoveries were within the range 90-100% with relative standard deviation of less than 10%. Good precision was obtained.

  9. A novel strategy with standardized reference extract qualification and single compound quantitative evaluation for quality control of Panax notoginseng used as a functional food.

    PubMed

    Li, S P; Qiao, C F; Chen, Y W; Zhao, J; Cui, X M; Zhang, Q W; Liu, X M; Hu, D J

    2013-10-25

    Root of Panax notoginseng (Burk.) F.H. Chen (Sanqi in Chinese) is one of traditional Chinese medicines (TCMs) based functional food. Saponins are the major bioactive components. The shortage of reference compounds or chemical standards is one of the main bottlenecks for quality control of TCMs. A novel strategy, i.e. standardized reference extract based qualification and single calibrated components directly quantitative estimation of multiple analytes, was proposed to easily and effectively control the quality of natural functional foods such as Sanqi. The feasibility and credibility of this methodology were also assessed with a developed fast HPLC method. Five saponins, including ginsenoside Rg1, Re, Rb1, Rd and notoginsenoside R1 were rapidly separated using a conventional HPLC in 20 min. The quantification method was also compared with individual calibration curve method. The strategy is feasible and credible, which is easily and effectively adapted for improving the quality control of natural functional foods such as Sanqi. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. A curve-fitting approach to estimate the arterial plasma input function for the assessment of glucose metabolic rate and response to treatment.

    PubMed

    Vriens, Dennis; de Geus-Oei, Lioe-Fee; Oyen, Wim J G; Visser, Eric P

    2009-12-01

    For the quantification of dynamic (18)F-FDG PET studies, the arterial plasma time-activity concentration curve (APTAC) needs to be available. This can be obtained using serial sampling of arterial blood or an image-derived input function (IDIF). Arterial sampling is invasive and often not feasible in practice; IDIFs are biased because of partial-volume effects and cannot be used when no large arterial blood pool is in the field of view. We propose a mathematic function, consisting of an initial linear rising activity concentration followed by a triexponential decay, to describe the APTAC. This function was fitted to 80 oncologic patients and verified for 40 different oncologic patients by area-under-the-curve (AUC) comparison, Patlak glucose metabolic rate (MR(glc)) estimation, and therapy response monitoring (Delta MR(glc)). The proposed function was compared with the gold standard (serial arterial sampling) and the IDIF. To determine the free parameters of the function, plasma time-activity curves based on arterial samples in 80 patients were fitted after normalization for administered activity (AA) and initial distribution volume (iDV) of (18)F-FDG. The medians of these free parameters were used for the model. In 40 other patients (20 baseline and 20 follow-up dynamic (18)F-FDG PET scans), this model was validated. The population-based curve, individually calibrated by AA and iDV (APTAC(AA/iDV)), by 1 late arterial sample (APTAC(1 sample)), and by the individual IDIF (APTAC(IDIF)), was compared with the gold standard of serial arterial sampling (APTAC(sampled)) using the AUC. Additionally, these 3 methods of APTAC determination were evaluated with Patlak MR(glc) estimation and with Delta MR(glc) for therapy effects using serial sampling as the gold standard. Excellent individual fits to the function were derived with significantly different decay constants (P < 0.001). Correlations between AUC from APTAC(AA/iDV), APTAC(1 sample), and APTAC(IDIF) with the gold standard (APTAC(sampled)) were 0.880, 0.994, and 0.856, respectively. For MR(glc), these correlations were 0.963, 0.994, and 0.966, respectively. In response monitoring, these correlations were 0.947, 0.982, and 0.949, respectively. Additional scaling by 1 late arterial sample showed a significant improvement (P < 0.001). The fitted input function calibrated for AA and iDV performed similarly to IDIF. Performance improved significantly using 1 late arterial sample. The proposed model can be used when an IDIF is not available or when serial arterial sampling is not feasible.

  11. Optimizing Reservoir Operation to Adapt to the Climate Change

    NASA Astrophysics Data System (ADS)

    Madadgar, S.; Jung, I.; Moradkhani, H.

    2010-12-01

    Climate change and upcoming variation in flood timing necessitates the adaptation of current rule curves developed for operation of water reservoirs as to reduce the potential damage from either flood or draught events. This study attempts to optimize the current rule curves of Cougar Dam on McKenzie River in Oregon addressing some possible climate conditions in 21th century. The objective is to minimize the failure of operation to meet either designated demands or flood limit at a downstream checkpoint. A simulation/optimization model including the standard operation policy and a global optimization method, tunes the current rule curve upon 8 GCMs and 2 greenhouse gases emission scenarios. The Precipitation Runoff Modeling System (PRMS) is used as the hydrology model to project the streamflow for the period of 2000-2100 using downscaled precipitation and temperature forcing from 8 GCMs and two emission scenarios. An ensemble of rule curves, each associated with an individual scenario, is obtained by optimizing the reservoir operation. The simulation of reservoir operation, for all the scenarios and the expected value of the ensemble, is conducted and performance assessment using statistical indices including reliability, resilience, vulnerability and sustainability is made.

  12. Gas/oil capillary pressure at chalk at elevated pressures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christoffersen, K.R.; Whitson, C.H.

    1995-09-01

    Accurate capillary pressure curves are essential for studying the recovery of oil by gas injection in naturally fractured chalk reservoirs. A simple and fast method to determine high-pressure drainage capillary pressure curves has been developed. The effect of gas/oil interfacial tension (IFT) on the capillary pressure of chalk cores has been determined for a methane/n-pentane system. Measurements on a 5-md outcrop chalk core were made at pressures of 70, 105, and 130 bar, with corresponding IFT`s of 6.3, 3.2, and 1.5 mN/m. The results were both accurate and reproducible. The measured capillary pressure curves were not a linear function ofmore » IFT when compared with low-pressure centrifuge data. Measured capillary pressures were considerably lower than IFT-scaled centrifuge data. It appears that the deviation starts at an IFT of about 5 mN/m. According to the results of this study, the recovery of oil by gravity drainage in naturally fractured chalk reservoirs may be significantly underestimated if standard laboratory capillary pressure curves are scaled by IFT only. However, general conclusions cannot be made on the basis on only this series of experiments on one chalk core.« less

  13. Monitoring uterine activity during labor: a comparison of 3 methods.

    PubMed

    Euliano, Tammy Y; Nguyen, Minh Tam; Darmanjian, Shalom; McGorray, Susan P; Euliano, Neil; Onkala, Allison; Gregg, Anthony R

    2013-01-01

    Tocodynamometry (Toco; strain gauge technology) provides contraction frequency and approximate duration of labor contractions but suffers frequent signal dropout, necessitating repositioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all 3 methods of contraction detection simultaneously in laboring women. Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG, and all 3 curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of 1 or more of the devices (n = 12) or inadequate data collection duration (n = 2). In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to the Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared with 0.69 ± 0.27 for Toco (P < .0001). In contrast to Toco, EHG was not significantly affected by obesity. Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable noninvasive alternative, regardless of body habitus. Copyright © 2013 Mosby, Inc. All rights reserved.

  14. Monitoring uterine activity during labor: a comparison of three methods

    PubMed Central

    EULIANO, Tammy Y.; NGUYEN, Minh Tam; DARMANJIAN, Shalom; MCGORRAY, Susan P.; EULIANO, Neil; ONKALA, Allison; GREGG, Anthony R.

    2012-01-01

    Objective Tocodynamometry (Toco—strain gauge technology) provides contraction frequency and approximate duration of labor contractions, but suffers frequent signal dropout necessitating re-positioning by a nurse, and may fail in obese patients. The alternative invasive intrauterine pressure catheter (IUPC) is more reliable and adds contraction pressure information, but requires ruptured membranes and introduces small risks of infection and abruption. Electrohysterography (EHG) reports the electrical activity of the uterus through electrodes placed on the maternal abdomen. This study compared all three methods of contraction detection simultaneously in laboring women. Study Design Upon consent, laboring women were monitored simultaneously with Toco, EHG, and IUPC. Contraction curves were generated in real-time for the EHG and all three curves were stored electronically. A contraction detection algorithm was used to compare frequency and timing between methods. Seventy-three subjects were enrolled in the study; 14 were excluded due to hardware failure of one or more of the devices (12) or inadequate data collection duration(2). Results In comparison with the gold-standard IUPC, EHG performed significantly better than Toco with regard to Contractions Consistency Index (CCI). The mean CCI for EHG was 0.88 ± 0.17 compared to 0.69 ± 0.27 for Toco (p<.0001). In contrast to Toco, EHG was not significantly affected by obesity. Conclusion Toco does not correlate well with the gold-standard IUPC and fails more frequently in obese patients. EHG provides a reliable non-invasive alternative regardless of body habitus. PMID:23122926

  15. SPECT bone scintigraphy for the assessment of condylar growth activity in mandibular asymmetry: is it accurate?

    PubMed

    Chan, B H; Leung, Y Y

    2018-04-01

    The comparison of serial radiographs and clinical photographs is considered the current accepted standard for the diagnosis of active condylar hyperplasia in patients with facial asymmetry. Single photon emission computed tomography (SPECT) has recently been proposed as an alternative method. SPECT can be interpreted using three reported methods absolute difference in uptake, uptake ratio, and relative uptake. SPECT findings were compared to those from serial comparisons of radiographs and clinical photographs taken at the time of SPECT and a year later; the sensitivities and specificities were determined. Two hundred patient scans were evaluated. Thirty-four patients showed active growth on serial growth assessment. On comparison with serial growth assessment, the sensitivity and specificity of the three methods ranged between 32.4% and 67.6%, and 36.1% and 78.3%, respectively. Analysis using receiver operating characteristic (ROC) curves revealed area under the curve (AUC) values of <0.58. The average age (mean±standard deviation) of patients with active growth was 18.6±2.8 years, and average growth in the anteroposterior, vertical, and transverse directions was 0.94±0.91mm, 0.88±0.86mm, and 1.4±0.66 mm, respectively. With such low sensitivity and specificity values, it is not justifiable to use SPECT in place of serial growth assessment for the determination of condylar growth status. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. Botulinum Neurotoxins: Qualitative and Quantitative Analysis Using the Mouse Phrenic Nerve Hemidiaphragm Assay (MPN).

    PubMed

    Bigalke, Hans; Rummel, Andreas

    2015-11-25

    The historical method for the detection of botulinum neurotoxin (BoNT) is represented by the mouse bioassay (MBA) measuring the animal survival rate. Since the endpoint of the MBA is the death of the mice due to paralysis of the respiratory muscle, an ex vivo animal replacement method, called mouse phrenic nerve (MPN) assay, employs the isolated N. phrenicus-hemidiaphragm tissue. Here, BoNT causes a dose-dependent characteristic decrease of the contraction amplitude of the indirectly stimulated muscle. Within the EQuATox BoNT proficiency 13 test samples were analysed using the MPN assay by serial dilution to a bath concentration resulting in a paralysis time within the range of calibration curves generated with BoNT/A, B and E standards, respectively. For serotype identification the diluted samples were pre-incubated with polyclonal anti-BoNT/A, B or E antitoxin or a combination of each. All 13 samples were qualitatively correctly identified thereby delivering superior results compared to single in vitro methods like LFA, ELISA and LC-MS/MS. Having characterized the BoNT serotype, the final bath concentrations were calculated using the calibration curves and then multiplied by the respective dilution factor to obtain the sample concentration. Depending on the source of the BoNT standards used, the quantitation of ten BoNT/A containing samples delivered a mean z-score of 7 and of three BoNT/B or BoNT/E containing samples z-scores <2, respectively.

  17. Extraction of features from ultrasound acoustic emissions: a tool to assess the hydraulic vulnerability of Norway spruce trunkwood?

    PubMed Central

    Rosner, Sabine; Klein, Andrea; Wimmer, Rupert; Karlsson, Bo

    2011-01-01

    Summary • The aim of this study was to assess the hydraulic vulnerability of Norway spruce (Picea abies) trunkwood by extraction of selected features of acoustic emissions (AEs) detected during dehydration of standard size samples. • The hydraulic method was used as the reference method to assess the hydraulic vulnerability of trunkwood of different cambial ages. Vulnerability curves were constructed by plotting the percentage loss of conductivity vs an overpressure of compressed air. • Differences in hydraulic vulnerability were very pronounced between juvenile and mature wood samples; therefore, useful AE features, such as peak amplitude, duration and relative energy, could be filtered out. The AE rates of signals clustered by amplitude and duration ranges and the AE energies differed greatly between juvenile and mature wood at identical relative water losses. • Vulnerability curves could be constructed by relating the cumulated amount of relative AE energy to the relative loss of water and to xylem tension. AE testing in combination with feature extraction offers a readily automated and easy to use alternative to the hydraulic method. PMID:16771986

  18. Analysis of serotonin concentrations in human milk by high-performance liquid chromatography with fluorescence detection.

    PubMed

    Chiba, Takeshi; Maeda, Tomoji; Tairabune, Tomohiko; Tomita, Takashi; Sanbe, Atsushi; Takeda, Rika; Kikuchi, Akihiko; Kudo, Kenzo

    2017-03-25

    Serotonin (5-hydroxytryptamine, 5-HT) plays an important role in milk volume homeostasis in the mammary gland during lactation; 5-HT in milk may also affect infant development. However, there are few reports on 5-HT concentrations in human breast milk. To address this issue, we developed a simple method based on high-performance liquid chromatography with fluorescence detection (HPLC-FD) for measuring 5-HT concentrations in human breast milk. Breast milk samples were provided by four healthy Japanese women. Calibration curves for 5-HT in each sample were prepared with the standard addition method between 5 and 1000 ng/ml, and all had correlation coefficients >0.999. The recovery of 5-HT was 96.1%-101.0%, with a coefficient of variation of 3.39%-8.62%. The range of 5-HT concentrations estimated from the calibration curves was 11.1-51.1 ng/ml. Thus, the HPLC-FD method described here can effectively extract 5-HT from human breast milk with high reproducibility. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Development and validation of an LC-UV method for the determination of sulfonamides in animal feeds.

    PubMed

    Kumar, P; Companyó, R

    2012-05-01

    A simple LC-UV method was developed for the determination of residues of eight sulfonamides (sulfachloropyridazine, sulfadiazine, sulfadimidine, sulfadoxine, sulfamethoxypyridazine, sulfaquinoxaline, sulfamethoxazole, and sulfadimethoxine) in six types of animal feed. C18, Oasis HLB, Plexa and Plexa PCX stationary phases were assessed for the clean-up step and the latter was chosen as it showed greater efficiency in the clean-up of interferences. Feed samples spiked with sulfonamides at 2 mg/kg were used to assess the trueness (recovery %) and precision of the method. Mean recovery values ranged from 47% to 66%, intra-day precision (RSD %) from 4% to 15% and inter-day precision (RSD %) from 7% to 18% in pig feed. Recoveries and intra-day precisions were also evaluated in rabbit, hen, cow, chicken and piglet feed matrices. Calibration curves with standards prepared in mobile phase and matrix-matched calibration curves were compared and the matrix effects were ascertained. The limits of detection and quantification in the feeds ranged from 74 to 265 µg/kg and from 265 to 868 µg/kg, respectively. Copyright © 2011 John Wiley & Sons, Ltd.

  20. Lunar soils grain size catalog

    NASA Technical Reports Server (NTRS)

    Graf, John C.

    1993-01-01

    This catalog compiles every available grain size distribution for Apollo surface soils, trench samples, cores, and Luna 24 soils. Original laboratory data are tabled, and cumulative weight distribution curves and histograms are plotted. Standard statistical parameters are calculated using the method of moments. Photos and location comments describe the sample environment and geological setting. This catalog can help researchers describe the geotechnical conditions and site variability of the lunar surface essential to the design of a lunar base.

  1. Computer-Based Image Analysis for Plus Disease Diagnosis in Retinopathy of Prematurity

    PubMed Central

    Wittenberg, Leah A.; Jonsson, Nina J.; Chan, RV Paul; Chiang, Michael F.

    2014-01-01

    Presence of plus disease in retinopathy of prematurity (ROP) is an important criterion for identifying treatment-requiring ROP. Plus disease is defined by a standard published photograph selected over 20 years ago by expert consensus. However, diagnosis of plus disease has been shown to be subjective and qualitative. Computer-based image analysis, using quantitative methods, has potential to improve the objectivity of plus disease diagnosis. The objective was to review the published literature involving computer-based image analysis for ROP diagnosis. The PubMed and Cochrane library databases were searched for the keywords “retinopathy of prematurity” AND “image analysis” AND/OR “plus disease.” Reference lists of retrieved articles were searched to identify additional relevant studies. All relevant English-language studies were reviewed. There are four main computer-based systems, ROPtool (AU ROC curve, plus tortuosity 0.95, plus dilation 0.87), RISA (AU ROC curve, arteriolar TI 0.71, venular diameter 0.82), Vessel Map (AU ROC curve, arteriolar dilation 0.75, venular dilation 0.96), and CAIAR (AU ROC curve, arteriole tortuosity 0.92, venular dilation 0.91), attempting to objectively analyze vessel tortuosity and dilation in plus disease in ROP. Some of them show promise for identification of plus disease using quantitative methods. This has potential to improve the diagnosis of plus disease, and may contribute to the management of ROP using both traditional binocular indirect ophthalmoscopy and image-based telemedicine approaches. PMID:21366159

  2. A Comparison of a Machine Learning Model with EuroSCORE II in Predicting Mortality after Elective Cardiac Surgery: A Decision Curve Analysis.

    PubMed

    Allyn, Jérôme; Allou, Nicolas; Augustin, Pascal; Philip, Ivan; Martinet, Olivier; Belghiti, Myriem; Provenchere, Sophie; Montravers, Philippe; Ferdynus, Cyril

    2017-01-01

    The benefits of cardiac surgery are sometimes difficult to predict and the decision to operate on a given individual is complex. Machine Learning and Decision Curve Analysis (DCA) are recent methods developed to create and evaluate prediction models. We conducted a retrospective cohort study using a prospective collected database from December 2005 to December 2012, from a cardiac surgical center at University Hospital. The different models of prediction of mortality in-hospital after elective cardiac surgery, including EuroSCORE II, a logistic regression model and a machine learning model, were compared by ROC and DCA. Of the 6,520 patients having elective cardiac surgery with cardiopulmonary bypass, 6.3% died. Mean age was 63.4 years old (standard deviation 14.4), and mean EuroSCORE II was 3.7 (4.8) %. The area under ROC curve (IC95%) for the machine learning model (0.795 (0.755-0.834)) was significantly higher than EuroSCORE II or the logistic regression model (respectively, 0.737 (0.691-0.783) and 0.742 (0.698-0.785), p < 0.0001). Decision Curve Analysis showed that the machine learning model, in this monocentric study, has a greater benefit whatever the probability threshold. According to ROC and DCA, machine learning model is more accurate in predicting mortality after elective cardiac surgery than EuroSCORE II. These results confirm the use of machine learning methods in the field of medical prediction.

  3. Development of a Quantitative Recombinase Polymerase Amplification Assay with an Internal Positive Control

    PubMed Central

    Richards-Kortum, Rebecca

    2015-01-01

    It was recently demonstrated that recombinase polymerase amplification (RPA), an isothermal amplification platform for pathogen detection, may be used to quantify DNA sample concentration using a standard curve. In this manuscript, a detailed protocol for developing and implementing a real-time quantitative recombinase polymerase amplification assay (qRPA assay) is provided. Using HIV-1 DNA quantification as an example, the assembly of real-time RPA reactions, the design of an internal positive control (IPC) sequence, and co-amplification of the IPC and target of interest are all described. Instructions and data processing scripts for the construction of a standard curve using data from multiple experiments are provided, which may be used to predict the concentration of unknown samples or assess the performance of the assay. Finally, an alternative method for collecting real-time fluorescence data with a microscope and a stage heater as a step towards developing a point-of-care qRPA assay is described. The protocol and scripts provided may be used for the development of a qRPA assay for any DNA target of interest. PMID:25867513

  4. Development of a quantitative recombinase polymerase amplification assay with an internal positive control.

    PubMed

    Crannell, Zachary A; Rohrman, Brittany; Richards-Kortum, Rebecca

    2015-03-30

    It was recently demonstrated that recombinase polymerase amplification (RPA), an isothermal amplification platform for pathogen detection, may be used to quantify DNA sample concentration using a standard curve. In this manuscript, a detailed protocol for developing and implementing a real-time quantitative recombinase polymerase amplification assay (qRPA assay) is provided. Using HIV-1 DNA quantification as an example, the assembly of real-time RPA reactions, the design of an internal positive control (IPC) sequence, and co-amplification of the IPC and target of interest are all described. Instructions and data processing scripts for the construction of a standard curve using data from multiple experiments are provided, which may be used to predict the concentration of unknown samples or assess the performance of the assay. Finally, an alternative method for collecting real-time fluorescence data with a microscope and a stage heater as a step towards developing a point-of-care qRPA assay is described. The protocol and scripts provided may be used for the development of a qRPA assay for any DNA target of interest.

  5. Sulfuric acid/hydrogen peroxide digestion and colorimetric a collaborative study.

    PubMed

    Christians, D K; Aspelund, T G; Brayton, S V; Roberts, L L

    1991-01-01

    Seven laboratories participated in a collaborative study of a method for determination of phosphorus in meat and meat products. Samples are digested in sulfuric acid and hydrogen peroxide; digestion is complete in approximately 10 min. Phosphorus is determined by colorimetric analysis of a dilute aliquot of the sample digest. The collaborators analyzed 3 sets of blind duplicate samples from each of 6 classes of meat (U.S. Department of Agriculture classifications): smoked ham, water-added ham, canned ham, pork sausage, cooked sausage, and hamburger. The calibration curve was linear over the range of standard solutions prepared (phosphorus levels from 0.05 to 1.00%); levels in the collaborative study samples ranged from 0.10 to 0.30%. Standard deviations for repeatability (sr) and reproducibility (SR) ranged from 0.004 to 0.012 and 0.007 to 0.014, respectively. Corresponding relative standard deviations (RSDr and RSDR, respectively) ranged from 1.70 to 7.28% and 3.50 to 9.87%. Six laboratories analyzed samples by both the proposed method and AOAC method 24.016 (14th Ed.). One laboratory reported results by the proposed method only. Statistical evaluations indicated no significant difference between the 2 methods. The method has been adopted official first action by AOAC.

  6. The panchromatic Hubble Andromeda treasury. VII. The steep mid-ultraviolet to near-infrared extinction curve in the central 200 pc of the M31 Bulge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Hui; Lauer, Tod R.; Olsen, Knut

    We measure the extinction curve in the central 200 pc of M31 at mid-ultraviolet to near-infrared wavelengths (from 1928 Å to 1.5 μm), using Swift/UVOT and Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3)/Advanced Camera for Surveys (ACS) observations in 13 bands. Taking advantage of the high angular resolution of the HST/WFC3 and ACS detectors, we develop a method to simultaneously determine the relative extinction and the fraction of obscured starlight for five dusty complexes located in the circumnuclear region. The extinction curves of these clumps (R{sub V} = 2.4-2.5) are steeper than the average Galactic one (R{sub V}more » = 3.1), but are similar to optical and near-infrared curves recently measured toward the Galactic bulge (R{sub V} ∼ 2.5). This similarity suggests that steep extinction curves may be common in the inner bulge of galaxies. In the ultraviolet, the extinction curves of these clumps are also unusual. We find that one dusty clump (size < 2 pc) exhibits a strong UV bump (extinction at 2175 Å), more than three standard deviation higher than that predicted by common models. Although the high stellar metallicity of the M31 bulge indicates that there are sufficient carbon and silicon to produce large dust grains, the grains may have been destroyed by supernova explosions or past activity of the central supermassive black hole, resulting in the observed steepened extinction curve.« less

  7. [Evaluation of vaporizers by anesthetic gas monitors corrected with a new method for preparation of calibration gases].

    PubMed

    Kurashiki, T

    1996-11-01

    For resolving the discrepancy of concentrations found among anesthetic gas monitors, the author proposed a new method using a vaporizer as a standard anesthetic gas generator for calibration. In this method, the carrier gas volume is measured by a mass flow meter (SEF-510 + FI-101) installed before the inlet of the vaporizer. The vaporized weight of volatile anesthetic agent is simultaneously measured by an electronic force balance (E12000S), on which the vaporizer is placed directly. The molar percent of the anesthetic is calculated using these data and is transformed into the volume percent. These gases discharging from the vaporizer are utilized for calibrating anesthetic gas monitors. These monitors are normalized by the linear equation describing the relationship between concentrations of calibration gases and readings of the anesthetic gas monitors. By using normalized monitors, flow rate-concentration performance curves of several anesthetic vaporizers were obtained. The author concludes that this method can serve as a standard in evaluating anesthetic vaporizers.

  8. Optimization of curved drift tubes for ultraviolet-ion mobility spectrometry

    NASA Astrophysics Data System (ADS)

    Ni, Kai; Ou, Guangli; Zhang, Xiaoguo; Yu, Zhou; Yu, Quan; Qian, Xiang; Wang, Xiaohao

    2015-08-01

    Ion mobility spectrometry (IMS) is a key trace detection technique for toxic pollutants and explosives in the atmosphere. Ultraviolet radiation photoionization source is widely used as an ionization source for IMS due to its advantages of high selectivity and non-radioactivity. However, UV-IMS bring problems that UV rays will be launched into the drift tube which will cause secondary ionization and lead to the photoelectric effect of the Faraday disk. So air is often used as working gas to reduce the effective distance of UV rays, but it will limit the application areas of UV-IMS. In this paper, we propose a new structure of curved drift tube, which can avoid abnormally incident UV rays. Furthermore, using curved drift tube may increase the length of drift tube and then improve the resolution of UV-IMS according to previous research. We studied the homogeneity of electric field in the curved drift tube, which determined the performance of UV-IMS. Numerical simulation of electric field in curved drift tube was conducted by SIMION in our study. In addition, modeling method and homogeneity standard for electric field were also presented. The influences of key parameters include radius of gyration, gap between electrode as well as inner diameter of curved drift tube, on the homogeneity of electric field were researched and some useful laws were summarized. Finally, an optimized curved drift tube is designed to achieve homogenous drift electric field. There is more than 98.75% of the region inside the curved drift tube where the fluctuation of the electric field strength along the radial direction is less than 0.2% of that along the axial direction.

  9. 7 CFR 43.105 - Operating characteristics (OC) curves.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 43.105 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.105 Operating characteristics (OC...

  10. Real-time PCR using SYBR Green for the detection of Shigella spp. in food and stool samples.

    PubMed

    Mokhtari, W; Nsaibia, S; Gharbi, A; Aouni, M

    2013-02-01

    Shigella spp are exquisitely fastidious Gram negative organisms that frequently get missed in the detection by traditional culture methods. For this reason, this work has adapted a classical PCR for detection of Shigella in food and stool specimens to real-time PCR using the SYBR Green format. This method follows a melting curve analysis to be more rapid and provide both qualitative and quantitative data about the targeted pathogen. A total of 117 stool samples with diarrhea and 102 food samples were analyzed in Public Health Regional Laboratory of Nabeul by traditional culture methods and real-time PCR. To validate the real-time PCR assay, an experiment was conducted with both spiked and naturally contaminated stool samples. All Shigella strains tested were ipaH positive and all non-Shigella strains yielded no amplification products. The melting temperature (T(m) = 81.5 ± 0.5 °C) was consistently specific for the amplicon. Correlation coefficients of standard curves constructed using the quantification cycle (C(q)) versus copy numbers of Shigella showed good linearity (R² = 0.995; slope = 2.952) and the minimum level of detection was 1.5 × 10³ CFU/g feces. All food samples analyzed were negative for Shigella by standard culture methods, whereas ipaH was detected in 8.8% culture negative food products. Moreover, the ipaH specific PCR system increased the detection rate over that by culture alone from 1.7% to 11.1% among patients with diarrhea. The data presented here shows that the SYBR Green I was suitable for use in the real-time PCR assay, which provided a specific, sensitive and efficient method for the detection and quantification of Shigella spp in food and stool samples. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. On analyzing free-response data on location level

    NASA Astrophysics Data System (ADS)

    Bandos, Andriy I.; Obuchowski, Nancy A.

    2017-03-01

    Free-response ROC (FROC) data are typically collected when primary question of interest is focused on the proportions of the correct detection-localization of known targets and frequencies of false positive responses, which can be multiple per subject (image). These studies are particularly relevant for CAD and related applications. The fundamental tool of the location-level FROC analysis is the FROC curve. Although there are many methods of FROC analysis, as we describe in this work, some of the standard and popular approaches, while important, are not suitable for analyzing specifically the location-level FROC performance as summarized by the FROC curve. Analysis of the FROC curve, on the other hand, might not be straightforward. Recently we developed an approach for the location-level analysis of the FROC data using the well-known tools for clustered ROC analysis. In the current work, based on previously developed concepts, and using specific examples, we demonstrate the key reasons why specifically location-level FROC performance cannot be fully addressed by the common approaches as well as illustrate the proposed solution. Specifically, we consider the two most salient FROC approaches, namely JAFROC and the area under the exponentially transformed FROC curve (AFE) and show that clearly superior FROC curves can have lower values for these indices. We describe the specific features that make these approaches inconsistent with FROC curves. This work illustrates some caveats for using the common approaches for location-level FROC analysis and provides guidelines for the appropriate assessment or comparison of FROC systems.

  12. Method development towards qualitative and semi-quantitative analysis of multiple pesticides from food surfaces and extracts by desorption electrospray ionization mass spectrometry as a preselective tool for food control.

    PubMed

    Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine

    2017-03-01

    Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. Best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentration in all samples was at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or, at least, semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical Abstract Multiple pesticide residues were detected and quantified in-situ from an authentic set of food items and extracts in a proof of principle study.

  13. Quantitative determination of galantamine in human plasma by sensitive liquid chromatography-tandem mass spectrometry using loratadine as an internal standard.

    PubMed

    Nirogi, Ramakrishna V S; Kandikere, Vishwottam N; Mudigonda, Koteshwara; Maurya, Santosh

    2007-02-01

    A simple, rapid, sensitive, and selective liquid chromatography-tandem mass spectrometry method is developed and validated for the quantitation of galantamine, an acetylcholinesterase inhibitor in human plasma, using a commercially available compound, loratadine, as the internal standard. Following liquid-liquid extraction, the analytes are separated using an isocratic mobile phase on a reverse-phase C18 column and analyzed by mass spectrometry in the multiple reaction monitoring mode using the respective (M+H)+ ions, m/z 288 to 213 for galantamine and m/z 383 and 337 for the internal standard. The assay exhibit a linear dynamic range of 0.5-100 ng/mL for galantamine in human plasma. The lower limit of quantitation is 0.5 ng/mL, with a relative standard deviation of less than 8%. Acceptable precision and accuracy are obtained for concentrations over the standard curve range. A run time of 2.5 min for each sample makes it possible to analyze more than 400 human plasma samples per day. The validated method is successfully used to analyze human plasma samples for application in pharmacokinetic, bioavailability, or bioequivalence studies.

  14. Design of air-gapped magnetic-core inductors for superimposed direct and alternating currents

    NASA Technical Reports Server (NTRS)

    Ohri, A. K.; Wilson, T. G.; Owen, H. A., Jr.

    1976-01-01

    Using data on standard magnetic-material properties and standard core sizes for air-gap-type cores, an algorithm designed for a computer solution is developed which optimally determines the air-gap length and locates the quiescent point on the normal magnetization curve so as to yield an inductor design with the minimum number of turns for a given ac voltage and frequency and with a given dc bias current superimposed in the same winding. Magnetic-material data used in the design are the normal magnetization curve and a family of incremental permeability curves. A second procedure, which requires a simpler set of calculations, starts from an assigned quiescent point on the normal magnetization curve and first screens candidate core sizes for suitability, then determines the required turns and air-gap length.

  15. Determination of Ethanol in Kombucha Products: Single-Laboratory Validation, First Action 2016.12.

    PubMed

    Ebersole, Blake; Liu, Ying; Schmidt, Rich; Eckert, Matt; Brown, Paula N

    2017-05-01

    Kombucha is a fermented nonalcoholic beverage that has drawn government attention due to the possible presence of excess ethanol (≥0.5% alcohol by volume; ABV). A validated method that provides better precision and accuracy for measuring ethanol levels in kombucha is urgently needed by the kombucha industry. The current study validated a method for determining ethanol content in commercial kombucha products. The ethanol content in kombucha was measured using headspace GC with flame ionization detection. An ethanol standard curve ranging from 0.05 to 5.09% ABV was used, with correlation coefficients greater than 99.9%. The method detection limit was 0.003% ABV and the LOQ was 0.01% ABV. The RSDr ranged from 1.62 to 2.21% and the Horwitz ratio ranged from 0.4 to 0.6. The average accuracy of the method was 98.2%. This method was validated following the guidelines for single-laboratory validation by AOAC INTERNATIONAL and meets the requirements set by AOAC SMPR 2016.001, "Standard Method Performance Requirements for Determination of Ethanol in Kombucha."

  16. Simple solution for a complex problem: proanthocyanidins, galloyl glucoses and ellagitannins fit on a single calibration curve in high performance-gel permeation chromatography.

    PubMed

    Stringano, Elisabetta; Gea, An; Salminen, Juha-Pekka; Mueller-Harvey, Irene

    2011-10-28

    This study was undertaken to explore gel permeation chromatography (GPC) for estimating molecular weights of proanthocyanidin fractions isolated from sainfoin (Onobrychis viciifolia). The results were compared with data obtained by thiolytic degradation of the same fractions. Polystyrene, polyethylene glycol and polymethyl methacrylate standards were not suitable for estimating the molecular weights of underivatized proanthocyanidins. Therefore, a novel HPLC-GPC method was developed based on two serially connected PolarGel-L columns using DMF that contained 5% water, 1% acetic acid and 0.15 M LiBr at 0.7 ml/min and 50 °C. This yielded a single calibration curve for galloyl glucoses (trigalloyl glucose, pentagalloyl glucose), ellagitannins (pedunculagin, vescalagin, punicalagin, oenothein B, gemin A), proanthocyanidins (procyanidin B2, cinnamtannin B1), and several other polyphenols (catechin, epicatechin gallate, epicallocatechin gallate, amentoflavone). These GPC predicted molecular weights represented a considerable advance over previously reported HPLC-GPC methods for underivatized proanthocyanidins. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Ultrafast quantitation of six quinolones in water samples by second-order capillary electrophoresis data modeling with multivariate curve resolution-alternating least squares.

    PubMed

    Alcaráz, Mirta R; Vera-Candioti, Luciana; Culzoni, María J; Goicoechea, Héctor C

    2014-04-01

    This paper presents the development of a capillary electrophoresis method with diode array detector coupled to multivariate curve resolution-alternating least squares (MCR-ALS) to conduct the resolution and quantitation of a mixture of six quinolones in the presence of several unexpected components. Overlapping of time profiles between analytes and water matrix interferences were mathematically solved by data modeling with the well-known MCR-ALS algorithm. With the aim of overcoming the drawback originated by two compounds with similar spectra, a special strategy was implemented to model the complete electropherogram instead of dividing the data in the region as usually performed in previous works. The method was first applied to quantitate analytes in standard mixtures which were randomly prepared in ultrapure water. Then, tap water samples spiked with several interferences were analyzed. Recoveries between 76.7 and 125 % and limits of detection between 5 and 18 μg L(-1) were achieved.

  18. Determination of Flavonoids in Wine by High Performance Liquid Chromatography

    NASA Astrophysics Data System (ADS)

    da Queija, Celeste; Queirós, M. A.; Rodrigues, Ligia M.

    2001-02-01

    The experiment presented is an application of HPLC to the analysis of flavonoids in wines, designed for students of instrumental methods. It is done in two successive 4-hour laboratory sessions. While the hydrolysis of the wines is in progress, the students prepare the calibration curves with standard solutions of flavonoids and calculate the regression lines and correlation coefficients. During the second session they analyze the hydrolyzed wine samples and calculate the concentrations of the flavonoids using the calibration curves obtained earlier. This laboratory work is very attractive to students because they deal with a common daily product whose components are reported to have preventive and therapeutic effects. Furthermore, students can execute preparative work and apply a more elaborate technique that is nowadays an indispensable tool in instrumental analysis.

  19. Influence of body weight and type of chow on the sensitivity of rats to the behavioral effects of the direct-acting dopamine receptor agonist quinpirole

    PubMed Central

    Baladi, Michelle G; Newman, Amy H; France, Charles P

    2013-01-01

    Rationale Amount and type of food can alter dopamine systems and sensitivity to drugs acting on those systems. Objectives This study examined whether changes in body weight, food type, or both body weight and food type contribute to these effects. Methods Rats had free or restricted access (increasing, decreasing, or maintaining body weight) to standard (5.7% fat) or high fat (34.3%) chow. Results In rats gaining weight with restricted or free access to high fat chow, both limbs of the quinpirole yawning dose-response curve (0.0032–0.32 mg/kg) shifted leftward compared with rats eating standard chow. Restricting access to standard or high fat chow (maintaining or decreasing body weight) decreased or eliminated quinpirole-induced yawning; within one week of resuming free feeding, sensitivity to quinpirole was restored, although the descending limb of the dose-response curve was shifted leftward in rats eating high fat chow. These are not likely pharmacokinetic differences because quinpirole-induced hypothermia was not different among groups. PG01037 and L-741,626 antagonized the ascending and descending limbs of the quinpirole dose-response curve in rats eating high fat chow, indicating D3 and D2 receptor mediation, respectively. Rats eating high fat chow also developed insulin resistance. Conclusions These results show that amount and type of chow alter sensitivity to a direct-acting dopamine receptor agonist with the impact of each factor depending on whether body weight increases, decreases, or is maintained. These data demonstrate that feeding conditions, perhaps related to insulin and insulin sensitivity, profoundly impact the actions of drugs acting on dopamine systems. PMID:21544521

  20. SN 2012ec: mass of the progenitor from PESSTO follow-up of the photospheric phase

    NASA Astrophysics Data System (ADS)

    Barbarino, C.; Dall'Ora, M.; Botticella, M. T.; Della Valle, M.; Zampieri, L.; Maund, J. R.; Pumo, M. L.; Jerkstrand, A.; Benetti, S.; Elias-Rosa, N.; Fraser, M.; Gal-Yam, A.; Hamuy, M.; Inserra, C.; Knapic, C.; LaCluyze, A. P.; Molinaro, M.; Ochner, P.; Pastorello, A.; Pignata, G.; Reichart, D. E.; Ries, C.; Riffeser, A.; Schmidt, B.; Schmidt, M.; Smareglia, R.; Smartt, S. J.; Smith, K.; Sollerman, J.; Sullivan, M.; Tomasella, L.; Turatto, M.; Valenti, S.; Yaron, O.; Young, D.

    2015-04-01

    We present the results of a photometric and spectroscopic monitoring campaign of SN 2012ec, which exploded in the spiral galaxy NGC 1084, during the photospheric phase. The photometric light curve exhibits a plateau with luminosity L = 0.9 × 1042 erg s-1 and duration ˜90 d, which is somewhat shorter than standard Type II-P supernovae (SNe). We estimate the nickel mass M(56Ni) = 0.040 ± 0.015 M⊙ from the luminosity at the beginning of the radioactive tail of the light curve. The explosion parameters of SN 2012ec were estimated from the comparison of the bolometric light curve and the observed temperature and velocity evolution of the ejecta with predictions from hydrodynamical models. We derived an envelope mass of 12.6 M⊙, an initial progenitor radius of 1.6 × 1013 cm and an explosion energy of 1.2 foe. These estimates agree with an independent study of the progenitor star identified in pre-explosion images, for which an initial mass of M = 14-22 M⊙ was determined. We have applied the same analysis to two other Type II-P SNe (SNe 2012aw and 2012A), and carried out a comparison with the properties of SN 2012ec derived in this paper. We find a reasonable agreement between the masses of the progenitors obtained from pre-explosion images and masses derived from hydrodynamical models. We estimate the distance to SN 2012ec with the standardized candle method (SCM) and compare it with other estimates based on other primary and secondary indicators. SNe 2012A, 2012aw and 2012ec all follow the standard relations for the SCM for the use of Type II-P SNe as distance indicators.

  1. Satellite-derived land covers for runoff estimation using SCS-CN method in Chen-You-Lan Watershed, Taiwan

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Yan; Lin, Chao-Yuan

    2017-04-01

    The Soil Conservation Service Curve Number (SCS-CN) method, which was originally developed by the USDA Natural Resources Conservation Service, is widely used to estimate direct runoff volume from rainfall. The runoff Curve Number (CN) parameter is based on the hydrologic soil group and land use factors. In Taiwan, the national land use maps were interpreted from aerial photos in 1995 and 2008. Rapid updating of post-disaster land use map is limited due to the high cost of production, so the classification of satellite images is the alternative method to obtain the land use map. In this study, Normalized Difference Vegetation Index (NDVI) in Chen-You-Lan Watershed was derived from dry and wet season of Landsat imageries during 2003 - 2008. Land covers were interpreted from mean value and standard deviation of NDVI and were categorized into 4 groups i.e. forest, grassland, agriculture and bare land. Then, the runoff volume of typhoon events during 2005 - 2009 were estimated using SCS-CN method and verified with the measured runoff data. The result showed that the model efficiency coefficient is 90.77%. Therefore, estimating runoff by using the land cover map classified from satellite images is practicable.

  2. Modification of the degree-day formula for diurnal meltwater generation and refreezing

    NASA Astrophysics Data System (ADS)

    Žaknić-Ćatović, Ana; Howard, Ken W. F.; Ćatović, Zlatko

    2018-02-01

    The standard degree-day, temperature-index approach to calculating snowmelt generation and refreezing (the SDD method) is convenient and popularly used but seriously miscalculates the volumes of water that change phase on days when temperatures fluctuate either side of the freezing point. Additionally, the SDD method does not provide any estimate of the duration of daily melting and refreezing events. A modified version of the standard formula is introduced (the MDD method) that overcomes such problems by removing dependence on a single temperature index (the average daily temperature estimated over a 24-h period beginning at midnight) and instead transfers reliance onto daily air temperature extremes (maximum and minimum temperatures) at known times of occurrence. In this way, the modified formula retains the simplicity of the standard approach while targeting those segments of the diurnal air temperature curve that directly relate to periods of melting and freezing. Newly introduced temperature and time degree-day parameters allow the duration of melting and refreezing events to be estimated. The MDD method was evaluated for two sites in the snow-belt region of Canada where the availability of hourly records of daily temperature allowed the required MDD input parameters to be calculated reliably and thus used for comparative purposes. During testing, the MDD input parameters were obtained from daily temperature extremes and their times of occurrence, using two alternative approaches to synthetic air temperature curve generation, one linear, the other trigonometric. Very good agreement was obtained in both cases and confirms the value of the MDD approach. However, there is no significant benefit to be gained by using air temperature approximating functions more complicated than the linear method for supplementing the missing continuous air temperature measurements. Finally, the MDD approach is not seen as a replacement for the regular SDD method, so much as tool that can be applied when the SDD methodology is likely to become unreliable. This is best achieved by using a hybrid SDD-MDD algorithm that invokes the MDD approach only when the necessary conditions arise.

  3. Guiding curve based on the normal breathing as monitored by thermocouple for regular breathing.

    PubMed

    Lim, Sangwook; Park, Sung Ho; Ahn, Seung Do; Suh, Yelin; Shin, Seong Soo; Lee, Sang-wook; Kim, Jong Hoon; Choi, Eun Kyoung; Yi, Byong Yong; Kwon, Soo Il; Kim, Sookil; Jeung, Tae Sig

    2007-11-01

    Adapting radiation fields to a moving target requires information continuously on the location of internal target by detecting it directly or indirectly. The aim of this study is to make the breathing regular effectively with minimizing stress to the patient. A system for regulating patient's breath consists of a respiratory monitoring mask (ReMM), a thermocouple module, a screen, inner earphones, and a personal computer. A ReMM with thermocouple was developed previously to measure the patient's respiration. A software was written in LabView 7.0 (National Instruments, TX), which acquires respiration signal and displays its pattern. Two curves are displayed on the screen: One is a curve indicating the patient's current breathing pattern; the other is a guiding curve, which is iterated with one period of the patient's normal breathing curve. The guiding curves were acquired for each volunteer before they breathed with guidance. Ten volunteers participated in this study to evaluate this system. A cycle of the representative guiding curve was acquired by monitoring each volunteer's free breathing with ReMM and was then generated iteratively. The regularity was compared between a free breath curve and a guided breath curve by measuring standard deviations of amplitudes and periods of two groups of breathing. When the breathing was guided, the standard deviation of amplitudes and periods on average were reduced from 0.0029 to 0.00139 (arbitrary units) and from 0.359 s to 0.202 s, respectively. And the correlation coefficients between breathing curves and guiding curves were greater than 0.99 for all volunteers. The regularity was improved statistically when the guiding curve was used.

  4. The Effects of Autocorrelation on the Curve-of-Factors Growth Model

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Beretvas, S. Natasha; Pituch, Keenan A.

    2011-01-01

    This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a…

  5. Variation of curve number with storm depth

    NASA Astrophysics Data System (ADS)

    Banasik, K.; Hejduk, L.

    2012-04-01

    The NRCS Curve Number (known also as SCS-CN) method is well known as a tool in predicting flood runoff depth from small ungauged catchment. The traditional way of determination the CNs, based on soil characteristics, land use and hydrological conditions, seemed to have tendency to overpredict the floods in some cases. Over 30 year rainfall-runoff data, collected in two small (A=23.4 & 82.4 km2), lowland, agricultural catchments in Center of Poland (Banasik & Woodward 2010), were used to determine runoff Curve Number and to check a tendency of changing. The observed CN declines with increasing storm size, which according recent views of Hawkins (1993) could be classified as a standard response of watershed. The analysis concluded, that using CN value according to the procedure described in USDA-SCS Handbook one receives representative value for estimating storm runoff from high rainfall depths in the analyzes catchments. This has been confirmed by applying "asymptotic approach" for estimating the watershed curve number from the rainfall-runoff data. Furthermore, the analysis indicated that CN, estimated from mean retention parameter S of recorded events with rainfall depth higher than initial abstraction, is also approaching the theoretical CN. The observed CN, ranging from 59.8 to 97.1 and from 52.3 to 95.5, in the smaller and the larger catchment respectively, declines with increasing storm size, which has been classified as a standard response of watershed. The investigation demonstrated also changeability of the CN during a year, with much lower values during the vegetation season. Banasik K. & D.E. Woodward (2010). "Empirical determination of curve number for a small agricultural watrshed in Poland". 2nd Joint Federal Interagency Conference, Las Vegas, NV, June 27 - July 1, 2010 (http://acwi.gov/sos/pubs/2ndJFIC/Contents/10E_Banasik_ 28_02_10. pdf). Hawkins R. H. (1993). "Asymptotic determination of curve numbers from data". Journal of Irrigation and Drainage Division. American Society of Civil Engineers, 119(2). pp. 334-345. ACKNOWLEDGMENTS The investigation described in the paper is part of the research project no. N N305 396238 founded by PL-Ministry of Science and Higher Education. The support provided by this organization is gratefully acknowledged.

  6. Assessing the performance of handheld glucose testing for critical care.

    PubMed

    Kost, Gerald J; Tran, Nam K; Louie, Richard F; Gentile, Nicole L; Abad, Victor J

    2008-12-01

    We assessed the performance of a point-of-care (POC) glucose meter system (GMS) with multitasking test strip by using the locally-smoothed (LS) median absolute difference (MAD) curve method in conjunction with a modified Bland-Altman difference plot and superimposed International Organization for Standardization (ISO) 15197 tolerance bands. We analyzed performance for tight glycemic control (TGC). A modified glucose oxidase enzyme with a multilayer-gold, multielectrode, four-well test strip (StatStriptrade mark, NOVA Biomedical, Waltham, MA) was used. There was no test strip calibration code. Pragmatic comparison was done of GMS results versus paired plasma glucose measurements from chemistry analyzers in clinical laboratories. Venous samples (n = 1,703) were analyzed at 35 hospitals that used 20 types of chemistry analyzers. Erroneous results were identified using the Bland-Altman plot and ISO 15197 criteria. Discrepant values were analyzed for the TGC interval of 80-110 mg/dL. The GMS met ISO 15197 guidelines; 98.6% (410 of 416) of observations were within tolerance for glucose <75 mg/dL, and for > or =75 mg/dL, 100% were within tolerance. Paired differences (handheld minus reference) averaged -2.2 (SD 9.8) mg/dL; the median was -1 (range, -96 to 45) mg/dL. LS MAD curve analysis revealed satisfactory performance below 186 mg/dL; above 186 mg/dL, the recommended error tolerance limit (5 mg/dL) was not met. No discrepant values appeared. All points fell in Clarke Error Grid zone A. Linear regression showed y = 1.018x - 0.716 mg/dL, and r2 = 0.995. LS MAD curves draw on human ability to discriminate performance visually. LS MAD curve and ISO 15197 performance were acceptable for TGC. POC and reference glucose calibration should be harmonized and standardized.

  7. Suturing training in Augmented Reality: gaining proficiency in suturing skills faster.

    PubMed

    Botden, S M B I; de Hingh, I H J T; Jakimowicz, J J

    2009-09-01

    Providing informative feedback and setting goals tends to motivate trainees to practice more extensively. Augmented Reality simulators retain the benefit of realistic haptic feedback and additionally generate objective assessment and informative feedback during the training. This study researched the performance curve of the adapted suturing module on the ProMIS Augmented Reality simulator. Eighteen novice participants were pretrained on the MIST-VR to become acquainted with laparoscopy. Subsequently, they practiced 16 knots on the suturing module, of which the assessment scores were recorded to evaluate the gain in laparoscopic suturing skills. The scoring of the assessment method was calculated from the "time spent in the correct area" during the knot tying and the quality of the knot. Both the baseline knot and the knot at the top of the performance curve were assessed by two independent objective observers, by means of a standardized evaluation form, to objectify the gain in suturing skills. There was a statistically significant difference between the scores of the second knot (mean 72.59, standard deviation (SD) 16.28) and the top of the performance curve (mean 95.82, SD 3.05; p < 0.001, paired t-test). The scoring of the objective observers also differed significantly (mean 11.83 and 22.11, respectively; SD 3.37 and 3.89, respectively; p < 0.001) (interobserver reliability Cronbach's alpha = 0.96). The median amount of repetitions to reach the top of the performance curve was eight, which also showed significant differences between both the assessment score (mean 88.14, SD 13.53, p < 0.001) and scoring of the objective observers of the second knot (mean 20.51, SD 4.14; p < 0.001). This adapted suturing module on the ProMIS Augmented Reality laparoscopic simulator is a potent tool for gaining laparoscopic suturing skills.

  8. Statistical tools for transgene copy number estimation based on real-time PCR.

    PubMed

    Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal

    2007-11-01

    As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, the real-time PCR based transgene copy number estimation tends to be ambiguous and subjective stemming from the lack of proper statistical analysis and data quality control to render a reliable estimation of copy number with a prediction value. Despite the recent progresses in statistical analysis of real-time PCR, few publications have integrated these advancements in real-time PCR based transgene copy number determination. Three experimental designs and four data quality control integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially-diluted templates. The Ct number from a control transgenic event and putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two group T-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of transgene was compared with that of internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with reference gene without a standard curve, but rather, is based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches of amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods allow the real-time PCR-based transgene copy number estimation to be more reliable and precise with a proper statistical estimation. Proper confidence intervals are necessary for unambiguous prediction of trangene copy number. The four different statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied for other real-time PCR-based quantification assays including transfection efficiency analysis and pathogen quantification.

  9. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curvemore » of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.« less

  10. Photometric theory for wide-angle phenomena

    NASA Technical Reports Server (NTRS)

    Usher, Peter D.

    1990-01-01

    An examination is made of the problem posed by wide-angle photographic photometry, in order to extract a photometric-morphological history of Comet P/Halley. Photometric solutions are presently achieved over wide angles through a generalization of an assumption-free moment-sum method. Standard stars in the field allow a complete solution to be obtained for extinction, sky brightness, and the characteristic curve. After formulating Newton's method for the solution of the general nonlinear least-square problem, an implementation is undertaken for a canonical data set. Attention is given to the problem of random and systematic photometric errors.

  11. Defining overweight and obesity among Greek children living in Thessaloniki: International versus local reference standards.

    PubMed

    Christoforidis, A; Dimitriadou, M; Papadopolou, E; Stilpnopoulou, D; Katzos, G; Athanassiou-Metaxa, M

    2011-04-01

    Body Mass Index (BMI) offers a simple and reasonable measure of obesity that, with the use of the appropriate reference, can help in the early detection of children with weight problems. Our aim was to compare the two most commonly used international BMI references and the national Greek BMI reference in identifying Greek children being overweight and obese. A group of 1557 children (820 girls and 737 boys, mean age: 11.42 ± 3.51 years) were studied. Weight and height was measured using standard methods, and BMI was calculated. Overweight and obesity were determined using the International Obesity Task Force (IOTF) standards, the Centers for Disease Control and Prevention (CDC) BMI-forage curves and the most recent Greek BMI-for-age curves. RESULTS showed that the IOTF's cut-off limits identifies a significantly higher prevalence of overweight (22.4%) compared with both the CDC's (11.8%, p=0.03) and the Greek's (7.4%, p=0.002) cut-off limits. However, the prevalence of obesity was generally increased when it was determined using the CDC's cut-off limits (13.9%) compared to the prevalence calculated with both the IOTF's (6.5%, p=0.05) and the Greek's (6.9%, n.s.) cut off limits. The use of the national Greek reference standards for BMI underestimates the true prevalence of overweight and obesity. On the contrary, both the IOTF and the CDC standards, although independently, detect an increased number of overweight and obese children and thus they should be adopted in the clinical practice for an earlier identification and a timelier intervention.

  12. Rapid PCR-mediated synthesis of competitor molecules for accurate quantification of beta(2) GABA(A) receptor subunit mRNA.

    PubMed

    Vela, J; Vitorica, J; Ruano, D

    2001-12-01

    We describe a fast and easy method for the synthesis of competitor molecules based on non-specific conditions of PCR. RT-competitive PCR is a sensitive technique that allows quantification of very low quantities of mRNA molecules in small tissue samples. This technique is based on the competition established between the native and standard templates for nucleotides, primers or other factors during PCR. Thus, the most critical parameter is the use of good internal standards to generate a standard curve from which the amount of native sequences can be properly estimated. At the present time different types of internal standards and methods for their synthesis have been described. Normally, most of these methods are time-consuming and require the use of different sets of primers, different rounds of PCR or specific modifications, such as site-directed mutagenesis, that need subsequent analysis of the PCR products. Using our method, we obtained in a single round of PCR and with the same primer pair, competitor molecules that were successfully used in RT-competitive PCR experiments. The principal advantage of this method is high versatility and economy. Theoretically it is possible to synthesize a specific competitor molecule for each primer pair used. Finally, using this method we have been able to quantify the increase in the expression of the beta(2) GABA(A) receptor subunit mRNA that occurs during rat hippocampus development.

  13. Development of a New Growth Standard for Breastfed Chinese Infants: What Is the Difference from the WHO Growth Standards?

    PubMed

    Huang, Xiaona; Chang, Jenjen; Feng, Weiwei; Xu, Yiqun; Xu, Tao; Tang, He; Wang, Huishan; Pan, Xiaoping

    2016-01-01

    The objectives of this longitudinal study were to examine the trajectory of breastfed infants' growth in China to update growth standards for early childhood, and to compare these updated Chinese growth standards with the growth standards recommended by the World Health Organization (WHO) in 2006.This longitudinal cohort study enrolled 1,840 healthy breastfed infants living in an "optimal" environment favorable to growth and followed up until one year of age from 2007 to 2010. The study subjects were recruited from 60 communities in twelve cities in China. A participating infant's birth weight was measured within the first hour of the infant's life, and birth length and head circumference within 24 hours after birth. Repeated weekly and monthly anthropometric measurements were also taken. Multilevel (ML) modelling via MLwiN2.25 was fitted to estimate the growth curves of weight-for-age (WFA), length-for-age (LFA), and head circumference-for-age (HFA) for the study sample as a whole and by child sex, controlling for mode of delivery, the gravidity and parity of the mother, infant's physical measurements at birth, infant's daily food intaking frequency per day, infant's medical conditions, the season when the infant's physical measurement was taken, parents' ages, heights, and attained education, and family structure and income per month. During the first four weeks after birth, breastfed infants showed an increase in weight, length, and head circumference of 1110g, 4.9 cm, and 3.2 cm, respectively, among boys, and 980 g, 4.4 cm, and 2.8 cm, respectively, among girls. Throughout infancy, the total growth for these three was 6930 g, 26.4 cm, and 12.5 cm, respectively, among boys, and 6480 g, 25.5 cm, and 11.7 cm, respectively, among girls. As expected, there was a significant sex difference in growth during the first year. In comparison with the WHO growth standards, breastfed children in our study were heavier in weight, longer in length, and bigger in head circumference, with the exception of a few age points during the first two to four months on the upper two percentile curves.Our data suggested the growth curves for breastfed infants in China were significantly different in comparison with those based on the WHO standards. The adoption of the WHO infant growth standards among Chinese infants, as well as the methods used in the development of such growth standards in China, need careful and coordinated consideration.

  14. Development of a New Growth Standard for Breastfed Chinese Infants: What Is the Difference from the WHO Growth Standards?

    PubMed Central

    Chang, Jenjen; Feng, Weiwei; Xu, Yiqun; Xu, Tao; Tang, He; Wang, Huishan; Pan, Xiaoping

    2016-01-01

    The objectives of this longitudinal study were to examine the trajectory of breastfed infants’ growth in China to update growth standards for early childhood, and to compare these updated Chinese growth standards with the growth standards recommended by the World Health Organization (WHO) in 2006.This longitudinal cohort study enrolled 1,840 healthy breastfed infants living in an "optimal" environment favorable to growth and followed up until one year of age from 2007 to 2010. The study subjects were recruited from 60 communities in twelve cities in China. A participating infant’s birth weight was measured within the first hour of the infant’s life, and birth length and head circumference within 24 hours after birth. Repeated weekly and monthly anthropometric measurements were also taken. Multilevel (ML) modelling via MLwiN2.25 was fitted to estimate the growth curves of weight-for-age (WFA), length-for-age (LFA), and head circumference-for-age (HFA) for the study sample as a whole and by child sex, controlling for mode of delivery, the gravidity and parity of the mother, infant’s physical measurements at birth, infant’s daily food intaking frequency per day, infant’s medical conditions, the season when the infant’s physical measurement was taken, parents’ ages, heights, and attained education, and family structure and income per month. During the first four weeks after birth, breastfed infants showed an increase in weight, length, and head circumference of 1110g, 4.9 cm, and 3.2 cm, respectively, among boys, and 980 g, 4.4 cm, and 2.8 cm, respectively, among girls. Throughout infancy, the total growth for these three was 6930 g, 26.4 cm, and 12.5 cm, respectively, among boys, and 6480 g, 25.5 cm, and 11.7 cm, respectively, among girls. As expected, there was a significant sex difference in growth during the first year. In comparison with the WHO growth standards, breastfed children in our study were heavier in weight, longer in length, and bigger in head circumference, with the exception of a few age points during the first two to four months on the upper two percentile curves.Our data suggested the growth curves for breastfed infants in China were significantly different in comparison with those based on the WHO standards. The adoption of the WHO infant growth standards among Chinese infants, as well as the methods used in the development of such growth standards in China, need careful and coordinated consideration. PMID:27977706

  15. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers

    PubMed Central

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-01-01

    Background Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. Methods In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Results Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Conclusion Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided. PMID:19036144

  16. Effect of the Macintosh curved blade size on direct laryngoscopic view in edentulous patients.

    PubMed

    Kim, Hyerim; Chang, Jee-Eun; Han, Sung-Hee; Lee, Jung-Man; Yoon, Soohyuk; Hwang, Jin-Young

    2018-01-01

    In the present study, we compared the laryngoscopic view depending on the size of the Macintosh curved blade in edentulous patients. Thirty-five edentulous adult patients scheduled for elective surgery were included in the study. After induction of anesthesia, two direct laryngoscopies were performed alternately using a standard-sized Macintosh curved blade (No. 4 for men and No. 3 for women) and smaller-sized Macintosh curved blade (No. 3 for men and No. 2 for women). During direct laryngoscopy with each blade, two digital photographs of the lateral view were taken when the blade tip was placed in the valleculae; the laryngoscope was lifted to achieve the best laryngeal view. Then, the best laryngeal views were assessed using the percentage of glottic opening (POGO) score. On the photographs of the lateral view of direct laryngoscopy, the angles between the line extending along the laryngoscopic handle and the horizontal line were measured. The POGO score was improved with the smaller-sized blade compared with the standard-sized blade (87.3% [11.8%] vs. 71.3% [20.0%], P<0.001, respectively). The angles between the laryngoscopic handle and the horizontal line were greater with the smaller-sized blade compared to the standard-sized blade when the blade tip was placed on the valleculae and when the laryngoscope was lifted to achieve the best laryngeal view (both P<0.001). Compared to a standard-sized Macintosh blade, a smaller-sized Macintosh curved blade improved the laryngeal exposure in edentulous patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Quantification of HIV-1 DNA using real-time recombinase polymerase amplification.

    PubMed

    Crannell, Zachary Austin; Rohrman, Brittany; Richards-Kortum, Rebecca

    2014-06-17

    Although recombinase polymerase amplification (RPA) has many advantages for the detection of pathogenic nucleic acids in point-of-care applications, RPA has not yet been implemented to quantify sample concentration using a standard curve. Here, we describe a real-time RPA assay with an internal positive control and an algorithm that analyzes real-time fluorescence data to quantify HIV-1 DNA. We show that DNA concentration and the onset of detectable amplification are correlated by an exponential standard curve. In a set of experiments in which the standard curve and algorithm were used to analyze and quantify additional DNA samples, the algorithm predicted an average concentration within 1 order of magnitude of the correct concentration for all HIV-1 DNA concentrations tested. These results suggest that quantitative RPA (qRPA) may serve as a powerful tool for quantifying nucleic acids and may be adapted for use in single-sample point-of-care diagnostic systems.

  18. Next-Generation Intensity-Duration-Frequency Curves for Hydrologic Design in Snow-Dominated Environments

    NASA Astrophysics Data System (ADS)

    Yan, Hongxiang; Sun, Ning; Wigmosta, Mark; Skaggs, Richard; Hou, Zhangshuan; Leung, Ruby

    2018-02-01

    There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial overestimation/underestimation of design basis events and subsequent overdesign/underdesign of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme events at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to underdesign, many with significant underestimation of 100 year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for underdesign were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for overdesign at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.

  19. RESPONSIVENESS OF THE ACTIVITIES OF DAILY LIVING SCALE OF THE KNEE OUTCOME SURVEY AND NUMERIC PAIN RATING SCALE IN PATIENTS WITH PATELLOFEMORAL PAIN

    PubMed Central

    Piva, Sara R.; Gil, Alexandra B.; Moore, Charity G.; Fitzgerald, G. Kelley

    2016-01-01

    Objective To assess internal and external responsiveness of the Activity of Daily Living Scale of the Knee Outcome Survey and Numeric Pain Rating Scale on patients with patellofemoral pain. Design One group pre-post design. Subjects A total of 60 individuals with patellofemoral pain (33 women; mean age 29.9 (standard deviation 9.6) years). Methods The Activity of Daily Living Scale and the Numeric Pain Rating Scale were assessed before and after 8 weeks of physical therapy program. Patients completed a global rating of change scale at the end of therapy. The standardized effect size, Guyatt responsiveness index, and the minimum clinical important difference were calculated. Results Standardized effect size of the Activity of Daily Living Scale was 0.63, Guyatt responsiveness index was 1.4, area under the curve was 0.83 (95% confidence interval: 0.72, 0.94), and the minimum clinical important difference corresponded to an increase of 7.1 percentile points. Standardized effect size of the Numeric Pain Rating Scale was 0.72, Guyatt responsiveness index was 2.2, area under the curve was 0.80 (95% confidence interval: 0.70, 0.92), and the minimum clinical important difference corresponded to a decrease of 1.16 points. Conclusion Information from this study may be helpful to therapists when evaluating the effectiveness of rehabilitation intervention on physical function and pain, and to power future clinical trials on patients with patellofemoral pain. PMID:19229444

  20. The κ-generalized distribution: A new descriptive model for the size distribution of incomes

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Di Matteo, T.; Gallegati, M.; Kaniadakis, G.

    2008-05-01

    This paper proposes the κ-generalized distribution as a model for describing the distribution and dispersion of income within a population. Formulas for the shape, moments and standard tools for inequality measurement-such as the Lorenz curve and the Gini coefficient-are given. A method for parameter estimation is also discussed. The model is shown to fit extremely well the data on personal income distribution in Australia and in the United States.

  1. pcr: an R package for quality assessment, analysis and testing of qPCR data

    PubMed Central

    Ahmed, Mahmoud

    2018-01-01

    Background Real-time quantitative PCR (qPCR) is a broadly used technique in the biomedical research. Currently, few different analysis models are used to determine the quality of data and to quantify the mRNA level across the experimental conditions. Methods We developed an R package to implement methods for quality assessment, analysis and testing qPCR data for statistical significance. Double Delta CT and standard curve models were implemented to quantify the relative expression of target genes from CT in standard qPCR control-group experiments. In addition, calculation of amplification efficiency and curves from serial dilution qPCR experiments are used to assess the quality of the data. Finally, two-group testing and linear models were used to test for significance of the difference in expression control groups and conditions of interest. Results Using two datasets from qPCR experiments, we applied different quality assessment, analysis and statistical testing in the pcr package and compared the results to the original published articles. The final relative expression values from the different models, as well as the intermediary outputs, were checked against the expected results in the original papers and were found to be accurate and reliable. Conclusion The pcr package provides an intuitive and unified interface for its main functions to allow biologist to perform all necessary steps of qPCR analysis and produce graphs in a uniform way. PMID:29576953

  2. Triceps and Subscapular Skinfold Thickness Percentiles and Cut-Offs for Overweight and Obesity in a Population-Based Sample of Schoolchildren and Adolescents in Bogota, Colombia.

    PubMed

    Ramírez-Vélez, Robinson; López-Cifuentes, Mario Ferney; Correa-Bautista, Jorge Enrique; González-Ruíz, Katherine; González-Jiménez, Emilio; Córdoba-Rodríguez, Diana Paola; Vivas, Andrés; Triana-Reina, Hector Reynaldo; Schmidt-RioValle, Jacqueline

    2016-09-24

    The assessment of skinfold thickness is an objective measure of adiposity. The aims of this study were to establish Colombian smoothed centile charts and LMS L (Box-Cox transformation), M (median), and S (coefficient of variation) tables for triceps, subscapular, and triceps + subscapular skinfolds; appropriate cut-offs were selected using receiver operating characteristic (ROC) analysis based on a population-based sample of children and adolescents in Bogotá, Colombia. A cross-sectional study was conducted in 9618 children and adolescents (55.7% girls; age range of 9-17.9 years). Triceps and subscapular skinfold measurements were obtained using standardized methods. We calculated the triceps + subscapular skinfold (T + SS) sum. Smoothed percentile curves for triceps and subscapular skinfold thickness were derived using the LMS method. ROC curve analyses were used to evaluate the optimal cut-off point of skinfold thickness for overweight and obesity, based on the International Obesity Task Force definitions. Subscapular and triceps skinfolds and T + SS were significantly higher in girls than in boys (p < 0.001). The ROC analysis showed that subscapular and triceps skinfolds and T + SS have a high discriminatory power in the identification of overweight and obesity in the sample population in this study. Our results provide sex- and age-specific normative reference standards for skinfold thickness values from a population from Bogotá, Colombia.

  3. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been base line corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For themore » ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.« less

  4. Determination of Campesterol, Stigmasterol, and beta-Sitosterol in Saw Palmetto Raw Materials and Dietary Supplements by Gas Chromatography: Single-Laboratory Validation

    PubMed Central

    Sorenson, Wendy R.; Sullivan, Darryl

    2008-01-01

    In conjunction with an AOAC Presidential Task Force on Dietary Supplements, a method was validated for measurement of 3 plant sterols (phytosterols) in saw palmetto raw materials, extracts, and dietary supplements. AOAC Official Method 994.10, “Cholesterol in Foods,” was modified for purposes of this validation. Test samples were saponified at high temperature with ethanolic potassium hydroxide solution. The unsaponifiable fraction containing phytosterols (campesterol, stigmasterol, and beta-sitosterol) was extracted with toluene. Phytosterols were derivatized to trimethylsilyl ethers and then quantified by gas Chromatography with a hydrogen flame ionization detector. The presence of the phytosterols was detected at concentrations greater than or equal to 1.00 mg/100 g based on 2–3 g of sample. The standard curve range for this assay was 0.00250 to 0.200 mg/mL. The calibration curves for all phytosterols had correlation coefficients greater than or equal to 0.995. Precision studies produced relative standard deviation values of 1.52 to 7.27% for campesterol, 1.62 to 6.48% for stigmasterol, and 1.39 to 10.5% for beta-sitosterol. Recoveries for samples fortified at 100% of the inherent values averaged 98.5 to 105% for campesterol, 95.0 to 108% for stigmasterol, and 85.0 to 103% for beta-sitosterol. PMID:16512224

  5. Antimicrobial Effects of β-Lactams on Imipenem-Resistant Ceftazidime-Susceptible Pseudomonas aeruginosa

    PubMed Central

    Wi, Yu Mi; Choi, Ji-Young; Lee, Ji-Young; Kang, Cheol-In; Chung, Doo Ryeon; Peck, Kyong Ran; Song, Jae-Hoon

    2017-01-01

    ABSTRACT We studied the resistance mechanism and antimicrobial effects of β-lactams on imipenem-resistant Pseudomonas aeruginosa isolates that were susceptible to ceftazidime as detected by time-kill curve methods. Among 215 P. aeruginosa isolates from hospitalized patients in eight hospitals in the Republic of Korea, 18 isolates (23.4% of 77 imipenem-resistant isolates) were imipenem resistant and ceftazidime susceptible. Multilocus sequence typing revealed diverse genotypes, which indicated independent emergence. These 18 isolates were negative for carbapenemase genes. All 18 imipenem-resistant ceftazidime-susceptible isolates showed decreased mRNA expression of oprD, and overexpression of mexB was observed in 13 isolates. In contrast, overexpression of ampC, mexD, mexF, or mexY was rarely found. Time-kill curve methods were applied to three selected imipenem-resistant ceftazidime-susceptible isolates at a standard inoculum (5 × 105 CFU/ml) or at a high inoculum (5 × 107 CFU/ml) to evaluate the antimicrobial effects of β-lactams. Inoculum effects were detected for all three β-lactam antibiotics, ceftazidime, cefepime, and piperacillin-tazobactam, against all three isolates. The antibiotics had significant killing effects in the standard inoculum, but no effects in the high inoculum were observed. Our results suggest that β-lactam antibiotics should be used with caution in patients with imipenem-resistant ceftazidime-susceptible P. aeruginosa infection, especially in high-inoculum infections such as endocarditis and osteomyelitis. PMID:28373200

  6. Development of FQ-PCR method to determine the level of ADD1 expression in fatty and lean pigs.

    PubMed

    Cui, J X; Chen, W; Zeng, Y Q

    2015-10-30

    To determine how adipocyte determination and differentiation factor 1 (ADD1), a gene involved in the determination of pork quality, is regulated in Laiwu and Large pigs, we used TaqMan fluorescence quantitative real-time polymerase chain reaction (FQ-PCR) to detect differential expression in the longissimus muscle of Laiwu (fatty) and Large White (lean) pigs. In this study, the ADD1 and GAPDH cDNA sequences were cloned using a T-A cloning assay, and the clone sequences were consistent with those deposited in GenBank. Thus, the target fragment was successfully recombined into the vector, and its integrity was maintained. The standard curve and regression equation were established through the optimized FQ-PCR protocol. The standard curve of porcine ADD1 and GAPDH cDNA was determined, and its linear range extension could reach seven orders of magnitudes. The results showed that this method was used to quantify ADD1 expression in the longissimus muscle of two breeds of pig, and was found to be accurate, sensitive, and convenient. These results provide information regarding porcine ADD1 mRNA expression and the mechanism of adipocyte differentiation, and this study could help in the effort to meet the demands of consumers interested in the maintenance of health and prevention of obesity. Furthermore, it could lead to new approaches in the prevention and clinical treatment of this disease.

  7. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    PubMed

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.

  8. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.

  9. Automated data processing and radioassays.

    PubMed

    Samols, E; Barrows, G H

    1978-04-01

    Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assay, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogenous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model based on the probability or binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The vitrues and pitfalls are presented of point-to-point data reduction, linear transformations, and curvilinear fitting approaches. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance is stressed of limiting the range of reported automated assay results to that portion of the standard curve that delivers optimal sensitivity. Published methods for automated data reduction of Scatchard plots for radioreceptor assay are limited by calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated as it is with manual data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between assay variance.

  10. Effect of tree-ring detrending method on apparent growth trends of black and white spruce in interior Alaska

    NASA Astrophysics Data System (ADS)

    Sullivan, Patrick F.; Pattison, Robert R.; Brownlee, Annalis H.; Cahoon, Sean M. P.; Hollingsworth, Teresa N.

    2016-11-01

    Boreal forests are critical sinks in the global carbon cycle. However, recent studies have revealed increasing frequency and extent of wildfires, decreasing landscape greenness, increasing tree mortality and declining growth of black and white spruce in boreal North America. We measured ring widths from a large set of increment cores collected across a vast area of interior Alaska and examined implications of data processing decisions for apparent trends in black and white spruce growth. We found that choice of detrending method had important implications for apparent long-term growth trends and the strength of climate-growth correlations. Trends varied from strong increases in growth since the Industrial Revolution, when ring widths were detrended using single-curve regional curve standardization (RCS), to strong decreases in growth, when ring widths were normalized by fitting a horizontal line to each ring width series. All methods revealed a pronounced growth peak for black and white spruce centered near 1940. Most detrending methods showed a decline from the peak, leaving recent growth of both species near the long-term mean. Climate-growth analyses revealed negative correlations with growing season temperature and positive correlations with August precipitation for both species. Multiple-curve RCS detrending produced the strongest and/or greatest number of significant climate-growth correlations. Results provide important historical context for recent growth of black and white spruce. Growth of both species might decline with future warming, if not mitigated by increasing precipitation. However, widespread drought-induced mortality is probably not imminent, given that recent growth was near the long-term mean.

  11. Modelling a Compensation Standard for a Regional Forest Ecosystem: A Case Study in Yanqing District, Beijing, China

    PubMed Central

    Li, Tan; Zhang, Qingguo; Zhang, Ying

    2018-01-01

    The assessment of forest ecosystem services can quantify the impact of these services on human life and is the main basis for formulating a standard of compensation for these services. Moreover, the calculation of the indirect value of forest ecosystem services should not be ignored, as has been the case in some previous publications. A low compensation standard and the lack of a dynamic coordination mechanism are the main problems existing in compensation implementation. Using comparison and analysis, this paper employed accounting for both the costs and benefits of various alternatives. The analytic hierarchy process (AHP) method and the Pearl growth-curve method were used to adjust the results. This research analyzed the contribution of each service value from the aspects of forest produce services, ecology services, and society services. We also conducted separate accounting for cost and benefit, made a comparison of accounting and evaluation methods, and estimated the implementation period of the compensation standard. The main conclusions of this research include the fact that any compensation standard should be determined from the points of view of both benefit and cost in a region. The results presented here allow the range between the benefit and cost compensation to be laid out more reasonably. The practical implications of this research include the proposal that regional decision-makers should consider a dynamic compensation method to meet with the local economic level by using diversified ways to raise the compensation standard, and that compensation channels should offer a mixed mode involving both the market and government. PMID:29561789

  12. Analysis of Indonesian educational system standard with KSIM cross-impact method

    NASA Astrophysics Data System (ADS)

    Arridjal, F.; Aldila, D.; Bustamam, A.

    2017-07-01

    The Result of The Programme of International Student Assessment (PISA) on 2012 shows that Indonesia is on 64'th position from 65 countries in Mathematics Mean Score. The 2013 Learning Curve Mapping, Indonesia is included in the 10th category of countries with the lowest performance on cognitive skills aspect, i.e. 37'th position from 40 countries. Competency is built by 3 aspects, one of them is cognitive aspect. The low result of mapping on cognitive aspect, describe the low of graduate competences as an output of Indonesia National Education System (INES). INES adopting a concept Eight Educational System Standards (EESS), one of them is graduate competency standard which connected directly with Indonesia's students. This research aims is to model INES by using KSIM cross-impact. Linear regression models of EESS constructed using the accreditation national data of Senior High Schools in Indonesia. The results then interpreted as impact value on the construction of KSIM cross-impact INES. The construction is used to analyze the interaction of EESS and doing numerical simulation for possible public policy in the education sector, i.e. stimulate the growth of education staff standard, content, process and infrastructure. All simulations of public policy has been done with 2 methods i.e with a multiplier impact method and with constant intervention method. From numerical simulation result, it is shown that stimulate the growth standard of content in the construction KSIM cross-impact EESS is the best option for public policy to maximize the growth of graduate competency standard.

  13. Modelling a Compensation Standard for a Regional Forest Ecosystem: A Case Study in Yanqing District, Beijing, China.

    PubMed

    Li, Tan; Zhang, Qingguo; Zhang, Ying

    2018-03-21

    The assessment of forest ecosystem services can quantify the impact of these services on human life and is the main basis for formulating a standard of compensation for these services. Moreover, the calculation of the indirect value of forest ecosystem services should not be ignored, as has been the case in some previous publications. A low compensation standard and the lack of a dynamic coordination mechanism are the main problems existing in compensation implementation. Using comparison and analysis, this paper employed accounting for both the costs and benefits of various alternatives. The analytic hierarchy process (AHP) method and the Pearl growth-curve method were used to adjust the results. This research analyzed the contribution of each service value from the aspects of forest produce services, ecology services, and society services. We also conducted separate accounting for cost and benefit, made a comparison of accounting and evaluation methods, and estimated the implementation period of the compensation standard. The main conclusions of this research include the fact that any compensation standard should be determined from the points of view of both benefit and cost in a region. The results presented here allow the range between the benefit and cost compensation to be laid out more reasonably. The practical implications of this research include the proposal that regional decision-makers should consider a dynamic compensation method to meet with the local economic level by using diversified ways to raise the compensation standard, and that compensation channels should offer a mixed mode involving both the market and government.

  14. Accurate Solution of Multi-Region Continuum Biomolecule Electrostatic Problems Using the Linearized Poisson-Boltzmann Equation with Curved Boundary Elements

    PubMed Central

    Altman, Michael D.; Bardhan, Jaydeep P.; White, Jacob K.; Tidor, Bruce

    2009-01-01

    We present a boundary-element method (BEM) implementation for accurately solving problems in biomolecular electrostatics using the linearized Poisson–Boltzmann equation. Motivating this implementation is the desire to create a solver capable of precisely describing the geometries and topologies prevalent in continuum models of biological molecules. This implementation is enabled by the synthesis of four technologies developed or implemented specifically for this work. First, molecular and accessible surfaces used to describe dielectric and ion-exclusion boundaries were discretized with curved boundary elements that faithfully reproduce molecular geometries. Second, we avoided explicitly forming the dense BEM matrices and instead solved the linear systems with a preconditioned iterative method (GMRES), using a matrix compression algorithm (FFTSVD) to accelerate matrix-vector multiplication. Third, robust numerical integration methods were employed to accurately evaluate singular and near-singular integrals over the curved boundary elements. Finally, we present a general boundary-integral approach capable of modeling an arbitrary number of embedded homogeneous dielectric regions with differing dielectric constants, possible salt treatment, and point charges. A comparison of the presented BEM implementation and standard finite-difference techniques demonstrates that for certain classes of electrostatic calculations, such as determining absolute electrostatic solvation and rigid-binding free energies, the improved convergence properties of the BEM approach can have a significant impact on computed energetics. We also demonstrate that the improved accuracy offered by the curved-element BEM is important when more sophisticated techniques, such as non-rigid-binding models, are used to compute the relative electrostatic effects of molecular modifications. In addition, we show that electrostatic calculations requiring multiple solves using the same molecular geometry, such as charge optimization or component analysis, can be computed to high accuracy using the presented BEM approach, in compute times comparable to traditional finite-difference methods. PMID:18567005

  15. A method for determination of [Fe3+]/[Fe2+] ratio in superparamagnetic iron oxide

    NASA Astrophysics Data System (ADS)

    Jiang, Changzhao; Yang, Siyu; Gan, Neng; Pan, Hongchun; Liu, Hong

    2017-10-01

    Superparamagnetic iron oxide nanoparticles (SPION), as a kind of nanophase materials, are widely used in biomedical application, such as magnetic resonance imaging (MRI), drug delivery, and magnetic field assisted therapy. The magnetic property of SPION has close connection with its crystal structure, namely it is related to the ratio of Fe3+ and Fe2+ which form the SPION. So a simple way to determine the content of the Fe3+ and Fe2+ is important for researching the property of SPION. This review covers a method for determination of the Fe3+ and Fe2+ ratio in SPION by UV-vis spectrophotometry based the reaction of Fe2+ and 1,10-phenanthroline. The standard curve of Fe with R2 = 0.9999 is used for determination the content of Fe2+ and total iron with 2.5 mL 0.01% (w/v) SPION digested by HCl, pH = 4.30 HOAc-NaAc buffer 10 mL, 0.01% (w/v) 1,10-phenanthroline 5 mL and 10% (w/v) ascorbic acid 1 mL for total iron determine independently. But the presence of Fe3+ interfere with obtaining the actual value of Fe2+ (the error close to 9%). We designed a calibration curve to eliminate the error by devising a series of solution of different ratio of [Fe3+]/[Fe2+], and obtain the calibration curve. Through the calibration curve, the error between the measured value and the actual value can be reduced to 0.4%. The R2 of linearity of the method is 0.99441 and 0.99929 for Fe2+ and total iron respectively. The error of accuracy of recovery and precision of inter-day and intra-day are both lower than 2%, which can prove the reliability of the determination method.

  16. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as P max, O max, and elasticity. The proposed model stabilizes the derived values regardless of using different added increments and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.

  17. Innovative design method of automobile profile based on Fourier descriptor

    NASA Astrophysics Data System (ADS)

    Gao, Shuyong; Fu, Chaoxing; Xia, Fan; Shen, Wei

    2017-10-01

    Aiming at the innovation of the contours of automobile side, this paper presents an innovative design method of vehicle side profile based on Fourier descriptor. The design flow of this design method is: pre-processing, coordinate extraction, standardization, discrete Fourier transform, simplified Fourier descriptor, exchange descriptor innovation, inverse Fourier transform to get the outline of innovative design. Innovative concepts of the innovative methods of gene exchange among species and the innovative methods of gene exchange among different species are presented, and the contours of the innovative design are obtained separately. A three-dimensional model of a car is obtained by referring to the profile curve which is obtained by exchanging xenogeneic genes. The feasibility of the method proposed in this paper is verified by various aspects.

  18. A single extraction and HPLC procedure for simultaneous analysis of phytosterols, tocopherols and lutein in soybeans.

    PubMed

    Slavin, Margaret; Yu, Liangli Lucy

    2012-12-15

    A saponification/extraction procedure and high performance liquid chromatography (HPLC) analysis method were developed and validated for simultaneous analysis of phytosterols, tocopherols and lutein (a carotenoid) in soybeans. Separation was achieved on a phenyl column with a ternary, isocratic solvent system of acetonitrile, methanol and water (48:22.5:29.5, v/v/v). Evaporative light scattering detection (ELSD) was used to quantify β-sitosterol, stigmasterol, campesterol, and α-, δ- and γ-tocopherols, while lutein was quantified with visible light absorption at 450 nm. Peak identification was verified by retention times and spikes with external standards. Standard curves were constructed (R(2)>0.99) to allow for sample quantification. Recovery of the saponification and extraction was demonstrated via analysis of spiked samples. Also, the accuracy of results of four soybeans using the described saponification and HPLC analytical method was validated against existing methods. This method offers a more efficient alternative to individual methods for quantifying lutein, tocopherols and sterols in soybeans. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. A stage-normalized function for the synthesis of stage-discharge relations for the Colorado River in Grand Canyon, Arizona

    USGS Publications Warehouse

    Wiele, Stephen M.; Torizzo, Margaret

    2003-01-01

    A method was developed to construct stage-discharge rating curves for the Colorado River in Grand Canyon, Arizona, using two stage-discharge pairs and a stage-normalized rating curve. Stage-discharge rating curves formulated with the stage-normalized curve method are compared to (1) stage-discharge rating curves for six temporary stage gages and two streamflow-gaging stations developed by combining stage records with modeled unsteady flow; (2) stage-discharge rating curves developed from stage records and discharge measurements at three streamflow-gaging stations; and (3) stages surveyed at known discharges at the Northern Arizona Sand Bar Studies sites. The stage-normalized curve method shows good agreement with field data when the discharges used in the construction of the rating curves are at least 200 cubic meters per second apart. Predictions of stage using the stage-normalized curve method are also compared to predictions of stage from a steady-flow model.

  20. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    Increasing number and activity of the population to meet the needs of their lives greatly affect the utilization of land resources. Land needs for activities of the population continue to grow, while the availability of land is limited. Therefore, there will be changes in land use. As a result, the problems faced by land degradation and conversion of agricultural land become non-agricultural. The objectives of this research are: (1) to determine parameter of spatial numerical classification of sustainable food agriculture in Badung Regency and Denpasar City (2) to know the projection of food balance in Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050 (3) to specify of function of spatial numerical classification in the making of zonation model of sustainable agricultural land area in Badung regency and Denpasar city (4) to determine the appropriate model of the area to protect sustainable agricultural land in spatial and time scale in Badung and Denpasar regencies. The method used in this research was quantitative method include: survey, soil analysis, spatial data development, geoprocessing analysis (spatial analysis of overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography). Qualitative methods consisted of literature studies, and interviews. The parameters observed for a total of 11 parameters Badung regency and Denpasar as much as 9 parameters. Numerical classification parameter analysis results used the standard deviation and the mean of the population data and projections relationship rice field in the food balance sheet by modelling. The result of the research showed that, the number of different numerical classification parameters in rural areas (Badung) and urban areas (Denpasar), in urban areas the number of parameters is less than the rural areas. The based on numerical classification weighting and scores generate population distribution parameter analysis results of a standard deviation and average value. Numerical classification produced 5 models, which was divided into three zones are sustainable neighbourhood, buffer and converted in Denpasar and Badung. The results of Population curve parameter analysis in Denpasar showed normal curve, in contrast to the Badung regency showed abnormal curve, therefore Denpasar modeling carried out throughout the region, while in the Badung regency modeling done in each district. Relationship modelling and projections lands role in food balance in Badung views of sustainable land area whereas in Denpasar seen from any connection to the green open spaces in the spatial plan Denpasar 2011-2031. Modelling in Badung (rural) is different in Denpasar (urban), as well as population curve parameter analysis results in Badung showed abnormal curve while in Denpasar showed normal curve. Relationship modelling and projections lands role in food balance in the Badung regency sustainable in terms of land area, while in Denpasar in terms of linkages with urban green space in Denpasar City’s regional landuse plan of 2011-2031.

  1. Analysis of atmospheric pollutant metals by laser ablation inductively coupled plasma mass spectrometry with a radial line-scan dried-droplet approach

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoxing; Qian, Yuan; Guo, Yanchuan; Wei, Nannan; Li, Yulan; Yao, Jian; Wang, Guanghua; Ma, Jifei; Liu, Wei

    2017-12-01

    A novel method has been improved for analyzing atmospheric pollutant metals (Be, Mn, Fe, Co, Ni, Cu, Zn, Se, Sr, Cd, and Pb) by laser ablation inductively coupled plasma mass spectrometry. In this method, solid standards are prepared by depositing droplets of aqueous standard solutions on the surface of a membrane filter, which is the same type as used for collecting atmospheric pollutant metals. Laser parameters were optimized, and ablation behaviors of the filter discs were studied. The mode of radial line scans across the filter disc was a representative ablation strategy and can avoid error from the inhomogeneous filter standards and marginal effect of the filter disc. Pt, as the internal standard, greatly improved the correlation coefficient of the calibration curve. The developed method provides low detection limits, from 0.01 ng m- 3 for Be and Co to 1.92 ng m- 3 for Fe. It was successfully applied for the determination of atmospheric pollutant metals collected in Lhasa, China. The analytical results showed good agreement with those obtained by conventional liquid analysis. In contrast to the conventional acid digestion procedure, the novel method not only greatly reduces sample preparation and shortens the analysis time but also provides a possible means for studying the spatial distribution of atmospheric filter samples.

  2. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market andmore » dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two-weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from 16 dollars/MWh to 6 dollars/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.« less

  3. Indicator methods to evaluate the hygienic performance of industrial scale operating Biowaste Composting Plants.

    PubMed

    Martens, Jürgen

    2005-01-01

    The hygienic performance of biowaste composting plants to ensure the quality of compost is of high importance. Existing compost quality assurance systems reflect this importance through intensive testing of hygienic parameters. In many countries, compost quality assurance systems are under construction and it is necessary to check and to optimize the methods to state the hygienic performance of composting plants. A set of indicator methods to evaluate the hygienic performance of normal operating biowaste composting plants was developed. The indicator methods were developed by investigating temperature measurements from indirect process tests from 23 composting plants belonging to 11 design types of the Hygiene Design Type Testing System of the German Compost Quality Association (BGK e.V.). The presented indicator methods are the grade of hygienization, the basic curve shape, and the hygienic risk area. The temperature courses of single plants are not distributed normally, but they were grouped by cluster analysis in normal distributed subgroups. That was a precondition to develop the mentioned indicator methods. For each plant the grade of hygienization was calculated through transformation into the standard normal distribution. It shows the part in percent of the entire data set which meet the legal temperature requirements. The hygienization grade differs widely within the design types and falls below 50% for about one fourth of the plants. The subgroups are divided visually into basic curve shapes which stand for different process courses. For each plant the composition of the entire data set out of the various basic curve shapes can be used as an indicator for the basic process conditions. Some basic curve shapes indicate abnormal process courses which can be emended through process optimization. A hygienic risk area concept using the 90% range of variation of the normal temperature courses was introduced. Comparing the design type range of variation with the legal temperature defaults showed hygienic risk areas over the temperature courses which could be minimized through process optimization. The hygienic risk area of four design types shows a suboptimal hygienic performance.

  4. A robust LC-MS/MS method for the determination of pidotimod in different biological matrixes and its application to in vivo and in vitro pharmacokinetic studies.

    PubMed

    Wang, Guangji; Wang, Qian; Rao, Tai; Shen, Boyu; Kang, Dian; Shao, Yuhao; Xiao, Jingcheng; Chen, Huimin; Liang, Yan

    2016-06-15

    Pidotimod, (R)-3-[(S)-(5-oxo-2-pyrrolidinyl) carbonyl]-thiazolidine-4-carboxylic acid, was frequently used to treat children with recurrent respiratory infections. Preclinical pharmacokinetics of pidotimod was still rarely reported to date. Herein, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed and validated to determine pidotimod in rat plasma, tissue homogenate and Caco-2 cells. In this process, phenacetin was chosen as the internal standard due to its similarity in chromatographic and mass spectrographic characteristics with pidotimod. The plasma calibration curves were established within the concentration range of 0.01-10.00μg/mL, and similar linear curves were built using tissue homogenate and Caco-2 cells. The calibration curves for all biological samples showed good linearity (r>0.99) over the concentration ranges tested. The intra- and inter-day precision (RSD, %) values were below 15% and accuracy (RE, %) was ranged from -15% to 15% at all quality control levels. For plasma, tissue homogenate and Caco-2 cells, no obvious matrix effect was found, and the average recoveries were all above 75%. Thus, the method demonstrated excellent accuracy, precision and robustness for high throughput applications, and was then successfully applied to the studies of absorption in rat plasma, distribution in rat tissues and intracellular uptake characteristics in Caco-2 cells for pidotimod. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. A Curved, Elastostatic Boundary Element for Plane Anisotropic Structures

    NASA Technical Reports Server (NTRS)

    Smeltzer, Stanley S.; Klang, Eric C.

    2001-01-01

    The plane-stress equations of linear elasticity are used in conjunction with those of the boundary element method to develop a novel curved, quadratic boundary element applicable to structures composed of anisotropic materials in a state of plane stress or plane strain. The curved boundary element is developed to solve two-dimensional, elastostatic problems of arbitrary shape, connectivity, and material type. As a result of the anisotropy, complex variables are employed in the fundamental solution derivations for a concentrated unit-magnitude force in an infinite elastic anisotropic medium. Once known, the fundamental solutions are evaluated numerically by using the known displacement and traction boundary values in an integral formulation with Gaussian quadrature. All the integral equations of the boundary element method are evaluated using one of two methods: either regular Gaussian quadrature or a combination of regular and logarithmic Gaussian quadrature. The regular Gaussian quadrature is used to evaluate most of the integrals along the boundary, and the combined scheme is employed for integrals that are singular. Individual element contributions are assembled into the global matrices of the standard boundary element method, manipulated to form a system of linear equations, and the resulting system is solved. The interior displacements and stresses are found through a separate set of auxiliary equations that are derived using an Airy-type stress function in terms of complex variables. The capabilities and accuracy of this method are demonstrated for a laminated-composite plate with a central, elliptical cutout that is subjected to uniform tension along one of the straight edges of the plate. Comparison of the boundary element results for this problem with corresponding results from an analytical model show a difference of less than 1%.

  6. On the reduction of occultation light curves. [stellar occultations by planets

    NASA Technical Reports Server (NTRS)

    Wasserman, L.; Veverka, J.

    1973-01-01

    The two basic methods of reducing occultation light curves - curve fitting and inversion - are reviewed and compared. It is shown that the curve fitting methods have severe problems of nonuniqueness. In addition, in the case of occultation curves dominated by spikes, it is not clear that such solutions are meaningful. The inversion method does not suffer from these drawbacks. Methods of deriving temperature profiles from refractivity profiles are then examined. It is shown that, although the temperature profiles are sensitive to small errors in the refractivity profile, accurate temperatures can be obtained, particularly at the deeper levels of the atmosphere. The ambiguities that arise when the occultation curve straddles the turbopause are briefly discussed.

  7. Effect of IFN-gamma on the killing of S. aureus in human whole blood. Assessment of bacterial viability by CFU determination and by a new method using alamarBlue.

    PubMed

    DeForge, L E; Billeci, K L; Kramer, S M

    2000-11-01

    Given the increasing incidence of methicillin resistant Staphylococcus aureus (MRSA) and the recent emergence of MRSA with a reduced susceptibility to vancomycin, alternative approaches to the treatment of infection are of increasing relevance. The purpose of these studies was to evaluate the effect of IFN-gamma on the ability of white blood cells to kill S. aureus and to develop a simpler, higher throughput bacterial killing assay. Using a methicillin sensitive clinical isolate of S. aureus, a clinical isolate of MRSA, and a commercially available strain of MRSA, studies were conducted using a killing assay in which the bacteria were added directly into whole blood. The viability of the bacteria in samples harvested at various time points was then evaluated both by the classic CFU assay and by a new assay using alamarBlue. In the latter method, serially diluted samples and a standard curve containing known concentrations of bacteria were placed on 96-well plates, and alamarBlue was added. Fluorescence readings were taken, and the viability of the bacteria in the samples was calculated using the standard curve. The results of these studies demonstrated that the CFU and alamarBlue methods yielded equivalent detection of bacteria diluted in buffer. For samples incubated in whole blood, however, the alamarBlue method tended to yield lower viabilities than the CFU method due to the emergence of a slower growing subpopulation of S. aureus upon incubation in the blood matrix. A significant increase in bacterial killing was observed upon pretreatment of whole blood for 24 h with 5 or 25 ng/ml IFN-gamma. This increase in killing was detected equivalently by the CFU and alamarBlue methods. In summary, these studies describe a method that allows for the higher throughput analysis of the effects of immunomodulators on bacterial killing.

  8. A digital image-based method for determining of total acidity in red wines using acid-base titration without indicator.

    PubMed

    Tôrres, Adamastor Rodrigues; Lyra, Wellington da Silva; de Andrade, Stéfani Iury Evangelista; Andrade, Renato Allan Navarro; da Silva, Edvan Cirino; Araújo, Mário César Ugulino; Gaião, Edvaldo da Nóbrega

    2011-05-15

    This work proposes the use of digital image-based method for determination of total acidity in red wines by means of acid-base titration without using an external indicator or any pre-treatment of the sample. Digital images present the colour of the emergent radiation which is complementary to the radiation absorbed by anthocyanines present in wines. Anthocyanines change colour depending on the pH of the medium, and from the variation of colour in the images obtained during titration, the end point can be localized with accuracy and precision. RGB-based values were employed to build titration curves, and end points were localized by second derivative curves. The official method recommends potentiometric titration with a NaOH standard solution, and sample dilution until the pH reaches 8.2-8.4. In order to illustrate the feasibility of the proposed method, titrations of ten red wines were carried out. Results were compared with the reference method, and no statistically significant difference was observed between the results by applying the paired t-test at the 95% confidence level. The proposed method yielded more precise results than the official method. This is due to the trivariate nature of the measurements (RGB), associated with digital images. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Michael K.; Brown, Jason R.; Thornberg, Steven Michael

    HFE-7100 and FC-72 fluorinert are two fluids used during weapon component manufacturing. HFE-7100 is a solvent used in the cleaning of parts, and FC-72 is the blowing agent of a polymeric removable foam. The presence of either FC-72 or HFE-7100 gas in weapon components can provide valuable information as to the stability of the materials. Therefore, gas standards are needed so HFE-7100 and FC-72 gas concentrations can be accurately measured. There is no current established procedure for generating gas standards of either HFE-7100 or FC-72. This report outlines the development of a method to generate gas standards ranging in concentrationmore » from 0.1 ppm to 10% by volume. These standards were then run on a Jeol GC-Mate II mass spectrometer and analyzed to produce calibration curves. We present a manifold design that accurately generates gas standards of HFE-7100 and FC-72 and a procedure that allows the amount of each to be determined.« less

  10. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.

  11. Analysis of anthocyanins in commercial fruit juices by using nano-liquid chromatography-electrospray-mass spectrometry and high-performance liquid chromatography with UV-vis detector.

    PubMed

    Fanali, Chiara; Dugo, Laura; D'Orazio, Giovanni; Lirangi, Melania; Dachà, Marina; Dugo, Paola; Mondello, Luigi

    2011-01-01

    Nano-LC and conventional HPLC techniques were applied for the analysis of anthocyanins present in commercial fruit juices using a capillary column of 100 μm id and a 2.1 mm id narrow-bore C(18) column. Analytes were detected by UV-Vis at 518 nm and ESI-ion trap MS with HPLC and nano-LC, respectively. Commercial blueberry juice (14 anthocyanins detected) was used to optimize chromatographic separation of analytes and other analysis parameters. Qualitative identification of anthocyanins was performed by comparing the recorded mass spectral data with those of published papers. The use of the same mobile phase composition in both techniques revealed that the miniaturized method exhibited shorter analysis time and higher sensitivity than narrow-bore chromatography. Good intra-day and day-to-day precision of retention time was obtained in both methods with values of RSD less than 3.4 and 0.8% for nano-LC and HPLC, respectively. Quantitative analysis was performed by external standard curve calibration of cyanidin-3-O-glucoside standard. Calibration curves were linear in the concentration ranges studied, 0.1-50 and 6-50 μg/mL for HPLC-UV/Vis and nano-LC-MS, respectively. LOD and LOQ values were good for both methods. In addition to commercial blueberry juice, qualitative and quantitative analysis of other juices (e.g. raspberry, sweet cherry and pomegranate) was performed. The optimized nano-LC-MS method allowed an easy and selective identification and quantification of anthocyanins in commercial fruit juices; it offered good results, shorter analysis time and reduced mobile phase volume with respect to narrow-bore HPLC. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Computer-assisted assessment of ultrasound real-time elastography: initial experience in 145 breast lesions.

    PubMed

    Zhang, Xue; Xiao, Yang; Zeng, Jie; Qiu, Weibao; Qian, Ming; Wang, Congzhi; Zheng, Rongqin; Zheng, Hairong

    2014-01-01

    To develop and evaluate a computer-assisted method of quantifying five-point elasticity scoring system based on ultrasound real-time elastography (RTE), for classifying benign and malignant breast lesions, with pathologic results as the reference standard. Conventional ultrasonography (US) and RTE images of 145 breast lesions (67 malignant, 78 benign) were performed in this study. Each lesion was automatically contoured on the B-mode image by the level set method and mapped on the RTE image. The relative elasticity value of each pixel was reconstructed and classified into hard or soft by the fuzzy c-means clustering method. According to the hardness degree inside lesion and its surrounding tissue, the elasticity score of the RTE image was computed in an automatic way. Visual assessments of the radiologists were used for comparing the diagnostic performance. Histopathologic examination was used as the reference standard. The Student's t test and receiver operating characteristic (ROC) curve analysis were performed for statistical analysis. Considering score 4 or higher as test positive for malignancy, the diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were 93.8% (136/145), 92.5% (62/67), 94.9% (74/78), 93.9% (62/66), and 93.7% (74/79) for the computer-assisted scheme, and 89.7% (130/145), 85.1% (57/67), 93.6% (73/78), 92.0% (57/62), and 88.0% (73/83) for manual assessment. Area under ROC curve (Az value) for the proposed method was higher than the Az value for visual assessment (0.96 vs. 0.93). Computer-assisted quantification of classical five-point scoring system can significantly eliminate the interobserver variability and thereby improve the diagnostic confidence of classifying the breast lesions to avoid unnecessary biopsy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Effect of different breath alcohol concentrations on driving performance in horizontal curves.

    PubMed

    Zhang, Xingjian; Zhao, Xiaohua; Du, Hongji; Ma, Jianming; Rong, Jian

    2014-11-01

    Driving under the influence of alcohol on curved roadway segments has a higher risk than driving on straight segments. To explore the effect of different breath alcohol concentration (BrAC) levels on driving performance in roadway curves, a driving simulation experiment was designed to collect 25 participants' driving performance parameters (i.e., speed and lane position) under the influence of 4 BrAC levels (0.00%, 0.03%, 0.06% and 0.09%) on 6 types of roadway curves (3 radii×2 turning directions). Driving performance data for 22 participants were collected successfully. Then the average and standard deviation of the two parameters were analyzed, considering the entire curve and different sections of the curve, respectively. The results show that the speed throughout curves is higher when drinking and driving than during sober driving. The significant interaction between alcohol and radius exists in the middle and tangent segments after a curve exit, indicating that a small radius can reduce speed at high BrAC levels. The significant impairment of alcohol on the stability of speed occurs mainly in the curve section between the point of curve (PC) and point of tangent (PT), with no impairment noted in tangent sections. The stability of speed is significantly worsened at higher BrAC levels. Alcohol and radius have interactive effects on the standard deviation of speed in the entry segment of curves, indicating that the small radius amplifies the instability of speed at high BrAC levels. For lateral movement, drivers tend to travel on the right side of the lane when drinking and driving, mainly in the approach and middle segments of curves. Higher BrAC levels worsen the stability of lateral movement in every segment of the curve, regardless of its radius and turning direction. The results are expected to provide reference for detecting the drinking and driving state. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Preventing conflicts among bid curves used with transactive controllers in a market-based resource allocation system

    DOEpatents

    Fuller, Jason C.; Chassin, David P.; Pratt, Robert G.; Hauer, Matthew; Tuffner, Francis K.

    2017-03-07

    Disclosed herein are representative embodiments of methods, apparatus, and systems for distributing a resource (such as electricity) using a resource allocation system. One of the disclosed embodiments is a method for operating a transactive thermostatic controller configured to submit bids to a market-based resource allocation system. According to the exemplary method, a first bid curve is determined, the first bid curve indicating a first set of bid prices for corresponding temperatures and being associated with a cooling mode of operation for a heating and cooling system. A second bid curve is also determined, the second bid curve indicating a second set of bid prices for corresponding temperatures and being associated with a heating mode of operation for a heating and cooling system. In this embodiment, the first bid curve, the second bid curve, or both the first bid curve and the second bid curve are modified to prevent overlap of any portion of the first bid curve and the second bid curve.

  15. Extensions to decision curve analysis, a novel method for evaluating diagnostic tests, prediction models and molecular markers.

    PubMed

    Vickers, Andrew J; Cronin, Angel M; Elkin, Elena B; Gonen, Mithat

    2008-11-26

    Decision curve analysis is a novel method for evaluating diagnostic tests, prediction models and molecular markers. It combines the mathematical simplicity of accuracy measures, such as sensitivity and specificity, with the clinical applicability of decision analytic approaches. Most critically, decision curve analysis can be applied directly to a data set, and does not require the sort of external data on costs, benefits and preferences typically required by traditional decision analytic techniques. In this paper we present several extensions to decision curve analysis including correction for overfit, confidence intervals, application to censored data (including competing risk) and calculation of decision curves directly from predicted probabilities. All of these extensions are based on straightforward methods that have previously been described in the literature for application to analogous statistical techniques. Simulation studies showed that repeated 10-fold crossvalidation provided the best method for correcting a decision curve for overfit. The method for applying decision curves to censored data had little bias and coverage was excellent; for competing risk, decision curves were appropriately affected by the incidence of the competing risk and the association between the competing risk and the predictor of interest. Calculation of decision curves directly from predicted probabilities led to a smoothing of the decision curve. Decision curve analysis can be easily extended to many of the applications common to performance measures for prediction models. Software to implement decision curve analysis is provided.

  16. [Testing methods for the characterization of catheter balloons and lumina].

    PubMed

    Werner, C; Rössler, K; Deckert, F

    1995-10-01

    The present paper reports on the characterization of catheter balloons and lumina on the basis of such known parameters as residual volume, compliance, burst pressure and flow rate, with the aim of developing standards, test methods and testing equipment as well as standards. These are becoming ever more important with the coming into force of the EC directive on medical products [7] and the law governing medical products in Germany [13], which requires manufacturers to specify the properties of their products. Our testing concept is based on a commercially available machine that subjects materials to alternating extension and compression forces over the long-term, to which we added a special hydraulic module. Using the multimedia technology we achieved a real time superimposition of the volume-diameter curve on the balloon. The function of the testing device and method is demonstrated on dilatation catheters. Our initial results reveal compatibility with the requirements of the 1% accuracy class. Use of this methodology for comparative testing of catheters and quality evaluation is recommended.

  17. Determination of whey adulteration in milk powder by using laser induced breakdown spectroscopy.

    PubMed

    Bilge, Gonca; Sezer, Banu; Eseller, Kemal Efe; Berberoglu, Halil; Topcu, Ali; Boyaci, Ismail Hakki

    2016-12-01

    A rapid and in situ method has been developed to detect and quantify adulterated milk powder through adding whey powder by using laser induced breakdown spectroscopy (LIBS). The methodology is based on elemental composition differences between milk and whey products. Milk powder, sweet and acid whey powders were produced as standard samples, and milk powder was adulterated with whey powders. Based on LIBS spectra of standard samples and commercial products, species was identified using principle component analysis (PCA) method, and discrimination rate of milk and whey powders was found as 80.5%. Calibration curves were obtained with partial least squares regression (PLS). Correlation coefficient (R(2)) and limit of detection (LOD) values were 0.981 and 1.55% for adulteration with sweet whey powder, and 0.985 and 0.55% for adulteration with acid whey powder, respectively. The results were found to be consistent with the data from inductively coupled plasma - mass spectrometer (ICP-MS) method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Parameter identification of piezoelectric hysteresis model based on improved artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Zhou, Kexin; Zhang, Yeming

    2018-04-01

    The widely used Bouc-Wen hysteresis model can be utilized to accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is proposed in the method, which can help balance the local search ability and global exploitation capability. And the formula for the scout bees to search for the food source is modified to increase the convergence speed. Some experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agreed well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, and it can be seen that the search performance in convergence rate of the IABC algorithm is better than that of the standard PSO method.

  19. Liquid chromatography-tandem mass spectrometry method of loxoprofen in human plasma.

    PubMed

    Lee, Hye Won; Ji, Hye Young; Sohn, Dong Hwan; Kim, Se-Mi; Lee, Yong Bok; Lee, Hye Suk

    2009-07-01

    A rapid, sensitive and selective liquid chromatography-electrospray ionization mass spectrometric method for the determination of loxoprofen in human plasma was developed. Loxoprofen and ketoprofen (internal standard) were extracted from 20 microL of human plasma sample using ethyl acetate at acidic pH and analyzed on an Atlantis dC(18) column with the mobile phase of methanol:water (75:25, v/v). The analytes were quantified in the selected reaction monitoring mode. The standard curve was linear over the concentration range of 0.1-20 microg/mL with a lower limit of quantification of 0.1 microg/mL. The coefficient of variation and relative error for intra- and inter-assay at four quality control levels were 2.8-5.2 and 4.8-7.0%, respectively. The recoveries of loxoprofen and ketoprofen were 69.7 and 67.6%, respectively. The matrix effects for loxoprofen and ketoprofen were practically absent. This method was successfully applied to the pharmacokinetic study of loxoprofen in humans. (c) 2009 John Wiley & Sons, Ltd.

  20. Comparative study on the performance of textural image features for active contour segmentation.

    PubMed

    Moraru, Luminita; Moldovanu, Simona

    2012-07-01

    We present a computerized method for the semi-automatic detection of contours in ultrasound images. The novelty of our study is the introduction of a fast and efficient image function relating to parametric active contour models. This new function is a combination of the gray-level information and first-order statistical features, called standard deviation parameters. In a comprehensive study, the developed algorithm and the efficiency of segmentation were first tested for synthetic images. Tests were also performed on breast and liver ultrasound images. The proposed method was compared with the watershed approach to show its efficiency. The performance of the segmentation was estimated using the area error rate. Using the standard deviation textural feature and a 5×5 kernel, our curve evolution was able to produce results close to the minimal area error rate (namely 8.88% for breast images and 10.82% for liver images). The image resolution was evaluated using the contrast-to-gradient method. The experiments showed promising segmentation results.

  1. [High Precision Identification of Igneous Rock Lithology by Laser Induced Breakdown Spectroscopy].

    PubMed

    Wang, Chao; Zhang, Wei-gang; Yan, Zhi-quan

    2015-09-01

    In the field of petroleum exploration, lithology identification of finely cuttings sample, especially high precision identification of igneous rock with similar property, has become one of the geological problems. In order to solve this problem, a new method is proposed based on element analysis of Laser-Induced Breakdown Spectroscopy (LIBS) and Total Alkali versus Silica (TAS) diagram. Using independent LIBS system, factors influencing spectral signal, such as pulse energy, acquisition time delay, spectrum acquisition method and pre-ablation are researched through contrast experiments systematically. The best analysis conditions of igneous rock are determined: pulse energy is 50 mJ, acquisition time delay is 2 μs, the analysis result is integral average of 20 different points of sample's surface, and pre-ablation has been proved not suitable for igneous rock sample by experiment. The repeatability of spectral data is improved effectively. Characteristic lines of 7 elements (Na, Mg, Al, Si, K, Ca, Fe) commonly used for lithology identification of igneous rock are determined, and igneous rock samples of different lithology are analyzed and compared. Calibration curves of Na, K, Si are generated by using national standard series of rock samples, and all the linearly dependent coefficients are greater than 0.9. The accuracy of quantitative analysis is investigated by national standard samples. Element content of igneous rock is analyzed quantitatively by calibration curve, and its lithology is identified accurately by the method of TAS diagram, whose accuracy rate is 90.7%. The study indicates that LIBS can effectively achieve the high precision identification of the lithology of igneous rock.

  2. Can color-coded parametric maps improve dynamic enhancement pattern analysis in MR mammography?

    PubMed

    Baltzer, P A; Dietzel, M; Vag, T; Beger, S; Freiberg, C; Herzog, A B; Gajda, M; Camara, O; Kaiser, W A

    2010-03-01

    Post-contrast enhancement characteristics (PEC) are a major criterion for differential diagnosis in MR mammography (MRM). Manual placement of regions of interest (ROIs) to obtain time/signal intensity curves (TSIC) is the standard approach to assess dynamic enhancement data. Computers can automatically calculate the TSIC in every lesion voxel and combine this data to form one color-coded parametric map (CCPM). Thus, the TSIC of the whole lesion can be assessed. This investigation was conducted to compare the diagnostic accuracy (DA) of CCPM with TSIC for the assessment of PEC. 329 consecutive patients with 469 histologically verified lesions were examined. MRM was performed according to a standard protocol (1.5 T, 0.1 mmol/kgbw Gd-DTPA). ROIs were drawn manually within any lesion to calculate the TSIC. CCPMs were created in all patients using dedicated software (CAD Sciences). Both methods were rated by 2 observers in consensus on an ordinal scale. Receiver operating characteristics (ROC) analysis was used to compare both methods. The area under the curve (AUC) was significantly (p=0.026) higher for CCPM (0.829) than TSIC (0.749). The sensitivity was 88.5% (CCPM) vs. 82.8% (TSIC), whereas equal specificity levels were found (CCPM: 63.7%, TSIC: 63.0%). The color-coded parametric maps (CCPMs) showed a significantly higher DA compared to TSIC, in particular the sensitivity could be increased. Therefore, the CCPM method is a feasible approach to assessing dynamic data in MRM and condenses several imaging series into one parametric map. © Georg Thieme Verlag KG Stuttgart · New York.

  3. Optimal study design with identical power: an application of power equivalence to latent growth curve models.

    PubMed

    von Oertzen, Timo; Brandmaier, Andreas M

    2013-06-01

    Structural equation models have become a broadly applied data-analytic framework. Among them, latent growth curve models have become a standard method in longitudinal research. However, researchers often rely solely on rules of thumb about statistical power in their study designs. The theory of power equivalence provides an analytical answer to the question of how design factors, for example, the number of observed indicators and the number of time points assessed in repeated measures, trade off against each other while holding the power for likelihood-ratio tests on the latent structure constant. In this article, we present applications of power-equivalent transformations on a model with data from a previously published study on cognitive aging, and highlight consequences of participant attrition on power. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. Differentiation of minute virus of mice and mouse parvovirus by high resolution melting curve analysis.

    PubMed

    Rao, Dan; Wu, Miaoli; Wang, Jing; Yuan, Wen; Zhu, Yujun; Cong, Feng; Xu, Fengjiao; Lian, Yuexiao; Huang, Bihong; Wu, Qiwen; Chen, Meili; Zhang, Yu; Huang, Ren; Guo, Pengju

    2017-12-01

    Murine parvovirus is one of the most prevalent infectious pathogens in mouse colonies. A specific primer pair targeting the VP2 gene of minute virus of mice (MVM) and mouse parvovirus (MPV) was utilized for high resolution melting (HRM) analysis. The resulting melting curves could distinguish these two virus strains and there was no detectable amplification of the other mouse pathogens which included rat parvovirus (KRV), ectromelia virus (ECT), mouse adenovirus (MAD), mouse cytomegalovirus (MCMV), polyoma virus (Poly), Helicobactor hepaticus (H. hepaticus) and Salmonella typhimurium (S. typhimurium). The detection limit of the standard was 10 copies/μL. This study showed that the PCR-HRM assay could be an alternative useful method with high specificity and sensitivity for differentiating murine parvovirus strains MVM and MPV. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. The equation of state of n-pentane in the atomistic model TraPPE-EH

    NASA Astrophysics Data System (ADS)

    Valeev, B. U.; Pisarev, V. V.

    2018-01-01

    In this work, we study the vapor-liquid equilibrium in n-pentane. We use the TraPPE-EH (transferable potentials for phase equilibria-explicit hydrogen) forcefield, where each hydrogen and carbon atom is considered as independent center of force. The fluid behavior was investigated with different values of density and temperature by molecular dynamics method. The n-pentane evaporation curve was calculated in the temperature range of 290 to 390 K. The densities of the coexisting phases are also calculated. The compression curve at 370 K was calculated and isothermal bulk modulus was found. The simulated properties of n-pentane are in good agreement with data from a database of the National Institute of Standards and Technology, so the TraPPE-EH model can be recommended for simulations of hydrocarbons.

  6. Energy dispersive X-ray fluorescence (EDXRF) equipment calibration for multielement analysis of soil and rock samples

    NASA Astrophysics Data System (ADS)

    de Moraes, Alex Silva; Tech, Lohane; Melquíades, Fábio Luiz; Bastos, Rodrigo Oliveira

    2014-11-01

    Considering the importance to understand the behavior of the elements on different natural and/or anthropic processes, this study had as objective to verify the accuracy of a multielement analysis method for rocks characterization by using soil standards as calibration reference. An EDXRF equipment was used. The analyses were made on samples doped with known concentration of Mn, Zn, Rb, Sr and Zr, for the obtainment of the calibration curves, and on a certified rock sample to check the accuracy of the analytical curves. Then, a set of rock samples from Rio Bonito, located in Figueira city, Paraná State, Brazil, were analyzed. The concentration values obtained, in ppm, for Mn, Rb, Sr and Zr varied, respectively, from 175 to 1084, 7.4 to 268, 28 to 2247 and 15 to 761.

  7. Tympanometry in infants: a study of the sensitivity and specificity of 226-Hz and 1,000-Hz probe tones.

    PubMed

    Carmo, Michele Picanço; Costa, Nayara Thais de Oliveira; Momensohn-Santos, Teresa Maria

    2013-10-01

    Introduction For infants under 6 months, the literature recommends 1,000-Hz tympanometry, which has a greater sensitivity for the correct identification of middle ear disorders in this population. Objective To systematically analyze national and international publications found in electronic databases that used tympanometry with 226-Hz and 1,000-Hz probe tones. Data Synthesis Initially, we identified 36 articles in the SciELO database, 11 in the Latin American and Caribbean Literature on the Health Sciences (LILACS) database, 199 in MEDLINE, 0 in the Cochrane database, 16 in ISI Web of Knowledge, and 185 in the Scopus database. We excluded 433 articles because they did not fit the selection criteria, leaving 14 publications that were analyzed in their entirety. Conclusions The 1,000-Hz tone test has greater sensitivity and specificity for the correct identification of tympanometric curve changes. However, it is necessary to clarify the doubts that still exist regarding the use of this test frequency. Improved methods for rating curves, standardization of normality criteria, and the types of curves found in infants should be addressed.

  8. Tympanometry in Infants: A Study of the Sensitivity and Specificity of 226-Hz and 1,000-Hz Probe Tones

    PubMed Central

    Carmo, Michele Picanço; Costa, Nayara Thais de Oliveira; Momensohn-Santos, Teresa Maria

    2013-01-01

    Introduction For infants under 6 months, the literature recommends 1,000-Hz tympanometry, which has a greater sensitivity for the correct identification of middle ear disorders in this population. Objective To systematically analyze national and international publications found in electronic databases that used tympanometry with 226-Hz and 1,000-Hz probe tones. Data Synthesis Initially, we identified 36 articles in the SciELO database, 11 in the Latin American and Caribbean Literature on the Health Sciences (LILACS) database, 199 in MEDLINE, 0 in the Cochrane database, 16 in ISI Web of Knowledge, and 185 in the Scopus database. We excluded 433 articles because they did not fit the selection criteria, leaving 14 publications that were analyzed in their entirety. Conclusions The 1,000-Hz tone test has greater sensitivity and specificity for the correct identification of tympanometric curve changes. However, it is necessary to clarify the doubts that still exist regarding the use of this test frequency. Improved methods for rating curves, standardization of normality criteria, and the types of curves found in infants should be addressed. PMID:25992044

  9. [Dopplerography of the large hepatic veins in the diagnosis of tricuspid valve insufficiency].

    PubMed

    Korytnikov, K I; Martyniuk, A D; Pustovit, L K

    1991-01-01

    During pulse dopplerography of the large hepatic veins in patients with tricuspid valve failure, the differences in the shape of the spectrum of Doppler's frequencies were revealed as dependent on cardiac rhythm. In sinus rhythm, the curve of the systolic flow is recordable beneath the baseline, in atrial fibrillation, over the baseline. In scanning of the large hepatic veins in patients with tricuspid valve failure, the shape of the curves of the spectrum of Doppler's frequencies coincides with the shape of the curves of liver pulsation. Tricuspid valve failure in sinus rhythm leads to a decrease of the maximum velocity of the systolic flow in the hepatic veins. There is a close correlation between the maximum velocity of the systolic flow of tricuspid regurgitation and the maximum velocity of the systolic flow in the large hepatic veins. Pulse dopplerography of the large hepatic veins is a safe enough method of the diagnosis of tricuspid valve failure and can be used in difficult cases when analysing the tricuspid blood flow from standard projections.

  10. Central Engine Memory of Gamma-Ray Bursts and Soft Gamma-Ray Repeaters

    NASA Astrophysics Data System (ADS)

    Zhang, Bin-Bin; Zhang, Bing; Castro-Tirado, Alberto J.

    2016-04-01

    Gamma-ray bursts (GRBs) are bursts of γ-rays generated from relativistic jets launched from catastrophic events such as massive star core collapse or binary compact star coalescence. Previous studies suggested that GRB emission is erratic, with no noticeable memory in the central engine. Here we report a discovery that similar light curve patterns exist within individual bursts for at least some GRBs. Applying the Dynamic Time Warping method, we show that similarity of light curve patterns between pulses of a single burst or between the light curves of a GRB and its X-ray flare can be identified. This suggests that the central engine of at least some GRBs carries “memory” of its activities. We also show that the same technique can identify memory-like emission episodes in the flaring emission in soft gamma-ray repeaters (SGRs), which are believed to be Galactic, highly magnetized neutron stars named magnetars. Such a phenomenon challenges the standard black hole central engine models for GRBs, and suggest a common physical mechanism behind GRBs and SGRs, which points toward a magnetar central engine of GRBs.

  11. Percentile Curves for Anthropometric Measures for Canadian Children and Youth

    PubMed Central

    Kuhle, Stefan; Maguire, Bryan; Ata, Nicole; Hamilton, David

    2015-01-01

    Body mass index (BMI) is commonly used to assess a child's weight status but it does not provide information about the distribution of body fat. Since the disease risks associated with obesity are related to the amount and distribution of body fat, measures that assess visceral or subcutaneous fat, such as waist circumference (WC), waist-to-height ratio (WHtR), or skinfolds thickness may be more suitable. The objective of this study was to develop percentile curves for BMI, WC, WHtR, and sum of 5 skinfolds (SF5) in a representative sample of Canadian children and youth. The analysis used data from 4115 children and adolescents between 6 and 19 years of age that participated in the Canadian Health Measures Survey Cycles 1 (2007/2009) and 2 (2009/2011). BMI, WC, WHtR, and SF5 were measured using standardized procedures. Age- and sex-specific centiles were calculated using the LMS method and the percentiles that intersect the adult cutpoints for BMI, WC, and WHtR at age 18 years were determined. Percentile curves for all measures showed an upward shift compared to curves from the pre-obesity epidemic era. The adult cutoffs for overweight and obesity corresponded to the 72nd and 91st percentile, respectively, for both sexes. The current study has presented for the first time percentile curves for BMI, WC, WHtR, and SF5 in a representative sample of Canadian children and youth. The percentile curves presented are meant to be descriptive rather than prescriptive as associations with cardiovascular disease markers or outcomes were not assessed. PMID:26176769

  12. Percentile Curves for Anthropometric Measures for Canadian Children and Youth.

    PubMed

    Kuhle, Stefan; Maguire, Bryan; Ata, Nicole; Hamilton, David

    2015-01-01

    Body mass index (BMI) is commonly used to assess a child's weight status but it does not provide information about the distribution of body fat. Since the disease risks associated with obesity are related to the amount and distribution of body fat, measures that assess visceral or subcutaneous fat, such as waist circumference (WC), waist-to-height ratio (WHtR), or skinfolds thickness may be more suitable. The objective of this study was to develop percentile curves for BMI, WC, WHtR, and sum of 5 skinfolds (SF5) in a representative sample of Canadian children and youth. The analysis used data from 4115 children and adolescents between 6 and 19 years of age that participated in the Canadian Health Measures Survey Cycles 1 (2007/2009) and 2 (2009/2011). BMI, WC, WHtR, and SF5 were measured using standardized procedures. Age- and sex-specific centiles were calculated using the LMS method and the percentiles that intersect the adult cutpoints for BMI, WC, and WHtR at age 18 years were determined. Percentile curves for all measures showed an upward shift compared to curves from the pre-obesity epidemic era. The adult cutoffs for overweight and obesity corresponded to the 72nd and 91st percentile, respectively, for both sexes. The current study has presented for the first time percentile curves for BMI, WC, WHtR, and SF5 in a representative sample of Canadian children and youth. The percentile curves presented are meant to be descriptive rather than prescriptive as associations with cardiovascular disease markers or outcomes were not assessed.

  13. soilphysics: An R package to determine soil preconsolidation pressure

    NASA Astrophysics Data System (ADS)

    da Silva, Anderson Rodrigo; de Lima, Renato Paiva

    2015-11-01

    Preconsolidation pressure is a parameter obtained from the soil compression curve and has been used as an indicator of load-bearing capacity of soil, as well as to characterize the impacts suffered by the use of machines. Despite its importance in soil physics, there is a few software or computational routines to support its determination. In this paper we present a computational package in R language, the package soilphysics, which contains implementations of the main methods for determining preconsolidation pressure, such as the method of Casagrande, Pacheco Silva, regression methods and the method of the virgin compression line intercept. There is still a consensus that Casagrande is the standard method, although the method of Pacheco Silva has shown similar values. The method of the virgin compression line intercept can be used when trying to be more conservative on the value (smaller) of preconsolidation pressure. Furthermore, Casagrande could be replaced by a regression method when the compression curve is obtained from saturated soils. The theory behind each method is presented and the algorithms are thoroughly described. We also give some support on how to use the R functions. Examples are used to illustrate the capabilities of the package, and the results are briefly discussed. The latter were validated using a recently published VBA. With soilphysics, the user has all the graphical and statistical power of R to determine preconsolidation pressure using different methods. The package is distribution free (under the GPL-2|3) and is currently available from the Comprehensive R Archive Network.

  14. Towards standardized assessment of endoscope optical performance: geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua

    2013-12-01

    Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices that are quantitative, objective and well-validated. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.

  15. Comparison of Psychophysical and Physical Measurements of Real Ear to Coupler Differences.

    PubMed

    Koning, Raphael; Wouters, Jan; Francart, Tom

    2015-01-01

    The purpose of the study is to compare real ear to coupler difference (RECD) curves based on physical and psychophysical measures. For the physically measured RECD, the RECD was measured with real ear and coupler measurements for the ear simulator and HA1- and HA2 2-cc couplers. The psychophysically measured RECDs were derived from audiogram measures. RECDs were measured in 19 normally hearing subjects. The coupler measurement was done with the probe microphone and the coupler microphone itself. Psychophysically measured RECDs were derived for all subjects by measuring the audiogram in sound field and with an ER-3A insert phone. Reference data were obtained for the three coupler types. It was possible to derive the RECD curve with psychophysical methods. There was no overall statistical difference between the physically and psychophysically measured RECD curves for the HA2 2-cc coupler and the ear simulator. The standard deviation was, however, much higher for the psychophysically derived RECD, indicating that physically measured RECDs are more precise than psychophysically derived RECDs. For the physical RECD measurements, the coupler microphone should be used for the coupler measurement. Physically measured RECDs were validated on group level by the reliable derivation of the RECD curve from audiogram measures.

  16. Electron spin resonance spectral study of PVC and XLPE insulation materials and their life time analysis.

    PubMed

    Morsy, M A; Shwehdi, M H

    2006-03-01

    Electron spin resonance (ESR) study is carried out to characterize thermal endurance of insulating materials used in power cable industry. The presented work provides ESR investigation and evaluation of widely used cable insulation materials, namely polyvinyl chloride (PVC) and cross-linked polyethylene (XLPE). The results confirm the fact that PVC is rapidly degrades than XLPE. The study also indicates that colorants and cable's manufacturing processes enhance the thermal resistance of the PVC. It also verifies the powerfulness and the importance of the ESR-testing of insulation materials compared to other tests assumed by International Electrotechnical Commission (IEC) Standard 216-procedure, e.g. weight loss (WL), electric strength (ES) or tensile strength (TS). The estimated thermal endurance parameters by ESR-method show that the other standard methods overestimate these parameters and produce less accurate thermal life time curves of cable insulation materials.

  17. High-performance liquid chromatography-electrospray ionization mass spectrometry determination of sodium ferulate in human plasma.

    PubMed

    Yang, Cheng; Tian, Yuan; Zhang, Zunjian; Xu, Fengguo; Chen, Yun

    2007-02-19

    A selective and sensitive high-performance liquid chromatography-electrospray ionization mass spectrometry method has been developed for the determination of sodium ferulate in human plasma. The sample preparation was a liquid-liquid extraction and chromatographic separation was achieved with an Agilent ZORBAX SB-C(18) (3.5 microm, 100 mm x 3.0 mm) column, using a mobile phase of methanol-0.05% acetic acid 40:60 (v/v). Standard curves were linear (r(2)=0.9982) over the concentration range of 0.007-4.63 nM/ml and had acceptable accuracy and precision. The within- and between-batch precisions were within 12% relative standard deviation. The lower limit of quantification (LLOQ) was 0.007 nM/ml. The validated HPLC-ESI-MS method has been used successfully to study sodium ferulate pharmacokinetics, bioavailability and bioequivalence in 20 healthy volunteers.

  18. Dose calculation accuracy of different image value to density tables for cone-beam CT planning in head & neck and pelvic localizations.

    PubMed

    Barateau, Anaïs; Garlopeau, Christopher; Cugny, Audrey; De Figueiredo, Bénédicte Henriques; Dupin, Charles; Caron, Jérôme; Antoine, Mikaël

    2015-03-01

    We aimed to identify the most accurate combination of phantom and protocol for image value to density table (IVDT) on volume-modulated arc therapy (VMAT) dose calculation based on kV-Cone-beam CT imaging, for head and neck (H&N) and pelvic localizations. Three phantoms (Catphan(®)600, CIRS(®)062M (inner phantom for head and outer phantom for body), and TomoTherapy(®) "Cheese" phantom) were used to create IVDT curves of CBCT systems with two different CBCT protocols (Standard-dose Head and Standard Pelvis). Hounsfield Unit (HU) time stability and repeatability for a single On-Board-Imager (OBI) and compatibility of two distinct devices were assessed with Catphan(®)600. Images from the anthropomorphic phantom CIRS ATOM(®) for both CT and CBCT modalities were used for VMAT dose calculation from different IVDT curves. Dosimetric indices from CT and CBCT imaging were compared. IVDT curves from CBCT images were highly different depending on phantom used (up to 1000 HU for high densities) and protocol applied (up to 200 HU for high densities). HU time stability was verified over seven weeks. A maximum difference of 3% on the dose calculation indices studied was found between CT and CBCT VMAT dose calculation across the two localizations using appropriate IVDT curves. One IVDT curve per localization can be established with a bi-monthly verification of IVDT-CBCT. The IVDT-CBCTCIRS-Head phantom with the Standard-dose Head protocol was the most accurate combination for dose calculation on H&N CBCT images. For pelvic localizations, the IVDT-CBCTCheese established with the Standard Pelvis protocol provided the best accuracy. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Design of ultraviolet wavelength and standard solution concentrations in relative response factors for simultaneous determination of multi-components with single reference standard in herbal medicines.

    PubMed

    Yang, Ting-Wen; Zhao, Chao; Fan, Yong; Qi, Lian-Wen; Li, Ping

    2015-10-10

    Single standard to determine multi-components (SSDMC) is a practical pattern for quality evaluation of herbal medicines (HMs). However, it remains challenging because of potential inconsistency of relative response factors (RRF) on different instruments. In this work, the effects of two key roles, i.e., ultraviolet (UV) wavelength and standard solution concentrations, on reproducibility of RRF were investigated. The effect of UV wavelength on reproducibility of RRF was studied by plotting the relationship of the peak area ratios (internal standard vs analyte) to wavelengths. The preferable wavelength should be set at the flat parts of the curve. Optimized 300 nm produced a 0.38% RSD for emodin/emodin-8-O-β-D-glucopyranoside on five instruments, much lower than 2.80% obtained from the maximum wavelength at 290 nm. Next, the effects of standard solution concentrations of emodin on its response factor (RF) were investigated. For one single point method, low concentration less than 49 b/k resulted in significant variations in RF. For emodin, when the concentration is higher than 7.00 μg mL(-1), a low standard deviation (SD) value at 0.13 was obtained, while lower than 7.00 μg mL(-1), a high SD at 3.71 was obtained. The developed SSDMC method was then applied to determination of target components in 10 Polygonum cuspidatum samples and showed comparable accuracy to conventional calibration methods with deviation less than 1%. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Next-Generation Intensity-Duration-Frequency Curves for Hydrologic Design in Snow-Dominated Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hongxiang; Sun, Ning; Wigmosta, Mark

    There is a renewed focus on the design of infrastructure resilient to extreme hydrometeorological events. While precipitation-based intensity-duration-frequency (IDF) curves are commonly used as part of infrastructure design, a large percentage of peak runoff events in snow-dominated regions are caused by snowmelt, particularly during rain-on-snow (ROS) events. In these regions, precipitation-based IDF curves may lead to substantial over-/under-estimation of design basis events and subsequent over-/under-design of infrastructure. To overcome this deficiency, we proposed next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface. We compared NG-IDF curves to standard precipitation-based IDF curves for estimates of extreme eventsmore » at 376 Snowpack Telemetry (SNOTEL) stations across the western United States that each had at least 30 years of high-quality records. We found standard precipitation-based IDF curves at 45% of the stations were subject to under-design, many with significant under-estimation of 100-year extreme events, for which the precipitation-based IDF curves can underestimate water potentially available for runoff by as much as 125% due to snowmelt and ROS events. The regions with the greatest potential for under-design were in the Pacific Northwest, the Sierra Nevada Mountains, and the Middle and Southern Rockies. We also found the potential for over-design at 20% of the stations, primarily in the Middle Rockies and Arizona mountains. These results demonstrate the need to consider snow processes in the development of IDF curves, and they suggest use of the more robust NG-IDF curves for hydrologic design in snow-dominated environments.« less

  1. An integrated strategy for the quantitative analysis of endogenous proteins: A case of gender-dependent expression of P450 enzymes in rat liver microsome.

    PubMed

    Shao, Yuhao; Yin, Xiaoxi; Kang, Dian; Shen, Boyu; Zhu, Zhangpei; Li, Xinuo; Li, Haofeng; Xie, Lin; Wang, Guangji; Liang, Yan

    2017-08-01

    Liquid chromatography mass spectrometry based methods provide powerful tools for protein analysis. Cytochromes P450 (CYPs), the most important drug metabolic enzymes, always exhibit sex-dependent expression patterns and metabolic activities. To date, analysis of CYPs based on mass spectrometry is still facing critical technical challenges due to the complexity and diversity of CYP isoforms besides lack of corresponding standards. The aim of present work consisted in developing a label-free qualitative and quantitative strategy for endogenous proteins, and then applying to the gender-difference study for CYPs in rat liver microsomes (RLMs). Initially, trypsin digested RLM specimens were analyzed by the nanoLC-LTQ-Orbitrap MS/MS. Skyline, an open source and freely available software for targeted proteomics research, was then used to screen the main CYP isoforms in RLMs under a series of criteria automatically, and a total of 40 and 39 CYP isoforms were identified in male and female RLMs, respectively. More importantly, a robust quantitative method in a tandem mass spectrometry-multiple reaction mode (MS/MS-MRM) was built and optimized under the help of Skyline, and successfully applied into the CYP gender difference study in RLMs. In this process, a simple and accurate approach named 'Standard Curve Slope" (SCS) was established based on the difference of standard curve slopes of CYPs between female and male RLMs in order to assess the gender difference of CYPs in RLMs. This presently developed methodology and approach could be widely used in the protein regulation study during drug pharmacological mechanism research. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Can we improve the clinical utility of respiratory rate as a monitored vital sign?

    PubMed

    Chen, Liangyou; Reisner, Andrew T; Gribok, Andrei; McKenna, Thomas M; Reifman, Jaques

    2009-06-01

    Respiratory rate (RR) is a basic vital sign, measured and monitored throughout a wide spectrum of health care settings, although RR is historically difficult to measure in a reliable fashion. We explore an automated method that computes RR only during intervals of clean, regular, and consistent respiration and investigate its diagnostic use in a retrospective analysis of prehospital trauma casualties. At least 5 s of basic vital signs, including heart rate, RR, and systolic, diastolic, and mean arterial blood pressures, were continuously collected from 326 spontaneously breathing trauma casualties during helicopter transport to a level I trauma center. "Reliable" RR data were identified retrospectively using automated algorithms. The diagnostic performances of reliable versus standard RR were evaluated by calculation of the receiver operating characteristic curves using the maximum-likelihood method and comparison of the summary areas under the receiver operating characteristic curves (AUCs). Respiratory rate shows significant data-reliability differences. For identifying prehospital casualties who subsequently receive a respiratory intervention (hospital intubation or tube thoracotomy), standard RR yields an AUC of 0.59 (95% confidence interval, 0.48-0.69), whereas reliable RR yields an AUC of 0.67 (0.57-0.77), P < 0.05. For identifying casualties subsequently diagnosed with a major hemorrhagic injury and requiring blood transfusion, standard RR yields an AUC of 0.60 (0.49-0.70), whereas reliable RR yields 0.77 (0.67-0.85), P < 0.001. Reliable RR, as determined by an automated algorithm, is a useful parameter for the diagnosis of respiratory pathology and major hemorrhage in a trauma population. It may be a useful input to a wide variety of clinical scores and automated decision-support algorithms.

  3. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    PubMed

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematic model: the total quantum statistical moment standard similarity (TQSMSS) on the base of the original total quantum statistical moment model and to illustrate the application of the model to medical theoretical research. The model was established combined with the statistical moment principle and the normal distribution probability density function properties, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical method for them, and by analysis of chromatographic fingerprint for various extracts with different solubility parameter solvents dissolving the Buyanghanwu-decoction extract. The established model consists of four mainly parameters: (1) total quantum statistical moment similarity as ST, an overlapped area by two normal distribution probability density curves in conversion of the two TQSM parameters; (2) total variability as DT, a confidence limit of standard normal accumulation probability which is equal to the absolute difference value between the two normal accumulation probabilities within integration of their curve nodical; (3) total variable probability as 1-Ss, standard normal distribution probability within interval of D(T); (4) total variable probability (1-beta)alpha and (5) stable confident probability beta(1-alpha): the correct probability to make positive and negative conclusions under confident coefficient alpha. With the model, we had analyzed the TQSMS similarities of pharmacokinetics of three ingredients in Buyanghuanwu decoction and of three data analytical methods for them were at range of 0.3852-0.9875 that illuminated different pharmacokinetic behaviors of each other; and the TQSMS similarities (ST) of chromatographic fingerprint for various extracts with different solubility parameter solvents dissolving Buyanghuanwu-decoction-extract were at range of 0.6842-0.999 2 that showed different constituents with various solvent extracts. The TQSMSS can characterize the sample similarity, by which we can quantitate the correct probability with the test of power under to make positive and negative conclusions no matter the samples come from same population under confident coefficient a or not, by which we can realize an analysis at both macroscopic and microcosmic levels, as an important similar analytical method for medical theoretical research.

  4. [Determination of 27 elements in Maca nationality's medicine by microwave digestion ICP-MS].

    PubMed

    Yu, Gui-fang; Zhong, Hai-jie; Hu, Jun-hua; Wang, Jing; Huang, Wen-zhe; Wang, Zhen-zhong; Xiao, Wei

    2015-12-01

    An analysis method has been established to test 27 elements (Li, Be, B, Mg, Al, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, As, Sr, Mo, Cd, Sn, Sb, Ba, La, Hg, Pb, Bi) in Maca nationality's medicine with microwave digestion-ICP-MS. Sample solutions were analyzed by ICP-MS after microwave digestion, and the contents of elements were calculated according to their calibration curves, and internal standard method was adopted to reduce matrix effect and other interference effects. The experimental results showed that the linear relations of all the elements were very good; the correlation coefficient (r) was 0.9994-1.0000 (Hg was 0.9982) ; the limits of detection were 0.003-2.662 microg x L(-1); the relative standard deviations for all elements of reproducibility were lower than 5% (except the individual elements); the recovery rate were 78.5%-123.7% with RSD lower than 5% ( except the individual elements). The analytical results of standard material showed acceptable agreement with the certified values. This method was applicable to determinate the contents of multi-elements in Maca which had a high sensitivity, good specificity and good repeatability, and provide basis for the quality control of Maca.

  5. Statistical Analyses for Probabilistic Assessments of the Reactor Pressure Vessel Structural Integrity: Building a Master Curve on an Extract of the 'Euro' Fracture Toughness Dataset, Controlling Statistical Uncertainty for Both Mono-Temperature and multi-temperature tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josse, Florent; Lefebvre, Yannick; Todeschini, Patrick

    2006-07-01

    Assessing the structural integrity of a nuclear Reactor Pressure Vessel (RPV) subjected to pressurized-thermal-shock (PTS) transients is extremely important to safety. In addition to conventional deterministic calculations to confirm RPV integrity, Electricite de France (EDF) carries out probabilistic analyses. Probabilistic analyses are interesting because some key variables, albeit conventionally taken at conservative values, can be modeled more accurately through statistical variability. One variable which significantly affects RPV structural integrity assessment is cleavage fracture initiation toughness. The reference fracture toughness method currently in use at EDF is the RCCM and ASME Code lower-bound K{sub IC} based on the indexing parameter RT{submore » NDT}. However, in order to quantify the toughness scatter for probabilistic analyses, the master curve method is being analyzed at present. Furthermore, the master curve method is a direct means of evaluating fracture toughness based on K{sub JC} data. In the framework of the master curve investigation undertaken by EDF, this article deals with the following two statistical items: building a master curve from an extract of a fracture toughness dataset (from the European project 'Unified Reference Fracture Toughness Design curves for RPV Steels') and controlling statistical uncertainty for both mono-temperature and multi-temperature tests. Concerning the first point, master curve temperature dependence is empirical in nature. To determine the 'original' master curve, Wallin postulated that a unified description of fracture toughness temperature dependence for ferritic steels is possible, and used a large number of data corresponding to nuclear-grade pressure vessel steels and welds. Our working hypothesis is that some ferritic steels may behave in slightly different ways. Therefore we focused exclusively on the basic french reactor vessel metal of types A508 Class 3 and A 533 grade B Class 1, taking the sampling level and direction into account as well as the test specimen type. As for the second point, the emphasis is placed on the uncertainties in applying the master curve approach. For a toughness dataset based on different specimens of a single product, application of the master curve methodology requires the statistical estimation of one parameter: the reference temperature T{sub 0}. Because of the limited number of specimens, estimation of this temperature is uncertain. The ASTM standard provides a rough evaluation of this statistical uncertainty through an approximate confidence interval. In this paper, a thorough study is carried out to build more meaningful confidence intervals (for both mono-temperature and multi-temperature tests). These results ensure better control over uncertainty, and allow rigorous analysis of the impact of its influencing factors: the number of specimens and the temperatures at which they have been tested. (authors)« less

  6. A curved surface micro-moiré method and its application in evaluating curved surface residual stress

    NASA Astrophysics Data System (ADS)

    Zhang, Hongye; Wu, Chenlong; Liu, Zhanwei; Xie, Huimin

    2014-09-01

    The moiré method is typically applied to the measurement of deformations of a flat surface while, for a curved surface, this method is rarely used other than for projection moiré or moiré interferometry. Here, a novel colour charge-coupled device (CCD) micro-moiré method has been developed, based on which a curved surface micro-moiré (CSMM) method is proposed with a colour CCD and optical microscope (OM). In the CSMM method, no additional reference grating is needed as a Bayer colour filter array (CFA) installed on the OM in front of the colour CCD image sensor performs this role. Micro-moiré fringes with high contrast are directly observed with the OM through the Bayer CFA under the special condition of observing a curved specimen grating. The principle of the CSMM method based on a colour CCD micro-moiré method and its application range and error analysis are all described in detail. In an experiment, the curved surface residual stress near a welded seam on a stainless steel tube was investigated using the CSMM method.

  7. Accelerated pharmacokinetic map determination for dynamic contrast enhanced MRI using frequency-domain based Tofts model.

    PubMed

    Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam

    2014-01-01

    Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI. The process of curve fitting employed in the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high resolution scans. Current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed-up the process of curve-fitting in order to obtain the pharmacokinetic parameters. The results obtained show that using the frequency domain approach, the process of curve fitting is computationally more efficient compared to the time-domain approach.

  8. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and compare statistical power of the proposed tests under different trend curves data, three statistical tests were proposed. For large sample size with independent normal assumption among strata and across consecutive time points, the Z and Chi-square test statistics were developed, which are functions of outcome estimates and the standard errors at each of the study time points for the two strata. For small sample size with independent normal assumption, the F-test statistic was generated, which is a function of sample size of the two strata and estimated parameters across study period. If two trend curves are approximately parallel, the power of Z-test is consistently higher than that of both Chi-square and F-test. If two trend curves cross at low interaction, the power of Z-test is higher than or equal to the power of both Chi-square and F-test; however, at high interaction, the powers of Chi-square and F-test are higher than that of Z-test. The measurement of interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.

  9. What can Numerical Computation do for the History of Science? (Study of an Orbit Drawn by Newton on a Letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Stuchi, Teresa; Cardozo Dias, P.

    2013-05-01

    Abstract (2,250 Maximum Characters): On a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically: Hooke’s method is a second order symplectic area preserving algorithm, and the method of curvature is a first order algorithm without special features; then we integrate the hamiltonian equations. Integration by the method of curvature can also be done exploring geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first order method. A fourth order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.

  10. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Cardozo Dias, Penha Maria; Stuchi, T. J.

    2013-11-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.

  11. Lateral Penumbra Modelling Based Leaf End Shape Optimization for Multileaf Collimator in Radiotherapy.

    PubMed

    Zhou, Dong; Zhang, Hui; Ye, Peiqing

    2016-01-01

    Lateral penumbra of multileaf collimator plays an important role in radiotherapy treatment planning. Growing evidence has revealed that, for a single-focused multileaf collimator, lateral penumbra width is leaf position dependent and largely attributed to the leaf end shape. In our study, an analytical method for leaf end induced lateral penumbra modelling is formulated using Tangent Secant Theory. Compared with Monte Carlo simulation and ray tracing algorithm, our model serves well the purpose of cost-efficient penumbra evaluation. Leaf ends represented in parametric forms of circular arc, elliptical arc, Bézier curve, and B-spline are implemented. With biobjective function of penumbra mean and variance introduced, genetic algorithm is carried out for approximating the Pareto frontier. Results show that for circular arc leaf end objective function is convex and convergence to optimal solution is guaranteed using gradient based iterative method. It is found that optimal leaf end in the shape of Bézier curve achieves minimal standard deviation, while using B-spline minimum of penumbra mean is obtained. For treatment modalities in clinical application, optimized leaf ends are in close agreement with actual shapes. Taken together, the method that we propose can provide insight into leaf end shape design of multileaf collimator.

  12. Comparison between audio-only and audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy

    PubMed Central

    Yu, Jesang; Choi, Ji Hoon; Ma, Sun Young; Jeung, Tae Sig

    2015-01-01

    Purpose To compare audio-only biofeedback to conventional audiovisual biofeedback for regulating patients' respiration during four-dimensional radiotherapy, limiting damage to healthy surrounding tissues caused by organ movement. Materials and Methods Six healthy volunteers were assisted by audiovisual or audio-only biofeedback systems to regulate their respirations. Volunteers breathed through a mask developed for this study by following computer-generated guiding curves displayed on a screen, combined with instructional sounds. They then performed breathing following instructional sounds only. The guiding signals and the volunteers' respiratory signals were logged at 20 samples per second. Results The standard deviations between the guiding and respiratory curves for the audiovisual and audio-only biofeedback systems were 21.55% and 23.19%, respectively; the average correlation coefficients were 0.9778 and 0.9756, respectively. The regularities between audiovisual and audio-only biofeedback for six volunteers' respirations were same statistically from the paired t-test. Conclusion The difference between the audiovisual and audio-only biofeedback methods was not significant. Audio-only biofeedback has many advantages, as patients do not require a mask and can quickly adapt to this method in the clinic. PMID:26484309

  13. An improved method to determine neuromuscular properties using force laws - From single muscle to applications in human movements.

    PubMed

    Siebert, T; Sust, M; Thaller, S; Tilp, M; Wagner, H

    2007-04-01

    We evaluate an improved method for individually determining neuromuscular properties in vivo. The method is based on Hill's equation used as a force law combined with Newton's equation of motion. To ensure the range of validity of Hill's equation, we first perform detailed investigations on in vitro single muscles. The force-velocity relation determined with the model coincides well with results obtained by standard methods (r=.99) above 20% of the isometric force. In addition, the model-predicted force curves during work loop contractions very well agree with measurements (mean difference: 2-3%). Subsequently, we deduce theoretically under which conditions it is possible to combine several muscles of the human body to model muscles. This leads to a model equation for human leg extension movements containing parameters for the muscle properties and for the activation. To numerically determine these invariant neuromuscular properties we devise an experimental method based on concentric and isometric leg extensions. With this method we determine individual muscle parameters from experiments such that the simulated curves agree well with experiments (r=.99). A reliability test with 12 participants revealed correlations r=.72-.91 for the neuromuscular parameters (p<.01). Predictions of similar movements under different conditions show mean errors of about 5%. In addition, we present applications in sports practise and theory.

  14. Spectrophotometric and Reversed-Phase High-Performance Liquid Chromatographic Method for the Determination of Doxophylline in Pharmaceutical Formulations

    PubMed Central

    Joshi, HR; Patel, AH; Captain, AD

    2010-01-01

    Two methods are described for determination of Doxophylline in a solid dosage form. The first method was based on ultraviolet (UV)-spectrophotometric determination of the drug. It involves absorbance measurement at 274 nm (λmax of Doxophylline) in 0.1 N hydrochloric acid. The calibration curve was linear, with the correlation coefficient between 0.99 and 1.0 over a concentration range of 0.20–30 mg/ml for the drug. The second method was based on high-performance liquid chromatography (HPLC) separation of the drug in reverse-phase mode using the Hypersil ODS C18 column (250 × 4.6 mm, 5 mm). The mobile phase constituted of buffer acetonitrile (80:20) and pH adjusted to 3.0, with dilute orthophosphoric acid delivered at a flow rate 1.0 ml/min. Detection was performed at 210 nm. Separation was completed within 7 min. The calibration curve was linear, with the correlation coefficient between 0.99 and 1.0 over a concentration range of 0.165–30 mg/ml for the drug. The relative standard deviation was found to be <2.0% for the UV-spectrophotometry and HPLC methods. Both these methods have been successively applied to the solid dosage pharmaceutical formulation, and were fully validated according to ICH guidelines. PMID:21042488

  15. Diagnostic performance of different measurement methods for lung nodule enhancement at quantitative contrast-enhanced computed tomography

    NASA Astrophysics Data System (ADS)

    Wormanns, Dag; Klotz, Ernst; Dregger, Uwe; Beyer, Florian; Heindel, Walter

    2004-05-01

    Lack of angiogenesis virtually excludes malignancy of a pulmonary nodule; assessment with quantitative contrast-enhanced CT (QECT) requires a reliable enhancement measurement technique. Diagnostic performance of different measurement methods in the distinction between malignant and benign nodules was evaluated. QECT (unenhanced scan and 4 post-contrast scans) was performed in 48 pulmonary nodules (12 malignant, 12 benign, 24 indeterminate). Nodule enhancement was the difference between the highest nodule density at any post-contrast scan and the unenhanced scan. Enhancement was determined with: A) the standard 2D method; B) a 3D method consisting of segmentation, removal of peripheral structures and density averaging. Enhancement curves were evaluated for their plausibility using a predefined set of criteria. Sensitivity and specificity were 100% and 33% for the 2D method resp. 92% and 55% for the 3D method using a threshold of 20 HU. One malignant nodule did not show significant enhancement with method B due to adjacent atelectasis which disappeared within the few minutes of the QECT examination. Better discrimination between benign and malignant lesions was achieved with a slightly higher threshold than proposed in the literature. Application of plausibility criteria to the enhancement curves rendered less plausibility faults with the 3D method. A new 3D method for analysis of QECT scans yielded less artefacts and better specificity in the discrimination between benign and malignant pulmonary nodules when using an appropriate enhancement threshold. Nevertheless, QECT results must be interpreted with care.

  16. The determination of calcium in phosphate, carbonate, and silicate rocks by flame photometer

    USGS Publications Warehouse

    Kramer, Henry

    1956-01-01

    A method has been developed for the determination of calcium in phosphate, carbonate, and silicate rocks using the Beckman flame photometer, with photomultiplier attachement. The sample is dissolved in hydrofluoric, nitric, and perchloric acids, the hydrofluoric and nitric acids are expelled, a radiation buffer consisting of aluminum, magnesium, iron, sodium, potassium, phosphoric acid, and nitric acid is added, and the solution is atomized in an oxy-hydrogen flame with an instrument setting of 554 mµ. Measurements are made by comparison against calcium standards, prepared in the same manner, in the 0 to 50 ppm range. The suppression of calcium emission by aluminum and phosphate was overcome by the addition of a large excess of magnesium. This addition almost completely restores the standard curve obtained from a solution of calcium nitrate. Interference was noted when the iron concentration in the aspirated solution (including the iron from the buffer) exceeded 100 ppm iron. Other common rock-forming elements did not interfere. The results obtained by this procedure are within ± 2 percent of the calcium oxide values obtained by other methods in the range 1 to 95 percent calcium oxide. In the 0 to 1 percent calcium oxide range the method compares favorably with standard methods.

  17. [Quantitative study of diesel/CNG buses exhaust particulate size distribution in a road tunnel].

    PubMed

    Zhu, Chun; Zhang, Xu

    2010-10-01

    Vehicle emission is one of main sources of fine/ultra-fine particles in many cities. This study firstly presents daily mean particle size distributions of mixed diesel/CNG buses traffic flow by 4 days consecutive real world measurement in an Australia road tunnel. Emission factors (EFs) of particle size distribution of diesel buses and CNG buses are obtained by MLR methods, particle distributions of diesel buses and CNG buses are observed as single accumulation mode and nuclei-mode separately. Particle size distributions of mixed traffic flow are decomposed by two log-normal fitting curves for each 30 min interval mean scans, the degrees of fitting between combined fitting curves and corresponding in-situ scans for totally 90 fitting scans are from 0.972 to 0.998. Finally particle size distributions of diesel buses and CNG buses are quantified by statistical whisker-box charts. For log-normal particle size distribution of diesel buses, accumulation mode diameters are 74.5-86.5 nm, geometric standard deviations are 1.88-2.05. As to log-normal particle size distribution of CNG buses, nuclei-mode diameters are 19.9-22.9 nm, geometric standard deviations are 1.27-1.3.

  18. Variance of transionospheric VLF wave power absorption

    NASA Astrophysics Data System (ADS)

    Tao, X.; Bortnik, J.; Friedrich, M.

    2010-07-01

    To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method using the standard ionospheric model IRI and in situ observational data. We first verify the classic absorption curves of Helliwell's using our full wave code. Then we show that the IRI model gives overall smaller wave absorption compared with Helliwell's. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that are more representative of the D region of middle- and low-latitude VLF wave transmitters and show that the average quiet time wave absorption is smaller than that of Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.

  19. [Experimental studies of using real-time fluorescence quantitative PCR and RT-PCR to detect E6 and E7 genes of human papillomavirus type 16 in cervical carcinoma cell lines].

    PubMed

    Chen, Yue-yue; Peng, Zhi-lan; Liu, Shan-ling; He, Bing; Hu, Min

    2007-06-01

    To establish a method of using real-time fluorescence quantitative PCR and RT-PCR to detect the E6 and E7 genes of human papillomavirus type 16 (HPV-16). Plasmids containing HPV-16 E6 or E7 were used to generate absolute standard curves. Three cervical carcinoma cell lines CaSki, SiHa and HeLa were tested by real-time fluorescence quantitative PCR and RT-PCR analyses for the expressions of HPV-16 E6 and E7. The correlation coefficients of standard curves were larger than 0. 99, and the PCR efficiency was more than 90%. The relative levels of HPV-16 E6 and E7 DNA and RNA were CaSki>SiHa>HeLa cell. HPV-16 E6 and E7 quantum by real-time fluorescence quantitative PCR and RT-PCR analyses may serve as a reliable and sensitive tool. This study provides the possibility of further researches on the relationship between HPV-16 E6 or E7 copy number and cervical carcinoma.

  20. Evaluation of the impact of matrix effect on quantification of pesticides in foods by gas chromatography-mass spectrometry using isotope-labeled internal standards.

    PubMed

    Yarita, Takashi; Aoyagi, Yoshie; Otake, Takamitsu

    2015-05-29

    The impact of the matrix effect in GC-MS quantification of pesticides in food using the corresponding isotope-labeled internal standards was evaluated. A spike-and-recovery study of nine target pesticides was first conducted using paste samples of corn, green soybean, carrot, and pumpkin. The observed analytical values using isotope-labeled internal standards were more accurate for most target pesticides than that obtained using the external calibration method, but were still biased from the spiked concentrations when a matrix-free calibration solution was used for calibration. The respective calibration curves for each target pesticide were also prepared using matrix-free calibration solutions and matrix-matched calibration solutions with blank soybean extract. The intensity ratio of the peaks of most target pesticides to that of the corresponding isotope-labeled internal standards was influenced by the presence of the matrix in the calibration solution; therefore, the observed slope varied. The ratio was also influenced by the type of injection method (splitless or on-column). These results indicated that matrix-matching of the calibration solution is required for very accurate quantification, even if isotope-labeled internal standards were used for calibration. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. High-speed liquid chromatographic determination of pilocarpine in pharmaceutical dosage forms.

    PubMed

    Khalil, S K

    1977-11-01

    A specific method for the direct determination of pilocarpine in aqueous pharmaceuticals in the presence of decomposition products, methylcellulose, and other ingredients usually present in pharmaceuticals is described. The method involves separation by high-speed liquid chromatography using, in series, octadecylsilane bonded to silica and cyanopropylsilane bonded to silica columns and a tetrahydrofuran-pH 9.2 borate buffer (3:7) eluant. Quantitation is achieved by monitoring the absorbance of the effluent at 254 nm and using a pyridine internal standard and a calibration curve prepared from known concentrations of pilocarpine nitrate. The reproducibility of the retention time and peak area was better than 2.0%.

  2. Automated solid-phase extraction of herbicides from water for gas chromatographic-mass spectrometric analysis

    USGS Publications Warehouse

    Meyer, M.T.; Mills, M.S.; Thurman, E.M.

    1993-01-01

    An automated solid-phase extraction (SPE) method was developed for the pre-concentration of chloroacetanilide and triazine herbicides, and two triazine metabolites from 100-ml water samples. Breakthrough experiments for the C18 SPE cartridge show that the two triazine metabolites are not fully retained and that increasing flow-rate decreases their retention. Standard curve r2 values of 0.998-1.000 for each compound were consistently obtained and a quantitation level of 0.05 ??g/l was achieved for each compound tested. More than 10,000 surface and ground water samples have been analyzed by this method.

  3. [A method for inducing standardized spiral fractures of the tibia in the animal experiment].

    PubMed

    Seibold, R; Schlegel, U; Cordey, J

    1995-07-01

    A method for the deliberate weakening of cortical bone has been developed on the basis of an already established technique for creating butterfly fractures. It enables one to create the same type of fracture, i.e., a spiral fracture, every time. The fracturing process is recorded as a force-strain curve. The results of the in vitro investigations form a basis for the preparation of experimental tasks aimed at demonstrating internal fixation techniques and their influence on the vascularity of the bone in simulated fractures. Animal protection law lays down that this fracture model must not fail in animal experiments.

  4. Direct Determination of ECD in ECD Kit: A Solid Sample Quantitation Method for Active Pharmaceutical Ingredient in Drug Product

    PubMed Central

    Chao, Ming-Yu; Liu, Kung-Tien; Hsia, Yi-Chih; Liao, Mei-Hsiu; Shen, Lie-Hang

    2011-01-01

    Technetium-99m ethyl cysteinate dimer (Tc-99m-ECD) is an essential imaging agent used in evaluating the regional cerebral blood flow in patients with cerebrovascular diseases. Determination of active pharmaceutical ingredient, that is, L-Cysteine, N, N′-1,2-ethanediylbis-, diethyl ester, dihydrochloride (ECD) in ECD Kit is a relevant requirement for the pharmaceutical quality control in processes of mass fabrication. We here presented a direct solid sample determination method of ECD in ECD Kit without sample dissolution to avoid the rapid degradation of ECD. An elemental analyzer equipped with a nondispersive infrared detector and a calibration curve of coal standard was used for the quantitation of sulfur in ECD Kit. No significant matrix effect was found. The peak area of coal standard against the amount of sulfur was linear over the range of 0.03–0.10 mg, with a correlation coefficient (r) of 0.9993. Method validation parameters were achieved to demonstrate the potential of this method. PMID:21687539

  5. The Development and Application of a Method to Quantify the Quality of Cryoprotectant Conditions Using Standard Area Detector X-Ray Images

    NASA Technical Reports Server (NTRS)

    McFerrin, Michael; Snell, Edward; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    An X-ray based method for determining cryoprotectant concentrations necessary to protect solutions from crystalline ice formation was developed. X-ray images from a CCD area detector were integrated as powder patterns and quantified by determining the standard deviation of the slope of the normalized intensity curve in the resolution range where ice rings are known to occur. The method was tested determining the concentrations of glycerol, PEG400, ethylene glycol and 1,2-propanediol necessary to form an amorphous glass at 1OOK with each of the 98 crystallization solutions of Crystal Screens I and II (Hampton Research, Laguna Hills, California, USA). For conditions that required glycerol concentrations of 35% or above cryoprotectant conditions using 2,3-butanediol were determined. The method proved to be remarkably accurate. The results build on the work of [Garman and Mitchell] and extend the number, of suitable starting conditions to alternative cryoprotectants. In particular, 1,2-propanediol has emerged as a particularly good additive for glass formation upon flash cooling.

  6. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstation to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as reference standard for the UM and PIOPED set, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful for FP reduction.

  7. Titrimetric and photometric methods for determination of hypochlorite in commercial bleaches.

    PubMed

    Jonnalagadda, Sreekanth B; Gengan, Prabhashini

    2010-01-01

    Two methods, simple titration and photometric methods for determination of hypochlorite are developed, based its reaction with hydrogen peroxide and titration of the residual peroxide by acidic permanganate. In the titration method, the residual hydrogen peroxide is estimated by titration with standard permanganate solution to estimate the hypochlorite concentration. The photometric method is devised to measure the concentration of remaining permanganate, after the reaction with residual hydrogen peroxide. It employs 4 ranges of calibration curves to enable the determination of hypochlorite accurately. The new photometric method measures hypochlorite in the range 1.90 x 10(-3) to 1.90 x 10(-2) M, with high accuracy and with low variance. The concentrations of hypochlorite in diverse commercial bleach samples and in seawater which is enriched with hypochlorite were estimated using the proposed method and compared with the arsenite method. The statistical analysis validates the superiority of the proposed method.

  8. Computation of geometric representation of novel spectrophotometric methods used for the analysis of minor components in pharmaceutical preparations.

    PubMed

    Lotfy, Hayam M; Saleh, Sarah S; Hassan, Nagiba Y; Salem, Hesham

    2015-01-01

    Novel spectrophotometric methods were applied for the determination of the minor component tetryzoline HCl (TZH) in its ternary mixture with ofloxacin (OFX) and prednisolone acetate (PA) in the ratio of (1:5:7.5), and in its binary mixture with sodium cromoglicate (SCG) in the ratio of (1:80). The novel spectrophotometric methods determined the minor component (TZH) successfully in the two selected mixtures by computing the geometrical relationship of either standard addition or subtraction. The novel spectrophotometric methods are: geometrical amplitude modulation (GAM), geometrical induced amplitude modulation (GIAM), ratio H-point standard addition method (RHPSAM) and compensated area under the curve (CAUC). The proposed methods were successfully applied for the determination of the minor component TZH below its concentration range. The methods were validated as per ICH guidelines where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones where no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be alternative to HPLC techniques in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Study on phase noise induced by 1/f noise of the modulator drive circuit in high-sensitivity fiber optic gyroscope

    NASA Astrophysics Data System (ADS)

    Teng, Fei; Jin, Jing; Li, Yong; Zhang, Chunxi

    2018-05-01

    The contribution of modulator drive circuit noise as a 1/f noise source to the output noise of the high-sensitivity interferometric fiber optic gyroscope (IFOG) was studied here. A noise model of closed-loop IFOG was built. By applying the simulated 1/f noise sequence into the model, a gyroscope output data series was acquired, and the corresponding power spectrum density (PSD) and the Allan variance curve were calculated to analyze the noise characteristic. The PSD curve was in the spectral shape of 1/f, which verifies that the modulator drive circuit induced a low frequency 1/f phase noise into the gyroscope. The random walk coefficient (RWC), a standard metric to characterize the noise performance of the IFOG, was calculated according to the Allan variance curve. Using an operational amplifier with an input 1/f noise of 520 nV/√Hz at 1 Hz, the RWC induced by this 1/f noise was 2 × 10-4°/√h, which accounts for 63% of the total RWC. To verify the correctness of the noise model we proposed, a high-sensitivity gyroscope prototype was built and tested. The simulated Allan variance curve gave a good rendition of the prototype actual measured curve. The error percentage between the simulated RWC and the measured value was less than 13%. According to the model, a noise reduction method is proposed and the effectiveness is verified by the experiment.

  10. Uranium, radium and thorium in soils with high-resolution gamma spectroscopy, MCNP-generated efficiencies, and VRF non-linear full-spectrum nuclide shape fitting

    NASA Astrophysics Data System (ADS)

    Metzger, Robert; Riper, Kenneth Van; Lasche, George

    2017-09-01

    A new method for analysis of uranium and radium in soils by gamma spectroscopy has been developed using VRF ("Visual RobFit") which, unlike traditional peak-search techniques, fits full-spectrum nuclide shapes with non-linear least-squares minimization of the chi-squared statistic. Gamma efficiency curves were developed for a 500 mL Marinelli beaker geometry as a function of soil density using MCNP. Collected spectra were then analyzed using the MCNP-generated efficiency curves and VRF to deconvolute the 90 keV peak complex of uranium and obtain 238U and 235U activities. 226Ra activity was determined either from the radon daughters if the equilibrium status is known, or directly from the deconvoluted 186 keV line. 228Ra values were determined from the 228Ac daughter activity. The method was validated by analysis of radium, thorium and uranium soil standards and by inter-comparison with other methods for radium in soils. The method allows for a rapid determination of whether a sample has been impacted by a man-made activity by comparison of the uranium and radium concentrations to those that would be expected from a natural equilibrium state.

  11. Measuring nanoscale viscoelastic parameters of cells directly from AFM force-displacement curves.

    PubMed

    Efremov, Yuri M; Wang, Wen-Horng; Hardy, Shana D; Geahlen, Robert L; Raman, Arvind

    2017-05-08

    Force-displacement (F-Z) curves are the most commonly used Atomic Force Microscopy (AFM) mode to measure the local, nanoscale elastic properties of soft materials like living cells. Yet a theoretical framework has been lacking that allows the post-processing of F-Z data to extract their viscoelastic constitutive parameters. Here, we propose a new method to extract nanoscale viscoelastic properties of soft samples like living cells and hydrogels directly from conventional AFM F-Z experiments, thereby creating a common platform for the analysis of cell elastic and viscoelastic properties with arbitrary linear constitutive relations. The method based on the elastic-viscoelastic correspondence principle was validated using finite element (FE) simulations and by comparison with the existed AFM techniques on living cells and hydrogels. The method also allows a discrimination of which viscoelastic relaxation model, for example, standard linear solid (SLS) or power-law rheology (PLR), best suits the experimental data. The method was used to extract the viscoelastic properties of benign and cancerous cell lines (NIH 3T3 fibroblasts, NMuMG epithelial, MDA-MB-231 and MCF-7 breast cancer cells). Finally, we studied the changes in viscoelastic properties related to tumorigenesis including TGF-β induced epithelial-to-mesenchymal transition on NMuMG cells and Syk expression induced phenotype changes in MDA-MB-231 cells.

  12. Curved sensors for compact high-resolution wide-field designs: prototype demonstration and optical characterization

    NASA Astrophysics Data System (ADS)

    Chambion, Bertrand; Gaschet, Christophe; Behaghel, Thibault; Vandeneynde, Aurélie; Caplet, Stéphane; Gétin, Stéphane; Henry, David; Hugot, Emmanuel; Jahn, Wilfried; Lombardo, Simona; Ferrari, Marc

    2018-02-01

    Over the recent years, a huge interest has grown for curved electronics, particularly for opto-electronics systems. Curved sensors help the correction of off-axis aberrations, such as Petzval Field Curvature, astigmatism, and bring significant optical and size benefits for imaging systems. In this paper, we first describe advantages of curved sensor and associated packaging process applied on a 1/1.8'' format 1.3Mpx global shutter CMOS sensor (Teledyne EV76C560) into its standard ceramic package with a spherical radius of curvature Rc=65mm and 55mm. The mechanical limits of the die are discussed (Finite Element Modelling and experimental), and electro-optical performances are investigated. Then, based on the monocentric optical architecture, we proposed a new design, compact and with a high resolution, developed specifically for a curved image sensor including optical optimization, tolerances, assembly and optical tests. Finally, a functional prototype is presented through a benchmark approach and compared to an existing standard optical system with same performances and a x2.5 reduction of length. The finality of this work was a functional prototype demonstration on the CEA-LETI during Photonics West 2018 conference. All these experiments and optical results demonstrate the feasibility and high performances of systems with curved sensors.

  13. EPA Method 1615. Measurement of Enterovirus and Norovirus Occurrence in Water by Culture and RT-qPCR. Part III. Virus Detection by RT-qPCR

    PubMed Central

    Fout, G. Shay; Cashdollar, Jennifer L.; Griffin, Shannon M.; Brinkman, Nichole E.; Varughese, Eunice A.; Parshionikar, Sandhya U.

    2016-01-01

    EPA Method 1615 measures enteroviruses and noroviruses present in environmental and drinking waters. This method was developed with the goal of having a standardized method for use in multiple analytical laboratories during monitoring period 3 of the Unregulated Contaminant Monitoring Rule. Herein we present the protocol for extraction of viral ribonucleic acid (RNA) from water sample concentrates and for quantitatively measuring enterovirus and norovirus concentrations using reverse transcription-quantitative PCR (RT-qPCR). Virus concentrations for the molecular assay are calculated in terms of genomic copies of viral RNA per liter based upon a standard curve. The method uses a number of quality controls to increase data quality and to reduce interlaboratory and intralaboratory variation. The method has been evaluated by examining virus recovery from ground and reagent grade waters seeded with poliovirus type 3 and murine norovirus as a surrogate for human noroviruses. Mean poliovirus recoveries were 20% in groundwaters and 44% in reagent grade water. Mean murine norovirus recoveries with the RT-qPCR assay were 30% in groundwaters and 4% in reagent grade water. PMID:26862985

  14. Quantification and Qualification of Bacteria Trapped in Chewed Gum

    PubMed Central

    Wessel, Stefan W.; van der Mei, Henny C.; Morando, David; Slomp, Anje M.; van de Belt-Gritter, Betsy; Maitra, Amarnath; Busscher, Henk J.

    2015-01-01

    Chewing of gum contributes to the maintenance of oral health. Many oral diseases, including caries and periodontal disease, are caused by bacteria. However, it is unknown whether chewing of gum can remove bacteria from the oral cavity. Here, we hypothesize that chewing of gum can trap bacteria and remove them from the oral cavity. To test this hypothesis, we developed two methods to quantify numbers of bacteria trapped in chewed gum. In the first method, known numbers of bacteria were finger-chewed into gum and chewed gums were molded to standard dimensions, sonicated and plated to determine numbers of colony-forming-units incorporated, yielding calibration curves of colony-forming-units retrieved versus finger-chewed in. In a second method, calibration curves were created by finger-chewing known numbers of bacteria into gum and subsequently dissolving the gum in a mixture of chloroform and tris-ethylenediaminetetraacetic-acid (TE)-buffer. The TE-buffer was analyzed using quantitative Polymerase-Chain-Reaction (qPCR), yielding calibration curves of total numbers of bacteria versus finger-chewed in. Next, five volunteers were requested to chew gum up to 10 min after which numbers of colony-forming-units and total numbers of bacteria trapped in chewed gum were determined using the above methods. The qPCR method, involving both dead and live bacteria yielded higher numbers of retrieved bacteria than plating, involving only viable bacteria. Numbers of trapped bacteria were maximal during initial chewing after which a slow decrease over time up to 10 min was observed. Around 108 bacteria were detected per gum piece depending on the method and gum considered. The number of species trapped in chewed gum increased with chewing time. Trapped bacteria were clearly visualized in chewed gum using scanning-electron-microscopy. Summarizing, using novel methods to quantify and qualify oral bacteria trapped in chewed gum, the hypothesis is confirmed that chewing of gum can trap and remove bacteria from the oral cavity. PMID:25602256

  15. The Separation of Between-person and Within-person Components of Individual Change Over Time: A Latent Curve Model with Structured Residuals

    PubMed Central

    Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.

    2014-01-01

    Objective Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions We obtain a specificity of results from the proposed analytic strategy that are not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research. PMID:24364798

  16. Thermoluminescence kinetic features of Lithium Iodide (LiI) single crystal grown by vertical Bridgman technique

    NASA Astrophysics Data System (ADS)

    Daniel, D. Joseph; Kim, H. J.; Kim, Sunghwan; Khan, Sajid

    2017-08-01

    Single crystal of pure Lithium Iodide (LiI) has been grown from melt by using the vertical Bridgman technique. Thermoluminescence (TL) Measurements were carried out at 1 K/s following X-ray irradiation. The TL glow curve consists of a dominant peak at (peak-maximum Tm) 393 K and one low temperature peak of weaker intensity at 343 K. The order of kinetics (b), activation energy (E), and the frequency factor (S) for a prominent TL glow peak observed around 393 K for LiI crystals are reported for the first time. The peak shape analysis of the glow peak indicates the kinetics to be of the first order. The value of E is calculated using various standard methods such as initial rise (IR), whole glow peak (WGP), peak shape (PS), computerized glow curve deconvolution (CGCD) and Variable Heating rate (VHR) methods. An average value of 1.06 eV is obtained in this case. In order to validate the obtained parameters, numerically integrated TL glow curve has been generated using experimentally determined kinetic parameters. The effective atomic number (Zeff) for this material was determined and found to be 52. X-ray induced emission spectra of pure LiI single crystal are studied at room temperature and it is found that the sample exhibit sharp emission at 457 nm and broad emission at 650 nm.

  17. Frontal crashworthiness characterisation of a vehicle segment using curve comparison metrics.

    PubMed

    Abellán-López, D; Sánchez-Lozano, M; Martínez-Sáez, L

    2018-08-01

    The objective of this work is to propose a methodology for the characterization of the collision behaviour and crashworthiness of a segment of vehicles, by selecting the vehicle that best represents that group. It would be useful in the development of deformable barriers, to be used in crash tests intended to study vehicle compatibility, as well as for the definition of the representative standard pulses used in numerical simulations or component testing. The characterisation and selection of representative vehicles is based on the objective comparison of the occupant compartment acceleration and barrier force pulses, obtained during crash tests, by using appropriate comparison metrics. This method is complemented with another one, based exclusively on the comparison of a few characteristic parameters of crash behaviour obtained from the previous curves. The method has been applied to different vehicle groups, using test data from a sample of vehicles. During this application, the performance of several metrics usually employed in the validation of simulation models have been analysed, and the most efficient ones have been selected for the task. The methodology finally defined is useful for vehicle segment characterization, taken into account aspects of crash behaviour related to the shape of the curves, difficult to represent by simple numerical parameters, and it may be tuned in future works when applied to larger and different samples. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Spectrophotometric method for quantitative determination of total anthocyanins and quality characteristics of roselle (Hibiscus sabdariffa).

    PubMed

    Sukwattanasinit, Tasamaporn; Burana-Osot, Jankana; Sotanaphun, Uthai

    2007-11-01

    A simple, rapid and cost-saving method for the determination of total anthocyanins in roselle has been developed. The method was based on pH-differential spectrophotometry. The calibration curve of the major anthocyanin in roselle, delphinidin 3-sambubioside (Dp-3-sam), was constructed by using methyl orange and their correlation factor. The reliability of this developed method was comparable to the direct method using standard Dp-3-sam and the HPLC method. Quality characteristics of roselle produced in Thailand were also reported. Its physical quality met the required specifications. The overall chemical quality was herein surveyed for the first time and it was found to be the important parameter corresponded to the commercial grading of roselle. Total contents of anthocyanins and phenolics were proportional to the antiradical capacity.

  19. An alternative approach to the Army Physical Fitness Test two-mile run using critical velocity and isoperformance curves.

    PubMed

    Fukuda, David H; Smith, Abbie E; Kendall, Kristina L; Cramer, Joel T; Stout, Jeffrey R

    2012-02-01

    The purpose of this study was to evaluate the use of critical velocity (CV) and isoperformance curves as an alternative to the Army Physical Fitness Test (APFT) two-mile running test. Seventy-eight men and women (mean +/- SE; age: 22.1 +/- 0.34 years; VO2(MAX): 46.1 +/- 0.82 mL/kg/min) volunteered to participate in this study. A VO2(MAX) test and four treadmill running bouts to exhaustion at varying intensities were completed. The relationship between total distance and time-to-exhaustion was tracked for each exhaustive run to determine CV and anaerobic running capacity. A VO2(MAX) prediction equation (Coefficient of determination: 0.805; Standard error of the estimate: 3.2377 mL/kg/min) was developed using these variables. Isoperformance curves were constructed for men and women to correspond with two-mile run times from APFT standards. Individual CV and anaerobic running capacity values were plotted and compared to isoperformance curves for APFT 2-mile run scores. Fifty-four individuals were determined to receive passing scores from this assessment. Physiological profiles identified from this procedure can be used to assess specific aerobic or anaerobic training needs. With the use of time-to-exhaustion as opposed to a time-trial format used in the two-mile run test, pacing strategies may be limited. The combination of variables from the CV test and isoperformance curves provides an alternative to standardized time-trial testing.

  20. Mercury porosimetry for comparing piece-wise hydraulic properties with full range pore characteristics of soil aggregates and porous rocks

    NASA Astrophysics Data System (ADS)

    Turturro, Antonietta Celeste; Caputo, Maria C.; Gerke, Horst H.

    2017-04-01

    Unsaturated hydraulic properties are essential in the modeling of water and solute movement in the vadose zone. Since standard hydraulic techniques are limited to specific moisture ranges, maybe affected by air entrapment, wettability problems, limitations due to water vapor pressure, and are depending on the initial saturation, the continuous maximal drying curves of the complete hydraulic functions can mostly not reflect the basic pore size distribution. The aim of this work was to compare the water retention curves of soil aggregates and porous rocks with their porosity characteristics. Soil aggregates of Haplic Luvisols from Loess L (Hneveceves, Czech Republic) and glacial Till T (Holzendorf, Germany) and two lithotypes of porous rock C (Canosa) and M (Massafra), Italy, were analyzed using, suction table, evaporation, psychrometry methods, and the adopted Quasi-Steady Centrifuge method for determination of unsaturated hydraulic conductivity. These various water-based techniques were applied to determine the piece-wise retention and the unsaturated hydraulic conductivity functions in the range of pore water saturations. The pore-size distribution was determined with the mercury intrusion porosimetry (MIP). MIP results allowed assessing the volumetric mercury content at applied pressures up to 420000 kPa. Greater intrusion and porosity values were found for the porous rocks than for the soil aggregates. Except for the aggregate samples from glacial till, maximum liquid contents were always smaller than porosity. Multimodal porosities and retention curves were observed for both porous rocks and aggregate soils. Two pore-size peaks with pore diameters of 0.135 and 27.5 µm, 1.847 and 19.7 µm, and 0.75 and 232 µm were found for C, M and T, respectively, while three peaks of 0.005, 0.392 and 222 µm were identified for L. The MIP data allowed describing the retention curve in the entire mercury saturation range as compared to water retention curves that required combining several methods for limited suction ranges. Although the soil aggregates and porous rocks differed in pore geometries and pore size distributions, MIP provided additional information for characterizing the relation between pore structure and hydraulic properties for both.

Top