Sample records for standard addition method

  1. Net analyte signal standard addition method (NASSAM) as a novel spectrofluorimetric and spectrophotometric technique for simultaneous determination, application to assay of melatonin and pyridoxine

    NASA Astrophysics Data System (ADS)

    Asadpour-Zeynali, Karim; Bastami, Mohammad

    2010-02-01

    In this work a new modification of the standard addition method, called the "net analyte signal standard addition method (NASSAM)", is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept, and can be applied to the determination of an analyte in the presence of known interferents. Unlike the H-point standard addition method, the accuracy of its predictions does not depend on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
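
    The core of the net analyte signal idea can be sketched numerically as follows (a minimal, hypothetical Python illustration; the spectra, concentrations and noise level are invented and are not taken from this record): each measured spectrum is projected onto the subspace orthogonal to the known interferent spectrum, and the norm of that projection, which responds only to the analyte, is regressed against the added standard concentration and extrapolated to zero.

      import numpy as np

      # Hypothetical pure spectra over 50 wavelengths (illustration only)
      wl = np.linspace(0, 1, 50)
      s_analyte = np.exp(-((wl - 0.4) ** 2) / 0.01)      # analyte band
      s_interf  = np.exp(-((wl - 0.6) ** 2) / 0.02)      # known interferent band

      c0 = 2.0                                           # unknown analyte concentration to recover
      added = np.array([0.0, 1.0, 2.0, 3.0])             # standard additions
      rng = np.random.default_rng(0)
      spectra = [(c0 + a) * s_analyte + 1.5 * s_interf + rng.normal(0, 1e-3, wl.size)
                 for a in added]

      # Projector onto the space orthogonal to the interferent spectrum
      P = np.eye(wl.size) - np.outer(s_interf, s_interf) / (s_interf @ s_interf)

      # Net analyte signal of each spectrum: norm of its interferent-free part
      nas = np.array([np.linalg.norm(P @ d) for d in spectra])

      slope, intercept = np.polyfit(added, nas, 1)
      print("estimated analyte concentration:", intercept / slope)   # ~2.0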

  2. Standard addition with internal standardisation as an alternative to using stable isotope labelled internal standards to correct for matrix effects-Comparison and validation using liquid chromatography-tandem mass spectrometric assay of vitamin D.

    PubMed

    Hewavitharana, Amitha K; Abu Kassim, Nur Sofiah; Shaw, Paul Nicholas

    2018-06-08

    With mass spectrometric detection in liquid chromatography, co-eluting impurities affect the analyte response through ion suppression/enhancement. Internal standard calibration, using a co-eluting stable isotope labelled analogue of each analyte as the internal standard, is the most appropriate technique available to correct for these matrix effects. However, this technique is not without drawbacks: a separate internal standard is required for each analyte, and the labelled compounds are expensive or require synthesising. The standard addition method is a well-established technique traditionally used to overcome matrix effects in atomic spectroscopy. This paper proposes the same for mass spectrometric detection, and demonstrates that the results are comparable to those of the internal standard method using labelled analogues, for a vitamin D assay. As the conventional standard addition procedure does not address procedural errors, we propose the inclusion of an additional internal standard (not co-eluting). Recoveries determined on human serum samples show that the proposed method of standard addition yields more accurate results than internal standardisation using stable isotope labelled analogues. The precision of the proposed method of standard addition is superior to that of the conventional standard addition method. Copyright © 2018 Elsevier B.V. All rights reserved.
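
    A minimal numeric sketch of the calculation this abstract implies (all peak areas, concentrations and units are invented, not the authors' data): each standard-addition level is first normalized to the response of a constant amount of a non-co-eluting internal standard, which corrects for procedural losses, and the analyte concentration is then read from the x-intercept of the normalized standard-addition line.

      import numpy as np

      # Hypothetical LC-MS/MS standard-addition data (illustration only)
      added   = np.array([0.0, 5.0, 10.0, 20.0])            # ng/mL of analyte added
      area_an = np.array([1.10e5, 1.62e5, 2.21e5, 3.28e5])  # analyte peak areas
      area_is = np.array([2.00e5, 1.95e5, 2.05e5, 1.98e5])  # internal standard areas (constant spike)

      ratio = area_an / area_is              # corrects for injection/recovery variation
      slope, intercept = np.polyfit(added, ratio, 1)

      # x-intercept of the standard-addition line = concentration in the sample
      print("analyte in sample: %.1f ng/mL" % (intercept / slope))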

  3. H-point standard additions method for simultaneous determination of sulfamethoxazole and trimethoprim in pharmaceutical formulations and biological fluids with simultaneous addition of two analytes

    NASA Astrophysics Data System (ADS)

    Givianrad, M. H.; Saber-Tehrani, M.; Aberoomand-Azar, P.; Mohagheghian, M.

    2011-03-01

    The applicability of the H-point standard additions method (HPSAM) to resolving the overlapping spectra of sulfamethoxazole and trimethoprim is verified by UV-vis spectrophotometry. The results show that the H-point standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. Applying the method showed that the two drugs could be determined simultaneously at sulfamethoxazole-to-trimethoprim concentration ratios varying from 1:18 to 16:1 in the mixed samples. The limits of detection were 0.58 and 0.37 μmol L-1 for sulfamethoxazole and trimethoprim, respectively. In addition, the mean calculated RSDs (%) in synthetic mixtures were 1.63 and 2.01 for SMX and TMP, respectively. The proposed method has been successfully applied to the simultaneous determination of sulfamethoxazole and trimethoprim in synthetic mixtures, pharmaceutical formulations and biological fluid samples.

  4. H-point standard additions method for simultaneous determination of sulfamethoxazole and trimethoprim in pharmaceutical formulations and biological fluids with simultaneous addition of two analytes.

    PubMed

    Givianrad, M H; Saber-Tehrani, M; Aberoomand-Azar, P; Mohagheghian, M

    2011-03-01

    The applicability of the H-point standard additions method (HPSAM) to resolving the overlapping spectra of sulfamethoxazole and trimethoprim is verified by UV-vis spectrophotometry. The results show that the H-point standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. Applying the method showed that the two drugs could be determined simultaneously at sulfamethoxazole-to-trimethoprim concentration ratios varying from 1:18 to 16:1 in the mixed samples. The limits of detection were 0.58 and 0.37 μmol L-1 for sulfamethoxazole and trimethoprim, respectively. In addition, the mean calculated RSDs (%) in synthetic mixtures were 1.63 and 2.01 for SMX and TMP, respectively. The proposed method has been successfully applied to the simultaneous determination of sulfamethoxazole and trimethoprim in synthetic mixtures, pharmaceutical formulations and biological fluid samples. Copyright © 2011 Elsevier B.V. All rights reserved.
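
    The H-point construction described in the two records above can be sketched with invented numbers (a generic HPSAM illustration, not the authors' data): standard additions of one analyte are followed at two wavelengths chosen so that the other species absorbs equally at both; the two straight lines then intersect at the H-point, whose abscissa equals minus the concentration of the added analyte in the sample and whose ordinate is the signal of the other species.

      import numpy as np

      # Hypothetical standard additions (µmol/L) and absorbances at two wavelengths
      c_add = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
      A1 = np.array([0.350, 0.430, 0.510, 0.590, 0.670])   # wavelength 1
      A2 = np.array([0.240, 0.276, 0.312, 0.348, 0.384])   # wavelength 2

      m1, b1 = np.polyfit(c_add, A1, 1)
      m2, b2 = np.polyfit(c_add, A2, 1)

      x_H = (b2 - b1) / (m1 - m2)          # abscissa of the intersection (H-point)
      A_H = m1 * x_H + b1                  # ordinate: signal of the second species

      print("analyte in sample: %.2f µmol/L" % (-x_H))        # 5.00
      print("co-analyte signal at the H-point: %.3f" % A_H)   # 0.150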

  5. Acid Rain Analysis by Standard Addition Titration.

    ERIC Educational Resources Information Center

    Ophardt, Charles E.

    1985-01-01

    The standard addition titration is a precise and rapid method for the determination of the acidity in rain or snow samples. The method requires use of a standard buret, a pH meter, and Gran's plot to determine the equivalence point. Experimental procedures used and typical results obtained are presented. (JN)
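
    One common way to carry out such a determination (sketched below with invented volumes and pH readings; the article's actual procedure and data are not reproduced here) is to add increments of a standard strong acid to the sample and linearize the readings with a Gran function, whose intercept gives the hydrogen-ion content of the original sample.

      import numpy as np

      # Hypothetical standard-addition acidity titration with Gran linearization
      V0 = 100.0                                   # sample volume, mL
      Ca = 1.0e-3                                  # standard strong acid, mol/L
      V  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # added volume, mL
      pH = np.array([4.40, 4.31, 4.23, 4.17, 4.11])

      # Gran function (activity ~ concentration): G = (V0 + V)*10**(-pH) ~ n0(H+) + Ca*V, in mmol
      G = (V0 + V) * 10.0 ** (-pH)
      slope, intercept = np.polyfit(V, G, 1)       # slope ~ Ca, intercept ~ n0(H+)

      print("acid content of sample: %.1e mol/L" % (intercept / V0))
      print("x-intercept of the Gran line: %.2f mL" % (-intercept / slope))   # ~ -n0/Ca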

  6. Simultaneous Spectrophotometric Determination of Rifampicin, Isoniazid and Pyrazinamide in a Single Step

    PubMed Central

    Asadpour-Zeynali, Karim; Saeb, Elhameh

    2016-01-01

    Three antituberculosis medications, rifampicin, isoniazid and pyrazinamide, are investigated in this work. The ultraviolet (UV) spectra of these compounds overlap, so suitable chemometric methods are helpful for their simultaneous spectrophotometric determination. A generalized version of the net analyte signal standard addition method (GNASSAM) was used for the determination of the three antituberculosis medications as a model system. In the generalized net analyte signal standard addition method only one standard solution is prepared for all analytes. This standard solution contains a mixture of all analytes of interest, and its addition to the sample increases the net analyte signal of each analyte in proportion to that analyte's concentration in the added standard solution. To determine the concentration of each analyte in several synthetic mixtures, the UV spectra of the pure analytes and of each sample were recorded in the range 210-550 nm. The standard addition procedure was performed for each sample, the UV spectrum was recorded after each addition, and the results were analyzed by the net analyte signal method. The obtained concentrations show acceptable performance of GNASSAM in these cases. PMID:28243267

  7. Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.

    ERIC Educational Resources Information Center

    Raymond, Margaret; And Others

    1983-01-01

    Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry, modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
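
    A toy version of the generalized standard addition idea (GSAM) is sketched below with an invented two-analyte, three-wavelength system (the sensitivities and concentrations are placeholders, not data from this experiment): known additions of both analytes are used to estimate the sensitivity matrix by least squares, and the initial concentrations are then solved from the un-spiked response.

      import numpy as np

      # Invented sensitivities (analytes x channels) and unknown initial concentrations
      K_true = np.array([[1.0, 0.4, 0.1],
                         [0.2, 0.8, 0.5]])
      c0 = np.array([2.0, 3.0])

      r0 = c0 @ K_true                     # responses of the un-spiked sample

      # Known standard additions of both analytes and the resulting response changes
      dC = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0]])
      dR = dC @ K_true                     # (measured response) - (initial response)

      K_est = np.linalg.lstsq(dC, dR, rcond=None)[0]        # estimated sensitivity matrix
      c_est = np.linalg.lstsq(K_est.T, r0, rcond=None)[0]   # solve r0 = c_est @ K_est

      print("estimated initial concentrations:", c_est)     # ~ [2.0, 3.0]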

  8. Determination of Unknown Concentrations of Sodium Acetate Using the Method of Standard Addition and Proton NMR: An Experiment for the Undergraduate Analytical Chemistry Laboratory

    ERIC Educational Resources Information Center

    Rajabzadeh, Massy

    2012-01-01

    In this experiment, students learn how to find the unknown concentration of sodium acetate using both the graphical treatment of standard addition and the standard addition equation. In the graphical treatment of standard addition, the peak area of the methyl peak in each of the sodium acetate standard solutions is found by integration using…
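
    The single-point "standard addition equation" mentioned here has a standard general form, shown below with invented peak integrals and volumes (not the experiment's data): S1 is the sample signal and S2 the signal after adding a volume Vs of a standard of concentration Cs to a sample volume Vx, with the dilution accounted for.

      # Generic single-addition calculation (illustrative numbers only)
      Cs, Vs, Vx = 0.500, 1.00, 10.00   # standard conc. (M), added volume (mL), sample volume (mL)
      S1, S2 = 152.0, 298.0             # hypothetical methyl-peak integrals before/after addition

      Cx = S1 * Cs * Vs / (S2 * (Vx + Vs) - S1 * Vx)
      print("sodium acetate in sample: %.4f M" % Cx)

    The graphical treatment is the multi-addition analogue of the same idea: the peak areas are regressed against the added concentration and the line is extrapolated to its x-intercept.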

  9. External Standards or Standard Addition? Selecting and Validating a Method of Standardization

    NASA Astrophysics Data System (ADS)

    Harvey, David T.

    2002-05-01

    A common feature of many problem-based laboratories in analytical chemistry is a lengthy independent project involving the analysis of "real-world" samples. Students research the literature, adapting and developing a method suitable for their analyte, sample matrix, and problem scenario. Because these projects encompass the complete analytical process, students must consider issues such as obtaining a representative sample, selecting a method of analysis, developing a suitable standardization, validating results, and implementing appropriate quality assessment/quality control practices. Most textbooks and monographs suitable for an undergraduate course in analytical chemistry, however, provide only limited coverage of these important topics. The need for short laboratory experiments emphasizing important facets of method development, such as selecting a method of standardization, is evident. The experiment reported here, which is suitable for an introductory course in analytical chemistry, illustrates the importance of matrix effects when selecting a method of standardization. Students also learn how a spike recovery is used to validate an analytical method, and gain practical experience of the difference between performing an external standardization and a standard addition.
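
    The two checks at the heart of this experiment can be sketched as follows (invented signals and concentrations, not the article's data): comparing the slope of an external calibration with the slope of a standard-addition line flags a matrix effect, and a spike recovery validates the chosen standardization.

      import numpy as np

      # Hypothetical external standards and standard additions to the sample
      c_ext = np.array([0.0, 2.0, 4.0, 6.0])
      s_ext = np.array([0.01, 0.82, 1.63, 2.41])
      c_add = np.array([0.0, 2.0, 4.0, 6.0])
      s_add = np.array([0.95, 1.55, 2.16, 2.74])

      m_ext = np.polyfit(c_ext, s_ext, 1)[0]
      m_add, b_add = np.polyfit(c_add, s_add, 1)

      print("slope ratio (deviates from 1 if the matrix interferes): %.2f" % (m_add / m_ext))
      print("standard-addition result: %.2f ppm" % (b_add / m_add))

      # Spike recovery: analyze the sample before and after adding a known amount
      c_spike, found_unspiked, found_spiked = 2.00, 3.16, 5.05   # ppm
      print("spike recovery: %.0f%%" % (100 * (found_spiked - found_unspiked) / c_spike))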

  10. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
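
    A simple way to picture the parallelism check described here (invented responses; not the authors' validation data): the slope of a standard-addition line prepared in authentic plasma is compared with the calibration slope obtained in the surrogate matrix, and a ratio close to one supports using the surrogate matrix for calibration.

      import numpy as np

      # Hypothetical added concentrations (ng/mL) and instrument responses
      conc     = np.array([0.0, 250.0, 500.0, 1000.0])
      resp_sur = np.array([0.002, 0.251, 0.498, 1.003])   # calibration in surrogate matrix
      resp_pla = np.array([0.412, 0.660, 0.905, 1.398])   # standard additions to plasma

      m_sur = np.polyfit(conc, resp_sur, 1)[0]
      m_pla, b_pla = np.polyfit(conc, resp_pla, 1)

      print("slope ratio, plasma / surrogate: %.3f" % (m_pla / m_sur))
      print("endogenous level by standard addition: %.0f ng/mL" % (b_pla / m_pla))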

  11. Comparison of matrix effects in HPLC-MS/MS and UPLC-MS/MS analysis of nine basic pharmaceuticals in surface waters.

    PubMed

    Van De Steene, Jet C; Lambert, Willy E

    2008-05-01

    When developing an LC-MS/MS method, matrix effects are a major issue. The effect of co-eluting compounds arising from the matrix can result in signal enhancement or suppression. During method development much attention should be paid to diminishing matrix effects as much as possible. The present work evaluates matrix effects from aqueous environmental samples in the simultaneous analysis of a group of nine specific pharmaceuticals with HPLC-ESI/MS/MS and UPLC-ESI/MS/MS: flubendazole, propiconazole, pipamperone, cinnarizine, ketoconazole, miconazole, rabeprazole, itraconazole and domperidone. When HPLC-MS/MS is used, matrix effects are substantial and cannot be compensated for with analogue internal standards. Different surface water samples show different matrix effects, so the standard addition approach is necessary for accurate quantification. Owing to the better resolution and narrower peaks in UPLC, analytes co-elute less with interferences during ionisation, so matrix effects could be lower or even eliminated. If matrix effects are eliminated with this technique, the standard addition method for quantification can be omitted and the overall method is simplified. Results show that matrix effects are almost eliminated if internal standards (structural analogues) are used. Instead of the time-consuming and labour-intensive standard addition method, with UPLC internal standardization can be used for quantification and the overall method is substantially simplified.

  12. Fuel and Fuel System Materials Compatibility Test Program for A JP-8+100 Fuel Additive. Volume 1: Thermal Stability Additive Package BetzDearborn Spec Aid(Registered) 8Q462

    DTIC Science & Technology

    2001-10-01

    SAE Rings, Sealing, Butadiene-Acrylonitrile (NBR), Rubber Fuel and Low Temperature Resistant 60-70 MIL-R-83248C Rubber, Fluorocarbon...KAPTON/TEFLON (COMPOSITE) WIRE I.I.10 34 VI. REFERENCE DOCUMENTS Non-Metallics MIL-HDBK-149B Military Standardization Handbook Rubber...ASTM D-1414 Standard Test Methods for Rubber O-Rings ASTM D-412 Type II Standard Test Methods for Vulcanized Rubber and Thermoplastic

  13. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.

  14. Determination of Glyphosate, its Degradation Product Aminomethylphosphonic Acid, and Glufosinate, in Water by Isotope Dilution and Online Solid-Phase Extraction and Liquid Chromatography/Tandem Mass Spectrometry

    USGS Publications Warehouse

    Meyer, Michael T.; Loftin, Keith A.; Lee, Edward A.; Hinshaw, Gary H.; Dietze, Julie E.; Scribner, Elisabeth A.

    2009-01-01

    The U.S. Geological Survey method (0-2141-09) presented is approved for the determination of glyphosate, its degradation product aminomethylphosphonic acid (AMPA), and glufosinate in water. It was validated to demonstrate the method detection levels (MDL), compare isotope dilution to standard addition, and evaluate method and compound stability. The original method, USGS analytical method 0-2136-01, was developed using liquid chromatography/mass spectrometry and quantitation by standard addition. Lower method detection levels and increased specificity were achieved in the modified method, 0-2141-09, by using liquid chromatography/tandem mass spectrometry (LC/MS/MS). The use of isotope dilution for glyphosate and AMPA and pseudo-isotope dilution for glufosinate in place of standard addition was evaluated. Stable-isotope labeled AMPA and glyphosate were used as the isotope dilution standards. In addition, the stability of glyphosate and AMPA was studied in raw filtered and derivatized water samples. The stable-isotope labeled glyphosate and AMPA standards were added to each water sample and the samples were then derivatized with 9-fluorenylmethylchloroformate. After derivatization, samples were concentrated using automated online solid-phase extraction (SPE) followed by elution in-line with the LC mobile phase; the compounds were separated and then analyzed by LC/MS/MS using electrospray ionization in negative-ion mode with multiple-reaction monitoring. The deprotonated derivatized parent molecule and two daughter-ion transition pairs were identified and optimized for glyphosate, AMPA, glufosinate, and the glyphosate and AMPA stable-isotope labeled internal standards. A quantitative comparison between standard addition and isotope dilution was conducted using 473 samples analyzed between April 2004 and June 2006. The mean percent difference and relative standard deviation between the two quantitation methods were 7.6 plus or minus 6.30 for glyphosate (n = 179), 9.6 plus or minus 8.35 for AMPA (n = 206), and 9.3 plus or minus 9.16 for glufosinate (n = 16). The analytical variation of the method, comparison of quantitation by isotope dilution and multipoint linear regressed standard curves, and method detection levels were evaluated by analyzing six sets of distilled-water, groundwater, and surface-water samples spiked in duplicate at 0.0, 0.05, 0.10 and 0.50 microgram per liter and analyzed on 6 different days during 1 month. The grand means of the normalized concentration percentage recovery for glyphosate, AMPA, and glufosinate among all three matrices and spiked concentrations ranged from 99 to 114 plus or minus 2 to 7 percent of the expected spiked concentration. The grand mean of the percentage difference between concentrations calculated by standard addition and linear regressed multipoint standard curves ranged from 8 to 15 plus or minus 2 to 9 percent for the three compounds. The method reporting levels calculated from all the 0.05-microgram per liter spiked samples were 0.02 microgram per liter for all three compounds. Compound stability experiments were conducted on 10 samples derivatized four times over periods between 136 and 269 days. The glyphosate and AMPA concentrations remained relatively constant in samples held up to 136 days before derivatization. The half life of glyphosate varied from 169 to 223 days in the underivatized samples. Derivatized samples were analyzed the day after derivatization, and again 54 and 64 days after derivatization. The derivatized samples analyzed at days 52 and 64 were within 20 percent of the concentrations of the derivatized samples analyzed the day after derivatization.

  15. 76 FR 16640 - Petitions for Modification of Existing Mandatory Safety Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-24

    ... standard to permit an alternative method of compliance to allow additional outby storage caches of Self.... The petitioner further states that: (a) Additional SCSR outby storage caches will be placed a maximum of 2,000 feet apart in beltlines and return air courses; (b) these additional SCSR outby storage...

  16. Simultaneous spectrophotometric determination of valsartan and hydrochlorothiazide by H-point standard addition method and partial least squares regression.

    PubMed

    Lakshmi, Karunanidhi Santhana; Lakshmi, Sivasubramanian

    2011-03-01

    Simultaneous determination of valsartan and hydrochlorothiazide by the H-point standard additions method (HPSAM) and partial least squares (PLS) calibration is described. Absorbances at a pair of wavelengths, 216 and 228 nm, were monitored with the addition of standard solutions of valsartan. Results of applying HPSAM showed that valsartan and hydrochlorothiazide can be determined simultaneously at concentration ratios varying from 20:1 to 1:15 in a mixed sample. The proposed PLS method does not require chemical separation and spectral graphical procedures for quantitative resolution of mixtures containing the title compounds. The calibration model was based on absorption spectra in the 200-350 nm range for 25 different mixtures of valsartan and hydrochlorothiazide. Calibration matrices contained 0.5-3 μg mL-1 of both valsartan and hydrochlorothiazide. The standard error of prediction (SEP) for valsartan and hydrochlorothiazide was 0.020 and 0.038 μg mL-1, respectively. Both proposed methods were successfully applied to the determination of valsartan and hydrochlorothiazide in several synthetic and real matrix samples.

  17. Matrix effect and correction by standard addition in quantitative liquid chromatographic-mass spectrometric analysis of diarrhetic shellfish poisoning toxins.

    PubMed

    Ito, Shinya; Tsukada, Katsuo

    2002-01-11

    An evaluation of the feasibility of liquid chromatography-mass spectrometry (LC-MS) with atmospheric pressure ionization was made for the quantitation of four diarrhetic shellfish poisoning toxins, okadaic acid, dinophysistoxin-1, pectenotoxin-6 and yessotoxin, in scallops. When LC-MS was applied to the analysis of scallop extracts, large signal suppressions were observed due to co-eluting substances from the column. To compensate for these matrix signal suppressions, the standard addition method was applied: the sample is first analyzed as such, and then the same sample spiked with calibration standards is analyzed. Although this method requires two LC-MS runs per analysis, it effectively corrected the quantitation errors.

  18. Advantages of a validated UPLC-MS/MS standard addition method for the quantification of A-type dimeric and trimeric proanthocyanidins in cranberry extracts in comparison with well-known quantification methods.

    PubMed

    van Dooren, Ines; Foubert, Kenn; Theunis, Mart; Naessens, Tania; Pieters, Luc; Apers, Sandra

    2018-01-30

    The berries of Vaccinium macrocarpon, cranberry, are widely used for the prevention of urinary tract infections. This species contains A-type proanthocyanidins (PACs), which intervene in the initial phase of the development of urinary tract infections by preventing the adherence of Escherichia coli by their P-type fimbriae to uroepithelial cells. Unfortunately, the existing clinical studies used different cranberry preparations, which were poorly standardized. Because of this, the results were hard to compare, which sometimes led to conflicting results. Currently, PACs are quantified using the rather non-specific spectrophotometric 4-dimethylaminocinnamaldehyde (DMAC) method. In addition, a normal phase HPTLC-densitometric method, an HPLC-UV method and three LC-MS/MS methods for quantification of procyanidin A2 were recently published. All these methods contain some shortcomings and errors. Hence, the development and validation of a fast and sensitive standard addition LC-MS/MS method for the simultaneous quantification of A-type dimers and trimers in a cranberry dry extract was carried out. A linear calibration model could be adopted for dimers and, after logarithmic transformation, for trimers. The maximal interday and interconcentration precisions were found to be 4.86% and 4.28% for procyanidin A2, and 5.61% and 7.65% for trimeric PACs, which are all acceptable values for an analytical method using LC-MS/MS. In addition, twelve different cranberry extracts were analyzed by means of the newly validated method and other widely used methods. There appeared to be an enormous variation in dimeric and trimeric PAC content. Comparison of these results with LC-MS/MS analysis without standard addition showed the presence of matrix effects for some of the extracts and proved the necessity of standard addition. A comparison of the well-known and widely used DMAC method, the butanol-HCl assay and this newly developed LC-MS/MS method clearly indicated the need for a reliable method able to quantify A-type PACs, which are considered to be the pharmacologically active constituents of cranberry, since neither the DMAC nor the butanol-HCl assay is capable of distinguishing between A- and B-type PACs and therefore cannot detect adulterations with, for example, extracts with a high B-type PAC content. Hence, the combination of the DMAC method or butanol-HCl assay with this more specific LC-MS/MS assay could overcome these shortcomings. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. In Situ Determination of Trace Elements in Fish Otoliths by Laser Ablation Double Focusing Sector Field Inductively Coupled Plasma Mass Spectrometry Using a Solution Standard Addition Calibration Method

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Jones, C. M.

    2002-05-01

    Microchemistry of fish otoliths (fish ear bones) is a very useful tool for monitoring aquatic environments and fish migration. However, determination of the elemental composition of fish otoliths by ICP-MS has been limited to either analysis of dissolved sample solutions or measurement of a limited number of trace elements by laser ablation (LA)-ICP-MS, owing to low sensitivity, the lack of available calibration standards, and the complexity of polyatomic molecular interferences. In this study, a method was developed for in situ determination of trace elements in fish otoliths by laser ablation double focusing sector field ICP-MS (ultra-high-sensitivity Finnigan Element 2) using a solution standard addition calibration method. Because matrix-matched solid calibration standards are lacking, sixteen trace elements (Na, Mg, P, Cr, Mn, Fe, Ni, Cu, Rb, Sr, Y, Cd, La, Ba, Pb and U) were determined using solution standard calibration with Ca as an internal standard. Flexibility, easy preparation and stable signals are the advantages of using solution calibration standards. In order to resolve polyatomic molecular interferences, medium resolution (M/ΔM > 4000) was used for some elements (Na, Mg, P, Cr, Mn, Fe, Ni, and Cu). Both external calibration and standard addition quantification strategies are compared and discussed. Precision, accuracy, and limits of detection are presented.

  20. Spectrophotometric methods for the determination of urea in real samples using silver nanoparticles by standard addition and 2nd order derivative methods

    NASA Astrophysics Data System (ADS)

    Ali, Nauman; Ismail, Muhammad; Khan, Adnan; Khan, Hamayun; Haider, Sajjad; Kamal, Tahseen

    2018-01-01

    In this work, we have developed simple, sensitive and inexpensive methods for the spectrophotometric determination of urea in urine samples using silver nanoparticles (AgNPs). The standard addition and 2nd order derivative methods were adopted for this purpose. AgNPs were prepared by chemical reduction of AgNO3 with hydrazine using 1,3-di-(1H-imidazol-1-yl)-2-propanol (DIPO) as a stabilizing agent in aqueous medium. The proposed methods were based on the complexation of AgNPs with urea. Using this concept, urea in the urine samples was successfully determined by spectrophotometric methods. The results showed high percent recoveries with their associated RSDs. The recoveries of urea in the three urine samples by the spectrophotometric standard addition method were 99.2% ± 5.37, 96.3% ± 4.49 and 104.88% ± 4.99, and those of the spectrophotometric 2nd order derivative method were 115.3% ± 5.2, 103.4% ± 2.6 and 105.93% ± 0.76. The results show that these methods can open doors for a potential role of AgNPs in the clinical determination of urea in urine, blood, and other biological and non-biological fluids.
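
    Only the derivative step is sketched below, on a synthetic absorbance spectrum (the AgNP band position, width and noise are invented, not this record's data): a Savitzky-Golay smoothed second derivative removes a broad sloping background, so the derivative amplitude at the band minimum tracks the band intensity, which can then be related to urea concentration by standard addition or a calibration curve.

      import numpy as np
      from scipy.signal import savgol_filter

      # Synthetic spectrum: AgNP-like plasmon band on a sloping background
      wl = np.arange(350.0, 601.0, 1.0)                        # nm
      band = np.exp(-((wl - 420.0) ** 2) / (2 * 25.0 ** 2))
      A = (0.8 * band + 0.002 * (600.0 - wl)
           + np.random.default_rng(1).normal(0, 5e-4, wl.size))

      # Smoothed 2nd-order derivative; the linear background contributes ~zero
      d2A = savgol_filter(A, window_length=21, polyorder=3, deriv=2, delta=1.0)
      i_min = np.argmin(d2A)
      print("2nd-derivative reading: %.2e at %.0f nm" % (d2A[i_min], wl[i_min]))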

  1. Bioanalytical method development and validation for the determination of glycine in human cerebrospinal fluid by ion-pair reversed-phase liquid chromatography-tandem mass spectrometry.

    PubMed

    Jiang, Jian; James, Christopher A; Wong, Philip

    2016-09-05

    An LC-MS/MS method has been developed and validated for the determination of glycine in human cerebrospinal fluid (CSF). The validated method used artificial cerebrospinal fluid as a surrogate matrix for calibration standards. The calibration curve range for the assay was 100-10,000 ng/mL and (13)C2,(15)N-glycine was used as an internal standard (IS). Pre-validation experiments were performed to demonstrate parallelism with surrogate matrix and standard addition methods. The mean endogenous glycine concentration in a pooled human CSF determined on three days by using artificial CSF as a surrogate matrix and the method of standard addition was found to be 748±30.6 and 768±18.1 ng/mL, respectively. A percentage difference of -2.6% indicated that artificial CSF could be used as a surrogate calibration matrix for the determination of glycine in human CSF. Quality control (QC) samples, except the lower limit of quantitation (LLOQ) QC and low QC samples, were prepared by spiking glycine into aliquots of pooled human CSF sample. The low QC sample was prepared from a separate pooled human CSF sample containing low endogenous glycine concentrations, while the LLOQ QC sample was prepared in artificial CSF. Standard addition was used extensively to evaluate matrix effects during validation. The validated method was used to determine the endogenous glycine concentrations in human CSF samples. Incurred sample reanalysis demonstrated reproducibility of the method. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. 40 CFR 86.000-26 - Mileage and service accumulation; emission measurements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-90, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications... standard expressed to one additional significant figure. (d)(3)-(d)(6) [Reserved]. For guidance see § 86...

  3. 40 CFR 86.000-26 - Mileage and service accumulation; emission measurements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-90, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications... standard expressed to one additional significant figure. (d)(3)-(d)(6) [Reserved]. For guidance see § 86...

  4. 40 CFR 86.000-26 - Mileage and service accumulation; emission measurements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-90, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications... standard expressed to one additional significant figure. (d)(3)-(d)(6) [Reserved]. For guidance see § 86...

  5. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures.

    PubMed

    Yehia, Ali M

    2013-05-15

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5%. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Development and validation of new spectrophotometric ratio H-point standard addition method and application to gastrointestinal acting drugs mixtures

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.

    2013-05-01

    A new, simple, specific, accurate and precise spectrophotometric technique utilizing ratio spectra is developed for the simultaneous determination of two different binary mixtures. The developed ratio H-point standard addition method (RHPSAM) successfully resolved the spectral overlap in the itopride hydrochloride (ITO) and pantoprazole sodium (PAN) binary mixture, as well as the mosapride citrate (MOS) and PAN binary mixture. The theoretical background and advantages of the newly proposed method are presented. The calibration curves are linear over the concentration ranges of 5-60 μg/mL, 5-40 μg/mL and 4-24 μg/mL for ITO, MOS and PAN, respectively. Specificity of the method was investigated and relative standard deviations were less than 1.5%. The accuracy, precision and repeatability were also investigated for the proposed method according to ICH guidelines.

  7. Oxidation stability of biodiesel fuels and blends using the Rancimat and PetroOXY methods. Effect of 4-allyl-2,6-dimetoxiphenol and cathecol as biodiesel additives on oxidation stability

    NASA Astrophysics Data System (ADS)

    Botella, Lucía; Bimbela, Fernando; Martín, Lorena; Arauzo, Jesús; Sanchez, Jose Luis

    2014-07-01

    In the present work, several fatty acid methyl esters (FAME) have been synthesized from various fatty acid feedstocks: used frying olive oil, pork fat, soybean, rapeseed, sunflower and coconut. The oxidation stabilities of the biodiesel samples and of several blends have been measured simultaneously by both the Rancimat method, accepted by EN14112 standard, and the PetroOXY method, prEN16091 standard, with the aim of finding a correlation between both methodologies. Other biodiesel properties such as composition, cold filter plugging point (CFPP), flash point (FP) and kinematic viscosity have also been analyzed using standard methods in order to further characterize the biodiesel produced. In addition, the effect on the biodiesel properties of using 4-allyl-2,6-dimetoxiphenol and cathecol as additives in biodiesel blends with rapeseed and with soybean has also been analyzed. The use of both antioxidants results in a considerable improvement in the oxidation stability of both types of biodiesel, especially using cathecol. Adding cathecol loads as low as 0.05 % (m/m) in blends with soybean biodiesel and as low as 0.10 % (m/m) in blends with rapeseed biodiesel is sufficient for the oxidation stabilities to comply with the restrictions established by the European EN14214 standard. An empirical linear equation is proposed to correlate the oxidation stability by the two methods, PetroOXY and Rancimat. It has been found that the presence of either cathecol or 4-allyl-2,6-dimetoxiphenol as additives affects the correlation observed.
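
    The kind of empirical linear correlation proposed here can be sketched as a simple least-squares fit (the paired induction-time values below are invented placeholders, not the paper's measurements):

      import numpy as np

      # Hypothetical paired oxidation-stability results for the same samples
      rancimat = np.array([2.1, 4.0, 5.5, 7.2, 9.8, 12.5])       # induction period, h (EN 14112)
      petrooxy = np.array([18.0, 29.0, 38.0, 47.0, 62.0, 78.0])  # induction period, min (prEN 16091)

      slope, intercept = np.polyfit(rancimat, petrooxy, 1)
      r = np.corrcoef(rancimat, petrooxy)[0, 1]
      print("PetroOXY ~ %.2f * Rancimat + %.2f  (r = %.3f)" % (slope, intercept, r))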

  8. Net analyte signal standard addition method for simultaneous determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines.

    PubMed

    Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh

    2013-06-01

    The net analyte signal standard addition method has been used for the simultaneous spectrophotometric determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines. The method combines the advantages of the standard addition method with the net analyte signal concept, which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. It has some advantages, such as the use of the full spectrum; it therefore does not require separate calibration and prediction steps, and only a few measurements are required for the determination. Cloud point extraction, based on the phenomenon of solubilisation, was used for the extraction of sulphadiazine and trimethoprim from bovine milk. It is based on the induction of micellar organised media by using Triton X-100 as an extraction solvent. At the optimum conditions, the norm of the NAS vectors increased linearly with concentration in the range of 1.0-150.0 μmol L-1 for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmol L-1, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Application of standard addition for the determination of carboxypeptidase activity in Actinomucor elegans bran koji.

    PubMed

    Fu, J; Li, L; Yang, X Q; Zhu, M J

    2011-01-01

    Leucine carboxypeptidase (EC 3.4.16) activity in Actinomucor elegans bran koji was investigated via absorbance at 507 nm after staining with Cd-ninhydrin solution, using three calibration curves: calibration A, made from a set of leucine standards of known concentration; calibration B, made from three sets of leucine standard solutions of known concentration with the addition of three concentrations of inactive crude enzyme extract; and calibration C, made from three sets of leucine standard solutions of known concentration with the addition of three concentrations of active crude enzyme extract. The results indicated that a pure amino acid standard curve is not a suitable way to determine carboxypeptidase activity in a complicated mixture, and probably leads to overestimated carboxypeptidase activity. Adding crude extract to the pure amino acid standard curve gave results significantly different from those of the pure amino acid standard curve method (p < 0.05). There was no significant difference in enzyme activity (p > 0.05) between addition of active crude extract and addition of inactive crude extract when the appropriate dilution factor was used. It was concluded that the addition of crude enzyme extract to the calibration was needed to eliminate the interference of free amino acids and related compounds present in the crude enzyme extract.

  10. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating these pyranometers. According to this standard method for outdoor calibration, the field pyranometers have to be compared to a reference pyranometer for a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand, under various sky conditions. The sky conditions were monitored using a sky camera. The calibration results for the different time periods used for calibration under various sky conditions were analyzed. It was found that the calibration periods given by this standard method could be reduced without significant change in the final calibration result. In addition, recommendations and a discussion on the use of this standard method in the tropics are also presented.

  11. Determine equilibrium dissociation constant of drug-membrane receptor affinity using the cell membrane chromatography relative standard method.

    PubMed

    Ma, Weina; Yang, Liu; Lv, Yanni; Fu, Jia; Zhang, Yanmin; He, Langchong

    2017-06-23

    The equilibrium dissociation constant (KD) of drug-membrane receptor affinity is the basic parameter that reflects the strength of interaction. The cell membrane chromatography (CMC) method is an effective technique for studying the characteristics of drug-membrane receptor affinity. In this study, a CMC relative standard method for determining KD values of drug-membrane receptor affinity was established and used to analyze the relative KD values of drugs binding to membrane receptors (epidermal growth factor receptor and angiotensin II receptor). The KD values obtained by the CMC relative standard method had a strong correlation with those obtained by the frontal analysis method. Additionally, the KD values obtained by the CMC relative standard method correlated with the pharmacological activity of the drugs evaluated. The CMC relative standard method is a convenient and effective method to evaluate drug-membrane receptor affinity. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Solid matrix transformation and tracer addition using molten ammonium bifluoride salt as a sample preparation method for laser ablation inductively coupled plasma mass spectrometry.

    PubMed

    Grate, Jay W; Gonzalez, Jhanis J; O'Hara, Matthew J; Kellogg, Cynthia M; Morrison, Samuel S; Koppenaal, David W; Chan, George C-Y; Mao, Xianglei; Zorba, Vassilia; Russo, Richard E

    2017-09-08

    Solid sampling and analysis methods, such as laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), are challenged by matrix effects and calibration difficulties. Matrix-matched standards for external calibration are seldom available and it is difficult to distribute spikes evenly into a solid matrix as internal standards. While isotopic ratios of the same element can be measured to high precision, matrix-dependent effects in the sampling and analysis process frustrate accurate quantification and elemental ratio determinations. Here we introduce a potentially general solid matrix transformation approach entailing chemical reactions in molten ammonium bifluoride (ABF) salt that enables the introduction of spikes as tracers or internal standards. Proof of principle experiments show that the decomposition of uranium ore in sealed PFA fluoropolymer vials at 230 °C yields, after cooling, new solids suitable for direct solid sampling by LA. When spikes are included in the molten salt reaction, subsequent LA-ICP-MS sampling at several spots indicate that the spikes are evenly distributed, and that U-235 tracer dramatically improves reproducibility in U-238 analysis. Precisions improved from 17% relative standard deviation for U-238 signals to 0.1% for the ratio of sample U-238 to spiked U-235, a factor of over two orders of magnitude. These results introduce the concept of solid matrix transformation (SMT) using ABF, and provide proof of principle for a new method of incorporating internal standards into a solid for LA-ICP-MS. This new approach, SMT-LA-ICP-MS, provides opportunities to improve calibration and quantification in solids based analysis. Looking forward, tracer addition to transformed solids opens up LA-based methods to analytical methodologies such as standard addition, isotope dilution, preparation of matrix-matched solid standards, external calibration, and monitoring instrument drift against external calibration standards.
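
    The precision gain reported here comes from ratioing each ablation measurement to the evenly distributed spike, which cancels shot-to-shot ablation-yield variation; a synthetic illustration follows (all signal levels and noise figures are invented, not the paper's data).

      import numpy as np

      # Simulated shot-to-shot ablation yield variation and two isotope signals
      rng = np.random.default_rng(3)
      yield_factor = rng.normal(1.0, 0.17, 50)                    # ~17% ablation variation
      u238 = 1.0e6 * yield_factor * rng.normal(1.0, 0.002, 50)    # sample isotope
      u235 = 2.0e5 * yield_factor * rng.normal(1.0, 0.002, 50)    # evenly distributed spike

      rsd = lambda x: 100 * np.std(x, ddof=1) / np.mean(x)
      print("RSD of raw U-238 signal:  %.1f %%" % rsd(u238))
      print("RSD of U-238/U-235 ratio: %.2f %%" % rsd(u238 / u235))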

  13. Iodine speciation in coastal and inland bathing waters and seaweeds extracts using a sequential injection standard addition flow-batch method.

    PubMed

    Santos, Inês C; Mesquita, Raquel B R; Bordalo, Adriano A; Rangel, António O S S

    2015-02-01

    The present work describes the development of a sequential injection standard addition method for iodine speciation in bathing waters and seaweed extracts without prior sample treatment. Iodine speciation was obtained by assessing the iodide and iodate content, the two inorganic forms of iodine in waters. For the determination of iodide, an iodide ion selective electrode (ISE) was used. The indirect determination of iodate was based on the spectrophotometric determination of nitrite (Griess reaction). For the iodate measurement, a mixing chamber was employed (flow batch approach) to explore the inherent efficient mixing, essential for the indirect determination of iodate. The application of the standard addition method enabled detection limits of 0.14 µM for iodide and 0.02 µM for iodate, together with the direct introduction of the target water samples, coastal and inland bathing waters. The results obtained were in agreement with those obtained by ICP-MS and a colorimetric reference procedure. Recovery tests also confirmed the accuracy of the developed method which was effectively applied to bathing waters and seaweed extracts. Copyright © 2014 Elsevier B.V. All rights reserved.
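
    For the potentiometric (ISE) part, the classical single known-addition calculation gives a feel for how a standard addition is turned into a concentration (generic formula with invented numbers; the record's flow-batch procedure and calibration details are not reproduced here):

      # Generic single known-addition calculation for an ion-selective electrode
      Vx, Vs = 25.0, 1.0        # sample and added-standard volumes, mL
      Cs     = 1.0e-3           # iodide standard concentration, mol/L
      S      = -59.2            # electrode slope, mV/decade (anion-selective electrode)
      dE     = -14.5            # measured potential change after the addition, mV

      Cx = Cs * Vs / ((Vx + Vs) * 10 ** (dE / S) - Vx)
      print("iodide in sample: %.2e mol/L" % Cx)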

  14. Consistent Practices for the Probability of Detection (POD) of Fracture Critical Metallic Components Project

    NASA Technical Reports Server (NTRS)

    Hughitt, Brian; Generazio, Edward (Principal Investigator); Nichols, Charles; Myers, Mika (Principal Investigator); Spencer, Floyd (Principal Investigator); Waller, Jess (Principal Investigator); Wladyka, Jordan (Principal Investigator); Aldrin, John; Burke, Eric; Cerecerez, Laura

    2016-01-01

    NASA-STD-5009 requires that successful flaw detection by NDE methods be statistically qualified for use on fracture critical metallic components, but does not standardize practices. This task works towards standardizing calculations and record retention with a web-based tool, the NNWG POD Standards Library or NPSL. Test methods will also be standardized with an appropriately flexible appendix to -5009 identifying best practices. Additionally, this appendix will describe how specimens used to qualify NDE systems will be cataloged, stored and protected from corrosion, damage, or loss.

  15. Multiresidue determination of pesticides in crop plants by the quick, easy, cheap, effective, rugged, and safe method and ultra-high-performance liquid chromatography tandem mass spectrometry using a calibration based on a single level standard addition in the sample.

    PubMed

    Viera, Mariela S; Rizzetti, Tiele M; de Souza, Maiara P; Martins, Manoel L; Prestes, Osmar D; Adaime, Martha B; Zanella, Renato

    2017-12-01

    In this study, a QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method, optimized by a 2³ full factorial design, was developed for the determination of 72 pesticides in plant parts of carrot, corn, melon, rice, soy, silage, tobacco, cassava, lettuce and wheat by ultra-high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Considering the complexity of these matrices and the need to perform calibration in matrix, a new calibration approach based on single level standard addition in the sample (SLSAS) was proposed in this work and compared with matrix-matched calibration (MMC), procedural standard calibration (PSC) and diluted standard addition calibration (DSAC). All approaches presented satisfactory validation parameters, with recoveries from 70 to 120% and relative standard deviations ≤20%. SLSAS was the most practical of the evaluated approaches and proved to be an effective way of calibration. Method limits of detection were between 4.8 and 48 μg kg-1 and limits of quantification were from 16 to 160 μg kg-1. Application of the method to different kinds of plants found residues of 20 pesticides, which were quantified with z-score values ≤2 in comparison with the other calibration approaches. The proposed QuEChERS method combined with UHPLC-MS/MS analysis and an easy and effective calibration procedure presented satisfactory results for pesticide residue determination in different crop plants and is a good alternative for routine analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Improving IQ measurement in intellectual disabilities using true deviation from population norms

    PubMed Central

    2014-01-01

    Background: Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. Methods: We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. Results: We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Conclusion: Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment. PMID:26491488
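
    The raw-to-deviation transformation itself is simple; a toy illustration follows (the norm mean and SD are placeholders, not the SB5 normative values, and the 100 + 15z rescaling is just one conventional IQ-style metric):

      # Toy deviation-score calculation (illustrative numbers only)
      raw_score = 14.0      # examinee's raw score on one subtest
      norm_mean = 27.5      # general-population mean raw score for that age (placeholder)
      norm_sd   = 5.0       # general-population SD for that age (placeholder)

      z_deviation = (raw_score - norm_mean) / norm_sd   # true deviation from population norms
      iq_metric   = 100 + 15 * z_deviation              # expressed on an IQ-like scale

      print("deviation z = %.2f, IQ-metric equivalent = %.1f" % (z_deviation, iq_metric))

    Because the z-score is computed directly from the raw score, it keeps varying below the point where standardized scaled scores hit their floor, which is the effect this record describes.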

  17. Method development for gypenosides fingerprint by high performance liquid chromatography with diode-array detection and the addition of internal standard.

    PubMed

    Liu, Fang; Ren, Dequan; Guo, De-an; Pan, Yifeng; Zhang, Huzhe; Hu, Ping

    2008-03-01

    In this paper, a new method for liquid chromatographic fingerprinting of saponins in Gynostemma pentaphyllum (THUNB.) MAKINO was developed. The G. pentaphyllum powder was defatted by Soxhlet extraction with petroleum ether and the gypenosides were then extracted from the residue with methanol by sonication. Column chromatography with macroporous resin was then used to separate and enrich the gypenosides. HPLC fingerprint analysis of the gypenosides fraction was performed on a C18 column, with an isocratic elution of 34% acetonitrile for 60 min at 0.8 ml/min; the sample injection volume was 20 microl and the detection wavelength was 203 nm. To compensate for the lack of standard compounds, ginsenoside Rb2 was added as an internal standard in the gypenosides fingerprint profile. The relative retention time (RRT) and relative peak area (RPA) of the gypenoside peaks in the fingerprint were calculated by setting ginsenoside Rb2 as the marker compound. The relative standard deviations (RSDs) of the RRTs of five common peaks vs. ginsenoside Rb2 in precision, repeatability and stability tests were less than 1%, and the RSDs of the RPAs were less than 5%. The method validation data proved that the proposed fingerprint method with internal standard for G. pentaphyllum saponins is adequate, valid and applicable. Finally, three batches of crude drug samples collected from Shanxi province were tested following the established method.

  18. Quantitative determination of cucurbitane-type triterpenes and triterpene glycosides in dietary supplements containing bitter melon (Momordica charantia) by HPLC-MS/MS.

    PubMed

    Ma, Jun; Krynitsky, Alexander J; Grundel, Erich; Rader, Jeanne I

    2012-01-01

    Momordica charantia L. (Cucurbitaceae), commonly known as bitter melon, is widely cultivated in many tropical and subtropical areas of the world. It is a common food staple; its fruits, leaves, seeds, stems, and roots also have a long history of use in traditional medicine. In the United States, dietary supplements labeled as containing bitter melon can be purchased over-the-counter and from Internet suppliers. Currently, no quantitative analytical method is available for monitoring the content of cucurbitane-type triterpenes and triterpene glycosides, the major constituents of bitter melon, in such supplements. We investigated the use of HPLC-electrospray ionization (ESI)-MS/MS for the quantitative determination of such compounds in dietary supplements containing bitter melon. Values for each compound obtained from external calibration were compared with those obtained from the method of standard additions to address matrix effects associated with ESI. In addition, the cucurbitane-type triterpene and triterpene glycoside contents of two dietary supplements determined by the HPLC-ESI-MS/MS method with standard additions were compared with those measured by an HPLC method with evaporative light scattering detection, which was recently developed for quantification of such compounds in dried fruits of M. charantia. The contents of five cucurbitane-type triterpenes and triterpene glycosides in 10 dietary supplements were measured using the HPLC-ESI-MS/MS method with standard additions. The total contents of the five compounds ranged from 17 to 3464 microg/serving.

  19. Rapid Quantification of Melamine in Different Brands/Types of Milk Powders Using Standard Addition Net Analyte Signal and Near-Infrared Spectroscopy

    PubMed Central

    2016-01-01

    Multivariate calibration (MVC) and near-infrared (NIR) spectroscopy have demonstrated potential for rapid analysis of melamine in various dairy products. However, the practical application of ordinary MVC can be largely restricted because the prediction of a new sample from an uncalibrated batch would be subject to a significant bias due to matrix effects. In this study, the feasibility of using NIR spectroscopy and the standard addition (SA) net analyte signal (NAS) method (SANAS) for rapid quantification of melamine in different brands/types of milk powders was investigated. In SANAS, the NAS vector of melamine in an unknown sample as well as in a series of samples spiked with melamine standards is calculated, and the Euclidean norms of the series are then used to build a straightforward univariate regression model. The analysis of 10 different brands/types of milk powders with melamine levels of 0-0.12% (w/w) indicates that SANAS obtained accurate results, with root mean squared error of prediction (RMSEP) values ranging from 0.0012 to 0.0029. An additional advantage of NAS is the ability to visualize and control possible unwanted variations during standard addition. The proposed method will provide a practically useful tool for rapid and nondestructive quantification of melamine in different brands/types of milk powders. PMID:27525154

  20. The Role of Internal Standards and their Interaction with Soils Impact Accuracy of Volatile Organics Determinations

    EPA Science Inventory

    Both US Environmental Protection Agency (EPA) SW-846 Methods 8260C/5035 and 8261A include mixing soil with water and addition of internal standards prior to analyses but the equilibration of internal standards with the soil is not required. With increasing total organic carbon (...

  1. Second-order standard addition for deconvolution and quantification of fatty acids of fish oil using GC-MS.

    PubMed

    Vosough, Maryam; Salemi, Amir

    2007-08-15

    In the present work, two second-order calibration methods, the generalized rank annihilation method (GRAM) and multivariate curve resolution-alternating least squares (MCR-ALS), have been applied to standard addition data matrices obtained by gas chromatography-mass spectrometry (GC-MS) to characterize and quantify four unsaturated fatty acids, cis-9-hexadecenoic acid (C16:1ω7c), cis-9-octadecenoic acid (C18:1ω9c), cis-11-eicosenoic acid (C20:1ω9) and cis-13-docosenoic acid (C22:1ω9), in fish oil in the presence of matrix interferences. With these methods, the peak area does not need to be measured directly and predictions are more accurate. Because GC-MS data matrices are not trilinear, MCR-ALS and GRAM were first applied to uncorrected data matrices. In comparison to MCR-ALS, biased and imprecise concentrations (%R.S.D. = 27.3) were obtained using GRAM without correcting the retention-time shift. As trilinearity is the essential requirement for implementing GRAM, the data need to be corrected. Multivariate rank alignment objectively corrects the run-to-run retention time variations between the sample GC-MS data matrix and a standard addition GC-MS data matrix. The two second-order algorithms were then compared with each other, and provided similar mean predictions, pure concentrations and spectral profiles. The results were validated using standard mass spectra of the target compounds. In addition, some of the quantification results were compared with the concentration values obtained using selected mass chromatograms. Since the classical univariate determination of analyte peak areas fails in cases of strong peak overlap and matrix effects, the "second-order advantage" solved this problem successfully.

  2. The Capacity Profile: A Method to Classify Additional Care Needs in Children with Neurodevelopmental Disabilities

    ERIC Educational Resources Information Center

    Meester-Delver, Anke; Beelen, Anita; Hennekam, Raoul; Nollet, Frans; Hadders-Algra, Mijna

    2007-01-01

    The aim of this study was to determine the interrater reliability and stability over time of the Capacity Profile (CAP). The CAP is a standardized method for classifying additional care needs indicated by current impairments in five domains of body functions: physical health, neuromusculoskeletal and movement-related, sensory, mental, and voice…

  3. Improving IQ measurement in intellectual disabilities using true deviation from population norms.

    PubMed

    Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David

    2014-01-01

    Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index and IQ scores. Use of the deviation z-score method rectifies this problem, and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
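
    As a hypothetical illustration of the raw-score transformation described above (not the authors' scoring procedure), the sketch below converts raw subtest scores to deviation z-scores against general-population norm statistics rather than floor-limited standardized scaled scores; the norm mean and standard deviation are invented for the example.

```python
import numpy as np

def deviation_z(raw_scores, norm_mean, norm_sd):
    """Express raw subtest scores as z-scores relative to the general-population
    standardization sample, avoiding the floor imposed by standardized scaled scores."""
    return (np.asarray(raw_scores, dtype=float) - norm_mean) / norm_sd

# Illustrative (made-up) values: raw scores and age-matched norm statistics
print(deviation_z([14, 9, 21], norm_mean=25.0, norm_sd=6.0))
```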

  4. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    PubMed

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups in 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without real-time 3D IGRT. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The geometric accuracy results with real-time 3D IGRT had a mean error of <0.5 mm and a standard deviation of <1.1 mm. Numerous additional articles exist that describe real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. Many more approaches that could be implemented were identified. These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Determination of fluoride in oxides with the fluoride-ion activity electrode.

    PubMed

    Peters, M A; Ladd, D M

    1971-07-01

    The application of the fluoride-ion activity electrode to the determination of fluoride in various samples has been studied. Samples are decomposed by fusion and the fluoride concentration is determined by a standard-addition or a direct method. The standard-addition method is unsuitable, owing to a positive bias. The direct method, however, is rapid, accurate and precise. The fluoride content of exploration ores, fluorspar, opal glass, phosphate rock and various production samples has been successfully determined. The success of the direct method depends on the effectiveness of the system used to buffer pH and ionic strength and complex possible interferences (Al(3+), Ca(2+), Fe(3+)). The effect of interferences has been studied and found to be minimal. The procedures are rapid and accurate and may be substituted for the traditional Willard and Winter or pyrohydrolysis methods, with a considerable saving of time.

  6. ANSI/ASHRAE/IES Standard 90.1-2016 Performance Rating Method Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goel, Supriya; Rosenberg, Michael I.; Eley, Charles

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2016 (Standard 90.1-2016). The PRM can be used to demonstrate compliance with the standard and to rate the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. Use of the PRM for demonstrating compliance with Standard 90.1 is a new feature of the 2016 edition. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM.

  7. Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development.

    PubMed

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2016-01-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems.

  8. Mobile Robot and Mobile Manipulator Research Towards ASTM Standards Development

    PubMed Central

    Bostelman, Roger; Hong, Tsai; Legowik, Steven

    2017-01-01

    Performance standards for industrial mobile robots and mobile manipulators (robot arms onboard mobile robots) have only recently begun development. Low cost and standardized measurement techniques are needed to characterize system performance, compare different systems, and to determine if recalibration is required. This paper discusses work at the National Institute of Standards and Technology (NIST) and within the ASTM Committee F45 on Driverless Automatic Guided Industrial Vehicles. This includes standards for both terminology, F45.91, and for navigation performance test methods, F45.02. The paper defines terms that are being considered. Additionally, the paper describes navigation test methods that are near ballot and docking test methods being designed for consideration within F45.02. This includes the use of low cost artifacts that can provide alternatives to using relatively expensive measurement systems. PMID:28690359

  9. Topographic and hydrographic survey data for the São Francisco River near Torrinha, Bahia, Brazil, 2014

    USGS Publications Warehouse

    Fosness, Ryan L.; Dietsch, Benjamin J.

    2015-10-21

    This report presents the surveying techniques and data-processing methods used to collect, process, and disseminate topographic and hydrographic data. All standard and non-standard data-collection methods, techniques, and data-processing methods were documented. Additional discussion describes the quality-assurance and quality-control elements used in this study, along with the limitations for the Torrinha-Itacoatiara study reach data. The topographic and hydrographic geospatial data are published along with associated metadata.

  10. A generic standard additions based method to determine endogenous analyte concentrations by immunoassays to overcome complex biological matrix interference.

    PubMed

    Pang, Susan; Cowen, Simon

    2017-12-13

    We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions used for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
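
    The numerical step described above can be pictured as a simple grid search: for each candidate endogenous concentration, check how linear the log-log relationship between signal and estimated total concentration becomes, and keep the candidate giving the best fit. The code below is a hedged, minimal illustration under that assumption, not the published algorithm; the spike levels and signals are made up.

```python
import numpy as np

def endogenous_estimate(spike_conc, signal, candidates):
    """Pick the candidate endogenous concentration for which log(signal) vs
    log(spike + candidate) is most linear (highest R^2), following the
    standard-additions idea for immunoassays."""
    best_c, best_r2 = None, -np.inf
    for c0 in candidates:
        x, y = np.log(spike_conc + c0), np.log(signal)
        slope, intercept = np.polyfit(x, y, 1)
        r2 = 1.0 - np.var(y - (slope * x + intercept)) / np.var(y)
        if r2 > best_r2:
            best_c, best_r2 = c0, r2
    return best_c

# Illustrative (made-up) data: spiked concentrations (ng/mL) and assay signals
spikes = np.array([0.5, 1.0, 2.0, 4.0])
signals = np.array([1.9, 2.4, 3.3, 4.9])
candidates = np.linspace(0.05, 5.0, 200)
print(f"Estimated endogenous level: {endogenous_estimate(spikes, signals, candidates):.2f} ng/mL")
```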

  11. Characterization of Metal Powders Used for Additive Manufacturing.

    PubMed

    Slotwinski, J A; Garboczi, E J; Stutzman, P E; Ferraris, C F; Watson, S S; Peltz, M A

    2014-01-01

    Additive manufacturing (AM) techniques can produce complex, high-value metal parts, with potential applications as critical parts, such as those found in aerospace components. The production of AM parts with consistent and predictable properties requires input materials (e.g., metal powders) with known and repeatable characteristics, which in turn requires standardized measurement methods for powder properties. First, based on our previous work, we assess the applicability of current standardized methods for powder characterization for metal AM powders. Then we present the results of systematic studies carried out on two different powder materials used for additive manufacturing: stainless steel and cobalt-chrome. The characterization of these powders is important in NIST efforts to develop appropriate measurements and standards for additive materials and to document the property of powders used in a NIST-led additive manufacturing material round robin. An extensive array of characterization techniques was applied to these two powders, in both virgin and recycled states. The physical techniques included laser diffraction particle size analysis, X-ray computed tomography for size and shape analysis, and optical and scanning electron microscopy. Techniques sensitive to structure and chemistry, including X-ray diffraction, energy dispersive analytical X-ray analysis using the X-rays generated during scanning electron microscopy, and X-Ray photoelectron spectroscopy were also employed. The results of these analyses show how virgin powder changes after being exposed to and recycled from one or more Direct Metal Laser Sintering (DMLS) additive manufacturing build cycles. In addition, these findings can give insight into the actual additive manufacturing process.

  12. New procedure of quantitative mapping of Ti and Al released from dental implant and Mg, Ca, Fe, Zn, Cu, Mn as physiological elements in oral mucosa by LA-ICP-MS.

    PubMed

    Sajnóg, Adam; Hanć, Anetta; Koczorowski, Ryszard; Barałkiewicz, Danuta

    2017-12-01

    A new procedure for determination of elements derived from titanium implants and physiological elements in soft tissues by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is presented. The analytical procedure involved preparation of in-house matrix-matched solid standards with analyte addition, based on certified reference material (CRM) MODAS-4 Cormorant Tissue. Addition of gelatin, serving as a binding agent, substantially improved the physical properties of the standards. Performance of the analytical method was assessed and validated by calculating parameters such as precision, detection limits, trueness and recovery of the analyte addition using an additional CRM, ERM-BB184 Bovine Muscle. The analyte addition was additionally confirmed by microwave digestion of the solid standards and analysis by solution nebulization ICP-MS. The detection limits range from 1.8 μg g(-1) (Mn) to 450 μg g(-1) (Ca). The precision values range from 7.3% (Al) to 42% (Zn). The estimated recoveries of the analyte addition lie within the range of 83% (Mn) to 153% (Cu). Oral mucosa samples taken from patients treated with titanium dental implants were examined using the developed analytical method. Standards and tissue samples were cryocut into 30 µm thin sections. LA-ICP-MS yielded two-dimensional maps of the distribution of elements in the tested samples, which revealed a high content of Ti and Al derived from the implants. Optical microscope photographs showed numerous micrometre-sized particles in the oral mucosa samples, suggesting that they are residues from the implantation procedure. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. [Standard addition determination of impurities in Na2CrO4 by ICP-AES].

    PubMed

    Wang, Li-ping; Feng, Hai-tao; Dong, Ya-ping; Peng, Jiao-yu; Li, Wu; Shi, Hai-qin; Wang, Yong

    2015-02-01

    Inductively coupled plasma atomic emission spectrometry (ICP-AES) was used to determine the trace impurities Ca, Mg, Al, Fe and Si in industrial sodium chromate. Wavelengths of 167.079, 393.366, 259.940, 279.533 and 251.611 nm were selected as analytical lines for the determination of Al, Ca, Fe, Mg and Si, respectively. The analytical errors can be eliminated by adjusting the determined solution with high-purity hydrochloric acid. The standard addition method was used to eliminate matrix effects. The linear correlation, detection limit, precision and recovery for the trace impurities of concern have been examined. The effect of the standard addition method on the accuracy of the determination under the selected analytical lines has been studied in detail. The results show that the linear correlations of the standard curves were very good (R2 = 0.9988 to 0.9996) under the determined conditions. Detection limits of these trace impurities were in the range of 0.0134 to 0.0280 mg x L(-1). Sample recoveries were within 97.30% to 107.50%, and relative standard deviations were lower than 5.86% for eleven repeated determinations. The detection limits and accuracies established by the experiment can meet the analytical requirements, and the analytical procedure was successfully used to determine trace impurities in sodium chromate produced by the ion-membrane electrolysis technique. Because sodium chromate can be converted into sodium dichromate and chromic acid by adding acids, the established method can also be used to monitor trace impurities in these compounds or other hexavalent chromium compounds.
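
    The recovery and precision figures quoted above follow from simple formulas that are easy to restate. The snippet below is a generic, hypothetical sketch (not tied to the paper's data): it computes the recovery of a standard addition from the spiked and unspiked measurements and the relative standard deviation of repeated determinations; the numbers are invented.

```python
import numpy as np

def spike_recovery(measured_spiked, measured_unspiked, added):
    """Recovery (%) of a standard addition: fraction of the added amount
    recovered in the spiked measurement."""
    return 100.0 * (measured_spiked - measured_unspiked) / added

def rsd(values):
    """Relative standard deviation (%) of repeated determinations."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

# Illustrative (made-up) numbers for a Ca impurity, in mg/L
print(f"Recovery: {spike_recovery(2.05, 1.02, 1.00):.1f} %")
replicates = np.random.default_rng(0).normal(1.02, 0.03, 11)
print(f"RSD (n=11): {rsd(replicates):.1f} %")
```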

  14. Study on AC loss measurements of HTS power cable for standardizing

    NASA Astrophysics Data System (ADS)

    Mukoyama, Shinichi; Amemiya, Naoyuki; Watanabe, Kazuo; Iijima, Yasuhiro; Mido, Nobuhiro; Masuda, Takao; Morimura, Toshiya; Oya, Masayoshi; Nakano, Tetsutaro; Yamamoto, Kiyoshi

    2017-09-01

    High-temperature superconducting power cables (HTS cables) have been developed for more than 20 years. In addition to the cable developments, test methods for HTS cables have been discussed and proposed in many laboratories and companies. Recently, there has been a need to standardize the test methods for HTS cables and make them common worldwide. CIGRE formed a working group (B1-31) to discuss test methods for HTS cables as power cables and published a recommendation on the test method. Additionally, IEC TC20 submitted a New Work Item Proposal (NP) based on the CIGRE recommendation this year, and IEC TC20 and IEC TC90 started the standardization work on Testing of HTS AC cables. However, the individual test methods used to measure the performance of HTS cables have not yet been established as common worldwide methods. AC loss is one of the most important properties for disseminating low-loss, economically efficient HTS cables worldwide. We aim to establish a rational and highly accurate method for AC loss measurement. Japan is in a leading position in AC loss studies, because Japanese researchers have studied AC loss both technically and scientifically and have also developed effective technologies for AC loss reduction. The Japanese domestic commission of TC90 formed a working team to discuss methods for AC loss measurement, aiming ultimately at an international standard. This paper reports on AC loss measurements of two types of HTS conductors, a HTS conductor without a HTS shield and a HTS conductor with a HTS shield. The AC loss is measured by the electrical method.

  15. A highly parallel multigrid-like method for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Tuminaro, Ray S.

    1989-01-01

    We consider a highly parallel multigrid-like method for the solution of the two-dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.

  16. Quantification of transformation products of rocket fuel unsymmetrical dimethylhydrazine in soils using SPME and GC-MS.

    PubMed

    Bakaikina, Nadezhda V; Kenessov, Bulat; Ul'yanovskii, Nikolay V; Kosyakov, Dmitry S

    2018-07-01

    Determination of transformation products (TPs) of the rocket fuel unsymmetrical dimethylhydrazine (UDMH) in soil is highly important for environmental impact assessment of the launches of heavy space rockets from Kazakhstan, Russia, China and India. The method based on headspace solid-phase microextraction (HS SPME) and gas chromatography-mass spectrometry is advantageous over other known methods due to greater simplicity and cost efficiency. However, accurate quantification of these analytes using HS SPME is limited by the matrix effect. In this research, we proposed using internal standard and standard addition calibrations to achieve a suitable balance between quantification accuracy for the key TPs of UDMH and cost efficiency. 1-Trideuteromethyl-1H-1,2,4-triazole (MTA-d3) was used as the internal standard. Internal standard calibration allowed matrix effects to be controlled during quantification of 1-methyl-1H-1,2,4-triazole (MTA), N,N-dimethylformamide (DMF), and N-nitrosodimethylamine (NDMA) in soils with humus content < 1%. Using SPME at 60 °C for 15 min with a 65 µm Carboxen/polydimethylsiloxane fiber, recoveries of MTA, DMF and NDMA for sandy and loamy soil samples were 91-117, 85-123 and 64-132%, respectively. To improve the method accuracy and widen the range of analytes, standard addition and its combination with internal standard calibration were tested and compared on real soil samples. The combined calibration approach provided the greatest accuracy for NDMA, DMF, N-methylformamide, formamide, 1H-pyrazole, 3-methyl-1H-pyrazole and 1H-pyrazole. For determination of 1-formyl-2,2-dimethylhydrazine, 3,5-dimethylpyrazole, 2-ethyl-1H-imidazole, 1H-imidazole, 1H-1,2,4-triazole, pyrazines and pyridines, standard addition calibration is more suitable. However, the proposed approach and the collected data allow both approaches to be used simultaneously. Copyright © 2018 Elsevier B.V. All rights reserved.
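
    Combining internal standard and standard addition calibration, as discussed above, amounts to normalizing the analyte response by the isotope-labelled internal standard before the usual extrapolation. The sketch below is a hypothetical illustration of that idea (the spike levels and peak areas are made up, and MTA/MTA-d3 appear only as placeholder names), not the authors' implementation.

```python
import numpy as np

def standard_addition_with_is(added, analyte_area, istd_area):
    """Standard addition on internal-standard-normalized responses: divide the
    analyte peak area by the internal standard area, then extrapolate the
    fitted line to zero response."""
    ratio = np.asarray(analyte_area, dtype=float) / np.asarray(istd_area, dtype=float)
    slope, intercept = np.polyfit(added, ratio, 1)
    return intercept / slope

# Illustrative (made-up) data: MTA spikes (mg/kg), MTA peak areas, MTA-d3 areas
added = np.array([0.0, 0.5, 1.0, 2.0])
mta_area = np.array([1.2e5, 1.9e5, 2.6e5, 4.1e5])
mta_d3_area = np.array([3.0e5, 3.1e5, 2.9e5, 3.0e5])
print(f"Estimated native MTA: {standard_addition_with_is(added, mta_area, mta_d3_area):.2f} mg/kg")
```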

  17. Quantifying stream nutrient uptake from ambient to saturation with instantaneous tracer additions

    NASA Astrophysics Data System (ADS)

    Covino, T. P.; McGlynn, B. L.; McNamara, R.

    2009-12-01

    Stream nutrient tracer additions and spiraling metrics are frequently used to quantify stream ecosystem behavior. However, standard approaches limit our understanding of aquatic biogeochemistry. Specifically, the relationship between in-stream nutrient concentration and stream nutrient spiraling has not been characterized. The standard constant-rate (steady-state) approach to stream spiraling parameter estimation, either through elevating nutrient concentration or adding isotopically labeled tracers (e.g. 15N), provides little information regarding the stream kinetic curve that represents the uptake-concentration relationship analogous to the Michaelis-Menten curve. These standard approaches provide single or a few data points and often focus on estimating ambient uptake under the conditions at the time of the experiment. Here we outline and demonstrate a new method using instantaneous nutrient additions and dynamic analyses of breakthrough curve (BTC) data to characterize the full relationship between spiraling metrics and nutrient concentration. We compare the results from these dynamic analyses to BTC-integrated and standard steady-state approaches. Our results indicate good agreement between these three approaches, but we highlight the advantages of our dynamic method. Specifically, our new dynamic method provides a cost-effective and efficient approach to: 1) characterize full concentration-spiraling metric curves; 2) estimate ambient spiraling metrics; 3) estimate the Michaelis-Menten parameters maximum uptake (Umax) and the half-saturation constant (Km) from the developed uptake-concentration kinetic curves; and 4) measure dynamic nutrient spiraling in larger rivers where steady-state approaches are impractical.
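
    Once uptake has been estimated across a range of concentrations from the breakthrough-curve analysis, fitting the Michaelis-Menten parameters is a small nonlinear regression. The snippet below is a generic sketch of that final step with invented data, not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(conc, umax, km):
    """Areal uptake as a function of nutrient concentration."""
    return umax * conc / (km + conc)

# Illustrative (made-up) data: concentration vs. uptake from dynamic BTC analysis
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
uptake = np.array([2.1, 3.8, 6.0, 8.5, 10.4, 11.6])

(umax, km), _ = curve_fit(michaelis_menten, conc, uptake, p0=(12.0, 30.0))
print(f"Umax = {umax:.1f}, Km = {km:.1f}")
```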

  18. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
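
    The constrained-regression view described above can be sketched with a nonnegative least-squares fit plus a bootstrap to obtain empirical standard errors. The code below is a hypothetical illustration of that general idea with a random design matrix, not the authors' feature network software.

```python
import numpy as np
from scipy.optimize import nnls

def nnls_bootstrap_se(X, y, n_boot=500, seed=0):
    """Fit y = X b with nonnegativity constraints on b and estimate empirical
    standard errors of the constrained parameters by bootstrapping rows."""
    rng = np.random.default_rng(seed)
    n = len(y)
    boot = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        boot[b], _ = nnls(X[idx], y[idx])
    params, _ = nnls(X, y)
    return params, boot.std(axis=0, ddof=1)

# Illustrative (made-up) data: binary feature indicators and noisy proximities
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(40, 3)).astype(float)
y = X @ np.array([1.0, 0.5, 2.0]) + rng.normal(0.0, 0.1, 40)
params, se = nnls_bootstrap_se(X, y)
print("parameters:", params, "empirical SE:", se)
```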

  19. Characterization of Metal Powders Used for Additive Manufacturing

    PubMed Central

    Slotwinski, JA; Garboczi, EJ; Stutzman, PE; Ferraris, CF; Watson, SS; Peltz, MA

    2014-01-01

    Additive manufacturing (AM) techniques can produce complex, high-value metal parts, with potential applications as critical parts, such as those found in aerospace components. The production of AM parts with consistent and predictable properties requires input materials (e.g., metal powders) with known and repeatable characteristics, which in turn requires standardized measurement methods for powder properties. First, based on our previous work, we assess the applicability of current standardized methods for powder characterization for metal AM powders. Then we present the results of systematic studies carried out on two different powder materials used for additive manufacturing: stainless steel and cobalt-chrome. The characterization of these powders is important in NIST efforts to develop appropriate measurements and standards for additive materials and to document the property of powders used in a NIST-led additive manufacturing material round robin. An extensive array of characterization techniques was applied to these two powders, in both virgin and recycled states. The physical techniques included laser diffraction particle size analysis, X-ray computed tomography for size and shape analysis, and optical and scanning electron microscopy. Techniques sensitive to structure and chemistry, including X-ray diffraction, energy dispersive analytical X-ray analysis using the X-rays generated during scanning electron microscopy, and X-Ray photoelectron spectroscopy were also employed. The results of these analyses show how virgin powder changes after being exposed to and recycled from one or more Direct Metal Laser Sintering (DMLS) additive manufacturing build cycles. In addition, these findings can give insight into the actual additive manufacturing process. PMID:26601040

  20. Efficient Implementation of the Invariant Imbedding T-Matrix Method and the Separation of Variables Method Applied to Large Nonspherical Inhomogeneous Particles

    NASA Technical Reports Server (NTRS)

    Bi, Lei; Yang, Ping; Kattawar, George W.; Mishchenko, Michael I.

    2012-01-01

    Three terms, ''Waterman's T-matrix method'', ''extended boundary condition method (EBCM)'', and ''null field method'', have been interchangeable in the literature to indicate a method based on surface integral equations to calculate the T-matrix. Unlike the previous method, the invariant imbedding method (IIM) calculates the T-matrix by the use of a volume integral equation. In addition, the standard separation of variables method (SOV) can be applied to compute the T-matrix of a sphere centered at the origin of the coordinate system and having a maximal radius such that the sphere remains inscribed within a nonspherical particle. This study explores the feasibility of a numerical combination of the IIM and the SOV, hereafter referred to as the IIM+SOV method, for computing the single-scattering properties of nonspherical dielectric particles, which are, in general, inhomogeneous. The IIM+SOV method is shown to be capable of solving light-scattering problems for large nonspherical particles where the standard EBCM fails to converge. The IIM+SOV method is flexible and applicable to inhomogeneous particles and aggregated nonspherical particles (overlapped circumscribed spheres) representing a challenge to the standard superposition T-matrix method. The IIM+SOV computational program, developed in this study, is validated against EBCM simulated spheroid and cylinder cases with excellent numerical agreement (up to four decimal places). In addition, solutions for cylinders with large aspect ratios, inhomogeneous particles, and two-particle systems are compared with results from discrete dipole approximation (DDA) computations, and comparisons with the improved geometric-optics method (IGOM) are found to be quite encouraging.

  1. Preliminary evaluation of a gel tube agglutination major cross-match method in dogs.

    PubMed

    Villarnovo, Dania; Burton, Shelley A; Horney, Barbara S; MacKenzie, Allan L; Vanderstichel, Raphaël

    2016-09-01

    A major cross-match gel tube test is available for use in dogs yet has not been clinically evaluated. This study compared cross-match results obtained using the gel tube and the standard tube methods for canine samples. Study 1 included 107 canine sample donor-recipient pairings cross-match tested with the RapidVet-H method gel tube test and compared results with the standard tube method. Additionally, 120 pairings using pooled sera containing anti-canine erythrocyte antibody at various concentrations were tested with leftover blood from a hospital population to assess sensitivity and specificity of the gel tube method in comparison with the standard method. The gel tube method had a good relative specificity of 96.1% in detecting lack of agglutination (compatibility) compared to the standard tube method. Agreement between the 2 methods was moderate. Nine of 107 pairings showed agglutination/incompatibility on either test, too few to allow reliable calculation of relative sensitivity. Fifty percent of the gel tube method results were difficult to interpret due to sample spreading in the reaction and/or negative control tubes. The RapidVet-H method agreed with the standard cross-match method on compatible samples, but detected incompatibility in some sample pairs that were compatible with the standard method. Evaluation using larger numbers of incompatible pairings is needed to assess diagnostic utility. The gel tube method results were difficult to categorize due to sample spreading. Weak agglutination reactions or other factors such as centrifuge model may be responsible. © 2016 American Society for Veterinary Clinical Pathology.

  2. A comparison of in vitro cytotoxicity assays in medical device regulatory studies.

    PubMed

    Liu, Xuemei; Rodeheaver, Denise P; White, Jeffrey C; Wright, Ann M; Walker, Lisa M; Zhang, Fan; Shannon, Stephen

    2018-06-06

    Medical device biocompatibility testing is used to evaluate the risk of adverse effects on tissues from exposure to leachates/extracts. A battery of tests is typically recommended in accordance with regulatory standards to determine if the device is biocompatible. In vitro cytotoxicity, a key element of the standards, is a required endpoint for all types of medical devices. Each validated cytotoxicity method has different methodology and acceptance criteria that could influence the selection of a specific test. In addition, some guidances are more specific than others as to the recommended test methods. For example, the International Organization for Standardization (ISO) cites preference for quantitative methods (e.g., tetrazolium (MTT/XTT), neutral red (NR), or colony formation assays (CFA)) over qualitative methods (e.g., elution, agar overlay/diffusion, or direct), while a recent ISO standard for contact lens/lens care solutions specifically requires a qualitative direct test. Qualitative methods are described in the United States Pharmacopeia (USP) while quantitative CFAs are listed in Japan guidance. The aim of this review is to compare the methodologies such as test article preparation, test conditions, and criteria for six cytotoxicity methods recommended in regulatory standards in order to inform decisions on which method(s) to select during the medical device safety evaluation. Copyright © 2018. Published by Elsevier Inc.

  3. Standard addition method for the determination of pharmaceutical residues in drinking water by SPE-LC-MS/MS.

    PubMed

    Cimetiere, Nicolas; Soutrel, Isabelle; Lemasle, Marguerite; Laplanche, Alain; Crocq, André

    2013-01-01

    The study of the occurrence and fate of pharmaceutical compounds in drinking or waste water processes has become very popular in recent years. Liquid chromatography with tandem mass spectrometry is a powerful analytical tool often used to determine pharmaceutical residues at trace level in water. However, many steps may disrupt the analytical procedure and bias the results. A list of 27 environmentally relevant molecules, including various therapeutic classes (cardiovascular, veterinary and human antibiotics, neuroleptics, non-steroidal anti-inflammatory drugs, hormones and other miscellaneous pharmaceutical compounds), was selected. In this work, a method was developed using ultra performance liquid chromatography coupled to tandem mass spectrometry (UPLC-MS/MS) and solid-phase extraction to determine the concentration of the 27 targeted pharmaceutical compounds at the nanogram per litre level. The matrix effect was evaluated from water sampled at different treatment stages. Conventional methods with external calibration and internal standard correction were compared with the standard addition method (SAM). An accurate determination of pharmaceutical compounds in drinking water was obtained by the SAM associated with UPLC-MS/MS. The developed method was used to evaluate the occurrence and fate of pharmaceutical compounds in some drinking water treatment plants in the west of France.

  4. Apparent annual survival estimates of tropical songbirds better reflect life history variation when based on intensive field methods

    USGS Publications Warehouse

    Martin, Thomas E.; Riordan, Margaret M.; Repin, Rimi; Mouton, James C.; Blake, William M.

    2017-01-01

    Aim: Adult survival is central to theories explaining latitudinal gradients in life history strategies. Life history theory predicts higher adult survival in tropical than north temperate regions given lower fecundity and parental effort. Early studies were consistent with this prediction, but standard-effort netting studies in recent decades suggested that apparent survival rates in temperate and tropical regions strongly overlap. Such results do not fit with life history theory. Targeted marking and resighting of breeding adults yielded higher survival estimates in the tropics, but this approach is thought to overestimate survival because it does not sample social and age classes with lower survival. We compared the effect of field methods on tropical survival estimates and their relationships with life history traits. Location: Sabah, Malaysian Borneo. Time period: 2008–2016. Major taxon: Passeriformes. Methods: We used standard-effort netting and resighted individuals of all social and age classes of 18 tropical songbird species over 8 years. We compared apparent survival estimates between these two field methods with differing analytical approaches. Results: Estimated detection and apparent survival probabilities from standard-effort netting were similar to those from other tropical studies that used standard-effort netting. Resighting data verified that a high proportion of individuals that were never recaptured in standard-effort netting remained in the study area, and many were observed breeding. Across all analytical approaches, addition of resighting yielded substantially higher survival estimates than did standard-effort netting alone. These apparent survival estimates were higher than for temperate zone species, consistent with latitudinal differences in life histories. Moreover, apparent survival estimates from addition of resighting, but not from standard-effort netting alone, were correlated with parental effort as measured by egg temperature across species. Main conclusions: Inclusion of resighting showed that standard-effort netting alone can negatively bias apparent survival estimates and obscure life history relationships across latitudes and among tropical species.

  5. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. A Tool for Estimating Variability in Wood Preservative Treatment Retention

    Treesearch

    Patricia K. Lebow; Adam M. Taylor; Timothy M. Young

    2015-01-01

    Composite sampling is standard practice for evaluation of preservative retention levels in preservative-treated wood. Current protocols provide an average retention value but no estimate of uncertainty. Here we describe a statistical method for calculating uncertainty estimates using the standard sampling regime with minimal additional chemical analysis. This tool can...

  7. 14 CFR 25.853 - Compartment interiors.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction Fire Protection § 25.853... criteria prescribed in part I of appendix F of this part, or other approved equivalent methods, regardless... method, in addition to the flammability requirements prescribed in paragraph (a) of this section: (1...

  8. 14 CFR 25.853 - Compartment interiors.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction Fire Protection § 25.853... criteria prescribed in part I of appendix F of this part, or other approved equivalent methods, regardless... method, in addition to the flammability requirements prescribed in paragraph (a) of this section: (1...

  9. 14 CFR 25.853 - Compartment interiors.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction Fire Protection § 25.853... criteria prescribed in part I of appendix F of this part, or other approved equivalent methods, regardless... method, in addition to the flammability requirements prescribed in paragraph (a) of this section: (1...

  10. 14 CFR 25.853 - Compartment interiors.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AIRWORTHINESS STANDARDS: TRANSPORT CATEGORY AIRPLANES Design and Construction Fire Protection § 25.853... criteria prescribed in part I of appendix F of this part, or other approved equivalent methods, regardless... method, in addition to the flammability requirements prescribed in paragraph (a) of this section: (1...

  11. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  12. Automated installation methods for photovoltaic arrays

    NASA Astrophysics Data System (ADS)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reduction of these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary designs of hardware for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  13. A surrogate analyte method to determine D-serine in mouse brain using liquid chromatography-tandem mass spectrometry.

    PubMed

    Kinoshita, Kohnosuke; Jingu, Shigeji; Yamaguchi, Jun-ichi

    2013-01-15

    A bioanalytical method for determining endogenous D-serine levels in the mouse brain using a surrogate analyte and liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed. [2,3,3-(2)H]D-serine and [(15)N]D-serine were used as a surrogate analyte and an internal standard, respectively. The surrogate analyte was spiked into brain homogenate to yield calibration standards and quality control (QC) samples. Both endogenous and surrogate analytes were extracted using protein precipitation followed by solid phase extraction. Enantiomeric separation was achieved on a chiral crown ether column with an analysis time of only 6 min without any derivatization. The column eluent was introduced into an electrospray interface of a triple-quadrupole mass spectrometer. The calibration range was 1.00 to 300 nmol/g, and the method showed acceptable accuracy and precision at all QC concentration levels from a validation point of view. In addition, the brain D-serine levels of normal mice determined using this method were the same as those obtained by a standard addition method, which is time-consuming but is often used for the accurate measurement of endogenous substances. Thus, this surrogate analyte method should be applicable to the measurement of D-serine levels as a potential biomarker for monitoring certain effects of drug candidates on the central nervous system. Copyright © 2012 Elsevier Inc. All rights reserved.
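
    The surrogate-analyte approach can be pictured as an ordinary calibration built from the labelled surrogate and then applied to the response of the endogenous compound. The snippet below is a hedged, schematic sketch of that logic with invented concentrations and area ratios; it is not the validated method itself.

```python
import numpy as np

def surrogate_calibration(surrogate_conc, area_ratio):
    """Linear calibration from the stable-labelled surrogate analyte, expressed
    as peak-area ratio to the internal standard; returns (slope, intercept)."""
    return np.polyfit(surrogate_conc, area_ratio, 1)

def quantify(sample_ratio, slope, intercept):
    """Apply the surrogate calibration to the endogenous analyte's area ratio."""
    return (sample_ratio - intercept) / slope

# Illustrative (made-up) calibration: surrogate D-serine (nmol/g) vs. area ratio
conc = np.array([1.0, 10.0, 50.0, 100.0, 300.0])
ratio = np.array([0.02, 0.21, 1.05, 2.08, 6.15])
slope, intercept = surrogate_calibration(conc, ratio)
print(f"Endogenous D-serine: {quantify(0.55, slope, intercept):.1f} nmol/g")
```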

  14. Comparison of Analytical Methods for the Determination of Uranium in Seawater Using Inductively Coupled Plasma Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Jordana R.; Gill, Gary A.; Kuo, Li-Jung

    2016-04-20

    Trace element determinations in seawater by inductively coupled plasma mass spectrometry are analytically challenging due to the typically very low concentrations of the trace elements and the potential interference of the salt matrix. In this study, we compared seven different analytical approaches for the determination of uranium by inductively coupled plasma mass spectrometry (ICP-MS) using Sequim Bay seawater samples and three seawater certified reference materials (SLEW-3, CASS-5 and NASS-6). The methods evaluated include: direct analysis, Fe/Pd reductive precipitation, standard addition calibration, online automated dilution using an external calibration with and without matrix matching, and online automated pre-concentration. The method which produced the most accurate results was the method of standard addition calibration, recovering uranium from a Sequim Bay seawater sample at 101 ± 1.2%. The on-line preconcentration method and the automated dilution with matrix-matched calibration method also performed well. The two least effective methods were the direct analysis and the Fe/Pd reductive precipitation using sodium borohydride.

  15. A method for measuring low-weight carboxylic acids from biosolid compost.

    PubMed

    Himanen, Marina; Latva-Kala, Kyösti; Itävaara, Merja; Hänninen, Kari

    2006-01-01

    Concentration of low-weight carboxylic acids (LWCA) is one of the important parameters that should be taken into consideration when compost is applied as a soil improver for plant cultivation, because high amounts of LWCA can be toxic to plants. The present work describes a method for the analysis of LWCA in compost as a useful tool for monitoring compost quality and safety. The method was tested on compost samples of two different ages: 3 (immature) and 6 (mature) months old. Acids from compost samples were extracted at high pH, filtered, and freeze-dried. The dried sodium salts were derivatized with a sulfuric acid-methanol mixture and concentrations of 11 low-weight fatty acids (C1-C10) were analyzed using headspace gas chromatography. The material was analyzed with two analytical techniques: the external calibration method (tested on 11 LWCA) and the standard addition method (tested only on formic, acetic, propionic, butyric, and iso-butyric acids). The two techniques were compared for the efficiency of acid quantification. The method allowed good separation and quantification of a wide range of individual acids with high sensitivity at low concentrations. The detection limit for propionic, butyric, caproic, caprylic, and capric acids was 1 mg kg(-1) compost; for formic, acetic, valeric, enanthoic and pelargonic acids it was 5 mg kg(-1) compost; and for iso-butyric acid it was 10 mg kg(-1) compost. Recovery rates of LWCA were higher in 3-mo-old compost (57-99%) than in 6-mo-old compost (29-45%). In comparison with the external calibration technique, the standard addition technique proved to be three to four times more precise for the older compost and two times more precise for the younger compost. Disadvantages of the standard addition technique are that it is more time-consuming and laborious.

  16. Liquid-liquid extraction of strongly protein bound BMS-299897 from human plasma and cerebrospinal fluid, followed by high-performance liquid chromatography/tandem mass spectrometry.

    PubMed

    Xue, Y J; Pursley, Janice; Arnold, Mark

    2007-04-11

    BMS-299897 is a gamma-secretase inhibitor that is being developed for the treatment of Alzheimer's disease. Liquid-liquid extraction (LLE) and liquid chromatography/tandem mass spectrometry (LC/MS/MS) methods have been developed and validated for the quantitation of BMS-299897 in human plasma and cerebrospinal fluid (CSF). Both methods utilized (13)C6-BMS-299897, the stable isotope-labeled analog, as the internal standard. For the human plasma extraction method, two incubation steps were required after the addition of 5 mM ammonium acetate and the internal standard in acetonitrile to release the analyte bound to proteins prior to LLE with toluene. For the human CSF extraction method, after the addition of 0.5 N HCl and the internal standard, CSF samples were extracted with toluene and no incubation was required. The organic layers obtained from both extraction methods were removed and evaporated to dryness. The residues were reconstituted and injected into the LC/MS/MS system. Chromatographic separation was achieved isocratically on a MetaChem C18 Hypersil BDS column (2.0 mm x 50 mm, 3 microm). The mobile phase contained 10 mM ammonium acetate pH 5 and acetonitrile. Detection was by negative ion electrospray tandem mass spectrometry. The standard curves ranged from 1 to 1000 ng/ml for human plasma and 0.25-100 ng/ml for human CSF. Both standard curves were fitted to a 1/x weighted quadratic regression model. For both methods, the intra-assay precision was within 8.2% CV, the inter-assay precision was within 5.4% CV, and assay accuracy was within +/-7.4% of the nominal values. The validation and sample analysis results demonstrated that both methods had acceptable precision and accuracy across the calibration ranges.
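
    A 1/x weighted quadratic calibration of the kind described above can be sketched with a weighted polynomial fit and a root-finding step for back-calculation. The code below is a generic, hypothetical illustration (the concentrations and responses are invented), not the validated assay.

```python
import numpy as np

def weighted_quadratic_fit(conc, response):
    """Quadratic calibration with 1/x weighting, i.e. minimize
    sum((1/x) * (y - p(x))**2).  np.polyfit squares the supplied weights,
    so pass sqrt(1/x)."""
    return np.polyfit(conc, response, 2, w=np.sqrt(1.0 / conc))

def back_calculate(coeffs, response):
    """Invert the fitted quadratic to recover concentration from a response."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - response])
    real_roots = roots[np.isreal(roots)].real
    return real_roots[real_roots > 0].min()

# Illustrative (made-up) plasma calibration over 1-1000 ng/mL
conc = np.array([1.0, 5.0, 25.0, 100.0, 400.0, 1000.0])
resp = np.array([0.012, 0.060, 0.29, 1.12, 4.2, 9.8])
coeffs = weighted_quadratic_fit(conc, resp)
print(f"Back-calculated concentration: {back_calculate(coeffs, 2.0):.0f} ng/mL")
```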

  17. Study Methods to Standardize Thermography NDE

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Workman, Gary L.

    1998-01-01

    The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards and structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include various graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated and the thickness of the structures varies from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality and quantify the NDE thermal images when necessary.

  18. Study Methods to Standardize Thermography NDE

    NASA Technical Reports Server (NTRS)

    Walker, James L.; Workman, Gary L.

    1998-01-01

    The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards and structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated and the thickness of the structures varies from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality and quantify the NDE thermal images when necessary.

  19. Toward unbiased estimations of the statefinder parameters

    NASA Astrophysics Data System (ADS)

    Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando

    2017-09-01

    With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and with significant bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and making standard cosmography unsuitable for future, wider compilations of data. To overcome this issue, we propose a new method that consists in introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.

  20. Peelle's pertinent puzzle using the Monte Carlo technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawano, Toshihiko; Talou, Patrick; Burr, Thomas

    2009-01-01

    We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form, in order to assess the impact of the assumed distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct if the common error is additive and the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
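
    The puzzling 0.88 result can be reproduced with a few lines of generalized least squares, assuming the textbook PPP inputs (two measurements of the same quantity, 1.5 and 1.0, each with a 10% independent error and a 20% fully correlated common error, all evaluated at the measured values). The sketch below is a hedged illustration of that conventional treatment, not the Monte Carlo study itself.

```python
import numpy as np

# Textbook PPP inputs (assumed here): measurements, independent and common errors
m = np.array([1.5, 1.0])
stat = 0.10 * m                      # independent (statistical) components
common = 0.20 * m                    # fully correlated (common) component
V = np.diag(stat**2) + np.outer(common, common)

# Generalized least-squares average of the two measurements
ones = np.ones_like(m)
Vinv = np.linalg.inv(V)
x_hat = (ones @ Vinv @ m) / (ones @ Vinv @ ones)
sigma = np.sqrt(1.0 / (ones @ Vinv @ ones))
print(f"GLS estimate: {x_hat:.2f} +/- {sigma:.2f}")   # about 0.88 +/- 0.22
```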

  1. Certification of elements in and use of standard reference material 3280 multivitamin/multielement tablets.

    PubMed

    Turk, Gregory C; Sharpless, Katherine E; Cleveland, Danielle; Jongsma, Candice; Mackey, Elizabeth A; Marlow, Anthony F; Oflaz, Rabia; Paul, Rick L; Sieber, John R; Thompson, Robert Q; Wood, Laura J; Yu, Lee L; Zeisler, Rolf; Wise, Stephen A; Yen, James H; Christopher, Steven J; Day, Russell D; Long, Stephen E; Greene, Ella; Harnly, James; Ho, I-Pin; Betz, Joseph M

    2013-01-01

    Standard Reference Material 3280 Multivitamin/Multielement Tablets was issued by the National Institute of Standards and Technology in 2009, and has certified and reference mass fraction values for 13 vitamins, 26 elements, and two carotenoids. Elements were measured using two or more analytical methods at NIST with additional data contributed by collaborating laboratories. This reference material is expected to serve a dual purpose: to provide quality assurance in support of a database of dietary supplement products and to provide a means for analysts, dietary supplement manufacturers, and researchers to assess the appropriateness and validity of their analytical methods and the accuracy of their results.

  2. Standard model of knowledge representation

    NASA Astrophysics Data System (ADS)

    Yin, Wensheng

    2016-09-01

    Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. According to ontology, system theory, and control theory, a standard model of knowledge representation that reflects the change of the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict the traditional knowledge representation methods. It can express knowledge in multivariate and multidimensional terms, it can express process knowledge, and at the same time it has a strong ability to solve problems. In addition, the standard model of knowledge representation provides a way to solve problems involving imprecise and inconsistent knowledge.

  3. Quantitative Detection of Trace Explosive Vapors by Programmed Temperature Desorption Gas Chromatography-Electron Capture Detector

    PubMed Central

    Field, Christopher R.; Lubrano, Adam; Woytowitz, Morgan; Giordano, Braden C.; Rose-Pehrsson, Susan L.

    2014-01-01

    The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples. PMID:25145416

  4. Quantitative detection of trace explosive vapors by programmed temperature desorption gas chromatography-electron capture detector.

    PubMed

    Field, Christopher R; Lubrano, Adam; Woytowitz, Morgan; Giordano, Braden C; Rose-Pehrsson, Susan L

    2014-07-25

    The direct liquid deposition of solution standards onto sorbent-filled thermal desorption tubes is used for the quantitative analysis of trace explosive vapor samples. The direct liquid deposition method yields a higher fidelity between the analysis of vapor samples and the analysis of solution standards than using separate injection methods for vapors and solutions, i.e., samples collected on vapor collection tubes and standards prepared in solution vials. Additionally, the method can account for instrumentation losses, which makes it ideal for minimizing variability and quantitative trace chemical detection. Gas chromatography with an electron capture detector is an instrumentation configuration sensitive to nitro-energetics, such as TNT and RDX, due to their relatively high electron affinity. However, vapor quantitation of these compounds is difficult without viable vapor standards. Thus, we eliminate the requirement for vapor standards by combining the sensitivity of the instrumentation with a direct liquid deposition protocol to analyze trace explosive vapor samples.

  5. Pharmacist perceptions of new competency standards

    PubMed Central

    Maitreemit, Pagamas; Pongcharoensuk, Petcharat; Kapol, Nattiya; Armstrong, Edward P.

    2008-01-01

    Objective To suggest revisions to the Thai pharmacy competency standards and determine the perceptions of Thai pharmacy practitioners and faculty about the proposed pharmacy competency standards. Methods The current competency standards were revised in a brainstorming session with nine Thai pharmacy experts according to their perceptions of society's pharmacy needs. The revised standards were proposed to and validated by 574 pharmacy practitioners and faculty members using a written questionnaire. The respondents were classified based on their practice setting. Results The revision of the pharmacy competency standards proposed both the integration of and additions to the current competencies. Of 830 distributed questionnaires, 574 completed questionnaires were received (69.2% response rate). The proposed new competency standards contained 7 domains and 46 competencies. The majority of the respondents were supportive of all 46 proposed competencies. The highest ranked domain was Domain 1 (Practice Pharmacy within Laws, Professional Standards, and Ethics). The second and third highest expectations of pharmacy graduates were Domain 4 (Provide pharmaceutical care) and Domain 3 (Communicate and disseminate knowledge effectively). Conclusion The expectations for pharmacy graduates' competencies were high, and respondents encouraged additional growth in multidisciplinary efforts to improve patient care. PMID:25177401

  6. Personalized monitoring of therapeutic salicylic acid in dried blood spots using a three-layer setup and desorption electrospray ionization mass spectrometry.

    PubMed

    Siebenhaar, Markus; Küllmer, Kai; Fernandes, Nuno Miguel de Barros; Hüllen, Volker; Hopf, Carsten

    2015-09-01

    Desorption electrospray ionization (DESI) mass spectrometry is an emerging technology for direct therapeutic drug monitoring in dried blood spots (DBS). Current DBS methods require manual application of small molecules as internal standards for absolute drug quantification. With industrial standardization in mind, we superseded the manual addition of standard and built a three-layer setup for robust quantification of salicylic acid directly from DBS. We combined a dioctyl sodium sulfosuccinate weave facilitating sample spreading with a cellulose layer for addition of isotope-labeled salicylic acid as internal standard and a filter paper for analysis of the standard-containing sample by DESI-MS. Using this setup, we developed a quantification method for salicylic acid from whole blood with a validated linear curve range from 10 to 2000 mg/L, a relative standard deviation (RSD%) ≤14%, and determination coefficients of 0.997. The limit of detection (LOD) was 8 mg/L and the lower limit of quantification (LLOQ) was 10 mg/L. Recovery rates in method verification by LC-MS/MS were 97 to 101% for blinded samples. Most importantly, a study in healthy volunteers after administration of a single dose of Aspirin provides evidence to suggest that the three-layer setup may enable individual pharmacokinetic and endpoint testing following blood collection by finger pricking by patients at home. Taken together, our data suggests that DBS-based quantification of drugs by DESI-MS on pre-manufactured three-layer cartridges may be a promising approach for future near-patient therapeutic drug monitoring.

  7. Improved detection of sugar addition to maple syrup using malic acid as internal standard and in 13C isotope ratio mass spectrometry (IRMS).

    PubMed

    Tremblay, Patrice; Paquin, Réal

    2007-01-24

    Stable carbon isotope ratio mass spectrometry (δ13C IRMS) was used to detect maple syrup adulteration by exogenous sugar addition (beet and cane sugar). Malic acid present in maple syrup is proposed as an isotopic internal standard to improve current adulteration detection levels. A lead precipitation method was modified to quantitatively isolate malic acid from maple syrup using preparative reversed-phase liquid chromatography. The stable carbon isotopic ratio of malic acid isolated by this procedure shows excellent accuracy and repeatability of 0.01 and 0.1 per thousand, respectively, confirming that the modified lead precipitation method is an isotopic-fractionation-free process. A new approach is proposed to detect adulteration based on the correlation existing between δ13Cmalic acid and δ13Csugars − δ13Cmalic acid (r = 0.704). This technique was tested on a set of 56 authentic maple syrup samples. Additionally, authentic samples were spiked with exogenous sugars. The mean theoretical detection level was statistically lowered using this technique in comparison with the usual two-standard-deviation approach, especially when maple syrup is adulterated with beet sugar: 24 ± 12% adulteration detection versus 48 ± 20% (t-test, p = 7.3 × 10^-15). The method was also applied to published data for pineapple juices and honey with the same improvement.

  8. Combined proportional and additive residual error models in population pharmacokinetic modelling.

    PubMed

    Proost, Johannes H

    2017-11-15

    In pharmacokinetic modelling, a combined proportional and additive residual error model is often preferred over a proportional or additive residual error model. Different approaches have been proposed, but a comparison between approaches is still lacking. The theoretical background of the methods is described. Method VAR assumes that the variance of the residual error is the sum of the statistically independent proportional and additive components; this method can be coded in three ways. Method SD assumes that the standard deviation of the residual error is the sum of the proportional and additive components. Using datasets from literature and simulations based on these datasets, the methods are compared using NONMEM. The different coding of methods VAR yield identical results. Using method SD, the values of the parameters describing residual error are lower than for method VAR, but the values of the structural parameters and their inter-individual variability are hardly affected by the choice of the method. Both methods are valid approaches in combined proportional and additive residual error modelling, and selection may be based on OFV. When the result of an analysis is used for simulation purposes, it is essential that the simulation tool uses the same method as used during analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
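
    The two parameterizations compared in the abstract differ only in how the additive and proportional components are combined, which a few lines of Python make explicit; the numerical values below are invented for illustration.

        import numpy as np

        def sd_method_var(pred, sigma_add, sigma_prop):
            """Method VAR: the variances of the two components add."""
            return np.sqrt(sigma_add**2 + (sigma_prop * pred)**2)

        def sd_method_sd(pred, sigma_add, sigma_prop):
            """Method SD: the standard deviations of the two components add."""
            return sigma_add + sigma_prop * pred

        for pred in (0.5, 5.0, 50.0):   # model-predicted concentrations
            print(pred,
                  round(sd_method_var(pred, sigma_add=0.2, sigma_prop=0.15), 3),
                  round(sd_method_sd(pred, sigma_add=0.2, sigma_prop=0.15), 3))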

  9. Field Evaluation of Portable and Central Site PM Samplers Emphasizing Additive and Differential Mass Concentration Estimates

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) published a National Ambient Air Quality Standard (NAAQS) and the accompanying Federal Reference Method (FRM) for PM10 in 1987. The EPA revised the particle standards and FRM in 1997 to include PM2.5. In 2005, EPA...

  10. 40 CFR 761.19 - References.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... approved materials are also available for inspection at the National Archives and Records Administration...://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. In addition, these... Bomb Method), IBR approved for § 761.71. (3) ASTM D240-87, Standard Test Method for Heat of Combustion...

  11. 40 CFR 761.19 - References.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... approved materials are also available for inspection at the National Archives and Records Administration...://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. In addition, these... Bomb Method), IBR approved for § 761.71. (3) ASTM D240-87, Standard Test Method for Heat of Combustion...

  12. 40 CFR 761.19 - References.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... approved materials are also available for inspection at the National Archives and Records Administration...://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. In addition, these... Bomb Method), IBR approved for § 761.71. (3) ASTM D240-87, Standard Test Method for Heat of Combustion...

  13. The anesthesia and brain monitor (ABM). Concept and performance.

    PubMed

    Kay, B

    1984-01-01

    Three integral components of the ABM, the frontalis electromyogram (EMG), the processed unipolar electroencephalogram (EEG) and the neuromuscular transmission monitor (NMT) were compared with standard research methods, and their clinical utility indicated. The EMG was compared with the method of Dundee et al (2) for measuring the induction dose of thiopentone; the EEG was compared with the SLE Galileo E8-b and the NMT was compared with the Medelec MS6. In each case correlation of results was extremely high, and the ABM offered some advantages over the standard research methods. We conclude that each of the integral units of the ABM is simple to apply and interpret, yet as accurate as standard apparatus used for research. In addition the ABM offers excellent display and recording facilities and alarm systems.

  14. Determination of Calcium in Cereal with Flame Atomic Absorption Spectroscopy: An Experiment for a Quantitative Methods of Analysis Course

    ERIC Educational Resources Information Center

    Bazzi, Ali; Kreuz, Bette; Fischer, Jeffrey

    2004-01-01

    An experiment for the determination of calcium in cereal using a two-increment standard addition method in conjunction with flame atomic absorption spectroscopy (FAAS) is demonstrated. The experiment is intended to introduce students to the principles of atomic absorption spectroscopy, giving them hands-on experience using quantitative methods of…
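
    The calculation underlying such a standard addition experiment is compact enough to show directly; in the hypothetical Python sketch below the absorbances and added calcium concentrations are invented, and the sample concentration follows from extrapolating the fitted line back to zero absorbance.

        import numpy as np

        added = np.array([0.0, 2.0, 4.0])             # mg/L Ca added to equal aliquots
        absorbance = np.array([0.210, 0.405, 0.598])  # measured FAAS responses

        slope, intercept = np.polyfit(added, absorbance, 1)
        c_sample = intercept / slope                  # x-axis extrapolation
        print(f"calcium in the diluted sample: {c_sample:.2f} mg/L")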

  15. The determination of calcium in phosphate, carbonate, and silicate rocks by flame photometer

    USGS Publications Warehouse

    Kramer, Henry

    1956-01-01

    A method has been developed for the determination of calcium in phosphate, carbonate, and silicate rocks using the Beckman flame photometer with photomultiplier attachment. The sample is dissolved in hydrofluoric, nitric, and perchloric acids; the hydrofluoric and nitric acids are expelled; a radiation buffer consisting of aluminum, magnesium, iron, sodium, potassium, phosphoric acid, and nitric acid is added; and the solution is atomized in an oxy-hydrogen flame with an instrument setting of 554 mµ. Measurements are made by comparison against calcium standards, prepared in the same manner, in the 0 to 50 ppm range. The suppression of calcium emission by aluminum and phosphate was overcome by the addition of a large excess of magnesium. This addition almost completely restores the standard curve obtained from a solution of calcium nitrate. Interference was noted when the iron concentration in the aspirated solution (including the iron from the buffer) exceeded 100 ppm. Other common rock-forming elements did not interfere. The results obtained by this procedure are within ±2 percent of the calcium oxide values obtained by other methods in the range 1 to 95 percent calcium oxide. In the 0 to 1 percent calcium oxide range the method compares favorably with standard methods.

  16. Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report.

    PubMed

    Levitt, Heidi M; Bamberg, Michael; Creswell, John W; Frost, David M; Josselson, Ruthellen; Suárez-Orozco, Carola

    2018-01-01

    The American Psychological Association Publications and Communications Board Working Group on Journal Article Reporting Standards for Qualitative Research (JARS-Qual Working Group) was charged with examining the state of journal article reporting standards as they applied to qualitative research and with generating recommendations for standards that would be appropriate for a wide range of methods within the discipline of psychology. These standards describe what should be included in a research report to enable and facilitate the review process. This publication marks a historical moment-the first inclusion of qualitative research in APA Style, which is the basis of both the Publication Manual of the American Psychological Association (APA, 2010) and APA Style CENTRAL, an online program to support APA Style. In addition to the general JARS-Qual guidelines, the Working Group has developed standards for both qualitative meta-analysis and mixed methods research. The reporting standards were developed for psychological qualitative research but may hold utility for a broad range of social sciences. They honor a range of qualitative traditions, methods, and reporting styles. The Working Group was composed of a group of researchers with backgrounds in varying methods, research topics, and approaches to inquiry. In this article, they present these standards and their rationale, and they detail the ways that the standards differ from the quantitative research reporting standards. They describe how the standards can be used by authors in the process of writing qualitative research for submission as well as by reviewers and editors in the process of reviewing research. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Developing design methods of concrete mix with microsilica additives for road construction

    NASA Astrophysics Data System (ADS)

    Dmitrienko, Vladimir; Shrivel, Igor; Kokunko, Irina; Pashkova, Olga

    2017-10-01

    Based on the laboratory test results, regression equations, involving the standard cone and concrete strength, for determining the required amounts of cement, water, and microsilica were obtained. The joint solution of these equations allowed the researchers to develop an algorithm for designing heavy concrete compositions with microsilica additives for road construction.

  18. An Examination of the Addition of Video Informed Reflective Practice to the Active Support Toolkit

    ERIC Educational Resources Information Center

    Baker, Peter; Appleton, Philippa; Williams, Rosie

    2017-01-01

    Background: This study evaluated a package of Active Support (AS), which included standard training with additional video informed reflective practice. Materials & Methods: The training package was implemented as part of a service improvement initiative in four residential intellectual disability homes, using a concurrent multiple baseline…

  19. The development of European standard (CEN) methods in support of European Directives for plastics materials and articles intended to come into contact with foodstuff.

    PubMed

    Ashby, R

    1994-01-01

    CEC Directives have been implemented for plastics materials and articles intended to come into contact with foodstuffs. These introduce limits upon the overall migration from plastics into food and food simulants. In addition, specific migration limits or composition limits for free monomer in the final article have been set for some monomers. Agreed test methods are required to allow these Directives to be respected. CEN, the European Committee for Standardization, has created a working group to develop suitable test methods. This is 'Working Group 5, Chemical Methods of Test', of CEN Technical Committee TC 194, Utensils in contact with food. This group has drafted a ten-part standard for determining overall migration into aqueous and fatty food simulants by total immersion, by standard cell, by standard pouch and by filling. This draft standard has been approved by CEN TC 194 for circulation for public comment as a provisional standard, i.e. as an ENV. Further parts of this standard are in preparation for determining overall migration at high temperatures, etc. Simultaneously, Working Group 5 is cooperating with the BCR (Community Bureau of Reference) to produce reference materials with certified values of overall migration. CEN TC 194 Working Group 5 is also drafting methods for monomers subject to limitation in Directive 90/128/EEC. Good progress is being made on the monomers of highest priority but it is recognized that developing methods for all the monomers subject to limitation would take many years. Therefore, collaboration with the BCR, the Council of Europe and others is taking place to accelerate method development.

  20. Computation of geometric representation of novel spectrophotometric methods used for the analysis of minor components in pharmaceutical preparations.

    PubMed

    Lotfy, Hayam M; Saleh, Sarah S; Hassan, Nagiba Y; Salem, Hesham

    2015-01-01

    Novel spectrophotometric methods were applied for the determination of the minor component tetryzoline HCl (TZH) in its ternary mixture with ofloxacin (OFX) and prednisolone acetate (PA) in the ratio of (1:5:7.5), and in its binary mixture with sodium cromoglicate (SCG) in the ratio of (1:80). The novel spectrophotometric methods determined the minor component (TZH) successfully in the two selected mixtures by computing the geometrical relationship of either standard addition or subtraction. The novel spectrophotometric methods are: geometrical amplitude modulation (GAM), geometrical induced amplitude modulation (GIAM), ratio H-point standard addition method (RHPSAM) and compensated area under the curve (CAUC). The proposed methods were successfully applied for the determination of the minor component TZH below its concentration range. The methods were validated as per ICH guidelines, where accuracy, repeatability, inter-day precision and robustness were found to be within the acceptable limits. The results obtained from the proposed methods were statistically compared with official ones, where no significant difference was observed. No difference was observed between the obtained results when compared to the reported HPLC method, which proved that the developed methods could be an alternative to HPLC techniques in quality control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Determination of calcium, magnesium and zinc in lubricating oils by flame atomic absorption spectrometry using a three-component solution.

    PubMed

    Zmozinski, Ariane V; de Jesus, Alexandre; Vale, Maria G R; Silva, Márcia M

    2010-12-15

    Lubricating oils are used to decrease wear and friction of the movable parts of engines and turbines, and are therefore essential to the performance and service life of that equipment. The presence of certain metals indicates the addition of specific additives, such as detergents, dispersants, and antioxidants, that improve the performance of these lubricants. In this work, a method for the determination of calcium, magnesium and zinc in lubricating oil by flame atomic absorption spectrometry (FAAS) was developed. The samples were diluted with a small quantity of aviation kerosene (AVK), n-propanol and water to form a three-component solution before introduction into the FAAS instrument. Aqueous inorganic standards diluted in the same way were used for calibration. To assess the accuracy of the new method, it was compared with the ABNT NBR 14066 standard method, which consists of diluting the sample with AVK and quantification by FAAS. Two other validation methods were also used: acid digestion and the NIST certified reference material SRM 1084a. The proposed method provides the following advantages in relation to the standard method: significant reduction of the use of AVK, higher stability of the analytes in the medium, and the use of aqueous inorganic standards for calibration. The limits of detection for calcium, magnesium and zinc were 1.3 μg g(-1), 0.052 μg g(-1) and 0.41 μg g(-1), respectively. Concentrations of calcium, magnesium and zinc in six different samples obtained by the developed method did not differ significantly from the results obtained by the reference methods at the 95% confidence level (Student's t-test and ANOVA). Therefore, the proposed method is an efficient alternative for the determination of metals in lubricating oil. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    NASA Astrophysics Data System (ADS)

    Kanisch, G.

    2017-05-01

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which the uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also in the case of detection limit calculations, where the limits can be improved by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
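
    The central weighted least-squares step described above can be sketched briefly: net peak areas are fitted to radionuclide activities through a design matrix of detection probabilities, with weights built from the peak-area uncertainties. The matrix entries below are invented, and the additional ISO 11929 treatment of design-matrix uncertainties is omitted.

        import numpy as np

        A = np.array([[0.012, 0.000],   # peak 1: nuclide 1 only
                      [0.004, 0.003],   # peak 2: interference of both nuclides
                      [0.000, 0.009]])  # peak 3: nuclide 2 only
        y = np.array([1200.0, 700.0, 900.0])   # net peak areas (counts)
        u = np.array([40.0, 35.0, 32.0])       # their standard uncertainties

        W = np.diag(1.0 / u**2)                # weight matrix
        cov = np.linalg.inv(A.T @ W @ A)       # covariance of the activities
        a = cov @ A.T @ W @ y                  # weighted least-squares activities
        print("activities:", a)
        print("standard uncertainties:", np.sqrt(np.diag(cov)))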

  3. The quantitative surface analysis of an antioxidant additive in a lubricant oil matrix by desorption electrospray ionization mass spectrometry

    PubMed Central

    Da Costa, Caitlyn; Reynolds, James C; Whitmarsh, Samuel; Lynch, Tom; Creaser, Colin S

    2013-01-01

    RATIONALE Chemical additives are incorporated into commercial lubricant oils to modify the physical and chemical properties of the lubricant. The quantitative analysis of additives in oil-based lubricants deposited on a surface without extraction of the sample from the surface presents a challenge. The potential of desorption electrospray ionization mass spectrometry (DESI-MS) for the quantitative surface analysis of an oil additive in a complex oil lubricant matrix without sample extraction has been evaluated. METHODS The quantitative surface analysis of the antioxidant additive octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix was carried out by DESI-MS in the presence of 2-(pentyloxy)ethyl 3-(3,5-di-tert-butyl-4-hydroxyphenyl)propionate as an internal standard. A quadrupole/time-of-flight mass spectrometer fitted with an in-house modified ion source enabling non-proximal DESI-MS was used for the analyses. RESULTS An eight-point calibration curve ranging from 1 to 80 µg/spot of octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix and in the presence of the internal standard was used to determine the quantitative response of the DESI-MS method. The sensitivity and repeatability of the technique were assessed by conducting replicate analyses at each concentration. The limit of detection was determined to be 11 ng/mm2 additive on spot with relative standard deviations in the range 3–14%. CONCLUSIONS The application of DESI-MS to the direct, quantitative surface analysis of a commercial lubricant additive in a native oil lubricant matrix is demonstrated. © 2013 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons, Ltd. PMID:24097398
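
    As a minimal sketch of the internal-standard calibration described in the abstract, the Python lines below regress the analyte/internal-standard peak-area ratio against the amount deposited per spot and invert the fit for an unknown; all numbers are invented and the variable names are hypothetical.

        import numpy as np

        amount = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0])    # ug per spot
        ratio = np.array([0.05, 0.24, 0.51, 0.98, 2.02, 3.05, 3.96])   # analyte/IS area

        slope, intercept = np.polyfit(amount, ratio, 1)

        def quantify(sample_ratio):
            """Convert a measured analyte/IS peak-area ratio to ug per spot."""
            return (sample_ratio - intercept) / slope

        print(round(quantify(1.50), 2), "ug/spot")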

  4. 21 CFR 172.215 - Coumarone-indene resin.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... mixture of indene, indan (hydrindene), substituted benzenes, and related compounds. (2) It contains no... additive meets the following specifications: (1) Softening point, ring and ball: 126 °C minimum as determined by ASTM method E28-67 (Reapproved 1982), “Standard Test Method for Softening Point by Ring-and...

  5. 21 CFR 172.215 - Coumarone-indene resin.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... mixture of indene, indan (hydrindene), substituted benzenes, and related compounds. (2) It contains no... additive meets the following specifications: (1) Softening point, ring and ball: 126 °C minimum as determined by ASTM method E28-67 (Reapproved 1982), “Standard Test Method for Softening Point by Ring-and...

  6. 21 CFR 172.215 - Coumarone-indene resin.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... mixture of indene, indan (hydrindene), substituted benzenes, and related compounds. (2) It contains no... additive meets the following specifications: (1) Softening point, ring and ball: 126 °C minimum as determined by ASTM method E28-67 (Reapproved 1982), “Standard Test Method for Softening Point by Ring-and...

  7. 21 CFR 640.102 - Manufacture of Immune Globulin (Human).

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Manufacture of Immune Globulin (Human). 640.102... (CONTINUED) BIOLOGICS ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Immune Globulin (Human) § 640.102 Manufacture of Immune Globulin (Human). (a) Processing method. The processing method shall be one...

  8. Development and validation of a UV-spectrophotometric method for the determination of pheniramine maleate and its stability studies

    NASA Astrophysics Data System (ADS)

    Raghu, M. S.; Basavaiah, K.; Ramesh, P. J.; Abdulrahman, Sameer A. M.; Vinay, K. B.

    2012-03-01

    A sensitive, precise, and cost-effective UV-spectrophotometric method is described for the determination of pheniramine maleate (PAM) in bulk drug and tablets. The method is based on the measurement of absorbance of a PAM solution in 0.1 N HCl at 264 nm. As per the International Conference on Harmonization (ICH) guidelines, the method was validated for linearity, accuracy, precision, limits of detection (LOD) and quantification (LOQ), and robustness and ruggedness. A linear relationship between absorbance and concentration of PAM in the range of 2-40 μg/ml with a correlation coefficient (r) of 0.9998 was obtained. The LOD and LOQ values were found to be 0.18 and 0.39 μg/ml PAM, respectively. The precision of the method was satisfactory: the value of relative standard deviation (RSD) did not exceed 3.47%. The proposed method was applied successfully to the determination of PAM in tablets with good accuracy and precision. Percentages of the label claims ranged from 101.8 to 102.01% with the standard deviation (SD) from 0.64 to 0.72%. The accuracy of the method was further ascertained by recovery studies via a standard addition procedure. In addition, the forced degradation of PAM was conducted in accordance with the ICH guidelines. Acidic and basic hydrolysis, thermal stress, peroxide, and photolytic degradation were used to assess the stability-indicating power of the method. A substantial degradation was observed during oxidative and alkaline degradations. No degradation was observed under other stress conditions.
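
    The LOD and LOQ figures quoted above are conventionally derived from the calibration line; the hedged Python sketch below applies the usual ICH-style formulas (3.3 and 10 times the residual standard deviation divided by the slope) to invented calibration data, not to the paper's measurements.

        import numpy as np

        conc = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 40.0])            # ug/mL
        absorb = np.array([0.052, 0.128, 0.255, 0.512, 0.760, 1.018])  # absorbance

        slope, intercept = np.polyfit(conc, absorb, 1)
        residuals = absorb - (slope * conc + intercept)
        sigma = residuals.std(ddof=2)       # residual standard deviation of the fit

        lod = 3.3 * sigma / slope
        loq = 10.0 * sigma / slope
        print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")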

  9. Standard setting for OSCEs: trial of borderline approach.

    PubMed

    Kilminster, Sue; Roberts, Trudie

    2004-01-01

    OSCE examinations were held in May and June 2002 for all third and fourth year and some fifth year medical students at the University of Leeds. There has been an arbitrary pass mark of 65% for these examinations. However, we recognise that it is important to adopt a systematic approach towards standard setting in all examinations, so we held a trial of the borderline approach to standard setting for the third and fifth year examinations. This paper reports our findings. The results for the year 3 OSCE demonstrated that the borderline approach to standard setting is feasible and offers a method to ensure that the pass standard is both justifiable and credible. It is efficient, requiring much less time than other methods, and has the advantage of using the judgements of expert clinicians about actual practice. In addition it offers a way of empowering clinicians because it uses their expertise.
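
    In the borderline approach, examiners record a checklist score and a global judgement for each candidate, and the pass mark is taken from the scores of those judged "borderline"; the toy Python sketch below shows that calculation with invented data.

        import statistics

        station_results = [(72, "pass"), (58, "borderline"), (61, "borderline"),
                           (80, "pass"), (45, "fail"), (63, "borderline"), (55, "fail")]

        borderline_scores = [score for score, grade in station_results
                             if grade == "borderline"]
        pass_mark = statistics.mean(borderline_scores)   # mean score of the borderline group
        print(f"station pass mark: {pass_mark:.1f}%")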

  10. ATLAS particle detector CSC ROD software design and implementation, and, Addition of K physics to chi-squared analysis of FDQM

    NASA Astrophysics Data System (ADS)

    Hawkins, Donovan Lee

    In this thesis I present a software framework for use on the ATLAS muon CSC readout driver. This C++ framework uses plug-in Decoders incorporating hand-optimized assembly language routines to perform sparsification and data formatting. The software is designed with both flexibility and performance in mind, and runs on a custom 9U VME board using Texas Instruments TMS360C6203 digital signal processors. I describe the requirements of the software, the methods used in its design, and the results of testing the software with simulated data. I also present modifications to a chi-squared analysis of the Standard Model and Four Down Quark Model (FDQM) originally done by Dr. Dennis Silverman. The addition of four new experiments to the analysis has little effect on the Standard Model but provides important new restrictions on the FDQM. The method used to incorporate these new experiments is presented, and the consequences of their addition are reviewed.

  11. NIST Efforts to Quality-Assure Gunpowder Measurements

    NASA Technical Reports Server (NTRS)

    MacCrehan, William A.; Reardon, Michelle R.

    2000-01-01

    In the past few years, the National Institute of Standards and Technology (NIST) has been promoting the idea of quantitatively determining the additives in smokeless gunpowder using micellar capillary electrophoresis as a means of investigating the criminal use of handguns and pipe bombs. As a part of this effort, we have evaluated both supercritical fluid and ultrasonic solvent extractions for the quantitative recovery of nitroglycerin (NG), diphenylamine (DPA), N-nitrosodiphenylamine (NnDPA), and ethyl centralite (EC) from gunpowder. Recoveries were evaluated by repeat extraction and matrix spiking experiments. The final extraction protocol provides greater than 95 percent recoveries. To help other researchers validate their own analytical methods for additive determinations, NIST is exploring the development of a standard reference material, Additives in Smokeless Gunpowder. The evaluated method is being applied to two double-base (NG-containing) powders, one stabilized with diphenylamine and the other with ethyl centralite. As part of this reference material development effort, we are conducting an interlaboratory comparison exercise among the forensic and military gunpowder measurement community.

  12. Location Modification Factors for Potential Dose Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, Sandra F.; Barnett, J. Matthew

    2017-01-01

    A Department of Energy facility must comply with the National Emission Standard for Hazardous Air Pollutants for radioactive air emissions. The standard is an effective dose of less than 0.1 mSv yr-1 to the maximum public receptor. Additionally, a lower dose level may be assigned to a specific emission point in a State issued permit. A method to efficiently estimate the expected dose for future emissions is described. This method is most appropriately applied to a research facility with several emission points with generally low emission levels of numerous isotopes.

  13. Quantification of octacalcium phosphate, authigenic apatite and detrital apatite in coastal sediments using differential dissolution and standard addition

    NASA Astrophysics Data System (ADS)

    Oxmann, J. F.; Schwendenmann, L.

    2014-06-01

    Knowledge of calcium phosphate (Ca-P) solubility is crucial for understanding temporal and spatial variations of phosphorus (P) concentrations in water bodies and sedimentary reservoirs. In situ relationships between liquid- and solid-phase levels cannot be fully explained by dissolved analytes alone and need to be verified by determining particular sediment P species. Lack of quantification methods for these species limits the knowledge of the P cycle. To address this issue, we (i) optimized a specifically developed conversion-extraction (CONVEX) method for P species quantification using standard additions, and (ii) simultaneously determined solubilities of Ca-P standards by measuring their pH-dependent contents in the sediment matrix. Ca-P minerals including various carbonate fluorapatite (CFAP) specimens from different localities, fluorapatite (FAP), fish bone apatite, synthetic hydroxylapatite (HAP) and octacalcium phosphate (OCP) were characterized by XRD, Raman, FTIR and elemental analysis. Sediment samples were incubated with and without these reference minerals and then sequentially extracted to quantify Ca-P species by their differential dissolution at pH values between 3 and 8. The quantification of solid-phase phosphates at varying pH revealed solubilities in the following order: OCP > HAP > CFAP (4.5% CO3) > CFAP (3.4% CO3) > CFAP (2.2% CO3) > FAP. Thus, CFAP was less soluble in sediment than HAP, and CFAP solubility increased with carbonate content. Unspiked sediment analyses together with standard addition analyses indicated consistent differential dissolution of natural sediment species vs. added reference species and therefore verified the applicability of the CONVEX method in separately determining the most prevalent Ca-P minerals. We found surprisingly high OCP contents in the coastal sediments analyzed, which supports the hypothesis of apatite formation by an OCP precursor mechanism.

  14. Quantification of octacalcium phosphate, authigenic apatite and detrital apatite in coastal sediments using differential dissolution and standard addition

    NASA Astrophysics Data System (ADS)

    Oxmann, J. F.; Schwendenmann, L.

    2014-01-01

    Knowledge of calcium phosphate (Ca-P) solubility is crucial for understanding temporal and spatial variations of phosphorus (P) concentrations in water bodies and sedimentary reservoirs. In-situ relationships between liquid and solid-phase levels cannot be fully explained by dissolved analytes alone and need to be verified by determination of particular sediment P species. Lack of quantification methods for these species limits the knowledge of the P cycle. To address this issue, we (i) optimized a specifically developed conversion-extraction (CONVEX) method for P species quantification using standard additions; and (ii) simultaneously determined solubilities of Ca-P standards by measuring their pH-dependent contents in the sediment matrix. Ca-P minerals including various carbonate fluorapatite (CFAP) specimens from different localities, fluorapatite (FAP), fish bone apatite, synthetic hydroxylapatite (HAP) and octacalcium phosphate (OCP) were characterized by XRD, Raman, FTIR and elemental analysis. Sediment samples were incubated with and without these reference minerals and then sequentially extracted to quantify Ca-P species by their differential dissolution at pH values between 3 and 8. The quantification of solid-phase phosphates at varying pH revealed solubilities in the following order: OCP > HAP > CFAP (4.5% CO3) > CFAP (3.4% CO3) > CFAP (2.2% CO3) > FAP. Thus, CFAP was less soluble in sediment than HAP, and CFAP solubility increased with carbonate content. Unspiked sediment analyses together with standard addition analyses indicated consistent differential dissolution of natural sediment species vs. added reference species and therefore verified the applicability of the CONVEX method in separately determining the most prevalent Ca-P minerals. We found surprisingly high OCP contents in the analyzed coastal sediments which supports the hypothesis of apatite formation by an OCP precursor.

  15. Comparison between the Standardized Clinical and Laboratory Standards Institute M38-A2 Method and a 2,3-Bis(2-Methoxy-4-Nitro-5-[(Sulphenylamino)Carbonyl]-2H-Tetrazolium Hydroxide-Based Method for Testing Antifungal Susceptibility of Dermatophytes

    PubMed Central

    Shehata, Atef S.; Mukherjee, Pranab K.; Ghannoum, Mahmoud A.

    2008-01-01

    In this study, we determined the utility of a 2,3-bis(2-methoxy-4-nitro-5-[(sulfenylamino)carbonyl]-2H-tetrazolium hydroxide (XTT)-based assay for determining antifungal susceptibilities of dermatophytes to terbinafine, ciclopirox, and voriconazole in comparison to the Clinical and Laboratory Standards Institute (CLSI) M38-A2 method. Forty-eight dermatophyte isolates, including Trichophyton rubrum (n = 15), Trichophyton mentagrophytes (n = 7), Trichophyton tonsurans (n = 11), and Epidermophyton floccosum (n = 13), and two quality control strains, were tested. In the XTT-based method, MICs were determined spectrophotometrically at 490 nm after addition of XTT and menadione. For the CLSI method, the MICs were determined visually. With T. rubrum, the XTT assay revealed MIC ranges of 0.004 to >64 μg/ml, 0.125 to 0.25 μg/ml, and 0.008 to 0.025 μg/ml for terbinafine, ciclopirox, and voriconazole, respectively. Similar MIC ranges were obtained against T. rubrum by using the CLSI method. Additionally, when tested with T. mentagrophytes, T. tonsurans, and E. floccosum isolates, the XTT and CLSI methods resulted in comparable MIC ranges. Both methods revealed similar lowest drug concentrations that inhibited 90% of the isolates for the majority of tested drug-dermatophyte combinations. The levels of agreement within 1 dilution between both methods were as follows: 100% with terbinafine, 97.8% with ciclopirox, and 89.1% with voriconazole. However, the agreement within 2 dilutions between these two methods was 100% for all tested drugs. Our results revealed that the XTT assay can be a useful tool for antifungal susceptibility testing of dermatophytes. PMID:18832129

  16. A simple web-based tool to compare freshwater fish data collected using AFS standard methods

    USGS Publications Warehouse

    Bonar, Scott A.; Mercado-Silva, Norman; Rahr, Matt; Torrey, Yuta T.; Cate, Averill

    2016-01-01

    The American Fisheries Society (AFS) recently published Standard Methods for Sampling North American Freshwater Fishes. Enlisting the expertise of 284 scientists from 107 organizations throughout Canada, Mexico, and the United States, this text was developed to facilitate comparisons of fish data across regions or time. Here we describe a user-friendly web tool that automates among-sample comparisons in individual fish condition, population length-frequency distributions, and catch per unit effort (CPUE) data collected using AFS standard methods. Currently, the web tool (1) provides instantaneous summaries of almost 4,000 data sets of condition, length frequency, and CPUE of common freshwater fishes collected using standard gears in 43 states and provinces; (2) is easily appended with new standardized field data to update subsequent queries and summaries; (3) compares fish data from a particular water body with continent, ecoregion, and state data summaries; and (4) provides additional information about AFS standard fish sampling including benefits, ongoing validation studies, and opportunities to comment on specific methods. The web tool—programmed in a PHP-based Drupal framework—was supported by several AFS Sections, agencies, and universities and is freely available from the AFS website and fisheriesstandardsampling.org. With widespread use, the online tool could become an important resource for fisheries biologists.

  17. High Performance Liquid Chromatography of Vitamin A: A Quantitative Determination.

    ERIC Educational Resources Information Center

    Bohman, Ove; And Others

    1982-01-01

    Experimental procedures are provided for the quantitative determination of Vitamin A (retinol) in food products by analytical liquid chromatography. Standard addition and calibration curve extraction methods are outlined. (SK)

  18. Photometric requirements for portable changeable message signs.

    DOT National Transportation Integrated Search

    2001-09-01

    This project reviewed the performance of portable changeable message signs (PCMSs) and developed photometric standards to establish performance requirements. In addition, researchers developed photometric test methods and recommended them for use in evaluati...

  19. An on-spot internal standard addition approach for accurately determining colistin A and colistin B in dried blood spots using ultra high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Tsai, I-Lin; Kuo, Ching-Hua; Sun, Hsin-Yun; Chuang, Yu-Chung; Chepyala, Divyabharathi; Lin, Shu-Wen; Tsai, Yun-Jung

    2017-10-25

    Outbreaks of multidrug-resistant Gram-negative bacterial infections have been reported worldwide. Colistin, an antibiotic with known nephrotoxicity and neurotoxicity, is now being used to treat multidrug-resistant Gram-negative strains. In this study, we applied an on-spot internal standard addition approach coupled with an ultra high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify colistin A and B from dried blood spots (DBSs). Only 15 μL of whole blood was required for each sample. An internal standard with the same extraction recovery as colistin was added to the spot before sample extraction for accurate quantification. Formic acid in water (0.15%) with an equal volume of acetonitrile (50:50 v/v) was used as the extraction solution. With the optimized extraction process and LC-MS/MS conditions, colistin A and B could be quantified from a DBS with respective limits of quantification of 0.13 and 0.27 μg mL-1, and the retention times were < 2 min. The relative standard deviations of within-run and between-run precisions for peak area ratios were all < 17.3%. Accuracies were 91.5-111.2% for lower limit of quantification, low, medium, and high QC samples. The stability of the easily hydrolyzed prodrug, colistin methanesulfonate, was investigated in DBSs. Less than 4% of the prodrug was found to be hydrolyzed in DBSs at room temperature after 48 h. The developed method applied an on-spot internal standard addition approach, which benefited the precision and accuracy. Results showed that DBS sampling coupled with the sensitive LC-MS/MS method has the potential to be an alternative approach for colistin quantification, where the bias of prodrug hydrolysis in liquid samples is decreased. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Reliability of the individual components of the Canadian Armed Forces Physical Employment Standard.

    PubMed

    Stockbrugger, Barry G; Reilly, Tara J; Blacklock, Rachel E; Gagnon, Patrick J

    2018-01-29

    This investigation recruited 24 participants from both the Canadian Armed Forces (CAF) and civilian populations to complete 4 separate trials at "best effort" of each of the 4 components of the CAF Physical Employment Standard, the FORCE Evaluation: Fitness for Operational Requirements of CAF Employment. Analyses were performed to examine the level of variability and reliability within each component. The results demonstrate that candidates should be provided with at least 1 retest if they have recently completed at least 2 previous best-effort attempts as per the protocol. In addition, the minimal detectable difference is given for each of the 4 components in seconds, which identifies the threshold for subsequent action, either retest or remedial training, for those unable to meet the minimum standard. These results will inform the delivery of this employment standard and its use as a method of accommodation, in addition to providing direction for physical training programs.

  1. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of chromium in water by graphite furnace atomic absorption spectrophotometry

    USGS Publications Warehouse

    McLain, B.J.

    1993-01-01

    Graphite furnace atomic absorption spectrophotometry is a sensitive, precise, and accurate method for the determination of chromium in natural water samples. The detection limit for this analytical method is 0.4 microg/L with a working linear limit of 25.0 microg/L. The precision at the detection limit ranges from 20 to 57 percent relative standard deviation (RSD), improving to 4.6 percent RSD for concentrations greater than 3 microg/L. The accuracy of this method was determined for a variety of reference standards representative of the analytical range. The results were within the established standard deviations. Samples were spiked with known concentrations of chromium, with recoveries ranging from 84 to 122 percent. In addition, a comparison of data between graphite furnace atomic absorption spectrophotometry and direct-current plasma atomic emission spectrometry resulted in suitable agreement between the two methods, with an average deviation of ±2.0 microg/L throughout the analytical range.
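
    The spike-recovery check reported above follows a simple calculation, shown here with invented concentrations (in micrograms per litre):

        def recovery_percent(spiked_result, unspiked_result, amount_added):
            """Percent recovery of a known chromium spike."""
            return 100.0 * (spiked_result - unspiked_result) / amount_added

        print(round(recovery_percent(14.2, 4.5, 10.0), 1))   # 97.0, within the 84-122% range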

  2. Comparison of genetic algorithms with conjugate gradient methods

    NASA Technical Reports Server (NTRS)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
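
    A toy version of such a comparison is easy to sketch; the minimal genetic algorithm below (selection, blend crossover, Gaussian mutation) maximizes a multimodal function corrupted by additive noise, the class of objective for which the abstract reports an advantage over gradient methods. Population size, mutation scale, and the test function are arbitrary choices, not those of the 1972 study.

        import numpy as np

        rng = np.random.default_rng(1)

        def fitness(x):
            """Multimodal test function with additive random noise."""
            return (np.sin(5.0 * x) + 0.5 * np.sin(17.0 * x)
                    + 0.05 * rng.standard_normal(np.shape(x)))

        pop = rng.uniform(0.0, 3.0, size=40)             # initial population
        for _ in range(60):
            scores = fitness(pop)
            parents = pop[np.argsort(scores)[-20:]]      # selection: keep best half
            mates = rng.permutation(parents)
            children = 0.5 * (parents + mates)           # blend crossover
            children += 0.05 * rng.standard_normal(children.shape)   # mutation
            pop = np.concatenate([parents, children])

        best = pop[np.argmax(fitness(pop))]
        print("best x found:", round(float(best), 3))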

  3. [Determination of heavy metals for RoHS compliance by ICP-OES spectrometry coupled with microwave extraction system].

    PubMed

    Hua, Li; Wu, Yi-Ping; An, Bing; Lai, Xiao-Wei

    2008-11-01

    The harm to the environment from heavy metals contained in electronic and electrical equipment (EEE) is of high concern. Aiming to address the challenge of RoHS compliance, the determination of trace or ultratrace chromium (Cr), cadmium (Cd), mercury (Hg) and lead (Pb) by inductively coupled plasma optical emission spectrometry (ICP-OES) was performed in the present paper, wherein microwave extraction technology was used to prepare the sample solutions. In addition, the precision, recovery, repeatability and interference issues of this method were also discussed. The results showed that using the microwave extraction system to prepare samples is quicker, lossless and contamination-free in comparison with conventional extraction methods such as dry ashing, wet-oven extraction, etc. By analyzing the recoveries of these four heavy metals over different working times and wavelengths, the good recovery range between 85% and 115% showed that there was only tiny loss or contamination during the process of microwave extraction, sample introduction and ICP detection. Repeatability experiments proved that the ICP plasma had good stability during the working time and that the matrix effect was small. Interference is a troublesome problem for atomic absorption spectrometry (AAS); however, the standard additions technique or the inter-element correction (IEC) method can effectively eliminate the interferences of Ni, As, Fe, etc. with the Cd determination. By employing the multi-wavelength and two-correction-point methods, the issues of sloping background curve shift and spectral overlap were successfully overcome. Besides, for the determination of trace heavy metal elements, the relative standard deviation (RSD) was less than 3% and the detection limits were less than 1 microg/L (3 sigma, n = 5) for samples, standard solutions, and standard additions, which proved that ICP-OES has good precision and high reliability. This provides reliable technical support for electronic and electrical (EE) industries to comply with the RoHS directive.

  4. Quantile regression via vector generalized additive models.

    PubMed

    Yee, Thomas W

    2004-07-30

    One of the most popular methods for quantile regression is the LMS method of Cole and Green. The method naturally falls within a penalized likelihood framework, and consequently allows for considerable flexibility because all three parameters may be modelled by cubic smoothing splines. The model is also very understandable: for a given value of the covariate, the LMS method applies a Box-Cox transformation to the response in order to transform it to standard normality; to obtain the quantiles, an inverse Box-Cox transformation is applied to the quantiles of the standard normal distribution. The purposes of this article are three-fold. Firstly, LMS quantile regression is presented within the framework of the class of vector generalized additive models. This confers a number of advantages, such as a unifying theory and estimation process. Secondly, a new LMS method based on the Yeo-Johnson transformation is proposed, which has the advantage that the response is not restricted to be positive. Lastly, this paper describes a software implementation of three LMS quantile regression methods in the S language. This includes the LMS-Yeo-Johnson method, which is estimated efficiently by a new numerical integration scheme. The LMS-Yeo-Johnson method is illustrated by way of a large cross-sectional data set from a New Zealand working population. Copyright 2004 John Wiley & Sons, Ltd.
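
    The quantile construction described above reduces to a single formula once the three LMS curves have been fitted: the standard-normal quantile is pushed through an inverse Box-Cox transform governed by lambda, mu and sigma at the covariate value of interest. The parameter values in the Python sketch below are invented for illustration.

        import numpy as np
        from scipy.stats import norm

        def lms_quantile(p, lam, mu, sigma):
            """Quantile implied by LMS parameters at one covariate value."""
            z = norm.ppf(p)
            if lam == 0:
                return mu * np.exp(sigma * z)          # log-normal limit
            return mu * (1.0 + lam * sigma * z) ** (1.0 / lam)

        for p in (0.05, 0.50, 0.95):
            print(p, round(lms_quantile(p, lam=-0.4, mu=24.0, sigma=0.12), 2))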

  5. Study of diffusion bond development in 6061 aluminum and its relationship to future high density fuels fabrication.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokofiev, I.; Wiencek, T.; McGann, D.

    1997-10-07

    Powder metallurgy dispersions of uranium alloys and silicides in an aluminum matrix have been developed by the RERTR program as a new generation of proliferation-resistant fuels. Testing is done with miniplate-type fuel plates to simulate standard fuel with cladding and matrix in plate-type configurations. In order to seal the dispersion fuel plates, a diffusion bond must exist between the aluminum coverplates surrounding the fuel meat. Four different variations in the standard method for roll-bonding 6061 aluminum were studied. They included mechanical cleaning, addition of a getter material, modifications to the standard chemical etching, and welding methods. Aluminum test pieces were subjected to a bend test after each rolling pass. Results, based on 400 samples, indicate that at least a 70% reduction in thickness is required to produce a diffusion bond using the standard roll-bonding method, versus a 60% reduction using the Type II method, in which the assembly was welded 100% and contained open 9 mm holes at the frame corners.

  6. Developing Carbon Nanotube Standards at NASA

    NASA Technical Reports Server (NTRS)

    Nikolaev, Pasha; Arepalli, Sivaram; Sosa, Edward; Gorelik, Olga; Yowell, Leonard

    2007-01-01

    Single wall carbon nanotubes (SWCNTs) are currently being produced and processed by several methods. Many researchers are continuously modifying existing methods and developing new methods to incorporate carbon nanotubes into other materials and utilize the phenomenal properties of SWCNTs. These applications require the availability of SWCNTs with known properties, and there is a need to characterize these materials in a consistent manner. In order to monitor such progress, it is critical to establish a means by which to define the quality of SWCNT material and to develop characterization standards to evaluate nanotube quality across the board. Such characterization standards should be applicable to as-produced materials as well as processed SWCNT materials. In order to address this issue, NASA Johnson Space Center has developed a protocol for purity and dispersion characterization of SWCNTs (Ref.1). The NASA JSC group is currently working with NIST, ANSI and ISO to establish purity and dispersion standards for SWCNT material. A practice guide for nanotube characterization is being developed in cooperation with NIST (Ref.2). Furthermore, work is in progress to incorporate additional characterization methods for electrical, mechanical, thermal, optical and other properties of SWCNTs.

  8. Study on the criteria for assessing skull-face correspondence in craniofacial superimposition.

    PubMed

    Ibáñez, Oscar; Valsecchi, Andrea; Cavalli, Fabio; Huete, María Isabel; Campomanes-Alvarez, Blanca Rosario; Campomanes-Alvarez, Carmen; Vicente, Ricardo; Navega, David; Ross, Ann; Wilkinson, Caroline; Jankauskas, Rimantas; Imaizumi, Kazuhiko; Hardiman, Rita; Jayaprakash, Paul Thomas; Ruiz, Elena; Molinero, Francisco; Lestón, Patricio; Veselovskaya, Elizaveta; Abramov, Alexey; Steyn, Maryna; Cardoso, Joao; Humpire, Daniel; Lusnig, Luca; Gibelli, Daniele; Mazzarelli, Debora; Gaudio, Daniel; Collini, Federica; Damas, Sergio

    2016-11-01

    Craniofacial superimposition has the potential to be used as an identification method when other traditional biological techniques are not applicable due to insufficient quality or absence of ante-mortem and post-mortem data. Despite having been used in many countries as a method of inclusion and exclusion for over a century, it lacks standards. Thus, the purpose of this research is to provide forensic practitioners with standard criteria for analysing skull-face relationships. Thirty-seven experts from 16 different institutions participated in this study, which consisted of evaluating 65 criteria for assessing skull-face anatomical consistency on a sample of 24 different skull-face superimpositions. An unbiased statistical analysis established the most objective and discriminative criteria. Results did not show strong associations; however, important insights to address the lack of standards were provided. In addition, a novel methodology for understanding and standardizing identification methods based on the observation of morphological patterns has been proposed. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.

  9. Biodegradability standards for carrier bags and plastic films in aquatic environments: a critical review.

    PubMed

    Harrison, Jesse P; Boardman, Carl; O'Callaghan, Kenneth; Delort, Anne-Marie; Song, Jim

    2018-05-01

    Plastic litter is encountered in aquatic ecosystems across the globe, including polar environments and the deep sea. To mitigate the adverse societal and ecological impacts of this waste, there has been debate on whether 'biodegradable' materials should be granted exemptions from plastic bag bans and levies. However, great care must be exercised when attempting to define this term, due to the broad and complex range of physical and chemical conditions encountered within natural ecosystems. Here, we review existing international industry standards and regional test methods for evaluating the biodegradability of plastics within aquatic environments (wastewater, unmanaged freshwater and marine habitats). We argue that current standards and test methods are insufficient in their ability to realistically predict the biodegradability of carrier bags in these environments, due to several shortcomings in experimental procedures and a paucity of information in the scientific literature. Moreover, existing biodegradability standards and test methods for aquatic environments do not involve toxicity testing or account for the potentially adverse ecological impacts of carrier bags, plastic additives, polymer degradation products or small (microscopic) plastic particles that can arise via fragmentation. Successfully addressing these knowledge gaps is a key requirement for developing new biodegradability standard(s) for lightweight carrier bags.

  10. Measurement of water-soluble B vitamins in infant formula by liquid chromatography/tandem mass spectrometry.

    PubMed

    Huang, Min; Winters, Doug; Crowley, Richard; Sullivan, Darryl

    2009-01-01

    A method has been developed for the simultaneous measurement of multiple B vitamins (i.e., B1, B2, B3, B5, and B6) in infant formulas by LC-MS/MS. The vitamins were extracted with acidic solvent, followed by protein precipitation at a pH range of 4.5 to 5.5, and filtered. This simplified procedure eliminates many of the potential sources of laboratory error and facilitates rapid and efficient analysis. As is common in most cases, isotope internal standards were added to account for variations in sample preparation, as well as changes in MS measurement. In this method, isotope-labeled internal standards of B1, B3, B5, and B6 were used. The factors affecting analytical performance were investigated and optimized. In addition, the stability of these vitamins in the extraction solution was investigated. An acidic condition (5 mM HCl) was applied to successfully stabilize B1, which had shown a decrease in signal when other solvents were used. The quantitative extraction and good stability allowed isotope standards to be added to the filtered sample solution, instead of to the extraction solvent. The addition of the isotope to a small portion of the filtered sample solution significantly reduces cost. A comprehensive evaluation of the analysis of the standard reference material and the good spike recovery of the vitamins (100 +/- 6%) demonstrate the accuracy of the method. The results for commercially available infant formula samples were also compared with those obtained using the current microbiological method.

  11. Selective deposition of polycrystalline diamond films using photolithography with addition of nanodiamonds as nucleation centers

    NASA Astrophysics Data System (ADS)

    Okhotnikov, V. V.; Linnik, S. A.; Gaidaichuk, A. V.; Shashev, D. V.; Nazarova, G. Yu; Yurchenko, V. I.

    2016-02-01

    A new method of selective deposition of polycrystalline diamond has been developed and studied. Diamond coatings with a complex, predetermined geometry and a resolution up to 5 μm were obtained. A high density of polycrystallites in the coating area was reached (up to 32×10^7 pcs/cm^2). The uniformity of the film reached 100%, and the degree of surface contamination by parasitic crystals did not exceed 2%. The technology is based on standard photolithography with the addition of a nanodiamond suspension into the photoresist, which creates centers of further nucleation in the areas that require subsequent overgrowth. The films were deposited onto monocrystalline silicon substrates using the hot-filament method in a CVD reactor. The properties of the coating and the impact of the nanodiamond suspension concentration in the photoresist were also studied. The potential advantages of the given method include high resolution, technological efficiency, and low labor costs compared to the standard methods (laser treatment, chemical etching in aggressive environments).

  12. Analytical challenges in drug counterfeiting and falsification-The NMR approach.

    PubMed

    Holzgrabe, Ulrike; Malet-Martino, Myriam

    2011-06-25

    Counterfeiting of products is a global problem. As long as clothes, clocks, leather wear, etc. are faked there is no danger, but when it comes to drugs, counterfeiting can be life-threatening. In recent years, sub-standard active pharmaceutical ingredients (APIs) have been found more often, even though the quality-ensuring methods of the international pharmacopoeias should have detected the additional impurities and the low content of the API. Methods orthogonal to the separation methods used in the pharmacopoeias are necessary to find counterfeits. Besides Raman and NIR spectroscopies as well as powder X-ray analysis, NMR spectroscopy, being a primary ratio method of measurement, is highly suitable to identify and quantify a drug and its related substances, as well as to recognize a drug of sub-standard quality. DOSY experiments are suitable to identify the ingredients of formulations and therefore to identify wrong and/or additional ingredients. This review gives an overview of the application of quantitative NMR spectroscopy and DOSY NMR in anticounterfeiting. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. Determination of chloramphenicol residues in meat, seafood, egg, honey, milk, plasma and urine with liquid chromatography-tandem mass spectrometry, and the validation of the method based on 2002/657/EC.

    PubMed

    Rønning, Helene Thorsen; Einarsen, Kristin; Asp, Tone Normann

    2006-06-23

    A simple and rapid method for the determination and confirmation of chloramphenicol in several food matrices with LC-MS/MS was developed. Following addition of d5-chloramphenicol as internal standard, meat, seafood, egg, honey and milk samples were extracted with acetonitrile. Chloroform was then added to remove water. After evaporation, the residues were reconstituted in methanol/water (3+4) before injection. After addition of the internal standard, the urine and plasma samples were applied to a Chem Elut extraction cartridge, eluted with ethyl acetate, and washed with hexane. These samples were also reconstituted in methanol/water (3+4) after evaporation. Using an MRM acquisition method in negative ionization mode, the transitions 321-->152, 321-->194 and 326-->157 were used for quantification, confirmation and the internal standard, respectively. Quantification of chloramphenicol-positive samples, regardless of matrix, could be achieved with a common water-based calibration curve. The validation of the method was based on EU Decision 2002/657, and different ways of calculating CCalpha and CCbeta were evaluated. The common CCalpha and CCbeta for all matrices were 0.02 and 0.04 microg/kg for the 321-->152 ion transition, and 0.02 and 0.03 microg/kg for the 321-->194 ion transition. At a fortification level of 0.1 microg/kg, the within-laboratory reproducibility was below 25%.

  14. [Expert investigation on food safety standard system framework construction in China].

    PubMed

    He, Xiang; Yan, Weixing; Fan, Yongxiang; Zeng, Biao; Peng, Zhen; Sun, Zhenqiu

    2013-09-01

    Through an investigation of the food safety standard framework among food safety experts, this study aimed to summarize the basic elements and principles of the food safety standard system and to provide policy advice for the food safety standards framework. A survey was carried out among 415 experts from government, professional institutions and the food industry/enterprises, using the National Food Safety Standard System Construction Consultation Questionnaire designed in the name of the Secretariat of the National Food Safety Standard Committee. The expert groups gave differing advice on the principles for food product standards, food additive product standards, food-related product standards, hygienic practice and test methods. The results not only reflect the experts' awareness of the current state of food safety standards work, but also provide advice for the setting and revision of food safety standards in the next stage. Through this expert investigation, the framework and guiding principles of the food safety standard system were established.

  15. Comparison of presbyopic additions determined by the fused cross-cylinder method using alternative target background colours.

    PubMed

    Wee, Sung-Hyun; Yu, Dong-Sik; Moon, Byeong-Yeon; Cho, Hyun Gug

    2010-11-01

    To compare and contrast standard and alternative versions of refractor head (phoropter)-based charts used to determine reading addition. Forty-one presbyopic subjects aged between 42 and 60 years were tested. Tentative additions were determined using a red-green background letter chart and 4 cross-grid charts (with white, red, green, or red-green backgrounds), which were used with the fused cross cylinder (FCC) method. The final addition for a 40 cm working distance was determined for each subject by subjectively adjusting the tentative additions. There were significant differences in the tentative additions obtained using the 5 methods (repeated measures ANOVA, p < 0.001). The mean differences between the tentative and final additions were <0.10 D and were not clinically meaningful, with the exception of the red-green letter test and the red background in the FCC method. There were no significant differences between the tentative and final additions for the green background in the FCC method (p > 0.05). The intervals of the 95% limits of agreement were under ±0.50 D, and the narrowest interval (±0.26 D) was for the red-green background. The 3 FCC methods with a white, green, or red-green background provided a tentative addition close to the final addition. Compared with the other methods, the FCC method with the red-green background had a narrow range of error. Further, since this method combines the functions of both the fused cross-cylinder test and the duochrome test, it can be a useful technique for determining presbyopic additions. © 2010 The Authors. Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  16. Performance of Adhesive and Cementitious Anchoring Systems

    DOT National Transportation Integrated Search

    2017-08-01

    This research project evaluated the behavior of adhesive and cementitious bonded anchoring systems per the approach found in the provisional standard AASHTO TP-84, in order to provide recommendations pertaining to the test method. Additional paramete...

  17. Sustainability Characterization for Additive Manufacturing.

    PubMed

    Mani, Mahesh; Lyons, Kevin W; Gupta, S K

    2014-01-01

    Additive manufacturing (AM) has the potential to create geometrically complex parts that require a high degree of customization, using less material and producing less waste. Recent studies have shown that AM can be an economically viable option for use by industry, yet there are some inherent challenges associated with AM for wider acceptance. The lack of standards in AM impedes its use for parts production, since industries primarily depend on established standards for processes and material selection to ensure consistency and quality. The inability to compare AM performance against traditional manufacturing methods can be a barrier to implementing AM processes. AM process sustainability has become a driver due to growing environmental concerns in manufacturing. This has reinforced the importance of understanding and characterizing AM processes for sustainability. Process characterization for sustainability will help close the gaps in comparing AM performance to traditional manufacturing methods. Based on a literature review, this paper first examines the potential environmental impacts of AM. A methodology for sustainability characterization of AM is then proposed to serve as a resource for the community to benchmark AM processes for sustainability. Next, research perspectives are discussed along with relevant standardization efforts.

  18. Blind retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, gradient-based motion correction, is proposed that significantly reduces ghosting and blurring artifacts due to subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image, as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are on the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
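
    The sharpness criterion described above is simple to write down. The sketch below computes the entropy of normalised spatial gradient magnitudes for a 2D image; the full method additionally models the effect of each candidate motion trajectory on the raw k-space data and minimises this metric by gradient-based search, which is not reproduced here.

```python
# Entropy of spatial gradients as an image sharpness metric (lower = sharper).
import numpy as np

def gradient_entropy(image):
    gy, gx = np.gradient(image.astype(float))     # gradients along rows, columns
    mag = np.hypot(gx, gy).ravel()
    p = mag / (mag.sum() + 1e-12)                 # normalise to a distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical example: a random "image" just to exercise the function
print(gradient_entropy(np.random.default_rng(0).normal(size=(64, 64))))
```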

  19. Determination of As, Hg and Pb in herbs using slurry sampling flow injection chemical vapor generation inductively coupled plasma mass spectrometry.

    PubMed

    Tai, Chia-Yi; Jiang, Shiuh-Jen; Sahayam, A C

    2016-02-01

    Analysis of herbs for As, Hg and Pb has been carried out using slurry sampling inductively coupled plasma mass spectrometry (ICP-MS) with flow injection vapor generation. A slurry containing 0.5% m/v herbal powder, 0.1% m/v citric acid and 2% v/v HCl was injected into the VG-ICP-MS system for the determination of As, Hg and Pb, obviating dissolution and mineralization. Standard addition and isotope dilution methods were used for quantification in selected herbal powders. The method has been validated by the determination of As, Hg and Pb in NIST standard reference materials SRM 1547 Peach Leaves and SRM 1573a Tomato Leaves. The As, Hg and Pb results for the reference materials agreed with the certified values. The precision obtained by the reported procedure was better than 7% for all determinations. The detection limits estimated from the standard addition curves were 0.008, 0.003, and 0.007 ng mL(-1) for As, Hg and Pb, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
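
    For the standard addition branch of the quantification, the underlying calculation is a linear fit of signal versus added concentration followed by extrapolation to the x-intercept. The sketch below is a generic illustration with invented numbers; dilution corrections and the isotope dilution branch of the work are not shown.

```python
# Single-element standard addition: fit signal vs. added concentration and
# extrapolate to the x-intercept to estimate the analyte concentration.
import numpy as np

added = np.array([0.0, 0.5, 1.0, 2.0])            # ng/mL added (hypothetical)
signal = np.array([1520., 2310., 3080., 4650.])   # instrument response (hypothetical)

slope, intercept = np.polyfit(added, signal, 1)
c_estimate = intercept / slope                    # concentration in the measured solution
print(f"standard-addition estimate: {c_estimate:.2f} ng/mL")
```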

  20. Calculation of transonic flows using an extended integral equation method

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1976-01-01

    An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.

  1. Simultaneous determination of nickel and copper by H-point standard addition method-first-order derivative spectrophotometry in plant samples after separation and preconcentration on modified natural clinoptilolite as a new sorbent.

    PubMed

    Roohparvar, Rasool; Taher, Mohammad Ali; Mohadesi, Alireza

    2008-01-01

    For the simultaneous determination of nickel(II) and copper(II) in plant samples, a rapid and accurate method was developed. In this method, solid-phase extraction (SPE) and first-order derivative spectrophotometry (FDS) are combined, and the result is coupled with the H-point standard addition method (HPSAM). Compared with normal spectrophotometry, derivative spectrophotometry offers the advantages of increased selectivity and sensitivity. Because no sample pretreatment is required, the spectrophotometric method is simple, but its high detection limit makes it less practical. In order to decrease the detection limit, it is suggested to combine spectrophotometry with a preconcentration method such as SPE. In the present work, after separation and preconcentration of Ni(II) and Cu(II) on modified clinoptilolite zeolite loaded with 2-[1-(2-hydroxy-5-sulfophenyl)-3-phenyl-5-formazano]-benzoic acid monosodium salt (zincon) as a selective chromogenic reagent, FDS-HPSAM, which is a simple and selective spectrophotometric method, was applied for the simultaneous determination of these ions. Under optimum conditions, the detection limits in the original solutions are 0.7 and 0.5 ng/mL for nickel and copper, respectively. The linear concentration ranges of the proposed method for nickel and copper ions in the original solutions are 1.1 to 3.0 x 10(3) and 0.9 to 2.0 x 10(3) ng/mL, respectively. The recommended procedure was successfully applied to the determination of Cu(II) and Ni(II) in standard and real samples.
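
    The H-point calculation itself reduces to intersecting two standard addition lines recorded at two wavelengths chosen so that the interferent responds identically at both; the abscissa of the intersection is minus the analyte concentration. The sketch below illustrates only this arithmetic with invented, noise-free data, not the derivative-spectrophotometry step of the paper.

```python
# Intersection of two standard-addition lines (H-point): x_H = -C_analyte.
import numpy as np

added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                 # added analyte (hypothetical units)
signal_w1 = np.array([0.390, 0.490, 0.590, 0.690, 0.790])   # signal at wavelength 1
signal_w2 = np.array([0.282, 0.337, 0.392, 0.447, 0.502])   # signal at wavelength 2

m1, b1 = np.polyfit(added, signal_w1, 1)
m2, b2 = np.polyfit(added, signal_w2, 1)

x_h = (b2 - b1) / (m1 - m2)          # abscissa of the H-point
print(f"estimated analyte concentration: {-x_h:.2f}")
```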

  2. Is Mistletoe Treatment Beneficial in Invasive Breast Cancer? A New Approach to an Unresolved Problem.

    PubMed

    Fritz, Peter; Dippon, Jürgen; Müller, Simon; Goletz, Sven; Trautmann, Christian; Pappas, Xenophon; Ott, German; Brauch, Hiltrud; Schwab, Matthias; Winter, Stefan; Mürdter, Thomas; Brinkmann, Friedhelm; Faisst, Simone; Rössle, Susanne; Gerteis, Andreas; Friedel, Godehard

    2018-03-01

    In this retrospective study, we compared breast cancer patients treated with and without mistletoe lectin I (ML-I) in addition to standard breast cancer treatment in order to determine a possible effect of this complementary treatment. This study included 18,528 patients with invasive breast cancer. Data on additional ML-I treatment were reported for 164 patients. We developed a "similar case" method with a distance measure derived from the beta coefficients of a Cox regression to compare these patients, after stage adjustment, with their non-ML-I-treated counterparts, in order to address three hypotheses concerning overall survival, recurrence-free survival and quality of life. Raw data analysis of additional ML-I treatment yielded a worse outcome (p=0.02) for patients with ML-I treatment, possibly due to a bias inherent in the ML-I-treated patients. Using the "similar case" method (a case-based reasoning approach), we could not confirm this harm for patients using ML-I. Analysis of quality-of-life data did not demonstrate reliable differences between patients with and without proven ML-I treatment. Based on a "similar case" model, we did not observe any differences in overall survival (OS), recurrence-free survival (RFS), or quality of life between breast cancer patients with standard treatment and those who received ML-I treatment in addition to standard treatment. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  3. Analytical aspects of diterpene alkaloid poisoning with monkshood.

    PubMed

    Colombo, Maria Laura; Bugatti, Carlo; Davanzo, Franca; Persico, Andrea; Ballabio, Cinzia; Restani, Patrizia

    2009-11-01

    A sensitive and specific method for aconitine extraction from biological samples was developed. Aconitine, the main toxic alkaloid from plants belonging to Aconitum species (family Ranunculaceae), was determined in plant material by an external standard method, and in biological fluids by a standard addition calibration method. Described here are one fatal case and five accidental aconitine poisonings following the ingestion of aconite mistaken for the edible plants Aruncus dioicus (Walt.) Fernald, "mountain asparagus", and Cicerbita alpina (L.) Wallroth. The aconitine content in urine ranged from 0.20 microg/mL (surviving patients) to 2.94 microg/mL (fatal case), and was almost two to four times higher than that in plasma.

  4. The accuracy of ultrashort echo time MRI sequences for medical additive manufacturing

    PubMed Central

    Rijkhorst, Erik-Jan; Hofman, Mark; Forouzanfar, Tymour; Wolff, Jan

    2016-01-01

    Objectives: Additively manufactured bone models, implants and drill guides are becoming increasingly popular amongst maxillofacial surgeons and dentists. To date, such constructs are commonly manufactured using CT technology that induces ionizing radiation. Recently, ultrashort echo time (UTE) MRI sequences have been developed that allow radiation-free imaging of facial bones. The aim of the present study was to assess the feasibility of UTE MRI sequences for medical additive manufacturing (AM). Methods: Three morphologically different dry human mandibles were scanned using a CT and MRI scanner. Additionally, optical scans of all three mandibles were made to acquire a “gold standard”. All CT and MRI scans were converted into Standard Tessellation Language (STL) models and geometrically compared with the gold standard. To quantify the accuracy of the AM process, the CT, MRI and gold-standard STL models of one of the mandibles were additively manufactured, optically scanned and compared with the original gold-standard STL model. Results: Geometric differences between all three CT-derived STL models and the gold standard were <1.0 mm. All three MRI-derived STL models generally presented deviations <1.5 mm in the symphyseal and mandibular area. The AM process introduced minor deviations of <0.5 mm. Conclusions: This study demonstrates that MRI using UTE sequences is a feasible alternative to CT in generating STL models of the mandible and would therefore be suitable for surgical planning and AM. Further in vivo studies are necessary to assess the usability of UTE MRI sequences in clinical settings. PMID:26943179

  5. ISO radiation sterilization standards

    NASA Astrophysics Data System (ADS)

    Lambert, Byron J.; Hansen, Joyce M.

    1998-06-01

    This presentation provides an overview of the current status of the ISO radiation sterilization standards. The ISO standards are voluntary standards which detail both the validation and routine control of the sterilization process. ISO 11137 was approved in 1994 and published in 1995. When reviewing the standard you will note that less than 20% of the standard is devoted to requirements and the remainder is guidance on how to comply with the requirements. Future standards developments in radiation sterilization are focused on providing additional guidance. The guidance currently provided in the informative annexes of ISO 11137 includes device/packaging materials, dose setting methods, and dosimeters and dose measurement. Currently, four Technical Reports are being developed to provide additional guidance: 1. AAMI Draft TIR, "Radiation Sterilization Material Qualification"; 2. ISO TR 13409-1996, "Sterilization of health care products — Radiation sterilization — Substantiation of 25 kGy as a sterilization dose for small or infrequent production batches"; 3. ISO Draft TR, "Sterilization of health care products — Radiation sterilization — Selection of a sterilization dose for a single production batch"; 4. ISO Draft TR, "Sterilization of health care products — Radiation sterilization — Product Families, Plans for Sampling and Frequency of Dose Audits."

  6. Improved score statistics for meta-analysis in single-variant and gene-level association studies.

    PubMed

    Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo

    2018-06-01

    Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
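
    For orientation, the conventional score-based meta-analysis that the paper improves upon combines per-study score statistics and their variances into a single z-statistic, as sketched below with hypothetical summary numbers; the unbalanced-design corrections proposed by the authors are not reproduced here.

```python
# Conventional score-statistic meta-analysis for a single variant:
# z = sum(U_k) / sqrt(sum(V_k)) across studies.
import numpy as np
from scipy.stats import norm

def meta_score_z(U, V):
    U, V = np.asarray(U, float), np.asarray(V, float)
    z = U.sum() / np.sqrt(V.sum())
    return z, 2.0 * norm.sf(abs(z))   # two-sided p-value

# Hypothetical per-study scores and variances from three studies
print(meta_score_z(U=[12.1, -3.4, 8.8], V=[55.0, 40.2, 61.7]))
```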

  7. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_φ^2 - 2·r·P_s·P_i]^(1/2), where P_A, P_s, P_i and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method. © 1979.
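
    The quoted formula translates directly into code. The sketch below is a plain transcription with hypothetical percentage errors; the constant C and the choice of neutron dose minimising the error are discussed in the paper itself.

```python
# First-order propagation of error for the fission-track age:
# P_A = C * sqrt(P_s^2 + P_i^2 + P_phi^2 - 2*r*P_s*P_i)
import math

def fission_track_age_error(P_s, P_i, P_phi, r, C=1.0):
    return C * math.sqrt(P_s**2 + P_i**2 + P_phi**2 - 2.0 * r * P_s * P_i)

# Hypothetical percentage errors and correlation
print(fission_track_age_error(P_s=5.0, P_i=4.0, P_phi=2.0, r=0.6))
```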

  8. Gene toxicity studies on titanium dioxide and zinc oxide nanomaterials used for UV-protection in cosmetic formulations.

    PubMed

    Landsiedel, Robert; Ma-Hock, Lan; Van Ravenzwaay, Ben; Schulz, Markus; Wiench, Karin; Champ, Samantha; Schulte, Stefan; Wohlleben, Wendel; Oesch, Franz

    2010-12-01

    Titanium dioxide and zinc oxide nanomaterials, used as UV-protecting agents in sunscreens, were investigated for their potential genotoxicity in in vitro and in vivo test systems. Since standard OECD test methods are designed for soluble materials and genotoxicity testing for nanomaterials is still under revision, a battery of standard tests covering different endpoints was used. Additionally, a procedure to disperse the nanomaterials in the test media and a careful characterization of the dispersed test item were added to the testing methods. No genotoxicity was observed in vitro (Ames' Salmonella gene mutation test and V79 micronucleus chromosome mutation test) or in vivo (mouse bone marrow micronucleus test and Comet DNA damage assay in lung cells from rats exposed by inhalation). These results add to the still limited database of genotoxicity test results with nanomaterials and provide congruent results from a battery of standard OECD test methods applied to nanomaterials.

  9. Test Standard Developed for Determining the Slow Crack Growth of Advanced Ceramics at Ambient Temperature

    NASA Technical Reports Server (NTRS)

    Choi, Sung R.; Salem, Jonathan A.

    1998-01-01

    The service life of structural ceramic components is often limited by the process of slow crack growth. Therefore, it is important to develop an appropriate testing methodology for accurately determining the slow crack growth design parameters necessary for component life prediction. In addition, an appropriate test methodology can be used to determine the influences of component processing variables and composition on the slow crack growth and strength behavior of newly developed materials, thus allowing the component process to be tailored and optimized to specific needs. At the NASA Lewis Research Center, work to develop a standard test method to determine the slow crack growth parameters of advanced ceramics was initiated by the authors in early 1994 in the C 28 (Advanced Ceramics) committee of the American Society for Testing and Materials (ASTM). After about 2 years of required balloting, the draft written by the authors was approved and established as a new ASTM test standard: ASTM C 1368-97, Standard Test Method for Determination of Slow Crack Growth Parameters of Advanced Ceramics by Constant Stress-Rate Flexural Testing at Ambient Temperature. Briefly, the test method uses constant stress-rate testing to determine strengths as a function of stress rate at ambient temperature. Strengths are measured in a routine manner at four or more stress rates by applying constant displacement or loading rates. The slow crack growth parameters required for design are then estimated from a relationship between strength and stress rate. This new standard will be published in the Annual Book of ASTM Standards, Vol. 15.01, in 1998. Currently, a companion draft ASTM standard for determination of the slow crack growth parameters of advanced ceramics at elevated temperatures is being prepared by the authors and will be presented to the committee by the middle of 1998. Consequently, Lewis will maintain an active leadership role in advanced ceramics standardization within ASTM. In addition, the authors have been and are involved with several international standardization organizations including the Versailles Project on Advanced Materials and Standards (VAMAS), the International Energy Agency (IEA), and the International Organization for Standardization (ISO). The associated standardization activities involve fracture toughness, strength, elastic modulus, and the machining of advanced ceramics.

  10. Bivariate versus multivariate smart spectrophotometric calibration methods for the simultaneous determination of a quaternary mixture of mosapride, pantoprazole and their degradation products.

    PubMed

    Hegazy, M A; Yehia, A M; Moustafa, A A

    2013-05-01

    The ability of bivariate and multivariate spectrophotometric methods was demonstrated in the resolution of a quaternary mixture of mosapride, pantoprazole and their degradation products. The bivariate calibrations include the bivariate spectrophotometric method (BSM) and the H-point standard addition method (HPSAM), which were able to determine the two drugs simultaneously, but not in the presence of their degradation products. The results showed that simultaneous determinations could be performed in the concentration ranges of 5.0-50.0 microg/ml for mosapride and 10.0-40.0 microg/ml for pantoprazole by the bivariate spectrophotometric method, and in the concentration ranges of 5.0-45.0 microg/ml for both drugs by the H-point standard addition method. Moreover, the applied multivariate calibration methods were able to determine mosapride, pantoprazole and their degradation products using concentration residuals augmented classical least squares (CRACLS) and partial least squares (PLS). The proposed multivariate methods were applied to 17 synthetic samples in the concentration ranges of 3.0-12.0 microg/ml mosapride, 8.0-32.0 microg/ml pantoprazole, 1.5-6.0 microg/ml mosapride degradation products and 2.0-8.0 microg/ml pantoprazole degradation products. The proposed bivariate and multivariate calibration methods were successfully applied to the determination of mosapride and pantoprazole in their pharmaceutical preparations.
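
    As an illustration of the multivariate branch, the sketch below fits a partial least squares model relating calibration spectra to the four component concentrations; the data are synthetic random numbers standing in for real spectra, and the CRACLS variant used in the paper is not reproduced.

```python
# Generic PLS calibration: spectra of calibration mixtures (rows) against the
# known concentrations of the four components (columns).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X_cal = rng.normal(size=(17, 200))            # 17 calibration spectra, 200 wavelengths (synthetic)
Y_cal = rng.uniform(1.0, 30.0, size=(17, 4))  # 4 component concentrations per mixture (synthetic)

pls = PLSRegression(n_components=4).fit(X_cal, Y_cal)
X_unknown = rng.normal(size=(3, 200))         # "unknown" spectra (synthetic)
print(pls.predict(X_unknown))                 # predicted concentrations
```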

  11. Standardizing lightweight deflectometer modulus measurements for compaction quality assurance : research summary.

    DOT National Transportation Integrated Search

    2017-09-01

    The mechanistic-empirical pavement design method requires the elastic resilient modulus as the key input for characterization of geomaterials. Current density-based QA procedures do not measure resilient modulus. Additionally, the density-based metho...

  12. Determination of boron in uranium aluminum silicon alloy by spectrophotometry and estimation of expanded uncertainty in measurement

    NASA Astrophysics Data System (ADS)

    Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.

    2008-08-01

    Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethylhexane-1,3-diol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1 σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.
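
    The uncertainty budget follows the usual ISO/GUM pattern: individual relative standard uncertainties are combined in quadrature and the result is multiplied by the coverage factor. The sketch below shows only that arithmetic, with placeholder component values rather than those of the paper.

```python
# Combined and expanded uncertainty from individual relative components (%).
import math

def expanded_uncertainty(rel_components, k=2.0):
    combined = math.sqrt(sum(u ** 2 for u in rel_components))
    return k * combined

# Placeholder components, e.g. weighing, calibration, extraction, repeatability
print(expanded_uncertainty([1.5, 2.0, 3.0, 1.0]))   # expanded uncertainty in %
```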

  13. An Interlaboratory Evaluation of Drift Tube Ion Mobility–Mass Spectrometry Collision Cross Section Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stow, Sarah M.; Causon, Tim J.; Zheng, Xueyun

    Collision cross section (CCS) measurements resulting from ion mobility-mass spectrometry (IM-MS) experiments provide a promising orthogonal dimension of structural information in MS-based analytical separations. As with any molecular identifier, interlaboratory standardization must precede broad range integration into analytical workflows. In this study, we present a reference drift tube ion mobility mass spectrometer (DTIM-MS) where improvements in the measurement accuracy of experimental parameters influencing IM separations provide standardized drift tube, nitrogen CCS values (DTCCSN2) for over 120 unique ion species with the lowest measurement uncertainty to date. The reproducibility of these DTCCSN2 values is evaluated across three additional laboratories on a commercially available DTIM-MS instrument. The traditional stepped field CCS method performs with a relative standard deviation (RSD) of 0.29% for all ion species across the three additional laboratories. The calibrated single field CCS method, which is compatible with a wide range of chromatographic inlet systems, performs with an average absolute bias of 0.54% relative to the standardized stepped field DTCCSN2 values on the reference system. The low RSD and biases observed in this interlaboratory study illustrate the potential of DTIM-MS for providing a molecular identifier for a broad range of discovery based analyses.

  14. Numerical evaluation of discontinuous and nonconforming finite element methods in nonlinear solid mechanics

    NASA Astrophysics Data System (ADS)

    Bayat, Hamid Reza; Krämer, Julian; Wunderlich, Linus; Wulfinghoff, Stephan; Reese, Stefanie; Wohlmuth, Barbara; Wieners, Christian

    2018-03-01

    This work presents a systematic study of discontinuous and nonconforming finite element methods for linear elasticity, finite elasticity, and small strain plasticity. In particular, we consider new hybrid methods with additional degrees of freedom on the skeleton of the mesh, allowing for a local elimination of the element-wise degrees of freedom. We show that this process leads to a well-posed approximation scheme. The quality of the new methods with respect to locking and anisotropy is compared with that of standard conforming methods, locking-free conforming methods, and established (non-)symmetric discontinuous Galerkin methods with interior penalty. For several benchmark configurations, we show that all methods converge asymptotically for fine meshes and that in many cases the hybrid methods are more accurate for a fixed size of the discrete system.

  15. Investigating Arsenic Contents in Surface and Drinking Water by Voltammetry and the Method of Standard Additions

    ERIC Educational Resources Information Center

    Cheng, Anran; Tyne, Rebecca; Kwok, Yu Ting; Rees, Louis; Craig, Lorraine; Lapinee, Chaipat; D'Arcy, Mitch; Weiss, Dominik J.; Salau¨n, Pascal

    2016-01-01

    Testing water samples for arsenic contamination has become an important water quality issue worldwide. Arsenic usually occurs in very small concentrations, and a sensitive analytical method is needed. We present here a 1-day laboratory module developed to introduce Earth Sciences and/or Chemistry student undergraduates to key aspects of this…

  16. Modeling the gas-phase thermochemistry of organosulfur compounds.

    PubMed

    Vandeputte, Aäron G; Sabbe, Maarten K; Reyniers, Marie-Françoise; Marin, Guy B

    2011-06-27

    Key to understanding the involvement of organosulfur compounds in a variety of radical chemistries, such as atmospheric chemistry, polymerization, pyrolysis, and so forth, is knowledge of their thermochemical properties. For organosulfur compounds and radicals, thermochemical data are, however, much less well documented than for hydrocarbons. The traditional recourse to the Benson group additivity method offers no solace since only a very limited number of group additivity values (GAVs) is available. In this work, CBS-QB3 calculations augmented with 1D hindered rotor corrections for 122 organosulfur compounds and 45 organosulfur radicals were used to derive 93 Benson group additivity values, 18 ring-strain corrections, 2 non-nearest-neighbor interactions, and 3 resonance corrections for standard enthalpies of formation, standard molar entropies, and heat capacities of organosulfur compounds and organosulfur radicals. The reported GAVs are consistent with previously reported GAVs for hydrocarbons and hydrocarbon radicals and include 77 contributions, among them 26 radical contributions, which, to the best of our knowledge, have not been reported before. The GAVs allow one to estimate the standard enthalpies of formation at 298 K, the standard entropies at 298 K, and standard heat capacities in the temperature range 300-1500 K for a large set of organosulfur compounds, that is, thiols, thioketones, polysulfides, alkyl sulfides, thials, dithioates, and cyclic sulfur compounds. For a validation set of 26 organosulfur compounds, the mean absolute deviation between experimental and group additively modeled enthalpies of formation amounts to 1.9 kJ mol(-1). For an additional set of 14 organosulfur compounds, it was shown that the mean absolute deviations between calculated and group additively modeled standard entropies and heat capacities are restricted to 4 and 2 J mol(-1) K(-1), respectively. As an alternative to Benson GAVs, 26 new hydrogen-bond increments are reported, which can also be useful for the prediction of radical thermochemistry. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
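
    The way such GAVs are used downstream is a simple additive sum over the groups present in a molecule, plus any ring-strain or non-nearest-neighbour corrections. The sketch below shows the bookkeeping only; the group labels and numerical values are placeholders for illustration and are not the GAVs reported in the paper.

```python
# Group additivity bookkeeping: property = sum(count * GAV) + corrections.
def group_additive_estimate(group_counts, gav_table, corrections=0.0):
    return sum(n * gav_table[g] for g, n in group_counts.items()) + corrections

# Placeholder GAVs for a standard enthalpy of formation (kJ/mol), illustration only
gav_hf = {"C-(C)(H)3": -42.0, "C-(C)(S)(H)2": -25.0, "S-(C)(H)": 19.0}
molecule = {"C-(C)(H)3": 1, "C-(C)(S)(H)2": 1, "S-(C)(H)": 1}   # an ethanethiol-like sketch
print(group_additive_estimate(molecule, gav_hf))
```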

  17. The effect of instructional methodology on high school students' natural sciences standardized test scores

    NASA Astrophysics Data System (ADS)

    Powell, P. E.

    Educators have recently come to consider inquiry-based instruction a more effective method of instruction than didactic instruction. Experience-based learning theory suggests that student performance is linked to teaching method. However, research is limited on inquiry teaching and its effectiveness in preparing students to perform well on standardized tests. The purpose of the study was to investigate whether one of these two teaching methodologies was more effective in increasing student performance on standardized science tests. The quasi-experimental quantitative study was comprised of two stages. Stage 1 used a survey to identify the teaching methods of a convenience sample of 57 teacher participants and determined the level of inquiry used in instruction in order to place participants into instructional groups (the independent variable). Stage 2 used analysis of covariance (ANCOVA) to compare posttest scores on a standardized exam by teaching method. Additional analyses were conducted to examine the differences in science achievement by ethnicity, gender, and socioeconomic status for each teaching methodology. Results demonstrated a statistically significant gain in test scores when students were taught using inquiry-based instruction. Subpopulation analyses indicated that all groups showed improved mean standardized test scores except African American students. The findings benefit teachers and students by presenting data supporting a method of content delivery that increases teacher efficacy and produces students with a greater understanding of science content, in line with the school's mission and goals.

  18. An inversion-based self-calibration for SIMS measurements: Application to H, F, and Cl in apatite

    NASA Astrophysics Data System (ADS)

    Boyce, J. W.; Eiler, J. M.

    2011-12-01

    Measurements of volatile abundances in igneous apatites can provide information regarding the abundances and evolution of volatiles in magmas, with applications to terrestrial volcanism and planetary evolution. Secondary ion mass spectrometry (SIMS) measurements can produce accurate and precise measurements of H and other volatiles in many materials including apatite. SIMS standardization generally makes use of empirical linear transfer functions that relate measured ion ratios to independently known concentrations. However, this approach is often limited by the lack of compositionally diverse, well-characterized, homogeneous standards. In general, SIMS calibrations are developed for minor and trace elements, and any two are treated as independent of one another. However, in crystalline materials, additional stoichiometric constraints may apply. In the case of apatite, the sum of concentrations of abundant volatile elements (H, Cl, and F) should closely approach 100% occupancy of their collective structural site. Here we propose and document the efficacy of a method for standardizing SIMS analyses of abundant volatiles in apatites that takes advantage of this stoichiometric constraint. The principal advantage of this method is that it is effectively self-standardizing; i.e., it requires no independently known homogeneous reference standards. We define a system of independent linear equations relating measured ion ratios (H/P, Cl/P, F/P) and unknown calibration slopes. Given sufficient range in the concentrations of the different elements among apatites measured in a single analytical session, solving this system of equations allows the calibration slope for each element to be determined without standards, using only blank-corrected ion ratios. In the case that a data set of this kind lacks sufficient range in measured compositions of one or more of the relevant ion ratios, one can employ measurements of additional apatites of a variety of compositions to increase the statistical range and make the inversion more accurate and precise. These additional non-standard apatites need only be wide-ranging in composition: they need not be homogeneous, nor have known H, F, or Cl concentrations. Tests utilizing synthetic data and data generated in the laboratory indicate that this method should yield satisfactory results provided apatites meet the criteria of the model. The inversion method is able to reproduce conventional calibrations to within <2.5%, a level of accuracy comparable to or even better than the uncertainty of the conventional calibration, and one that includes both error in the inversion method as well as any true error in the independently determined values of the standards. Uncertainties in the inversion calibrations range from 0.1-1.7% (2σ), typically an order of magnitude smaller than the uncertainties in conventional calibrations (~4-5% for H2O, 1-19% for F and Cl). However, potential systematic errors stem from the model assumption of 100% occupancy of this site by the measured elements. Use of this method simplifies analysis of H, F, and Cl in apatites by SIMS, and may also be amenable to other stoichiometrically limited substitution groups, including P+As+S+Si+C in apatite, and Zr+Hf+U+Th in non-metamict zircon.
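
    Under the stated assumption that H, F and Cl jointly fill the structural site, each apatite analysis contributes one linear equation in the unknown calibration slopes, and the slopes follow from least squares once enough compositional spread is available. The sketch below illustrates that inversion with invented ion ratios; it is not the authors' code.

```python
# Self-calibration by inversion: for each analysis, k_H*(H/P) + k_F*(F/P) +
# k_Cl*(Cl/P) = 1 (full site occupancy in normalised units). Solve for the k's.
import numpy as np

# rows: apatite analyses; columns: blank-corrected H/P, F/P, Cl/P ion ratios (invented)
R = np.array([
    [0.01, 0.8, 0.05],
    [0.03, 0.4, 0.15],
    [0.05, 0.1, 0.20],
    [0.02, 0.6, 0.10],
])
b = np.ones(R.shape[0])            # normalised site occupancy for every analysis

slopes, *_ = np.linalg.lstsq(R, b, rcond=None)
print(slopes)   # calibration slopes converting ion ratios to site fractions
```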

  19. Blood collection techniques, heparin and quinidine protein binding.

    PubMed

    Kessler, K M; Leech, R C; Spann, J F

    1979-02-01

    With the use of glass syringes without heparin and all glass equipment, the percent of unbound quinidine was measured by ultrafiltration and a double-extraction assay method after addition of 2 microgram/ml of quinidine sulfate. Compared to the all-glass method, collection of blood using Vacutainers resulted in an erroneous and variable decrease in quinidine binding related to blood to rubber-stopper contact. With glass, the unbound quinidine fraction was (mean +/- standard error) 10 +/- 1% in 10 normal volunteers, 8.5 +/- 1.5% in 10 patients with congestive heart failure, and 11 +/- 2% in 11 patients with chronic renal failure (although in 8 of the latter 11 patients the percent of unbound quinidine was 4 or more standard errors from the mean of the normal group). During cardiac catheterization, patients had markedly elevated unbound quinidine fractions: 24 +/- 2% (p less than 0.001). This abnormality coincided with the addition of heparin in vivo and was less apparent after the addition of up to 10 U/ml of heparin in vitro (120% and 29% increase in unbound quinidine fractions, respectively). Quinidine binding should be measured with all glass or equivalent equipment.

  20. Accurate diagnosis of thyroid follicular lesions from nuclear morphology using supervised learning.

    PubMed

    Ozolek, John A; Tosun, Akif Burak; Wang, Wei; Chen, Cheng; Kolouri, Soheil; Basu, Saurav; Huang, Hu; Rohde, Gustavo K

    2014-07-01

    Follicular lesions of the thyroid remain significant diagnostic challenges in surgical pathology and cytology. The diagnosis often requires considerable resources and ancillary tests including immunohistochemistry, molecular studies, and expert consultation. Visual analyses of nuclear morphological features, generally speaking, have not been helpful in distinguishing this group of lesions. Here we describe a method for distinguishing between follicular lesions of the thyroid based on nuclear morphology. The method utilizes an optimal transport-based linear embedding for segmented nuclei, together with an adaptation of existing classification methods. We show that the method outputs assignments (classification results) which are nearly perfectly correlated with the clinical diagnosis of several lesion types, utilizing a database of 94 patients in total. Experimental comparisons also show that the new method can significantly outperform standard numerical feature-type methods in terms of agreement with the clinical diagnosis gold standard. In addition, the new method could potentially be used to derive insights into biologically meaningful nuclear morphology differences in these lesions. Our methods could be incorporated into a tool for pathologists to aid in distinguishing between follicular lesions of the thyroid. In addition, these results could potentially provide nuclear morphological correlates of biological behavior and reduce health care costs by decreasing histotechnician and pathologist time and obviating the need for ancillary testing. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. New output improvements for CLASSY

    NASA Technical Reports Server (NTRS)

    Rassbach, M. E. (Principal Investigator)

    1981-01-01

    Additional output data and formats for the CLASSY clustering algorithm were developed. Four such aids to the CLASSY user are described. These are: (1) statistical measures; (2) special map types; (3) formats for standard output; and (4) special cluster display method.

  2. Determination of arsenic in traditional Chinese medicine by microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS).

    PubMed

    Ong, E S; Yong, Y L; Woo, S O

    1999-01-01

    A simple, rapid, and sensitive method with high sample throughput was developed for determining arsenic in traditional Chinese medicine (TCM) in the form of uncoated tablets, sugar-coated tablets, black pills, capsules, powders, and syrups. The method involves microwave digestion with flow injection-inductively coupled plasma mass spectrometry (FI-ICP-MS). Method precision was 2.7-10.1% (relative standard deviation, n = 6) for different concentrations of arsenic in different TCM samples analyzed by different analysts on different days. Method accuracy was checked with a certified reference material (sea lettuce, Ulva lactuca, BCR CRM 279) for external calibration and by spiking arsenic standard into different TCMs. Recoveries of 89-92% were obtained for the certified reference material and higher than 95% for spiked TCMs. Matrix interference was insignificant for samples analyzed by the method of standard addition. Hence, no correction equation was used in the analysis of arsenic in the samples studied. Sample preparation using microwave digestion gave results that were very similar to those obtained by conventional wet acid digestion using nitric acid.

  3. ANSI/ASHRAE/IES Standard 90.1-2010 Performance Rating Method Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goel, Supriya; Rosenberg, Michael I.

    This document is intended to be a reference manual for the Appendix G Performance Rating Method (PRM) of ANSI/ASHRAE/IES Standard 90.1-2010 (Standard 90.1-2010). The PRM is used for rating the energy efficiency of commercial and high-rise residential buildings with designs that exceed the requirements of Standard 90.1. The procedures and processes described in this manual are designed to provide consistency and accuracy by filling in gaps and providing additional details needed by users of the PRM. It should be noted that this document was created independently from ASHRAE and SSPC 90.1 and is not sanctioned nor approved by either of those entities. Potential users of this manual include energy modelers, software developers and implementers of “beyond code” energy programs. Energy modelers using ASHRAE Standard 90.1-2010 for beyond-code programs can use this document as a reference manual for interpreting the requirements of the Performance Rating Method. Software developers developing tools for automated creation of the baseline model can use this reference manual as a guideline for developing the rules for the baseline model.

  4. The Shell Seeker: What Is the Quantity of Shell in the Lido di Venezia Sand? A Calibration DRIFTS Experiment

    ERIC Educational Resources Information Center

    Pezzolo, Alessandra De Lorenzi

    2011-01-01

    In this experiment, students are given a fanciful application of the standard addition method to evaluate the approximate quantity of the shell component in a sample of sand collected on the Lido di Venezia seashore. Several diffuse reflectance infrared Fourier transform (DRIFT) spectra are recorded from a sand sample before and after addition of…

  5. Energy-conserving impact algorithm for the heel-strike phase of gait.

    PubMed

    Kaplan, M L; Heegaard, J H

    2000-06-01

    Significant ground reaction forces exceeding body weight occur during the heel-strike phase of gait. The standard methods of analytical dynamics used to solve the impact problem do not accommodate the heel-strike collision well, due to the persistent contact at the front foot and the presence of contact at the back foot. These methods can cause a non-physical energy gain on the order of the total kinetic energy of the system at impact. Additionally, these standard techniques do not quantify the contact force, but only the impulse over the impact. We present an energy-conserving impact algorithm based on the penalty method to solve for the ground reaction forces during gait. The rigid body assumptions are relaxed and the bodies are allowed to penetrate one another to a small degree. Associated with the deformation is a potential, from which the contact forces are derived. The empirical coefficient of restitution used in the standard approaches is replaced by two parameters that characterize the stiffness and the damping of the materials. We solve two simple heel-strike models to illustrate the shortcomings of a standard approach and the suitability of the proposed method for use with gait.
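
    A penalty-type contact law of the kind described can be sketched in a few lines: once penetration occurs, a restoring force derived from a quadratic penalty potential, plus a damping term, replaces the rigid impact. The stiffness and damping values below are illustrative only, not those identified in the paper.

```python
# Penalty contact force: F = k*delta + c*delta_dot for penetration delta > 0.
def penalty_contact_force(delta, delta_dot, k=1.0e5, c=2.0e2):
    """delta: penetration depth (m); delta_dot: penetration rate (m/s)."""
    if delta <= 0.0:                   # bodies not in contact: no force
        return 0.0
    return k * delta + c * delta_dot   # contact force in newtons

print(penalty_contact_force(0.002, 0.05))   # hypothetical heel penetration state
```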

  6. Biodegradability standards for carrier bags and plastic films in aquatic environments: a critical review

    PubMed Central

    Boardman, Carl; O'Callaghan, Kenneth; Delort, Anne-Marie; Song, Jim

    2018-01-01

    Plastic litter is encountered in aquatic ecosystems across the globe, including polar environments and the deep sea. To mitigate the adverse societal and ecological impacts of this waste, there has been debate on whether ‘biodegradable' materials should be granted exemptions from plastic bag bans and levies. However, great care must be exercised when attempting to define this term, due to the broad and complex range of physical and chemical conditions encountered within natural ecosystems. Here, we review existing international industry standards and regional test methods for evaluating the biodegradability of plastics within aquatic environments (wastewater, unmanaged freshwater and marine habitats). We argue that current standards and test methods are insufficient in their ability to realistically predict the biodegradability of carrier bags in these environments, due to several shortcomings in experimental procedures and a paucity of information in the scientific literature. Moreover, existing biodegradability standards and test methods for aquatic environments do not involve toxicity testing or account for the potentially adverse ecological impacts of carrier bags, plastic additives, polymer degradation products or small (microscopic) plastic particles that can arise via fragmentation. Successfully addressing these knowledge gaps is a key requirement for developing new biodegradability standard(s) for lightweight carrier bags. PMID:29892374

  7. Conceptual designs for in situ analysis of Mars soil

    NASA Technical Reports Server (NTRS)

    Mckay, C. P.; Zent, A. P.; Hartman, H.

    1991-01-01

    A goal of this research is to develop conceptual designs for instrumentation to perform in situ measurements of the Martian soil in order to determine the existence and nature of any reactive chemicals. Our approach involves assessment and critical review of the Viking biology results which indicated the presence of a soil oxidant, an investigation of the possible application of standard soil science techniques to the analysis of Martian soil, and a preliminary consideration of non-standard methods that may be necessary for use in the highly oxidizing Martian soil. Based on our preliminary analysis, we have developed strawman concepts for standard soil analysis on Mars, including pH, suitable for use on a Mars rover mission. In addition, we have devised a method for the determination of the possible strong oxidants on Mars.

  8. Antimicrobial Efficiency of Iodinated Individual Protection Filters

    DTIC Science & Technology

    2004-11-01

    …an additional 2 logs of attenuation versus a standard COTS canister when challenged with MS2 coliphage. INTRODUCTION: Biological weapons are not new, and have been used as warfare… Standard canisters and the iodinated clip-on prototypes were challenged with aerosolized MS2 coliphage. EXPERIMENTAL METHODS: Escherichia coli (ATCC 15597) was…

  9. [Simultaneous Determination of Sn and S in Methyltin Mercaptide by Microwave-Assisted Acid Digestion and ICP-OES].

    PubMed

    Chen, Qian; Wu, Xi; Hou, Xian-deng; Xu, Kai-lai

    2015-09-01

    Methyltin mercaptide is widely used as one of the best heat stabilizers in polyvinylchloride (PVC) thermal processing owing to its excellent stability, good transparency, high compatibility and weather resistance. The contents of sulfur and tin significantly affect its quality and performance, so it is of great significance to develop an analytical method for the simultaneous determination of sulfur and tin. Inductively coupled plasma optical emission spectrometry (ICP-OES) has become a powerful analytical tool for a myriad of complex samples owing to its low detection limits, rapid and precise determinations over wide dynamic ranges, freedom from inter-element chemical interferences, high sample throughput and, above all, simultaneous multi-element analysis. Microwave digestion, a well-developed sample preparation technique, can dramatically reduce the digestion time and the loss of volatile elements compared with traditional open digestion. Hence, a microwave-assisted acid digestion (MW-AAD) procedure followed by ICP-OES analysis was developed for the simultaneous determination of Sn and S in methyltin mercaptide. The method has the advantages of simplicity, rapidity, good accuracy, environmental friendliness and low sample consumption. Parameters affecting the MW-AAD, such as the digestion solution and digestion time, were optimized using a chemically analyzed reference sample (DX-181) to attain quantitative recoveries of tin and sulfur. HNO3-HCl-HClO4 (v/v/v = 9:3:1) and 10 min were found to be the optimum digestion solution and digestion time, respectively. Under optimum conditions, both the standard addition method and the standard calibration curve method were used to determine Sn and S in DX-181. There was no significant difference between the two methods, and the relative deviations from the chemical analysis values were both less than 2%. Additionally, the accuracy of the MW-AAD method was examined by analyzing three methyltin mercaptide samples (DX-181, DX-990, DX-960). The results were satisfactory, with relative deviations below 3% and standard addition recoveries of 99%~102%.

  10. Development of an electrothermal vaporization ICP-MS method and assessment of its applicability to studies of the homogeneity of reference materials.

    PubMed

    Friese, K C; Grobecker, K H; Wätjen, U

    2001-07-01

    A method has been developed for measurement of the homogeneity of analyte distribution in powdered materials by use of electrothermal vaporization with inductively coupled plasma mass spectrometric (ETV-ICP-MS) detection. The method enabled the simultaneous determination of As, Cd, Cu, Fe, Mn, Pb, and Zn in milligram amounts of samples of biological origin. The optimized conditions comprised a high plasma power of 1,500 W, reduced aerosol transport flow, and heating ramps below 300 °C s⁻¹. A temperature ramp to 550 °C ensured effective pyrolysis of approximately 70% of the organic compounds without losses of analyte. An additional hold stage at 700 °C led to separation of most of the analyte signals from the evaporation of carbonaceous matrix compounds. The effect of time resolution of signal acquisition on the precision of the ETV measurements was investigated. An increase in the number of masses monitored up to 20 is possible with not more than 1% additional relative standard deviation of results caused by the limited temporal resolution of the transient signals. Recording of signals from the nebulization of aqueous standards in each sample run enabled correction for drift of the sensitivity of the ETV-ICP-MS instrument. The applicability of the developed method to homogeneity studies was assessed by use of four certified reference materials. According to the best repeatability observed in these sample runs, the maximum contribution of the method to the standard deviation is approximately 5% to 6% for all the elements investigated.

  11. Test Methodology to Evaluate the Safety of Materials Using Spark Incendivity

    NASA Technical Reports Server (NTRS)

    Buhler, Charles; Calle, Carlos; Clements, Sid; Ritz, Mindy; Starnes, Jeff

    2007-01-01

    For many years, scientists and engineers have been searching for a proper test method to evaluate the electrostatic risk of materials used in hazardous environments. A new test standard created by the International Electrotechnical Commission is a promising addition to conventional test methods used throughout industry. The purpose of this paper is to incorporate this test into a proposed new methodology for the evaluation of materials exposed to flammable environments. However, initial testing using this new standard has uncovered some unconventional behavior in materials that conventional test methods were thought to have reconciled. For example, some materials tested at higher humidity were more susceptible to incendive discharges than at lower humidity, even though their surface resistivity was lower.

  12. A comparison of data-driven groundwater vulnerability assessment methods

    USGS Publications Warehouse

    Sorichetta, Alessandro; Ballabio, Cristiano; Masetti, Marco; Robinson, Gilpin R.; Sterlacchini, Simone

    2013-01-01

    Increasing availability of geo-environmental data has promoted the use of statistical methods to assess groundwater vulnerability. Nitrate is a widespread anthropogenic contaminant in groundwater and its occurrence can be used to identify aquifer settings vulnerable to contamination. In this study, multivariate Weights of Evidence (WofE) and Logistic Regression (LR) methods, where the response variable is binary, were used to evaluate the role and importance of a number of explanatory variables associated with nitrate sources and occurrence in groundwater in the Milan District (central part of the Po Plain, Italy). The results of these models have been used to map the spatial variation of groundwater vulnerability to nitrate in the region, and we compare the similarities and differences of their spatial patterns and associated explanatory variables. We modify the standard WofE method used in previous groundwater vulnerability studies to a form analogous to that used in LR; this provides a framework to compare the results of both models and reduces the effect of sampling bias on the results of the standard WofE model. In addition, a nonlinear Generalized Additive Model has been used to extend the LR analysis. Both approaches improved discrimination of the standard WofE and LR models, as measured by the c-statistic. Groundwater vulnerability probability outputs, based on rank-order classification of the respective model results, were similar in spatial patterns and identified similar strong explanatory variables associated with nitrate source (population density as a proxy for sewage systems and septic sources) and nitrate occurrence (groundwater depth).
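
    As a rough illustration of the data-driven approach described above (not the authors' code or data), a binary response such as nitrate exceedance can be modelled with logistic regression and its discrimination summarized by the c-statistic, i.e. the area under the ROC curve. The explanatory variables, threshold and data below are hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)

      # Hypothetical explanatory variables: population density, groundwater depth
      X = rng.normal(size=(500, 2))
      # Hypothetical binary response: True where nitrate exceeds a chosen threshold
      y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=500)) > 0

      model = LogisticRegression().fit(X, y)
      vulnerability = model.predict_proba(X)[:, 1]  # later binned into classes
      print("c-statistic:", roc_auc_score(y, vulnerability))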

  13. In situ Spectroscopic Analysis and Quantification of [Tc(CO)3]+ in Hanford Tank Waste.

    PubMed

    Branch, Shirmir D; French, Amanda D; Lines, Amanda M; Soderquist, Chuck Z; Rapko, Brian M; Heineman, William R; Bryan, Samuel A

    2018-06-12

    The quantitative conversion of non-pertechnetate [Tc(CO)3]+ species in nuclear waste storage tank 241-AN-102 at the Hanford Site is demonstrated. A waste sample containing the [Tc(CO)3]+ species is added to a developer solution that rapidly converts the non-emissive species into a luminescent complex, which is detected spectroscopically. This method was first demonstrated using a [Tc(CO)3]+ sample in a non-waste-containing matrix to determine a detection limit (LOD), resulting in a [Tc(CO)3]+ LOD of 2.20 × 10⁻⁷ M, very near the LOD of the independently synthesized standard (2.10 × 10⁻⁷ M). The method was then used to detect [Tc(CO)3]+ in a simulated waste using the standard addition method, resulting in a [Tc(CO)3]+ concentration of 1.89 × 10⁻⁵ M (within 27.7% of the concentration determined by β liquid scintillation counting). Three samples from 241-AN-102 were tested by the standard addition method: (1) a 5 M Na adjusted fraction, (2) a fraction depleted of 137Cs, and (3) an acid-stripped eluate. The concentrations of [Tc(CO)3]+ in these fractions were determined to be 9.90 × 10⁻⁶ M (1), 0 M (2), and 2.46 × 10⁻⁶ M (3), respectively. The concentration of [Tc(CO)3]+ in the as-received AN-102 tank waste supernatant was determined to be 1.84 × 10⁻⁵ M.

  14. Slurry sampling electrothermal vaporization inductively coupled plasma mass spectrometry for the determination of Cr, Cd and Pb in plastics.

    PubMed

    Li, Po-Chien; Jiang, Shiuh-Jen

    2006-07-01

    Ultrasonic slurry sampling electrothermal vaporization dynamic reaction cell inductively coupled plasma mass spectrometry (USS-ETV-DRC-ICP-MS) for the determination of Cr, Cd and Pb in several plastic samples, using NH4NO3 as the modifier, is described. The influences of the instrumental operating conditions and the slurry preparation technique on the ion signals are investigated. A reduction in the intensity of the background at signals corresponding to chromium masses (arising from matrix elements) was achieved by using NH3 as the reaction cell gas in the DRC. The method was applied to determine Cr, Cd and Pb in two polystyrene (PS) samples and a polyvinyl chloride (PVC) sample using two different calibration methods, namely standard addition and isotope dilution. The results were in good agreement with those for digested samples analyzed by ultrasonic nebulization DRC-ICP-MS. The precision between sample replicates was better than 17% with the USS-ETV-DRC-ICP-MS method. The method detection limits, estimated from standard addition curves, were about 6-9, 1-2 and 8-11 ng g⁻¹ for Cr, Cd and Pb, respectively, in the original plastic samples.
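
    The abstract notes that method detection limits were estimated from the standard addition curves. A common convention for such an estimate (a hedged illustration, not necessarily the calculation used by the authors) takes three standard deviations of the regression scatter divided by the slope of the addition curve:

        \mathrm{LOD} \approx \frac{3\, s_{y/x}}{m}

    where s_{y/x} is the standard deviation of the residuals about the standard addition line (or of replicate blank/low-level measurements) and m is its slope, so that the limit is expressed in concentration units.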

  15. Vibration Testing of Electrical Cables to Quantify Loads at Tie-Down Locations

    NASA Technical Reports Server (NTRS)

    Dutson, Joseph D.

    2013-01-01

    The standard method for defining static equivalent structural load factors for components is based on Miles' equation. Unless test data are available, 5% critical damping is assumed for all components when calculating loads. Application of this method to electrical cable tie-down hardware often results in high loads, which frequently exceed the capability of typical tie-down options such as cable ties and P-clamps. Random vibration testing of electrical cables was used to better understand the factors that influence component loads: natural frequency, damping, and mass participation. An initial round of vibration testing successfully identified variables of interest, checked out the test fixture and instrumentation, and provided justification for removing some conservatism in the standard method. Additional testing is planned that will include a larger range of cable sizes for the most significant contributors to load as variables to further refine loads at cable tie-down points. Completed testing has provided justification to reduce loads at cable tie-downs by 45%, with additional refinement based on measured cable natural frequencies.
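
    For context on the standard method referred to above, Miles' equation estimates the RMS acceleration response of a single-degree-of-freedom component driven by a flat random-vibration input; a static-equivalent load factor is then commonly taken as three times this value. In its usual form (symbols as conventionally defined, not taken from this report):

        G_{\mathrm{rms}} = \sqrt{\frac{\pi}{2}\, f_n \, Q \, W(f_n)}, \qquad Q = \frac{1}{2\zeta}

    where f_n is the component natural frequency in Hz, ζ is the critical damping ratio (the 5% damping assumption corresponds to Q = 10), and W(f_n) is the input acceleration spectral density in g²/Hz at f_n.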

  16. Revision of the NIST Standard for (223)Ra: New Measurements and Review of 2008 Data.

    PubMed

    Zimmerman, B E; Bergeron, D E; Cessna, J T; Fitzgerald, R; Pibida, L

    2015-01-01

    After discovering a discrepancy in the transfer standard currently being disseminated by the National Institute of Standards and Technology (NIST), we have performed a new primary standardization of the alpha-emitter (223)Ra using Live-timed Anticoincidence Counting (LTAC) and the Triple-to-Double Coincidence Ratio Method (TDCR). Additional confirmatory measurements were made with the CIEMAT-NIST efficiency tracing method (CNET) of liquid scintillation counting, integral γ-ray counting using a NaI(Tl) well counter, and several High Purity Germanium (HPGe) detectors in an attempt to understand the origin of the discrepancy and to provide a correction. The results indicate that a -9.5 % difference exists between activity values obtained using the former transfer standard relative to the new primary standardization. During one of the experiments, a 2 % difference in activity was observed between dilutions of the (223)Ra master solution prepared using the composition used in the original standardization and those prepared using 1 mol·L(-1) HCl. This effect appeared to be dependent on the number of dilutions or the total dilution factor to the master solution, but the magnitude was not reproducible. A new calibration factor ("K-value") has been determined for the NIST Secondary Standard Ionization Chamber (IC "A"), thereby correcting the discrepancy between the primary and secondary standards.

  17. Studies and research concerning BNFP: process monitoring and process surveillance demonstration program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kight, H R

    1979-11-01

    Computerized methods of monitoring process functions and alarming off-standard conditions were implemented and demonstrated during the FY 1979 Uranium Run. In addition, prototype applications of instruments for the purpose of tamper indication and surveillance were tested.

  18. 40 CFR 63.1352 - Additional test methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Standards for Hazardous Air Pollutants From the Portland Cement Manufacturing Industry Monitoring and... bypass stacks at portland cement manufacturing facilities, for use in applicability determinations under... kiln/raw mills and associated bypass stacks at portland cement manufacturing facilities, for use in...

  19. Accounting for both local aquatic community composition and bioavailability in setting site-specific quality standards for zinc.

    PubMed

    Peters, Adam; Simpson, Peter; Moccia, Alessandra

    2014-01-01

    Recent years have seen considerable improvement in water quality standards (QS) for metals by taking account of the effect of local water chemistry conditions on their bioavailability. We describe preliminary efforts to further refine water quality standards, by taking account of the composition of the local ecological community (the ultimate protection objective) in addition to bioavailability. Relevance of QS to the local ecological community is critical as it is important to minimise instances where quality classification using QS does not reconcile with a quality classification based on an assessment of the composition of the local ecology (e.g. using benthic macroinvertebrate quality assessment metrics such as River InVertebrate Prediction and Classification System (RIVPACS)), particularly where ecology is assessed to be at good or better status, whilst chemical quality is determined to be failing relevant standards. The alternative approach outlined here describes a method to derive a site-specific species sensitivity distribution (SSD) based on the ecological community which is expected to be present at the site in the absence of anthropogenic pressures (reference conditions). The method combines a conventional laboratory ecotoxicity dataset normalised for bioavailability with field measurements of the response of benthic macroinvertebrate abundance to chemical exposure. Site-specific QSref are then derived from the 5%ile of this SSD. Using this method, site QSref have been derived for zinc in an area impacted by historic mining activities. Application of QSref can result in greater agreement between chemical and ecological metrics of environmental quality compared with the use of either conventional (QScon) or bioavailability-based QS (QSbio). In addition to zinc, the approach is likely to be applicable to other metals and possibly other types of chemical stressors (e.g. pesticides). However, the methodology for deriving site-specific targets requires additional development and validation before they can be robustly applied during surface water classification.
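
    To make the "5%ile of this SSD" step concrete, a minimal sketch is given below: bioavailability-normalised toxicity values (here assumed to be log-normally distributed) are fitted and the 5th percentile (the HC5) is taken as the candidate site-specific standard. The data and the distributional choice are illustrative only.

      import numpy as np
      from scipy import stats

      # Hypothetical bioavailability-normalised zinc effect concentrations (ug/L)
      tox_values = np.array([45.0, 62.0, 80.0, 110.0, 150.0, 210.0, 300.0, 420.0])

      # Fit a log-normal SSD and take its 5th percentile as the HC5 / QSref
      log_vals = np.log10(tox_values)
      mu, sigma = log_vals.mean(), log_vals.std(ddof=1)
      hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)

      print(f"HC5 (candidate site-specific QS): {hc5:.1f} ug/L")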

  20. A generalized method for characterization of 235U and 239Pu content using short-lived fission product gamma spectroscopy

    DOE PAGES

    Knowles, Justin R.; Skutnik, Steven E.; Glasgow, David C.; ...

    2016-06-23

    Rapid non-destructive assay methods for trace fissile material analysis are needed in both nuclear forensics and safeguards communities. To address these needs, research at the High Flux Isotope Reactor Neutron Activation Analysis laboratory has developed a generalized non-destructive assay method to characterize materials containing fissile isotopes. This method relies on gamma-ray emissions from short-lived fission products and capitalizes off of differences in fission product yields to identify fissile compositions of trace material samples. Although prior work has explored the use of short-lived fission product gamma-ray measurements, the proposed method is the first to provide a holistic characterization of isotopic identification, mass ratios, and absolute mass determination. Successful single fissile isotope mass recoveries of less than 6% error have been conducted on standards of 235U and 239Pu as low as 12 nanograms in less than 10 minutes. Additionally, mixtures of fissile isotope standards containing 235U and 239Pu have been characterized as low as 229 nanograms of fissile mass with less than 12% error. The generalizability of this method is illustrated by evaluating different fissile isotopes, mixtures of fissile isotopes, and two different irradiation positions in the reactor. Furthermore, it is anticipated that this method will be expanded to characterize additional fissile nuclides, utilize various irradiation sources, and account for increasingly complex sample matrices.

  1. A generalized method for characterization of 235U and 239Pu content using short-lived fission product gamma spectroscopy

    NASA Astrophysics Data System (ADS)

    Knowles, Justin; Skutnik, Steven; Glasgow, David; Kapsimalis, Roger

    2016-10-01

    Rapid nondestructive assay methods for trace fissile material analysis are needed in both nuclear forensics and safeguards communities. To address these needs, research at the Oak Ridge National Laboratory High Flux Isotope Reactor Neutron Activation Analysis facility has developed a generalized nondestructive assay method to characterize materials containing fissile isotopes. This method relies on gamma-ray emissions from short-lived fission products and makes use of differences in fission product yields to identify fissile compositions of trace material samples. Although prior work has explored the use of short-lived fission product gamma-ray measurements, the proposed method is the first to provide a complete characterization of isotopic identification, mass ratios, and absolute mass determination. Successful single fissile isotope mass recoveries of less than 6% recovery bias have been conducted on standards of 235U and 239Pu as low as 12 ng in less than 10 minutes. Additionally, mixtures of fissile isotope standards containing 235U and 239Pu have been characterized as low as 198 ng of fissile mass with less than 7% recovery bias. The generalizability of this method is illustrated by evaluating different fissile isotopes, mixtures of fissile isotopes, and two different irradiation positions in the reactor. It is anticipated that this method will be expanded to characterize additional fissile nuclides, utilize various irradiation facilities, and account for increasingly complex sample matrices.

  2. A generalized method for characterization of 235U and 239Pu content using short-lived fission product gamma spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knowles, Justin R.; Skutnik, Steven E.; Glasgow, David C.

    Rapid non-destructive assay methods for trace fissile material analysis are needed in both nuclear forensics and safeguards communities. To address these needs, research at the High Flux Isotope Reactor Neutron Activation Analysis laboratory has developed a generalized non-destructive assay method to characterize materials containing fissile isotopes. This method relies on gamma-ray emissions from short-lived fission products and capitalizes off of differences in fission product yields to identify fissile compositions of trace material samples. Although prior work has explored the use of short-lived fission product gamma-ray measurements, the proposed method is the first to provide a holistic characterization of isotopic identification, mass ratios, and absolute mass determination. Successful single fissile isotope mass recoveries of less than 6% error have been conducted on standards of 235U and 239Pu as low as 12 nanograms in less than 10 minutes. Additionally, mixtures of fissile isotope standards containing 235U and 239Pu have been characterized as low as 229 nanograms of fissile mass with less than 12% error. The generalizability of this method is illustrated by evaluating different fissile isotopes, mixtures of fissile isotopes, and two different irradiation positions in the reactor. Furthermore, it is anticipated that this method will be expanded to characterize additional fissile nuclides, utilize various irradiation sources, and account for increasingly complex sample matrices.

  3. ASTM international workshop on standards and measurements for tissue engineering scaffolds.

    PubMed

    Simon, Carl G; Yaszemski, Michael J; Ratcliffe, Anthony; Tomlins, Paul; Luginbuehl, Reto; Tesk, John A

    2015-07-01

    The "Workshop on Standards & Measurements for Tissue Engineering Scaffolds" was held on May 21, 2013 in Indianapolis, IN, and was sponsored by the ASTM International (ASTM). The purpose of the workshop was to identify the highest priority items for future standards work for scaffolds used in the development and manufacture of tissue engineered medical products (TEMPs). Eighteen speakers and 78 attendees met to assess current scaffold standards and to prioritize needs for future standards. A key finding was that the ASTM TEMPs subcommittees (F04.41-46) have many active "guide" documents for educational purposes, but few standard "test methods" or "practices." Overwhelmingly, the most clearly identified need was standards for measuring the structure of scaffolds, followed by standards for biological characterization, including in vitro testing, animal models and cell-material interactions. The third most pressing need was to develop standards for assessing the mechanical properties of scaffolds. Additional needs included standards for assessing scaffold degradation, clinical outcomes with scaffolds, effects of sterilization on scaffolds, scaffold composition, and drug release from scaffolds. Discussions highlighted the need for additional scaffold reference materials and the need to use them for measurement traceability. Workshop participants emphasized the need to promote the use of standards in scaffold fabrication, characterization, and commercialization. Finally, participants noted that standards would be more broadly accepted if their impact in the TEMPs community could be quantified. Many scaffold standard needs have been identified and focus is turning to generating these standards to support the use of scaffolds in TEMPs. © 2014 Wiley Periodicals, Inc.

  4. Method of synthesizing tungsten nanoparticles

    DOEpatents

    Thoma, Steven G; Anderson, Travis M

    2013-02-12

    A method to synthesize tungsten nanoparticles has been developed that enables synthesis of nanometer-scale, monodisperse particles that can be stabilized only by tetrahydrofuran. The method can be used at room temperature, is scalable, and the product concentrated by standard means. Since no additives or stabilizing surfactants are required, this method is particularly well suited for producing tungsten nanoparticles for dispersion in polymers. If complete dispersion is achieved due to the size of the nanoparticles, then the optical properties of the polymer can be largely maintained.

  5. Comparison of methods for acid quantification: impact of resist components on acid-generating efficiency

    NASA Astrophysics Data System (ADS)

    Cameron, James F.; Fradkin, Leslie; Moore, Kathryn; Pohlers, Gerd

    2000-06-01

    Chemically amplified deep UV (CA-DUV) positive resists are the enabling materials for manufacture of devices at and below 0.18 micrometer design rules in the semiconductor industry. CA-DUV resists are typically based on a combination of an acid labile polymer and a photoacid generator (PAG). Upon UV exposure, a catalytic amount of a strong Bronsted acid is released and is subsequently used in a post-exposure bake step to deprotect the acid labile polymer. Deprotection transforms the acid labile polymer into a base soluble polymer and ultimately enables positive tone image development in dilute aqueous base. As CA-DUV resist systems continue to mature and are used in increasingly demanding situations, it is critical to develop a fundamental understanding of how robust these materials are. One of the most important factors to quantify is how much acid is photogenerated in these systems at key exposure doses. For the purpose of quantifying photoacid generation several methods have been devised. These include spectrophotometric methods, ion conductivity methods and most recently an acid-base type titration similar to the standard addition method. This paper compares many of these techniques. First, comparisons between the most commonly used acid sensitive dye, tetrabromophenol blue sodium salt (TBPB) and a less common acid sensitive dye, Rhodamine B base (RB) are made in several resist systems. Second, the novel acid-base type titration based on the standard addition method is compared to the spectrophotometric titration method. During these studies, the make up of the resist system is probed as follows: the photoacid generator and resist additives are varied to understand the impact of each of these resist components on the acid generation process.

  6. Non-additive Effects in Genomic Selection

    PubMed Central

    Varona, Luis; Legarra, Andres; Toro, Miguel A.; Vitezica, Zulma G.

    2018-01-01

    In the last decade, genomic selection has become a standard in the genetic evaluation of livestock populations. However, most procedures for the implementation of genomic selection only consider the additive effects associated with SNP (Single Nucleotide Polymorphism) markers used to calculate the prediction of the breeding values of candidates for selection. Nevertheless, the availability of estimates of non-additive effects is of interest because: (i) they contribute to an increase in the accuracy of the prediction of breeding values and the genetic response; (ii) they allow the definition of mate allocation procedures between candidates for selection; and (iii) they can be used to enhance non-additive genetic variation through the definition of appropriate crossbreeding or purebred breeding schemes. This study presents a review of methods for the incorporation of non-additive genetic effects into genomic selection procedures and their potential applications in the prediction of future performance, mate allocation, crossbreeding, and purebred selection. The work concludes with a brief outline of some ideas for future lines of research that may help the standard inclusion of non-additive effects in genomic selection. PMID:29559995

  7. Non-additive Effects in Genomic Selection.

    PubMed

    Varona, Luis; Legarra, Andres; Toro, Miguel A; Vitezica, Zulma G

    2018-01-01

    In the last decade, genomic selection has become a standard in the genetic evaluation of livestock populations. However, most procedures for the implementation of genomic selection only consider the additive effects associated with SNP (Single Nucleotide Polymorphism) markers used to calculate the prediction of the breeding values of candidates for selection. Nevertheless, the availability of estimates of non-additive effects is of interest because: (i) they contribute to an increase in the accuracy of the prediction of breeding values and the genetic response; (ii) they allow the definition of mate allocation procedures between candidates for selection; and (iii) they can be used to enhance non-additive genetic variation through the definition of appropriate crossbreeding or purebred breeding schemes. This study presents a review of methods for the incorporation of non-additive genetic effects into genomic selection procedures and their potential applications in the prediction of future performance, mate allocation, crossbreeding, and purebred selection. The work concludes with a brief outline of some ideas for future lines of research that may help the standard inclusion of non-additive effects in genomic selection.

  8. New methods of MR image intensity standardization via generalized scale

    NASA Astrophysics Data System (ADS)

    Madabhushi, Anant; Udupa, Jayaram K.

    2005-04-01

    Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
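
    A heavily simplified sketch of the landmark-based standardization idea on which these methods build (it illustrates the general histogram-landmark approach, not the g-scale or gB-scale algorithms themselves): intensities are mapped onto a standard scale by a piecewise-linear transform anchored at histogram landmarks such as selected percentiles. The landmark choices and the standard scale below are illustrative.

      import numpy as np

      def standardize_intensities(image, landmark_pcts=(1, 50, 99),
                                  standard_scale=(0.0, 500.0, 1000.0)):
          """Map intensities onto a standard scale by piecewise-linear
          interpolation between histogram landmarks (here: percentiles)."""
          landmarks = np.percentile(image, landmark_pcts)
          return np.interp(image, landmarks, standard_scale)

      # Hypothetical MR volume with an acquisition-dependent intensity range
      rng = np.random.default_rng(1)
      volume = rng.gamma(shape=2.0, scale=300.0, size=(64, 64, 16))
      standardized = standardize_intensities(volume)
      print(standardized.min(), np.median(standardized), standardized.max())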

  9. Neutral monosaccharide composition analysis of plant-derived oligo- and polysaccharides by high performance liquid chromatography.

    PubMed

    Yan, Jun; Shi, Songshan; Wang, Hongwei; Liu, Ruimin; Li, Ning; Chen, Yonglin; Wang, Shunchun

    2016-01-20

    A novel analytical method for neutral monosaccharide composition analysis of plant-derived oligo- and polysaccharides was developed using hydrophilic interaction liquid chromatography coupled to a charged aerosol detector. The effects of column type, additives, pH and column temperature on retention and separation were evaluated. Additionally, the method could distinguish potential impurities in samples, including chloride, sulfate and sodium, from sugars. The results of validation demonstrated that this method had good linearity (R(2) ≥ 0.9981), high precision (relative standard deviation ≤ 4.43%), and adequate accuracy (94.02-103.37% recovery) and sensitivity (detection limit: 15-40 ng). Finally, the monosaccharide compositions of the polysaccharide from Eclipta prostrata L. and stachyose were successfully profiled through this method. This report represents the first time that all of these common monosaccharides could be well-separated and determined simultaneously by high performance liquid chromatography without additional derivatization. This newly developed method is convenient, efficient and reliable for monosaccharide analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Assay of free captopril in human plasma as monobromobimane derivative, using RPLC/(+)ESI/MS/MS: validation aspects and bioequivalence evaluation.

    PubMed

    Medvedovici, Andrei; Albu, Florin; Sora, Iuliana Daniela; Udrescu, Stefan; Galaon, Toma; David, Victor

    2009-10-01

    A sensitive method for determination of free captopril as monobromobimane derivative in plasma samples is discussed. The internal standard (IS) was 5-methoxy-1H-benzimidazole-2-thiol. Derivatization with monobromobimane immediately after blood collection and plasma preparation prevents oxidation of captopril to the corresponding disulfide compound and enhances the ionization yield. Consequently, derivatization enhances sample stability and detection sensitivity. Addition of the internal standard was made immediately after plasma preparation. The internal standard was also derivatized by monobromobimane, as it contains a thiol functional group. Preparation of plasma samples containing captopril and IS derivatives was based upon protein precipitation through addition of acetonitrile, in a volumetric ratio 1:2. The reversed-phase liquid chromatographic separation was achieved on a rapid resolution cartridge Zorbax SB-C(18), monitored through positive electrospray ionization and tandem MS detection using the multiple-reaction monitoring mode. Transitions were 408-362 amu for the captopril derivative and 371-260 amu for the internal standard derivative. The kinetics of captopril oxidation to the corresponding disulfide compound in plasma matrix was also studied using the proposed method. A linear log-log calibration was obtained over the concentration interval 2.5-750 ng/mL. A low limit of quantitation in the 2.5 ng/mL range was obtained. The analytical method was fully validated and successfully applied in a three-way, three-period, single-dose (50 mg), block-randomized bioequivalence study for two pharmaceutical formulations (captopril LPH 25 and 50 mg) against the comparator Capoten 50 mg. Copyright (c) 2009 John Wiley & Sons, Ltd.

  11. Establishment of metrological traceability in porosity measurements by x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Hermanek, Petr; Carmignato, Simone

    2017-09-01

    Internal porosity is a phenomenon inherent to many manufacturing processes, such as casting, additive manufacturing, and others. Since these defects cannot be completely avoided by improving production processes, it is important to have a reliable method to detect and evaluate them accurately. The accurate evaluation becomes even more important given current industrial trends to minimize the size and weight of products on one side, and enhance their complexity and performance on the other. X-ray computed tomography (CT) has emerged as a promising instrument for holistic porosity measurements, offering several advantages over equivalent methods already established in the detection of internal defects. The main shortcomings of the conventional techniques pertain to overly general information about total porosity content (e.g. the Archimedes method) or the destructive nature of testing (e.g. microscopy of cross-sections). On the contrary, CT is a non-destructive technique providing complete information about the size, shape and distribution of internal porosity. However, due to the lack of international standards and the fact that it is a relatively new measurement technique, CT as a measurement technology has not yet reached maturity. This study proposes a procedure for the establishment of measurement traceability in porosity measurements by CT, including the necessary evaluation of measurement uncertainty. The traceability transfer is carried out through a novel reference standard calibrated by optical and tactile coordinate measuring systems. The measurement uncertainty is calculated following international standards and guidelines. In addition, the accuracy of porosity measurements by CT with the associated measurement uncertainty is evaluated using the reference standard.

  12. Sustainability Characterization for Additive Manufacturing

    PubMed Central

    Mani, Mahesh; Lyons, Kevin W; Gupta, SK

    2014-01-01

    Additive manufacturing (AM) has the potential to create geometrically complex parts that require a high degree of customization, using less material and producing less waste. Recent studies have shown that AM can be an economically viable option for use by the industry, yet there are some inherent challenges associated with AM for wider acceptance. The lack of standards in AM impedes its use for parts production since industries primarily depend on established standards in processes and material selection to ensure the consistency and quality. Inability to compare AM performance against traditional manufacturing methods can be a barrier for implementing AM processes. AM process sustainability has become a driver due to growing environmental concerns for manufacturing. This has reinforced the importance to understand and characterize AM processes for sustainability. Process characterization for sustainability will help close the gaps for comparing AM performance to traditional manufacturing methods. Based on a literature review, this paper first examines the potential environmental impacts of AM. A methodology for sustainability characterization of AM is then proposed to serve as a resource for the community to benchmark AM processes for sustainability. Next, research perspectives are discussed along with relevant standardization efforts. PMID:26601038

  13. High-throughput real-time quantitative reverse transcription PCR.

    PubMed

    Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F

    2006-02-01

    Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
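
    A minimal sketch of the comparative Ct (DeltaDeltaCt) calculation referred to above, assuming roughly 100% amplification efficiency; the Ct values are placeholders.

      def fold_change_ddct(ct_target_sample, ct_ref_sample,
                           ct_target_control, ct_ref_control):
          """Relative expression by the comparative Ct (DeltaDeltaCt) method.

          DeltaCt      = Ct(target) - Ct(reference gene), per sample
          DeltaDeltaCt = DeltaCt(sample) - DeltaCt(control/calibrator)
          fold change  = 2 ** (-DeltaDeltaCt), assuming ~100% efficiency
          """
          d_ct_sample = ct_target_sample - ct_ref_sample
          d_ct_control = ct_target_control - ct_ref_control
          return 2.0 ** (-(d_ct_sample - d_ct_control))

      # Hypothetical Ct values: treated vs. control, normalized to a housekeeping gene
      print(fold_change_ddct(24.1, 18.0, 26.3, 18.2))  # ~4-fold up-regulation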

  14. Scoring and setting pass/fail standards for an essay certification examination in nurse-midwifery.

    PubMed

    Fullerton, J T; Greener, D L; Gross, L J

    1992-03-01

    Examination for certification or licensure of health professionals (credentialing) in the United States is almost exclusively of the multiple choice format. The certification examination for entry into the practice of the profession of nurse-midwifery has, however, used a modified essay format throughout its twenty-year history. The examination has recently undergone a revision in the method for score interpretation and for pass/fail decision-making. The revised method, described in this paper, has important implications for all health professional credentialing agencies which use modified essay, oral or practical methods of competency assessment. This paper describes criterion-referenced scoring, the process of constructing the essay items, the methods for assuring validity and reliability for the examination, and the manner of standard setting. In addition, two alternative methods for increasing the validity of the pass/fail decision are evaluated, and the rationale for decision-making about marginal candidates is described.

  15. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    PubMed

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other, more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, Multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi-task learning is offered by providing the data and the protocols.
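
    For reference, the Matthews Correlation Coefficient used as one of the standardized metrics above can be computed directly from confusion-matrix counts. The sketch below is a hand-written formula rather than code from the cited study, and the counts are invented.

      from math import sqrt

      def matthews_corrcoef(tp, tn, fp, fn):
          """Matthews Correlation Coefficient from confusion-matrix counts."""
          denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
          if denom == 0:
              return 0.0      # common convention when a marginal count is zero
          return (tp * tn - fp * fn) / denom

      # Hypothetical classifier output on a bioactivity benchmark
      print(matthews_corrcoef(tp=820, tn=9100, fp=400, fn=180))  # ~0.71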

  16. Standardization of Fat:SNF ratio of milk and addition of sprouted wheat fada (semolina) for the manufacture of halvasan.

    PubMed

    Chaudhary, Apurva H; Patel, H G; Prajapati, P S; Prajapati, J P

    2015-04-01

    Traditional Indian dairy products such as Halvasan are manufactured in India using age-old practices. For industrial manufacture of such products, a standard formulation is required. Halvasan is a region-specific, very popular heat-desiccated milk product but has not been studied scientifically. Fat and solids-not-fat (SNF) play an important role in the physico-chemical, sensory and textural characteristics, and also the shelf life, of any milk sweet. Hence, for process standardization of Halvasan manufacture, different Fat:SNF ratios of milk, i.e. 0.44, 0.55, 0.66 and 0.77, were studied so that an optimum level yielding the best organoleptic characteristics in the final product could be selected. The product was made from milk standardized to these Fat:SNF ratios and manufactured by the method tentatively employed on the basis of characterization of market samples of the product in the laboratory. Based on the sensory results obtained, a Fat:SNF ratio of 0.66 for the milk was selected. Similarly, to standardize the rate of addition of fada (semolina), 30, 40, 50 and 60 g of fada (semolina) per kg of milk were added, and based on the sensory observations, the addition level of 50 g/kg of milk was adjudged the best for Halvasan manufacture and hence selected.

  17. Evaluation of ultraviolet spectrophotometry for simultaneous analysis of alkylbenzenes, alkylnaphthalenes, alkylanthracenes/phenanthrenes and total aromatics in mid-distillate fuels

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Seng, G. T.

    1982-01-01

    A rapid ultraviolet spectrophotometric method for the simultaneous determination of aromatics in mid-distillate fuels was developed and evaluated. In this method, alkylbenzenes, alkylnaphthalenes, alkylanthracenes/phenanthrenes and total aromatics were determined from the ultraviolet spectra of the fuels. The accuracy and precision were determined using simulated standard fuels with known compositions. The total aromatics fraction accuracy was 5% for a Jet A type fuel and 0.6% for a broadened-properties jet turbine type fuel. Precision, expressed as relative standard deviation, ranged from 2.9% for the alkylanthracenes/phenanthrenes to 15.3% for the alkylbenzenes. The accuracy, however, was lower for actual fuel samples when compared with the results obtained by a mass spectrometric method. In addition, the ASTM D-1840 method for naphthalenes by ultraviolet spectroscopy was evaluated.
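
    A rough sketch of how a simultaneous multicomponent determination from UV spectra can be set up (the wavelength count, absorptivity values and class assignments below are invented and are not those of the cited method): absorbances at several wavelengths are written as a linear combination of class concentrations and solved by least squares.

      import numpy as np

      # Hypothetical absorptivity matrix K: rows = wavelengths, columns =
      # aromatic classes (benzenes, naphthalenes, anthracenes/phenanthrenes)
      K = np.array([[0.90, 0.20, 0.05],
                    [0.10, 0.80, 0.15],
                    [0.02, 0.10, 0.95],
                    [0.30, 0.40, 0.35]])

      # Measured absorbances of a fuel at the same wavelengths (invented)
      A = np.array([0.48, 0.37, 0.21, 0.38])

      # Beer-Lambert in matrix form, A = K @ c, solved in the least-squares sense
      c, *_ = np.linalg.lstsq(K, A, rcond=None)
      print(dict(zip(["alkylbenzenes", "alkylnaphthalenes",
                      "alkylanthracenes/phenanthrenes"], c.round(3))))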

  18. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
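
    A toy sketch of the fission-matrix step described above (not the authors' test code): once Monte Carlo tallies have filled a matrix F whose entry F[i, j] is the expected number of fission neutrons produced in region i per fission neutron born in region j, the multiplication factor and the fission-source shape follow from the dominant eigenpair, obtained here by power iteration. The 4-region matrix is made up.

      import numpy as np

      def dominant_eigenpair(F, tol=1e-10, max_iter=10_000):
          """Power iteration for the dominant eigenvalue/eigenvector of F."""
          source = np.ones(F.shape[0]) / F.shape[0]   # flat initial fission source
          k = 1.0
          for _ in range(max_iter):
              new_source = F @ source
              k = new_source.sum()                    # eigenvalue estimate (k-eff)
              new_source /= k
              if np.abs(new_source - source).sum() < tol:
                  break
              source = new_source
          return k, new_source

      # Hypothetical 4-region fission matrix (made-up Monte Carlo tallies)
      F = np.array([[0.50, 0.10, 0.01, 0.00],
                    [0.10, 0.48, 0.09, 0.01],
                    [0.01, 0.09, 0.48, 0.10],
                    [0.00, 0.01, 0.10, 0.50]])
      k_eff, shape = dominant_eigenpair(F)
      print(k_eff, shape)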

  19. Agreement between gastrointestinal panel testing and standard microbiology methods for detecting pathogens in suspected infectious gastroenteritis: Test evaluation and meta-analysis in the absence of a reference standard.

    PubMed

    Freeman, Karoline; Tsertsvadze, Alexander; Taylor-Phillips, Sian; McCarthy, Noel; Mistry, Hema; Manuel, Rohini; Mason, James

    2017-01-01

    Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify bacterial, viral and parasitic pathogens from the stool samples of patients with suspected infectious gastroenteritis presenting in hospital or the community. We undertook a systematic review to compare the accuracy of GPP tests with standard microbiology techniques. Searches in Medline, Embase, Web of Science and the Cochrane library were undertaken from inception to January 2016. Eligible studies compared GPP tests with standard microbiology techniques in patients with suspected gastroenteritis. Quality assessment of included studies used tailored QUADAS-2. In the absence of a reference standard we analysed test performance taking GPP tests and standard microbiology techniques in turn as the benchmark test, using random effects meta-analysis of proportions. No study provided an adequate reference standard with which to compare the test accuracy of GPP and conventional tests. Ten studies informed a meta-analysis of positive and negative agreement. Positive agreement across all pathogens was 0.93 (95% CI 0.90 to 0.96) when conventional methods were the benchmark and 0.68 (95% CI: 0.58 to 0.77) when GPP provided the benchmark. Negative agreement was high in both instances due to the high proportion of negative cases. GPP testing produced a greater number of pathogen-positive findings than conventional testing. It is unclear whether these additional 'positives' are clinically important. GPP testing has the potential to simplify testing and accelerate reporting when compared to conventional microbiology methods. However the impact of GPP testing upon the management, treatment and outcome of patients is poorly understood and further studies are needed to evaluate the health economic impact of GPP testing compared with standard methods. The review protocol is registered with PROSPERO as CRD42016033320.
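
    In the absence of a reference standard the review summarizes agreement rather than accuracy; a minimal sketch of the per-pathogen calculation is shown below, with invented counts.

      def percent_agreement(both_pos, gpp_only_pos, conv_only_pos, both_neg):
          """Positive/negative agreement between GPP and conventional testing.

          Positive agreement here is P(GPP positive | conventional positive),
          i.e. conventional methods are taken as the benchmark; swapping the
          two off-diagonal counts gives the GPP-as-benchmark figure.
          """
          positive = both_pos / (both_pos + conv_only_pos)
          negative = both_neg / (both_neg + gpp_only_pos)
          return positive, negative

      # Hypothetical 2x2 counts for a single pathogen
      print(percent_agreement(both_pos=45, gpp_only_pos=20, conv_only_pos=5,
                              both_neg=430))  # (0.90, ~0.96)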

  20. 40 CFR 63.1352 - Additional test methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From the Portland Cement Manufacturing Industry... determine the rates of emission of HCl from kilns and associated bypass stacks at portland cement... emission of specific organic HAP from raw material dryers, and kilns at Portland cement manufacturing...

  1. Integrating Security into the Curriculum

    DTIC Science & Technology

    1998-12-01

    predicate calculus, discrete math, and finite-state machine theory. In addition to applying standard mathematical foundations to constructing hardware and...models, specifications, and the use of formal methods for verification and covert channel analysis. The means for analysis is based on discrete math, information…

  2. Information Management Systems in the Undergraduate Instrumental Analysis Laboratory.

    ERIC Educational Resources Information Center

    Merrer, Robert J.

    1985-01-01

    Discusses two applications of Laboratory Information Management Systems (LIMS) in the undergraduate laboratory. They are the coulometric titration of thiosulfate with electrogenerated triiodide ion and the atomic absorption determination of calcium using both analytical calibration curve and standard addition methods. (JN)

  3. Manual of praying mantis morphology, nomenclature, and practices (Insecta, Mantodea)

    PubMed Central

    Brannoch, Sydney K.; Wieland, Frank; Rivera, Julio; Klass, Klaus-Dieter; Olivier Béthoux; Svenson, Gavin J.

    2017-01-01

    This study provides a comprehensive review of historical morphological nomenclature used for praying mantis (Mantodea) morphology, which includes citations, original use, and assignment of homology. All referenced structures across historical works correspond to a proposed standard term for use in all subsequent works pertaining to praying mantis morphology and systematics. The new standards are presented with a verbal description in a glossary as well as indicated on illustrations and images. In the vast majority of cases, originally used terms were adopted as the new standard. In addition, historical morphological topographical homology conjectures are considered with discussion on modern interpretations. A new standardized formulation to present foreleg femoral and tibial spines is proposed for clarity based on previous works. In addition, descriptions for methods of collection, curation, genital complex dissection, and labeling are provided to aid in the proper preservation and storage of specimens for longevity and ease of study. Due to the lack of consistent linear morphometric measurement practices in the literature, we have proposed a series of measurements for taxonomic and morphological research. These measurements are presented with figures to provide visual aids with homologous landmarks to ensure compatibility and comparability across the Order. Finally, our proposed method of pinning mantises is presented with a photographical example as well as a video tutorial available at http://mantodearesearch.com. PMID:29200926

  4. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE PAGES

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...

    2016-07-05

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validation of analytical techniques highlighted. The majority of analytical standardization studies to-date has tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  5. Standardization of chemical analytical techniques for pyrolysis bio-oil: history, challenges, and current status of methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.

    Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.

  6. Development Of Methodologies Using PhabrOmeter For Fabric Drape Evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Chengwei

    Evaluation of fabric drape is important for the textile industry, as it reveals the aesthetics and functionality of cloth and apparel. Although many fabric drape measuring methods have been developed over several decades, they have fallen behind the industry's need for fast product development. To meet this requirement, it is necessary to develop an effective and reliable method to evaluate fabric drape. The purpose of the present study is to determine whether the PhabrOmeter, a fabric sensory performance evaluation instrument developed to provide fast and reliable quality testing results, can be applied to fabric drape evaluation. This study also sought to determine the relationship between fabric drape and other fabric attributes. In addition, a series of conventional methods, including AATCC, ASTM and ISO standards, was used to characterize the fabric samples. All the data were compared and analyzed with the linear correlation method. The results indicate that the PhabrOmeter is a reliable and effective instrument for fabric drape evaluation. In addition, factors including fabric structure and testing direction were examined for their impact on fabric drape.

  7. International Council for Standardization in Haematology (ICSH) Recommendations for Laboratory Measurement of Direct Oral Anticoagulants.

    PubMed

    Gosselin, Robert C; Adcock, Dorothy M; Bates, Shannon M; Douxfils, Jonathan; Favaloro, Emmanuel J; Gouin-Thibault, Isabelle; Guillermo, Cecilia; Kawai, Yohko; Lindhoff-Last, Edelgard; Kitchen, Steve

    2018-03-01

    This guidance document was prepared on behalf of the International Council for Standardization in Haematology (ICSH) for providing haemostasis-related guidance documents for clinical laboratories. This inaugural coagulation ICSH document was developed by an ad hoc committee, comprised of international clinical and laboratory direct acting oral anticoagulant (DOAC) experts. The committee developed consensus recommendations for laboratory measurement of DOACs (dabigatran, rivaroxaban, apixaban and edoxaban), which would be germane for laboratories assessing DOAC anticoagulation. This guidance document addresses all phases of laboratory DOAC measurements, including pre-analytical (e.g. preferred time sample collection, preferred sample type, sample stability), analytical (gold standard method, screening and quantifying methods) and post analytical (e.g. reporting units, quality assurance). The committee addressed the use and limitations of screening tests such as prothrombin time, activated partial thromboplastin time as well as viscoelastic measurements of clotting blood and point of care methods. Additionally, the committee provided recommendations for the proper validation or verification of performance of laboratory assays prior to implementation for clinical use, and external quality assurance to provide continuous assessment of testing and reporting method. Schattauer GmbH Stuttgart.

  8. Growth rate measurement in free jet experiments

    NASA Astrophysics Data System (ADS)

    Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent

    2017-07-01

    An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadow-graph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result demonstrates the capability of the present technique to explore the dynamics of viscoelastic liquid jets in the future.
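    The statistical idea described above can be sketched as follows: in the linear regime the standard deviation of an ensemble of jet radius profiles grows exponentially, so a linear fit of its logarithm versus time recovers the growth rate. The profiles below are synthetic, and the growth rate, perturbation amplitude and number of runs are assumed values, not the experiment's.

```python
# Hedged sketch: estimating a capillary growth rate from the standard
# deviation of an ensemble of synthetic jet radius profiles.
import numpy as np

rng = np.random.default_rng(0)
true_growth_rate = 120.0             # 1/s, assumed for the synthetic data
times = np.linspace(0.0, 0.01, 50)   # time (or axial position / jet speed), s

# Emulate many jets: each run has a random initial amplitude that grows
# as eps0 * exp(sigma * t).
n_runs, eps0 = 200, 1e-4
perturb = (eps0 * rng.standard_normal((n_runs, 1))
           * np.exp(true_growth_rate * times))
profiles = 1.0 + perturb             # radius normalized by the unperturbed radius

# The ensemble standard deviation grows exponentially; fit its logarithm.
std_curve = profiles.std(axis=0)
slope, intercept = np.polyfit(times, np.log(std_curve), 1)
print(f"estimated growth rate ~ {slope:.1f} 1/s (true value {true_growth_rate})")
```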

  9. Determination of biogenic amines in chocolate by ion chromatographic separation and pulsed integrated amperometric detection with implemented wave-form at Au disposable electrode.

    PubMed

    Pastore, Paolo; Favaro, Gabriella; Badocco, Denis; Tapparo, Andrea; Cavalli, Silvano; Saccani, Giovanna

    2005-12-09

    A rapid and selective cation exchange chromatographic method coupled to integrated pulsed amperometric detection (PAD) has been developed to quantify biogenic amines in chocolate. The method is based on gradient elution of aqueous methanesulfonic acid with post column addition of strong base to obtain suitable conditions for amperometric detection. A potential waveform able to keep long time performance of the Au disposable electrode was set up. Total analysis time is less than 20 min. Concentration levels of dopamine, serotonin, tyramine, histamine and 2-phenylethylamine were measured, after extraction with perchloric acid from 2 g samples previously defatted twice with petroleum ether. The method was used to determine the analytes in chocolate real matrices and their quantification was made with the standard addition method. Only dopamine, histamine and serotonin were found in the analysed real samples. Repeatabilities of their signals, computed on their amounts in the real samples, were 5% for all of them. Repeatabilities of tyramine and phenethylamine were relative to standard additions to real samples (close to 1 mg/L in the extract) and were 7 and 3%, respectively. Detection limits were computed with the 3s of the baseline noise combined with the calibration plot regression parameters. They were satisfactorily low for all amines: 3 mg/kg for dopamine, 2 mg/kg for tyramine, 1 mg/kg for histamine, 2 mg/kg for serotonin, 3 mg/kg for 2-phenylethylamine.
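    For readers unfamiliar with the standard-addition quantification used above, the sketch below shows the usual calculation: the detector response is regressed against the amount of analyte added, and the original concentration is recovered from the extrapolated x-intercept. All concentrations and signals here are hypothetical, not the paper's data.

```python
# Hedged sketch of standard-addition quantification: measure the extract
# alone and after spiking known amounts, fit a line, and extrapolate.
import numpy as np

added = np.array([0.0, 0.5, 1.0, 2.0])       # mg/L of analyte added (hypothetical)
signal = np.array([1.20, 1.82, 2.41, 3.65])  # detector response, a.u. (hypothetical)

slope, intercept = np.polyfit(added, signal, 1)
c_original = intercept / slope               # magnitude of the x-intercept
print(f"analyte in the extract ~ {c_original:.2f} mg/L")
```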

  10. Achieving Innovation and Affordability Through Standardization of Materials Development and Testing

    NASA Technical Reports Server (NTRS)

    Bray, M. H.; Zook, L. M.; Raley, R. E.; Chapman, C.

    2011-01-01

    The successful expansion of development, innovation, and production within the aeronautics industry during the 20th century was facilitated by collaboration of government agencies with commercial aviation companies. One of the initial products conceived from that collaboration was the ANC-5 Bulletin, first published in 1937, which was intended to standardize the requirements of various government agencies in the design of aircraft structure. Its subsequent revisions and conversion to MIL-HDBK-5 and then MMPDS-01 established, and then expanded, standardized mechanical property design values and other related design information for metallic materials used in aircraft, missiles, and space vehicles, along with guidance on standardization of composition, processing, and analytical methods for presentation and inclusion in the handbook. This standardization enabled an expansion of the technologies to provide efficiency and reliability to consumers. The national space policy shift in priority for NASA, with its emphasis on transferring low-Earth-orbit travel to commercial space providers, highlights an opportunity and a need for the national and global space industries. The same collaboration and standardization that is documented and maintained by the industry within MIL-HDBK-5 (MMPDS-01) and MIL-HDBK-17 (nonmetallic mechanical properties) can also be exploited to standardize thermal performance properties, processing methods, test methods, and analytical methods for use in aircraft and spacecraft design and associated propulsion systems. In addition to the definition and standardization of thermal performance descriptions, standardization of test methods and analysis for extreme environments (high temperature, cryogenics, deep space radiation, etc.) would also be highly valuable to the industry. Many individual programs within the government agencies have been burdened by development costs generated by nonstandard requirements: without industry standardization and acceptance, programs are driven to shoulder the costs of determining design requirements and performance criteria, and then material qualification and certification. A significant investment that the industry could make to reduce individual program development costs and schedules while expanding commercial space flight capabilities would be to standardize material performance properties for high-temperature, cryogenic, and deep space environments for both metallic and nonmetallic materials.

  11. Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard

    PubMed Central

    Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu

    2011-01-01

    Our research is motivated by 2 methodological problems in assessing diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown—imperfect gold standard bias and ordinal scale symptom status. In this paper, we propose a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status has multiple ordered classes. In addition, we extend the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy for diagnostic accuracy and alternative graphs for displaying a visual result. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example on assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155

  12. Comparison of Soil Quality Index Using Three Methods

    PubMed Central

    Mukherjee, Atanu; Lal, Rattan

    2014-01-01

    Assessment of management-induced changes in soil quality is important to sustaining high crop yield. The large diversity of cultivated soils necessitates the identification and development of an appropriate soil quality index (SQI) based on relative soil properties and crop yield. Whereas numerous attempts have been made to estimate SQI for major soils across the world, no standard method has been established; thus, a strong need exists for developing a user-friendly and credible SQI through comparison of the various available methods. Therefore, the objective of this article is to compare three widely used methods to estimate SQI using data collected from 72 soil samples from three on-farm study sites in Ohio. An additional challenge lies in relating crop yield to SQI calculated either depth-wise or for combined soil layers, as a standard methodology is not yet available and this question has received little attention to date. Predominant soils of the study included one organic (Mc) and two mineral (CrB, Ko) soils. The three methods used to estimate SQI were: (i) simple additive SQI (SQI-1), (ii) weighted additive SQI (SQI-2), and (iii) statistically modeled SQI (SQI-3) based on principal component analysis (PCA). The SQI varied between treatments and soil types and ranged between 0 and 0.9 (1 being the maximum SQI). In general, SQIs did not differ significantly between depths under any method, suggesting that soil quality did not differ significantly with depth at the studied sites. Additionally, the data indicate that SQI-3 was most strongly correlated with crop yield, with correlation coefficients between 0.74 and 0.78. All three SQIs were significantly correlated with each other (r = 0.92–0.97) and with crop yield (r = 0.65–0.79). Separate analyses by crop variety revealed lower correlations, indicating that some key aspects of soil quality related to crop response are important requirements for estimating SQI. PMID:25148036
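    A minimal sketch of the two simpler index forms compared above (simple additive SQI-1 and weighted additive SQI-2) is given below; the indicator names, scores and weights are hypothetical, and the PCA-based SQI-3 is not reproduced here.

```python
# Hedged sketch of a simple additive SQI (unweighted mean of scored
# indicators) and a weighted additive SQI. All inputs are hypothetical.
indicators = {          # indicator -> score scaled to 0..1 (hypothetical)
    "organic_carbon": 0.8,
    "bulk_density": 0.6,
    "available_water": 0.7,
    "pH": 0.9,
}
weights = {             # hypothetical weights summing to 1 (e.g., PCA-derived)
    "organic_carbon": 0.40,
    "bulk_density": 0.20,
    "available_water": 0.25,
    "pH": 0.15,
}

sqi_simple = sum(indicators.values()) / len(indicators)
sqi_weighted = sum(weights[k] * score for k, score in indicators.items())
print(f"SQI-1 (simple additive)   ~ {sqi_simple:.2f}")
print(f"SQI-2 (weighted additive) ~ {sqi_weighted:.2f}")
```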

  13. Cost-effectiveness of additional catheter-directed thrombolysis for deep vein thrombosis

    PubMed Central

    ENDEN, T.; RESCH, S.; WHITE, C.; WIK, H. S.; KLØW, N. E.; SANDSET, P. M.

    2013-01-01

    Summary Background Additional treatment with catheter-directed thrombolysis (CDT) has recently been shown to reduce post-thrombotic syndrome (PTS). Objectives To estimate the cost effectiveness of additional CDT compared with standard treatment alone. Methods Using a Markov decision model, we compared the two treatment strategies in patients with a high proximal deep vein thrombosis (DVT) and a low risk of bleeding. The model captured the development of PTS, recurrent venous thromboembolism and treatment-related adverse events within a lifetime horizon and the perspective of a third-party payer. Uncertainty was assessed with one-way and probabilistic sensitivity analyses. Model inputs from the CaVenT study included PTS development, major bleeding from CDT and utilities for post-DVT states including PTS. The remaining clinical inputs were obtained from the literature. Costs obtained from the CaVenT study, hospital accounts and the literature are expressed in US dollars ($); effects in quality adjusted life years (QALY). Results In base-case analyses, additional CDT accumulated 32.31 QALYs compared with 31.68 QALYs after standard treatment alone. Direct medical costs were $64 709 for additional CDT and $51 866 for standard treatment. The incremental cost-effectiveness ratio (ICER) was $20 429/QALY gained. One-way sensitivity analysis showed model sensitivity to the clinical efficacy of both strategies, but the ICER remained < $55 000/QALY over the full range of all parameters. The probability that CDT is cost effective was 82% at a willingness to pay threshold of $50 000/QALY gained. Conclusions Additional CDT is likely to be a cost-effective alternative to the standard treatment for patients with a high proximal DVT and a low risk of bleeding. PMID:23452204
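    The incremental cost-effectiveness ratio reported above follows directly from the quoted base-case costs and QALYs; the short sketch below reproduces that arithmetic (the small difference from the published $20,429/QALY presumably reflects rounding of the published inputs).

```python
# Sketch of the ICER arithmetic using the base-case figures quoted above.
cost_cdt, cost_std = 64_709.0, 51_866.0   # direct medical costs, US$
qaly_cdt, qaly_std = 32.31, 31.68         # quality-adjusted life years

icer = (cost_cdt - cost_std) / (qaly_cdt - qaly_std)
print(f"ICER ~ ${icer:,.0f} per QALY gained")  # ~$20,386/QALY with rounded inputs
```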

  14. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we quantify the uncertainty associated with those estimates, demonstrating that Bayesian inverse methods can attach quantitative uncertainty to estimates of near surface conductivity.
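    A heavily simplified, hedged sketch of the Bayesian idea is given below: a fixed-dimension Metropolis sampler for a single half-space conductivity with an assumed toy forward model, whose posterior spread serves as the uncertainty estimate. The actual study uses trans-dimensional MCMC on layered models; nothing in this sketch is taken from it, and the forward model, noise level and step size are all assumptions.

```python
# Hedged, toy-scale Metropolis sampler: posterior over log10(conductivity)
# for a single half-space, given noisy synthetic "EM" data.
import numpy as np

rng = np.random.default_rng(1)

def forward(log10_sigma):
    # Assumed toy forward model: response linear in log-conductivity.
    return 2.0 + 1.5 * log10_sigma

true_log10_sigma, noise = -1.0, 0.05
data = forward(true_log10_sigma) + noise * rng.standard_normal(20)

def log_likelihood(m):
    return -0.5 * np.sum((data - forward(m)) ** 2) / noise**2

samples, m = [], 0.0                    # start the chain at log10(sigma) = 0
ll = log_likelihood(m)
for _ in range(20_000):
    m_new = m + 0.05 * rng.standard_normal()   # symmetric random-walk proposal
    ll_new = log_likelihood(m_new)
    if np.log(rng.random()) < ll_new - ll:     # flat prior assumed
        m, ll = m_new, ll_new
    samples.append(m)

post = np.array(samples[5_000:])        # discard burn-in
print(f"posterior log10(sigma): {post.mean():.3f} +/- {post.std():.3f}")
```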

  15. Standardization Efforts for Mechanical Testing and Design of Advanced Ceramic Materials and Components

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Jenkins, Michael G.

    2003-01-01

    Advanced aerospace systems occasionally require the use of very brittle materials such as sapphire and ultra-high temperature ceramics. Although great progress has been made in the development of methods and standards for machining, testing and design of components from these materials, additional development and dissemination of standard practices is needed. ASTM Committee C28 on Advanced Ceramics and ISO TC 206 have taken a lead role in the standardization of testing for ceramics, and recent efforts and needs in standards development by Committee C28 will be summarized. In some cases, the engineers involved are unaware of the latest developments, and traditional approaches applicable to other material systems are applied. Two examples of flight hardware failures that might have been prevented via education and standardization will be presented.

  16. Comparison of standard moisture loss-on-drying methods for the determination of moisture content of corn distillers dried grains with solubles.

    PubMed

    Ileleji, Klein E; Garcia, Arnoldo A; Kingsly, Ambrose R P; Clementson, Clairmont L

    2010-01-01

    This study quantified the variability among 14 standard moisture loss-on-drying (gravimetric) methods for determination of the moisture content of corn distillers dried grains with solubles (DDGS). The methods were compared with the Karl Fischer (KF) titration method to determine their percent variation from the KF method. Additionally, the thermo-balance method using a halogen moisture analyzer, which is routinely used in fuel ethanol plants, was included in the methods investigated. Moisture contents by the loss-on-drying methods were significantly different for DDGS samples from three fuel ethanol plants. The percent deviation of the moisture loss-on-drying methods decreased with decreasing drying temperature and, to a lesser extent, drying time. This was attributed to an overestimation of moisture content in DDGS due to the release of volatiles at high temperatures. Our findings indicate that the various moisture loss-on-drying methods will not give identical results; therefore, caution should be exercised when selecting a moisture loss-on-drying method for DDGS.
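    The comparison above rests on two simple quantities, the gravimetric (loss-on-drying) moisture content and its percent deviation from the Karl Fischer reference; a sketch with hypothetical masses and a hypothetical KF value is shown below.

```python
# Hedged sketch: loss-on-drying moisture content and its percent deviation
# from a Karl Fischer reference. All values are hypothetical.
mass_wet = 5.000      # g, sample mass before drying
mass_dry = 4.525      # g, sample mass after oven drying
kf_moisture = 9.1     # % wet basis, Karl Fischer reference value

lod_moisture = 100.0 * (mass_wet - mass_dry) / mass_wet   # % wet basis
deviation = 100.0 * (lod_moisture - kf_moisture) / kf_moisture

print(f"loss-on-drying moisture ~ {lod_moisture:.2f}% wet basis")
print(f"deviation from Karl Fischer ~ {deviation:+.1f}%")
```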

  17. Microwave Energy Increases Fatty Acid Methyl Ester Yield in Human Whole Blood Due to Increased Sphingomyelin Transesterification.

    PubMed

    Metherel, Adam H; Aristizabal Henao, Juan J; Ciobanu, Flaviu; Taha, Ameer Y; Stark, Ken D

    2015-09-01

    Dried blood spots (DBS) by fingertip prick collection for fatty acid profiling are becoming increasingly popular due to ease of collection, minimal invasiveness and their amenability to high-throughput analyses. Herein, we assess a microwave-assisted direct transesterification method for the production of fatty acid methyl esters (FAME) from DBS. Technical replicates of human whole blood were collected and 25-μL aliquots were applied to chromatography strips prior to analysis by a standard 3-h transesterification method or a microwave-assisted direct transesterification method under various power (variable vs constant), time (1-5 min) and reagent (1-10% H2SO4 in methanol) conditions. In addition, the standard method was compared with a 5-min, 30-W microwave method in 1% H2SO4 for FAME yield from whole blood sphingomyelin, and from sphingomyelin standards alone and spiked into whole blood. Microwave-assisted direct transesterification yielded no significant differences in either quantitative (nmol/100 µL) or qualitative (mol%) fatty acid assessments after as little as 1.5- and 1-min reaction times, respectively, using the variable power method and 5% H2SO4 in methanol. However, 30-W power for 5 min increased total FAME yield of the technical replicates by 14%. This increase appears largely due to higher sphingomyelin-derived FAME yield, up to 109 and 399% higher than the standard method when determined from whole blood or pure standards, respectively. In conclusion, microwave-assisted direct transesterification of DBS can be achieved in as little as 1 min, and 5-min reaction times increase total fatty acid yield primarily by significantly improving sphingomyelin-derived fatty acid yield.

  18. Biological/Horticultural Internship Final Report

    NASA Technical Reports Server (NTRS)

    Palmer, Shane R.; Spencer, Lashelle (Editor)

    2017-01-01

    A study was conducted to determine water use requirements of genetically modified (GMO) dwarf plum. GMO plum and unmodified standard plum plants were grown in a controlled environment chamber under varying CO2 concentrations (400 ppm, 1500 ppm, and 5000 ppm). Pepper plants were also grown in the chamber for additional comparison. Leaf stomatal conductance, biomass accumulation, soil moisture and pot weights were measured. Stomatal conductance of GMO plum and pepper plants decreased at sustained elevated CO2 concentrations. The stomatal conductance rates of the standard plums, however, increased at sustained elevated CO2 concentrations. Further data analysis (statistical analysis, biomass, soil moisture and pot weight measurements) is ongoing and required to gain better understanding of the data. An additional proof-of-concept study was undertaken to determine the feasibility of grafting unmodified standard plum scions onto genetically modified rootstocks as a propagation method. Bud grafts were performed on three GMO plum rootstocks: NASA-5, NASA-10, and NASA-11. All of the standard plum buds grafted onto NASA-5 and NASA-10 rootstocks began growing, indicating that this grafting method is highly successful for the formation of a graft union and initial bud growth. However, bud growth during stem elongation was curtailed on several grafts due to a combination of nutritional deficiency and physical damage/obstruction of the grafted tissues. Bud growth on the NASA-5 rootstock occurred sooner than in grafts on the NASA-10 rootstock, while only one bud graft has shown growth on the NASA-11 rootstock thus far. These marked differences in the onset of bud growth suggest genotypic differences between the rootstocks may affect bud graft vigor. Mature standard plum scions grown on the NASA-5 rootstock appeared to retain most or all of the physical characteristics of the standard plum donor plant.

  19. 40 CFR 63.1352 - Additional test methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Standards for Hazardous Air Pollutants From the Portland Cement Manufacturing Industry Monitoring and... rates of emission of HCl from kilns and associated bypass stacks at portland cement manufacturing... specific organic HAP from raw material dryers, kilns and in-line kiln/raw mills at Portland cement...

  20. 40 CFR 63.1352 - Additional test methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Standards for Hazardous Air Pollutants From the Portland Cement Manufacturing Industry Monitoring and... rates of emission of HCl from kilns and associated bypass stacks at portland cement manufacturing... specific organic HAP from raw material dryers, kilns and in-line kiln/raw mills at Portland cement...

  1. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  2. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  3. 40 CFR 412.37 - Additional measures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... STANDARDS CONCENTRATED ANIMAL FEEDING OPERATIONS (CAFO) POINT SOURCE CATEGORY Dairy Cows and Cattle Other... application; (4) Test methods used to sample and analyze manure, litter, process waste water, and soil; (5) Results from manure, litter, process waste water, and soil sampling; (6) Explanation of the basis for...

  4. An improved method for the determination of trace levels of arsenic and antimony in geological materials by automated hydride generation-atomic absorption spectroscopy

    USGS Publications Warehouse

    Crock, J.G.; Lichte, F.E.

    1982-01-01

    An improved, automated method for the determination of arsenic and antimony in geological materials is described. After digestion of the material in sulfuric, nitric, hydrofluoric and perchloric acids, a hydrochloric acid solution of the sample is automatically mixed with reducing agents, acidified with additional hydrochloric acid, and treated with a sodium tetrahydroborate solution to form arsine and stibine. The hydrides are decomposed in a heated quartz tube in the optical path of an atomic absorption spectrometer. The absorbance peak height for arsenic or antimony is measured. Interferences that exist are minimized to the point where most geological materials including coals, soils, coal ashes, rocks and sediments can be analyzed directly without use of standard additions. The relative standard deviation of the digestion and the instrumental procedure is less than 2% at the 50 µg L-1 As or Sb level. The reagent-blank detection limit is 0.2 µg L-1 As or Sb. © 1982.
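    Although the abstract does not give the exact formula, a common way to express a reagent-blank detection limit is three times the standard deviation of repeated blank measurements divided by the calibration slope; the sketch below uses that assumed convention with hypothetical absorbance values and a hypothetical slope.

```python
# Hedged sketch of a 3-sigma reagent-blank detection limit.
import statistics

blank_signals = [0.0021, 0.0018, 0.0024, 0.0019, 0.0022, 0.0020]  # absorbance, hypothetical
slope = 0.0095   # absorbance per (ug/L As), hypothetical calibration slope

detection_limit = 3 * statistics.stdev(blank_signals) / slope
print(f"detection limit ~ {detection_limit:.2f} ug/L")
```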

  5. Proposed Standards for Medical Education Submissions to the Journal of General Internal Medicine

    PubMed Central

    Bowen, Judith L.; Gerrity, Martha S.; Kalet, Adina L.; Kogan, Jennifer R.; Spickard, Anderson; Wayne, Diane B.

    2008-01-01

    To help authors design rigorous studies and prepare clear and informative manuscripts, improve the transparency of editorial decisions, and raise the bar on educational scholarship, the Deputy Editors of the Journal of General Internal Medicine articulate standards for medical education submissions to the Journal. General standards include: (1) quality questions, (2) quality methods to match the questions, (3) insightful interpretation of findings, (4) transparent, unbiased reporting, and (5) attention to human subjects’ protection and ethical research conduct. Additional standards for specific study types are described. We hope these proposed standards will generate discussion that will foster their continued evolution. Electronic supplementary material The online version of this article (doi:10.1007/s11606-008-0676-z) contains supplementary material, which is available to authorized users. PMID:18612716

  6. Reviving common standards in point-count surveys for broad inference across studies

    USGS Publications Warehouse

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.

  7. 21 CFR 170.10 - Food additives in standardized foods.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Food additives in standardized foods. 170.10... (CONTINUED) FOOD ADDITIVES General Provisions § 170.10 Food additives in standardized foods. (a) The... inclusion of a food additive in such definition and standard of identity, the provisions of the regulations...

  8. Usability evaluation of a medication reconciliation tool: Embedding safety probes to assess users' detection of medication discrepancies.

    PubMed

    Russ, Alissa L; Jahn, Michelle A; Patel, Himalaya; Porter, Brian W; Nguyen, Khoa A; Zillich, Alan J; Linsky, Amy; Simon, Steven R

    2018-06-01

    An electronic medication reconciliation tool was previously developed by another research team to aid provider-patient communication for medication reconciliation. To evaluate the usability of this tool, we integrated artificial safety probes into standard usability methods. The objective of this article is to describe this method of using safety probes, which enabled us to evaluate how well the tool supports users' detection of medication discrepancies. We completed a mixed-method usability evaluation in a simulated setting with 30 participants: 20 healthcare professionals (HCPs) and 10 patients. We used factual scenarios but embedded three artificial safety probes: (1) a missing medication (i.e., omission); (2) an extraneous medication (i.e., commission); and (3) an inaccurate dose (i.e., dose discrepancy). We measured users' detection of each probe to estimate the probability that a HCP or patient would detect these discrepancies. Additionally, we recorded participants' detection of naturally occurring discrepancies. Each safety probe was detected by ≤50% of HCPs. Patients' detection rates were generally higher. Estimates indicate that a HCP and patient, together, would detect 44.8% of these medication discrepancies. Additionally, HCPs and patients detected 25 and 45 naturally-occurring discrepancies, respectively. Overall, detection of medication discrepancies was low. Findings indicate that more advanced interface designs are warranted. Future research is needed on how technologies can be designed to better aid HCPs' and patients' detection of medication discrepancies. This is one of the first studies to evaluate the usability of a collaborative medication reconciliation tool and assess HCPs' and patients' detection of medication discrepancies. Results demonstrate that embedded safety probes can enhance standard usability methods by measuring additional, clinically-focused usability outcomes. The novel safety probes we used may serve as an initial, standard set for future medication reconciliation research. More prevalent use of safety probes could strengthen usability research for a variety of health information technologies. Published by Elsevier Inc.

  9. The impact of three discharge coding methods on the accuracy of diagnostic coding and hospital reimbursement for inpatient medical care.

    PubMed

    Tsopra, Rosy; Peckham, Daniel; Beirne, Paul; Rodger, Kirsty; Callister, Matthew; White, Helen; Jais, Jean-Philippe; Ghosh, Dipansu; Whitaker, Paul; Clifton, Ian J; Wyatt, Jeremy C

    2018-07-01

    Coding of diagnoses is important for patient care, hospital management and research. However coding accuracy is often poor and may reflect methods of coding. This study investigates the impact of three alternative coding methods on the inaccuracy of diagnosis codes and hospital reimbursement. Comparisons of coding inaccuracy were made between a list of coded diagnoses obtained by a coder using (i) the discharge summary alone, (ii) case notes and discharge summary, and (iii) discharge summary with the addition of medical input. For each method, inaccuracy was determined for the primary, secondary diagnoses, Healthcare Resource Group (HRG) and estimated hospital reimbursement. These data were then compared with a gold standard derived by a consultant and coder. 107 consecutive patient discharges were analysed. Inaccuracy of diagnosis codes was highest when a coder used the discharge summary alone, and decreased significantly when the coder used the case notes (70% vs 58% respectively, p < 0.0001) or coded from the discharge summary with medical support (70% vs 60% respectively, p < 0.0001). When compared with the gold standard, the percentage of incorrect HRGs was 42% for discharge summary alone, 31% for coding with case notes, and 35% for coding with medical support. The three coding methods resulted in an annual estimated loss of hospital remuneration of between £1.8 M and £16.5 M. The accuracy of diagnosis codes and percentage of correct HRGs improved when coders used either case notes or medical support in addition to the discharge summary. Further emphasis needs to be placed on improving the standard of information recorded in discharge summaries. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples.

    PubMed

    Li, Na; Li, Xiu-Ying; Zou, Zhe-Xiang; Lin, Li-Rong; Li, Yao-Qun

    2011-07-07

    In the present work, a baseline-correction method based on peak-to-derivative baseline measurement was proposed for the elimination of complex matrix interference that was mainly caused by unknown components and/or background in the analysis of derivative spectra. This novel method was applicable particularly when the matrix interfering components showed a broad spectral band, which was common in practical analysis. The derivative baseline was established by connecting two crossing points of the spectral curves obtained with a standard addition method (SAM). The applicability and reliability of the proposed method was demonstrated through both theoretical simulation and practical application. Firstly, Gaussian bands were used to simulate 'interfering' and 'analyte' bands to investigate the effect of different parameters of interfering band on the derivative baseline. This simulation analysis verified that the accuracy of the proposed method was remarkably better than other conventional methods such as peak-to-zero, tangent, and peak-to-peak measurements. Then the above proposed baseline-correction method was applied to the determination of benzo(a)pyrene (BaP) in vegetable oil samples by second-derivative synchronous fluorescence spectroscopy. The satisfactory results were obtained by using this new method to analyze a certified reference material (coconut oil, BCR(®)-458) with a relative error of -3.2% from the certified BaP concentration. Potentially, the proposed method can be applied to various types of derivative spectra in different fields such as UV-visible absorption spectroscopy, fluorescence spectroscopy and infrared spectroscopy.

  11. [Determination of twenty one elements in lithium hexafluorophosphate by ICP-AES].

    PubMed

    Fang, Yi-wen; Hao, Zhi-feng; Song, Yi-bing; Sun, Chang-yong; Yu, Jian; Yu, Lin

    2005-02-01

    One gram (+/- 0.0001 g) of lithium hexafluorophosphate was weighed exactly under dry atmosphere and was dissolved with an adequate amount of dimethyl carbonate (DMC). After the sample solution was pretreated with a series of methods, Be, Cu, Pb, Ca, Zr, Co, Mg, V, Ti, Mo, Ni, Mn, Sr, Zn, K, Al, Ba, Cd, Fe, Cr and Na were determined by ICP-AES. The results show that the recoveries of standard addition were 93.3%-102.1%, and the relative standard deviations (n = 11) were 0%-3.56%. The method is efficient, accurate and easy to operate. It has been applied to the determination of lithium hexafluorophosphate products with satisfactory results.
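    The standard-addition recovery figures quoted above can be illustrated with a simple spike-recovery calculation; the element, concentrations and spike level below are hypothetical, not values from the study.

```python
# Hedged sketch of a spike-recovery check: a known addition is made to the
# dissolved sample and recovery is the fraction of that addition found.
measured_unspiked = 2.40   # ug/g Fe found in the sample solution (hypothetical)
spike_added = 5.00         # ug/g Fe added as the standard addition (hypothetical)
measured_spiked = 7.25     # ug/g Fe found after spiking (hypothetical)

recovery = 100.0 * (measured_spiked - measured_unspiked) / spike_added
print(f"spike recovery ~ {recovery:.1f}%")   # ~97% for these invented values
```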

  12. Optimized, Fast-Throughput UHPLC-DAD Based Method for Carotenoid Quantification in Spinach, Serum, Chylomicrons, and Feces.

    PubMed

    Eriksen, Jane N; Madsen, Pia L; Dragsted, Lars O; Arrigoni, Eva

    2017-02-01

    An improved UHPLC-DAD-based method was developed and validated for quantification of major carotenoids present in spinach, serum, chylomicrons, and feces. Separation was achieved with gradient elution within 12.5 min for six dietary carotenoids and the internal standard, echinenone. The proposed method provides, for all standard components, resolution > 1.1, linearity covering the target range (R > 0.99), LOQ < 0.035 mg/L, and intraday and interday RSDs < 2 and 10%, respectively. Suitability of the method was tested on biological matrices. Method precision (RSD%) for carotenoid quantification in serum, chylomicrons, and feces was below 10% for intra- and interday analysis, except for lycopene. Method accuracy was consistent with mean recoveries ranging from 78.8 to 96.9% and from 57.2 to 96.9% for all carotenoids, except for lycopene, in serum and feces, respectively. Additionally, an interlaboratory validation study on spinach at two institutions showed no significant differences in lutein or β-carotene content, when evaluated on four occasions.

  13. 21 CFR 70.10 - Color additives in standardized foods and new drugs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Color additives in standardized foods and new... SERVICES GENERAL COLOR ADDITIVES General Provisions § 70.10 Color additives in standardized foods and new... proposes the inclusion of a color additive in the standardized food, the provisions of the regulations in...

  14. 21 CFR 70.10 - Color additives in standardized foods and new drugs.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 1 2013-04-01 2013-04-01 false Color additives in standardized foods and new... SERVICES GENERAL COLOR ADDITIVES General Provisions § 70.10 Color additives in standardized foods and new... proposes the inclusion of a color additive in the standardized food, the provisions of the regulations in...

  15. 21 CFR 70.10 - Color additives in standardized foods and new drugs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 1 2011-04-01 2011-04-01 false Color additives in standardized foods and new... SERVICES GENERAL COLOR ADDITIVES General Provisions § 70.10 Color additives in standardized foods and new... proposes the inclusion of a color additive in the standardized food, the provisions of the regulations in...

  16. 21 CFR 70.10 - Color additives in standardized foods and new drugs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 1 2012-04-01 2012-04-01 false Color additives in standardized foods and new... SERVICES GENERAL COLOR ADDITIVES General Provisions § 70.10 Color additives in standardized foods and new... proposes the inclusion of a color additive in the standardized food, the provisions of the regulations in...

  17. 21 CFR 70.10 - Color additives in standardized foods and new drugs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 1 2014-04-01 2014-04-01 false Color additives in standardized foods and new... SERVICES GENERAL COLOR ADDITIVES General Provisions § 70.10 Color additives in standardized foods and new... proposes the inclusion of a color additive in the standardized food, the provisions of the regulations in...

  18. A new acoustic method to determine the setting time of calcium sulfate bone cement mixed with antibiotics.

    PubMed

    Cooper, J J; Brayford, M J; Laycock, P A

    2014-08-01

    A new method is described which can be used to determine the setting times of small amounts of high value bone cements. The test was developed to measure how the setting times of a commercially available synthetic calcium sulfate cement (Stimulan, Biocomposites, UK) in two forms (standard and Rapid Cure) varies with the addition of clinically relevant antibiotics. The importance of being able to accurately quantify these setting times is discussed. The results demonstrate that this new method, which is shown to correlate to the Vicat needle, gives reliable and repeatable data with additional benefits expressed in the article. The majority of antibiotics mixed were found to retard the setting reaction of the calcium sulfate cement.

  19. Matrix effect and recovery terminology issues in regulated drug bioanalysis.

    PubMed

    Huang, Yong; Shi, Robert; Gee, Winnie; Bonderud, Richard

    2012-02-01

    Understanding the meaning of the terms used in the bioanalytical method validation guidance is essential for practitioners to implement best practice. However, terms that have several meanings or that have different interpretations exist within bioanalysis, and this may give rise to differing practices. In this perspective we discuss an important but often confusing term - 'matrix effect (ME)' - in regulated drug bioanalysis. The ME can be interpreted as either the ionization change or the measurement bias of the method caused by the nonanalyte matrix. The ME definition dilemma makes its evaluation challenging. The matrix factor is currently used as a standard method for evaluation of ionization changes caused by the matrix in MS-based methods. Standard additions to pre-extraction samples have been suggested to evaluate the overall effects of a matrix from different sources on the analytical system, because this approach covers ionization variation and extraction recovery variation. We also provide our personal views on the term 'recovery'.
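    The matrix factor mentioned above is commonly computed as the analyte response in post-extraction spiked matrix divided by the response in neat solution, optionally normalized by the internal standard; the sketch below uses that common convention with hypothetical peak areas.

```python
# Hedged sketch of a matrix factor (MF) calculation; all peak areas are
# hypothetical and the convention shown is the commonly used one, not
# necessarily the authors' exact procedure.
analyte_in_matrix, analyte_in_solvent = 9.2e5, 1.05e6   # analyte peak areas
is_in_matrix, is_in_solvent = 4.6e5, 5.1e5              # internal standard peak areas

mf_analyte = analyte_in_matrix / analyte_in_solvent
mf_is = is_in_matrix / is_in_solvent
is_normalized_mf = mf_analyte / mf_is

print(f"matrix factor (analyte)     ~ {mf_analyte:.2f}")
print(f"IS-normalized matrix factor ~ {is_normalized_mf:.2f}")
```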

  20. Results from the (U-Th)/He dating systems in Japan Atomic Energy Agency

    NASA Astrophysics Data System (ADS)

    Yamada, K.; Hanamuro, T.; Tagami, T.; Yamada, R.; Umeda, K.

    2007-12-01

    The Japan Atomic Energy Agency (JAEA) has jointly set up a (U-Th)/He dating laboratory in cooperation with Kyoto University and the National Research Institute for Earth Science and Disaster Prevention. We use the MM5400 rare gas mass spectrometer and the SPQ9000 ICP quadrupole mass spectrometer, belonging to JAEA, and built a new vacuum heater using an infrared laser to extract helium. In the preparation of the ICP solution, zircon is decomposed with HF after alkali fusion using an XRF bead sampler and LiBO3. Helium is quantified using the sensitivity method; uranium and thorium are quantified using the standard addition method. Only uranium-238 and thorium-232 need to be quantified as parent isotopes to date the samples, because secular equilibrium is expected to be established and samarium is not a constituent of the samples. At the present stage, we calibrate our systems by dating standards such as zircon from the Fish Canyon Tuff and apatite from Durango, which are international age standards, as well as apatite and zircon from the Tanzawa Tonalite Complex, dated in Yamada's PhD thesis, as a working standard. We report the results and detailed views of the dating systems.

  1. Simultaneous determination of ascorbic acid and caffeine in commercial soft drinks using reversed-phase ultraperformance liquid chromatography.

    PubMed

    Turak, Fatma; Güzel, Remziye; Dinç, Erdal

    2017-04-01

    A new reversed-phase ultraperformance liquid chromatography method with a photodiode array detector was developed for the quantification of ascorbic acid (AA) and caffeine (CAF) in 11 different commercial drinks consisting of one energy drink and 10 ice tea drinks. Separation of the analyzed AA and CAF with an internal standard, caffeic acid, was performed on a Waters BEH C18 column (100 mm × 2.1 mm i.d., 1.7 μm), using a mobile phase consisting of acetonitrile and 0.2 M H3PO4 (11:89, v/v) with a flow rate of 0.25 mL/min and an injection volume of 1.0 μL. Calibration graphs for AA and CAF were computed from the peak area ratio of AA/internal standard and CAF/internal standard detected at 244.0 nm and 273.6 nm, respectively. The developed reversed-phase ultraperformance liquid chromatography method was validated by analyzing standard addition samples. The proposed reversed-phase ultraperformance liquid chromatography method gave successful results for the quantitative analysis of commercial drinks containing AA and CAF substances. Copyright © 2016. Published by Elsevier B.V.
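    The internal-standard calibration described above can be sketched as follows: the calibration curve is built from analyte-to-internal-standard peak-area ratios, and the same ratio measured in the sample is interpolated. The concentrations, areas and sample ratio below are hypothetical, and no dilution factors are applied.

```python
# Hedged sketch of internal-standard (IS) calibration by peak-area ratio.
import numpy as np

conc = np.array([2.0, 5.0, 10.0, 20.0])            # mg/L calibration standards (hypothetical)
area_ratio = np.array([0.21, 0.52, 1.05, 2.08])    # analyte area / IS area (hypothetical)

slope, intercept = np.polyfit(conc, area_ratio, 1)

sample_ratio = 0.84                                 # ratio measured in a drink (hypothetical)
sample_conc = (sample_ratio - intercept) / slope
print(f"analyte ~ {sample_conc:.1f} mg/L (before any dilution correction)")
```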

  2. A protocol for lifetime energy and environmental impact assessment of building insulation materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Som S., E-mail: shresthass@ornl.gov; Biswas, Kaushik; Desjarlais, Andre O.

    This article describes a proposed protocol that is intended to provide a comprehensive list of factors to be considered in evaluating the direct and indirect environmental impacts of building insulation materials, as well as detailed descriptions of standardized calculation methodologies to determine those impacts. The energy and environmental impacts of insulation materials can generally be divided into two categories: (1) direct impact due to the embodied energy of the insulation materials and other factors and (2) indirect impact, i.e., environmental impacts avoided as a result of reduced building energy use due to the addition of insulation. Standards and product category rules exist, which provide guidelines about the life cycle assessment (LCA) of materials, including building insulation products. However, critical reviews have suggested that these standards fail to provide complete guidance to LCA studies and suffer from ambiguities regarding the determination of the environmental impacts of building insulation and other products. The focus of the assessment protocol described here is to identify all factors that contribute to the total energy and environmental impacts of different building insulation products and, more importantly, provide standardized determination methods that will allow comparison of different insulation material types. Further, the intent is not to replace current LCA standards but to provide a well-defined, easy-to-use comparison method for insulation materials using existing LCA guidelines. - Highlights: • We proposed a protocol to evaluate the environmental impacts of insulation materials. • The protocol considers all life cycle stages of an insulation material. • Both the direct environmental impacts and the indirect impacts are defined. • Standardized calculation methods for the ‘avoided operational energy’ are defined. • Standardized calculation methods for the ‘avoided environmental impact’ are defined.

  3. Applicability of the DPPH assay for evaluating the antioxidant capacity of food additives - inter-laboratory evaluation study -.

    PubMed

    Shimamura, Tomoko; Sumikura, Yoshihiro; Yamazaki, Takeshi; Tada, Atsuko; Kashiwagi, Takehiro; Ishikawa, Hiroya; Matsui, Toshiro; Sugimoto, Naoki; Akiyama, Hiroshi; Ukeda, Hiroyuki

    2014-01-01

    An inter-laboratory evaluation study was conducted in order to evaluate the antioxidant capacity of food additives by using a 1,1-diphenyl-2-picrylhydrazyl (DPPH) assay. Four antioxidants used as existing food additives (i.e., tea extract, grape seed extract, enju extract, and d-α-tocopherol) and 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox) were used as analytical samples, and 14 laboratories participated in this study. The repeatability relative standard deviation (RSD(r)) of the IC50 of Trolox, four antioxidants, and the Trolox equivalent antioxidant capacity (TEAC) were 1.8-2.2%, 2.2-2.9%, and 2.1-2.5%, respectively. Thus, the proposed DPPH assay showed good performance within the same laboratory. The reproducibility relative standard deviation (RSD(R)) of IC50 of Trolox, four antioxidants, and TEAC were 4.0-7.9%, 6.0-11%, and 3.7-9.3%, respectively. The RSD(R)/RSD(r) values of TEAC were lower than, or nearly equal to, those of IC50 of the four antioxidants, suggesting that the use of TEAC was effective for reducing the variance among the laboratories. These results showed that the proposed DPPH assay could be used as a standard method to evaluate the antioxidant capacity of food additives.
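    One common way to express the quantities used above is to estimate IC50 by interpolating the inhibition curve and to report TEAC as the ratio IC50(Trolox)/IC50(sample); the sketch below uses that assumed convention, and all data points are hypothetical rather than the study's results.

```python
# Hedged sketch: IC50 by linear interpolation of a DPPH inhibition curve,
# and a Trolox-equivalent antioxidant capacity (TEAC) as an IC50 ratio.
import numpy as np

def ic50(concs, inhibition_pct):
    # Concentration at 50% inhibition; assumes inhibition_pct is increasing.
    return float(np.interp(50.0, inhibition_pct, concs))

trolox_c = [5, 10, 20, 40]       # umol/L Trolox (hypothetical)
trolox_i = [18, 34, 62, 90]      # % DPPH inhibition (hypothetical)
sample_c = [10, 25, 50, 100]     # ug/mL extract (hypothetical)
sample_i = [15, 33, 58, 85]      # % DPPH inhibition (hypothetical)

ic50_trolox = ic50(trolox_c, trolox_i)
ic50_sample = ic50(sample_c, sample_i)
teac = ic50_trolox / ic50_sample

print(f"IC50(Trolox) ~ {ic50_trolox:.1f} umol/L, IC50(sample) ~ {ic50_sample:.1f} ug/mL")
print(f"TEAC ~ {teac:.3f} (Trolox IC50 / sample IC50)")
```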

  4. Comparison of macro-gravimetric and micro-colorimetric lipid determination methods.

    PubMed

    Inouye, Laura S; Lotufo, Guiherme R

    2006-10-15

    In order to validate a method for lipid analysis of small tissue samples, the standard macro-gravimetric method of Bligh-Dyer (1959) [E.G. Bligh, W.J. Dyer, Can. J. Biochem. Physiol. 37 (1959) 911] and a modification of the micro-colorimetric assay developed by Van Handel (1985) [E. Van Handel, J. Am. Mosq. Control Assoc. 1 (1985) 302] were compared. No significant differences were observed for wet tissues of two species of fish. However, limited analysis of wet tissue of the amphipod, Leptocheirus plumulosus, indicated that the Bligh-Dyer gravimetric method generated higher lipid values, most likely due to the inclusion of non-lipid materials. Additionally, significant differences between the methods were observed with dry tissues, with the micro-colorimetric method consistently reporting calculated lipid values greater than those reported by the gravimetric method. This was most likely due to poor extraction of dry tissue in the standard Bligh-Dyer method, as no significant differences were found when analyzing a single composite extract. The data presented support the conclusion that the micro-colorimetric method described in this paper is accurate, rapid, and minimizes time and solvent use.

  5. Modified Gas Chromatographic Method to Determine Monoacylglycerol and Diacylglycerol Contents in Edible Fats and Oils.

    PubMed

    Satou, Chiemi; Goto, Hirofumi; Yamazaki, Yuya; Saitou, Katsuyoshi; Matsumoto, Shoji; Takahashi, Ou; Miyazaki, Yosuke; Ikuta, Keiichi; Yajima, Yosuke

    2017-06-01

    Monoacylglycerol (MAG) and diacylglycerol (DAG) are minor components of edible fats and oils, and they relate to the quality of these foods. The AOCS official method Cd 11b-91 has been used to determine MAG and DAG contents in fats and oils. There are, however, difficulties in the determination of MAG and DAG using this analytical procedure. Therefore, we improved this method by modifying the trimethylsilyl derivatization procedure and replacing the internal standard (IS) material. In our modified method, TMS-HT (mixture of hexamethyldisilazane and trimethylchlorosilane) was used for derivatization of MAG and DAG, which was followed by liquid-liquid extraction with water and n-hexane solution containing the IS, tricaprin. Using the modified method, we demonstrated superior repeatability in comparison with that of the AOCS method by reducing procedural difficulties. The relative standard deviation of distearin peak areas was 1.8% or 2.9% in the modified method, while it was 5.6% in the AOCS method. In addition, capillary columns, such as DB-1ht and DB-5ht could be used in this method.

  6. Developing Performance Cost Index Targets for ASHRAE Standard 90.1 Appendix G – Performance Rating Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.

    2016-02-16

    Appendix G, the Performance Rating Method in ASHRAE Standard 90.1, has been updated to make two significant changes for the 2016 edition, to be published in October of 2016. First, it allows Appendix G to be used as a third path for compliance with the standard in addition to rating beyond code building performance. This prevents modelers from having to develop separate building models for code compliance and beyond code programs. Using this new version of Appendix G to show compliance with the 2016 edition of the standard, the proposed building design needs to have a performance cost index (PCI) less than targets shown in a new table based on building type and climate zone. The second change is that the baseline design is now fixed at a stable level of performance set approximately equal to the 2004 code. Rather than changing the stringency of the baseline with each subsequent edition of the standard, compliance with new editions will simply require a reduced PCI (a PCI of zero is a net-zero building). Using this approach, buildings of any era can be rated using the same method. The intent is that any building energy code or beyond code program can use this methodology and merely set the appropriate PCI target for their needs. This report discusses the process used to set performance criteria for compliance with ASHRAE Standard 90.1-2016 and suggests a method for demonstrating compliance with other codes and beyond code programs.
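    As a simplified, hedged illustration of the compliance test described above, the sketch below treats the performance cost index as the ratio of the proposed design's annual energy cost to the Appendix G baseline's, compared against a tabulated target; the actual standard derives PCI and its targets from additional cost components, and all numbers here are hypothetical.

```python
# Simplified, hedged sketch of a PCI compliance check; not the standard's
# exact formulation, and all inputs are hypothetical.
proposed_energy_cost = 63_000.0    # $/yr, proposed design
baseline_energy_cost = 100_000.0   # $/yr, Appendix G baseline
pci_target = 0.70                  # hypothetical stand-in for the tabulated target

pci = proposed_energy_cost / baseline_energy_cost
print(f"PCI = {pci:.2f}; complies: {pci <= pci_target}")
# A PCI of 0 would correspond to a net-zero-energy building, as noted above.
```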

  7. Developing Performance Cost Index Targets for ASHRAE Standard 90.1 Appendix G – Performance Rating Method - Rev.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosenberg, Michael I.; Hart, Philip R.

    2016-03-01

    Appendix G, the Performance Rating Method in ASHRAE Standard 90.1, has been updated to make two significant changes for the 2016 edition, to be published in October of 2016. First, it allows Appendix G to be used as a third path for compliance with the standard in addition to rating beyond code building performance. This prevents modelers from having to develop separate building models for code compliance and beyond code programs. Using this new version of Appendix G to show compliance with the 2016 edition of the standard, the proposed building design needs to have a performance cost index (PCI) less than targets shown in a new table based on building type and climate zone. The second change is that the baseline design is now fixed at a stable level of performance set approximately equal to the 2004 code. Rather than changing the stringency of the baseline with each subsequent edition of the standard, compliance with new editions will simply require a reduced PCI (a PCI of zero is a net-zero building). Using this approach, buildings of any era can be rated using the same method. The intent is that any building energy code or beyond code program can use this methodology and merely set the appropriate PCI target for their needs. This report discusses the process used to set performance criteria for compliance with ASHRAE Standard 90.1-2016 and suggests a method for demonstrating compliance with other codes and beyond code programs.

  8. Primer Stepper Motor Nomenclature, Definition, Performance and Recommended Test Methods

    NASA Technical Reports Server (NTRS)

    Starin, Scott; Shea, Cutter

    2014-01-01

    There has been an unfortunate lack of standardization of the terms and components of stepper motor performance, requirements definition, application of torque margin and implementation of test methods. This paper will address these inconsistencies and discuss in detail the implications of performance parameters, effects of load inertia, control electronics, operational resonances and recommended test methods. Additionally, this paper will recommend parameters for defining and specifying stepper motor actuators. A useful description of terms as well as consolidated equations and recommended requirements is included.

  9. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  10. Computing tools for implementing standards for single-case designs.

    PubMed

    Chen, Li-Ting; Peng, Chao-Ying Joanne; Chen, Ming-E

    2015-11-01

    In the single-case design (SCD) literature, five sets of standards have been formulated and distinguished: design standards, assessment standards, analysis standards, reporting standards, and research synthesis standards. This article reviews computing tools that can assist researchers and practitioners in meeting the analysis standards recommended by the What Works Clearinghouse Procedures and Standards Handbook (the WWC standards). These tools consist of specialized web-based calculators or downloadable software for SCD data, and algorithms or programs written in Excel, SAS procedures, SPSS commands/Macros, or the R programming language. We aligned these tools with the WWC standards and evaluated them for accuracy and treatment of missing data, using two published data sets. All tools were found to be accurate. When missing data were present, most tools either gave an error message or conducted analysis based on the available data. Only one program used a single imputation method. This article concludes with suggestions for an inclusive computing tool or environment, additional research on the treatment of missing data, and reasonable and flexible interpretations of the WWC standards. © The Author(s) 2015.

  11. On the introduction of a measurement standard for high-purity germanium crystals to be used in radiation detectors

    NASA Astrophysics Data System (ADS)

    Darken, L.

    1994-02-01

    The IEEE and ANSI have recently approved "Standard Test Procedures for High-Purity Germanium Crystals for Radiation Detectors" proposed by the IEEE/NPSS/Nuclear Instruments and Detectors Committee. The standard addresses three aspects of the characterisation of high-purity germanium: (i) the determination by the van der Pauw method of the net carrier concentration and type; (ii) the measurement by capacitance transient techniques of the concentration of trapping levels; (iii) the description of the crystallographic properties revealed by preferential etching. In addition to describing the contents of this standard, the purpose of this work is also to place the issues faced in the context of professional consensus: points of agreement, points of disagreement, and subjects poorly understood.

  12. Sample sizes needed for specified margins of relative error in the estimates of the repeatability and reproducibility standard deviations.

    PubMed

    McClure, Foster D; Lee, Jung K

    2005-01-01

    Sample size formulas are developed to estimate the repeatability and reproducibility standard deviations (sr and sR) such that the actual errors in sr and sR relative to their respective true values, σr and σR, are at predefined levels. The statistical consequences associated with the AOAC INTERNATIONAL required sample size to validate an analytical method are discussed. In addition, formulas to estimate the uncertainties of sr and sR were derived and are provided as supporting documentation. Formula for the Number of Replicates Required for a Specified Margin of Relative Error in the Estimate of the Repeatability Standard Deviation.
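
    As an illustration of the kind of calculation such formulas support (a generic chi-square approach, not necessarily the authors' exact formula), the number of replicates n needed so that the sample standard deviation s falls within a relative margin d of the true value σ with probability p follows from the fact that (n-1)s²/σ² has a chi-square distribution with n-1 degrees of freedom; a minimal sketch in Python:

        from scipy.stats import chi2

        def replicates_for_sd_margin(d, p=0.95, n_max=10_000):
            """Smallest n with P(|s/sigma - 1| <= d) >= p, using the fact that
            (n-1)*s^2/sigma^2 ~ chi-square with n-1 degrees of freedom."""
            for n in range(2, n_max):
                df = n - 1
                prob = chi2.cdf(df * (1 + d) ** 2, df) - chi2.cdf(df * (1 - d) ** 2, df)
                if prob >= p:
                    return n
            raise ValueError("n_max too small")

        print(replicates_for_sd_margin(0.20))  # replicates for a 20% relative margin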

  13. 78 FR 18372 - TUV Rheinland of North America, Inc.; Expansion of Recognition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... covers the addition of a new site and the use one additional test standard. OSHA's current scope of..., and one additional test standard. In response to OSHA's requests for clarification, TUV amended its... NRTL Program staff determined that the additional test standard is an ``appropriate test standard...

  14. Greenhouse Gas Analysis by GC/MS

    NASA Astrophysics Data System (ADS)

    Bock, E. M.; Easton, Z. M.; Macek, P.

    2015-12-01

    Current methods to analyze greenhouse gases rely on dedicated, complex, multiple-column, multiple-detector gas chromatographs. A novel method was developed in partnership with Shimadzu for simultaneous quantification of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) in environmental gas samples. Gas bulbs were used to make custom standard mixtures by injecting small volumes of pure analyte into the nitrogen-filled bulb. Resulting calibration curves were validated using a certified gas standard. The use of GC/MS systems to perform this analysis has the potential to move the analysis of greenhouse gases from expensive, custom GC systems to standard single-quadrupole GC/MS systems that are available in most laboratories and have a wide variety of applications beyond greenhouse gas analysis. Additionally, use of mass spectrometry can provide confirmation of identity of target analytes, and will assist in the identification of unknown peaks should they be present in the chromatogram.
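
    The way such custom standards are prepared lends itself to a short worked example: the mixing ratio produced by injecting a small volume of pure analyte into an inert-gas-filled bulb, followed by a linear calibration fit. The bulb volume, injection volumes and peak areas below are invented for illustration.

        import numpy as np

        def mixing_ratio_ppm(injected_ml, bulb_volume_ml):
            """Approximate mole fraction (ppm) of pure gas injected into an
            inert-filled bulb, ignoring the small pressure change it causes."""
            return injected_ml / bulb_volume_ml * 1e6

        vols_ml = np.array([0.05, 0.10, 0.20, 0.40])      # hypothetical injections
        ppm = mixing_ratio_ppm(vols_ml, bulb_volume_ml=1000.0)
        areas = np.array([1.1e4, 2.0e4, 4.2e4, 8.1e4])    # hypothetical peak areas
        slope, intercept = np.polyfit(ppm, areas, 1)       # linear calibration curve
        print((3.0e4 - intercept) / slope)                 # ppm estimate for an unknown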

  15. Food increases the bioavailability of isotretinoin.

    PubMed

    Colburn, W A; Gibson, D M; Wiens, R E; Hanigan, J J

    1983-01-01

    Twenty healthy male subjects received 80 mg (2 X 40 mg SEG capsules) oral isotretinoin separated by two-week washout periods in an open randomized crossover design. Isotretinoin was administered during a complete fast, 1 hour after a standard breakfast, with a standard breakfast, or 1 hour before a standard breakfast. Blood samples were obtained at specific times over a 72-hour period. Isotretinoin blood concentrations were determined by a specific HPLC method. The relative bioavailability (AUC) of isotretinoin was found to be approximately 1.5 to 2 times greater when the dose was administered 1 hour before, concomitantly with, or 1 hour after a meal than when it was given during a complete fast. In addition, because the Cmax value is lower when the dose is administered with food rather than 1 hour after a meal, coadministration of isotretinoin with food may be the best method of administration.

  16. Operationally Responsive Space Standard Bus Battery Thermal Balance Testing and Heat Dissipation Analysis

    NASA Technical Reports Server (NTRS)

    Marley, Mike

    2008-01-01

    The focus of this paper will be on the thermal balance testing for the Operationally Responsive Space Standard Bus Battery. The Standard Bus thermal design required that the battery be isolated from the bus itself. This required the battery to have its own thermal control, including heaters and a radiator surface. Since the battery was not ready for testing during the overall bus thermal balance testing, a separate test was conducted to verify the thermal design for the battery. This paper will discuss in detail the test setup, test procedure, and results from this test. Additionally, this paper will consider the methods used to determine the heat dissipation of the battery during charge and discharge. The heat dissipation of lithium-ion batteries is relatively poorly characterized and hard to quantify. The methods used during the test and the post-test analysis to estimate the heat dissipation of the battery will be discussed.

  17. Calibration of laser vibrometers at frequencies up to 100 kHz and higher

    NASA Astrophysics Data System (ADS)

    Silva Pineda, Guillermo; von Martens, Hans-Jürgen; Rojas, Sergio; Ruiz, Arturo; Muñiz, Lorenzo

    2008-06-01

    Manufacturers and users of laser vibrometers exploit the wide frequency and intensity ranges of laser techniques, ranging over many decades (e.g., from 0.1 Hz to 100 MHz). Traceability to primary measurement standards is demanded over the specified measurement ranges of any measurement instrumentation. As the primary documentary standard ISO 16063-11 for the calibration of vibration transducers is restricted to 10 kHz, a new international standard for the calibration of laser vibrometers, ISO 16063-41, is under development. The current stage of the 2nd Committee Draft (CD) of the ISO standard specifies calibration methods for frequencies from 0.4 Hz to 50 kHz, which does not meet the demand for providing traceability at higher frequencies. New investigations will be presented which demonstrate the applicability of the laser interferometer methods specified in ISO 16063-11 and in the 2nd CD also at higher frequencies of 100 kHz and beyond. The three standard methods were simultaneously used for vibration displacement and acceleration measurements up to 100 kHz, and a fourth high-accuracy method has been developed and used. Their results in displacement and acceleration measurements deviated by less than 1 % from each other at vibration displacement amplitudes on the order of 100 nm. In addition to the three interferometer methods specified in ISO 16063-11 and 16063-15, and in the 2nd Committee Draft of 16063-41 as well, measurement results will be presented. Examples of laser vibrometer calibrations will be demonstrated. Further investigations are aimed
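
    For sinusoidal vibration, the displacement and acceleration amplitudes quoted here are linked by a = (2πf)²·s, so a 100 nm displacement at 100 kHz corresponds to roughly 3.9 × 10⁴ m/s²; a quick check of that relation (illustrative only):

        import math

        def acceleration_amplitude(displacement_m, freq_hz):
            """Acceleration amplitude of a sinusoidal vibration: a = (2*pi*f)^2 * s."""
            return (2 * math.pi * freq_hz) ** 2 * displacement_m

        print(acceleration_amplitude(100e-9, 100e3))  # ~3.95e4 m/s^2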

  18. Modified cupric reducing antioxidant capacity (CUPRAC) assay for measuring the antioxidant capacities of thiol-containing proteins in admixture with polyphenols.

    PubMed

    Cekiç, Sema Demirci; Başkan, Kevser Sözgen; Tütem, Esma; Apak, Reşat

    2009-07-15

    Proteins are not considered as true antioxidants but are known to protect antioxidants from oxidation in various antioxidant activity assays. This study aims to investigate the contribution of proteins, especially thiol-containing proteins, to the observed overall antioxidant capacity measured by known methods. To determine the antioxidant properties of thiol-containing proteins, the CUPRAC method of antioxidant assay using the oxidizing reagent Cu(II)-neocuproine, previously used for simultaneous analysis of cystine and cysteine, was adopted. While the CUPRAC method is capable of determining all antioxidant compounds including thiols in complex sample matrices, the Ellman method of thiol quantitation basically does not respond to other antioxidants. The antioxidant quantities in the selected samples were assayed with the ABTS and FRAP methods as well as with the CUPRAC method. In all applied methods, the dilutions were made with a standard pH 8 buffer used in the Ellman method, substituting the Na2EDTA component of the buffer with sodium citrate. On the other hand, the standard CUPRAC protocol was modified by substituting the pH 7 ammonium acetate buffer (at 1 M concentration) with 8 M urea buffer adjusted to pH 7 by neutralizing with 6 M HCl. Urea helps to partly solubilize and denature proteins so that their buried thiols can be oxidized more easily. All methods used in the estimation of antioxidant properties of proteins (i.e., CUPRAC, Ellman, ABTS, and FRAP) were first standardized with a simple thiol compound, cysteine, by constructing calibration curves. The molar absorptivities of these methods for cysteine were: εCUPRAC = 7.71 × 10³, εEllman = 1.37 × 10⁴, εABTS = 2.06 × 10⁴, and εFRAP = 2.98 × 10³ L mol⁻¹ cm⁻¹. These methods were then applied to various samples containing thiols, such as glutathione (reduced form: GSH), egg white, whey proteins, and gelatin. Additionally, known quantities of selected antioxidants were added to these samples to show the additivity of responses.
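
    Given the molar absorptivities listed above and the Beer-Lambert law A = ε·c·l, an absorbance reading can be converted into a cysteine-equivalent concentration; a minimal sketch assuming a 1 cm path length and an invented absorbance value:

        # Molar absorptivities for cysteine (L mol^-1 cm^-1), from the record above
        EPSILON = {"CUPRAC": 7.71e3, "Ellman": 1.37e4, "ABTS": 2.06e4, "FRAP": 2.98e3}

        def concentration_mol_per_l(absorbance, method, path_cm=1.0):
            """Beer-Lambert law: c = A / (epsilon * l)."""
            return absorbance / (EPSILON[method] * path_cm)

        print(concentration_mol_per_l(0.385, "CUPRAC"))  # ~5.0e-5 mol/L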

  19. Natural wrapping paper from banana (Musa paradisiaca Linn) peel waste with additive essential oils

    NASA Astrophysics Data System (ADS)

    Widiastuti Agustina, E. S.; Elfi Susanti, V. H.

    2018-05-01

    The research aimed to produce natural wrapping paper from banana (Musa paradisiaca Linn.) peel waste with additive essential oils. The method used in this research was alkalization. The delignification process was carried out with 4% NaOH at a temperature of 100°C for 1.5 hours. Essential oils were added as preservative and aroma agents, namely cinnamon oil, lemon oil, clove oil and lime oil, each at 2% and 3%. Chemical and physical properties of the produced papers were tested, including water content (dry-oven method, SNI ISO 287:2010), pH (SNI ISO 6588-1:2010), grammage (SNI ISO 536:2010) and brightness (SNI ISO 2470:2010). The test results for each paper were compared with commercial wrapping paper. The results show that the natural paper from banana peel waste with additive essential oils meets the standard of ISO 6519:2016 on Basic Paper for Laminated Plastic Wrapping Paper for the pH and water content parameters. The papers produced also meet the standard of ISO 8218:2015 on Food Paper and Cardboard for the grammage parameter (high-grade grammage), except the paper with 2% lemon oil. The paper closest to the characteristics of commercial wrapping paper is the one with 2% cinnamon oil additive, with a pH of 6.95, a water content of 7.14%, a grammage of 347.6 gram/m2 and a brightness level of 24.68%.

  20. Comparative Evaluations of Randomly Selected Four Point-of-Care Glucometer Devices in Addis Ababa, Ethiopia.

    PubMed

    Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla

    2018-05-01

    Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of the blood sugar level, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results performed with four randomly selected glucometers on diabetic and control subjects versus standard wet chemistry (hexokinase) methods in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive relationship (>80%) with the reference method (hexokinase). On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, the linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the measurements from the four selected PoCG devices correlated poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
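
    The ISO 15197:2013 accuracy criterion is commonly summarised as requiring at least 95% of meter readings to fall within ±15 mg/dl of the reference below 100 mg/dl and within ±15% at or above 100 mg/dl; a sketch of that check (thresholds taken from that common summary rather than from the record above, and the paired readings are invented):

        def within_iso_15197_2013(meter, reference):
            """True if one reading meets the commonly cited 2013 criterion."""
            if reference < 100:                      # mg/dl
                return abs(meter - reference) <= 15
            return abs(meter - reference) <= 0.15 * reference

        def passes(meter_values, reference_values):
            ok = [within_iso_15197_2013(m, r) for m, r in zip(meter_values, reference_values)]
            return sum(ok) / len(ok) >= 0.95

        print(passes([92, 110, 250], [90, 118, 240]))  # invented paired readings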

  1. Developing product quality standards for wheelchairs used in less-resourced environments

    PubMed Central

    McCambridge, Matt; Reese, Norman; Schoendorfer, Don; Wunderlich, Eric; Rushman, Chris; Mahilo, Dave

    2017-01-01

    Background Premature failures of wheelchairs in less-resourced environments (LREs) may be because of shortcomings in product regulation and quality standards. The standards published by the International Organization for Standardization (ISO) specify wheelchair tests for durability, safety and performance, but their applicability to products used in the rugged conditions of LREs is unclear. Because of this, wheelchair-related guidelines published by the World Health Organization recommended developing more rigorous durability tests for wheelchairs. Objectives This study was performed to identify the additional tests needed for LREs. Methods First, a literature review of the development of ISO test standards, wheelchair standards testing studies and wheelchair evaluations in LREs was performed. Second, expert advice from members of the Standards Working Group of the International Society of Wheelchair Professionals (ISWP) was compiled and reviewed. Results A total of 35 articles were included in the literature review. Participation from LREs was not observed in the ISO standards development. As per wheelchair testing study evidence, wheelchair models delivered in LREs did not meet the minimum standards requirement. Multiple part failures and repairs were observed with reviewed field evaluation studies. ISWP experts noted that several testing factors responsible for premature failures with wheelchair parts are not included in the standards and accordingly provided advice for additional test development. Conclusion The study findings indicate the need to develop a wide range of tests, with specific tests for measuring corrosion resistance of the entire wheelchair, rolling resistance of castors and rear wheels, and durability of whole wheelchair and castor assemblies. PMID:28936410

  2. The effect of 'standard drink' labelling on the ability of drinkers to pour a 'standard drink'.

    PubMed

    Stockwell, T; Blaze-Temple, D; Walker, C

    1991-03-01

    Australia's National Health Policy on Alcohol has recommended that beverage containers be labelled so that alcohol content is 'readily understandable by the public'. Health promotion to increase the responsible use of alcohol now relies extensively on the concept of a standard drink--usually defined as 10 g of ethyl alcohol. Numerous difficulties confront a drinker who wishes to apply the standard drink system to monitor alcohol intake. This report describes a series of experimental tests of the proposal that these difficulties are minimised if alcohol containers have their alcohol content indicated in terms of standard drinks in addition to the usual percentage alcohol by volume method. Subjects were drinkers recruited from a Perth shopping mall and were tested only on beverage types they had consumed within the previous week. They were required to pour what they judged to be a single standard drink from a 750 ml bottle of either wine or beer. Beer drinkers achieved greater accuracy in this task when the bottles had standard drink labels, even when the glass size and beverage strength were varied. Wine drinkers had equal difficulty with this task whether standard drink or percentage labels were used. The addition of a 'ladder' up the side of a wine bottle with graduations in standard drinks would be necessary for wine drinkers to achieve a high level of accuracy. We conclude that labelling drink containers with their alcohol content in terms of standard drinks would better equip all drinkers to follow the advice of health educators.
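
    Taking a standard drink as 10 g of ethanol and ethanol density as roughly 0.789 g/ml, the number of standard drinks in a container follows directly from its volume and percentage alcohol by volume; a quick worked sketch:

        ETHANOL_DENSITY_G_PER_ML = 0.789  # approximate

        def standard_drinks(volume_ml, abv_percent, grams_per_drink=10.0):
            """Standard drinks in a container: ethanol mass divided by 10 g."""
            ethanol_g = volume_ml * (abv_percent / 100.0) * ETHANOL_DENSITY_G_PER_ML
            return ethanol_g / grams_per_drink

        print(standard_drinks(750, 12.5))  # a 750 ml bottle of 12.5% wine: ~7.4 drinks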

  3. Determination of nitrogen monoxide in high purity nitrogen gas with an atmospheric pressure ionization mass spectrometer

    NASA Technical Reports Server (NTRS)

    Kato, K.

    1985-01-01

    An atmospheric pressure ionization mass spectrometric (API-MS) method was studied for the determination of residual NO in high purity N2 gas. The API-MS is very sensitive to NO, but the presence of O2 interferes with the NO measurement. Nitrogen gas in cylinders as sample gas was mixed with NO standard gas and/or O2 standard gas, and then introduced into the API-MS. The calibration curves of NO and O2 are linear in the region of 0-2 ppm, but the slopes changed with every cylinder. The effect of O2 on the NO+ peak was additive and proportional to the O2 concentration in the range of 0-0.5 ppm. The increase in NO+ intensity due to O2 was (0.07-0.13)% per 1 ppm of O2. Determination of NO and O2 was carried out by the standard addition method to eliminate the influence of the variation of the slopes. The interference due to O2 was estimated from the product of the O2 concentration and the ratio of slope A to slope B, where slope A is the change in the NO+ intensity with the O2 concentration and slope B is the change in the NO+ intensity with the NO concentration.
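
    A minimal sketch of the two calculations described (extrapolating a standard-addition line to obtain the apparent NO concentration, then subtracting the O2 contribution estimated from the slope ratio); all values are illustrative, and slope A and slope B are taken here as the NO+ responses to O2 and to NO respectively:

        import numpy as np

        def standard_addition_concentration(added_conc, signal):
            """Extrapolate a standard-addition line: C0 = intercept / slope."""
            slope, intercept = np.polyfit(added_conc, signal, 1)
            return intercept / slope

        # Hypothetical NO standard-addition data (ppm added vs NO+ intensity)
        added = np.array([0.0, 0.5, 1.0, 1.5])
        signal = np.array([120.0, 220.0, 320.0, 420.0])
        apparent_no = standard_addition_concentration(added, signal)

        # Correct for O2: interference ~ [O2] * (slope A / slope B), illustrative values
        o2_ppm, slope_a, slope_b = 0.3, 20.0, 200.0
        corrected_no = apparent_no - o2_ppm * slope_a / slope_b
        print(apparent_no, corrected_no)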

  4. Simultaneous determination of Fluticasone propionate and Azelastine hydrochloride in the presence of pharmaceutical dosage form additives

    NASA Astrophysics Data System (ADS)

    Merey, Hanan A.; El-Mosallamy, Sally S.; Hassan, Nagiba Y.; El-Zeany, Badr A.

    2016-05-01

    Fluticasone propionate (FLU) and azelastine hydrochloride (AZE) are co-formulated with phenylethyl alcohol (PEA) and benzalkonium chloride (BENZ) (as preservatives) in a pharmaceutical dosage form for the treatment of seasonal allergies. Different spectrophotometric methods were used for the simultaneous determination of the cited drugs in the dosage form. A direct spectrophotometric method was used for the determination of AZE, while derivative of double divisor of ratio spectra (DD-RS), ratio subtraction coupled with ratio difference (RS-RD) and mean centering of the ratio spectra (MCR) were used for the determination of FLU. The linearity of the proposed methods was investigated in the ranges of 5.00-40.00 and 5.00-80.00 μg/mL for FLU and AZE, respectively. The specificity of the developed methods was investigated by analyzing laboratory-prepared mixtures containing different ratios of the cited drugs in addition to PEA, and their pharmaceutical dosage form. The validity of the proposed methods was assessed using the standard addition technique. The obtained results were statistically compared with those obtained by the official method for FLU or the reported method for AZE, respectively, showing no significant difference with respect to accuracy and precision at p = 0.05.

  5. Thermal Modeling of the Injection of Standard and Thermally Insulated Cored Wire

    NASA Astrophysics Data System (ADS)

    Castro-Cedeno, E.-I.; Jardy, A.; Carré, A.; Gerardin, S.; Bellot, J. P.

    2017-12-01

    Cored wire injection is a widespread method used to perform alloying additions during ferrous and non-ferrous liquid metal treatment. The wire consists of a metal casing that is tightly wrapped around a core of material; the casing delays the release of the material as the wire is immersed into the melt. This method of addition presents advantages such as higher repeatability and yield of cored material with respect to bulk additions. Experimental and numerical work has been performed by several authors on the subject of alloy additions, spherical and cylindrical geometries being mainly considered. Surprisingly this has not been the case for cored wire, where the reported experimental or numerical studies are scarce. This work presents a 1-D finite volume numerical model aimed for the simulation of the thermal phenomena which occurs when the wire is injected into a liquid metal bath. It is currently being used as a design tool for the conception of new types of cored wire. A parametric study on the effect of injection velocity and steel casing thickness for an Al cored wire immersed into a steel melt at 1863 K (1590 °C) is presented. The standard single casing wire is further compared against a wire with multiple casings. Numerical results show that over a certain range of injection velocities, the core contents' release is delayed in the multiple casing when compared to a single casing wire.

  6. Determination of vitamin A (retinol) in infant formula and adult nutritionals by liquid chromatography: First Action 2011.15.

    PubMed

    DeVries, Jonathan W; Silvera, Karlene R; McSherry, Elliot; Dowell, Dawn

    2012-01-01

    During the "Standards Development and International Harmonization: AOAC INTERNATIONAL Mid-Year Meeting," held on June 29, 2011, an Expert Review Panel (ERP) reviewed the method for the "Determination of Vitamins A (Retinol) and E (alpha-Tocopherol) in Foods by Liquid Chromatography: Collaborative Study," published by Jonathan W. DeVries and Karlene R. Silvera in J. AOAC Int. in 2002. After evaluation of the original validation data, an ERP agreed in June 2011 that the method meets standard method performance requirements (SMPRs) for vitamin A, as articulated by the Stakeholder Panel on Infant Formula and Adult Nutritionals. The ERP granted the method First Action status, applicable to determining vitamin A in ready-to-eat infant and adult nutritional formula. In an effort to achieve Final Action status, it was recommended that additional information be generated for different types of infant and adult nutritional formula matrixes at varied concentration levels as indicated in the vitamin A (retinol) SMPR. Existing AOAC LC methods are suited for specific vitamin A analytical applications. The original method differs from existing methods in that it can be used to assay samples in all nine sectors of the food matrix. One sector of the food matrix was powdered infant formula and gave support for the First Action approval for vitamin A in infant and adult nutritional formula. In this method, standards and test samples are saponified in basic ethanol-water solution, neutralized, and diluted, converting fats to fatty acids and retinol esters to retinol. Retinol is quantitated by an LC method, using UV detection at 313 or 328 nm for retinol. Vitamin concentration is calculated by comparison of the peak heights or peak areas of retinol in test samples with those of standards.

  7. 40 CFR 86.1837-01 - Rounding of emission measurements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... subpart to one additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-93a, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with... credits generated or needed as follows: manufacturers must round to the same number of significant figures...

  8. 40 CFR 86.1837-01 - Rounding of emission measurements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... subpart to one additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-93a, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with... credits generated or needed as follows: manufacturers must round to the same number of significant figures...

  9. 40 CFR 86.1837-01 - Rounding of emission measurements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... subpart to one additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-93a, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with... credits generated or needed as follows: manufacturers must round to the same number of significant figures...

  10. 40 CFR 86.1837-01 - Rounding of emission measurements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... subpart to one additional significant figure, in accordance with the Rounding-Off Method specified in ASTM E29-93a, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with... credits generated or needed as follows: manufacturers must round to the same number of significant figures...
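
    The ASTM E29 rounding-off method referenced in these records is, in essence, round-half-to-even applied at the required number of significant figures; a small illustrative helper (not the regulatory text):

        from decimal import Decimal, ROUND_HALF_EVEN

        def round_sig_figs(value, sig_figs):
            """Round to a number of significant figures using round-half-to-even,
            the convention behind the ASTM E29 rounding-off method."""
            d = Decimal(str(value))
            if d == 0:
                return 0.0
            quantum = Decimal(1).scaleb(d.adjusted() - sig_figs + 1)
            return float(d.quantize(quantum, rounding=ROUND_HALF_EVEN))

        print(round_sig_figs(0.12345, 3), round_sig_figs(0.125, 2))  # 0.123 0.12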

  11. IMPROVING THE FLUOROMETRIC AMMONIUM METHOD BY ACCOUNTING FOR MATRIX EFFECTS WITH STANDARD ADDITIONS. (R829426E02)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  12. Assessment Challenges for Business Education in Changing Times

    ERIC Educational Resources Information Center

    Hazari, Sunil; Gaytan, Jorge; North, Alexa

    2008-01-01

    In addition to the difficult task of identifying teaching methods that ensure student learning, the American educational system is facing significant challenges. Schools are struggling to maintain standards for high-quality teaching while trying to address the learning needs of students with Limited English Proficiency (LEP). The same struggle is…

  13. Contribution of calcium oxalate to soil-exchangeable calcium

    USGS Publications Warehouse

    Dauer, Jenny M.; Perakis, Steven S.

    2013-01-01

    Acid deposition and repeated biomass harvest have decreased soil calcium (Ca) availability in many temperate forests worldwide, yet existing methods for assessing available soil Ca do not fully characterize soil Ca forms. To account for discrepancies in ecosystem Ca budgets, it has been hypothesized that the highly insoluble biomineral Ca oxalate might represent an additional soil Ca pool that is not detected in standard measures of soil-exchangeable Ca. We asked whether several standard method extractants for soil-exchangeable Ca could also access Ca held in Ca oxalate crystals using spike recovery tests in both pure solutions and soil extractions. In solutions of the extractants ammonium chloride, ammonium acetate, and barium chloride, we observed 2% to 104% dissolution of Ca oxalate crystals, with dissolution increasing with both solution molarity and ionic potential of cation extractant. In spike recovery tests using a low-Ca soil, we estimate that 1 M ammonium acetate extraction dissolved sufficient Ca oxalate to contribute an additional 52% to standard measurements of soil-exchangeable Ca. However, in a high-Ca soil, the amount of Ca oxalate spike that would dissolve in 1 M ammonium acetate extraction was difficult to detect against the large pool of exchangeable Ca. We conclude that Ca oxalate can contribute substantially to standard estimates of soil-exchangeable Ca in acid forest soils with low soil-exchangeable Ca. Consequently, measures of exchangeable Ca are unlikely to fully resolve discrepancies in ecosystem Ca mass balance unless the contribution of Ca oxalate to exchangeable Ca is also assessed.
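
    The spike-recovery logic used here can be written down compactly; the values below are hypothetical, not measurements from the study:

        def spike_recovery_percent(measured_spiked, measured_unspiked, spike_added):
            """Fraction of an added Ca-oxalate spike recovered by an extraction,
            expressed as a percentage of the amount added (all in the same units)."""
            return (measured_spiked - measured_unspiked) / spike_added * 100.0

        print(spike_recovery_percent(measured_spiked=3.1, measured_unspiked=2.0,
                                     spike_added=2.1))  # ~52% of the spike recovered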

  14. Multiple transfer standard for calibration and characterization of test setups for LED lamps and luminaires in industry

    NASA Astrophysics Data System (ADS)

    Sperling, A.; Meyer, M.; Pendsa, S.; Jordan, W.; Revtova, E.; Poikonen, T.; Renoux, D.; Blattner, P.

    2018-04-01

    Proper characterization of test setups used in industry for testing and traceable measurement of lighting devices by the substitution method is an important task. According to new standards for testing LED lamps, luminaires and modules, uncertainty budgets are requested because in many cases the properties of the device under test differ from those of the transfer standard used, which may cause significant errors, for example if an LED-based lamp is tested or calibrated in an integrating sphere that was calibrated with a tungsten lamp. This paper introduces a multiple transfer standard, which was designed not only to transfer a single calibration value (e.g. luminous flux) but also to characterize test setups used for LED measurements, with additional calibrated output features provided to enable the application of the new standards.

  15. Hybrid Differential Dynamic Programming with Stochastic Search

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob

    2016-01-01

    Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
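
    A generic monotonic basin hopping loop (perturb the incumbent solution, run a local optimizer, keep the result only if it improves) can be sketched as follows; the objective and the local optimizer here are simple stand-ins, not the HDDP inner loop itself:

        import numpy as np
        from scipy.optimize import minimize

        def monotonic_basin_hopping(objective, x0, hops=50, step=0.5, seed=0):
            """Random perturbation + local optimization, accepting a hop only
            when the locally optimized cost improves (hence 'monotonic')."""
            rng = np.random.default_rng(seed)
            best = minimize(objective, x0)   # stand-in for the gradient-based inner solver
            for _ in range(hops):
                trial = minimize(objective, best.x + rng.normal(0.0, step, size=len(x0)))
                if trial.fun < best.fun:     # only improvements are kept
                    best = trial
            return best

        rosenbrock = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
        print(monotonic_basin_hopping(rosenbrock, np.array([-1.2, 1.0])).x)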

  16. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    PubMed

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally performed better than omitting trials. When estimating missing means, a formula using the median, lower quartile and upper quartile performed best in preserving the precision of the meta-analysis findings, although in some scenarios, omitting trials gave superior results. Methods based on summary statistics (minimum, maximum, lower quartile, upper quartile, median) reported in the literature facilitate more comprehensive inclusion of randomised controlled trials with missing mean or variability summary statistics within meta-analyses.
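
    Two of the simple approximations mentioned above (a standard deviation from the range, a mean from the median and quartiles) can be written down directly; these are generic textbook forms, not necessarily the exact variants assessed in the review:

        def sd_from_range(minimum, maximum):
            """Crude SD approximation based on the range: (max - min) / 4."""
            return (maximum - minimum) / 4.0

        def mean_from_quartiles(q1, median, q3):
            """Mean approximation from the quartiles: (q1 + median + q3) / 3."""
            return (q1 + median + q3) / 3.0

        print(sd_from_range(2.0, 18.0), mean_from_quartiles(5.0, 8.0, 14.0))  # 4.0 9.0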

  17. A Nanocoaxial-Based Electrochemical Sensor for the Detection of Cholera Toxin

    NASA Astrophysics Data System (ADS)

    Archibald, Michelle M.; Rizal, Binod; Connolly, Timothy; Burns, Michael J.; Naughton, Michael J.; Chiles, Thomas C.

    2015-03-01

    Sensitive, real-time detection of biomarkers is of critical importance for rapid and accurate diagnosis of disease for point of care (POC) technologies. Current methods do not allow for POC applications due to several limitations, including sophisticated instrumentation, high reagent consumption, limited multiplexing capability, and cost. Here, we report a nanocoaxial-based electrochemical sensor for the detection of bacterial toxins using an electrochemical enzyme-linked immunosorbent assay (ELISA) and differential pulse voltammetry (DPV). Proof-of-concept was demonstrated for the detection of cholera toxin (CT). The linear dynamic range of detection was 10 ng/ml - 1 μg/ml, and the limit of detection (LOD) was found to be 2 ng/ml. This level of sensitivity is comparable to the standard optical ELISA used widely in clinical applications. In addition to matching the detection profile of the standard ELISA, the nanocoaxial array provides a simple electrochemical readout and a miniaturized platform with multiplexing capabilities for the simultaneous detection of multiple biomarkers, giving the nanocoax a desirable advantage over the standard method towards POC applications. This work was supported by the National Institutes of Health (National Cancer Institute award No. CA137681 and National Institute of Allergy and Infectious Diseases Award No. AI100216).

  18. Comparative study of methods to measure the density of cementitious powders

    PubMed Central

    Helsel, Michelle A.; Bentz, Dale

    2016-01-01

    The accurate measurement of the density of hydraulic cement has an essential role in the determination of concrete mixture proportions. As more supplementary cementitious materials (SCM), such as fly ash and slag, or cement replacement materials, such as limestone and calcium carbonate, are used in blended cements, knowledge of the density of each powder or of the blended cement would allow a more accurate calculation of the proportions of a concrete mixture by volume instead of by mass. The current ASTM standard for measuring cement density is the “Test Method for Density of Hydraulic Cements” (ASTM C188-14), which utilizes a liquid displacement method to measure the volume of the cement. This paper will examine advantageous modifications of the current ASTM test, substituting alcohols for kerosene. In addition, a gas (helium) pycnometry method is evaluated as a possible alternative to the current standard. The described techniques will be compared to determine the most precise and reproducible method for measuring the density of hydraulic cements and other powders. PMID:27099404

  19. Comparative study of methods to measure the density of cementitious powders.

    PubMed

    Helsel, Michelle A; Ferraris, Chiara F; Bentz, Dale

    2016-11-01

    The accurate measurement of the density of hydraulic cement has an essential role in the determination of concrete mixture proportions. As more supplementary cementitious materials (SCM), such as fly ash and slag, or cement replacement materials, such as limestone and calcium carbonate, are used in blended cements, knowledge of the density of each powder or of the blended cement would allow a more accurate calculation of the proportions of a concrete mixture by volume instead of by mass. The current ASTM standard for measuring cement density is the "Test Method for Density of Hydraulic Cements" (ASTM C188-14), which utilizes a liquid displacement method to measure the volume of the cement. This paper will examine advantageous modifications of the current ASTM test, substituting alcohols for kerosene. In addition, a gas (helium) pycnometry method is evaluated as a possible alternative to the current standard. The described techniques will be compared to determine the most precise and reproducible method for measuring the density of hydraulic cements and other powders.
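
    The liquid-displacement calculation behind ASTM C188 reduces to dividing the powder mass by the volume of liquid it displaces; a minimal sketch with made-up flask readings:

        def density_by_displacement(mass_g, volume_initial_ml, volume_final_ml):
            """Density from a Le Chatelier flask style measurement:
            mass of powder divided by the volume of liquid displaced."""
            return mass_g / (volume_final_ml - volume_initial_ml)

        print(density_by_displacement(64.0, 0.5, 21.0))  # ~3.12 g/cm^3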

  20. Evaluation of methods to reduce background using the Python-based ELISA_QC program.

    PubMed

    Webster, Rose P; Cohen, Cinder F; Saeed, Fatima O; Wetzel, Hanna N; Ball, William J; Kirley, Terence L; Norman, Andrew B

    2018-05-01

    Almost all immunological approaches [immunohistochemistry, enzyme-linked immunosorbent assay (ELISA), Western blot] that are used to quantitate specific proteins have had to address high backgrounds due to non-specific reactivity. We report here for the first time a quantitative comparison of methods for reduction of the background of commercial biotinylated antibodies using the Python-based ELISA_QC program. This is demonstrated using a recombinant humanized anti-cocaine monoclonal antibody. Several approaches, such as adjustment of the incubation time and the concentration of blocking agent, as well as the dilution of secondary antibodies, have been explored to address this issue. In this report, systematic comparisons of two different methods, contrasted with other more traditional methods to address this problem, are provided. Addition of heparin (HP) at 1 μg/ml to the wash buffer prior to addition of the secondary biotinylated antibody reduced the elevated background absorbance values (from a mean of 0.313 ± 0.015 to 0.137 ± 0.002). A novel immunodepletion (ID) method also reduced the background (from a mean of 0.331 ± 0.010 to 0.146 ± 0.013). Overall, the ID method generated results at each concentration of the ELISA standard curve that were more similar to those obtained with the standard lot 1 than did the HP method, as analyzed by the Python-based ELISA_QC program. We conclude that the ID method, while more laborious, provides the best solution to resolve the high background seen with specific lots of biotinylated secondary antibody. Copyright © 2018. Published by Elsevier B.V.

  1. Method for determination of levoglucosan in snow and ice at trace concentration levels using ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry.

    PubMed

    You, Chao; Song, Lili; Xu, Baiqing; Gao, Shaopeng

    2016-02-01

    A method is developed for the determination of levoglucosan at trace concentration levels in the complex matrices of snow and ice samples. The method uses an injection mixture comprising acetonitrile and melted sample at a ratio of 50/50 (v/v). Samples are analyzed using an ultra-performance liquid chromatography system combined with triple quadrupole tandem mass spectrometry (UPLC-MS/MS). Levoglucosan is separated on a BEH Amide column (2.1 mm × 100 mm, 1.7 µm), and a Z-spray electrospray ionization source is used for levoglucosan ionization. A polyether sulfone filter is selected for filtering insoluble particles because of its limited impact on levoglucosan. The matrix effect is evaluated by using a standard addition method. During the method validation, the limit of detection (LOD), linearity, recovery, repeatability and reproducibility were evaluated using the standard addition method. The LOD of this method is 0.11 ng mL⁻¹. Recoveries vary from 91.2% at 0.82 ng mL⁻¹ to 99.3% at 4.14 ng mL⁻¹. Repeatability ranges from 17.9% at a concentration of 0.82 ng mL⁻¹ to 2.8% at 4.14 ng mL⁻¹. Reproducibility ranges from 15.1% at a concentration of 0.82 ng mL⁻¹ to 1.9% at 4.14 ng mL⁻¹. This method can be implemented using less than 0.50 mL of sample volume in low and middle latitude regions such as the Tibetan Plateau. Copyright © 2015 Elsevier B.V. All rights reserved.
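
    The recovery and relative standard deviation figures quoted are straightforward to reproduce from replicate spiked measurements; a generic sketch with invented numbers:

        import statistics

        def recovery_percent(measured_mean, spiked_conc):
            return measured_mean / spiked_conc * 100.0

        def rsd_percent(values):
            """Relative standard deviation (repeatability or reproducibility), in %."""
            return statistics.stdev(values) / statistics.mean(values) * 100.0

        replicates = [0.76, 0.88, 0.71, 0.95, 0.80]   # invented readings of a 0.82 ng/mL spike
        print(recovery_percent(statistics.mean(replicates), 0.82), rsd_percent(replicates))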

  2. Evaluation of a new automated instrument for pretransfusion testing.

    PubMed

    Morelati, F; Revelli, N; Maffei, L M; Poretti, M; Santoro, C; Parravicini, A; Rebulla, P; Cole, R; Sirchia, G

    1998-10-01

    A number of automated devices for pretransfusion testing have recently become available. This study evaluated a fully automated device based on column agglutination technology (AutoVue System, Ortho, Raritan, NJ). Some 6747 tests including forward and reverse ABO group, Rh type and phenotype, antibody screen, autocontrol, and crossmatch were performed on random samples from 1069 blood donors, 2063 patients, and 98 newborns and cord blood. Also tested were samples from 168 immunized patients and 53 donors expressing weak or variant A and D antigens. Test results and technician times required for their performance were compared with those obtained by standard methods (manual column agglutination technology, slide, semiautomatic handler). No erroneous conclusions were found in regard to the 5028 ABO group and Rh type or phenotype determinations carried out with the device. The device rejected 1.53 percent of tests for sample inadequacy. Of the remaining 18 tests with discrepant results found with the device and not confirmed with the standard methods, 6 gave such results because of mixed-field reactions, 10 gave negative results with A2 RBCs in reverse ABO grouping, and 2 gave very weak positive reactions in antibody screening and crossmatching. In the samples from immunized patients, the device missed one weak anti-K, whereas standard methods missed five weak antibodies. In addition, 48, 34, and 31 of the 53 weak or variant antigens were detected by the device, the slide method, and the semiautomated handler, respectively. Technician time with the standard methods was 1.6 to 7 times higher than that with the device. The technical performance of the device compared favorably with that of standard methods, with a number of advantages, including in particular the saving of technician time. Sample inadequacy was the most common cause of discrepancy, which suggests that standardization of sample collection can further improve the performance of the device.

  3. How to create a very-low-cost, very-low-power, credit-card-sized and real-time-ready datalogger

    NASA Astrophysics Data System (ADS)

    Bès de Berc, M.; Grunberg, M.; Engels, F.

    2015-03-01

    In order to improve an existing network, a field seismologist may have to add some extra sensors to a remote station. However, additional ADCs (analogue-to-digital converters) are not always implemented on commercial dataloggers, or, if they are, they may already be in use. Installing additional ADCs often implies an expensive development or the purchase of a new datalogger. We present here a simple method to take advantage of the ADCs of an embedded computer in order to create data in a standard seismological format and integrate them within the real-time data stream from the station. Our first goal is to plug temperature and pressure sensors into the ADCs, read the data and record them in mini-SEED format (SEED stands for Standard for the Exchange of Earthquake Data), and eventually transfer them to a central server together with the seismic data by using SeedLink, since mini-SEED and SeedLink are standards in seismology.
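
    One common way to write such auxiliary channels as mini-SEED from Python is the ObsPy library (ObsPy is not mentioned in the record above; this is just one possible implementation of the idea, with placeholder station codes):

        import numpy as np
        from obspy import Trace, Stream, UTCDateTime

        # Hypothetical 1 Hz temperature counts read from an on-board ADC
        samples = np.array([2113, 2114, 2112, 2115], dtype=np.int32)

        tr = Trace(data=samples)
        tr.stats.network = "XX"       # placeholder codes, not a real station
        tr.stats.station = "TEST"
        tr.stats.channel = "LKO"      # L = ~1 Hz band code, K = temperature (SEED codes)
        tr.stats.sampling_rate = 1.0
        tr.stats.starttime = UTCDateTime()

        Stream([tr]).write("temperature.mseed", format="MSEED")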

  4. Economic evaluation of smoke alarm distribution methods in Baltimore, Maryland.

    PubMed

    Diamond-Smith, Nadia; Bishai, David; Perry, Elise; Shields, Wendy; Gielen, Andrea

    2014-08-01

    This paper analyses costs and potential lives saved from a door-to-door smoke alarm distribution programme using data from a programme run by the Baltimore City Fire Department in 2010-2011. We evaluate the impact of a standard home visit programme and an enhanced home visit programme that includes having community health workers provide advance notice, promote the programme, and accompany fire department personnel on the day of the home visit, compared with each other and with an option of not having a home visit programme (control). Study data show that the home visit programme increased by 10% the number of homes that went from having no working alarm to having any working alarm, and the enhanced programme added an additional 1% to the number of homes protected. We use published reports on the relative risk of death in homes with and without a working smoke alarm to show that the standard programme would save an additional 0.24 lives per 10,000 homes over 10 years, compared with control areas, and the enhanced home visit programme saved an additional 0.07 lives compared with the standard programme. The incremental cost of each life saved for the standard programme compared with control was $28,252 per death averted and $284,501 per additional death averted for the enhanced compared with the standard. Following the US guidelines for the value of a life, both programmes are cost effective; however, the standard programme may offer better value in terms of dollars per death averted. The study also highlights the need for better data on the benefits of current smoke alarm recommendations and their impact on injury, death and property damage. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
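
    The incremental cost-effectiveness figures quoted follow from the usual ratio of incremental cost to incremental effect; a sketch with placeholder inputs (not the programme's actual cost data):

        def icer(cost_new, cost_old, effect_new, effect_old):
            """Incremental cost-effectiveness ratio: extra cost per extra unit of
            effect (here, deaths averted per 10,000 homes over 10 years)."""
            return (cost_new - cost_old) / (effect_new - effect_old)

        print(icer(cost_new=7_000.0, cost_old=0.0, effect_new=0.24, effect_old=0.0))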

  5. Functional Additive Mixed Models

    PubMed Central

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2014-01-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592

  6. Functional Additive Mixed Models.

    PubMed

    Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja

    2015-04-01

    We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach.

  7. Interlaboratory assessment of mitotic index by flow cytometry confirms superior reproducibility relative to microscopic scoring.

    PubMed

    Roberts, D J; Spellman, R A; Sanok, K; Chen, H; Chan, M; Yurt, P; Thakur, A K; DeVito, G L; Murli, H; Stankowski, L F

    2012-05-01

    A flow cytometric procedure for determining mitotic index (MI) as part of the metaphase chromosome aberrations assay, developed and utilized routinely at Pfizer as part of their standard assay design, has been adopted successfully by Covance laboratories. This method, using antibodies against phosphorylated histone tails (H3PS10) and nucleic acid stain, has been evaluated by the two independent test sites and compared to manual scoring. Primary human lymphocytes were treated with cyclophosphamide, mitomycin C, benzo(a)pyrene, and etoposide at concentrations inducing dose-dependent cytotoxicity. Deming regression analysis indicates that the results generated via flow cytometry (FCM) were more consistent between sites than those generated via microscopy. Further analysis using the Bland-Altman modification of the Tukey mean difference method supports this finding, as the standard deviations (SDs) of differences in MI generated by FCM were less than half of those generated manually. Decreases in scoring variability owing to the objective nature of FCM, and the greater number of cells analyzed, make FCM a superior method for MI determination. In addition, the FCM method has proven to be transferable and easily integrated into standard genetic toxicology laboratory operations. Copyright © 2012 Wiley Periodicals, Inc.
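
    The Bland-Altman quantities referred to (the mean difference between paired measurements, its SD, and the resulting limits of agreement) take only a few lines to compute; the paired mitotic-index values below are invented:

        import numpy as np

        def bland_altman(a, b):
            """Mean difference, SD of differences, and 95% limits of agreement
            for paired measurements a and b."""
            diff = np.asarray(a) - np.asarray(b)
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

        site1 = [4.2, 5.1, 3.8, 6.0, 5.5]   # invented mitotic indices (%) at two sites
        site2 = [4.0, 5.3, 3.9, 5.8, 5.6]
        print(bland_altman(site1, site2))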

  8. Early cosmology constrained

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Verde, Licia; Jimenez, Raul; Bellini, Emilio

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter ΩMR < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < Neff < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is rs = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to rs = 150 ± 5 Mpc.

  9. Early cosmology constrained

    NASA Astrophysics Data System (ADS)

    Verde, Licia; Bellini, Emilio; Pigozzo, Cassio; Heavens, Alan F.; Jimenez, Raul

    2017-04-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter ΩMR < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < Neff < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is rs = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to rs = 150 ± 5 Mpc.

  10. Evaluating the Good Ontology Design Guideline (GoodOD) with the Ontology Quality Requirements and Evaluation Method and Metrics (OQuaRE)

    PubMed Central

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    Objective To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold-standard- and competency-question-based evaluation methods, respectively. Background In recent decades, many methods for ontology construction and ontology evaluation have been proposed. However, none of them has become a standard and there is no empirical evidence of comparative evaluation of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. Methods In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained in the previous studies based on the same data. Results Our results show a significant effect of the GoodOD training on the developed ontologies, by topic: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. Conclusion The GoodOD guideline had a significant effect on the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies. PMID:25148262

  11. Comparison of two filtration-elution procedures to improve the standard methods ISO 10705-1 & 2 for bacteriophage detection in groundwater, surface water and finished water samples.

    PubMed

    Helmi, K; Jacob, P; Charni-Ben-Tabassi, N; Delabre, K; Arnal, C

    2011-09-01

    To select a reliable method for bacteriophage concentration prior to detection by culture from surface water, groundwater and drinking water, in order to enhance the sensitivity of the standard methods ISO 10705-1 & 2. Artificially contaminated (groundwater and drinking water) and naturally contaminated (surface water) 1-litre samples were processed for bacteriophage detection. The spiked samples were inoculated with about 150 PFU of F-specific RNA bacteriophages and somatic coliphages using wastewater. Bacteriophage detection in the water samples was performed using the standard method without and with a concentration step (electropositive Anodisc membrane or a pretreated electronegative microfiltration (MF) membrane). For artificially contaminated matrices (drinking water and groundwater), recovery rates using the concentration step were above 70%, whereas analyses without a concentration step mainly led to false-negative results. In addition, the MF membrane performed better than the Anodisc membrane. Concentrating a large volume of water (up to one litre) on a filter membrane avoids the false-negative results obtained by direct analysis, as it allows detection of low numbers of bacteriophages in water samples. The addition of a concentration step before applying the standard method could be useful to enhance the reliability of monitoring bacteriophages in water samples as bio-indicators of faecal pollution. © No claim to French Government works. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.

  12. Microbleed detection using automated segmentation (MIDAS): a new method applicable to standard clinical MR images.

    PubMed

    Seghier, Mohamed L; Kolanko, Magdalena A; Leff, Alexander P; Jäger, Hans R; Gregoire, Simone M; Werring, David J

    2011-03-23

    Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an "extra" tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions (ICC = 0.71, improved to 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds.
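
    The agreement statistics reported above, Cohen's kappa for presence/absence and an intraclass correlation for counts, can be computed as in the sketch below. The per-patient counts are invented, cohen_kappa_score is from scikit-learn, and the hand-rolled ICC(1,1) (one-way random, single measures) is only one of several ICC variants, so it may not match the exact formulation used in the study.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_1_1(ratings):
    """One-way random, single-measure ICC(1,1) for a subjects x raters matrix."""
    ratings = np.asarray(ratings, float)
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical per-patient microbleed counts from manual and automated rating.
manual    = np.array([0, 2, 5, 1, 0, 3, 7, 0])
automated = np.array([0, 2, 4, 1, 1, 3, 6, 0])

# Agreement on presence/absence (kappa) and on counts (ICC).
print("kappa:", cohen_kappa_score(manual > 0, automated > 0))
print("ICC(1,1):", icc_1_1(np.column_stack([manual, automated])))
```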

  13. Novel methods of imaging and analysis for the thermoregulatory sweat test.

    PubMed

    Carroll, Michael Sean; Reed, David W; Kuntz, Nancy L; Weese-Mayer, Debra Ellyn

    2018-06-07

    The thermoregulatory sweat test (TST) can be central to the identification and management of disorders affecting sudomotor function and small sensory and autonomic nerve fibers, but the cumbersome nature of the standard testing protocol has prevented its widespread adoption. A high resolution, quantitative, clean and simple assay of sweating could significantly improve identification and management of these disorders. Images from 89 clinical TSTs were analyzed retrospectively using two novel techniques. First, using the standard indicator powder, skin surface sweat distributions were determined algorithmically for each patient. Second, a fundamentally novel method using thermal imaging of forced evaporative cooling was evaluated through comparison with the standard technique. Correlation and receiver operating characteristic analyses were used to determine the degree of match between these methods, and the potential limits of thermal imaging were examined through cumulative analysis of all studied patients. Algorithmic encoding of sweating and non-sweating regions produces a more objective analysis for clinical decision making. Additionally, results from the forced cooling method correspond well with those from indicator powder imaging, with a correlation across spatial regions of -0.78 (CI: -0.84 to -0.71). The method works similarly across body regions, and frame-by-frame analysis suggests the ability to identify sweating regions within about 1 second of imaging. While algorithmic encoding can enhance the standard sweat testing protocol, thermal imaging with forced evaporative cooling can dramatically improve the TST by making it less time-consuming and more patient-friendly than the current approach.
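
    A minimal sketch of the two comparisons named above, correlation across regions and ROC analysis, using hypothetical per-region read-outs from thermal imaging and indicator powder. The numbers, the 0.5 threshold used to binarize the reference, and the sign convention are illustrative assumptions (the abstract reports a negative correlation, which simply reflects the opposite sign convention for the thermal read-out).

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

# Hypothetical per-region values: evaporative cooling magnitude (deg C) from
# thermal imaging, and fractional sweating area from indicator powder.
cooling = np.array([0.1, 0.4, 1.2, 0.9, 0.2, 1.5, 0.3, 1.1])
powder  = np.array([0.05, 0.20, 0.80, 0.70, 0.10, 0.90, 0.15, 0.75])
sweating = powder > 0.5          # binary reference label per region

# Correlation between the two continuous read-outs across spatial regions.
r, p = pearsonr(cooling, powder)

# ROC analysis: how well the thermal read-out separates sweating regions.
auc = roc_auc_score(sweating, cooling)
print("Pearson r:", r, " p:", p, " ROC AUC:", auc)
```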

  14. Static headspace gas chromatographic method for quantitative determination of residual solvents in pharmaceutical drug substances according to european pharmacopoeia requirements.

    PubMed

    Otero, Raquel; Carrera, Guillem; Dulsat, Joan Francesc; Fábregas, José Luís; Claramunt, Juan

    2004-11-19

    A static headspace (HS) gas chromatographic method for the quantitative determination of residual solvents in a drug substance has been developed according to the European Pharmacopoeia general procedure. A water-dimethylformamide mixture is proposed as the sample solvent to obtain good sensitivity and recovery. The standard addition technique with internal standard quantitation was used for the determination of ethanol, tetrahydrofuran and toluene. Validation was performed within the requirements of ICH validation guidelines Q2A and Q2B. Selectivity was tested for 36 solvents, and the system suitability requirements described in the European Pharmacopoeia were checked. Limits of detection and quantitation, precision, linearity, accuracy, intermediate precision and robustness were determined, and excellent results were obtained.
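
    A minimal sketch of standard addition combined with internal-standard quantitation, the calibration strategy named above: the analyte response is normalised by the internal-standard response, regressed against the added concentration, and extrapolated to the x-intercept. The peak areas, concentrations, and units are invented for illustration and are not data from the paper.

```python
import numpy as np

# Hypothetical headspace-GC data for one residual solvent: equal sample
# aliquots are spiked with increasing amounts of the solvent standard, and
# every vial also contains a fixed amount of internal standard (IS).
added_conc   = np.array([0.0, 50.0, 100.0, 150.0])       # added solvent, ppm
analyte_area = np.array([1250.0, 2410.0, 3580.0, 4720.0])
is_area      = np.array([5010.0, 4980.0, 5050.0, 4990.0])

# Normalise the analyte response by the IS response, then fit a straight line
# of response ratio versus added concentration.
ratio = analyte_area / is_area
slope, intercept = np.polyfit(added_conc, ratio, 1)

# The magnitude of the x-intercept estimates the concentration already
# present in the unspiked sample.
c_sample = intercept / slope
print(f"estimated residual solvent in sample: {c_sample:.1f} ppm")
```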

  15. A novel approach for quantitation of glucosylceramide in human dried blood spot using LC-MS/MS.

    PubMed

    Ji, Allena Ji; Wang, Haixing; Ziso-Qejvanaj, Enida; Zheng, Kefei; Chung, Lee Lee; Foley, Timothy; Chuang, Wei-Lien; Richards, Susan; Sung, Crystal

    2015-01-01

    Glucosylceramide, an efficacy biomarker for Gaucher Type 1 disease, exhibits poor solubility in polar solvents and whole blood, which makes it difficult to prepare a homogeneous blood standard. We developed a novel method using a standard addition approach, spiking a small volume of analyte solution onto the surface of a prespotted dried blood spot. The whole spots were punched out for subsequent extraction and LC-MS/MS analysis. The assay performance met all validation acceptance criteria. Glucosylceramide concentrations in 50 paired plasma and dried blood spot samples obtained from Gaucher Type 1 patients were tested, and the results demonstrated the feasibility of using the DBS method for clinical biomarker monitoring. The new approach greatly improves assay precision and accuracy.

  16. Interlaboratory Study of Quality Control Isolates for a Broth Microdilution Method (Modified CLSI M38-A) for Testing Susceptibilities of Dermatophytes to Antifungals▿

    PubMed Central

    Ghannoum, M. A.; Arthington-Skaggs, B.; Chaturvedi, V.; Espinel-Ingroff, A.; Pfaller, M. A.; Rennie, R.; Rinaldi, M. G.; Walsh, T. J.

    2006-01-01

    The Clinical and Laboratory Standards Institute (CLSI; formerly National Committee for Clinical Laboratory Standards, or NCCLS) M38-A standard for the susceptibility testing of filamentous fungi does not specifically address the testing of dermatophytes. In 2003, a multicenter study investigated the reproducibility of the microdilution method developed at the Center for Medical Mycology, Cleveland, Ohio, for testing the susceptibility of dermatophytes. Data from that study supported the introduction of this method for testing dermatophytes in the future version of the CLSI M38-A standard. In order for the method to be accepted by CLSI, appropriate quality control isolates needed to be identified. To that end, an interlaboratory study, involving the original six laboratories plus two additional sites, was conducted to evaluate potential candidates for quality control isolates. These candidate strains included five Trichophyton rubrum strains known to have elevated MICs to terbinafine and five Trichophyton mentagrophytes strains. Antifungal agents tested included ciclopirox, fluconazole, griseofulvin, itraconazole, posaconazole, terbinafine, and voriconazole. Based on the data generated, two quality control isolates, one T. rubrum isolate and one T. mentagrophytes isolate, were identified and submitted to the American Type Culture Collection (ATCC) for inclusion as reference strains. Ranges encompassing 95.2 to 97.9% of all data points for all seven drugs were established. PMID:17050812

  17. Preparation of crotaline F-ab antivenom (CroFab) with automated mixing methods: in vitro observations.

    PubMed

    Vohra, Rais; Kelner, Michael; Clark, Richard F

    2009-01-01

    Crotaline Polyvalent Ovine Fab antivenom (CroFab, Savage Laboratories and Protherics Inc., Brentwood, TN, USA) preparation requires that the lyophilized powder be manually reconstituted before use. We compared automated methods for driving the product into solution with the standard manual method of reconstitution, and examined the effect of repeated rinsing of the product vial on the per-vial availability of antivenom. Normal saline (NS, 10 mL) was added to 12 vials of expired CroFab. Vials were assigned in pairs to each of six mixing methods, including one pair mixed manually as recommended by the product package insert. Each vial's contents were diluted to a final volume of 75 mL of normal saline. Protein concentration was measured with a colorimetric assay. The fluid left in each vial was removed and the vial was washed with 10 mL NS. Total protein yield from each step was calculated. There was no significant change in protein yield among three of five automated mixing methods when compared to manual reconstitution. Repeat rinsing of the product vial with an additional 10 mL of fluid added to the protein yield regardless of the mixing method used. We found slightly higher protein yields with all automated methods compared to manual mixing, but only two of five comparisons with the standard mixing method demonstrated statistical significance. However, for all methods tested, the addition of a second rinsing and recovery step increased the amount of protein recovered considerably, presumably by dissolving protein trapped in the foamy residues. Automated mixing methods and repeat rinsing of the product vial may allow higher protein yields in the preparation of CroFab antivenom.

  18. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    PubMed Central

    Beichel, Reinhard R.; Van Tol, Markus; Ulrich, Ethan J.; Bauer, Christian; Chang, Tangel; Plichta, Kristin A.; Smith, Brian J.; Sunderland, John J.; Graham, Michael M.; Sonka, Milan; Buatti, John M.

    2016-01-01

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction. PMID:27277044
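
    A minimal sketch of the Dice overlap metric used above to compare semiautomated and manual segmentations; the toy binary masks stand in for PET lesion segmentations and are not derived from the study data.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Hypothetical 2D masks standing in for a PET lesion segmentation:
# 'auto' from a semiautomated method, 'manual' from an expert.
auto = np.zeros((64, 64), bool)
manual = np.zeros((64, 64), bool)
auto[20:40, 20:40] = True
manual[22:42, 21:41] = True

print(f"Dice = {dice(auto, manual):.3f}")
```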

  19. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beichel, Reinhard R., E-mail: reinhard-beichel@uiowa.edu; Iowa Institute for Biomedical Imaging, University of Iowa, Iowa City, Iowa 52242; Department of Internal Medicine, University of Iowa, Iowa City, Iowa 52242

    Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.

  20. Application of advanced sampling and analysis methods to predict the structure of adsorbed protein on a material surface

    PubMed Central

    Abramyan, Tigran M.; Hyde-Volpe, David L.; Stuart, Steven J.; Latour, Robert A.

    2017-01-01

    The use of standard molecular dynamics simulation methods to predict the interactions of a protein with a material surface has the inherent limitations of lacking the ability to determine the most likely conformations and orientations of the adsorbed protein on the surface and to determine the level of convergence attained by the simulation. In addition, standard mixing rules are typically applied to combine the nonbonded force field parameters of the solution and solid phases of the system to represent interfacial behavior, without validation. As a means to circumvent these problems, the authors demonstrate the application of an efficient advanced sampling method (TIGER2A) for the simulation of the adsorption of hen egg-white lysozyme on a crystalline (110) high-density polyethylene surface plane. Simulations are conducted to generate a Boltzmann-weighted ensemble of sampled states using force field parameters that were validated to represent interfacial behavior for this system. The resulting ensembles of sampled states were then analyzed using an in-house-developed cluster analysis method to predict the most probable orientations and conformations of the protein on the surface based on the amount of sampling performed, from which free energy differences between the adsorbed states could be calculated. In addition, by conducting two independent sets of TIGER2A simulations combined with cluster analyses, the authors demonstrate a method to estimate the degree of convergence achieved for a given amount of sampling. The results from these simulations demonstrate that these methods enable the most probable orientations and conformations of an adsorbed protein to be predicted and that the use of our validated interfacial force field parameter set provides closer agreement to available experimental results compared to using standard CHARMM force field parameterization to represent molecular behavior at the interface. PMID:28514864

  1. The role of ultrasound guidance in pediatric caudal block

    PubMed Central

    Erbüyün, Koray; Açıkgöz, Barış; Ok, Gülay; Yılmaz, Ömer; Temeltaş, Gökhan; Tekin, İdil; Tok, Demet

    2016-01-01

    Objectives: To compare the time interval of the procedure, possible complications, post-operative pain levels, additional analgesics, and nurse satisfaction in ultrasonography-guided and standard caudal block applications. Methods: This retrospective study was conducted in Celal Bayar University Hospital, Manisa, Turkey, between January and December 2014, and included 78 pediatric patients. Caudal block was applied in 2 different groups: one with ultrasound guidance, and the other using the standard method. Results: The time interval of the procedure was significantly shorter in the standard application group compared with the ultrasound-guided group (p=0.020). Wong-Baker FACES Pain Rating Scale values obtained at the 90th minute were statistically lower in the standard application group compared with the ultrasound-guided group (p=0.035). No statistically significant difference was found in the other parameters between the 2 groups. The shorter procedure time in the standard application group should not be considered a decisive advantage by pediatric anesthesiologists, because the difference amounted to only seconds. Conclusion: Ultrasound guidance for caudal block applications would neither increase nor decrease the success of the treatment. However, ultrasound guidance may be needed in cases where the sacral anatomy is difficult to identify, especially by palpation. PMID:26837396

  2. Baseline Assessment of 25-Hydroxyvitamin D Reference Material and Proficiency Testing/External Quality Assurance Material Commutability: A Vitamin D Standardization Program Study.

    PubMed

    Phinney, Karen W; Sempos, Christopher T; Tai, Susan S-C; Camara, Johanna E; Wise, Stephen A; Eckfeldt, John H; Hoofnagle, Andrew N; Carter, Graham D; Jones, Julia; Myers, Gary L; Durazo-Arvizu, Ramon; Miller, W Greg; Bachmann, Lorin M; Young, Ian S; Pettit, Juanita; Caldwell, Grahame; Liu, Andrew; Brooks, Stephen P J; Sarafin, Kurtis; Thamm, Michael; Mensink, Gert B M; Busch, Markus; Rabenberg, Martina; Cashman, Kevin D; Kiely, Mairead; Galvin, Karen; Zhang, Joy Y; Kinsella, Michael; Oh, Kyungwon; Lee, Sun-Wha; Jung, Chae L; Cox, Lorna; Goldberg, Gail; Guberg, Kate; Meadows, Sarah; Prentice, Ann; Tian, Lu; Brannon, Patsy M; Lucas, Robyn M; Crump, Peter M; Cavalier, Etienne; Merkel, Joyce; Betz, Joseph M

    2017-09-01

    The Vitamin D Standardization Program (VDSP) coordinated a study in 2012 to assess the commutability of reference materials and proficiency testing/external quality assurance materials for total 25-hydroxyvitamin D [25(OH)D] in human serum, the primary indicator of vitamin D status. A set of 50 single-donor serum samples as well as 17 reference and proficiency testing/external quality assessment materials were analyzed by participating laboratories that used either immunoassay or LC-MS methods for total 25(OH)D. The commutability test materials included National Institute of Standards and Technology Standard Reference Material 972a Vitamin D Metabolites in Human Serum as well as materials from the College of American Pathologists and the Vitamin D External Quality Assessment Scheme. Study protocols and data analysis procedures were in accordance with Clinical and Laboratory Standards Institute guidelines. The majority of the test materials were found to be commutable with the methods used in this commutability study. These results provide guidance for laboratories needing to choose appropriate reference materials and select proficiency or external quality assessment programs and will serve as a foundation for additional VDSP studies.

  3. Nutrigenomics, beta-cell function and type 2 diabetes.

    PubMed

    Nino-Fong, R; Collins, Tm; Chan, Cb

    2007-03-01

    The present investigation was designed to assess the accuracy and precision of lactate measurements obtained with contemporary biosensors (Chiron Diagnostics, Nova Biomedical) and standard enzymatic photometric procedures (Sigma Diagnostics, Abbott Laboratories, Analyticon). Measurements were performed in vitro before and after the stepwise addition of 1 molar sodium lactate solution to samples of fresh frozen plasma to systematically achieve lactate concentrations of up to 20 mmol/l. Precision of the methods investigated varied between 1% and 7%; accuracy ranged between 2% and -33%, with the variability being lowest for the Sigma photometric procedure (6%) and more than 13% for both biosensor methods. Biosensors for lactate measurement provide adequate mean accuracy, but with the limitation of highly variable results: a true lactate value of 6 mmol/l could be reported as anywhere between 4.4 and 7.6 mmol/l, or with an even larger deviation. Biosensors and standard enzymatic photometric procedures are therefore only comparable to a limited extent, because the differences between paired determinations amounted to several mmol/l. The advantage of biosensors is the complete absence of preanalytical sample preparation, which appeared to be the major limitation of the standard photometric methods.

  4. Electrothermal atomic absorption spectrometric determination of copper in nickel-base alloys with various chemical modifiers

    NASA Astrophysics Data System (ADS)

    Tsai, Suh-Jen Jane; Shiue, Chia-Chann; Chang, Shiow-Ing

    1997-07-01

    The analytical characteristics of copper in nickel-base alloys have been investigated with electrothermal atomic absorption spectrometry. Deuterium background correction was employed. The effects of various chemical modifiers on the analysis of copper were investigated. Organic modifiers which included 2-(5-bromo-2-pyridylazo)-5-(diethylamino-phenol) (Br-PADAP), ammonium citrate, 1-(2-pyridylazo)-naphthol, 4-(2-pyridylazo)resorcinol, ethylenediaminetetraacetic acid and Triton X-100 were studied. Inorganic modifiers palladium nitrate, magnesium nitrate, aluminum chloride, ammonium dihydrogen phosphate, hydrogen peroxide and potassium nitrate were also applied in this work. In addition, zirconium hydroxide and ammonium hydroxide precipitation methods have also been studied. Interference effects were effectively reduced with Br-PADAP modifier. Aqueous standards were used to construct the calibration curves. The detection limit was 1.9 pg. Standard reference materials of nickel-base alloys were used to evaluate the accuracy of the proposed method. The copper contents determined with the proposed method agreed closely with the certified values of the reference materials. The recoveries were within the range 90-100% with relative standard deviation of less than 10%. Good precision was obtained.

  5. Simple 1H NMR spectroscopic method for assay of salts of the contrast agent diatrizoate in commercial solutions.

    PubMed

    Hanna, G M; Lau-Cam, C A

    1996-01-01

    A simple, accurate, and specific 1H NMR spectroscopic method was developed for the assay of diatrizoate meglumine or the combination of diatrizoate meglumine and diatrizoate sodium in commercial solutions for injection. A mixture of the injectable solution and sodium acetate, the internal standard, was diluted with D2O and the 1H NMR spectrum of the solution was obtained. Two approaches were used to calculate the drug content, based on the integral values of the -N-CO-CH3 protons of diatrizoic acid at 2.23 ppm and of the -N-CH3 protons of meglumine at 2.73 ppm, each referenced to the CH3-CO- protons of the sodium acetate internal standard at 1.9 ppm. Recoveries (mean +/- standard deviation) of diatrizoic acid and meglumine from 10 synthetic mixtures of various amounts of these compounds with a fixed amount of internal standard were 100.3 +/- 0.55% and 100.1 +/- 0.98%, respectively. In addition to providing a direct means of simultaneously assaying diatrizoic acid and meglumine, the proposed NMR method can also be used to identify diatrizoate meglumine and each of its molecular components.
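
    The quantitation described above is ordinary internal-standard quantitative-NMR arithmetic: the analyte amount follows from the proton-normalised integral ratio against the weighed internal standard. The sketch below shows that calculation for the diatrizoic acid integral; the weighed mass, the integrals, and the assumption of 100% internal-standard purity are illustrative, while the molar masses and proton counts are standard values.

```python
# Quantitative-NMR arithmetic with an internal standard (IS):
#   mol_analyte / mol_IS = (I_analyte / N_analyte) / (I_IS / N_IS)
M_ANALYTE = 613.9      # g/mol, diatrizoic acid
M_IS = 82.0            # g/mol, sodium acetate (anhydrous)
N_ANALYTE = 6          # protons in the two -N-CO-CH3 groups integrated at 2.23 ppm
N_IS = 3               # protons in the CH3-CO- group of acetate at 1.9 ppm

mass_is_mg = 25.0      # weighed internal standard (assumed value)
integral_analyte = 4.12   # assumed integral values
integral_is = 2.05

mass_analyte_mg = (mass_is_mg
                   * (integral_analyte / N_ANALYTE)
                   / (integral_is / N_IS)
                   * (M_ANALYTE / M_IS))
print(f"diatrizoic acid in the assayed aliquot: {mass_analyte_mg:.1f} mg")
```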

  6. Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration

    NASA Technical Reports Server (NTRS)

    Merritt, D. A.; Brand, W. A.; Hayes, J. M.

    1994-01-01

    In trial analyses of a series of n-alkanes, precise determinations of 13C contents were based on isotopic standards introduced by five different techniques and results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO2 was supplied from a conventional inlet and mixed with the analyte in the ion source, or CO2 was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO2 standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (σ = 0.00006 at.% 13C or 0.06% for 250 pmol C as CO2 reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ σ ≤ 0.0002 at.% or 0.1% ≤ σ ≤ 0.2%).

  7. NASA Handbook for Models and Simulations: An Implementation Guide for NASA-STD-7009

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2013-01-01

    The purpose of this Handbook is to provide technical information, clarification, examples, processes, and techniques to help institute good modeling and simulation practices in the National Aeronautics and Space Administration (NASA). As a companion guide to NASA-STD-7009, Standard for Models and Simulations, this Handbook provides a broader scope of information than may be included in a Standard and promotes good practices in the production, use, and consumption of NASA modeling and simulation products. NASA-STD-7009 specifies what a modeling and simulation activity shall or should do (in the requirements) but does not prescribe how the requirements are to be met, which varies with the specific engineering discipline, or who is responsible for complying with the requirements, which depends on the size and type of project. A guidance document, which is not constrained by the requirements of a Standard, is better suited to address these additional aspects and provide necessary clarification. This Handbook stems from the Space Shuttle Columbia Accident Investigation (2003), which called for Agency-wide improvements in the "development, documentation, and operation of models and simulations" that subsequently elicited additional guidance from the NASA Office of the Chief Engineer to include "a standard method to assess the credibility of the models and simulations." General methods applicable across the broad spectrum of model and simulation (M&S) disciplines were sought to help guide the modeling and simulation processes within NASA and to provide for consistent reporting of M&S activities and analysis results. From this, the standardized process for the M&S activity was developed. The major contents of this Handbook are the implementation details of the general M&S requirements of NASA-STD-7009, including explanations, examples, and suggestions for improving the credibility assessment of an M&S-based analysis.

  8. High-Level Disinfection of Otorhinolaryngology Clinical Instruments: An Evaluation of the Efficacy and Cost-effectiveness of Instrument Storage.

    PubMed

    Yalamanchi, Pratyusha; Yu, Jason; Chandler, Laura; Mirza, Natasha

    2018-01-01

    Objectives Despite increasing interest in individual instrument storage, risk of bacterial cross-contamination of otorhinolaryngology clinic instruments has not been assessed. This study is the first to determine the clinical efficacy and cost-effectiveness of standard high-level disinfection and clinic instrument storage. Methods To assess for cross-contamination, surveillance cultures of otorhinolaryngology clinic instruments subject to standard high-level disinfection and storage were obtained at the start and end of the outpatient clinical workday. Rate of microorganism recovery was compared with cultures of instruments stored in individual peel packs and control cultures of contaminated instruments. Based on historical clinic data, the direct allocation method of cost accounting was used to determine aggregate raw material cost and additional labor hours required to process and restock peel-packed instruments. Results Among 150 cultures of standard high-level disinfected and co-located clinic instruments, 3 positive bacterial cultures occurred; 100% of control cultures were positive for bacterial species ( P < .001). There was no statistical difference between surveillance cultures obtained before and after the clinic day. While there was also no significant difference in rate of contamination between peel-packed and co-located instruments, peel packing all instruments requires 6250 additional labor hours, and conservative analyses place the cost of individual semicritical instrument storage at $97,852.50 per year. Discussion With in vitro inoculation of >200 otorhinolaryngology clinic instruments, this study demonstrates that standard high-level disinfection and storage are as efficacious as more time-consuming and expensive individual instrument storage protocols, such as peel packing, with regard to bacterial contamination. Implications for Practice Standard high-level disinfection and storage are as effective as labor-intensive and costly individual instrument storage protocols.

  9. Spectral data compression using weighted principal component analysis with consideration of human visual system and light sources

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan

    2016-10-01

    This paper proposes two weight functions based on principal component analysis (PCA) to preserve more colorimetric information in the spectral data compression process. One weight function consists of the CIE XYZ color-matching functions, representing the characteristics of the human visual system, while the other combines the CIE XYZ color-matching functions with the relative spectral power distribution of the CIE standard illuminant D65. The two proposed methods were tested by compressing and reconstructing the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips as well as six multispectral images. The performance was evaluated by the mean values of color difference under the CIE 1931 standard colorimetric observer and the CIE standard illuminants D65 and A. The mean values of root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the two proposed methods significantly outperform standard PCA and two other weighted PCA methods in colorimetric reconstruction accuracy, with only a very slight degradation in spectral reconstruction accuracy. In addition, weight functions that include the CIE standard illuminant D65 improve the colorimetric reconstruction accuracy compared to weight functions without it.
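
    A minimal NumPy sketch of weighted PCA compression and reconstruction of reflectance spectra in the spirit of the method above. The random spectra and the Gaussian stand-ins for the CIE colour-matching functions (optionally multiplied by an illuminant SPD) are assumptions, so this shows only the mechanics of weighting, not the paper's exact weight functions or data.

```python
import numpy as np

def weighted_pca_reconstruct(R, w, n_components=3):
    """Compress and reconstruct reflectance spectra with weighted PCA.

    R : (n_samples, n_wavelengths) reflectance matrix
    w : (n_wavelengths,) non-negative weight per wavelength
    """
    W = np.sqrt(w)                       # apply weights in the least-squares sense
    Rw = R * W                           # weighted spectra
    mean = Rw.mean(axis=0)
    U, s, Vt = np.linalg.svd(Rw - mean, full_matrices=False)
    basis = Vt[:n_components]            # principal directions in weighted space
    scores = (Rw - mean) @ basis.T       # compressed representation
    Rw_hat = scores @ basis + mean
    return Rw_hat / W                    # undo the weighting

# Toy data: random reflectances over 400-700 nm in 10 nm steps.
rng = np.random.default_rng(0)
wavelengths = np.arange(400, 701, 10)
R = np.clip(rng.normal(0.5, 0.15, (100, wavelengths.size)), 0, 1)

# Assumed weight: Gaussians standing in for the xbar, ybar, zbar tables.
def gauss(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)
w = gauss(600, 40) + gauss(555, 40) + gauss(450, 25)

R_hat = weighted_pca_reconstruct(R, w, n_components=3)
print("RMS spectral error:", np.sqrt(np.mean((R - R_hat) ** 2)))
```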

  10. Aminocarminic acid in E120-labelled food additives and beverages.

    PubMed

    Sabatino, Leonardo; Scordino, Monica; Gargano, Maria; Lazzaro, Francesco; Borzì, Marco A; Traulo, Pasqualino; Gagliano, Giacomo

    2012-01-01

    An analytical method was developed for investigating aminocarminic acid occurrence in E120-labelled red-coloured beverages and in E120 additives, with the aim of controlling the purity of the carmine additive in countries where the use of aminocarminic acid is forbidden. The carminic acid and the aminocarminic acid were separated by high-performance liquid chromatography-photodiode array-tandem mass spectrometry (HPLC-PDA-MS/MS). The method was statistically validated. The regression lines, ranging from 10 to 100 mg/L, showed r² > 0.9996. Recoveries from 97% to 101% were obtained for the fortification level of 50 mg/L; the relative standard deviations did not exceed 3%. The LODs were below 2 mg/L, whereas the LOQs did not exceed 4 mg/L. The method was successfully applied to 27 samples of commercial E120-labelled red-coloured beverages and E120 additives, collected in Italy during quality control investigations conducted by the Ministry. The results demonstrated that more than 50% of the samples contained aminocarminic acid, evidencing the alarming illicit use of this semi-synthetic carminic acid derivative.

  11. Submicrodeterminations of thiols, disulphides and thiol esters in serum by using o-hydroxymercuribenzoic acid and dithiofluorescein

    PubMed Central

    Wroński, Mieczysław

    1967-01-01

    1. Methods are described for the selective estimation of thiols, disulphides and thiol esters in standard solutions and in serum. The methods are based on reaction with an excess of o-hydroxymercuribenzoic acid (HMB) in alkaline solution, with subsequent addition of dithiofluorescein in excess and determination of the extinction at 588 mμ. The sensitivity of the methods amounts to 1.5×10−9 g.equiv. in 5 ml. of final solution. Of the results obtained on standard solutions, 80% had errors within the range ±4%. 2. It has been found that serum contains an unidentified substance (substance X) producing green complexes with dithiofluorescein which undergo decomposition on addition of formaldehyde. The correction for substance X must be estimated in a separate sample and taken into account. The concentration of substance X can be calculated from extinctions measured at 588 mμ and 635 mμ in the presence of excess dithiofluorescein. 3. The selective determination of thiols and disulphides is based on different reaction rates with formaldehyde. The complexes between HMB and cysteine can be selectively decomposed by formaldehyde, and free glutathione can be selectively removed by formaldehyde in the presence of protein thiols. 4. Thiols are determined in the presence of triethylamine, thiols plus disulphides in the presence of triethylamine and sulphite, and thiols plus thiol esters in the presence of dimethylamine, with subsequent addition of ammonium sulphate. PMID:6049936

  12. Software Formal Inspections Guidebook

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The Software Formal Inspections Guidebook is designed to support the inspection process of software developed by and for NASA. This document provides information on how to implement a recommended and proven method for conducting formal inspections of NASA software. This Guidebook is a companion document to NASA Standard 2202-93, Software Formal Inspections Standard, approved April 1993, which provides the rules, procedures, and specific requirements for conducting software formal inspections. Application of the Formal Inspections Standard is optional to NASA program or project management. In cases where program or project management decide to use the formal inspections method, this Guidebook provides additional information on how to establish and implement the process. The goal of the formal inspections process as documented in the above-mentioned Standard and this Guidebook is to provide a framework and model for an inspection process that will enable the detection and elimination of defects as early as possible in the software life cycle. An ancillary aspect of the formal inspection process incorporates the collection and analysis of inspection data to effect continual improvement in the inspection process and the quality of the software subjected to the process.

  13. Automated model-based quantitative analysis of phantoms with spherical inserts in FDG PET scans.

    PubMed

    Ulrich, Ethan J; Sunderland, John J; Smith, Brian J; Mohiuddin, Imran; Parkhurst, Jessica; Plichta, Kristin A; Buatti, John M; Beichel, Reinhard R

    2018-01-01

    Quality control plays an increasingly important role in quantitative PET imaging and is typically performed using phantoms. The purpose of this work was to develop and validate a fully automated analysis method for two common PET/CT quality assurance phantoms: the NEMA NU-2 IQ and SNMMI/CTN oncology phantom. The algorithm was designed to only utilize the PET scan to enable the analysis of phantoms with thin-walled inserts. We introduce a model-based method for automated analysis of phantoms with spherical inserts. Models are first constructed for each type of phantom to be analyzed. A robust insert detection algorithm uses the model to locate all inserts inside the phantom. First, candidates for inserts are detected using a scale-space detection approach. Second, candidates are given an initial label using a score-based optimization algorithm. Third, a robust model fitting step aligns the phantom model to the initial labeling and fixes incorrect labels. Finally, the detected insert locations are refined and measurements are taken for each insert and several background regions. In addition, an approach for automated selection of NEMA and CTN phantom models is presented. The method was evaluated on a diverse set of 15 NEMA and 20 CTN phantom PET/CT scans. NEMA phantoms were filled with radioactive tracer solution at 9.7:1 activity ratio over background, and CTN phantoms were filled with 4:1 and 2:1 activity ratio over background. For quantitative evaluation, an independent reference standard was generated by two experts using PET/CT scans of the phantoms. In addition, the automated approach was compared against manual analysis of the PET phantom scans by four experts, which represents the current clinical standard approach. The automated analysis method successfully detected and measured all inserts in all test phantom scans. It is a deterministic algorithm (zero variability), and the insert detection RMS error (i.e., bias) was 0.97, 1.12, and 1.48 mm for phantom activity ratios 9.7:1, 4:1, and 2:1, respectively. For all phantoms and at all contrast ratios, the average RMS error was found to be significantly lower for the proposed automated method compared to the manual analysis of the phantom scans. The uptake measurements produced by the automated method showed high correlation with the independent reference standard (R² ≥ 0.9987). In addition, the average computing time for the automated method was 30.6 s and was found to be significantly lower (P ≪ 0.001) compared to manual analysis (mean: 247.8 s). The proposed automated approach was found to have less error when measured against the independent reference than the manual approach. It can be easily adapted to other phantoms with spherical inserts. In addition, it eliminates inter- and intraoperator variability in PET phantom analysis and is significantly more time efficient, and therefore represents a promising approach to facilitate and simplify PET standardization and harmonization efforts. © 2017 American Association of Physicists in Medicine.
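
    The insert-detection step above begins with scale-space detection. The sketch below is a generic scale-normalised Laplacian-of-Gaussian blob detector run on a synthetic 2D slice; it is not the authors' algorithm, and the image, the trial sigmas, and the threshold are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def detect_blobs(image, sigmas, threshold):
    """Toy scale-space detector: negative scale-normalised LoG responses peak
    at bright blobs whose radius is roughly sqrt(2) * sigma."""
    # One response image per trial sigma, stacked along a scale axis.
    stack = np.stack([-s ** 2 * gaussian_laplace(image, s) for s in sigmas])
    # Local maxima over space and scale that exceed the threshold.
    peaks = (stack == maximum_filter(stack, size=3)) & (stack > threshold)
    scale_idx, rows, cols = np.nonzero(peaks)
    return [(r, c, sigmas[s]) for s, r, c in zip(scale_idx, rows, cols)]

# Synthetic 2D "slice" with two bright circular inserts on a flat background.
yy, xx = np.mgrid[0:128, 0:128]
image = 1.0 + 9.0 * (np.hypot(yy - 40, xx - 40) < 8) \
            + 9.0 * (np.hypot(yy - 90, xx - 80) < 14)

print(detect_blobs(image, sigmas=np.array([4.0, 6.0, 8.0, 10.0]), threshold=2.0))
```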

  14. An in vitro digestion method adapted for carotenoids and carotenoid esters: moving forward towards standardization.

    PubMed

    Rodrigues, Daniele Bobrowski; Mariutti, Lilian Regina Barros; Mercadante, Adriana Zerlotti

    2016-12-07

    In vitro digestion methods are a useful approach to predict the bioaccessibility of food components and overcome some limitations or disadvantages associated with in vivo methodologies. Recently, the INFOGEST network published a static method of in vitro digestion with a proposal for assay standardization. The INFOGEST method is not specific for any food component; therefore, we aimed to adapt this method to assess the in vitro bioaccessibility of carotenoids and carotenoid esters in a model fruit (Byrsonima crassifolia). Two additional steps were coupled to the in vitro digestion procedure, centrifugation at 20 000g for the separation of the aqueous phase containing mixed micelles and exhaustive carotenoid extraction with an organic solvent. The effect of electrolytes, enzymes and bile acids on carotenoid micellarization and stability was also tested. The results were compared with those obtained with a simpler method that has already been used for carotenoid bioaccessibility analysis; the bioaccessibility values were in the expected ranges for free carotenoids (5-29%), monoesters (9-26%) and diesters (4-28%). In general, the in vitro bioaccessibility of carotenoids assessed by the adapted INFOGEST method was significantly higher (p < 0.05) than that assessed by the simpler protocol, with or without the addition of simulated fluids. Although no trend was observed, differences in bioaccessibility values depended on the carotenoid form (free, monoester or diester), isomerization (Z/E) and the in vitro digestion protocol. To the best of our knowledge, this is the first time that a systematic identification of carotenoid esters by HPLC-DAD-MS/MS after in vitro digestion using the INFOGEST protocol has been carried out.

  15. 21 CFR 170.10 - Food additives in standardized foods.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 3 2012-04-01 2012-04-01 false Food additives in standardized foods. 170.10... (CONTINUED) FOOD FOR HUMAN CONSUMPTION (CONTINUED) FOOD ADDITIVES General Provisions § 170.10 Food additives... the Act, which proposes the inclusion of a food additive in such definition and standard of identity...

  16. 21 CFR 170.10 - Food additives in standardized foods.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 3 2013-04-01 2013-04-01 false Food additives in standardized foods. 170.10... (CONTINUED) FOOD FOR HUMAN CONSUMPTION (CONTINUED) FOOD ADDITIVES General Provisions § 170.10 Food additives... the Act, which proposes the inclusion of a food additive in such definition and standard of identity...

  17. 21 CFR 170.10 - Food additives in standardized foods.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Food additives in standardized foods. 170.10... (CONTINUED) FOOD FOR HUMAN CONSUMPTION (CONTINUED) FOOD ADDITIVES General Provisions § 170.10 Food additives... the Act, which proposes the inclusion of a food additive in such definition and standard of identity...

  18. 21 CFR 170.10 - Food additives in standardized foods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Food additives in standardized foods. 170.10... (CONTINUED) FOOD FOR HUMAN CONSUMPTION (CONTINUED) FOOD ADDITIVES General Provisions § 170.10 Food additives... the Act, which proposes the inclusion of a food additive in such definition and standard of identity...

  19. Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report.

    PubMed

    Appelbaum, Mark; Cooper, Harris; Kline, Rex B; Mayo-Wilson, Evan; Nezu, Arthur M; Rao, Stephen M

    2018-01-01

    Following a review of extant reporting standards for scientific publication, and reviewing 10 years of experience since publication of the first set of reporting standards by the American Psychological Association (APA; APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008), the APA Working Group on Quantitative Research Reporting Standards recommended some modifications to the original standards. Examples of modifications include division of hypotheses, analyses, and conclusions into 3 groupings (primary, secondary, and exploratory) and some changes to the section on meta-analysis. Several new modules are included that report standards for observational studies, clinical trials, longitudinal studies, replication studies, and N-of-1 studies. In addition, standards for analytic methods with unique characteristics and output (structural equation modeling and Bayesian analysis) are included. These proposals were accepted by the Publications and Communications Board of APA and supersede the standards included in the 6th edition of the Publication Manual of the American Psychological Association (APA, 2010). (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Analysis of backward error recovery for concurrent processes with recovery blocks

    NASA Technical Reports Server (NTRS)

    Shin, K. G.; Lee, Y. H.

    1982-01-01

    Three different methods of implementing recovery blocks (RBs) are considered: asynchronous, synchronous, and pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when pseudo recovery points (PRPs) are used were estimated.
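
    A toy Monte Carlo sketch of the rollback-distance quantity mentioned above, under the abstract's exponential-distribution assumptions. The rates, the single-process setting, and the analytic comparison value are illustrative assumptions; this does not reproduce the authors' analytical models.

```python
import numpy as np

rng = np.random.default_rng(1)

failure_rate = 0.02     # failures per unit time, assumed for illustration
checkpoint_rate = 0.5   # recovery points established per unit time, assumed

def simulate_mean_rollback(n_trials=20_000):
    """Simulate one process: recovery points occur at exponential intervals,
    a failure occurs at an exponential time, and the rollback distance is
    the time elapsed since the most recent recovery point (or since the
    start, if none was established before the failure)."""
    losses = np.empty(n_trials)
    for i in range(n_trials):
        t_fail = rng.exponential(1.0 / failure_rate)
        t, last_cp = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / checkpoint_rate)
            if t >= t_fail:
                break
            last_cp = t
        losses[i] = t_fail - last_cp
    return losses.mean()

# For independent exponentials, the rollback is the minimum of the two
# waiting times, so its mean is 1 / (failure_rate + checkpoint_rate).
print("simulated mean rollback:", simulate_mean_rollback())
print("analytic mean rollback :", 1.0 / (failure_rate + checkpoint_rate))
```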

  1. XBRL: The New Language of Corporate Financial Reporting

    ERIC Educational Resources Information Center

    Lester, Wanda F.

    2007-01-01

    In its purest form, accounting is a method of communication, and many refer to it as the language of business. Although the average citizen might view accounting as a convoluted set of complex standards, the recent abuses of data have resulted in legislation and investor demands for timely and relevant information. In addition, global requirements…

  2. Do "Clicker" Educational Sessions Enhance the Effectiveness of a Social Norms Marketing Campaign?

    ERIC Educational Resources Information Center

    Killos, Lydia F.; Hancock, Linda C.; McGann, Amanda Wattenmaker; Keller, Adrienne E.

    2010-01-01

    Objective: Social norms campaigns are a cost-effective way to reduce high-risk drinking on college campuses. This study compares effectiveness of a "standard" social norms media (SNM) campaign for those with and without exposure to additional educational sessions using audience response technology ("clickers"). Methods: American College Health…

  3. Determination of chromium in treated crayfish, Procambarus clarkii, by electrothermal AAS: study of chromium accumulation in different tissues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez, F.; Diaz, J.; Medina, J.

    1986-06-01

    In the present study, the authors investigated the accumulation of chromium in muscle, hepatopancreas, antennal glands, and gills of Procambarus clarkii (Girard) from Lake Albufera following Cr(VI)-exposure. Determinations of chromium were made by using Electrothermal Atomic Absorption Spectroscopy and the standard additions method.
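
    As a concrete illustration of the standard additions method mentioned above, the sketch below fits a line to hypothetical absorbance readings from spiked aliquots and extrapolates to the x-intercept; the concentrations and absorbances are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical standard-additions data for one digested tissue sample:
# equal aliquots are spiked with increasing amounts of a Cr standard and
# the absorbance is measured for each.
added_ng_per_ml = np.array([0.0, 2.0, 4.0, 6.0])
absorbance = np.array([0.120, 0.208, 0.297, 0.384])

# Fit a straight line; the magnitude of the x-intercept is the chromium
# concentration already present in the unspiked aliquot.
slope, intercept = np.polyfit(added_ng_per_ml, absorbance, 1)
c_sample = intercept / slope
print(f"Cr in sample aliquot: {c_sample:.2f} ng/mL")
```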

  4. 21 CFR 640.63 - Suitability of donor.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.63 Suitability of donor. (a) Method of determining. The suitability of a donor for Source Plasma shall be determined by a qualified... year. (2)(i) A donor who is to be immunized for the production of high-titer plasma shall be examined...

  5. 21 CFR 640.63 - Suitability of donor.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.63 Suitability of donor. (a) Method of determining. The suitability of a donor for Source Plasma shall be determined by a qualified... year. (2)(i) A donor who is to be immunized for the production of high-titer plasma shall be examined...

  6. 21 CFR 640.63 - Suitability of donor.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.63 Suitability of donor. (a) Method of determining. The suitability of a donor for Source Plasma shall be determined by a qualified... year. (2)(i) A donor who is to be immunized for the production of high-titer plasma shall be examined...

  7. 21 CFR 640.63 - Suitability of donor.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.63 Suitability of donor. (a) Method of determining. The suitability of a donor for Source Plasma shall be determined by a qualified... year. (2)(i) A donor who is to be immunized for the production of high-titer plasma shall be examined...

  8. 21 CFR 640.63 - Suitability of donor.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... ADDITIONAL STANDARDS FOR HUMAN BLOOD AND BLOOD PRODUCTS Source Plasma § 640.63 Suitability of donor. (a) Method of determining. The suitability of a donor for Source Plasma shall be determined by a qualified... year. (2)(i) A donor who is to be immunized for the production of high-titer plasma shall be examined...

  9. Development of a standard reference material containing 22 chlorinated hydrocarbon gases at 1 μmol/mol in nitrogen.

    PubMed

    Li, Ning; Du, Jian; Yang, Jing; Fan, Qiang; Tian, Wen

    2017-11-01

    A gas standard mixture containing 22 chlorinated hydrocarbons in high purity nitrogen was prepared using a two-step weighing method and a gasifying apparatus developed in-house. The concentration of each component was determined using a gas chromatograph with flame ionization detection (GC/FID). Linear regression analysis of every component was performed using the gas standard mixture with concentrations ranging from 1 to 10 μmol/mol, showing the complete gasification of the volatile organic compound (VOC) species in a selected cylinder. Repeatability was also examined to ensure the reliability of the preparation method. In addition, no significant difference was observed between domestically treated and imported treated cylinders, which helps reduce the cost of raw materials. Moreover, the results of stability testing at different pressures and long-term stability tests indicated that the gas standard at the 1 μmol/mol level with relative expanded uncertainties of 5% was stable above 2 MPa for a minimum of 12 months. Finally, a quantity comparison was conducted between the gas standard and a commercial gas standard from Scott Specialty Gases (now Air Liquide America Specialty Gases). The excellent agreement for every species indicated the good accuracy of our gas standard. Therefore, this reference material can be applied to routine observation of VOCs and for other purposes.

  10. Recommended Protocol for Round Robin Studies in Additive Manufacturing

    PubMed Central

    Moylan, Shawn; Brown, Christopher U.; Slotwinski, John

    2016-01-01

    One way to improve confidence and encourage proliferation of additive manufacturing (AM) technologies and parts is by generating more high quality data describing the performance of AM processes and parts. Many in the AM community see round robin studies as a way to generate large data sets while distributing the cost among the participants, thereby reducing the cost to individual users. The National Institute of Standards and Technology (NIST) has conducted and participated in several of these AM round robin studies. While the results of these studies are interesting and informative, many of the lessons learned in conducting these studies concern the logistics and methods of the study and unique issues presented by AM. Existing standards for conducting interlaboratory studies of measurement methods, along with NIST’s experience, form the basis for recommended protocols for conducting AM round robin studies. The role of round robin studies in AM qualification, some of the limitations of round robin studies, and the potential benefit of less formal collaborative experiments where multiple factors, AM machine being only one, are varied simultaneously are also discussed. PMID:27274602

  11. Simultaneous determination of lovastatin and niacin in tablet by first and third derivative spectrophotometry and H-point standard addition methods

    PubMed Central

    Kazemipour, M.; Ansari, M.; Ramezani, H.; Moradalizadeh, M.

    2012-01-01

    Most cardiovascular diseases need to be treated by more than a single drug, and the use of combination products diminishes noncompliance. Advicor®, a combination product of a vitamin and a fat-lowering agent, has no monograph in the official pharmacopeias for quality control purposes. In this study, first- and third-derivative signals for niacin (NA) and lovastatin (LV) quantitation were monitored at two pairs of wavelengths (261 and 273 nm; 245 and 249 nm) with the addition of standard solutions of NA or LV, respectively. The limits of detection were 0.03 and 0.32 mg/L for LV and NA, respectively. The limits of quantitation were 0.09 and 0.78 mg/L for LV and NA, respectively. RSD% for both interday and intraday precision was lower than 2.6 and 2.7% for LV and NA, respectively. Selectivity of the method was assessed for degradation products produced under stress conditions and for common excipients that may be present in pharmaceutical dosage forms. The recommended procedure was successfully applied to real samples. PMID:23181086
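
    As an illustration of the H-point calculation (not the authors' code; variable names and data are made up), the two standard-addition lines measured at the selected wavelength pair are fitted and intersected; the abscissa of the intersection, with its sign changed, estimates the analyte concentration, and its ordinate gives the interferent contribution:

    import numpy as np

    def h_point(c_added, signal_wl1, signal_wl2):
        # Fit straight lines S = intercept + slope * C_added at the two wavelengths.
        slope1, intercept1 = np.polyfit(c_added, signal_wl1, 1)
        slope2, intercept2 = np.polyfit(c_added, signal_wl2, 1)
        # Intersection (H-point): intercept1 + slope1*C = intercept2 + slope2*C.
        c_h = (intercept2 - intercept1) / (slope1 - slope2)
        a_h = intercept1 + slope1 * c_h
        # The analyte concentration in the sample is -c_h (the intersection lies
        # at a negative "added" concentration); a_h is the interferent signal.
        return -c_h, a_h

    # Synthetic check: ~2 concentration units of analyte, constant interferent signal 0.15.
    c_added = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    s_wl1 = 0.40 * (c_added + 2.0) + 0.15
    s_wl2 = 0.20 * (c_added + 2.0) + 0.15
    print(h_point(c_added, s_wl1, s_wl2))   # -> (2.0, 0.15)

    The two wavelengths are chosen so that the interferent contributes equally at both, which is what makes the intersection independent of the interferent.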

  12. Finger millet (Eleusine coracana) - an economically viable source for antihypercholesterolemic metabolites production by Monascus purpureus.

    PubMed

    Venkateswaran, V; Vijayalakshmi, G

    2010-08-01

    Rice, parboiled rice, finger millet, germinated finger millet, broken wheat, njavara (medicinal rice), sorghum and maize were used as substrates for solid-state fermentation of Monascus purpureus at 28°C for 7 days, using 2% seed medium as inoculum, for the production of its metabolites. The fungus exhibited good growth on all the substrates. The fermented substrates were dried at 45°C and analysed for the antihypercholesterolemic metabolites (statins) by a standardized HPLC method and for dietary sterol content by a spectrophotometric method, using reference standards of statins (pravastatin and lovastatin) and cholesterol, respectively. Germinated finger millet yielded a higher total statin production of 5.2 g/kg dry wt, with pravastatin and lovastatin contents of 4.9 and 0.37 g/kg dry wt respectively, than the other substrates, which ranged from 1.04 to 4.41 g/kg. In addition to statins, Monascus-fermented germinated finger millet yielded a dietary sterol content of 0.053 g/kg dry wt, which is 7.6-fold higher than the control. The value addition of finger millet by germination and fermentation with Monascus purpureus provides scope for the development of a functional food.

  13. Recommended Protocol for Round Robin Studies in Additive Manufacturing.

    PubMed

    Moylan, Shawn; Brown, Christopher U; Slotwinski, John

    2016-03-01

    One way to improve confidence and encourage proliferation of additive manufacturing (AM) technologies and parts is by generating more high quality data describing the performance of AM processes and parts. Many in the AM community see round robin studies as a way to generate large data sets while distributing the cost among the participants, thereby reducing the cost to individual users. The National Institute of Standards and Technology (NIST) has conducted and participated in several of these AM round robin studies. While the results of these studies are interesting and informative, many of the lessons learned in conducting these studies concern the logistics and methods of the study and unique issues presented by AM. Existing standards for conducting interlaboratory studies of measurement methods, along with NIST's experience, form the basis for recommended protocols for conducting AM round robin studies. The role of round robin studies in AM qualification, some of the limitations of round robin studies, and the potential benefit of less formal collaborative experiments where multiple factors, AM machine being only one, are varied simultaneously are also discussed.

  14. A general method for bead-enhanced quantitation by flow cytometry

    PubMed Central

    Montes, Martin; Jaensson, Elin A.; Orozco, Aaron F.; Lewis, Dorothy E.; Corry, David B.

    2009-01-01

    Flow cytometry provides accurate relative cellular quantitation (percent abundance) of cells from diverse samples, but technical limitations of most flow cytometers preclude accurate absolute quantitation. Several quantitation standards are now commercially available which, when added to samples, permit absolute quantitation of CD4+ T cells. However, these reagents are limited by their cost, technical complexity, requirement for additional software and/or limited applicability. Moreover, few studies have validated the use of such reagents in complex biological samples, especially for quantitation of non-T cells. Here we show that addition to samples of known quantities of polystyrene fluorescence standardization beads permits accurate quantitation of CD4+ T cells from complex cell samples. This procedure, here termed single bead-enhanced cytofluorimetry (SBEC), was equally capable of enumerating eosinophils as well as subcellular fragments of apoptotic cells, moieties with very different optical and fluorescent characteristics. Relative to other proprietary products, SBEC is simple, inexpensive and requires no special software, suggesting that the method is suitable for the routine quantitation of most cells and other particles by flow cytometry. PMID:17067632
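
    The single-bead counting principle can be summarized by the generic relation below (symbols are illustrative, not taken from the paper): with a known number of beads N_beads spiked into the sample,

    \[
      N_{\mathrm{cells}} = \frac{E_{\mathrm{cells}}}{E_{\mathrm{beads}}}\, N_{\mathrm{beads}},
      \qquad
      c_{\mathrm{cells}} = \frac{N_{\mathrm{cells}}}{V_{\mathrm{sample}}}
    \]

    where E denotes gated events acquired in the same run and V_sample the original sample volume.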

  15. Group-type hydrocarbon standards for high-performance liquid chromatographic analysis of middistillate fuels

    NASA Technical Reports Server (NTRS)

    Otterson, D. A.; Seng, G. T.

    1984-01-01

    A new high-performance liquid chromatographic (HPLC) method for group-type analysis of middistillate fuels is described. It uses a refractive index detector and standards that are prepared by reacting a portion of the fuel sample with sulfuric acid. A complete analysis of a middistillate fuel for saturates and aromatics (including the preparation of the standard) requires about 15 min if standards for several fuels are prepared simultaneously. From model fuel studies, the method was found to be accurate to within 0.4 vol% saturates or aromatics, and it provides a precision of ±0.4 vol%. Olefin determinations require an additional 15 min of analysis time. However, this determination is needed only for those fuels displaying a significant olefin response at 200 nm (obtained routinely during the saturates/aromatics analysis procedure). The olefin determination uses the responses of the olefins and the corresponding saturates, as well as the average value of their refractive index sensitivity ratios (1.1). Studies indicated that, although the relative error in the olefin result could reach 10 percent when using this average sensitivity ratio, it was 5 percent for the fuels used in this study. Olefin concentrations as low as 0.1 vol% have been determined using this method.

  16. General Anesthetics Have Additive Actions on Three Ligand-Gated Ion Channels

    PubMed Central

    Jenkins, Andrew; Lobo, Ingrid A.; Gong, Diane; Trudell, James R.; Solt, Ken; Harris, R. Adron; Eger, Edmond I

    2008-01-01

    Background: The purpose of this study was to determine whether pairs of compounds, including general anesthetics, could simultaneously modulate receptor function in a synergistic manner, thus demonstrating the existence of multiple intra-protein anesthetic binding sites. Methods: Using standard electrophysiologic methods, we measured the effects of at least one combination of benzene, isoflurane, halothane, chloroform, flunitrazepam, zinc and pentobarbital on at least one of the following ligand-gated ion channels: N-methyl-D-aspartate receptors (NMDARs), glycine receptors (GlyRs) and γ-aminobutyric acid type A receptors (GABAARs). Results: All drug-drug-receptor combinations were found to exhibit additive, not synergistic, modulation. Isoflurane with benzene additively depressed NMDAR function. Isoflurane with halothane additively enhanced GlyR function, as did isoflurane with zinc. Isoflurane with halothane additively enhanced GABAAR function, as did all of the following: halothane with chloroform, pentobarbital with isoflurane, and flunitrazepam with isoflurane. Conclusions: The simultaneous allosteric modulation of ligand-gated ion channels by general anesthetics is entirely additive. Where pairs of general anesthetic drugs interact synergistically to produce general anesthesia, they must do so on systems more complex than a single receptor. PMID:18633027

  17. A Rapid Segmentation-Insensitive "Digital Biopsy" Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer.

    PubMed

    Echegaray, Sebastian; Nair, Viswam; Kadoch, Michael; Leung, Ann; Rubin, Daniel; Gevaert, Olivier; Napel, Sandy

    2016-12-01

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.

  18. An RBF-FD closest point method for solving PDEs on surfaces

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ling, L.; Ruuth, S. J.

    2018-10-01

    Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.
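
    As a rough sketch of the RBF-FD building block (illustrative only: a Gaussian basis without polynomial augmentation and an arbitrary shape parameter, which differs from the paper's setup), the weights that approximate the Laplacian at a stencil center are obtained by solving one small linear system per node:

    import numpy as np

    def gaussian_rbf(r, eps):
        return np.exp(-(eps * r) ** 2)

    def laplacian_gaussian_rbf(r, eps, dim):
        # Laplacian of exp(-(eps*r)^2) in `dim` dimensions.
        return (4 * eps**4 * r**2 - 2 * dim * eps**2) * np.exp(-(eps * r) ** 2)

    def rbf_fd_laplacian_weights(nodes, center, eps=1.0):
        # Weights w such that sum_j w_j * u(x_j) approximates Laplacian(u) at `center`.
        n, dim = nodes.shape
        dists = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
        A = gaussian_rbf(dists, eps)                       # interpolation matrix
        b = laplacian_gaussian_rbf(np.linalg.norm(nodes - center, axis=-1), eps, dim)
        return np.linalg.solve(A, b)

    # 5-point stencil in 2-D; for u = x^2 + y^2 the true Laplacian is 4.
    pts = np.array([[0.0, 0.0], [0.1, 0.0], [-0.1, 0.0], [0.0, 0.1], [0.0, -0.1]])
    w = rbf_fd_laplacian_weights(pts, center=np.array([0.0, 0.0]))
    print(w @ (pts[:, 0]**2 + pts[:, 1]**2))   # close to 4

    In the closest point framework these weights act on the closest point extension of the surface data, so that differentiation on the embedding grid reproduces the corresponding surface operator.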

  19. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized with several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it provides this reliability by ensuring that the process indexes reach the standard with at least the acceptable probability threshold. In addition, there is no abrupt change in probability at the edge of the design space calculated by the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
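
    The probability-based calculation can be pictured with the minimal Monte Carlo sketch below (the process model, error level, acceptance criterion and parameter names are placeholders, not the values fitted in the study):

    import numpy as np

    rng = np.random.default_rng(0)

    def predicted_index(x1, x2):
        # Placeholder process model relating operating parameters to a quality index.
        return 0.5 + 0.3 * x1 + 0.2 * x2 - 0.1 * x1 * x2

    def probability_design_space(step=0.02, n_sim=10_000, sigma=0.05, threshold=0.90):
        # Keep the grid points whose probability of meeting the standard >= threshold.
        grid = np.arange(0.0, 1.0 + step, step)
        accepted = []
        for x1 in grid:
            for x2 in grid:
                y = predicted_index(x1, x2) + rng.normal(0.0, sigma, n_sim)
                prob = np.mean(y >= 0.80)          # placeholder standard: index >= 0.80
                if prob >= threshold:
                    accepted.append((x1, x2, prob))
        return accepted

    The overlapping method would instead intersect the regions where each predicted index meets its specification, without attaching a probability to the edge of the region.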

  20. Methodological Issues in Antifungal Susceptibility Testing of Malassezia pachydermatis

    PubMed Central

    Peano, Andrea; Pasquetti, Mario; Tizzani, Paolo; Chiavassa, Elisa; Guillot, Jacques; Johnson, Elizabeth

    2017-01-01

    Reference methods for antifungal susceptibility testing of yeasts have been developed by the Clinical and Laboratory Standards Institute (CLSI) and the European Committee on Antimicrobial Susceptibility Testing (EUCAST). These methods are intended to test the main pathogenic yeasts that cause invasive infections, namely Candida spp. and Cryptococcus neoformans, while testing other yeast species introduces several additional standardization problems not addressed by these reference procedures. As a consequence, a number of procedures have been employed in the literature to test the antifungal susceptibility of Malassezia pachydermatis, which has led to conflicting results. The aim of the present study is to review the procedures and the technical parameters (growth media, inoculum preparation, temperature and length of incubation, method of reading) employed for susceptibility testing of M. pachydermatis and, when possible, to propose recommendations for or against their use. Such information may be useful for the future development of a reference assay. PMID:29371554

  1. Hybrid Differential Dynamic Programming with Stochastic Search

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob A.

    2016-01-01

    Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, namely with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software [1]. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP) [2, 3], is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution nearby an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, by augmenting the HDDP algorithm for a wider search of the solution space.
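
    A minimal sketch of the monotonic basin hopping loop is shown below; a generic local optimizer (scipy's minimize) and a toy multimodal objective stand in for the HDDP inner solve and the trajectory problem, and the perturbation scale is arbitrary:

    import numpy as np
    from scipy.optimize import minimize

    def monotonic_basin_hopping(objective, x0, n_hops=50, hop_scale=0.5, seed=0):
        # Perturb the incumbent, re-run the local optimizer, and keep the new
        # solution only if it improves the objective (monotonic acceptance).
        rng = np.random.default_rng(seed)
        best = minimize(objective, x0)
        for _ in range(n_hops):
            x_try = best.x + rng.normal(0.0, hop_scale, size=best.x.shape)
            candidate = minimize(objective, x_try)
            if candidate.fun < best.fun:
                best = candidate
        return best

    # Toy multimodal objective (Rastrigin), not a trajectory model.
    rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    result = monotonic_basin_hopping(rastrigin, x0=np.array([3.2, -2.7]))
    print(result.x, result.fun)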

  2. Cryptanalysis and Improvement of "A Secure Password Authentication Mechanism for Seamless Handover in Proxy Mobile IPv6 Networks".

    PubMed

    Alizadeh, Mojtaba; Zamani, Mazdak; Baharun, Sabariah; Abdul Manaf, Azizah; Sakurai, Kouichi; Anada, Hiroaki; Anada, Hiroki; Keshavarz, Hassan; Ashraf Chaudhry, Shehzad; Khurram Khan, Muhammad

    2015-01-01

    Proxy Mobile IPv6 is a network-based localized mobility management protocol that supports mobility without mobile nodes' participation in mobility signaling. The details of the user authentication procedure are not specified in this standard; hence, many authentication schemes have been proposed for it. In 2013, Chuang et al. proposed an authentication method for PMIPv6, called SPAM. Although Chuang et al.'s scheme protects the network against some security attacks, it is still vulnerable to impersonation and password guessing attacks. In addition, we discuss other security drawbacks of Chuang et al.'s scheme, such as the lack of a revocation procedure in case of a lost or stolen device, and anonymity issues. We further propose an enhanced authentication method to mitigate the security issues of the SPAM method and evaluate our scheme using BAN logic.

  3. Cryptanalysis and Improvement of "A Secure Password Authentication Mechanism for Seamless Handover in Proxy Mobile IPv6 Networks"

    PubMed Central

    Alizadeh, Mojtaba; Zamani, Mazdak; Baharun, Sabariah; Abdul Manaf, Azizah; Sakurai, Kouichi; Anada, Hiroki; Keshavarz, Hassan; Ashraf Chaudhry, Shehzad; Khurram Khan, Muhammad

    2015-01-01

    Proxy Mobile IPv6 is a network-based localized mobility management protocol that supports mobility without mobile nodes' participation in mobility signaling. The details of the user authentication procedure are not specified in this standard; hence, many authentication schemes have been proposed for it. In 2013, Chuang et al. proposed an authentication method for PMIPv6, called SPAM. Although Chuang et al.'s scheme protects the network against some security attacks, it is still vulnerable to impersonation and password guessing attacks. In addition, we discuss other security drawbacks of Chuang et al.'s scheme, such as the lack of a revocation procedure in case of a lost or stolen device, and anonymity issues. We further propose an enhanced authentication method to mitigate the security issues of the SPAM method and evaluate our scheme using BAN logic. PMID:26580963

  4. Spectrophotometric determination of phenylephrine HCl and orphenadrine citrate in pure and in dosage forms.

    PubMed

    Shama, S A

    2002-11-07

    Simple and rapid spectrophotometric methods have been developed for the microdetermination of phenylephrine HCl (I) and orphenadrine citrate (II). The proposed methods are based on the formation of ion-pair complexes between the examined drugs and alizarine (Aliz), alizarine red S (ARS), alizarine yellow G (AYG) or quinalizarine (Qaliz), which can be measured at the optimum lambda(max). The optimization of the reaction conditions is investigated. Beer's law is obeyed in the concentration range 2-36 μg ml(-1), whereas the optimum concentration range adopted from Ringbom plots is 3.5-33 μg ml(-1). The molar absorptivity, Sandell sensitivity, and detection limit are also calculated. The correlation coefficient was ≥0.9988 (n=6) with a relative standard deviation of

  5. Organic carbonates: experiment and ab initio calculations for prediction of thermochemical properties.

    PubMed

    Verevkin, Sergey P; Emel'yanenko, Vladimir N; Kozlova, Svetlana A

    2008-10-23

    This work was undertaken in order to obtain data on the thermodynamic properties of organic carbonates and to revise the group-additivity values necessary for predicting their standard enthalpies of formation and enthalpies of vaporization. The standard molar enthalpies of formation of dibenzyl carbonate, tert-butyl phenyl carbonate, and diphenyl carbonate were measured using combustion calorimetry. Molar enthalpies of vaporization of these compounds were obtained from the temperature dependence of the vapor pressure measured by the transpiration method. The molar enthalpy of sublimation of diphenyl carbonate was measured in the same way. Ab initio calculations of the molar enthalpies of formation of the organic carbonates were performed using the G3MP2 method, and the results are in excellent agreement with the available experimental data. A group-contribution method was then developed to predict the enthalpies of formation and enthalpies of vaporization of organic carbonates.
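
    For reference, the group-contribution estimates being revised take the generic additive form (symbols generic, not the paper's notation):

    \[
      \Delta_f H^{\circ}_{298}(\mathrm{g}) \approx \sum_i n_i\, \Delta H_i ,
      \qquad
      \Delta_l^{g} H_m \approx \sum_i n_i\, h_i
    \]

    where n_i is the number of occurrences of group i in the molecule and \Delta H_i, h_i are its contributions to the gas-phase enthalpy of formation and to the enthalpy of vaporization; the experimental gas-phase values follow from \Delta_f H^{\circ}(\mathrm{g}) = \Delta_f H^{\circ}(\mathrm{cr,l}) + \Delta_{cr,l}^{g} H_m, which is how the combustion and transpiration measurements are combined.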

  6. Reducing death on the road: the effects of minimum safety standards, publicized crash tests, seat belts, and alcohol.

    PubMed Central

    Robertson, L S

    1996-01-01

    OBJECTIVES. Two phases of attempts to improve passenger car crash worthiness have occurred: minimum safety standards and publicized crash tests. This study evaluated these attempts, as well as changes in seat belt and alcohol use, in terms of their effect on occupant death and fatal crash rates. METHODS. Data on passenger car occupant fatalities and total involvement in fatal crashes, for 1975 through 1991, were obtained from the Fatal Accident Reporting System. Rates per mile were calculated through published sources on vehicle use by vehicle age. Regression estimates of effects of regulation, publicized crash tests, seat belt use and alcohol involvement were obtained. RESULTS. Substantial reductions in fatalities occurred in the vehicle model years from the late 1960s through most of the 1970s, when federal standards were applied. Some additional increments in reduced death rates, attributable to additional improved vehicle crashworthiness, occurred during the period of publicized crash tests. Increased seat belt use and reduced alcohol use also contributed significantly to reduced deaths. CONCLUSIONS. Minimum safety standards, crashworthiness improvements, seat belt use laws, and reduced alcohol use each contributed to a large reduction in passenger car occupant deaths. PMID:8561238

  7. The effects of deterioration and technological levels on pollutant emission factors for gasoline light-duty trucks.

    PubMed

    Zhang, Qingyu; Fan, Juwang; Yang, Weidong; Chen, Bixin; Zhang, Lijuan; Liu, Jiaoyu; Wang, Jingling; Zhou, Chunyao; Chen, Xuan

    2017-07-01

    Vehicle deterioration and technological change influence emission factors (EFs). In this study, the impacts of vehicle deterioration and emission standards on the EFs of regulated pollutants (carbon monoxide [CO], hydrocarbon [HC], and nitrogen oxides [NOx]) for gasoline light-duty trucks (LDTs) were investigated from inspection and maintenance (I/M) data obtained using a chassis dynamometer method. Pollutant EFs for LDTs varied markedly with accumulated mileage and emission standard, and the EF trends were associated with accumulated mileage. In addition, the study found that in most cases the median EFs of CO, HC, and NOx are higher than the basic EFs in the International Vehicle Emissions (IVE) model; the present study therefore provides correction factors for the IVE model corresponding to the relevant emission standards and mileages. Currently, vehicle emissions are major contributors to air pollution in cities, especially in developing countries. Emission factors play a key role in building emission inventories and estimating emissions. Deterioration, represented by vehicle age and accumulated mileage, and changes in emission standards markedly influence emission factors. The results thus provide correction factors for application of the IVE model at the regional level.

  8. Evaluating the Good Ontology Design Guideline (GoodOD) with the ontology quality requirements and evaluation method and metrics (OQuaRE).

    PubMed

    Duque-Ramos, Astrid; Boeker, Martin; Jansen, Ludger; Schulz, Stefan; Iniesta, Miguela; Fernández-Breis, Jesualdo Tomás

    2014-01-01

    To (1) evaluate the GoodOD guideline for ontology development by applying the OQuaRE evaluation method and metrics to the ontology artefacts that were produced by students in a randomized controlled trial, and (2) informally compare the OQuaRE evaluation method with gold-standard-based and competency-question-based evaluation methods, respectively. In recent decades many methods for ontology construction and ontology evaluation have been proposed; however, none of them has become a standard, and there is no empirical evidence from comparative evaluations of such methods. This paper brings together GoodOD and OQuaRE. GoodOD is a guideline for developing robust ontologies. It was previously evaluated in a randomized controlled trial employing metrics based on gold standard ontologies and competency questions as outcome parameters. OQuaRE is a method for ontology quality evaluation which adapts the SQuaRE standard for software product quality to ontologies and has been successfully used for evaluating the quality of ontologies. In this paper, we evaluate the effect of training in ontology construction based on the GoodOD guideline within the OQuaRE quality evaluation framework and compare the results with those obtained in the previous studies based on the same data. Our results show a significant effect of the GoodOD training on the developed ontologies by topic: (a) a highly significant effect was detected in three topics from the analysis of the ontologies of untrained and trained students; (b) both positive and negative training effects with respect to the gold standard were found for five topics. The GoodOD guideline had a significant effect on the quality of the ontologies developed. Our results show that GoodOD ontologies can be effectively evaluated using OQuaRE and that OQuaRE is able to provide additional useful information about the quality of the GoodOD ontologies.

  9. An isotope-dilution standard GC/MS/MS method for steroid hormones in water

    USGS Publications Warehouse

    Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.

    2013-01-01

    An isotope-dilution quantification method was developed for 20 natural and synthetic steroid hormones and additional compounds in filtered and unfiltered water. Deuterium- or carbon-13-labeled isotope-dilution standards (IDSs) are added to the water sample, which is passed through an octadecylsilyl solid-phase extraction (SPE) disk. Following extract cleanup using Florisil SPE, method compounds are converted to trimethylsilyl derivatives and analyzed by gas chromatography with tandem mass spectrometry. Validation matrices included reagent water, wastewater-affected surface water, and primary (no biological treatment) and secondary wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100%, with an overall relative standard deviation of 28%. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples analyzed in 2009-2010 ranged from 84 to 104%, with relative standard deviations of 6-36%. Detection levels estimated using ASTM International's D6091-07 procedure range from 0.4 to 4 ng/L for 17 analytes. Higher censoring levels of 100 ng/L for bisphenol A and 200 ng/L for cholesterol and 3-beta-coprostanol are used to prevent bias and false positives associated with the presence of these analytes in blanks. Absolute method recoveries of the IDSs provide sample-specific performance information and guide data reporting. Careful selection of labeled compounds for use as IDSs is important because both inexact IDS-analyte matches and deuterium label loss affect an IDS's ability to emulate analyte performance. Six IDS compounds initially tested and applied in this method exhibited deuterium loss and are not used in the final method.
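
    The underlying isotope-dilution quantification can be written generically as (a simplified relation, not the paper's exact calibration model):

    \[
      C_{\mathrm{analyte}} = \frac{A_{\mathrm{analyte}}}{A_{\mathrm{IDS}}} \cdot \frac{C_{\mathrm{IDS}}}{RRF}
    \]

    where A are the measured responses, C_IDS is the known spiked concentration of the labeled standard, and RRF is the analyte-to-IDS relative response factor determined from calibration standards; an IDS that co-elutes and ionizes like its analyte makes this ratio largely insensitive to recovery losses and matrix effects.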

  10. A quantitative liquid chromatography tandem mass spectrometry method for metabolomic analysis of Plasmodium falciparum lipid related metabolites.

    PubMed

    Vo Duy, S; Besteiro, S; Berry, L; Perigaud, C; Bressolle, F; Vial, H J; Lefebvre-Tournier, I

    2012-08-20

    Plasmodium falciparum is the causative agent of malaria, a deadly infectious disease for which treatments are scarce and drug-resistant parasites are increasingly found. A comprehensive method for identifying and quantifying metabolites of this intracellular parasite could expand the arsenal of tools to understand its biology and be used to develop new treatments against the disease. Here, we present two methods based on liquid chromatography tandem mass spectrometry for reliable measurement of water-soluble metabolites involved in phospholipid biosynthesis, as well as several other metabolites that reflect the metabolic status of the parasite, including amino acids, carboxylic acids, energy-related carbohydrates, and nucleotides. A total of 35 compounds was quantified. In the first method, polar compounds were retained by hydrophilic interaction chromatography (amino column) and detected in negative mode using succinic acid-(13)C(4) and fluorovaline as internal standards. In the second method, separations were carried out using reversed-phase (C18) ion-pair liquid chromatography, with heptafluorobutyric acid as a volatile ion-pairing reagent in positive detection mode, using d(9)-choline and 4-aminobutanol as internal standards. Standard curves were constructed in P. falciparum-infected and uninfected red blood cells using the standard addition method (r(2) > 0.99). The intra- and inter-day accuracy and precision, as well as the extraction recovery of each compound, were determined. The lower limit of quantitation varied from 50 pmol to 100 fmol per 3×10(7) cells. These methods were validated and successfully applied to determine intracellular concentrations of metabolites from uninfected host RBCs and isolated Plasmodium parasites. Copyright © 2012 Elsevier B.V. All rights reserved.
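
    Since the calibration here relies on the standard addition method, the basic single-analyte calculation is sketched below for illustration (made-up numbers, not data from the paper): the response is regressed on the added concentration and the fitted line is extrapolated to zero response, so the endogenous concentration is the magnitude of the x-intercept.

    import numpy as np

    def standard_addition_concentration(c_added, response):
        # Fit response = intercept + slope * c_added; the unknown endogenous
        # concentration is intercept / slope (magnitude of the x-intercept).
        slope, intercept = np.polyfit(c_added, response, 1)
        return intercept / slope

    c_added = np.array([0.0, 1.0, 2.0, 4.0])          # spiked amounts
    response = np.array([0.52, 0.75, 0.99, 1.46])     # measured signals
    print(standard_addition_concentration(c_added, response))   # ~2.2 concentration units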

  11. An experimental survey of additives for improving dehydrogenation properties of magnesium hydride

    NASA Astrophysics Data System (ADS)

    Zhou, Chengshang; Fang, Zhigang Zak; Sun, Pei

    2015-03-01

    The use of a wide range of additives has been known as an important method for improving the hydrogen storage properties of MgH2. However, there is a lack of a standard methodology that can be used to select or compare the effectiveness of different additives. A systematic experimental survey was carried out in this study to compare a wide range of additives, including transition metals, transition metal oxides, hydrides, intermetallic compounds, and carbon materials, with respect to their effects on the dehydrogenation properties of MgH2. MgH2 samples with various additives were prepared using a high-energy, high-pressure planetary ball milling method and characterized using thermogravimetric analysis (TGA) techniques. The results showed that additives such as Ti- and V-based metals, hydrides, and certain intermetallic compounds have strong catalytic effects. Additives such as Al, In, Sn, and Si showed minor effects on the kinetics of the dehydrogenation of MgH2, while exhibiting moderate thermodynamic destabilizing effects. In combination, MgH2 with both kinetic and thermodynamic additives, such as the MgH2-In-TiMn2 system, exhibited a drastically decreased dehydrogenation temperature.

  12. Conservative bin-to-bin fractional collisions

    NASA Astrophysics Data System (ADS)

    Martin, Robert

    2016-11-01

    Particle methods such as direct simulation Monte Carlo (DSMC) and particle-in-cell (PIC) are commonly used to model rarefied kinetic flows for engineering applications because of their ability to efficiently capture non-equilibrium behavior. The primary drawback of these methods is their poor convergence properties, which stem from their stochastic nature; they typically rely heavily on high degrees of non-equilibrium and on time averaging to compensate for poor signal-to-noise ratios. In standard implementations, each computational particle represents many physical particles, which further exacerbates statistical noise for flows with large species density variation, such as those encountered in flow expansions and chemical reactions. The stochastic weighted particle method (SWPM) introduced by Rjasanow and Wagner overcomes this difficulty by allowing the ratio of real to computational particles to vary on a per-particle basis throughout the flow. The DSMC procedure must also be slightly modified to properly sample the Boltzmann collision integral, accounting for the variable particle weights and avoiding the creation of additional particles with negative weight. In this work, the SWPM, with the modifications necessary to incorporate the variable hard sphere (VHS) collision cross-section model commonly used in engineering applications, is first incorporated into an existing engineering code, the Thermophysics Universal Research Framework. The results and computational efficiency are compared, for a few simple test cases, against a standard validated implementation of the DSMC method, with the adapted SWPM/VHS collisions using an octree-based conservative phase space reconstruction. The SWPM is then further extended to combine the collision and phase space reconstruction into a single step, which avoids the need to create additional computational particles only to destroy them again during the particle merge. This is particularly helpful when oversampling the collision integral compared to the standard DSMC method. It is found, however, that the more frequent phase space reconstructions can cause added numerical thermalization at low particle-per-cell counts due to the coarseness of the octree used. The methods are nevertheless expected to be of much greater utility for transient expansion flows and chemical reactions in the future.

  13. Review of MRI-based measurements of pulse wave velocity: a biomarker of arterial stiffness

    PubMed Central

    Wentland, Andrew L.; Grist, Thomas M.

    2014-01-01

    Atherosclerosis is the leading cause of cardiovascular disease (CVD) in the Western world. In the early development of atherosclerosis, vessel walls remodel outwardly such that the vessel luminal diameter is minimally affected by early plaque development. Only in the late stages of the disease does the vessel lumen begin to narrow, leading to stenoses. As a result, angiographic techniques are not useful for diagnosing early atherosclerosis. Given the absence of stenoses in the early stages of atherosclerosis, CVD remains subclinical for decades. Thus, methods of diagnosing atherosclerosis early in the disease process are needed so that affected patients can receive the necessary interventions to prevent further disease progression. Pulse wave velocity (PWV) is a biomarker directly related to vessel stiffness that has the potential to provide information on early atherosclerotic disease burden. A number of clinical methods are available for evaluating global PWV, including applanation tonometry and ultrasound. However, these methods only provide a gross global measurement of PWV, from the carotid to femoral arteries, and may mask regional stiffness within the vasculature. Additionally, the distance measurements used in the PWV calculation with these methods can be highly inaccurate. Faster and more robust magnetic resonance imaging (MRI) sequences have facilitated increased interest in MRI-based PWV measurements. This review provides an overview of the state-of-the-art in MRI-based PWV measurements. In addition, both gold standard and clinical standard methods of computing PWV are discussed. PMID:24834415

  14. Rapid Quantitation of Ascorbic and Folic Acids in SRM 3280 Multivitamin/Multielement Tablets using Flow-Injection Tandem Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Deepak; Kertesz, Vilmos; Van Berkel, Gary J

    RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 μL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least 2 spiked and 1 non-spiked sample extracts were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection for FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied for the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.

  15. Determination of tributyltin in whole water matrices under the European Water Framework Directive.

    PubMed

    Richter, Janine; Fettig, Ina; Philipp, Rosemarie; Jakubowski, Norbert; Panne, Ulrich; Fisicaro, Paola; Alasonati, Enrica

    2016-08-12

    Monitoring of water quality is important to control water pollution. Contamination of the aquatic system has a large effect on human health and the environment. Under the European Water Framework Directive (WFD) 2000/60/EC and the related directive on environmental quality standards (EQS) in the field of water policy, 2008/105/EC, the need for sensitive reference methods was highlighted. Since tributyltin (TBT) is one of the WFD-listed priority substances, a method was developed that is capable of identifying and quantifying the pollutant at the required low WFD EQS of 0.2 ng L(-1) in whole water bodies, i.e. in non-filtered water samples containing dissolved organic carbon and suspended particulate matter. Special attention was therefore paid to the interaction of TBT with the suspended particulate matter and humic substances to obtain a complete representation of the pollution in surface waters. Different water samples were investigated, varying the content of dissolved organic and suspended matter. Quantification was performed using species-specific isotope dilution (SSID) and gas chromatography with inductively coupled plasma mass spectrometry (GC-ICP-MS). Different sample treatment strategies were evaluated and compared. The process of internal standard addition was investigated and optimized, since the equilibration of the internal standard with the matrix is of primary importance for accurate SSID. Samples spiked at the EQS level were analyzed with recoveries between 95 and 105%. Additionally, real surface water samples were investigated, and the TBT concentration for the whole water body was determined and compared with a conventional routine analysis method. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Expanding the Design Space: Forging the Transition from 3D Printing to Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Amend, Matthew

    The synergy of Additive Manufacturing and Computational Geometry has the potential to radically expand the "design space" of solutions available to designers. Additive Manufacturing (AM) is capable of fabricating objects that are highly complex both in geometry and material properties. However, the introduction of any new technology can have a disruptive effect on established design practices and organizations. Before "Design for Additive Manufacturing" (DFAM) is a commonplace means of producing objects employed in "real world" products, appropriate design knowledge must be sufficiently integrated within industry. First, materials suited to additive manufacturing methods must be developed to satisfy existing industry standards and specifications, or new standards must be developed. Second, a new class of design representation (CAD) tools will need to be developed. Third, designers and design organizations will need to develop strategies for employing such tools. This thesis describes three DFAM exercises intended to demonstrate the potential for innovative design when using advanced additive materials, tools, and printers. These design exercises included 1) a light-weight composite layup mold developed with topology optimization, 2) a low-pressure fluid duct enhanced with an external lattice structure, and 3) an airline seat tray designed using a non-uniform lattice structure optimized with topology optimization.

  17. Wavelet images and Chou's pseudo amino acid composition for protein classification.

    PubMed

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2012-08-01

    The last decade has seen an explosion in the collection of protein data. To actualize the potential offered by this wealth of data, it is important to develop machine systems capable of classifying and extracting features from proteins. Reliable machine systems for protein classification offer many benefits, including the promise of finding novel drugs and vaccines. In developing our system, we analyze and compare several feature extraction methods used in protein classification that are based on the calculation of texture descriptors starting from a wavelet representation of the protein. We then feed these texture-based representations of the protein into an AdaBoost ensemble of neural networks or a support vector machine classifier. In addition, we perform experiments that combine our feature extraction methods with a standard method based on Chou's pseudo amino acid composition. Using several datasets, we show that our best approach outperforms standard methods. The Matlab code of the proposed protein descriptors is available at http://bias.csr.unibo.it/nanni/wave.rar

  18. Ultrasonic slurry sampling electrothermal vaporization inductively coupled plasma mass spectrometry for the determination of Cr, Fe, Cu, Zn and Se in cereals

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Yi; Jiang, Shiuh-Jen; Sahayam, A. C.

    2014-11-01

    Ultrasonic slurry sampling electrothermal vaporization inductively coupled plasma mass spectrometry (USS-ETV-ICP-MS) has been applied to determine Cr, Fe, Cu, Zn and Se in several cereal samples. Thioacetamide was used as the modifier to enhance the ion signals. The background ions at the masses of interest were significantly reduced in intensity by using 1.0 mL min(-1) methane (CH4) as the reaction cell gas in the dynamic reaction cell (DRC). Since the sensitivities of Cr, Fe, Cu, Zn and Se in different matrices were quite different, standard addition and isotope dilution methods were used for their determination in these cereal samples. The method detection limits estimated from the standard addition curves were about 1, 10, 4, 12 and 2 ng g(-1) for Cr, Fe, Cu, Zn and Se, respectively, in the original cereal samples. The procedure has been applied to the determination of Cr, Fe, Cu, Zn and Se, whose concentrations (except for Cr and Se) are at the μg g(-1) level, in two standard reference materials (SRMs) of the National Institute of Standards and Technology (NIST), NIST SRM 1568a Rice Flour and NIST SRM 1567a Wheat Flour, and in two cereal samples purchased from a local market. The analysis results for the reference materials agreed with the certified values at the 95% confidence level according to Student's t-test. The results for the real-world cereal samples were also in good agreement with pneumatic nebulization DRC-ICP-MS results obtained on the sample solutions.
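
    One common convention for detection limits derived from such calibration or standard addition curves (not necessarily the exact estimator used here) is the 3-sigma criterion:

    \[
      \mathrm{LOD} \approx \frac{3\, s_{\mathrm{blank}}}{m}
    \]

    where s_blank is the standard deviation of replicate blank (or low-level) measurements and m is the slope of the standard addition curve for that element in the slurry matrix.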

  19. Simultaneous determination of water-soluble vitamins in beverages and dietary supplements by LC-MS/MS.

    PubMed

    Kakitani, Ayano; Inoue, Tomonori; Matsumoto, Keiko; Watanabe, Jun; Nagatomi, Yasushi; Mochizuki, Naoki

    2014-01-01

    An LC-MS/MS method was developed for the simultaneous determination of 15 water-soluble vitamins that are widely used as additives in beverages and dietary supplements. This combined method involves the following simple pre-treatment procedures: dietary supplement samples were prepared by centrifugation and filtration after an extraction step, whereas beverage samples were diluted prior to injection. Chromatographic analysis in this method utilised a multi-mode ODS column, which provided reverse-phase, anion- and cation-exchange capacities, and therefore improved the retention of highly polar analytes such as water-soluble vitamins. Additionally, the multi-mode ODS column did not require adding ion pair reagents to the mobile phase. We optimised the chromatographic separation of 15 water-soluble vitamins by adjusting the mobile phase pH and the organic solvent. We also conducted an analysis of a NIST Standard Reference Material (SRM 3280 Multi-vitamin/Multi-element tablets) using this method to verify its accuracy. In addition, the method was applied to identify the vitamins in commercial beverages and dietary supplements. By comparing results with the label values and results obtained by official methods, it was concluded that the method could be used for quality control and to compose nutrition labels for vitamin-enriched products.

  20. Field Application of a Rapid Spectrophotometric Method for Determination of Persulfate in Soil

    PubMed Central

    Cunningham, Colin J.; Pitschi, Vanessa; Anderson, Peter; Barry, D. A.; Patterson, Colin; Peshkur, Tanya A.

    2013-01-01

    Remediation of hydrocarbon contaminated soils can be performed both in situ and ex situ using chemical oxidants such as sodium persulfate. Standard methods for quantifying persulfate require either centrifugation or prolonged settling times. An optimized soil extraction procedure was developed for persulfate involving simple water extraction using a modified disposable syringe. This allows considerable saving of time and removes the need for centrifugation. The extraction time was reduced to only 5 min compared to 15 min for the standard approach. A comparison of the two approaches demonstrated that each provides comparable results. Comparisons were made using high (93 g kg−1 soil) and low (9.3 g kg−1 soil) additions of sodium persulfate to a petroleum hydrocarbon-contaminated soil, as well as sand spiked with diesel. Recoveries of 95±1% and 96±10% were observed with the higher application rate in the contaminated soil and spiked sand, respectively. Corresponding recoveries of 86±5% and 117±19% were measured for the lower application rate. Results were obtained in only 25 min and the method is well suited to batch analyses. In addition, it is suitable for application in a small field laboratory or even a mobile, vehicle-based system, as it requires minimal equipment and reagents. PMID:23776446

  1. Effect of Time-of-Flight Information on PET/MR Reconstruction Artifacts: Comparison of Free-breathing versus Breath-hold MR-based Attenuation Correction.

    PubMed

    Delso, Gaspar; Khalighi, Mohammed; Ter Voert, Edwin; Barbosa, Felipe; Sekine, Tetsuro; Hüllner, Martin; Veit-Haibach, Patrick

    2017-01-01

    Purpose: To evaluate the magnitude and anatomic extent of the artifacts introduced on positron emission tomographic (PET)/magnetic resonance (MR) images by respiratory state mismatch in the attenuation map. Materials and Methods: The method was tested on 14 patients referred for an oncologic examination who underwent PET/MR imaging. The acquisition included standard PET and MR series for each patient, and an additional attenuation correction series was acquired by using breath hold. PET data were reconstructed with and without time-of-flight (TOF) information, first by using the standard free-breathing attenuation map and then again by using the additional breath-hold map. Two-tailed paired t testing and linear regression with zero intercept were performed on TOF versus non-TOF and free-breathing versus breath-hold data for all detected lesions. Results: Fluorodeoxyglucose-avid lesions were found in eight of the 14 patients included in the study. The uptake differences (maximum standardized uptake values) between PET reconstructions with free-breathing versus breath-hold attenuation ranged, for non-TOF reconstructions, from -18% to 26%. The corresponding TOF reconstructions yielded differences from -15% to 18%. Conclusion: TOF information was shown to reduce the artifacts caused at PET/MR by respiratory mismatch between emission and attenuation data. © RSNA, 2016. Online supplemental material is available for this article.

  2. Screening and determination of polycyclic aromatic hydrocarbons in seafoods using QuEChERS-based extraction and high-performance liquid chromatography with fluorescence detection.

    PubMed

    Gratz, Samuel R; Ciolino, Laura A; Mohrhaus, Angela S; Gamble, Bryan M; Gracie, Jill M; Jackson, David S; Roetting, John P; McCauley, Heather A; Heitkemper, Douglas T; Fricke, Fred L; Krol, Walter J; Arsenault, Terri L; White, Jason C; Flottmeyer, Michele M; Johnson, Yoko S

    2011-01-01

    A rapid, sensitive, and accurate method for the screening and determination of polycyclic aromatic hydrocarbons (PAHs) in edible seafood is described. The method uses quick, easy, cheap, effective, rugged, and safe (QuEChERS)-based extraction and HPLC with fluorescence detection (FLD). The method was developed and validated in response to the massive Deepwater Horizon oil spill in the Gulf of Mexico. Rapid and highly sensitive PAH screening methods are critical tools needed for oil spill response; they help to assess when seafood is safe for harvesting and consumption. Sample preparation involves SPE of edible seafood portions with acetonitrile, followed by the addition of salts to induce water partitioning. After centrifugation, a portion of the acetonitrile layer is filtered prior to analysis via HPLC-FLD. The chromatographic method uses a polymeric C18 stationary phase designed for PAH analysis with gradient elution, and it resolves 15 U.S. Environmental Protection Agency priority parent PAHs in fewer than 20 min. The procedure was validated in three laboratories for the parent PAHs using spike recovery experiments at PAH fortification levels ranging from 25 to 10 000 microg/kg in oysters, shrimp, crab, and finfish, with recoveries ranging from 78 to 99%. Additional validation was conducted for a series of alkylated homologs of naphthalene, dibenzothiophene, and phenanthrene, with recoveries ranging from 87 to 128%. Method accuracy was further assessed based on analysis of National Institute of Standards and Technology Standard Reference Material 1974b. The method provides method detection limits in the sub to low ppb (microg/kg) range, and practical LOQs in the low ppb (microg/kg) range for most of the PAH compounds studied.

  3. Association between pregnancy complications and small-for-gestational-age birth weight defined by customized fetal growth standard versus a population-based standard.

    PubMed

    Odibo, Anthony O; Francis, Andre; Cahill, Alison G; Macones, George A; Crane, James P; Gardosi, Jason

    2011-03-01

    To derive coefficients for developing a customized growth chart for a Mid-Western US population, and to estimate the association between pregnancy outcomes and smallness for gestational age (SGA) defined by the customized growth chart compared with a population-based growth chart for the USA. A retrospective cohort study of an ultrasound database was conducted using 54,433 pregnancies meeting the inclusion criteria. Coefficients for customized centiles were derived using 42,277 pregnancies and compared with those obtained from other populations. Two adverse outcome indicators were defined (a stay of more than 7 days in the neonatal unit, and stillbirth [SB]), and the risk for each outcome was calculated for the groups of pregnancies defined as SGA by the population standard and as SGA by the customized standard, using 12,456 pregnancies for the validation sample. The growth potential, expressed as weight at 40 weeks, in this population was 3524 g (standard error: 402 g). In the validation population, 4055 cases of SGA were identified using both the population and customized standards. The cases additionally identified as SGA by the customized method had a significantly increased risk of each of the adverse outcome categories. The sensitivity and specificity of those identified as SGA by the customized method only, for detecting pregnancies at risk of SB, were 32.7% (95% confidence interval [CI] 27.0-38.8%) and 95.1% (95% CI 94.7-95.0%), versus 0.8% (95% CI 0.1-2.7%) and 98.0% (95% CI 97.8-98.2%) for those identified by only the population-based method, respectively. SGA defined by customized growth potential is able to identify substantially more pregnancies at risk for adverse outcome than the currently used national standard for fetal growth.

  4. A systematic study on the influencing parameters and improvement of quantitative analysis of multi-component with single marker method using notoginseng as research subject.

    PubMed

    Wang, Chao-Qun; Jia, Xiu-Hong; Zhu, Shu; Komatsu, Katsuko; Wang, Xuan; Cai, Shao-Qing

    2015-03-01

    A new quantitative analysis of multi-component with single marker (QAMS) method was established for 11 saponins (ginsenosides Rg1, Rb1, Rg2, Rh1, Rf, Re and Rd; notoginsenosides R1, R4, Fa and K) in notoginseng, and 6 of these saponins were individually used as internal reference substances to investigate the influences of chemical structure, the concentrations of the quantitative components, and the purities of the standard substances on the accuracy of the QAMS method. The results showed that the concentration of the analyte in the sample solution was the major influencing parameter, whereas the other parameters had minimal influence on the accuracy of the QAMS method. A new method for calculating the relative correction factors by linear regression was established (the linear regression method), which decreased the differences between the QAMS method and the external standard method from 1.20%±0.02%-23.29%±3.23% to 0.10%±0.09%-8.84%±2.85% in comparison with the previous calculation method. The differences between the external standard method and the QAMS method using relative correction factors calculated by the linear regression method were below 5% in the quantitative determination of Rg1, Re, R1, Rd and Fa in 24 notoginseng samples and of Rb1 in 21 notoginseng samples, and mostly below 10% in the quantitative determination of Rf, Rg2, R4 and N-K (the differences for these 4 constituents were larger because their contents were lower) in all 24 notoginseng samples. The results indicated that the contents assayed by the new QAMS method could be considered as accurate as those assayed by the external standard method. In addition, a method for determining the applicable concentration ranges of the quantitative components assayed by the QAMS method was established for the first time, which could ensure its high accuracy and could be applied to QAMS methods of other TCMs. The present study demonstrated the practicability of the QAMS method for the quantitative analysis of multiple components and the quality control of TCMs and TCM prescriptions. Copyright © 2014 Elsevier B.V. All rights reserved.
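
    The linear-regression variant of the relative correction factor can be sketched as follows (illustrative formulation and names, not the authors' exact equations): calibration lines are fitted for the marker and for each analyte, the relative correction factor is the ratio of their slopes, and unknowns are then quantified from their peak areas using only the marker's calibration scaled by that factor.

    import numpy as np

    def calibration_slope(concs, areas):
        # Least-squares slope of peak area versus concentration.
        slope, _intercept = np.polyfit(concs, areas, 1)
        return slope

    def relative_correction_factor(marker_slope, analyte_slope):
        # f_k = b_k / b_s: response of analyte k relative to the single marker s.
        return analyte_slope / marker_slope

    def quantify_with_single_marker(area_k, f_k, marker_slope):
        # Concentration of analyte k from its area, using the marker's slope and
        # the stored relative correction factor (established during validation).
        return area_k / (f_k * marker_slope)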

  5. A Method for Fabricating Additive Manufactured Lightweight Metallic Mirrors

    DTIC Science & Technology

    2015-06-14

    ... systems [3, 4]. The state of the art in this industry is the ULE™, Zerodur™, or beryllium isogrid mirrors. The isogrid design is a standard in... A Method for Fabricating Additive Manufactured Lightweight Metallic Mirrors. Michael Stern, Joseph Bari. This work is sponsored by the... methods for fabricating low-weight optics are in use today. We present a novel methodology for generating lightweight metallic mirrors fabricated by

  6. Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research

    PubMed Central

    SALE, JOANNA E. M.; LOHFELD, LYNNE H.; BRAZIL, KEVIN

    2015-01-01

    Health care research includes many studies that combine quantitative and qualitative methods. In this paper, we revisit the quantitative-qualitative debate and review the arguments for and against using mixed-methods. In addition, we discuss the implications stemming from our view, that the paradigms upon which the methods are based have a different view of reality and therefore a different view of the phenomenon under study. Because the two paradigms do not study the same phenomena, quantitative and qualitative methods cannot be combined for cross-validation or triangulation purposes. However, they can be combined for complementary purposes. Future standards for mixed-methods research should clearly reflect this recommendation. PMID:26523073

  7. Much ado about mice: Standard-setting in model organism research.

    PubMed

    Hardesty, Rebecca A

    2018-04-11

    Recently there has been a practice turn in the philosophy of science that has called for analyses to be grounded in the actual doings of everyday science. This paper is in furtherance of this call and it does so by employing participant-observation ethnographic methods as a tool for discovering epistemological features of scientific practice in a neuroscience lab. The case I present focuses on a group of neurobiologists researching the genetic underpinnings of cognition in Down syndrome (DS) and how they have developed a new mouse model which they argue should be regarded as the "gold standard" for all DS mouse research. Through use of ethnographic methods, interviews, and analyses of publications, I uncover how the lab constructed their new mouse model. Additionally, I describe how model organisms can serve as abstract standards for scientific work that impact the epistemic value of scientific claims, regulate practice, and constrain future work. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. A validated high-performance liquid chromatographic method for the determination of glibenclamide in human plasma and its application to pharmacokinetic studies.

    PubMed

    Niopas, Ioannis; Daftsios, Athanasios C

    2002-05-15

    Glibenclamide is a potent second-generation oral sulfonylurea antidiabetic agent widely used for the treatment of type II diabetes mellitus. A rapid, sensitive, precise, accurate and specific HPLC assay for the determination of glibenclamide in human plasma was developed and validated. After addition of flufenamic acid as the internal standard, the analytes were isolated from human plasma by liquid-liquid extraction. The method was linear in the 10-400 ng/ml concentration range (r > 0.999). Recovery for glibenclamide was greater than 91.5% and for the internal standard was 93.5%. Within-day and between-day precision, expressed as the relative standard deviation (RSD%), ranged from 1.4 to 5.9% and 5.8 to 6.6%, respectively. Assay accuracy was better than 93.4%. The assay was used to estimate the pharmacokinetics of glibenclamide after oral administration of a 5 mg tablet of glibenclamide to 18 healthy volunteers.

  9. Guidelines for the standardization of preanalytic variables for blood-based biomarker studies in Alzheimer’s disease research

    PubMed Central

    Gupta, Veer; Henriksen, Kim; Edwards, Melissa; Jeromin, Andreas; Lista, Simone; Bazenet, Chantal; Soares, Holly; Lovestone, Simon; Hampel, Harald; Montine, Thomas; Blennow, Kaj; Foroud, Tatiana; Carrillo, Maria; Graff-Radford, Neill; Laske, Christoph; Breteler, Monique; Shaw, Leslie; Trojanowski, John Q.; Schupf, Nicole; Rissman, Robert A.; Fagan, Anne M.; Oberoi, Pankaj; Umek, Robert; Weiner, Michael W.; Grammas, Paula; Posner, Holly; Martins, Ralph

    2015-01-01

    The lack of readily available biomarkers is a significant hindrance towards progressing to effective therapeutic and preventative strategies for Alzheimer’s disease (AD). Blood-based biomarkers have potential to overcome access and cost barriers and greatly facilitate advanced neuroimaging and cerebrospinal fluid biomarker approaches. Despite the fact that preanalytical processing is the largest source of variability in laboratory testing, there are no currently available standardized preanalytical guidelines. The current international working group provides the initial starting point for such guidelines for standardized operating procedures (SOPs). It is anticipated that these guidelines will be updated as additional research findings become available. The statement provides (1) a synopsis of selected preanalytical methods utilized in many international AD cohort studies, (2) initial draft guidelines/SOPs for preanalytical methods, and (3) a list of required methodological information and protocols to be made available for publications in the field in order to foster cross-validation across cohorts and laboratories. PMID:25282381

  10. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.
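
    A minimal sketch of the weighted combination reported above (90% angle regularity, 10% segment-magnitude regularity) is given below. The normalization by the mean and the edge/angle bookkeeping are assumptions made to keep the example self-contained, not the authors' exact formulation.

```python
"""Minimal sketch: combine two local regularities -- standard deviation of
angles (L-MSDA-like) and of segment magnitudes (L-MSDSM-like) -- into one
simplicity score with the weights from the abstract.  Lower = more regular."""
import itertools
import numpy as np

def simplicity_score(vertices, edges, w_angle=0.9, w_segment=0.1):
    """vertices: (N, 3) array; edges: list of (i, j) vertex index pairs."""
    pts = np.asarray(vertices, dtype=float)
    # segment magnitudes
    lengths = np.array([np.linalg.norm(pts[j] - pts[i]) for i, j in edges])
    # angles between every pair of edges sharing a vertex
    angles = []
    for (i1, j1), (i2, j2) in itertools.combinations(edges, 2):
        shared = set((i1, j1)) & set((i2, j2))
        if len(shared) != 1:
            continue
        s = shared.pop()
        v1 = pts[j1 if i1 == s else i1] - pts[s]
        v2 = pts[j2 if i2 == s else i2] - pts[s]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    angles = np.array(angles)
    # normalise each spread by its mean so the two terms are comparable
    sd_angles = angles.std() / angles.mean() if angles.size else 0.0
    sd_lengths = lengths.std() / lengths.mean() if lengths.size else 0.0
    return w_angle * sd_angles + w_segment * sd_lengths

# Example: a unit-cube wireframe is maximally regular (score ~ 0)
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)])
cube_edges = [(a, b) for a, b in itertools.combinations(range(8), 2)
              if np.sum(np.abs(cube[a] - cube[b])) == 1]
print(simplicity_score(cube, cube_edges))
```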

  11. Probabilistic Design Storm Method for Improved Flood Estimation in Ungauged Catchments

    NASA Astrophysics Data System (ADS)

    Berk, Mario; Špačková, Olga; Straub, Daniel

    2017-12-01

    The design storm approach with event-based rainfall-runoff models is a standard method for design flood estimation in ungauged catchments. The approach is conceptually simple and computationally inexpensive, but the underlying assumptions can lead to flawed design flood estimations. In particular, the implied average recurrence interval (ARI) neutrality between rainfall and runoff neglects uncertainty in other important parameters, leading to an underestimation of design floods. The selection of a single representative critical rainfall duration in the analysis leads to an additional underestimation of design floods. One way to overcome these nonconservative approximations is the use of a continuous rainfall-runoff model, which is associated with significant computational cost and requires rainfall input data that are often not readily available. As an alternative, we propose a novel Probabilistic Design Storm method that combines event-based flood modeling with basic probabilistic models and concepts from reliability analysis, in particular the First-Order Reliability Method (FORM). The proposed methodology overcomes the limitations of the standard design storm approach, while utilizing the same input information and models without excessive computational effort. Additionally, the Probabilistic Design Storm method allows deriving so-called design charts, which summarize representative design storm events (combinations of rainfall intensity and other relevant parameters) for floods with different return periods. These can be used to study the relationship between rainfall and runoff return periods. We demonstrate, investigate, and validate the method by means of an example catchment located in the Bavarian Pre-Alps, in combination with a simple hydrological model commonly used in practice.
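
    To make the reliability-analysis ingredient concrete, the sketch below runs a plain First-Order Reliability Method (FORM) iteration (Hasofer-Lind/Rackwitz-Fiessler form) on a hypothetical limit-state function. The limit state and its parameters are stand-ins and are not the paper's hydrological model.

```python
"""Minimal FORM sketch: HLRF iteration in independent standard-normal space.
The limit-state function g below is hypothetical."""
import numpy as np
from scipy.stats import norm

def form_beta(g, u0, tol=1e-6, max_iter=100, h=1e-6):
    """Return the reliability index beta for the limit state g(u) = 0."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        g_u = g(u)
        # forward-difference gradient of g
        grad = np.array([(g(u + h * e) - g_u) / h for e in np.eye(len(u))])
        u_new = (grad @ u - g_u) / (grad @ grad) * grad   # HLRF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)

# Hypothetical limit state: "flood peak exceeds capacity", with the demand
# driven by two standard-normal variables (e.g. rainfall intensity and an
# antecedent-moisture factor after transformation to u-space).
g = lambda u: 8.0 - np.exp(0.6 * u[0] + 0.3 * u[1])

beta = form_beta(g, u0=[0.1, 0.1])
print(f"beta = {beta:.3f},  Pf ~= {norm.cdf(-beta):.2e}")
```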

  12. Robust electroencephalogram phase estimation with applications in brain-computer interface systems.

    PubMed

    Seraj, Esmaeil; Sameni, Reza

    2017-03-01

    In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations (previously attributed to the brain response) are systematic side-effects of the methods used for EEG phase calculation, especially during low analytical amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytical form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and minor changes of the filter parameters and reduces the effect of spurious EEG phase jumps, which do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest-neighbors and random-forest classifiers, over a standard BCI dataset. The average performance was improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
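
    The ensemble idea can be sketched as follows. A Butterworth band-pass stands in for the paper's filters, and the perturbation magnitude, band, and filter order are assumptions chosen only to illustrate the structure: perturb the band edges slightly, take the analytic-signal phase for each perturbed filter, and average the ensemble on the unit circle.

```python
"""Sketch of ensemble-averaged EEG phase under small filter perturbations."""
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def robust_phase(x, fs, band=(8.0, 12.0), n_ensemble=50, jitter=0.1, seed=0):
    rng = np.random.default_rng(seed)
    phases = []
    for _ in range(n_ensemble):
        lo = band[0] + rng.normal(scale=jitter)   # small perturbation of band edges
        hi = band[1] + rng.normal(scale=jitter)
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, x))     # analytic representation
        phases.append(np.angle(analytic))
    phases = np.array(phases)                     # shape (n_ensemble, n_samples)
    # circular (unit-vector) averaging over the ensemble
    return np.angle(np.exp(1j * phases).mean(axis=0))

# Synthetic example: a 10 Hz rhythm buried in noise
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)
phi = robust_phase(x, fs)
print(phi[:5])
```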

  13. Assessing network scale-up estimates for groups most at risk of HIV/AIDS: evidence from a multiple-method study of heavy drug users in Curitiba, Brazil.

    PubMed

    Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I

    2011-11-15

    One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
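
    For reference, the basic network scale-up estimator mentioned above reduces to a single ratio; the sketch below uses made-up survey numbers and omits the adjustment factors of the generalized estimator (e.g., for transmission and barrier effects).

```python
"""Basic network scale-up (NSUM) estimator: N_hidden ~= N * sum(y_i) / sum(d_i)."""
import numpy as np

def nsum_estimate(known_in_hidden, degrees, total_population):
    """known_in_hidden: y_i, how many members of the hidden group respondent i knows;
    degrees: d_i, respondent i's personal network size."""
    return total_population * np.sum(known_in_hidden) / np.sum(degrees)

# Hypothetical survey responses
y = np.array([0, 1, 0, 2, 0, 0, 1, 3, 0, 1])
d = np.array([250, 310, 180, 400, 220, 150, 290, 500, 210, 330])
print(nsum_estimate(y, d, total_population=1_750_000))
```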

  14. Semiautomated segmentation of head and neck cancers in 18F-FDG PET scans: A just-enough-interaction approach.

    PubMed

    Beichel, Reinhard R; Van Tol, Markus; Ulrich, Ethan J; Bauer, Christian; Chang, Tangel; Plichta, Kristin A; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M

    2016-06-01

    The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus a lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, or treatment outcome prediction.
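
    The Dice overlap used above as the accuracy metric is a one-line computation on binary masks; the masks below are hypothetical.

```python
"""Dice coefficient between two binary segmentation masks."""
import numpy as np

def dice(mask_a, mask_b):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:7, 2:7] = True   # hypothetical lesion masks
b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
print(f"Dice = {dice(a, b):.3f}")
```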

  15. Tandem derivatization combined with salting-out assisted liquid-liquid microextraction for determination of biothiols in urine by gas chromatography-mass spectrometry.

    PubMed

    Tsai, Chia-Ju; Liao, Fang-Yi; Weng, Jing-Ru; Feng, Chia-Hsien

    2017-11-17

    Detection of polar organic compounds (POCs) using gas chromatography (GC) is not straightforward due to the high polarity, hydrophilicity, and low volatility of POCs. In this study, we report a tandem microwave-assisted derivatization method combined with salting-out assisted liquid-liquid microextraction (SALLME) to successively modify the polar groups of POCs in protic and aprotic solvents. Biothiols (cysteine and homocysteine) served as a proof of concept for this method because they possess three polar groups (thiol, amine, and carboxyl); the derivatizing reagent was 3,4,5-trifluorobenzyl bromide (Br-TFB) for alkylation. The solubility of the POCs in the protic or aprotic reaction medium affected the number of TFB molecules attached. Using the tandem derivatization with Br-TFB, the thiol and amine groups of biothiols were alkylated in the protic system, and the carboxylic groups of biothiols were alkylated in the aprotic system. The developed method was then successfully applied to measure biothiols in human urine. Because of the complex urine matrix and the lack of urine samples without endogenous biothiols, the standard addition method was utilized to avoid the matrix effect, check the recovery, and calculate the initial biothiol content in the urine. Regarding the linearity of the standard addition curves, the coefficient of determination was >0.996, and the linear regression showed satisfactory reproducibility with a relative standard deviation <3.9% for the slope and <8.8% for the intercept. The levels of cysteine and homocysteine in healthy human urine ranged from 28.8 to 111 μmol L(-1) and from 1.28 to 3.73 μmol L(-1), respectively. The proposed method effectively increased the sensitivity of GC-MS assays of water-soluble compounds in human urine. Copyright © 2017 Elsevier B.V. All rights reserved.
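
    The standard-addition arithmetic behind assays of this kind is shown below with hypothetical numbers: spike the extract with increasing amounts of analyte, fit a line to signal versus added concentration, and recover the endogenous level from the extrapolated x-intercept (intercept divided by slope).

```python
"""Standard-addition calculation (hypothetical data, not the paper's values)."""
import numpy as np

added  = np.array([0.0, 10.0, 20.0, 40.0, 80.0])   # added analyte, umol/L
signal = np.array([5.2, 6.9, 8.7, 12.3, 19.4])     # instrument response
slope, intercept = np.polyfit(added, signal, 1)

c0 = intercept / slope     # endogenous concentration in the measured extract
print(f"slope = {slope:.4f}, intercept = {intercept:.3f}")
print(f"estimated endogenous concentration = {c0:.1f} umol/L")
```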

  16. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate for the sampling-based scheme to outperform the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. This proposed algorithm adaptively selects the image coding method from CSI-based modified JPEG and standard JPEG under a given target bit rate utilizing the so-called ρ-domain analysis. The experimental results indicate that compared with the standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  17. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.

    2013-01-01

    Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE) with acceleration factors (R) of 2, 3, and 4. Additionally, images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA), were considered. The methods investigated were (1) an ACR method and (2) a NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. The results obtained comparing mSENSE against GRAPPA found no consistent difference between GRAPPA and mSENSE with regard to signal intensity uniformity. The results of the two-way ANOVA analysis suggest that R-value and pulse sequence type produce the largest influences on uniformity and that the PPI reconstruction method had relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the NEMA MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. Other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
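
    For orientation, one of the five metrics compared above, the peak-deviation (percent integral uniformity) form, is shown below. ROI selection and the smoothing prescribed by the ACR/NEMA documents are omitted, so this is only an illustration with a synthetic phantom region.

```python
"""Peak-deviation uniformity: PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin))."""
import numpy as np

def peak_deviation_uniformity(roi):
    """100 means perfectly flat signal; lower values mean more nonuniformity."""
    s_max, s_min = float(np.max(roi)), float(np.min(roi))
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))

# Hypothetical phantom ROI: mean signal ~1000 with mild shading plus noise
rng = np.random.default_rng(0)
roi = 1000 + 20 * np.linspace(-1, 1, 64)[None, :] + rng.normal(0, 5, size=(64, 64))
print(f"uniformity = {peak_deviation_uniformity(roi):.1f} %")
```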

  18. Age adjustment in ecological studies: using a study on arsenic ingestion and bladder cancer as an example.

    PubMed

    Guo, How-Ran

    2011-10-20

    Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified during a ten-year study period in the study townships. Three methods were applied to analyze the same data set covering the ten-year study period. The first (Direct Method) applied direct standardization to obtain the standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain the standardized incidence ratio and then used it as the dependent variable in the regression analysis instead. The third (Variable Method) used the proportions of residents in different age groups as a part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by different methods for the same exposure category were all different. Using an empirical example, the current study confirmed the argument made previously by other researchers that whereas the three different methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, but the other two are unable to evaluate the effects of age directly.
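
    The first two adjustment approaches contrasted above amount to the following arithmetic; all population figures and rates in the sketch are hypothetical.

```python
"""Direct standardization (standardized rate) and indirect standardization (SIR)."""
import numpy as np

# Hypothetical township data by age group
cases        = np.array([2, 5, 14, 30])            # observed cases
person_years = np.array([40e3, 30e3, 20e3, 10e3])
rates        = cases / person_years                # age-specific rates

# Standard (reference) population weights and its age-specific rates
std_pop_weights = np.array([0.35, 0.30, 0.22, 0.13])   # sums to 1
std_rates       = np.array([3e-5, 1.2e-4, 5e-4, 2.5e-3])

# Direct method: weight the township's age-specific rates by the standard population
direct_rate = np.sum(rates * std_pop_weights)

# Indirect method: expected cases if the township had the standard rates
expected = np.sum(std_rates * person_years)
sir = cases.sum() / expected

print(f"directly standardized rate = {direct_rate * 1e5:.1f} per 100,000")
print(f"standardized incidence ratio = {sir:.2f}")
```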

  19. Comprehensive histological evaluation of bone implants

    PubMed Central

    Rentsch, Claudia; Schneiders, Wolfgang; Manthey, Suzanne; Rentsch, Barbe; Rammelt, Stephan

    2014-01-01

    To investigate and assess bone regeneration in sheep in combination with new implant materials, classical histological staining methods as well as immunohistochemistry may provide additional information to standard radiographs or computed tomography. Available published data on bone defect regeneration in sheep often present no or only sparsely labeled histological images. Repeatedly, the exact location of the sample remains unclear, detail enlargements are missing, and the labeling of different tissues or cells is absent. The aim of this article is to present an overview of sample preparation, staining methods and their benefits as well as a detailed histological description of bone regeneration in the sheep tibia. General histological staining methods like hematoxylin and eosin, Masson-Goldner trichrome, Movat’s pentachrome and alcian blue were used to define new bone formation within a sheep tibia critical size defect containing a polycaprolactone-co-lactide (PCL) scaffold implanted for 3 months (n = 4). Special attention was paid to describing the bone healing patterns down to cell level. Additionally, one histological quantification method and immunohistochemical staining methods are described. PMID:24504113

  20. Evaluation of extraction methods for ochratoxin A detection in cocoa beans employing HPLC.

    PubMed

    Mishra, Rupesh K; Catanante, Gaëlle; Hayat, Akhtar; Marty, Jean-Louis

    2016-01-01

    Cocoa is an important ingredient for the chocolate industry and for many food products. However, it is prone to contamination by ochratoxin A (OTA), which is highly toxic and potentially carcinogenic to humans. In this work, four different extraction methods were tested and compared based on their recoveries. The best protocol was established, which involves an organic solvent-free extraction method for the detection of OTA in cocoa beans using 1% sodium hydrogen carbonate (NaHCO3) in water within 30 min. The extraction method is rapid (as compared with existing methods), simple, reliable and practical to perform without complex experimental set-ups. The cocoa samples were freshly extracted and cleaned up using an immunoaffinity column (IAC) for HPLC analysis using a fluorescence detector. Under the optimised conditions, the limit of detection (LOD) and limit of quantification (LOQ) for OTA were 0.62 and 1.25 ng ml(-1), respectively, in standard solutions. The method could successfully quantify OTA in naturally contaminated samples. Moreover, good recoveries of OTA of up to 86.5% were obtained in artificially spiked cocoa samples, with a maximum relative standard deviation (RSD) of 2.7%. The proposed extraction method could determine OTA at the level of 1.5 µg kg(-1), which is below the maximum level set by the European Union for cocoa (2 µg kg(-1)). In addition, an efficiency comparison of IAC and molecularly imprinted polymer (MIP) columns was also performed and evaluated.

  1. Microbleed Detection Using Automated Segmentation (MIDAS): A New Method Applicable to Standard Clinical MR Images

    PubMed Central

    Seghier, Mohamed L.; Kolanko, Magdalena A.; Leff, Alexander P.; Jäger, Hans R.; Gregoire, Simone M.; Werring, David J.

    2011-01-01

    Background Cerebral microbleeds, visible on gradient-recalled echo (GRE) T2* MRI, have generated increasing interest as an imaging marker of small vessel diseases, with relevance for intracerebral bleeding risk or brain dysfunction. Methodology/Principal Findings Manual rating methods have limited reliability and are time-consuming. We developed a new method for microbleed detection using automated segmentation (MIDAS) and compared it with a validated visual rating system. In thirty consecutive stroke service patients, standard GRE T2* images were acquired and manually rated for microbleeds by a trained observer. After spatially normalizing each patient's GRE T2* images into a standard stereotaxic space, the automated microbleed detection algorithm (MIDAS) identified cerebral microbleeds by explicitly incorporating an “extra” tissue class for abnormal voxels within a unified segmentation-normalization model. The agreement between manual and automated methods was assessed using the intraclass correlation coefficient (ICC) and Kappa statistic. We found that MIDAS had generally moderate to good agreement with the manual reference method for the presence of lobar microbleeds (Kappa = 0.43, improved to 0.65 after manual exclusion of obvious artefacts). Agreement for the number of microbleeds was very good for lobar regions: (ICC = 0.71, improved to ICC = 0.87). MIDAS successfully detected all patients with multiple (≥2) lobar microbleeds. Conclusions/Significance MIDAS can identify microbleeds on standard MR datasets, and with an additional rapid editing step shows good agreement with a validated visual rating system. MIDAS may be useful in screening for multiple lobar microbleeds. PMID:21448456

  2. Establishment of a reference collection of additives and an analytical handbook of reference data to support enforcement of EU regulations on food contact plastics.

    PubMed

    van Lierop, B; Castle, L; Feigenbaum, A; Ehlert, K; Boenke, A

    1998-10-01

    A collection has been made of additives that are required as analytical standards for enforcement of European Union legislation on food contact plastics. The 100 additives have been characterized by mass spectrometry, infra-red spectroscopy and proton nuclear magnetic resonance spectroscopy to provide reference spectra. Gas chromatographic retention times have been recorded to facilitate identification by retention index. This information has been further supplemented by physico-chemical data. Finally, chromatographic methods have been used to indicate the presence of any impurities in the commercial chemicals. Samples of the reference substances are available on request and the collection of spectra and other information will be made available in printed format and on-line through the Internet. This paper gives an overview of the work done to establish the reference collection and the spectral atlas, which together will assist enforcement laboratories in the characterization of plastics and the selection of analytical methods for additives that may migrate.

  3. 21 CFR 130.20 - Food additives proposed for use in foods for which definitions and standards of identity are...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Food additives proposed for use in foods for which...: GENERAL Food Additives in Standardized Foods § 130.20 Food additives proposed for use in foods for which... the act, which proposes the inclusion of a food additive in such definition and standard of identity...

  4. 21 CFR 130.20 - Food additives proposed for use in foods for which definitions and standards of identity are...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Food additives proposed for use in foods for which...: GENERAL Food Additives in Standardized Foods § 130.20 Food additives proposed for use in foods for which... the act, which proposes the inclusion of a food additive in such definition and standard of identity...

  5. 21 CFR 130.20 - Food additives proposed for use in foods for which definitions and standards of identity are...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Food additives proposed for use in foods for which...: GENERAL Food Additives in Standardized Foods § 130.20 Food additives proposed for use in foods for which... the act, which proposes the inclusion of a food additive in such definition and standard of identity...

  6. 21 CFR 130.20 - Food additives proposed for use in foods for which definitions and standards of identity are...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Food additives proposed for use in foods for which...: GENERAL Food Additives in Standardized Foods § 130.20 Food additives proposed for use in foods for which... the act, which proposes the inclusion of a food additive in such definition and standard of identity...

  7. 21 CFR 130.20 - Food additives proposed for use in foods for which definitions and standards of identity are...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Food additives proposed for use in foods for which...: GENERAL Food Additives in Standardized Foods § 130.20 Food additives proposed for use in foods for which... the act, which proposes the inclusion of a food additive in such definition and standard of identity...

  8. Oblique patterned etching of vertical silicon sidewalls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burckel, D. Bruce; Finnegan, Patrick S.; Henry, M. David

    A method for patterning on vertical silicon surfaces in high aspect ratio silicon topography is presented. A Faraday cage is used to direct energetic reactive ions obliquely through a patterned suspended membrane positioned over the topography. The technique is capable of forming high-fidelity pattern (100 nm) features, adding an additional fabrication capability to standard top-down fabrication approaches.

  9. Simultaneous Determination of Caffeine and Vitamin B6 in Energy Drinks by High-Performance Liquid Chromatography (HPLC)

    ERIC Educational Resources Information Center

    Leacock, Rachel E.; Stankus, John J.; Davis, Julian M.

    2011-01-01

    A high-performance liquid chromatography experiment to determine the concentration of caffeine and vitamin B6 in sports energy drinks has been developed. This laboratory activity, which is appropriate for an upper-level instrumental analysis course, illustrates the standard addition method and simultaneous determination of two species. (Contains 1…

  10. Slash fire atmospheric pollution.

    Treesearch

    Leo Fritschen; Harley Bovee; Konrad Buettner; Robert Charlson; Lee Monteith; Stewart Pickford; James. Murphy

    1970-01-01

    In the Pacific Northwest, as in many other parts of the country, burning is the standard method for disposal of undesirable waste including logging debris and agricultural residue. About 81,000 hectares (200,000 acres) of logging slash are burned annually west of the Cascade Range in the States of Washington and Oregon. In addition, 101,000 hectares (250,000 acres) of...

  11. Oblique patterned etching of vertical silicon sidewalls

    NASA Astrophysics Data System (ADS)

    Bruce Burckel, D.; Finnegan, Patrick S.; David Henry, M.; Resnick, Paul J.; Jarecki, Robert L.

    2016-04-01

    A method for patterning on vertical silicon surfaces in high aspect ratio silicon topography is presented. A Faraday cage is used to direct energetic reactive ions obliquely through a patterned suspended membrane positioned over the topography. The technique is capable of forming high-fidelity pattern (100 nm) features, adding an additional fabrication capability to standard top-down fabrication approaches.

  12. Additive nonlinear biomass equations: A likelihood-based approach

    Treesearch

    David L. R. Affleck; Ulises Dieguez-Aranda

    2016-01-01

    Since Parresol’s (Can. J. For. Res. 31:865-878, 2001) seminal article on the topic, it has become standard to develop nonlinear tree biomass equations to ensure compatibility among total and component predictions and to fit these equations using multistep generalized least-squares methods. In particular, many studies have specified equations for total tree...

  13. Oblique patterned etching of vertical silicon sidewalls

    DOE PAGES

    Burckel, D. Bruce; Finnegan, Patrick S.; Henry, M. David; ...

    2016-04-05

    A method for patterning on vertical silicon surfaces in high aspect ratio silicon topography is presented. A Faraday cage is used to direct energetic reactive ions obliquely through a patterned suspended membrane positioned over the topography. The technique is capable of forming high-fidelity pattern (100 nm) features, adding an additional fabrication capability to standard top-down fabrication approaches.

  14. A phase one AR/C system design

    NASA Technical Reports Server (NTRS)

    Kachmar, Peter M.; Polutchko, Robert J.; Matusky, Martin; Chu, William; Jackson, William; Montez, Moises

    1991-01-01

    The Phase One AR&C System Design integrates an evolutionary design based on the legacy of previous mission successes, flight tested components from manned Rendezvous and Proximity Operations (RPO) space programs, and additional AR&C components validated using proven methods. The Phase One system has a modular, open architecture with the standardized interfaces proposed for Space Station Freedom system architecture.

  15. Quantitative analysis of phylloquinone (vitamin K1) in soy bean oils by high-performance liquid chromatography.

    PubMed

    Zonta, F; Stancher, B

    1985-07-19

    A high-performance liquid chromatographic method for determining phylloquinone (vitamin K1) in soy bean oils is described. Resolution of vitamin K1 from interfering peaks of the matrix was obtained after enzymatic digestion, extraction and liquid-solid chromatography on alumina. Isocratic reversed-phase chromatography with UV detection was used in the final stage. The quantitation was carried out by the standard addition method, and the recovery of the whole procedure was 88.2%.

  16. Multi-laboratory evaluations of the performance of Catellicoccus marimammalium PCR assays developed to target gull fecal sources

    USGS Publications Warehouse

    Sinigalliano, Christopher D.; Ervin, Jared S.; Van De Werfhorst, Laurie C.; Badgley, Brian D.; Ballestée, Elisenda; Bartkowiaka, Jakob; Boehm, Alexandria B.; Byappanahalli, Muruleedhara N.; Goodwin, Kelly D.; Gourmelon, Michèle; Griffith, John; Holden, Patricia A.; Jay, Jenny; Layton, Blythe; Lee, Cheonghoon; Lee, Jiyoung; Meijer, Wim G.; Noble, Rachel; Raith, Meredith; Ryu, Hodon; Sadowsky, Michael J.; Schriewer, Alexander; Wang, Dan; Wanless, David; Whitman, Richard; Wuertz, Stefan; Santo Domingo, Jorge W.

    2013-01-01

    Here we report results from a multi-laboratory (n = 11) evaluation of four different PCR methods targeting the 16S rRNA gene of Catellicoccus marimammalium originally developed to detect gull fecal contamination in coastal environments. The methods included a conventional end-point PCR method, a SYBR® Green qPCR method, and two TaqMan® qPCR methods. Different techniques for data normalization and analysis were tested. Data analysis methods had a pronounced impact on assay sensitivity and specificity calculations. Across-laboratory standardization of metrics including the lower limit of quantification (LLOQ), target detected but not quantifiable (DNQ), and target not detected (ND) significantly improved results compared to results submitted by individual laboratories prior to definition standardization. The unit of measure used for data normalization also had a pronounced effect on measured assay performance. Data normalization to DNA mass improved quantitative method performance as compared to enterococcus normalization. The MST methods tested here were originally designed for gulls but were found in this study to also detect feces from other birds, particularly feces composited from pigeons. Sequencing efforts showed that some pigeon feces from California contained sequences similar to C. marimammalium found in gull feces. These data suggest that the prevalence, geographic scope, and ecology of C. marimammalium in host birds other than gulls require further investigation. This study represents an important first step in the multi-laboratory assessment of these methods and highlights the need to broaden and standardize additional evaluations, including environmentally relevant target concentrations in ambient waters from diverse geographic regions.

  17. Division of methods for counting helminths' eggs and the problem of efficiency of these methods.

    PubMed

    Jaromin-Gleń, Katarzyna; Kłapeć, Teresa; Łagód, Grzegorz; Karamon, Jacek; Malicki, Jacek; Skowrońska, Agata; Bieganowski, Andrzej

    2017-03-21

    From the sanitary and epidemiological point of view, information concerning the developmental forms of intestinal parasites, especially helminth eggs present in the environment (in water, soil, sandpits, sewage sludge, and crops watered with wastewater), is very important. The methods described in the relevant literature may be classified in various ways, primarily according to the methodology of preparing samples from environmental matrices for analysis, and according to the counting methods and the chambers/instruments used for this purpose. In addition, the research methods analyzed can be classified by the manner and time of identification of the counted individuals, or by the necessity of staining them. Standard methods for identification of helminth eggs in environmental matrices are usually characterized by low efficiency, i.e. from 30% to approximately 80%. The efficiency of the applied method may be measured in two ways, either by using the internal standard method or the 'Split/Spike' method. By measuring the efficiency of the method and the number of eggs simultaneously in an examined object, the 'actual' number of eggs may be calculated by multiplying the obtained number of detected helminth eggs by the inverse of the efficiency.

  18. 45 CFR 156.285 - Additional standards specific to SHOP.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Section 156.285 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification Standards § 156.285 Additional standards...

  19. 45 CFR 156.285 - Additional standards specific to SHOP.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Section 156.285 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification Standards § 156.285 Additional standards...

  20. 45 CFR 156.285 - Additional standards specific to SHOP.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Section 156.285 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification Standards § 156.285 Additional standards...

  1. Beyond the Standard Curriculum: A Review of Available Opportunities for Medical Students to Prepare for a Career in Radiation Oncology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Ankit; DeNunzio, Nicholas J.; Ahuja, Divya

    Purpose: To review currently available opportunities for medical students to supplement their standard medical education to prepare for a career in radiation oncology. Methods and Materials: Google and PubMed were used to identify existing clinical, health policy, and research programs for medical students in radiation oncology. In addition, results made publicly available by the National Resident Matching Program were used to explore opportunities that successful radiation oncology applicants pursued during their medical education, including obtaining additional graduate degrees. Results: Medical students can pursue a wide variety of opportunities before entering radiation oncology. Several national specialty societies, such as the American Society for Radiation Oncology and the Radiological Society of North America, offer summer internships for medical students interested in radiation oncology. In 2011, 30% of allopathic senior medical students in the United States who matched into radiation oncology had an additional graduate degree, including PhD, MPH, MBA, and MA degrees. Some medical schools are beginning to further integrate dedicated education in radiation oncology into the standard 4-year medical curriculum. Conclusions: To the authors' knowledge, this is the first comprehensive review of available opportunities for medical students interested in radiation oncology. Early exposure to radiation oncology and additional educational training beyond the standard medical curriculum have the potential to create more successful radiation oncology applicants and practicing radiation oncologists while also promoting the growth of the field. We hope this review can serve as a guide to radiation oncology applicants and mentors as well as encourage discussion regarding initiatives in radiation oncology opportunities for medical students.

  2. Applying Mondrian Cross-Conformal Prediction To Estimate Prediction Confidence on Large Imbalanced Bioactivity Data Sets.

    PubMed

    Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming

    2017-07-24

    Conformal prediction has been proposed as a more rigorous way to define prediction confidence compared to other application domain concepts that have earlier been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region potentially with multiple predicted labels, which contrasts with the single-valued (regression) or single-label (classification) output predictions of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets with imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.
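
    A compact sketch of the class-conditional (Mondrian) conformal idea is given below, with a random forest as the underlying model and "1 minus the predicted class probability" as the nonconformity score. The cross-fold calibration of MCCP is reduced to a single proper-training/calibration split here, so this illustrates the concept rather than the paper's full procedure; the data set is synthetic.

```python
"""Mondrian (class-conditional) inductive conformal prediction, minimal sketch."""
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_fit, X_test, y_fit, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(X_fit, y_fit, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

def mondrian_p_values(x, clf, X_cal, y_cal):
    """Class-conditional p-value for each candidate label of a single example x."""
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    labels = list(clf.classes_)
    p = {}
    for label in labels:
        col = labels.index(label)
        idx = np.where(y_cal == label)[0]            # calibration examples of this class only
        cal_scores = 1.0 - clf.predict_proba(X_cal[idx])[:, col]
        test_score = 1.0 - probs[col]
        p[label] = (np.sum(cal_scores >= test_score) + 1) / (len(idx) + 1)
    return p

p_vals = mondrian_p_values(X_test[0], clf, X_cal, y_cal)
eps = 0.1                                            # significance level
region = [label for label, p in p_vals.items() if p > eps]
print(p_vals, "prediction region at 90% confidence:", region)
```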

  3. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy for our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem of Kelvin-cell radiation.

  4. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
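
    The kind of per-step nonlinear solve discussed above can be illustrated with the baseline scheme the paper compares against: one implicit trapezoidal step resolved with Newton's method and a finite-difference Jacobian on a stiff toy problem. This is a sketch of the solver structure, not the dislocation-dynamics code itself; the test system is made up.

```python
"""One implicit trapezoidal step, y_{n+1} = y_n + (h/2)(f(y_n) + f(y_{n+1})),
solved with Newton's method on the step residual."""
import numpy as np

def newton_trapezoidal_step(f, y_n, h, tol=1e-10, max_iter=20, eps=1e-8):
    y = y_n.copy()                                    # initial guess: previous state
    for _ in range(max_iter):
        r = y - y_n - 0.5 * h * (f(y_n) + f(y))       # residual of the implicit equation
        if np.linalg.norm(r) < tol:
            break
        # finite-difference Jacobian of the residual, J ~ I - (h/2) df/dy
        n = y.size
        J = np.empty((n, n))
        for j in range(n):
            dy = np.zeros(n); dy[j] = eps
            J[:, j] = ((y + dy) - y_n - 0.5 * h * (f(y_n) + f(y + dy)) - r) / eps
        y = y - np.linalg.solve(J, r)                 # Newton update
    return y

# Stiff linear test system y' = A y
A = np.array([[-100.0, 1.0], [0.0, -2.0]])
f = lambda y: A @ y

y, h = np.array([1.0, 1.0]), 0.05
for _ in range(20):                                   # integrate to t = 1.0
    y = newton_trapezoidal_step(f, y, h)
print("y(1.0) ~", y)
```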

  5. Determination of semi-volatile additives in wines using SPME and GC-MS.

    PubMed

    Sagandykova, Gulyaim N; Alimzhanova, Mereke B; Nurzhanova, Yenglik T; Kenessov, Bulat

    2017-04-01

    Parameters of headspace solid-phase microextraction, such as fiber coating (85 μm CAR/PDMS), extraction time (2 min for white and 3 min for red wines), temperature (85°C), and pre-incubation time (15 min), were optimized for identification and quantification of semi-volatile additives (propylene glycol, sorbic and benzoic acids) in wines. To overcome problems in their determination, an evaporation of the wine matrix was performed. Using the optimized method, screening of 25 wine samples was performed, and the presence of propylene glycol, sorbic and benzoic acids was found in 22, 20 and 6 samples, respectively. Analysis of different wines using a standard addition approach showed good linearity in the concentration ranges 0-250, 0-125, and 0-250 mg/L for propylene glycol, sorbic and benzoic acids, respectively. The proposed method can be recommended for quality control of wine and disclosing adulterated samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Terahertz imaging and tomography as efficient instruments for testing polymer additive manufacturing objects.

    PubMed

    Perraud, J B; Obaton, A F; Bou-Sleiman, J; Recur, B; Balacey, H; Darracq, F; Guillet, J P; Mounaix, P

    2016-05-01

    Additive manufacturing (AM) technology is not only used to make 3D objects but also for rapid prototyping. In industry and laboratories, quality controls for these objects are necessary though difficult to implement compared to classical methods of fabrication because the layer-by-layer printing allows for very complex object manufacturing that is unachievable with standard tools. Furthermore, AM can induce unknown or unexpected defects. Consequently, we demonstrate terahertz (THz) imaging as an innovative method for 2D inspection of polymer materials. Moreover, THz tomography may be considered as an alternative to x-ray tomography and cheaper 3D imaging for routine control. This paper proposes an experimental study of 3D polymer objects obtained by additive manufacturing techniques. This approach allows us to characterize defects and to control dimensions by volumetric measurements on 3D data reconstructed by tomography.

  7. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    PubMed

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituent atoms and their immediate neighbourhood; the respective calculation of the contributions of the atomic groups by means of the Gauss-Seidel fitting method is based on experimental data collected from the literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors were as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results of the two closely related entropies is discussed in detail. Molecules for which both the standard enthalpies of vaporization and sublimation were calculable enabled the estimation of their standard enthalpy of fusion by simple subtraction of the former from the latter enthalpy. For 990 of them the experimental enthalpy-of-fusion values are also known, allowing their comparison with predictions and yielding a correlation coefficient R² of 0.6066.
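
    The group-additivity idea itself is simple to sketch: each molecule is a vector of atom-group counts, and the property is modeled as the sum of fitted group contributions. The paper fits the contributions with a Gauss-Seidel scheme; the sketch below substitutes an ordinary least-squares solve, and the group counts and property values are made up.

```python
"""Group-additivity sketch: property ~ sum over groups of (count * contribution)."""
import numpy as np

# Rows: molecules; columns: counts of each (hypothetical) atom group
group_counts = np.array([
    [2, 1, 0, 0],
    [3, 2, 0, 0],
    [2, 1, 1, 0],
    [4, 3, 0, 1],
    [3, 2, 1, 1],
], dtype=float)
dHvap = np.array([42.3, 47.5, 55.1, 58.9, 66.2])   # kJ/mol, hypothetical

# Least-squares estimate of the group contributions
contrib, *_ = np.linalg.lstsq(group_counts, dHvap, rcond=None)

# Predict a new molecule from its group counts
new_molecule = np.array([3.0, 2.0, 1.0, 0.0])
print("group contributions (kJ/mol):", np.round(contrib, 2))
print("predicted dHvap:", round(float(new_molecule @ contrib), 1), "kJ/mol")
```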

  8. Evaluation of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) Semantic Structure as a Terminology Model for Standardized Assessment Measures

    PubMed Central

    Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.

    2000-01-01

    Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect all the standardized assessment items in the sample instruments into the elements of Clinical LOINC. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226

  9. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-02-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tape has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  10. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-01-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing, and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints, and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tapes has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  11. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) and how to select an outcome measurement instrument

    PubMed Central

    Mokkink, Lidwine B.; Prinsen, Cecilia A. C.; Bouter, Lex M.; de Vet, Henrica C. W.; Terwee, Caroline B.

    2016-01-01

    Background: COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) is an initiative of an international multidisciplinary team of researchers who aim to improve the selection of outcome measurement instruments both in research and in clinical practice by developing tools for selecting the most appropriate available instrument. Method: In this paper these tools are described, i.e. the COSMIN taxonomy and definition of measurement properties; the COSMIN checklist to evaluate the methodological quality of studies on measurement properties; a search filter for finding studies on measurement properties; a protocol for systematic reviews of outcome measurement instruments; a database of systematic reviews of outcome measurement instruments; and a guideline for selecting outcome measurement instruments for Core Outcome Sets in clinical trials. Currently, we are updating the COSMIN checklist, particularly the standards for content validity studies. Also, new standards for studies using Item Response Theory methods will be developed. Additionally, in the future we want to develop standards for studies on the quality of non-patient reported outcome measures, such as clinician-reported outcomes and performance-based outcomes. Conclusions: In summary, we plead for more standardization in the use of outcome measurement instruments, for conducting high quality systematic reviews on measurement instruments in which the best available outcome measurement instrument is recommended, and for stopping the use of poor outcome measurement instruments. PMID:26786084

  12. Quantification of the predominant monomeric catechins in baking chocolate standard reference material by LC/APCI-MS.

    PubMed

    Nelson, Bryant C; Sharpless, Katherine E

    2003-01-29

    Catechins are polyphenolic plant compounds (flavonoids) that may offer significant health benefits to humans. These benefits stem largely from their anticarcinogenic, antioxidant, and antimutagenic properties. Recent epidemiological studies suggest that the consumption of flavonoid-containing foods is associated with reduced risk of cardiovascular disease. Chocolate is a natural cocoa bean-based product that reportedly contains high levels of monomeric, oligomeric, and polymeric catechins. We have applied solid-liquid extraction and liquid chromatography coupled with atmospheric pressure chemical ionization-mass spectrometry to the identification and determination of the predominant monomeric catechins, (+)-catechin and (-)-epicatechin, in a baking chocolate Standard Reference Material (NIST Standard Reference Material 2384). (+)-Catechin and (-)-epicatechin are detected and quantified in chocolate extracts on the basis of selected-ion monitoring of their protonated [M + H](+) molecular ions. Tryptophan methyl ester is used as an internal standard. The developed method has the capacity to accurately quantify as little as 0.1 microg/mL (0.01 mg of catechin/g of chocolate) of either catechin in chocolate extracts, and the method has additionally been used to certify (+)-catechin and (-)-epicatechin levels in the baking chocolate Standard Reference Material. This is the first reported use of liquid chromatography/mass spectrometry for the quantitative determination of monomeric catechins in chocolate and the only report certifying monomeric catechin levels in a food-based Standard Reference Material.
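
    The quantitation step described here is a routine internal-standard calculation. The sketch below illustrates it with invented peak areas and a hypothetical sample preparation; it is not the certified procedure or NIST's data.

```python
# Minimal sketch of single-point internal-standard quantitation (illustrative numbers).
# A calibration standard with known catechin and IS concentrations fixes the response
# factor; the sample extract is then quantified from its analyte/IS peak-area ratio.

def response_factor(area_analyte, area_is, conc_analyte, conc_is):
    """Relative response factor from a calibration standard."""
    return (area_analyte / area_is) / (conc_analyte / conc_is)

def quantify(area_analyte, area_is, conc_is, rf):
    """Analyte concentration in the extract from the area ratio and the IS concentration."""
    return (area_analyte / area_is) * conc_is / rf

# Hypothetical SIM peak areas and concentrations (ug/mL):
rf = response_factor(area_analyte=5.2e5, area_is=4.8e5, conc_analyte=1.0, conc_is=1.0)
c_extract = quantify(area_analyte=3.9e5, area_is=5.0e5, conc_is=1.0, rf=rf)

# Convert extract concentration to mg catechin per g chocolate for a hypothetical
# preparation of 1 g of sample taken up in 10 mL of extract.
extract_volume_mL = 10.0
sample_mass_g = 1.0
mg_per_g = c_extract * extract_volume_mL / 1000.0 / sample_mass_g
print(f"extract: {c_extract:.3f} ug/mL, chocolate: {mg_per_g:.4f} mg/g")
```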

  13. Reproducibility in Computational Neuroscience Models and Simulations

    PubMed Central

    McDougal, Robert A.; Bulanova, Anna S.; Lytton, William W.

    2016-01-01

    Objective Like all scientific research, computational neuroscience research must be reproducible. Big data science, including simulation research, cannot depend exclusively on journal articles as the method to provide the sharing and transparency required for reproducibility. Methods Ensuring model reproducibility requires the use of multiple standard software practices and tools, including version control, strong commenting and documentation, and code modularity. Results Building on these standard practices, model sharing sites and tools have been developed that fit into several categories: 1. standardized neural simulators, 2. shared computational resources, 3. declarative model descriptors, ontologies and standardized annotations; 4. model sharing repositories and sharing standards. Conclusion A number of complementary innovations have been proposed to enhance sharing, transparency and reproducibility. The individual user can be encouraged to make use of version control, commenting, documentation and modularity in development of models. The community can help by requiring model sharing as a condition of publication and funding. Significance Model management will become increasingly important as multiscale models become larger, more detailed and correspondingly more difficult to manage by any single investigator or single laboratory. Additional big data management complexity will come as the models become more useful in interpreting experiments, thus increasing the need to ensure clear alignment between modeling data, both parameters and results, and experiment. PMID:27046845

  14. Accelerated Slice Encoding for Metal Artifact Correction

    PubMed Central

    Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts

    2010-01-01

    Purpose To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. 10 subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445

  15. Standardisation of costs: the Dutch Manual for Costing in economic evaluations.

    PubMed

    Oostenbrink, Jan B; Koopmanschap, Marc A; Rutten, Frans F H

    2002-01-01

    The lack of a uniform costing methodology is often considered a weakness of economic evaluations that hinders the interpretation and comparison of studies. Standardisation is therefore an important topic within the methodology of economic evaluations and in national guidelines that formulate the formal requirements for studies to be considered when deciding on the reimbursement of new medical therapies. Recently, the Dutch Manual for Costing: Methods and Standard Costs for Economic Evaluations in Health Care (further referred to as "the manual") has been published, in addition to the Dutch guidelines for pharmacoeconomic research. The objectives of this article are to describe the main content of the manual and to discuss some key issues of the manual in relation to the standardisation of costs. The manual introduces a six-step procedure for costing. These steps concern: the scope of the study; the choice of cost categories; the identification of units; the measurement of resource use; the monetary valuation of units; and the calculation of unit costs. Each step consists of a number of choices and these together define the approach taken. In addition to a description of the costing process, five key issues regarding the standardisation of costs are distinguished. These are the use of basic principles, methods for measurement and valuation, standard costs (average prices of healthcare services), standard values (values that can be used within unit cost calculations), and the reporting of outcomes. The use of the basic principles, standard values and minimal requirements for reporting outcomes, as defined in the manual, are obligatory in studies that support submissions to acquire reimbursement for new pharmaceuticals. Whether to use standard costs, and the choice of a particular method to measure or value costs, is left mainly to the investigator, depending on the specific study setting. In conclusion, several instruments are available to increase standardisation in costing methodology among studies. These instruments have to be used in such a way that a balance is found between standardisation and the specific setting in which a study is performed. The way in which the Dutch manual tries to reach this balance can serve as an illustration for other countries.

  16. Tropospheric ozone observations - How well can we assess tropospheric ozone changes?

    NASA Astrophysics Data System (ADS)

    Tarasick, D. W.; Galbally, I. E.; Ancellet, G.; Leblanc, T.; Wallington, T. J.; Ziemke, J. R.; Steinbacher, M.; Stähelin, J.; Vigouroux, C.; Hannigan, J. W.; García, O. E.; Foret, G.; Zanis, P.; Liu, X.; Weatherhead, E. C.; Petropavlovskikh, I. V.; Worden, H. M.; Osman, M.; Liu, J.; Lin, M.; Cooper, O. R.; Schultz, M. G.; Granados-Muñoz, M. J.; Thompson, A. M.; Cuesta, J.; Dufour, G.; Thouret, V.; Hassler, B.; Trickl, T.

    2017-12-01

    Since the early 20th century, measurements of ozone in the free troposphere have evolved and changed. Data records have different uncertainties and biases, and differ with respect to coverage, information content, and representativeness. Almost all validation studies employ ECC ozonesondes. These have been compared to UV-absorption measurements in a number of intercomparison studies, and show a modest ( 1-5%) high bias in the troposphere, with an uncertainty of 5%, but no evidence of a change over time. Umkehr, lidar, FTIR, and commercial aircraft all show modest low biases relative to the ECCs, and so -- if the ECC biases are transferable -- all agree within 1σ with the modern UV standard. Relative to the UV standard, Brewer-Mast sondes show a 20% increase in sensitivity from 1970-1995, while Japanese KC sondes show an increase of 5-10%. Combined with the shift of the global ozonesonde network to ECCs, this can induce a false positive trend, in analyses based on sonde data. Passive sounding methods -- Umkehr, FTIR and satellites -- have much lower vertical resolution than active methods, and this can limit the attribution of trends. Satellite biases are larger than those of other measurement systems, ranging between -10% and +20%, and standard deviations are large: about 10-30%, versus 5-10% for sondes, aircraft, lidar and ground-based FTIR. There is currently little information on measurement drift for satellite measurements of tropospheric ozone. This is an evident area of concern if satellite retrievals are used for trend studies. The importance of ECC sondes as a transfer standard for satellite validation means that efforts to homogenize existing records, by correcting for known changes and by adopting strict standard operating procedures, should continue, and additional research effort should be put into understanding and reducing sonde uncertainties. Representativeness is also a potential source of large errors, which are difficult to quantify. The global observation network is unevenly distributed, and so additional sites (or airports), would be of benefit. Objective methods of quantifying spatial representativeness can optimize future network design. International cooperation and data sharing will be of paramount importance, as the TOAR project has demonstrated.

  17. Characterization of a new candidate isotopic reference material for natural Pb using primary measurement method.

    PubMed

    Nonose, Naoko; Suzuki, Toshihiro; Shin, Ki-Cheol; Miura, Tsutomu; Hioki, Akiharu

    2017-06-29

    A lead isotopic standard solution with natural abundance has been developed by applying a mixture of a solution of enriched 208Pb and a solution of enriched 204Pb (a 208Pb-204Pb double-spike solution) in a bracketing method. The amount-of-substance ratio 208Pb:204Pb in this solution is accurately measured by applying EDTA titrimetry, which is one of the primary measurement methods, to each enriched Pb isotope solution. In addition, metal impurities affecting the EDTA titration and minor lead isotopes contained in each enriched Pb isotope solution are quantified by ICP-SF-MS. The amount-of-substance ratio 208Pb:204Pb in the 208Pb-204Pb double-spike solution is 0.961959 ± 0.000056 (combined standard uncertainty; k = 1). Both the measurement of lead isotope ratios in a candidate isotopic standard solution and the correction of mass discrimination in MC-ICP-MS are carried out by coupling a bracketing method using the 208Pb-204Pb double-spike solution with a thallium internal addition method, in which a thallium solution is added to both the standard and the sample. The measured lead isotope ratios and their expanded uncertainties (k = 2) in the candidate isotopic standard solution are 18.0900 ± 0.0046 for 206Pb:204Pb, 15.6278 ± 0.0036 for 207Pb:204Pb, 38.0626 ± 0.0089 for 208Pb:204Pb, 2.104406 ± 0.00013 for 208Pb:206Pb, and 0.863888 ± 0.000036 for 207Pb:206Pb. The expanded uncertainties are about one half of the stated uncertainty for NIST SRM 981 for 208Pb:204Pb, 207Pb:204Pb and 206Pb:204Pb, or one eighth for 208Pb:206Pb and 207Pb:206Pb. The combined uncertainty consists of the uncertainties due to the lead isotope ratio measurements and the remaining time-drift effect of mass discrimination in MC-ICP-MS, which is not removed by the coupled correction method. In the measurement of 208Pb:204Pb, 207Pb:204Pb and 206Pb:204Pb, the latter contribution is two or three times larger than the former. When the coupling of the bracketing method using the 208Pb-204Pb double-spike solution with the thallium internal addition method is applied to the analysis of NIST SRM 981, the measured lead isotope ratios are in good agreement with its certified values. This proves that the developed method is not only consistent with the conventional one based on NIST SRM 981 but also enables measurement of the lead isotope ratios with higher precision. Copyright © 2017 Elsevier B.V. All rights reserved.
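
    The bracketing correction itself is generic. A minimal sketch is shown below with placeholder measured ratios (only the gravimetric spike ratio quoted above is taken from the abstract), and it ignores the thallium internal-addition step.

```python
# Minimal sketch of standard-sample bracketing for mass-bias correction (illustrative values).
# The sample is measured between two runs of a standard whose ratio is known independently
# (here, the gravimetrically prepared double-spike mixture); the bias factor is interpolated
# to the time of the sample run (midpoint assumed).

def bracketing_correct(r_sample_meas, r_std_meas_before, r_std_meas_after, r_std_true):
    k_before = r_std_true / r_std_meas_before   # bias factor K = true/measured
    k_after = r_std_true / r_std_meas_after
    k_sample = 0.5 * (k_before + k_after)       # linear interpolation at the midpoint
    return r_sample_meas * k_sample

r_true_spike = 0.961959          # gravimetric 208Pb:204Pb of the double-spike solution
corrected = bracketing_correct(
    r_sample_meas=37.95,         # hypothetical raw 208Pb:204Pb of the candidate material
    r_std_meas_before=0.9590,    # placeholder measured spike ratios
    r_std_meas_after=0.9598,
    r_std_true=r_true_spike,
)
print(f"mass-bias-corrected 208Pb:204Pb = {corrected:.4f}")
```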

  18. The oxidation of organic additives in the positive vanadium electrolyte and its effect on the performance of vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Nguyen, Tam D.; Whitehead, Adam; Scherer, Günther G.; Wai, Nyunt; Oo, Moe O.; Bhattarai, Arjun; Chandra, Ghimire P.; Xu, Zhichuan J.

    2016-12-01

    Despite many desirable properties, the vanadium redox flow battery is limited in the maximum operating temperature that can be continuously endured before precipitation begins in the positive electrolyte. Many additives have been proposed to improve the thermal stability of the charged positive electrolyte. However, we have found that the apparent stability, revealed in laboratory testing, is often simply an artifact of the test method and arises from the oxidation of the additive, with corresponding partial reduction of V(V) to V(IV). This does not improve the stability of the electrolyte in an operating system. Here, we examined the oxidation of some typical organic additives with carboxyl, alcohol, and multi-functional groups, in sulfuric acid solutions containing V(V). The UV-vis measurements and titration results showed that many compounds reduced the state-of-charge (SOC) of the vanadium electrolyte, for example, by 27.8, 88.5, and 81.9% with the addition of 1 wt% of EDTA disodium salt, pyrogallol, and ascorbic acid, respectively. The cell cycling also indicated the effect of organic additives on the cell performance, with a significant reduction in the usable charge capacity. In addition, a standard screening method for thermally stable additives was introduced to quickly screen suitable additives for the positive vanadium electrolyte.

  19. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed, which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost), and it is easy to implement without phase unwrapping.
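
    As a rough illustration of the idea (not the authors' algorithm or basis ordering), the sketch below fits a handful of low-order Zernike coefficients to a single synthetic interferogram by nonlinear least squares, assuming a simplified intensity model I = a + b*cos(phi).

```python
# Minimal sketch of single-interferogram phase retrieval with a few Zernike terms and a
# nonlinear least-squares fit. Everything here (basis subset, intensity model, noise) is
# a simplifying assumption for illustration only.
import numpy as np
from scipy.optimize import least_squares

n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]          # unit-square grid
r, t = np.hypot(x, y), np.arctan2(y, x)
mask = r <= 1.0                                 # unit-disk pupil

def zernike_basis(r, t):
    # Piston, x-tilt, y-tilt, defocus (low-order terms only, unnormalized)
    return np.stack([np.ones_like(r), r*np.cos(t), r*np.sin(t), 2*r**2 - 1])

Z = zernike_basis(r, t)                         # shape (4, n, n)

def model(params):
    a, b = params[0], params[1]
    phi = np.tensordot(params[2:], Z, axes=1)   # phase from Zernike coefficients
    return a + b*np.cos(phi)

# Synthetic "measured" interferogram: small tilt and defocus plus noise
true = np.array([0.5, 0.4, 0.0, 3.0, 1.5, 0.8])   # a, b, piston, tilt x, tilt y, defocus
rng = np.random.default_rng(0)
meas = model(true) + 0.01*rng.standard_normal((n, n))

def residuals(params):
    return (model(params) - meas)[mask]

fit = least_squares(residuals, x0=[0.4, 0.3, 0.0, 2.0, 1.0, 0.5])
phase = np.tensordot(fit.x[2:], Z, axes=1)        # reconstructed phase map (radians)
print("fitted coefficients:", np.round(fit.x, 3))
```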

  20. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65 ± 7.89% to 0.87 ± 3.88% between the registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, robustness and computation speed adequate for use in a clinical environment.
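
    The ICA step can be sketched as follows, assuming a (time, height, width) image stack and using scikit-learn's FastICA; the two registration passes themselves are omitted, and the component count is an arbitrary choice rather than the authors' setting.

```python
# Minimal sketch of the ICA step: decompose a perfusion time series into components and
# time courses, then rebuild a rank-reduced, time-varying reference series that mimics the
# contrast changes of each acquired frame. Registration itself is not shown.
import numpy as np
from sklearn.decomposition import FastICA

def ica_reference(series, n_components=3, random_state=0):
    """series: (T, H, W) image stack; returns a (T, H, W) time-varying reference."""
    T, H, W = series.shape
    X = series.reshape(T, H * W).astype(float)      # one row per time frame
    ica = FastICA(n_components=n_components, random_state=random_state, max_iter=1000)
    S = ica.fit_transform(X)                        # time courses, shape (T, n_components)
    recon = S @ ica.mixing_.T + ica.mean_           # rank-reduced series, shape (T, H*W)
    return recon.reshape(T, H, W)

# Usage with a synthetic stack (replace with real perfusion frames):
series = np.random.default_rng(1).random((30, 64, 64))
reference = ica_reference(series)
# Each frame of `reference` would serve as the registration target for the corresponding
# acquired frame in a subsequent registration pass.
```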

  1. Evaluation of two multi-locus sequence typing schemes for commensal Escherichia coli from dairy cattle in Washington State.

    PubMed

    Ahmed, Sara; Besser, Thomas E; Call, Douglas R; Weissman, Scott J; Jones, Lisa P; Davis, Margaret A

    2016-05-01

    Multi-locus sequence typing (MLST) is a useful system for phylogenetic and epidemiological studies of multidrug-resistant Escherichia coli. Most studies utilize a seven-locus MLST, but an alternate two-locus typing method (fumC and fimH; CH typing) has been proposed that may offer a similar degree of discrimination at lower cost. Herein, we compare CH typing to the standard seven-locus method for typing commensal E. coli isolates from dairy cattle. In addition, we evaluated alternative combinations of eight loci to identify combinations that maximize discrimination and congruence with standard seven-locus MLST among commensal E. coli while minimizing the cost. We also compared both methods when used for typing uropathogenic E. coli (UPEC). CH typing was less discriminatory for commensal E. coli than the standard seven-locus method (Simpson's Index of Diversity = 0.933 [0.902-0.964] and 0.97 [0.96-0.979], respectively). Combining fimH with housekeeping gene loci improved discriminatory power for commensal E. coli from cattle but resulted in poor congruence with MLST. We found that a four-locus typing method including the housekeeping genes adk, purA, gyrB and recA could be used to minimize cost without sacrificing discriminatory power or congruence with Achtman seven-locus MLST when typing commensal E. coli. Copyright © 2016 Elsevier B.V. All rights reserved.
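
    The Simpson's index of diversity quoted above is the usual unbiased (Hunter-Gaston) discriminatory index. A short sketch with made-up type assignments shows the calculation; the counts are illustrative, not the study's data.

```python
# Simpson's index of diversity (Hunter-Gaston discriminatory index), as commonly used to
# compare the resolving power of typing schemes. Type labels below are hypothetical.
from collections import Counter

def simpsons_diversity(type_assignments):
    counts = Counter(type_assignments).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical sequence-type assignments for the same 12 isolates under two schemes:
seven_locus = ["ST10", "ST10", "ST21", "ST58", "ST88", "ST88", "ST101",
               "ST155", "ST155", "ST297", "ST398", "ST540"]
two_locus   = ["C1", "C1", "C1", "C2", "C3", "C3", "C4",
               "C4", "C4", "C5", "C6", "C6"]
print(f"7-locus MLST D = {simpsons_diversity(seven_locus):.3f}")
print(f"2-locus typing D = {simpsons_diversity(two_locus):.3f}")
```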

  2. Microbiological methods for the water recovery systems test, revision 1.1

    NASA Technical Reports Server (NTRS)

    Rhoads, Tim; Kilgore, M. V., Jr.; Mikell, A. T., Jr.

    1990-01-01

    Current microbiological parameters specified to verify microbiological quality of Space Station Freedom water quality include the enumeration of total bacteria, anaerobes, aerobes, yeasts and molds, enteric bacteria, gram positives, gram negatives, and E. coli. In addition, other parameters have been identified as necessary to support the Water Recovery Test activities to be conducted at the NASA/MSFC later this year. These other parameters include aerotolerant eutrophic mesophiles, legionellae, and an additional method for heterotrophic bacteria. If inter-laboratory data are to be compared to evaluate quality, analytical methods must be eliminated as a variable. Therefore, each participating laboratory must utilize the same analytical methods and procedures. Without this standardization, data can be neither compared nor validated between laboratories. Multiple laboratory participation represents a conservative approach to insure quality and completeness of data. Invariably, sample loss will occur in transport and analyses. Natural variance is a reality on any test of this magnitude and is further enhanced because biological entities, capable of growth and death, are specific parameters of interest. The large variation due to the participation of human test subjects has been noted with previous testing. The resultant data might be dismissed as 'out of control' unless intra-laboratory control is included as part of the method or if participating laboratories are not available for verification. The purpose of this document is to provide standardized laboratory procedures for the enumeration of certain microorganisms in water and wastewater specific to the water recovery systems test. The document consists of ten separate cultural methods and one direct count procedure. It is not intended nor is it implied to be a complete microbiological methods manual.

  3. Effects of test design and temperature in a partial life-cycle study with the freshwater gastropod Potamopyrgus antipodarum.

    PubMed

    Macken, Ailbhe; Le Page, Gareth; Hayfield, Amanda; Williams, Timothy D; Brown, Rebecca J

    2012-09-01

    Potamopyrgus antipodarum is a candidate for a standardized mollusk partial life-cycle study. This is a comparative study of two test designs (microplate and beaker), with additional endpoints to the proposed guideline methods, for example, tracking of continuous reproductive output over 28 d and attributing it to individual female snails. In addition, an investigation of the effects of temperature (16, 20, and 25°C) on reproduction was also conducted employing the microplate design. Copyright © 2012 SETAC.

  4. One Primer To Rule Them All: Universal Primer That Adds BBa_B0034 Ribosomal Binding Site to Any Coding Standard 10 BioBrick

    PubMed Central

    2015-01-01

    Here, we present a universal, simple, efficient, and reliable way to add small BioBrick parts to any BioBrick via PCR that is compatible with BioBrick assembly standard 10. As a proof of principle, we have designed a universal primer, rbs_B0034, that contains a ribosomal binding site (RBS; BBa_B0034) and that can be used in PCR to amplify any coding BioBrick that starts with ATG. We performed test PCRs with rbs_B0034 on 31 different targets and found it to be 93.6% efficient. Moreover, when supplemented with a complementary primer, addition of RBS can be accomplished via whole plasmid site-directed mutagenesis, thus reducing the time required for further assembly of composite parts. The described method brings simplicity to the addition of small parts, such as regulatory elements to existing BioBricks. The final product of the PCR assembly is indistinguishable from the standard or 3A BioBrick assembly. PMID:25524097

  5. Traditional and Modern Cell Culture in Virus Diagnosis.

    PubMed

    Hematian, Ali; Sadeghifard, Nourkhoda; Mohebi, Reza; Taherikalani, Morovat; Nasrolahi, Abbas; Amraei, Mansour; Ghafourian, Sobhan

    2016-04-01

    Cell cultures are developed from tissue samples and then disaggregated by mechanical, chemical, and enzymatic methods to extract cells suitable for isolation of viruses. With the recent advances in technology, cell culture is considered a gold standard for virus isolation. This paper reviews the evolution of cell culture methods and demonstrates why cell culture is a preferred method for identification of viruses. In addition, the advantages and disadvantages of both traditional and modern cell culture methods for diagnosis of each type of virus are discussed. Detection of viruses by the novel cell culture methods is considered more accurate and sensitive. However, there is a need to include some more accurate methods such as molecular methods in cell culture for precise identification of viruses.

  6. Stability analysis of spacecraft power systems

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.; Sheble, G. B.; Nelms, R. M.

    1990-01-01

    The problems in applying standard electric utility models, analyses, and algorithms to the study of the stability of spacecraft power conditioning and distribution systems are discussed. Both single-phase and three-phase systems are considered. Of particular concern are the load and generator models that are used in terrestrial power system studies, as well as the standard assumptions of load and topological balance that lead to the use of the positive sequence network. The standard assumptions regarding relative speeds of subsystem dynamic responses that are made in the classical transient stability algorithm, which forms the backbone of utility-based studies, are examined. The applicability of these assumptions to a spacecraft power system stability study is discussed in detail. In addition to the classical indirect method, the applicability of Liapunov's direct methods to the stability determination of spacecraft power systems is discussed. It is pointed out that while the proposed method uses a solution process similar to the classical algorithm, the models used for the sources, loads, and networks are, in general, more accurate. Some preliminary results are given for a linear-graph, state-variable-based modeling approach to the study of the stability of space-based power distribution networks.

  7. Positive effect on patient experience of video information given prior to cardiovascular magnetic resonance imaging: A clinical trial.

    PubMed

    Ahlander, Britt-Marie; Engvall, Jan; Maret, Eva; Ericsson, Elisabeth

    2018-03-01

    To evaluate the effect of video information given before cardiovascular magnetic resonance imaging on patient anxiety and to compare patient experiences of cardiovascular magnetic resonance imaging versus myocardial perfusion scintigraphy. To evaluate whether additional information has an impact on motion artefacts. Cardiovascular magnetic resonance imaging and myocardial perfusion scintigraphy are technically advanced methods for the evaluation of heart diseases. Although cardiovascular magnetic resonance imaging is considered to be painless, patients may experience anxiety due to the closed environment. A prospective randomised intervention study, not registered. The sample (n = 148) consisted of 97 patients referred for cardiovascular magnetic resonance imaging, randomised to receive either video information in addition to standard text-information (CMR-video/n = 49) or standard text-information alone (CMR-standard/n = 48). A third group undergoing myocardial perfusion scintigraphy (n = 51) was compared with the cardiovascular magnetic resonance imaging-standard group. Anxiety was evaluated before, immediately after the procedure and 1 week later. Five questionnaires were used: Cardiac Anxiety Questionnaire, State-Trait Anxiety Inventory, Hospital Anxiety and Depression scale, MRI Fear Survey Schedule and the MRI-Anxiety Questionnaire. Motion artefacts were evaluated by three observers, blinded to the information given. Data were collected between April 2015-April 2016. The study followed the CONSORT guidelines. The CMR-video group scored lower (better) than the cardiovascular magnetic resonance imaging-standard group in the factor Relaxation (p = .039) but not in the factor Anxiety. Anxiety levels were lower during scintigraphic examinations compared to the CMR-standard group (p < .001). No difference was found regarding motion artefacts between CMR-video and CMR-standard. Patient ability to relax during cardiovascular magnetic resonance imaging increased by adding video information prior the exam, which is important in relation to perceived quality in nursing. No effect was seen on motion artefacts. Video information prior to examinations can be an easy and time effective method to help patients cooperate in imaging procedures. © 2017 John Wiley & Sons Ltd.

  8. Simultaneous measurement of sulfur and lead isotopes in sulfides using nanosecond laser ablation coupled with two multi-collector inductively coupled plasma mass spectrometers

    NASA Astrophysics Data System (ADS)

    Yuan, Honglin; Liu, Xu; Chen, Lu; Bao, Zhian; Chen, Kaiyun; Zong, Chunlei; Li, Xiao-Chun; Qiu, Johnson Wenhong

    2018-04-01

    We herein report the coupling of a nanosecond laser ablation system with a large-scale multi-collector inductively coupled plasma mass spectrometer (Nu1700 MC-ICPMS, NP-1700) and a conventional Nu Plasma II MC-ICPMS (NP-II) for the simultaneous laser ablation and determination of in situ S and Pb isotopic compositions of sulfide minerals. We found that the required aerosol distribution between the two spectrometers depended on the Pb content of the sample. For example, for a sulfide containing 100-3000 ppm Pb, the aerosol was distributed between the NP-1700 and the NP-II spectrometers in a 1:1 ratio, while for lead contents >3000 and <100 ppm, these ratios were 5:1 and 1:3, respectively. In addition, S isotopic analysis showed a pronounced matrix effect, so a matrix-matched external standard was used for standard-sample bracketing correction. The NIST NBS 977 (NBS, National Bureau of Standards; NIST, National Institute of Standards & Technology) Tl (thallium) dry aerosol internal standard and the NIST SRM 610 (SRM, standard reference material) external standard were employed to obtain accurate results for the analysis of Pb isotopes. In tandem experiments where airflow conditions were similar to those employed during stand-alone analyses, small changes in the aerosol carrier gas flow did not significantly influence the accurate determination of S and Pb isotope ratios. In addition, careful optimization of the flow ratio of the aerosol carrier (He) and makeup (Ar) gases to match stand-alone analytical conditions allowed comparable S and Pb isotope ratios to be obtained within an error of 2 s analytical uncertainties. Furthermore, the results of tandem analyses obtained using our method were consistent with those of previously reported stand-alone techniques for the S and Pb isotopes of chalcopyrite, pyrite, galena, and sphalerite, thus indicating that this method is suitable for the simultaneous analysis of S and Pb isotopes of natural sulfide minerals, and provides an effective tool to determine S and Pb isotope compositions of sulfides formed through multi-stage deposition routes.

  9. Performance issues for iterative solvers in device simulation

    NASA Technical Reports Server (NTRS)

    Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai

    1994-01-01

    Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.
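
    As a generic illustration of the preconditioning strategy discussed (not the authors' device matrices or code), the sketch below compares a direct sparse solve with ILU-preconditioned GMRES in SciPy on a model Laplacian system.

```python
# Illustration of the general point: an incomplete-LU preconditioner can make a Krylov
# solver (here GMRES) a practical alternative to a direct sparse factorization. The test
# matrix is a generic 5-point Laplacian, not an actual device-simulation Jacobian.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_2d(n):
    """Standard 5-point Laplacian on an n x n grid (sparse, symmetric positive definite)."""
    T = sp.diags([2.0*np.ones(n), -np.ones(n-1), -np.ones(n-1)], [0, -1, 1], format="csc")
    I = sp.identity(n, format="csc")
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

A = laplacian_2d(100)                       # 10,000 unknowns
b = np.ones(A.shape[0])

x_direct = spla.spsolve(A, b)               # direct sparse LU solve (the baseline)

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)     # incomplete LU preconditioner
M = spla.LinearOperator(A.shape, ilu.solve)
x_iter, info = spla.gmres(A, b, M=M, restart=50, maxiter=500)

rel_diff = np.linalg.norm(x_iter - x_direct) / np.linalg.norm(x_direct)
print("GMRES converged:", info == 0, " relative difference vs direct:", rel_diff)
```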

  10. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows a virtual reality 360-degree video player to be created using standard OpenGL ES rendering methods. The framework provides network methods for connecting to a web server that acts as the application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process for rendering additional content based on the video timestamp and the virtual reality head point of view.

  11. Evaluation of different methods for determining growing degree-day thresholds in apricot cultivars

    NASA Astrophysics Data System (ADS)

    Ruml, Mirjana; Vuković, Ana; Milatović, Dragan

    2010-07-01

    The aim of this study was to examine different methods for determining growing degree-day (GDD) threshold temperatures for two phenological stages (full bloom and harvest) and select the optimal thresholds for a greater number of apricot ( Prunus armeniaca L.) cultivars grown in the Belgrade region. A 10-year data series were used to conduct the study. Several commonly used methods to determine the threshold temperatures from field observation were evaluated: (1) the least standard deviation in GDD; (2) the least standard deviation in days; (3) the least coefficient of variation in GDD; (4) regression coefficient; (5) the least standard deviation in days with a mean temperature above the threshold; (6) the least coefficient of variation in days with a mean temperature above the threshold; and (7) the smallest root mean square error between the observed and predicted number of days. In addition, two methods for calculating daily GDD, and two methods for calculating daily mean air temperatures were tested to emphasize the differences that can arise by different interpretations of basic GDD equation. The best agreement with observations was attained by method (7). The lower threshold temperature obtained by this method differed among cultivars from -5.6 to -1.7°C for full bloom, and from -0.5 to 6.6°C for harvest. However, the “Null” method (lower threshold set to 0°C) and “Fixed Value” method (lower threshold set to -2°C for full bloom and to 3°C for harvest) gave very good results. The limitations of the widely used method (1) and methods (5) and (6), which generally performed worst, are discussed in the paper.
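
    Method (7) can be sketched as a simple grid search over candidate base temperatures, choosing the one that minimizes the RMSE between observed and GDD-predicted dates. The daily temperatures and bloom dates below are synthetic placeholders, not the Belgrade data.

```python
# Sketch of selecting a growing-degree-day (GDD) base temperature by minimizing the RMSE
# between observed and predicted event dates. Temperature series and observed dates are
# synthetic; the GDD requirement is calibrated as the mean accumulation at observed dates.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_days = 10, 200
tmean = 5 + 20*np.sin(np.linspace(0, np.pi, n_days)) + rng.normal(0, 2, (n_years, n_days))
observed_doy = rng.integers(95, 110, n_years)          # synthetic full-bloom days

def predicted_doy(tmean_year, base, target_gdd):
    gdd = np.cumsum(np.clip(tmean_year - base, 0, None))
    idx = np.searchsorted(gdd, target_gdd)
    return min(idx, len(gdd) - 1)

def rmse_for_base(base):
    targets = [np.cumsum(np.clip(tmean[i] - base, 0, None))[observed_doy[i]]
               for i in range(n_years)]
    target = np.mean(targets)
    preds = [predicted_doy(tmean[i], base, target) for i in range(n_years)]
    return np.sqrt(np.mean((np.array(preds) - observed_doy) ** 2))

bases = np.arange(-6.0, 10.5, 0.5)
errors = [rmse_for_base(b) for b in bases]
best = bases[int(np.argmin(errors))]
print(f"base temperature minimizing RMSE: {best:.1f} °C (RMSE = {min(errors):.2f} days)")
```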

  12. Domain decomposition methods for nonconforming finite element spaces of Lagrange-type

    NASA Technical Reports Server (NTRS)

    Cowsar, Lawrence C.

    1993-01-01

    In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
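
    For reference, the one-level additive Schwarz preconditioner has the standard form below, where the R_i are restrictions onto the (overlapping) subspaces; this is the textbook statement rather than the paper's nonconforming-space construction.

```latex
M_{\mathrm{AS}}^{-1} \;=\; \sum_{i=0}^{N} R_i^{T} A_i^{-1} R_i ,
\qquad A_i \;=\; R_i\, A\, R_i^{T} .
```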

  13. Ratio of sequential chromatograms for quantitative analysis and peak deconvolution: Application to standard addition method and process monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.

    1990-08-01

    This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and the quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and is used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated by using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
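
    A minimal numerical sketch of the ratio-chromatogram idea is given below using synthetic Gaussian peaks; the weighting scheme, elution window and spike level are simplifications for illustration, not the authors' implementation.

```python
# Sketch of the ratio-chromatogram idea: after baseline correction, divide a chromatogram
# point by point by the previous injection and average the ratio over the analyte's pure
# elution window, weighting so that low-signal (noisy) points count less. Peaks and noise
# levels are synthetic.
import numpy as np

t = np.linspace(0, 10, 1000)
def peak(center, height, width=0.25):
    return height * np.exp(-0.5 * ((t - center) / width) ** 2)

rng = np.random.default_rng(7)
chrom1 = peak(4.0, 1.00) + 0.002 * rng.standard_normal(t.size)   # original injection
chrom2 = peak(4.0, 1.35) + 0.002 * rng.standard_normal(t.size)   # after standard addition

window = (t > 3.4) & (t < 4.6)                 # pure-elution region of the analyte
ratio = chrom2[window] / chrom1[window]
weights = chrom1[window] ** 2                  # simple signal-based weighting
ratio_value = np.average(ratio, weights=weights)

# Standard addition: C2/C1 = ratio_value with C2 = C1 + C_spike, so
c_spike = 0.50                                 # known added concentration (arbitrary units)
c_original = c_spike / (ratio_value - 1.0)
print(f"ratio value = {ratio_value:.3f}, estimated original concentration = {c_original:.3f}")
```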

  14. Key parameters in testing biodegradation of bio-based materials in soil.

    PubMed

    Briassoulis, D; Mistriotis, A

    2018-09-01

    Biodegradation of plastics in soil is currently tested by international standard testing methods (e.g. ISO 17556-12 or ASTM D5988-12). Although these testing methods have been developed for plastics, it has been shown in project KBBPPS that they can be extended also to lubricants with small modifications. Reproducibility is a critical issue regarding biodegradation tests in the laboratory. Among the main testing variables are the soil types and nutrients available (mainly nitrogen). For this reason, the effect of the soil type on the biodegradation rates of various bio-based materials (cellulose and lubricants) was tested for five different natural soil types (loam, loamy sand, clay, clay-loam, and silt-loam organic). It was shown that use of samples containing 1 g of C in a substrate of 300 g of soil with the addition of 0.1 g of N as nutrient strongly improves the reproducibility of the test making the results practically independent of the soil type with the exception of the organic soil. The sandy soil was found to need addition of higher amount of nutrients to exhibit similar biodegradation rates as those achieved with the other soil types. Therefore, natural soils can be used for Standard biodegradation tests of bio-based materials yielding reproducible results with the addition of appropriate nutrients. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Determination of Ca content of coral skeleton by analyte additive method using the LIBS technique

    NASA Astrophysics Data System (ADS)

    Haider, A. F. M. Y.; Khan, Z. H.

    2012-09-01

    The laser-induced breakdown spectroscopic (LIBS) technique was used to study the elemental profile of coral skeletons. Apart from calcium and carbon, which are the main elemental constituents of coral skeleton, elements like Sr, Na, Mg, Li, Si, Cu, Ti, K, Mn, Zn, Ba, Mo, Br and Fe were detected in the coral skeletons from the Inani Beach and the Saint Martin's island of Bangladesh and the coral from the Philippines. In addition to the qualitative analysis, the quantitative analysis of the main elemental constituent, calcium (Ca), was done. The result shows the presence of (36.15±1.43)% by weight of Ca in the coral skeleton collected from the Inani Beach, Cox's Bazar, Bangladesh. It was determined by using six calibration curves, drawn for six emission lines of Ca I (428.301 nm, 428.936 nm, 431.865 nm, 443.544 nm, 443.569 nm, and 445.589 nm), by the standard analyte additive method. The Ca content of the same coral skeleton sample obtained by AAS measurement was 39.87% by weight, which compares fairly well with the result obtained by the analyte additive method.
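
    The underlying standard (analyte) addition calculation is the usual extrapolation of the calibration line to the concentration axis. The sketch below uses illustrative intensities, not the reported LIBS data.

```python
# Generic standard (analyte) addition calculation: fit intensity vs. added concentration
# and extrapolate; the unknown concentration equals intercept/slope (the magnitude of the
# x-intercept). The intensities below are invented for illustration.
import numpy as np

added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])         # added Ca, arbitrary concentration units
signal = np.array([0.52, 0.81, 1.09, 1.38, 1.66])   # emission-line intensity (a.u.)

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope
print(f"estimated analyte concentration in the sample: {c_unknown:.2f} units")
# With several emission lines, one such estimate per calibration curve can be averaged,
# as done in the abstract for the six Ca I lines.
```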

  16. Methods for estimating flood frequency in Montana based on data through water year 1998

    USGS Publications Warehouse

    Parrett, Charles; Johnson, Dave R.

    2004-01-01

    Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
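
    The weighting of estimates can be sketched, in simplified form, as inverse-variance weighting in log space; the report's actual weights also use the cross correlation of residuals between methods, which is ignored here, and the discharges and errors below are illustrative.

```python
# Simplified sketch of weighting flood-frequency estimates from different methods by their
# standard errors of prediction. The cross-correlation term used in the report is omitted,
# and all numbers are hypothetical.
import numpy as np

def weighted_estimate(estimates_cfs, se_percent):
    """Combine log-space estimates with inverse-variance weights from percent standard errors."""
    logq = np.log(np.asarray(estimates_cfs, dtype=float))
    var = (np.asarray(se_percent, dtype=float) / 100.0) ** 2   # rough log-space variance proxy
    w = (1.0 / var) / np.sum(1.0 / var)
    return float(np.exp(np.sum(w * logq))), w

# Hypothetical 100-year peak-flow estimates (cfs) for one ungaged site:
q100, weights = weighted_estimate(
    estimates_cfs=[5200.0, 4300.0, 6100.0],      # basin-characteristic, active-channel, bankfull
    se_percent=[45.0, 70.0, 90.0],
)
print(f"weighted Q100 = {q100:.0f} cfs; weights = {np.round(weights, 2)}")
```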

  17. Development of an extractive spectrophotometric method for estimation of uranium in ore leach solutions using 2-ethylhexyl phosphonic acid-mono-2-ethylhexyl ester (PC88A) and tri-n-octyl phosphine oxide (TOPO) mixture as extractant and 2-(5-bromo-2-pyridylozo)-5-diethyl aminophenol (Br-PADAP) as chromophore

    NASA Astrophysics Data System (ADS)

    Biswas, Sujoy; Pathak, P. N.; Roy, S. B.

    2012-06-01

    An extractive spectrophotometric analytical method has been developed for the determination of uranium in ore leach solution. This technique is based on the selective extraction of uranium from a multielement system using a synergistic mixture of 2-ethylhexyl phosphonic acid-mono-2-ethylhexyl ester (PC88A) and tri-n-octyl phosphine oxide (TOPO) in cyclohexane and color development from the organic phase aliquot using 2-(5-Bromo-2-pyridylazo)-5-diethyl aminophenol (Br-PADAP) as chromogenic reagent. The absorption maximum (λmax) for the UO₂²⁺-Br-PADAP complex in organic phase samples, in 64% (v/v) ethanol containing buffer solution (pH 7.8) and 1,2-cyclohexylenedinitrilotetraacetic acid (CyDTA) complexing agent, has been found to be at 576 nm (molar extinction coefficient, ɛ: 36,750 ± 240 L mol⁻¹ cm⁻¹). Effects of various parameters like stability of the complex, ethanol volume, ore matrix, interfering ions etc. on the determination of uranium have also been evaluated. Absorbance measurements as a function of time showed that the colored complex is stable for >24 h. The presence of an increased amount of ethanol in the colored solution suppresses the absorption of a standard UO₂²⁺-Br-PADAP solution. Analyses of synthetic standards as well as ore leach solutions show that for 10 determinations the relative standard deviation (RSD) is <2%. The accuracy of the developed method has been checked by determining uranium using the standard addition method and was found to be accurate with a 98-105% recovery rate. The developed method has been applied for the analysis of a number of uranium samples generated from uranium ore leach solutions and the results were compared with standard methods like inductively coupled plasma emission spectrometry (ICPAES). The determined values of uranium concentrations by these methods agree within ±2%. This method can be used to determine 2.5-250 μg mL⁻¹ uranium in ore leach solutions with high accuracy and precision.

  18. YeastFab: the design and construction of standard biological parts for metabolic engineering in Saccharomyces cerevisiae

    PubMed Central

    Guo, Yakun; Dong, Junkai; Zhou, Tong; Auxillos, Jamie; Li, Tianyi; Zhang, Weimin; Wang, Lihui; Shen, Yue; Luo, Yisha; Zheng, Yijing; Lin, Jiwei; Chen, Guo-Qiang; Wu, Qingyu; Cai, Yizhi; Dai, Junbiao

    2015-01-01

    It is a routine task in metabolic engineering to introduce multicomponent pathways into a heterologous host for production of metabolites. However, this process sometimes may take weeks to months due to the lack of standardized genetic tools. Here, we present a method for the design and construction of biological parts based on the native genes and regulatory elements in Saccharomyces cerevisiae. We have developed highly efficient protocols (termed YeastFab Assembly) to synthesize these genetic elements as standardized biological parts, which can be used to assemble transcriptional units in a single-tube reaction. In addition, standardized characterization assays are developed using reporter constructs to calibrate the function of promoters. Furthermore, the assembled transcription units can be either assayed individually or applied to construct multi-gene metabolic pathways, which targets a genomic locus or a receiving plasmid effectively, through a simple in vitro reaction. Finally, using β-carotene biosynthesis pathway as an example, we demonstrate that our method allows us not only to construct and test a metabolic pathway in several days, but also to optimize the production through combinatorial assembly of a pathway using hundreds of regulatory biological parts. PMID:25956650

  19. Determination of Trace lead (II) by Resonance Light Scattering Based on Pb (II)-KI-MG System

    NASA Astrophysics Data System (ADS)

    Chen, Ninghua; Yang, Yingchun; Hao, Shuai; Li, Yangmin

    2018-01-01

    In a weakly acidic solution at pH 3.0, Pb²⁺ was found to react with I⁻ to form [PbI₄]²⁻, which further reacted with MG to form an ion-association complex. As a result, new RLS spectra appeared and their intensities were greatly enhanced. Accordingly, a new method was developed for the determination of Pb(II). The appropriate reaction conditions were optimized through experiments. The results show that a strong and stable resonance scattering spectrum emerges at a wavelength of 338 nm. The resonance light scattering intensity (ΔIRLS) has a good linear relationship with the Pb(II) concentration in the range 0.2-1.0 μg/mL. The detection limit (LOD) is 0.0155 μg/mL. The relative standard deviation (RSD) is 3.61% (n=11) for the determination of a 0.6 μg/mL Pb(II) standard solution. The method was successfully applied to the determination of Pb(II) in three environmental water samples (Nongfu Spring water, tap water, laboratory wastewater). The standard addition recoveries were 80%-107%, with relative standard deviations (RSD) between 1.8% and 4.6%.

  20. [Determination of trace heavy metal elements in cortex Phellodendron chinense by ICP-MS after microwave-assisted digestion].

    PubMed

    Kou, Xing-Ming; Xu, Min; Gu, Yong-Zuo

    2007-06-01

    An inductively coupled plasma mass spectrometry (ICP-MS) method for the determination of the contents of 8 trace heavy metal elements in cortex Phellodendron chinense after microwave-assisted digestion of the sample has been developed. The accuracy of the method was evaluated by the analysis of the corresponding trace heavy metal elements in standard reference materials (GBW 07604 and GBW 07605). By applying the proposed method, the contents of 8 trace heavy metal elements in cortex Phellodendron chinense cultivated in different areas of Sichuan (Bazhong, Yibin and Yingjing, respectively) and at different growth periods (6, 8 and 10 years for samples from Yingjing) were determined. The relative standard deviation (RSD) is in the range of 3.2%-17.8% and the recoveries of standard addition are in the range of 70%-120%. The results of the study indicate that the proposed method has the advantages of simplicity, speed and sensitivity. It is suitable for the determination of the contents of 8 trace heavy metal elements in cortex Phellodendron chinense. The results also show that the concentrations of 4 harmful trace heavy metal elements, As, Cd, Hg and Pb, in cortex Phellodendron chinense are all lower than the limits of the Chinese Pharmacopoeia and the Green Trade Standard for Importing and Exporting Medicinal Plant and Preparation. Therefore, the cortex Phellodendron chinense is fit for use as medicine and for export.

  1. Quantitative analysis of active compounds in pharmaceutical preparations by use of attenuated total-reflection Fourier transform mid-infrared spectrophotometry and the internal standard method.

    PubMed

    Sastre Toraño, J; van Hattum, S H

    2001-10-01

    A new method is presented for the quantitative analysis of compounds in pharmaceutical preparations by Fourier transform (FT) mid-infrared (MIR) spectroscopy with an attenuated total reflection (ATR) module. Reduction of the number of overlapping absorption bands, by interaction of the compound of interest with an appropriate solvent, and the employment of an internal standard (IS), makes MIR suitable for quantitative analysis. Vigabatrin, the active compound in vigabatrin 100-mg capsules, was used as a model compound for the development of the method. Vigabatrin was extracted from the capsule content with water after addition of a sodium thiosulfate IS solution. The extract was concentrated by volume reduction and applied to the FTMIR-ATR module. Concentrations of unknown samples were calculated from the ratio of the vigabatrin band area (1321-1610 cm⁻¹) and the IS band area (883-1215 cm⁻¹) using a calibration standard. The ratio of the area of the vigabatrin peak to that of the IS was linear with the concentration in the range of interest (90-110 mg, in twofold; n=2). The accuracy of the method in this range was 99.7-100.5% (n=5) with a variability of 0.4-1.3% (n=5). The comparison of the presented method with an HPLC assay showed similar results; the analysis of five vigabatrin 100-mg capsules resulted in a mean concentration of 102 mg with a variation of 2% with both methods.

  2. Determination of Wastewater Compounds in Whole Water by Continuous Liquid-Liquid Extraction and Capillary-Column Gas Chromatography/Mass Spectrometry

    USGS Publications Warehouse

    Zaugg, Steven D.; Smith, Steven G.; Schroeder, Michael P.

    2006-01-01

    A method for the determination of 69 compounds typically found in domestic and industrial wastewater is described. The method was developed in response to increasing concern over the impact of endocrine-disrupting chemicals on aquatic organisms in wastewater. This method also is useful for evaluating the effects of combined sanitary and storm-sewer overflow on the water quality of urban streams. The method focuses on the determination of compounds that are indicators of wastewater or have endocrine-disrupting potential. These compounds include the alkylphenol ethoxylate nonionic surfactants, food additives, fragrances, antioxidants, flame retardants, plasticizers, industrial solvents, disinfectants, fecal sterols, polycyclic aromatic hydrocarbons, and high-use domestic pesticides. Wastewater compounds in whole-water samples were extracted using continuous liquid-liquid extractors and methylene chloride solvent, and then determined by capillary-column gas chromatography/mass spectrometry. Recoveries in reagent-water samples fortified at 0.5 microgram per liter averaged 72 percent ± 8 percent relative standard deviation. The concentration of 21 compounds is always reported as estimated because method recovery was less than 60 percent, variability was greater than 25 percent relative standard deviation, or standard reference compounds were prepared from technical mixtures. Initial method detection limits averaged 0.18 microgram per liter. Samples were preserved by adding 60 grams of sodium chloride and stored at 4 degrees Celsius. The laboratory established a sample holding-time limit prior to sample extraction of 14 days from the date of collection.

  3. Quality evaluation of no-reference MR images using multidirectional filters and image statistics.

    PubMed

    Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik

    2018-09-01

    This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function for which the shape parameters yielded different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with the conventional full-reference assessment methods; accordingly, this proposed method yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, thus demonstrating its superior performance over other no-reference IQAs. Our proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
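
    The statistical core (fitting a generalized Gaussian to directional filter responses and scoring by the distance of the shape parameters from reference statistics) can be sketched as below; the filters, reference values, and distance measure are simplified assumptions, not the published method.

```python
# Sketch of the core statistical step: apply simple directional (derivative) filters, fit a
# generalized Gaussian distribution to the responses, and score an image by how far its
# shape parameters fall from reference statistics learned on undistorted images.
import numpy as np
from scipy.ndimage import convolve
from scipy.stats import gennorm

KERNELS = {
    "horizontal": np.array([[-1.0, 1.0]]),
    "vertical": np.array([[-1.0], [1.0]]),
    "diagonal": np.array([[-1.0, 0.0], [0.0, 1.0]]),
}

def shape_parameters(image):
    """Fit a zero-mean generalized Gaussian to each directional response; return beta values."""
    betas = {}
    for name, k in KERNELS.items():
        resp = convolve(image.astype(float), k, mode="reflect").ravel()
        beta, loc, scale = gennorm.fit(resp, floc=0.0)
        betas[name] = beta
    return betas

def quality_score(image, reference_betas):
    betas = shape_parameters(image)
    return sum(abs(betas[k] - reference_betas[k]) for k in KERNELS)

# Illustrative reference statistics (would normally be learned from undistorted MR images):
reference = {"horizontal": 0.7, "vertical": 0.7, "diagonal": 0.75}
test_image = np.random.default_rng(3).random((128, 128))
print("quality score (lower is better under these assumptions):",
      round(quality_score(test_image, reference), 3))
```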

  4. Efficient Bayesian hierarchical functional data analysis with basis function approximations using Gaussian-Wishart processes.

    PubMed

    Yang, Jingjing; Cox, Dennis D; Lee, Jong Soo; Ren, Peng; Choi, Taeryon

    2017-12-01

    Functional data are defined as realizations of random functions (mostly smooth functions) varying over a continuum, which are usually collected on discretized grids with measurement errors. In order to accurately smooth noisy functional observations and deal with the issue of high-dimensional observation grids, we propose a novel Bayesian method based on a Bayesian hierarchical model with a Gaussian-Wishart process prior and basis function representations. We first derive an induced model for the basis-function coefficients of the functional data, and then use this model to conduct posterior inference through Markov chain Monte Carlo methods. Compared to the standard Bayesian inference, which suffers from a serious computational burden and instability in analyzing high-dimensional functional data, our method greatly improves computational scalability and stability, while inheriting the advantage of simultaneously smoothing raw observations and estimating the mean-covariance functions in a nonparametric way. In addition, our method can naturally handle functional data observed on random or uncommon grids. Simulation studies and real-data analyses demonstrate that our method produces similar results to those obtainable by the standard Bayesian inference with low-dimensional common grids, while efficiently smoothing and estimating functional data with random and high-dimensional observation grids when the standard Bayesian inference fails. In conclusion, our method can efficiently smooth and estimate high-dimensional functional data, providing one way to resolve the curse of dimensionality for Bayesian functional data analysis with Gaussian-Wishart processes. © 2017, The International Biometric Society.

  5. Spectrofluorimetric analysis of famotidine in pharmaceutical preparations and biological fluids by derivatization with benzoin

    NASA Astrophysics Data System (ADS)

    Alamgir, Malik; Khuhawar, Muhammad Yar; Memon, Saima Q.; Hayat, Amir; Zounr, Rizwan Ali

    2015-01-01

    A sensitive and simple spectrofluorimetric method has been developed for the analysis of famotidine in pharmaceutical preparations and biological fluids after derivatization with benzoin. The reaction was carried out in alkaline medium, with the fluorescence intensity measured at 446 nm and excitation at 286 nm. Linear calibration was obtained over 0.5-15 μg/ml with a coefficient of determination (r2) of 0.997. The factors affecting the fluorescence intensity were optimized. Pharmaceutical additives and amino acids did not interfere in the determination. The mean percentage recovery (n = 4) calculated by standard addition from pharmaceutical preparations was 94.8-98.2% with a relative standard deviation (RSD) of 1.56-3.34%, and recovery from deproteinized spiked serum and urine of healthy volunteers was 98.6-98.9% and 98.0-98.4% with RSDs of 0.34-0.84% and 0.29-0.87%, respectively.
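    As a minimal illustration of the spike-recovery arithmetic reported above, the sketch below computes percent recovery from a spiked sample; the concentrations and the helper function are hypothetical, not values or code from the study.

```python
# Minimal sketch: percent recovery of an analyte from a spiked sample.
# All concentrations below are illustrative placeholders, not data from the study.

def percent_recovery(measured_spiked, measured_unspiked, amount_added):
    """Recovery (%) = (found in spiked - found in unspiked) / added * 100."""
    return (measured_spiked - measured_unspiked) / amount_added * 100.0

# Example: serum measured at 0.40 ug/ml before spiking, 5.32 ug/ml after a 5 ug/ml spike.
rec = percent_recovery(measured_spiked=5.32, measured_unspiked=0.40, amount_added=5.0)
print(f"Recovery: {rec:.1f}%")   # -> Recovery: 98.4%
```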

  6. [ELEMENTS OF A SYSTEMATIC APPROACH TO HYGIENIC REGULATION OF XENOBIOTICS].

    PubMed

    Shtabskiy, B M; Gzhegotskiy, M R; Shafran, L M

    2016-01-01

    Hygienic standardization (HS) of chemicals remains one of the effective ways to ensure the chemical safety of the population. Hygienic standards (such as maximum allowable concentrations, MACs) are interrelated and aggregated into coherent systems. The task of the study was therefore to establish the logic of inter-standard relations among existing standards and to formalize legitimate interrelations such as MACwz/MACatm (i.e., to systematize standards) and CL₅₀/MACwz (reflecting a reliability ratio). In the suggested systemic approach, the benchmark indices of the proposed HS system are the MACwz values; standards for other media, including atmospheric air, can be treated as derived components of the MACwz. The studies and calculations performed made it possible to justify and implement this systemic approach in HS practice in Ukraine. Further work is needed on cases where the LC₅₀ cannot be reached experimentally, on justifying standards for the general population in the absence of a MACwz, and on comparison with the normative databases of other countries. It is also necessary to introduce a permissible deviation from the systemness requirements, to incorporate conditions (1)-(7) into a general principle prohibiting larger deviations, and to harmonize existing and newly introduced standards within the modern ideology and methods of HS of harmful substances. This opens up broad prospects for a new phase of HS and a significant increase in the reliability of results obtained by various methods and in different laboratories.

  7. Comparison of current practices of cardiopulmonary perfusion technology in Iran with American Society of Extracorporeal Technology’s standards

    PubMed Central

    Faravan, Amir; Mohammadi, Nooredin; Alizadeh Ghavidel, Alireza; Toutounchi, Mohammad Zia; Ghanbari, Ameneh; Mazloomi, Mehran

    2016-01-01

    Introduction: Standards have a significant role in defining the minimum level of optimal and expected performance. Since perfusion technology staff play a leading role in providing quality services to patients undergoing open heart surgery with a cardiopulmonary bypass machine, this study aimed to assess how Iranian perfusion technology staff evaluate and manage patients during cardiopulmonary bypass and to compare their practice with the standards recommended by the American Society of Extracorporeal Technology. Methods: In this descriptive study, data were collected from 48 Iranian public hospitals and educational health centers through a researcher-created questionnaire. The questionnaire assessed the standards recommended by the American Society of Extracorporeal Technology. Results: Findings showed that the appropriate measures (preventing hemodilution and avoiding transfusion of blood and unnecessary blood products, determining the initial dose of heparin based on one of the proposed methods, monitoring anticoagulation based on ACT measurement, and determining additional doses of heparin during cardiopulmonary bypass based on ACT or protamine titration) were carried out in only 4.2% of the hospitals and health centers. Conclusion: Current practices of cardiopulmonary perfusion technology in Iran are inappropriate when judged against the standards of the American Society of Cardiovascular Perfusion. This underlines the need for authorities to attend to validation programs and the development of caring standards on the one hand, and to continuous assessment of the use of these standards on the other. PMID:27489600

  8. Comparative analysis of success of psoriasis treatment with standard therapeutic modalities and balneotherapy.

    PubMed

    Baros, Duka Ninković; Gajanin, Vesna S; Gajanin, Radoslav B; Zrnić, Bogdan

    2014-01-01

    Psoriasis is a chronic, inflammatory, immune-mediated skin disease. In addition to standard therapeutic modalities (antibiotics, cytostatics, phototherapy, photochemotherapy and retinoids), nonstandard methods can be used in the treatment of psoriasis; these include balneotherapy, which is most commonly used in combination with other therapeutic resources. The aim of this research was to determine the length of remission of psoriasis in patients treated with standard therapeutic modalities, balneotherapy, and combined treatment (standard therapeutic modalities and balneotherapy). The study analyzed 60 adult patients of both sexes, with different clinical forms of psoriasis, who were divided into three groups according to the applied therapeutic modalities: the first group (treated with standard therapeutic modalities), the second group (treated with balneotherapy) and the third group (treated with combined therapy, i.e. standard therapeutic modalities and balneotherapy). The Psoriasis Area and Severity Index was determined in the first, third and sixth week of treatment for all patients. The following laboratory analyses were performed and monitored in the first and sixth week of treatment: C-reactive protein, iron with total iron binding capacity, unsaturated iron binding capacity and ferritin, uric acid, rheumatoid factors and antibodies to streptolysin O. The average length of remission in patients treated with standard therapeutic modalities and in those treated with balneotherapy was 1.77 +/- 0.951 months and 1.79 +/- 0.918 months, respectively. There was a statistically significant difference in the duration of remission between the patients treated with combination therapy and those treated with standard therapeutic modalities (p = 0.019) or balneotherapy (p = 0.032). The best results were achieved with combination therapy.

  9. 75 FR 68608 - Information Collection; Request for Authorization of Additional Classification and Rate, Standard...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Authorization of Additional Classification and Rate, Standard Form 1444 AGENCY: Department of Defense (DOD... of Additional Classification and Rate, Standard Form 1444. DATES: Comments may be submitted on or.../or business confidential information provided. FOR FURTHER INFORMATION CONTACT: Mr. Ernest Woodson...

  10. Trends in Computer-Aided Manufacturing in Prosthodontics: A Review of the Available Streams

    PubMed Central

    Bennamoun, Mohammed

    2014-01-01

    In prosthodontics, conventional methods of fabrication of oral and facial prostheses have been considered the gold standard for many years. The development of computer-aided manufacturing and the medical application of this industrial technology have provided an alternative way of fabricating oral and facial prostheses. This narrative review aims to evaluate the different streams of computer-aided manufacturing in prosthodontics. To date, there are two streams: the subtractive and the additive approaches. The differences reside in the processing protocols, materials used, and their respective accuracy. In general, there is a tendency for the subtractive method to provide more homogeneous objects with acceptable accuracy that may be more suitable for the production of intraoral prostheses where high occlusal forces are anticipated. Additive manufacturing methods have the ability to produce large workpieces with significant surface variation and competitive accuracy. Such advantages make them ideal for the fabrication of facial prostheses. PMID:24817888

  11. Extractive-spectrophotometric determination of disopyramide and irbesartan in their pharmaceutical formulation

    NASA Astrophysics Data System (ADS)

    Abdellatef, Hisham E.

    2007-04-01

    Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied to the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of the Student's t-test and the variance ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods, showing good agreement with high precision and accuracy.

  12. Development of a selective and pH-independent method for the analysis of ultra trace amounts of nitrite in environmental water samples after dispersive magnetic solid phase extraction by spectrofluorimetry.

    PubMed

    Daneshvar Tarigh, Ghazale; Shemirani, Farzaneh

    2014-10-01

    This paper describes an innovative and rapid dispersive magnetic solid phase extraction spectrofluorimetry (DMSPE-FL) method for the analysis of trace amounts of nitrite in environmental water samples. The method comprises derivatization of aqueous nitrite with 2,3-diaminonaphthalene (DAN) and analysis of the highly fluorescent 2,3-naphthotriazole (NAT) derivative by spectrofluorimetry after DMSPE. The novelty of the method lies in the fact that NAT formation is pH-independent and that NAT is adsorbed onto the magnetic multiwalled carbon nanotube (MMWCNT) sorbent through hydrophobic attractions in both acidic and basic media. The extraction efficiency of the sorbent was investigated by extraction of nitrite. The optimum extraction conditions for NO2(-) were an extraction time of 1.5 min, 10 mg of sorbent in 160 mL of sample solution, and elution with 1 mL of acetone/KOH. Under the optimal conditions, the calibration curve was linear over the range 0.1-80 µg L(-1) (R(2)=0.999) and the LOD (S/N=3) was 34 ng L(-1). The relative standard deviation (RSD) was 0.6% (five replicates at 5 μg L(-1)). In addition, the feasibility of the method was demonstrated by the extraction and determination of nitrite in real samples, including tap, mineral, sea, rain, snow and ground waters, with recoveries by standard addition to the real matrices of 94-102% and RSDs of 1.8-10.6%. Copyright © 2014 Elsevier B.V. All rights reserved.
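    The calibration and detection-limit figures quoted above follow the usual linear-calibration conventions; the sketch below shows one common way such numbers are derived (least-squares slope, R², and an LOD taken as three times the blank standard deviation divided by the slope). All values, including the blank standard deviation, are invented for illustration.

```python
# Minimal sketch of a linear calibration with an LOD estimated as 3*s_blank/slope
# (one common reading of the "S/N = 3" convention). All numbers are illustrative.
import numpy as np

conc = np.array([0.1, 1.0, 5.0, 20.0, 40.0, 80.0])        # ug/L, nitrite standards
signal = np.array([0.9, 8.2, 41.0, 160.5, 322.0, 641.0])  # fluorescence, arbitrary units

slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((signal - pred) ** 2) / np.sum((signal - signal.mean()) ** 2)

blank_sd = 0.09               # standard deviation of replicate blanks (assumed)
lod = 3 * blank_sd / slope    # in ug/L

print(f"slope = {slope:.3f}, R^2 = {r2:.4f}, LOD ~ {lod * 1000:.0f} ng/L")
```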

  13. Evaluation of subset matching methods and forms of covariate balance.

    PubMed

    de Los Angeles Resa, María; Zubizarreta, José R

    2016-11-30

    This paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching because by construction it satisfies balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower root-mean-square errors, provided strong requirements for balance are imposed, specifically fine balance or strength-k balance plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference in means treatment effect estimator is used. In particular, if the true outcome model is additive, then marginal distributions should be balanced, and if the true outcome model is additive with interactions, then low-dimensional joints should be balanced. Copyright © 2016 John Wiley & Sons, Ltd.
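    The 0.1-standard-deviation balance rule mentioned above can be checked with a few lines of code; the sketch below computes absolute standardized mean differences on simulated covariates. It is a generic balance check only, not an implementation of the matching methods compared in the paper.

```python
# Minimal sketch of the covariate-balance check described above: absolute
# standardized differences in means across treatment groups, flagged at 0.1 SD.
# Data are simulated; no claim is made about the matching methods themselves.
import numpy as np

rng = np.random.default_rng(0)
treated = rng.normal(loc=[0.2, 1.0], scale=1.0, size=(100, 2))  # two covariates
control = rng.normal(loc=[0.0, 1.0], scale=1.0, size=(150, 2))

def std_diff(x_t, x_c):
    """Difference in means divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return (x_t.mean() - x_c.mean()) / pooled_sd

for j in range(treated.shape[1]):
    d = std_diff(treated[:, j], control[:, j])
    flag = "balanced" if abs(d) < 0.1 else "imbalanced"
    print(f"covariate {j}: std. diff = {d:+.3f} ({flag})")
```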

  14. EMRP JRP MetNH3: Towards a Consistent Metrological Infrastructure for Ammonia Measurements in Ambient Air

    NASA Astrophysics Data System (ADS)

    Leuenberger, Daiana; Balslev-Harder, David; Braban, Christine F.; Ebert, Volker; Ferracci, Valerio; Gieseking, Bjoern; Hieta, Tuomas; Martin, Nicholas A.; Pascale, Céline; Pogány, Andrea; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard

    2016-04-01

    Measuring ammonia in ambient air is a sensitive and priority issue due to its harmful effects on human health and ecosystems. In addition to its acidifying effect on natural waters and soils and the additional nitrogen input to ecosystems, ammonia is an important precursor for secondary aerosol formation in the atmosphere. The European Directive 2001/81/EC on "National Emission Ceilings for Certain Atmospheric Pollutants (NEC)" regulates ammonia emissions in the member states. However, there is a lack of regulation regarding certified reference material (CRM), applicable analytical methods, measurement uncertainty, quality assurance and quality control (QC/QA) procedures, as well as in the infrastructure needed to attain metrological traceability. As shown in a key comparison in 2007, there are even discrepancies between reference materials provided by European National Metrology Institutes (NMIs) at amount fraction levels up to three orders of magnitude higher than ambient air levels. MetNH3 (Metrology for ammonia in ambient air), a three-year project that started in June 2014 in the framework of the European Metrology Research Programme (EMRP), aims to reduce the gap between the requirements set by the European emission regulations and the state of the art of analytical methods and reference materials. The overarching objective of the JRP is to achieve metrological traceability for ammonia measurements in ambient air, from primary certified reference material (CRM) and instrumental standards to the field level. This requires the successful completion of three main goals, which have been assigned to three technical work packages: (1) to develop improved reference gas mixtures by static and dynamic gravimetric generation methods, that is, the realisation and characterisation of traceable preparative calibration standards (in pressurised cylinders as well as mobile generators) of ammonia amount fractions similar to those in ambient air, based on existing methods for other reactive analytes; the target uncertainty is < 1% for static mixtures at the 10 to 100 μmol/mol level and < 3% for portable dynamic generators in the 0 to 500 nmol/mol amount fraction range, with special emphasis on the minimisation of adsorption losses; (2) to develop and characterise laser-based optical spectrometric standards, that is, the evaluation and characterisation of the applicability of a newly developed open-path technique as well as existing extractive measurement techniques as optical transfer standards according to metrological standards; and (3) to establish the transfer from high-accuracy standards to field-applicable methods, employing characterised exposure chambers as well as field sites for validation and comparison experiments to test and evaluate the performance of different instruments and measurement methods at ambient-air ammonia amount fractions. Active exchange in workshops and inter-comparisons, publications in technical journals, and presentations at relevant conferences and standardisation bodies will transfer the knowledge to stakeholders and end-users. The work has been carried out in the framework of the EMRP. The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.

  15. Updated techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada

    USGS Publications Warehouse

    Hess, Glen W.

    2002-01-01

    Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
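    As a rough sketch of the logarithmic-regression step described above, the code below fits monthly mean streamflow to two basin characteristics in log10 space and converts the residual standard error to an approximate percent standard error. The data, the choice of predictors, and the percent conversion are illustrative assumptions, not the relations developed for central Nevada.

```python
# Minimal sketch of a logarithmic (log-log) regression of monthly mean streamflow
# on basin characteristics, with a standard error of estimate expressed in percent.
# All data are synthetic placeholders.
import numpy as np

drainage_area = np.array([12.0, 30.0, 55.0, 80.0, 120.0, 200.0])   # mi^2
pct_above_10k = np.array([2.0, 5.0, 8.0, 6.0, 12.0, 15.0])         # percent of area
q_monthly = np.array([1.1, 3.4, 7.9, 9.5, 21.0, 40.0])             # ft^3/s

X = np.column_stack([np.ones_like(drainage_area),
                     np.log10(drainage_area),
                     np.log10(pct_above_10k)])
y = np.log10(q_monthly)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
se_log = np.sqrt(np.sum(resid ** 2) / (len(y) - X.shape[1]))
# Approximate conversion of a log10 standard error to a percent standard error.
se_percent = 100 * np.sqrt(np.exp((np.log(10) * se_log) ** 2) - 1)

print("coefficients (log10 space):", np.round(coef, 3))
print(f"standard error of estimate ~ {se_percent:.0f}%")
```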

  16. Treatment outcomes in palliative care: the TOPCare study. A mixed methods phase III randomised controlled trial to assess the effectiveness of a nurse-led palliative care intervention for HIV positive patients on antiretroviral therapy

    PubMed Central

    2012-01-01

    Background Patients with HIV/AIDS on Antiretroviral Therapy (ART) suffer from physical, psychological and spiritual problems. Despite international policy explicitly stating that a multidimensional approach such as palliative care should be delivered throughout the disease trajectory and alongside treatment, the effectiveness of this approach has not been tested in ART-experienced populations. Methods/design This mixed methods study uses a Randomised Controlled Trial (RCT) to test the null hypothesis that receipt of palliative care in addition to standard HIV care does not affect pain compared to standard care alone. An additional qualitative component will explore the mechanism of action and participant experience. The sample size is designed to detect a statistically significant decrease in reported pain, determined by a two-tailed test and a p value of ≤0.05. Recruited patients will be adults on ART for more than one month, who report significant pain or symptoms which have lasted for more than two weeks (as measured by the African Palliative Care Association (APCA) African Palliative Outcome Scale (POS)). The intervention under trial is palliative care delivered by an existing HIV facility nurse trained to a set standard. Following an initial pilot the study will be delivered in two African countries, using two parallel independent Phase III clinical RCTs. Qualitative data will be collected from semi-structured interviews and documentation from clinical encounters, to explore the experience of receiving palliative care in this context. Discussion The data provided by this study will provide evidence to inform the improvement of outcomes for people living with HIV and on ART in Africa. ClinicalTrials.gov Identifier: NCT01608802 PMID:23130740

  17. Electrospray ionization and time-of-flight mass spectrometric method for simultaneous determination of spermidine and spermine.

    PubMed

    Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku

    2007-10-01

    A sensitive method for the determination of polyamines in mammalian cells using an electrospray ionization time-of-flight mass spectrometer is described. This method was 50-fold more sensitive than the previous method using ionspray ionization and a quadrupole mass spectrometer. The method employs partial purification and derivatization of polyamines, but allows the measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample was approximately 2 min. The method was successfully applied to the determination of reduced spermidine and spermine contents in cultured cells under inhibition of aminopropyltransferases. In addition, a new, suitable internal standard was proposed for tracer experiments using (15)N-labeled polyamines.

  18. 21 CFR 173.300 - Chlorine dioxide.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 3 2011-04-01 2011-04-01 false Chlorine dioxide. 173.300 Section 173.300 Food and... Additives § 173.300 Chlorine dioxide. Chlorine dioxide (CAS Reg. No. 10049-04-4) may be safely used in food... chlorine dioxide with respect to all chlorine species as determined by Method 4500-ClO2 E in the “Standard...

  19. Analysis of tincal ore waste by energy dispersive X-ray fluorescence (EDXRF) Technique

    NASA Astrophysics Data System (ADS)

    Kalfa, Orhan Murat; Üstündağ, Zafer; Özkırım, Ilknur; Kagan Kadıoğlu, Yusuf

    2007-01-01

    Etibank Borax Plant is located in Kırka-Eskişehir, Turkey. The borax waste from this plant was analyzed by means of energy dispersive X-ray fluorescence (EDXRF). The standard addition method was used for the determination of the concentration of Al, Fe, Zn, Sn, and Ba. The results are presented and discussed in this paper.
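    For readers unfamiliar with the extrapolation step of the standard addition method mentioned here, the sketch below fits signal versus added concentration and reads the unknown concentration off the x-intercept; the counts and concentrations are placeholders, not data from the tincal-waste analysis.

```python
# Minimal sketch of the classical standard-addition calculation: fit signal versus
# added concentration and extrapolate to the x-intercept to recover the unknown.
# Numbers are illustrative only.
import numpy as np

added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])           # mg/L of analyte added
signal = np.array([120.0, 178.0, 241.0, 298.0, 362.0])   # EDXRF counts (arbitrary)

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope     # magnitude of the x-intercept = concentration in the sample
print(f"estimated concentration in sample: {c_unknown:.2f} mg/L")
```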

  20. PARAMETRIC AND NON PARAMETRIC (MARS: MULTIVARIATE ADDITIVE REGRESSION SPLINES) LOGISTIC REGRESSIONS FOR PREDICTION OF A DICHOTOMOUS RESPONSE VARIABLE WITH AN EXAMPLE FOR PRESENCE/ABSENCE OF AMPHIBIANS

    EPA Science Inventory

    The purpose of this report is to provide a reference manual that could be used by investigators for making informed use of logistic regression using two methods (standard logistic regression and MARS). The details for analyses of relationships between a dependent binary response ...

  1. Autogenous accelerated curing of concrete cylinders. Part V, ASTM Cooperative Testing Program with additional emphasis on the influence of container and storage characteristics (supplemented by data on water bath curing from an earlier council project).

    DOT National Transportation Integrated Search

    1971-01-01

    Concomitant with the Research Council's studies of accelerated curing for strength testing, Subcommittee II-i of ASTM Committee C-9 was developing and refining accelerated methods for standardization. This development included a cooperative testing p...

  2. 76 FR 72769 - National Emissions Standards for Hazardous Air Pollutants: Mineral Wool Production and Wool...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-25

    ...-2010-1041 and EPA-HQ-OAR-2010-1042, by one of the following methods: http://www.regulations.gov... units: a cupola furnace for melting the mineral charge; a blow chamber in which air and, in some cases... addition to one complete version of the comment that includes information claimed as CBI, a copy of the...

  3. A Decision Tool to Evaluate Budgeting Methodologies for Estimating Facility Recapitalization Requirements

    DTIC Science & Technology

    2008-03-01

    ... Additionally, public sector organizations typically have a larger inventory of facilities to maintain, making asset management ... The study answered the following questions: (1) What are the long-term causes and effects of under-funding the maintenance of facilities? (2) What methods currently ...

  4. CTEPP STANDARD OPERATING PROCEDURE FOR COLLECTION OF FLOOR DUST SAMPLES FOR PERSISTENT ORGANIC POLLUTANTS (SOP-2.19)

    EPA Science Inventory

    This SOP describes the method for collecting a floor dust sample from carpet. Dust samples will be collected in the room that the child uses most at home and/or at day care using a High Volume Small Surface Sampler (HVS3). In addition, participants will also be asked to donate a ...

  5. Determination of eugenol, anethole, and coumarin in the mainstream cigarette smoke of Indonesian clove cigarettes.

    PubMed

    Polzin, Gregory M; Stanfill, Stephen B; Brown, Candace R; Ashley, David L; Watson, Clifford H

    2007-10-01

    Indonesian clove cigarettes (kreteks) typically have the appearance of a conventional domestic cigarette. The unique aspects of kreteks are that, in addition to tobacco, they contain dried clove buds (15-40% by wt.) and are flavored with a proprietary "sauce". Whereas the clove buds contribute to generating high levels of eugenol in the smoke, the "sauce" may also contribute other potentially harmful constituents in addition to those associated with tobacco use. We measured levels of eugenol, trans-anethole (anethole), and coumarin in smoke from 33 brands of clove-flavored cigarettes (filtered and unfiltered) from five kretek manufacturers. In order to provide information for evaluating the delivery of these compounds under standard smoking conditions, a quantification method was developed for their measurement in mainstream cigarette smoke. The method involved collection of mainstream cigarette smoke particulate matter on a Cambridge filter pad, extraction with methanol, sampling by automated headspace solid-phase microextraction, and subsequent analysis using gas chromatography/mass spectrometry. The presence of these compounds in the smoke of kreteks was confirmed using mass spectral library matching, high-resolution mass spectrometry (+/-0.0002 amu), agreement with a relative retention time index, and native standards. We found that when kreteks were smoked according to the standardized machine-smoking parameters specified by the International Standards Organization, all 33 clove brands yielded eugenol in the smoke at levels ranging from 2,490 to 37,900 micrograms per cigarette (microg/cig). Anethole was detected in smoke from 13 brands at levels of 22.8-1,030 microg/cig, and coumarin was detected in 19 brands at levels ranging from 9.2 to 215 microg/cig. These levels are significantly higher than the levels found in commercial cigarette brands available in the United States.

  6. Kaizen method for esophagectomy patients: improved quality control, outcomes, and decreased costs.

    PubMed

    Iannettoni, Mark D; Lynch, William R; Parekh, Kalpaj R; McLaughlin, Kelley A

    2011-04-01

    The majority of costs associated with esophagectomy are related to the initial 3 days of hospital stay, which involve intensive care unit stays, ventilator support, and intraoperative time. Additional costs arise from hospital-based services. The major cost increases are related to complications associated with the procedure. We attempted to define these costs and identify expense management by streamlining care through strict adherence to patient care maps, operative standardization, and rapid discharge planning to reduce variability. Utilizing the methods of the Kaizen philosophy, we evaluated all processes related to the entire experience of esophageal resection. This process has taken over 5 years, with quality and cost tracked over this time period. Cost analysis included expenses related to the intensive care unit, anesthesia, disposables, and hospital services. Quality improvement measures were related to intraoperative complications, in-hospital complications, and postoperative outcomes. The Institutional Review Board approved the use of anonymous data from standard clinical practice because no additional treatment was planned (observational study). Utilizing a continuous process improvement methodology, a 43% reduction in cost per case has been achieved, with a significant increase in the contribution margin for esophagectomy. The length of stay has been reduced from 14 days to 5. With intraoperative and postoperative standardization, the leak rate has dropped from 12% to less than 3%, with no leaks in the last 64 patients under our current Kaizen modification of care. Utilizing lean manufacturing techniques and continuous process evaluation, we have attempted to eliminate variability and standardize the phases of care, resulting in improved outcomes, decreased length of stay, and improved contribution margins. These Kaizen improvements require continuous interventions, strict adherence to care maps, and input from all levels for quality improvements. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  7. How Well Does Physician Selection of Microbiologic Tests Identify Clostridium difficile and other Pathogens in Paediatric Diarrhoea? Insights Using Multiplex PCR-Based Detection

    PubMed Central

    Stockmann, Chris; Rogatcheva, Margarita; Harrel, Brian; Vaughn, Mike; Crisp, Rob; Poritz, Mark; Thatcher, Stephanie; Korgenski, Ernest K; Barney, Trenda; Daly, Judy; Pavia, Andrew T

    2014-01-01

    The objective of this study was to compare the aetiologic yield of standard of care microbiologic testing ordered by physicians with that of a multiplex PCR platform. Stool specimens obtained from children and young adults with gastrointestinal illness were evaluated by standard laboratory methods and a developmental version of the FilmArray Gastrointestinal Diagnostic System (FilmArray GI Panel), a rapid multiplex PCR platform that detects 23 bacterial, viral, and protozoal agents. Results were classified according to the microbiologic tests requested by the treating physician. A median of 3 (range 1-10) microbiologic tests were performed by the clinical laboratory during 378 unique diarrhoeal episodes. A potential aetiologic agent was identified in 46% of stool specimens by standard laboratory methods and in 65% of specimens tested using the FilmArray GI Panel (P<0.001). For those patients who only had Clostridium difficile testing requested, an alternative pathogen was identified in 29% of cases with the FilmArray GI Panel. Notably, 11 (12%) cases of norovirus were identified among children who only had testing for C. difficile ordered. Among those who had C. difficile testing ordered in combination with other tests, an additional pathogen was identified in 57% of stool specimens with the FilmArray GI Panel. For patients who had no C. difficile testing performed, the FilmArray GI Panel identified a pathogen in 63% of cases, including C. difficile in 8%. Physician-specified laboratory testing may miss important diarrhoeal pathogens. Additionally, standard laboratory testing is likely to underestimate co-infections with multiple infectious diarrhoeagenic agents. PMID:25599941

  8. Measurement of reaeration coefficients for selected Florida streams

    USGS Publications Warehouse

    Hampson, P.S.; Coffin, J.E.

    1989-01-01

    A total of 29 separate reaeration coefficient determinations were performed on 27 subreaches of 12 selected Florida streams between October 1981 and May 1985. Measurements performed prior to June 1984 were made using the peak and area methods with ethylene and propane as the tracer gases. Later measurements utilized the steady-state method with propane as the only tracer gas. The reaeration coefficients ranged from 1.07 to 45.9 per day, with a mean estimated probable error of ±16.7%. Ten predictive equations (compiled from the literature) were also evaluated using the measured coefficients. The most representative equation was one of the energy-dissipation type, with a standard error of 60.3%. Seven of the 10 predictive equations were additionally modified using the measured coefficients and nonlinear regression techniques. The most accurate of the developed equations was also of the energy-dissipation form and had a standard error of 54.9%. For 5 of the 13 subreaches in which both ethylene and propane were used, the ethylene data resulted in substantially larger reaeration coefficient values, which were rejected. In these reaches, ethylene concentrations were probably significantly affected by one or more electrophilic addition reactions known to occur in aqueous media. (Author's abstract)
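    The modification of predictive equations by nonlinear regression can be sketched as a simple power-law fit of an energy-dissipation-type relation; the functional form, data, and error summary below are illustrative assumptions rather than the equations actually developed in the study.

```python
# Minimal sketch of refitting an energy-dissipation-type predictive equation,
# K2 = a * (slope * velocity)**b, to measured reaeration coefficients by nonlinear
# least squares. The functional form and data are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

slope_velocity = np.array([0.002, 0.005, 0.01, 0.03, 0.08, 0.15])  # channel slope * velocity
k2_measured = np.array([1.2, 2.6, 4.1, 9.8, 21.0, 38.0])           # reaeration coefficient, per day

def energy_dissipation(x, a, b):
    return a * x ** b

(a, b), _ = curve_fit(energy_dissipation, slope_velocity, k2_measured, p0=(100.0, 0.8))
pred = energy_dissipation(slope_velocity, a, b)
se_percent = 100 * np.sqrt(np.mean(((k2_measured - pred) / k2_measured) ** 2))
print(f"a = {a:.1f}, b = {b:.2f}, approx. standard error ~ {se_percent:.0f}%")
```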

  9. Wind Tunnel Force Balance Calibration Study - Interim Results

    NASA Technical Reports Server (NTRS)

    Rhew, Ray D.

    2012-01-01

    Wind tunnel force balance calibration is performed using a variety of different methods and does not have a directly traceable standard, such as the standards used for most calibration practices (weights and voltmeters). These different calibration methods and practices include, but are not limited to, the loading schedule, the load application hardware, manual and automatic systems, and re-leveling versus non-re-leveling. A study of the balance calibration techniques used by NASA was undertaken to develop metrics for reviewing and comparing results using sample calibrations. The study also includes balances of different designs, single- and multi-piece. The calibration systems include the manual and automatic systems provided by NASA and its vendors. The results to date will be presented along with the techniques for comparing the results. In addition, future planned calibrations and investigations based on the results will be provided.

  10. Developing the Cleanliness Requirements for an Organic-detection Instrument MOMA-MS

    NASA Technical Reports Server (NTRS)

    Perry, Radford; Canham, John; Lalime, Erin

    2015-01-01

    The cleanliness requirements for an organic-detection instrument, like the Mars Organic Molecule Analyzer Mass Spectrometer (MOMA-MS), on a Planetary Protection Class IVb mission can be extremely stringent. These include surface molecular and particulate cleanliness, outgassing, and bioburden. The prime contractor for the European Space Agency's ExoMars 2018 project, Thales Alenia Space Italy, provided requirements based on a standard, conservative approach to defining limits, which yielded levels that are unverifiable by standard cleanliness verification methods. Additionally, the conservative method for determining contamination surface area relies on underestimation while the conservative bioburden surface area relies on overestimation, which results in inconsistencies in the normalized reporting. This presentation will provide a survey of the challenge of defining requirements that can be reasonably verified and still remain appropriate to the core science of the ExoMars mission.

  11. Development of WAIS-III General Ability Index Minus WMS-III memory discrepancy scores.

    PubMed

    Lange, Rael T; Chelune, Gordon J; Tulsky, David S

    2006-09-01

    Analysis of the discrepancy between intellectual functioning and memory ability has received some support as a useful means for evaluating memory impairment. In recent additions to Wechsler scale interpretation, the WAIS-III General Ability Index (GAI) and the WMS-III Delayed Memory Index (DMI) were developed. The purpose of this investigation is to develop base rate data for GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores using data from the WAIS-III/WMS-III standardization sample (weighted N = 1250). Base rate tables were developed using the predicted-difference method and two simple-difference methods (i.e., stratified and non-stratified). These tables provide valuable data for clinical reference purposes to determine the frequency of GAI-IMI, GAI-GMI, and GAI-DMI discrepancy scores in the WAIS-III/WMS-III standardization sample.
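    Base rate tables of the kind described above tabulate how often a discrepancy of at least a given size occurs in the normative sample; the sketch below does this for simulated ability and memory scores, which only stand in for (and are not) the WAIS-III/WMS-III standardization data.

```python
# Minimal sketch of base-rate tabulation for ability-memory discrepancy scores:
# the percentage of a normative sample showing a discrepancy at least as large as
# a given cutoff. Scores are simulated stand-ins, not Wechsler data.
import numpy as np

rng = np.random.default_rng(1)
gai = rng.normal(100, 15, size=1250)                           # simulated ability index
dmi = 0.6 * (gai - 100) + 100 + rng.normal(0, 12, size=1250)   # correlated memory index
discrepancy = gai - dmi

for cutoff in (5, 10, 15, 20):
    base_rate = np.mean(discrepancy >= cutoff) * 100
    print(f"GAI - DMI >= {cutoff:>2}: {base_rate:5.1f}% of the sample")
```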

  12. Bayesian methods for outliers detection in GNSS time series

    NASA Astrophysics Data System (ADS)

    Qianqian, Zhang; Qingming, Gui

    2013-07-01

    This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each type of outlier; the problem of outlier detection is thereby converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in detail the causes of masking and swamping when detecting patches of additive outliers, and an unmasking Bayesian method for detecting additive outlier patches is proposed based on an adaptive Gibbs sampler. Thirdly, the correctness of the proposed theories and methods is illustrated with simulated data and then by analyzing real GNSS observations, such as cycle slip detection in carrier phase data. The examples illustrate that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be successfully used to process cycle slips in phase data, which addresses the problem of small cycle slips.
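    A much-simplified sketch of the classification-variable idea is given below: for each epoch the posterior probability of an additive outlier is computed with the outlier magnitude marginalized under a normal prior. This one-pass approximation is not the paper's adaptive Gibbs sampler, and all priors and data are invented.

```python
# Simplified sketch: posterior probability that each epoch carries an additive outlier,
# marginalizing the outlier magnitude under a normal prior. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 200
series = rng.normal(0.0, 2.0, size=n)      # mm, detrended residuals (simulated)
series[[50, 51, 52, 120]] += 15.0          # inject a patch of additive outliers

sigma = 2.0      # assumed observation noise SD
tau = 10.0       # prior SD of an outlier magnitude
pi_out = 0.01    # prior probability that any single epoch is an outlier

def normal_pdf(x, sd):
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

mu = np.median(series)                     # robust location estimate
p_clean = (1 - pi_out) * normal_pdf(series - mu, sigma)
p_outlier = pi_out * normal_pdf(series - mu, np.sqrt(sigma**2 + tau**2))
posterior = p_outlier / (p_clean + p_outlier)

flagged = np.where(posterior > 0.5)[0]
print("epochs flagged as additive outliers:", flagged)
```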

  13. Methodological Considerations for Hair Cortisol Measurements in Children

    PubMed Central

    Slominski, Radomir; Rovnaghi, Cynthia R.; Anand, Kanwaljeet J. S.

    2015-01-01

    Background Hair cortisol levels are used increasingly as a measure for chronic stress in young children. We propose modifications to the current methods used for hair cortisol analysis to more accurately determine reference ranges for hair cortisol across different populations and age groups. Methods The authors compared standard (finely cutting hair) vs. milled methods for hair processing (n=16), developed a 4-step extraction process for hair protein and cortisol (n=16), and compared liquid chromatography-mass spectrometry (LCMS) vs. ELISA assays for measuring hair cortisol (n=28). The extraction process included sequential incubations in methanol and acetone, repeated twice. Hair protein was measured via spectrophotometric ratios at 260/280 nm to indicate the hair dissolution state using a BioTek® plate reader and dedicated software. Hair cortisol was measured using an ELISA assay kit. Individual (n=13), pooled hair samples (n=12) with high, intermediate, and low cortisol values and the ELISA assay internal standards (n=3) were also evaluated by LCMS. Results Milled and standard methods showed highly correlated hair cortisol (rs=0.951, p<0.0001) and protein values (rs=0.902, p=0.0002), although higher yields of cortisol and protein were obtained from the standard method in 13/16 and 14/16 samples respectively (p<0.05). Four sequential extractions yielded additional amounts of protein (36.5%, 27.5%, 30.5%, 3.1%) and cortisol (45.4%, 31.1%, 15.1%, 0.04%) from hair samples. Cortisol values measured by LCMS and ELISA were correlated (rs=0.737; p<0.0001), although cortisol levels (median [IQR]) detected in the same samples by LCMS (38.7 [14.4, 136] ng/ml) were lower than by ELISA (172.2 [67.9, 1051] ng/ml). LCMS also detected cortisone, which comprised 13.4% (3.7%, 25.9%) of the steroids detected. Conclusion Methodological studies suggest that finely cutting hair with sequential incubations in methanol and acetone, repeated twice, extracts greater yields of cortisol than does milled hair. Based on these findings, at least three incubations may be required to extract most of the cortisol in human hair samples. In addition, ELISA-based assays showed greater sensitivity for measuring hair cortisol levels than LCMS-based assays. PMID:25811341

  14. Novel Ultrasound Joint Selection Methods Using a Reduced Joint Number Demonstrate Inflammatory Improvement when Compared to Existing Methods and Disease Activity Score at 28 Joints.

    PubMed

    Tan, York Kiat; Allen, John C; Lye, Weng Kit; Conaghan, Philip G; D'Agostino, Maria Antonietta; Chew, Li-Ching; Thumboo, Julian

    2016-01-01

    A pilot study testing novel ultrasound (US) joint-selection methods in rheumatoid arthritis. The responsiveness of the novel methods [individualized US (IUS) and individualized composite US (ICUS)] was compared with existing US methods and the Disease Activity Score at 28 joints (DAS28) for 12 patients followed for 3 months. IUS selected up to the 7 or 12 most ultrasonographically inflamed joints, while ICUS additionally incorporated clinically symptomatic joints. The standardized response means for the existing, IUS, and ICUS methods were -0.39, -1.08, and -1.11, respectively, for 7 joints; -0.49, -1.00, and -1.16, respectively, for 12 joints; and -0.94 for DAS28. The novel methods effectively demonstrate inflammatory improvement when compared with existing methods and DAS28.
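    The standardized response mean (SRM) used above to compare responsiveness is simply the mean change from baseline divided by the standard deviation of that change; the sketch below computes it for fabricated scores.

```python
# Minimal sketch of the standardized response mean (SRM): mean change from baseline
# divided by the SD of the change. The scores below are fabricated for illustration.
import numpy as np

baseline = np.array([12.0, 9.0, 15.0, 7.0, 11.0, 14.0, 10.0, 8.0])   # inflammation scores
followup = np.array([ 8.0, 7.0, 10.0, 6.0,  9.0,  9.0,  7.0, 7.0])   # after 3 months

change = followup - baseline
srm = change.mean() / change.std(ddof=1)
print(f"standardized response mean = {srm:.2f}")   # negative value = improvement
```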

  15. Recommendations for fluorescence instrument qualification: the new ASTM Standard Guide.

    PubMed

    DeRose, Paul C; Resch-Genger, Ute

    2010-03-01

    Aimed at improving quality assurance and quantitation for modern fluorescence techniques, ASTM International (ASTM) is about to release a Standard Guide for Fluorescence, reviewed here. The guide's main focus is on steady state fluorometry, for which available standards and instrument characterization procedures are discussed along with their purpose, suitability, and general instructions for use. These include the most relevant instrument properties needing qualification, such as linearity and spectral responsivity of the detection system, spectral irradiance reaching the sample, wavelength accuracy, sensitivity or limit of detection for an analyte, and day-to-day performance verification. With proper consideration of method-inherent requirements and limitations, many of these procedures and standards can be adapted to other fluorescence techniques. In addition, procedures for the determination of other relevant fluorometric quantities including fluorescence quantum yields and fluorescence lifetimes are briefly introduced. The guide is a clear and concise reference geared for users of fluorescence instrumentation at all levels of experience and is intended to aid in the ongoing standardization of fluorescence measurements.

  16. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual-isotope imaging. In the present study we investigate an alternative to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source, with gamma energies close to those of 67Ga, and a single-bore lead collimator were used to measure the multiple window spatial registration error. The positions of the point source in the images were calculated using the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
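    A Bland-Altman analysis of the kind cited above reduces to the mean difference between paired measurements and its 95% limits of agreement; the sketch below shows the calculation on fabricated registration errors standing in for the 133Ba and 67Ga results.

```python
# Minimal sketch of a Bland-Altman agreement analysis: mean bias between two methods
# and the 95% limits of agreement. The paired values below are fabricated.
import numpy as np

method_a = np.array([0.42, 0.55, 0.61, 0.38, 0.70, 0.49, 0.58, 0.45])  # mm (133Ba-style)
method_b = np.array([0.45, 0.52, 0.66, 0.40, 0.73, 0.47, 0.55, 0.49])  # mm (67Ga-style)

diff = method_a - method_b               # plotted against (method_a + method_b) / 2
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)     # 95% limits of agreement around the bias

print(f"bias = {bias:+.3f} mm, "
      f"limits of agreement = [{bias - half_width:.3f}, {bias + half_width:.3f}] mm")
```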

  17. A risk-based statistical investigation of the quantification of polymorphic purity of a pharmaceutical candidate by solid-state 19F NMR.

    PubMed

    Barry, Samantha J; Pham, Tran N; Borman, Phil J; Edwards, Andrew J; Watson, Simon A

    2012-01-27

    The DMAIC (Define, Measure, Analyse, Improve and Control) framework and associated statistical tools have been applied to both identify and reduce variability observed in a quantitative (19)F solid-state NMR (SSNMR) analytical method. The method had been developed to quantify levels of an additional polymorph (Form 3) in batches of an active pharmaceutical ingredient (API), where Form 1 is the predominant polymorph. In order to validate analyses of the polymorphic form, a single batch of API was used as a standard each time the method was used. The level of Form 3 in this standard was observed to gradually increase over time, the effect not being immediately apparent due to method variability. In order to determine the cause of this unexpected increase and to reduce method variability, a risk-based statistical investigation was performed to identify potential factors which could be responsible for these effects. Factors identified by the risk assessment were investigated using a series of designed experiments to gain a greater understanding of the method. The increase of the level of Form 3 in the standard was primarily found to correlate with the number of repeat analyses, an effect not previously reported in SSNMR literature. Differences in data processing (phasing and linewidth) were found to be responsible for the variability in the method. After implementing corrective actions the variability was reduced such that the level of Form 3 was within an acceptable range of ±1% w/w in fresh samples of API. Copyright © 2011. Published by Elsevier B.V.

  18. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicate that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  19. Endoscopic and Photodynamic Therapy of Cholangiocarcinoma

    PubMed Central

    Meier, Benjamin; Caca, Karel

    2016-01-01

    Background Most patients with cholangiocarcinoma (CCA) have unresectable disease. Endoscopic bile duct drainage is one of the major objectives of palliation of obstructive jaundice. Methods/Results Stent implantation using endoscopic retrograde cholangiography is considered to be the standard technique. Unilateral versus bilateral stenting is associated with different advantages and disadvantages; however, a standard approach is still not defined. As there are various kinds of stents, there is an ongoing discussion on which stent to use in which situation. Palliation of obstructive jaundice can be augmented through the use of photodynamic therapy (PDT). Studies have shown a prolonged survival for the combinations of PDT and different stent applications as well as combinations of PDT and additional systemic chemotherapy. Conclusion More well-designed studies are needed to better evaluate and standardize endoscopic treatment of unresectable CCA. PMID:28229075

  20. Differentiation of organic and non-organic winter wheat cultivars from a controlled field trial by crystallization patterns.

    PubMed

    Kahl, Johannes; Busscher, Nicolaas; Mergardt, Gaby; Mäder, Paul; Torp, Torfinn; Ploeger, Angelika

    2015-01-01

    There is a need for authentication tools in order to verify the existing certification system. Recently, markers for analytical authentication of organic products were evaluated. Herein, crystallization with additives was described as an interesting fingerprint approach which needs further evidence, based on a standardized method and well-documented sample origin. The fingerprint of wheat cultivars from a controlled field trial is generated from structure analysis variables of crystal patterns. Method performance was tested on factors such as crystallization chamber, day of experiment and region of interest of the patterns. Two different organic treatments and two different treatments of the non-organic regime can be grouped together in each of three consecutive seasons. When the k-nearest-neighbor classification method was applied, approximately 84% of Runal samples and 95% of Titlis samples were classified correctly into organic and non-organic origin using cross-validation. Crystallization with additive offers an interesting complementary fingerprint method for organic wheat samples. When the method is applied to winter wheat from the DOK trial, organic and non-organic treated samples can be differentiated significantly based on pattern recognition. Therefore crystallization with additives seems to be a promising tool in organic wheat authentication. © 2014 Society of Chemical Industry.
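    The cross-validated classification rates reported above come from k-nearest-neighbour classification of pattern-analysis variables; the sketch below shows a generic leave-one-out k-NN classification on simulated two-dimensional features, not the actual crystal-pattern variables from the DOK trial.

```python
# Minimal sketch of leave-one-out k-nearest-neighbour classification into organic vs
# non-organic origin from pattern-analysis features. Features here are simulated.
import numpy as np

rng = np.random.default_rng(3)
organic = rng.normal([1.0, 0.5], 0.3, size=(30, 2))
non_organic = rng.normal([0.4, 0.9], 0.3, size=(30, 2))
X = np.vstack([organic, non_organic])
y = np.array([0] * 30 + [1] * 30)          # 0 = organic, 1 = non-organic

def knn_predict(train_X, train_y, x, k=5):
    dist = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    return np.bincount(nearest).argmax()

correct = 0
for i in range(len(y)):                    # leave-one-out cross-validation
    mask = np.arange(len(y)) != i
    correct += knn_predict(X[mask], y[mask], X[i]) == y[i]

print(f"LOO-CV accuracy: {100 * correct / len(y):.1f}%")
```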

  1. Using Polymer Confinement for Stem Cell Differentiation: 3D Printed vs Molded Scaffolds

    NASA Astrophysics Data System (ADS)

    Rafailovich, Miriam

    Additive manufacturing technologies are increasingly being used to replace standard extrusion or molding methods in engineering polymeric biomedical implants, which can be further seeded with cells for tissue regeneration. The principal advantage of this new technology is the ability to print directly from a scan and hence produce parts which are an ideal fit for an individual, eliminating much of the sizing and fitting associated with standard manufacturing methods. The question though arises whether devices which may be macroscopically similar, serve identical functions, and are produced from the same material interact in the same manner with cells and living tissue. Here we show that fundamental differences can exist between 3-D printed and extruded scaffolds which can impact stem cell differentiation and lineage selection. We will show how the polymer confinement inherent in these methods affects the printed features on multiple length scales. We will also show how the differentiation of stem cells is affected by substrate heterogeneity in both morphological and mechanical features. NSF-Inspire award # 1344267.

  2. Picomolar quantitation of free sulfite in foods by means of [57Co]hydroxocobalamin and radiometric chromatography of [57Co]sulfitocobalamin. Method, applications and significance of coexisting sulfides.

    PubMed

    Beck, R A; Anes, J M; Savini, L M; Mateer, R A

    2000-06-09

    The concentration dependent reaction of sulfite with 57Co-labeled hydroxocobalamin (OH57CoCbl) to produce a sulfitocobalamin (SO(3)57CoCbl) adduct served as a quantification strategy for foodborne sulfite residues freely extracted into pH 5.2, 0.05 M acetate buffer. SO(3)57CoCbl was then resolved using SP-Sephadex C-25 gel chromatography and its radiometric detection allowed calculation of a standard logit plot from which unknown sulfite concentrations could be determined. The sulfite detection range was 6.0 nM-0.3 pM with respective relative standard deviations of 4.4-29.4% for 50-microl samples. Individual incidences of foodborne sulfite intolerances provoked by L-cysteine or sulfite additive use in bakery products, which remained undetected using conventional sulfite analytical methods, underscored the quantitative value of the method. The analytical significance and occurrences of detectable sulfides coexisting with foodborne sulfite residues was also addressed.
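    The logit standard plot mentioned above linearizes a saturating binding response; the sketch below fits logit-transformed response fractions against log10 concentration and inverts the fit to estimate an unknown. The response values are invented for illustration and do not reproduce the assay's calibration data.

```python
# Minimal sketch of a logit standard plot: logit-transform the response fraction,
# regress it on log10(concentration), and invert the fit to read off unknowns.
# All response values are invented for illustration.
import numpy as np

conc = np.array([0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0])    # nM sulfite standards
frac = np.array([0.92, 0.85, 0.72, 0.55, 0.38, 0.22, 0.10]) # fraction of maximum response

logit = np.log(frac / (1 - frac))
slope, intercept = np.polyfit(np.log10(conc), logit, 1)

def concentration_from_fraction(f):
    """Invert the logit-log10 calibration line for an observed response fraction."""
    return 10 ** ((np.log(f / (1 - f)) - intercept) / slope)

print(f"unknown at response fraction 0.50 ~ {concentration_from_fraction(0.50):.3f} nM")
```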

  3. Microplate-based filter paper assay to measure total cellulase activity.

    PubMed

    Xiao, Zhizhuang; Storms, Reginald; Tsang, Adrian

    2004-12-30

    The standard filter paper assay (FPA) published by the International Union of Pure and Applied Chemistry (IUPAC) is widely used to determine total cellulase activity. However, the IUPAC method is not suitable for the parallel analyses of large sample numbers. We describe here a microplate-based method for assaying large sample numbers. To achieve this, we reduced the enzymatic reaction volume to 60 microl from the 1.5 ml used in the IUPAC method. The modified 60-microl format FPA can be carried out in 96-well assay plates. Statistical analyses showed that the cellulase activities of commercial cellulases from Trichoderma reesei and Aspergillus species determined with our 60-microl format FPA were not significantly different from the activities measured with the standard FPA. Our results also indicate that the 60-microl format FPA is quantitative and highly reproducible. Moreover, the addition of excess beta-glucosidase increased the sensitivity of the assay by up to 60%. 2004 Wiley Periodicals, Inc.
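
    A minimal sketch of the kind of statistical comparison reported above, here written as a paired t-test between activities measured with the standard FPA and the 60-microliter format; the data and the choice of test are assumptions for illustration only.

    ```python
    # Illustrative sketch: testing whether activities from the 60-microliter format
    # and the standard IUPAC FPA differ significantly (paired t-test on dummy data).
    import numpy as np
    from scipy import stats

    fpa_standard = np.array([62.1, 58.4, 71.0, 65.3, 60.2])     # activity, dummy values
    fpa_microplate = np.array([61.5, 59.0, 70.2, 66.1, 59.8])   # same samples, 60-ul format

    t, p = stats.ttest_rel(fpa_standard, fpa_microplate)
    print(f"paired t = {t:.2f}, p = {p:.3f}")   # p > 0.05 -> no significant difference
    ```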

  4. Derivatization coupled to headspace programmed-temperature vaporizer gas chromatography with mass spectrometry for the determination of amino acids: Application to urine samples.

    PubMed

    González Paredes, Rosa María; García Pinto, Carmelo; Pérez Pavón, José Luis; Moreno Cordero, Bernardo

    2016-09-01

    A new method based on headspace programmed-temperature vaporizer gas chromatography with mass spectrometry has been developed and validated for the determination of amino acids (alanine, sarcosine, ethylglycine, valine, leucine, and proline) in human urine samples. Derivatization with ethyl chloroformate was employed successfully to determine the amino acids. The derivatization reaction conditions as well as the variables of the headspace sampling were optimized. The existence of a matrix effect was checked and the analytical characteristics of the method were determined. The limits of detection were 0.15-2.89 mg/L, and the limits of quantification were 0.46-8.67 mg/L. The instrumental repeatability was 1.6-11.5%. The quantification of the amino acids in six urine samples from healthy subjects was performed with the method developed with the one-point standard additions protocol, with norleucine as the internal standard. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
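
    The one-point standard-additions protocol with an internal standard reduces, per analyte, to a single ratio calculation. The sketch below shows that arithmetic under the simplifying assumptions that the response is linear and the spike does not change the sample volume appreciably; the numbers are invented.

    ```python
    # Minimal sketch of a one-point standard-addition calculation using analyte/
    # internal-standard (norleucine) peak-area ratios before and after a spike.
    # Values are illustrative, not from the paper.
    def one_point_standard_addition(ratio_unspiked, ratio_spiked, added_conc):
        """Return the analyte concentration from ratios measured before/after
        adding `added_conc` (same units as the result)."""
        return added_conc * ratio_unspiked / (ratio_spiked - ratio_unspiked)

    c = one_point_standard_addition(ratio_unspiked=0.42, ratio_spiked=0.95, added_conc=5.0)
    print(f"estimated amino acid concentration: {c:.2f} mg/L")
    ```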

  5. Determination of cadmium in seawater by chelate vapor generation atomic fluorescence spectrometry

    NASA Astrophysics Data System (ADS)

    Sun, Rui; Ma, Guopeng; Duan, Xuchuan; Sun, Jinsheng

    2018-03-01

    A method for the determination of cadmium in seawater by chelate vapor generation (Che-VG) atomic fluorescence spectrometry is described. Several commercially available chelating agents, including ammonium pyrrolidine dithiocarbamate (APDC), sodium dimethyl dithiocarbamate (DMDTC), ammonium dibutyl dithiophosphate (DBDTP) and sodium O,O-diethyl dithiophosphate (DEDTP), were compared with sodium diethyldithiocarbamate (DDTC) for the Che-VG of cadmium; DDTC and DEDTP gave the strongest cadmium signal intensities. The effect of the Che-VG conditions with DDTC on the cadmium signal intensity was investigated. Under the optimal conditions, a Che-VG efficiency of 85 ± 3% was obtained for cadmium. The detection limit (3σ) under the optimal conditions was 0.19 ng ml-1, and the relative standard deviation (RSD) for ten replicate determinations at 2 ng ml-1 Cd was 3.42%. The proposed method was successfully applied to the ultratrace determination of cadmium in seawater samples using the standard addition method.
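
    A minimal sketch of the standard addition evaluation referred to above: the signal is regressed against the added Cd concentration and extrapolated to the x-axis intercept. The data points are invented and any dilution correction is omitted.

    ```python
    # Illustrative standard-addition evaluation: fluorescence signal vs. added Cd
    # concentration is fitted linearly; the x-intercept magnitude gives the sample
    # concentration. All numbers are made up.
    import numpy as np

    added = np.array([0.0, 1.0, 2.0, 4.0])           # ng/mL Cd added to the sample
    signal = np.array([210.0, 315.0, 422.0, 640.0])  # fluorescence intensity (a.u.)

    slope, intercept = np.polyfit(added, signal, 1)
    c_sample = intercept / slope                      # ng/mL in the measured solution
    print(f"Cd in sample: {c_sample:.2f} ng/mL")
    ```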

  6. Genotoxicity investigations on nanomaterials.

    PubMed

    Oesch, Franz; Landsiedel, Robert

    2012-07-01

    This review is based on the lecture presented at the April 2010 nanomaterials safety assessment Postsatellite to the 2009 EUROTOX Meeting and summarizes genotoxicity investigations on nanomaterials published in the open scientific literature (up to 2008). Special attention is paid to the relationship between particle size and positive versus negative outcome, as well as the dependence of the outcome on the test used. The salient conclusions and recommendations emerging from the information summarized in this review are as follows: recognize that nanomaterials are not all the same, and therefore know and document which nanomaterial has been tested and in what form; take nanomaterial-specific properties into account; to make your results comparable with those of others and with other nanomaterials, use, or at least include in your studies, standardized methods; use in vivo studies to put in vitro results into perspective; take uptake and distribution of the nanomaterial into account; and, to enable extrapolation to human risk, learn about the mechanisms of nanomaterials' genotoxic effects. Past experience with standard non-nanosubstances has already shown that mechanisms of genotoxic effects can be complex and their elucidation demanding, while there is often an immediate need to assess the genotoxic hazard. Thus, a practical and pragmatic approach to genotoxicity investigations of novel nanomaterials is the use of a battery of standard genotoxicity testing methods covering a wide range of mechanisms. Application of these standard methods to nanomaterials demands adaptations, however, and the interpretation of results from the genotoxicity testing of nanomaterials needs additional considerations beyond those used for standard-size materials.

  7. Knee Images Digital Analysis (KIDA): a novel method to quantify individual radiographic features of knee osteoarthritis in detail.

    PubMed

    Marijnissen, A C A; Vincken, K L; Vos, P A J M; Saris, D B F; Viergever, M A; Bijlsma, J W J; Bartels, L W; Lafeber, F P J G

    2008-02-01

    Radiography is still the gold standard for imaging features of osteoarthritis (OA), such as joint space narrowing, subchondral sclerosis, and osteophyte formation. Objective assessment, however, remains difficult. The goal of the present study was to evaluate a novel digital method to analyse standard knee radiographs. Standardized radiographs of 20 healthy and 55 OA knees were taken in general practice according to the semi-flexed method of Buckland-Wright. Joint Space Width (JSW), osteophyte area, subchondral bone density, joint angle, and tibial eminence height were measured as continuous variables using newly developed Knee Images Digital Analysis (KIDA) software on a standard PC. Two observers evaluated the radiographs twice, each on two different occasions. The observers were blinded to the source of the radiographs and to their previous measurements. Statistical analysis to compare measurements within and between observers was performed according to Bland and Altman. Correlations between KIDA data and Kellgren & Lawrence (K&L) grade were calculated and data of healthy knees were compared to those of OA knees. Intra- and inter-observer variations for measurement of JSW, subchondral bone density, osteophytes, tibial eminence, and joint angle were small. Significant correlations were found between KIDA parameters and K&L grade. Furthermore, significant differences were found between healthy and OA knees. In addition to JSW measurement, objective evaluation of osteophyte formation and subchondral bone density is possible on standard radiographs. The measured differences between OA and healthy individuals suggest that KIDA allows detection of changes in time, although sensitivity to change has to be demonstrated in long-term follow-up studies.
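
    The Bland and Altman comparison mentioned above amounts to computing the bias and 95% limits of agreement of paired measurements; the sketch below shows this on dummy joint space width (JSW) values.

    ```python
    # Sketch of a Bland-Altman agreement analysis between two observers' JSW
    # measurements (dummy values), as used to assess intra-/inter-observer variation.
    import numpy as np

    obs1 = np.array([4.1, 3.8, 5.0, 2.9, 4.4])   # JSW in mm, observer 1 (dummy)
    obs2 = np.array([4.0, 3.9, 4.8, 3.1, 4.5])   # JSW in mm, observer 2 (dummy)

    diff = obs1 - obs2
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                 # 95% limits of agreement
    print(f"bias = {bias:.2f} mm, limits of agreement = {bias - loa:.2f} to {bias + loa:.2f} mm")
    ```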

  8. Use of ceric ammonium sulphate and two dyes, methyl orange and indigo carmine, in the determination of lansoprazole in pharmaceuticals.

    PubMed

    Basavaiah, Kanakapura; Ramakrishna, Veeraiah; Kumar, Urdigere Rangachar Anil

    2007-06-01

    Two spectrophotometric methods are proposed for the assay of lansoprazole (LPZ) in bulk drug and in dosage forms using ceric ammonium sulphate (CAS) and two dyes, methyl orange and indigo carmine, as reagents. The methods involve addition of a known excess of CAS to LPZ in acid medium, followed by determination of residual CAS by reacting with a fixed amount of either methyl orange, measuring the absorbance at 520 nm (method A), or indigo carmine, measuring the absorbance at 610 nm (method B). In both methods, the amount of CAS reacted corresponds to the amount of LPZ, and the measured absorbance was found to increase linearly with the concentration of LPZ, which is corroborated by the correlation coefficients of 0.9979 and 0.9954 for methods A and B, respectively. The systems obey Beer's law for 0.5-7.0 microg mL(-1) and 0.25-3.0 microg mL(-1) for methods A and B, respectively. The apparent molar absorptivities were calculated to be 3.0 x 10(4) and 4.4 x 10(4) L mol(-1) cm(-1) for methods A and B, respectively. The limits of detection (LOD) and quantification (LOQ) were calculated to be 0.08 and 0.25 microg mL(-1) for method A, and 0.09 and 0.27 microg mL(-1) for method B, respectively. The intra-day and inter-day precision and accuracy of the methods were evaluated according to the current ICH guidelines. Both methods were of comparable accuracy (er ≤ 2%). Also, both methods are equally precise, as shown by relative standard deviation values of < 1.5%. No interference was observed from common pharmaceutical adjuvants. The accuracy of the methods was further ascertained by performing recovery studies using the standard addition method. The methods were successfully applied to the assay of LPZ in capsule preparations and the results were statistically compared with those of the literature UV-spectrophotometric method by applying Student's t-test and F-test.
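
    Sensitivity figures of this kind are typically derived from a linear calibration. The sketch below shows a Beer's-law fit with ICH-style LOD and LOQ estimates (3.3σ/slope and 10σ/slope); the absorbance values are invented and the σ definition is an assumption, since the abstract does not state how LOD/LOQ were computed.

    ```python
    # Illustrative Beer's-law calibration over the method A range with ICH-style
    # LOD/LOQ estimates; absorbances are dummy values.
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 7.0])           # microgram/mL LPZ
    absorbance = np.array([0.06, 0.11, 0.23, 0.45, 0.68, 0.79])

    slope, intercept = np.polyfit(conc, absorbance, 1)
    residual_sd = np.std(absorbance - (slope * conc + intercept), ddof=2)

    lod = 3.3 * residual_sd / slope
    loq = 10.0 * residual_sd / slope
    print(f"LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")
    ```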

  9. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
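
    For orientation, the sketch below shows the basic Lucy–Richardson update on its own; the paper's contribution is to embed this step, together with wavelet denoising, inside list-mode OSEM, which is not reproduced here. The PSF and test image are synthetic.

    ```python
    # Basic Lucy-Richardson deconvolution of a 2D image with a known PSF.
    import numpy as np
    from scipy.signal import fftconvolve

    def lucy_richardson(observed, psf, iterations=20, eps=1e-12):
        estimate = np.full_like(observed, observed.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, eps)
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Synthetic test: a small hot square blurred by a Gaussian PSF.
    g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
    psf = np.outer(g, g); psf /= psf.sum()
    truth = np.zeros((64, 64)); truth[30:34, 30:34] = 1.0
    observed = fftconvolve(truth, psf, mode="same")
    restored = lucy_richardson(observed, psf)
    ```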

  10. Amino acid analysis in physiological samples by GC-MS with propyl chloroformate derivatization and iTRAQ-LC-MS/MS.

    PubMed

    Dettmer, Katja; Stevens, Axel P; Fagerer, Stephan R; Kaspar, Hannelore; Oefner, Peter J

    2012-01-01

    Two mass spectrometry-based methods for the quantitative analysis of free amino acids are described. The first method uses propyl chloroformate/propanol derivatization and gas chromatography-quadrupole mass spectrometry (GC-qMS) analysis in single-ion monitoring mode. Derivatization is carried out directly in aqueous samples, thereby allowing automation of the entire procedure, including addition of reagents, extraction, and injection into the GC-MS. The method delivers the quantification of 26 amino acids. The isobaric tagging for relative and absolute quantification (iTRAQ) method employs the labeling of amino acids with isobaric iTRAQ tags. The tags contain two different cleavable reporter ions, one for the sample and one for the standard, which are detected by fragmentation in a tandem mass spectrometer. Reversed-phase liquid chromatography of the labeled amino acids is performed prior to mass spectrometric analysis to separate isobaric amino acids. The commercial iTRAQ kit allows for the analysis of 42 physiological amino acids with a respective isotope-labeled standard for each of these 42 amino acids.
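
    Quantification with isobaric tags reduces, per amino acid, to multiplying the sample-to-standard reporter-ion ratio by the known amount of labeled standard; the sketch below shows that arithmetic with invented values.

    ```python
    # Minimal sketch of reporter-ion-ratio quantification with isobaric tags.
    # All values are illustrative.
    def isobaric_quantify(reporter_sample, reporter_standard, standard_amount):
        """Return the amount of analyte in the sample (same units as standard_amount)."""
        return (reporter_sample / reporter_standard) * standard_amount

    amount = isobaric_quantify(reporter_sample=1.8e5, reporter_standard=1.2e5, standard_amount=50.0)
    print(amount)   # -> 75.0 (same units as the standard amount)
    ```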

  11. The Development and Application of a Method to Quantify the Quality of Cryoprotectant Conditions Using Standard Area Detector X-Ray Images

    NASA Technical Reports Server (NTRS)

    McFerrin, Michael; Snell, Edward; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    An X-ray-based method for determining cryoprotectant concentrations necessary to protect solutions from crystalline ice formation was developed. X-ray images from a CCD area detector were integrated as powder patterns and quantified by determining the standard deviation of the slope of the normalized intensity curve in the resolution range where ice rings are known to occur. The method was tested by determining the concentrations of glycerol, PEG400, ethylene glycol and 1,2-propanediol necessary to form an amorphous glass at 100 K with each of the 98 crystallization solutions of Crystal Screens I and II (Hampton Research, Laguna Hills, California, USA). For conditions that required glycerol concentrations of 35% or above, cryoprotectant conditions using 2,3-butanediol were determined. The method proved to be remarkably accurate. The results build on the work of [Garman and Mitchell] and extend the number of suitable starting conditions to alternative cryoprotectants. In particular, 1,2-propanediol has emerged as a particularly good additive for glass formation upon flash cooling.
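
    The scoring idea can be sketched as follows: normalize the integrated powder profile, differentiate it across the resolution window where ice rings occur, and report the standard deviation of that slope. The profile, the 3.4-3.9 Å window, and the interpretation of the score are illustrative assumptions.

    ```python
    # Sketch of the quantification idea: standard deviation of the slope of the
    # normalized integrated intensity over an assumed ice-ring resolution window.
    import numpy as np

    resolution = np.linspace(1.5, 5.0, 500)          # d-spacing in Angstrom (synthetic)
    intensity = np.exp(-resolution) + 0.002 * np.random.default_rng(1).normal(size=500)

    norm = intensity / intensity.max()
    mask = (resolution > 3.4) & (resolution < 3.9)   # approximate ice-ring region (assumption)
    slope = np.gradient(norm[mask], resolution[mask])
    score = slope.std(ddof=1)                        # larger score -> crystalline ice present
    print(f"ice score: {score:.4f}")
    ```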

  12. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
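
    A generic constrained alternating least squares loop with nonnegativity on both factors is sketched below to show the kind of constraint the patented biased ALS builds on; this is a textbook illustration, not the patented algorithm itself.

    ```python
    # Generic nonnegativity-constrained ALS factorization D ~ C @ S, where rows of
    # D could be spectra. Random data for illustration only.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    D = rng.random((50, 40))            # data matrix
    k = 3                               # number of factors
    C = rng.random((50, k))             # initial guess

    for _ in range(30):
        # Solve for S column-by-column with C fixed, then for C row-by-row with S fixed.
        S = np.column_stack([nnls(C, D[:, j])[0] for j in range(D.shape[1])])   # (k, 40)
        C = np.vstack([nnls(S.T, D[i, :])[0] for i in range(D.shape[0])])       # (50, k)

    print("residual norm:", np.linalg.norm(D - C @ S))
    ```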

  13. Determination of non-certified levoglucosan, sugar polyols and ergosterol in NIST Standard Reference Material 1649a

    NASA Astrophysics Data System (ADS)

    Pomata, Donatella; Di Filippo, Patrizia; Riccardi, Carmela; Buiarelli, Francesca; Gallo, Valentina

    2014-02-01

    The organic component of airborne particulate matter originates from both natural and anthropogenic sources, whose contributions can be identified through the analysis of chemical markers. The validation of analytical methods for compounds used as chemical markers is of great importance, especially if they must be determined in rather complex matrices. Currently, standard reference materials (SRM) with certified values for all those analytes are not available. In this paper, we report a method for the simultaneous determination of levoglucosan and xylitol as tracers for biomass burning emissions, and arabitol, mannitol and ergosterol as biomarkers for airborne fungi, in SRM 1649a by GC/MS. Their quantitative analysis in SRM 1649a was carried out using both internal standard calibration curves and the standard addition method. A matrix effect was observed for all analytes, minor for levoglucosan and major for the polyols and ergosterol. The levoglucosan result of around 160 μg g-1 agreed with those reported by other authors, while no comparison was possible for xylitol (120 μg g-1), arabitol (15 μg g-1), mannitol (18 μg g-1), and ergosterol (0.5 μg g-1). The analytical method used for SRM 1649a was also applied to PM10 samples collected in Rome during four seasonal sampling campaigns. The ratios between annual analyte concentrations in the PM10 samples and in SRM 1649a were of the same order of magnitude, although the particulate matter samples were collected at different sites and in different periods.

  14. Can SNOMED CT Changes Be Used as a Surrogate Standard for Evaluating the Performance of Its Auditing Methods?

    PubMed Central

    Guo-Qiang, Zhang; Yan, Huang; Licong, Cui

    2017-01-01

    We introduce RGT, Retrospective Ground-Truthing, as a surrogate reference standard for evaluating the performance of automated Ontology Quality Assurance (OQA) methods. The key idea of RGT is to use cumulative SNOMED CT changes derived from its regular longitudinal distributions by the official SNOMED CT editorial board as a partial, surrogate reference standard. The contributions of this paper are twofold: (1) to construct an RGT reference set for SNOMED CT relational changes; and (2) to perform a comparative evaluation of the performances of lattice, non-lattice, and randomized relational error detection methods using the standard precision, recall, and geometric measures. An RGT relational-change reference set of 32,241 IS-A changes was constructed from 5 U.S. editions of SNOMED CT from September 2014 to September 2016, with reversals and changes due to deletion or addition of new concepts excluded. 68,849 independent non-lattice fragments, 118,587 independent lattice fragments, and 446,603 relations were extracted from the SNOMED CT March 2014 distribution. Comparative performance analysis of smaller (less than 15) lattice vs. non-lattice fragments was also given to approach the more realistic setting in which such methods may be applied. Among the 32,241 IS-A changes, independent non-lattice fragments covered 52.8% of the changes with 26.4% precision and a G-score of 0.373. Even though this G-score is significantly lower than those in information retrieval, it breaks new ground in that such evaluations have never been performed before in the highly discovery-oriented setting of OQA. PMID:29854262
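
    The G-score used above is the geometric mean of precision and recall; the sketch below simply reproduces the reported non-lattice figures.

    ```python
    # G-score as the geometric mean of precision and recall, using the reported
    # non-lattice values (26.4% precision, 52.8% coverage of reference changes).
    import math

    def g_score(precision, recall):
        return math.sqrt(precision * recall)

    print(f"G = {g_score(0.264, 0.528):.3f}")   # ~0.373, matching the abstract
    ```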

  15. Can SNOMED CT Changes Be Used as a Surrogate Standard for Evaluating the Performance of Its Auditing Methods?

    PubMed

    Guo-Qiang, Zhang; Yan, Huang; Licong, Cui

    2017-01-01

    We introduce RGT, Retrospective Ground-Truthing, as a surrogate reference standard for evaluating the performance of automated Ontology Quality Assurance (OQA) methods. The key idea of RGT is to use cumulative SNOMED CT changes derived from its regular longitudinal distributions by the official SNOMED CT editorial board as a partial, surrogate reference standard. The contributions of this paper are twofold: (1) to construct an RGT reference set for SNOMED CT relational changes; and (2) to perform a comparative evaluation of the performances of lattice, non-lattice, and randomized relational error detection methods using the standard precision, recall, and geometric measures. An RGT relational-change reference set of 32,241 IS-A changes was constructed from 5 U.S. editions of SNOMED CT from September 2014 to September 2016, with reversals and changes due to deletion or addition of new concepts excluded. 68,849 independent non-lattice fragments, 118,587 independent lattice fragments, and 446,603 relations were extracted from the SNOMED CT March 2014 distribution. Comparative performance analysis of smaller (less than 15) lattice vs. non-lattice fragments was also given to approach the more realistic setting in which such methods may be applied. Among the 32,241 IS-A changes, independent non-lattice fragments covered 52.8% of the changes with 26.4% precision and a G-score of 0.373. Even though this G-score is significantly lower than those in information retrieval, it breaks new ground in that such evaluations have never been performed before in the highly discovery-oriented setting of OQA.

  16. 78 FR 12005 - Regulation of Fuels and Fuel Additives: 2013 Renewable Fuel Standards; Public Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-21

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 80 [EPA-HQ-OAR-2012-0546; FRL-9784-4] RIN 2060-AR43 Regulation of Fuels and Fuel Additives: 2013 Renewable Fuel Standards; Public Hearing AGENCY: Environmental... Additives: 2013 Renewable Fuel Standards," which was published separately in the Federal Register on...

  17. 10 CFR 50.43 - Additional standards and provisions affecting class 103 licenses and certifications for...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Additional standards and provisions affecting class 103... Regulatory Approvals § 50.43 Additional standards and provisions affecting class 103 licenses and... propose nuclear reactor designs which differ significantly from light-water reactor designs that were...

  18. 10 CFR 50.43 - Additional standards and provisions affecting class 103 licenses and certifications for...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Additional standards and provisions affecting class 103... Regulatory Approvals § 50.43 Additional standards and provisions affecting class 103 licenses and... propose nuclear reactor designs which differ significantly from light-water reactor designs that were...

  19. 10 CFR 50.43 - Additional standards and provisions affecting class 103 licenses and certifications for...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Additional standards and provisions affecting class 103... Regulatory Approvals § 50.43 Additional standards and provisions affecting class 103 licenses and... propose nuclear reactor designs which differ significantly from light-water reactor designs that were...

  20. 10 CFR 50.43 - Additional standards and provisions affecting class 103 licenses and certifications for...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Additional standards and provisions affecting class 103... Regulatory Approvals § 50.43 Additional standards and provisions affecting class 103 licenses and... propose nuclear reactor designs which differ significantly from light-water reactor designs that were...
