40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance combined with significant improvements in both the chemical and instrumental methods have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
Analytic Methods Used in Quality Control in a Compounding Pharmacy.
Allen, Loyd V
2017-01-01
Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, and thus can leave the consumer's risk uncontrolled. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method.
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
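The total-error acceptance idea described above can be sketched with a β-content tolerance interval computed by Howe's approximation: the method is accepted only if an interval expected to contain a high fraction of future measurement errors lies inside the acceptance limits. The data, acceptance limit, and β/confidence choices below are hypothetical illustrations, and this is the tolerance-interval approach, not the paper's generalized-pivotal-quantity procedure.

```python
import numpy as np
from scipy import stats

def beta_content_interval(x, beta=0.90, conf=0.90):
    """Two-sided normal tolerance interval covering a fraction `beta`
    of the population with confidence `conf` (Howe's approximation)."""
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + beta) / 2)
    chi2 = stats.chi2.ppf(1 - conf, nu)            # lower chi-square quantile
    k = z * np.sqrt(nu * (1 + 1.0 / n) / chi2)
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k * s, m + k * s

def method_accepted(measured, true_value, accept_limit, beta=0.90, conf=0.90):
    """Accept the method only if the tolerance interval for the
    measurement error lies entirely within +/- accept_limit."""
    lo, hi = beta_content_interval(np.asarray(measured) - true_value, beta, conf)
    return bool(lo > -accept_limit and hi < accept_limit)

# Hypothetical validation run: 30 replicates of a 100-unit reference sample
rng = np.random.default_rng(1)
good = 100 + rng.normal(0.0, 0.5, size=30)      # small bias, small spread
print(method_accepted(good, 100.0, accept_limit=2.0))
```

A method with large bias relative to the acceptance limit fails the same check, which is the sense in which the criterion controls consumer's risk.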
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
40 CFR 63.145 - Process wastewater provisions-test methods and procedures to determine compliance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Process wastewater provisions-test... Operations, and Wastewater § 63.145 Process wastewater provisions—test methods and procedures to determine... analytical method for wastewater which has that compound as a target analyte. (7) Treatment using a series of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
2017-06-16
Quantifying Acoustic Impacts on Marine Mammals and Sea Turtles: Methods and Analytical Approach for Phase III Training and Testing
Sarah A. Blackstock; Joseph O...
December 2017
...the Navy's Phase III Study Areas as described in each Environmental Impact Statement/Overseas Environmental Impact Statement, and describes the methods
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
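The analytic-derivation idea behind such standard errors can be sketched under a 2PL IRT model: the standard error of the ability estimate follows from the test information function, with no simulation required. The item parameters below are hypothetical, and this is an illustration of the principle rather than the authors' MST-specific derivation.

```python
import numpy as np

def se_theta(theta, a, b):
    """Analytic standard error of the ML ability estimate under the 2PL model:
    SE(theta) = 1 / sqrt(sum_i a_i^2 * P_i * (1 - P_i))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a**2 * p * (1 - p))
    return 1.0 / np.sqrt(info)

# Hypothetical 20-item module: equal discriminations, spread difficulties
a = np.full(20, 1.2)
b = np.linspace(-2, 2, 20)
print(round(se_theta(0.0, a, b), 3))
```

Evaluating `se_theta` across a grid of theta values reproduces the kind of standard-error curve that would otherwise be estimated by simulation.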
PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS
Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...
Meier, D C; Benkstein, K D; Hurst, W S; Chu, P M
2017-05-01
Performance standard specifications for point chemical vapor detectors are established in ASTM E 2885-13 and ASTM E 2933-13. The performance evaluation of the detectors requires the accurate delivery of known concentrations of the chemical target to the system under test. Referee methods enable the analyte test concentration and associated uncertainties in the analyte test concentration to be validated by independent analysis, which is especially important for reactive analytes. This work extends the capability of a previously demonstrated method for using Fourier transform infrared (FT-IR) absorption spectroscopy for quantitatively evaluating the composition of vapor streams containing hazardous materials at Acute Exposure Guideline Levels (AEGL) to include test conditions colder than laboratory ambient temperatures. The described method covers the use of primary reference spectra to establish analyte concentrations, the generation of secondary reference spectra suitable for measuring analyte concentrations under specified testing environments, and the use of additional reference spectra and spectral profile strategies to mitigate the uncertainties due to impurities and water condensation within the low-temperature (7 °C, -5 °C) test cell. Important benefits of this approach include verification of the test analyte concentration with characterized uncertainties by in situ measurements co-located with the detector under test, near-real-time feedback, and broad applicability to toxic industrial chemicals.
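The quantitative use of reference spectra can be illustrated with a Beer-Lambert least-squares fit: the measured absorbance spectrum is modeled as a linear combination of unit-concentration reference spectra, and the fitted coefficients are the concentrations. The Gaussian "spectra" below are synthetic stand-ins, not the paper's data or its uncertainty treatment.

```python
import numpy as np

# Synthetic reference spectra (absorbance per unit concentration) on a
# common wavenumber grid: an analyte band plus a water-vapor interferent
grid = np.linspace(900, 1100, 200)
ref_analyte = np.exp(-((grid - 1000) / 15.0) ** 2)
ref_water = np.exp(-((grid - 1060) / 25.0) ** 2)

def fit_concentrations(measured, refs):
    """Least-squares Beer-Lambert fit of a measured spectrum against
    unit-concentration reference spectra."""
    A = np.column_stack(refs)
    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    return coeffs

true_c = np.array([2.5, 0.8])                      # analyte, interferent
measured = true_c[0] * ref_analyte + true_c[1] * ref_water
measured += np.random.default_rng(0).normal(0, 0.01, grid.size)  # noise
est = fit_concentrations(measured, [ref_analyte, ref_water])
```

Including the interferent spectrum in the fit is the least-squares analogue of the paper's strategy of adding reference spectra to mitigate uncertainties from impurities and water condensation.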
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Estimated analyte precisions have been compared using F-tests, and differences in analyte precisions for laboratory pairs have been reported. (USGS)
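The core comparisons are reproducible with standard tools. The sketch below runs a one-way ANOVA and a pooled-variance precision estimate on hypothetical four-laboratory data; Duncan's multiple range test, used as the follow-up in the study, is not available in SciPy and is omitted here.

```python
import numpy as np
from scipy import stats

# Hypothetical results: four labs analyzing the same simulated sample,
# with the fourth lab deliberately biased high
rng = np.random.default_rng(42)
labs = [rng.normal(mu, 0.05, size=8) for mu in (1.00, 1.00, 1.02, 1.10)]

# One-way ANOVA: do the laboratory means differ?
f_stat, p_value = stats.f_oneway(*labs)

def pooled_variance(groups):
    """Pooled within-group variance as an overall precision estimate."""
    num = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return num / den
```

A small p-value from the ANOVA is what would trigger the pairwise multiple-range comparisons described in the abstract.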
Beloglazova, N V; Goryacheva, I Yu; Rusanova, T Yu; Yurasov, N A; Galve, R; Marco, M-P; De Saeger, S
2010-07-05
A new rapid method which allows simultaneous one-step detection of two analytes of different nature (2,4,6-trichlorophenol (TCP) and ochratoxin A (OTA)) in red wine was developed. It was based on a column test with three separate immunolayers: two test layers and one control layer. Each layer consisted of sepharose gel with immobilized anti-OTA (OTA test layer), anti-TCP (TCP test layer) or anti-HRP (control layer) antibodies. Analytes bind to the antibodies in the corresponding test layer while the sample flows through the column. Then a mixture of OTA-HRP and TCP-HRP in appropriate dilutions was used, followed by the application of chromogenic substrate. Colour development of the test layer occurred when the corresponding analyte was absent in the sample. HRP-conjugates bound to anti-HRP antibody in the control layer independently of the presence or absence of the analytes, and a blue colour developed in the control layer. Cut-off values for both analytes were 2 μg L(-1). The described method was applied to the simultaneous detection of TCP and OTA in wine samples. To screen the analytes in red wine samples, clean-up columns were used for sample pre-treatment in combination with the test column. Results were confirmed by chromatographic methods. Copyright 2010 Elsevier B.V. All rights reserved.
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernable extremities, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
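The extreme-point idea can be sketched on synthetic data. For a damped record w(t) ≈ w0·exp(-γt)·cos(ωt), successive extrema are half a period apart and their magnitudes decay log-linearly in time, so ω and γ follow directly from the occurrence times and displacements of the extrema. This illustrates the principle only; it neglects the small damping-induced phase shift of the true extrema and is not the van der Kamp conversion to hydraulic conductivity.

```python
import numpy as np

def damping_from_extrema(times, displacements):
    """Estimate angular frequency and damping rate from the extreme points
    of an oscillatory slug-test record, w(t) ~ w0*exp(-g*t)*cos(w*t).
    Successive extrema are half a period apart; the log of their
    magnitudes decays linearly in time at rate g."""
    times = np.asarray(times, float)
    mags = np.abs(np.asarray(displacements, float))
    omega = np.pi / np.mean(np.diff(times))        # half-period spacing
    g = -np.polyfit(times, np.log(mags), 1)[0]     # log-linear decay slope
    return omega, g

# Synthetic record with idealized extrema at the cosine peaks
omega_true, g_true, w0 = 1.2, 0.15, 0.30
t_ext = np.arange(6) * np.pi / omega_true
w_ext = w0 * np.exp(-g_true * t_ext) * np.cos(omega_true * t_ext)
omega_est, gamma_est = damping_from_extrema(t_ext, w_ext)
```

With noisy field data one would fit the same two quantities by least squares over all discernible extremities, as in the seven-extremity and three-extremity data sets mentioned above.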
Pesticide manufacturers must develop and submit analytical methods for their pesticide products to support registration of their products under FIFRA. Learn about these methods as well as SOPs for testing of antimicrobial products against three organisms.
Annual banned-substance review: Analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans
2018-01-01
Several high-profile revelations concerning anti-doping rule violations over the past 12 months have outlined the importance of tackling prevailing challenges and reducing the limitations of the current anti-doping system. At this time, the necessity to enhance, expand, and improve analytical test methods in response to the substances outlined in the World Anti-Doping Agency's (WADA) Prohibited List represents an increasingly crucial task for modern sports drug-testing programs. The ability to improve analytical testing methods often relies on the expedient application of novel information regarding superior target analytes for sports drug-testing assays, drug elimination profiles, alternative test matrices, together with recent advances in instrumental developments. This annual banned-substance review evaluates literature published between October 2016 and September 2017 offering an in-depth evaluation of developments in these arenas and their potential application to substances reported in WADA's 2017 Prohibited List. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1994-01-01
Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is by classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test acquired mode shapes needed for the cross orthogonality check. In most situations, response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test acquired mode shapes can be achieved without conducting the modal survey. Instead a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that once normalized can be used to represent the test acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model as well as a complex space flight structure.
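The cross orthogonality check itself is a short computation: C = Φaᵀ M Φt for mass-normalized analytical and test mode shapes, with diagonal terms near 1 and off-diagonal terms near 0 indicating good correlation. The sketch below uses a toy 2-DOF mass/stiffness system (hypothetical values, not from the report) and, for the demonstration, uses the analytical shapes themselves as the "test" shapes so that C reduces to the identity.

```python
import numpy as np

def cross_orthogonality(phi_a, phi_t, M):
    """Cross-orthogonality matrix C = phi_a^T M phi_t for mass-normalized
    analytical (phi_a) and test (phi_t) mode shape matrices."""
    return phi_a.T @ M @ phi_t

# Toy 2-DOF system: generalized eigenproblem K*phi = w^2 * M*phi
M = np.diag([2.0, 1.0])
K = np.array([[3.0, -1.0],
              [-1.0, 1.0]])
w2, phi = np.linalg.eig(np.linalg.solve(M, K))
w2, phi = np.real(w2), np.real(phi)
order = np.argsort(w2)
phi = phi[:, order]
# Mass-normalize each mode so that phi_i^T M phi_i = 1
phi /= np.sqrt(np.einsum('ij,jk,ki->i', phi.T, M, phi))

C = cross_orthogonality(phi, phi, M)   # perfect "test" shapes for the demo
```

In practice Φt would come from the measured (or, per this report, statically derived) shapes at the test measurement locations, and departures of C from the identity quantify the test/analysis mismatch.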
[Automated analyzer of enzyme immunoassay].
Osawa, S
1995-09-01
Automated analyzers for enzyme immunoassay can be classified from several points of view: the kind of labeled antibodies or enzymes, the detection method, the number of tests per unit time, and the analytical time and speed per run. In practice, it is important to consider several points such as detection limits, the number of tests per unit time, analytical range, and precision. Most of the automated analyzers on the market can randomly access and measure samples. I will describe recent advances in automated analyzers, reviewing their labeling antibodies and enzymes, detection methods, number of tests per unit time, and analytical time and speed per test.
Laboratory Analytical Procedures | Bioenergy | NREL
analytical procedures (LAPs) to provide validated methods for biofuels and pyrolysis bio-oils research . Biomass Compositional Analysis These lab procedures provide tested and accepted methods for performing
Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.
1997-01-01
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence-inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus are also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.
Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel
2013-12-15
In this work a protocol to validate analytical procedures for the quantification of drug substances formulated in polymeric systems that comprise both drug entrapped into the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test) is developed. This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (ability to quantify dexamethasone phosphate disodium in presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy and precision are evaluated, in the range from 10 to 50 μg mL(-1) in the assay:content test procedure and from 0.25 to 10 μg mL(-1) in the assay:dissolution test procedure. The robustness of the analytical method to extract drug from microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, but the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.
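The linearity, accuracy, and precision evaluations described above reduce to a few standard computations: a calibration regression with an r² criterion, mean percent recovery of spiked samples, and percent RSD of replicates. The calibration and replicate numbers below are hypothetical, chosen only to mimic the 10-50 μg mL(-1) assay:content range.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration: standards (ug/mL) vs. HPLC peak areas
conc = np.array([10, 20, 30, 40, 50], float)
area = np.array([152, 305, 451, 603, 748], float)

fit = stats.linregress(conc, area)
r2 = fit.rvalue ** 2          # linearity criterion, e.g. r2 >= 0.999

def recovery_and_rsd(measured, nominal):
    """Accuracy as mean % recovery; precision as % RSD of replicates."""
    measured = np.asarray(measured, float)
    rec = 100.0 * measured.mean() / nominal
    rsd = 100.0 * measured.std(ddof=1) / measured.mean()
    return rec, rsd

# Hypothetical replicate determinations of a 30 ug/mL spiked sample
rec, rsd = recovery_and_rsd([29.8, 30.3, 30.1, 29.7, 30.2], nominal=30.0)
```

The same calculations, run on the dissolution-range samples, would expose the proportionality issue the authors flag for the assay:dissolution method.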
Westgard, Sten A
2016-06-01
To assess the analytical performance of instruments and methods through external quality assessment and proficiency testing data on the Sigma scale. A representative report from five different EQA/PT programs around the world (2 US, 1 Canadian, 1 UK, and 1 Australasian) was accessed. The instrument group standard deviations were used as surrogate estimates of instrument imprecision. Performance specifications from the US CLIA proficiency testing criteria were used to establish a common quality goal. Then Sigma-metrics were calculated to grade the analytical performance. Different methods have different Sigma-metrics for each analyte reviewed. Summary Sigma-metrics estimate the percentage of the chemistry analytes that are expected to perform above Five Sigma, which is where optimized QC design can be implemented. The range of performance varies from 37% to 88%, exhibiting significant differentiation between instruments and manufacturers. Median Sigmas for the different manufacturers in three analytes (albumin, glucose, sodium) showed significant differentiation. Chemistry tests are not commodities. Quality varies significantly from manufacturer to manufacturer, instrument to instrument, and method to method. The Sigma-assessments from multiple EQA/PT programs provide more insight into the performance of methods and instruments than any single program by itself. It is possible to produce a ranking of performance by manufacturer, instrument and individual method. Laboratories seeking optimal instrumentation would do well to consult this data as part of their decision-making process. To confirm that these assessments are stable and reliable, a longer term study should be conducted that examines more results over a longer time period. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.
HSRP and HSRP Partner Analytical Methods and Protocols
HSRP has worked with various partners to develop and test analytical methods and protocols for use by laboratories charged with analyzing environmental and/or building material samples following a contamination incident.
New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.
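The kind of analytic test problem these tools target can be sketched in a few lines: for a one-group infinite medium, k_inf = νΣf/Σa exactly, and a trivial Monte Carlo estimate should reproduce it within statistics. The cross-section values below are hypothetical, and this toy is far simpler than an actual MCNP verification problem.

```python
import random

# One-group infinite-medium benchmark: analytic k_inf = nu*Sigma_f / Sigma_a
nu, sigma_f, sigma_c = 2.5, 0.05, 0.08   # hypothetical cross sections (1/cm)
sigma_a = sigma_f + sigma_c
k_analytic = nu * sigma_f / sigma_a

# Minimal Monte Carlo estimate: each absorbed neutron yields nu neutrons
# with probability Sigma_f / Sigma_a, zero otherwise
rng = random.Random(0)
n = 200_000
k_mc = sum(nu for _ in range(n) if rng.random() < sigma_f / sigma_a) / n
```

Agreement of the sampled estimate with the closed-form answer is exactly the style of code verification that items (1) and (2) above describe.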
NASA Astrophysics Data System (ADS)
Carraro, F.; Valiani, A.; Caleffi, V.
2018-03-01
Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
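The value of an analytical eigenstructure can be illustrated in the fixed-bed limit of the dSVE system, where the de Saint Venant flux Jacobian has the closed-form eigenvalues u ± √(gh); a numerical eigensolve of the same Jacobian must agree. The depth and velocity below are hypothetical, and the full three-wave Exner-coupled eigenstructure derived in the paper is not reproduced here.

```python
import numpy as np

g = 9.81
h, u = 2.0, 1.0   # hypothetical depth (m) and velocity (m/s)

# Flux Jacobian of the 1D de Saint Venant equations in (h, hu) variables
# (the fixed-bed limit of the dSVE system, no Exner coupling)
A = np.array([[0.0, 1.0],
              [g * h - u**2, 2 * u]])

lam_numeric = np.sort(np.real(np.linalg.eigvals(A)))
lam_analytic = np.sort([u - np.sqrt(g * h), u + np.sqrt(g * h)])
```

Replacing the numerical eigensolve by such closed-form expressions at every cell edge and quadrature point is precisely the source of the A-DOT scheme's efficiency gain over the original DOT implementation.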
Clean Water Act Analytical Methods
EPA publishes laboratory analytical methods (test procedures) that are used by industries and municipalities to analyze the chemical, physical and biological components of wastewater and other environmental samples required by the Clean Water Act.
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
Manickum, Thavrin; John, Wilson
2015-07-01
The availability of national test centers to offer a routine service for analysis and quantitation of some selected steroid hormones [natural estrogens (17-β-estradiol, E2; estrone, E1; estriol, E3), synthetic estrogen (17-α-ethinylestradiol, EE2), androgen (testosterone), and progestogen (progesterone)] in wastewater matrix was investigated; corresponding internationally used chemical- and immuno-analytical test methods were reviewed. The enzyme-linked immunosorbent assay (ELISA) (immuno-analytical technique) was also assessed for its suitability as a routine test method to quantitate the levels of these hormones at a sewage/wastewater treatment plant (WTP) (Darvill, Pietermaritzburg, South Africa) over a 2-year period. The method performance and other relevant characteristics of the immuno-analytical ELISA method were compared to conventional chemical-analytical methodology, such as gas/liquid chromatography-mass spectrometry (GC/LC-MS) and GC/LC-tandem mass spectrometry (MS/MS), for quantitation of the steroid hormones in wastewater and environmental waters. The national immuno-analytical ELISA technique was found to be sensitive (LOQ 5 ng/L, LOD 0.2-5 ng/L), accurate (mean recovery 96%), precise (RSD 7-10%), and cost-effective for screening and quantitation of these steroid hormones in wastewater and environmental water matrices. A survey of the most current international literature indicates fairly equal use of the LC-MS/MS, GC-MS/MS (chemical-analytical), and ELISA (immuno-analytical) test methods for screening and quantitation of the target steroid hormones in both water and wastewater matrices. Internationally, the observed sensitivity, based on LOQ (ng/L), for the steroid estrogens E1, E2, and EE2 is, in decreasing order: LC-MS/MS (0.08-9.54) > GC-MS (1) > ELISA (5) (chemical-analytical > immuno-analytical).
At the national level, the routine, unoptimized chemical-analytical LC-MS/MS method was found to lack the sensitivity required to meet environmental requirements for steroid hormone quantitation. Further optimization of the sensitivity of the chemical-analytical LC-tandem mass spectrometry methods in South Africa, especially for wastewater screening, is required. Risk assessment studies showed that it was not practical to propose standards or allowable limits for the steroid estrogens E1, E2, EE2, and E3; the use of predicted-no-effect concentrations of the steroid estrogens appears appropriate for their risk assessment in relation to aquatic organisms. For raw water sources, drinking water, and raw and treated wastewater, the use of bioassays with trigger values is a useful screening option for deciding whether further examination of specific endocrine activity is warranted, or whether concentrations of such activity are of low priority with respect to human health concerns. The achievement of improved quantitation limits for immuno-analytical methods like ELISA used for compound quantitation, and standardization of the method for measuring E2 equivalents (EEQs) used for biological (endocrine, e.g., estrogenic) activity, are areas for future EDC research.
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentrations of analyte within complex biological matrices (e.g. serum or plasma) based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. This technique is based on standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
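The classical standard-additions idea that the method builds on can be sketched as follows. The numbers are illustrative only, and the sketch assumes a linear signal-concentration relationship; the paper's log-log linearization of the immunoassay sigmoid is not reproduced here.

```python
import numpy as np

# Hypothetical example: aliquots of the same test sample are spiked with
# known analyte concentrations, and the signal is assumed linear in the
# total (endogenous + added) concentration.
added = np.array([0.0, 1.0, 2.0, 4.0])        # spiked concentration (ng/mL)
signal = np.array([0.30, 0.45, 0.60, 0.90])   # measured responses (arb. units)

# Fit signal = slope * added + intercept; the unknown endogenous
# concentration is the magnitude of the x-intercept, intercept / slope.
slope, intercept = np.polyfit(added, signal, 1)
endogenous = intercept / slope
print(f"estimated endogenous concentration: {endogenous:.2f} ng/mL")
```

Because the calibration is built inside the test sample itself, the matrix interference acts equally on every point of the fit, which is the property the immunoassay variant exploits.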
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation-constrained multivariate curve resolution-alternating least squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and provide accurate estimates of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach with the correlation constraint achieves the so-called second-order advantage for the analyte of interest, which is normally attainable only with more complex, richer higher-order instrumental data. The proposed method is tested on a simulated data set and two experimental systems: the determination of ascorbic acid in powder juices using UV-visible absorption spectra, and the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
Differential homogeneous immunosensor device
Malmros, Mark K.; Gulbinski, III, Julian
1990-04-10
There is provided a novel method of testing for the presence of an analyte in a fluid suspected of containing it. In this method, in the presence of the analyte, a substance capable of modifying certain characteristics of the substrate is bound to the substrate, and the change in these characteristics is measured. While the method may be modified to carry out quantitative differential analyses, it eliminates the need to wash analyte from the substrate, which is characteristic of prior art methods.
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
Airside HVAC BESTEST: HVAC Air-Distribution System Model Test Cases for ASHRAE Standard 140
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ronald; Neymark, Joel; Kennedy, Mike D.
This paper summarizes recent work to develop new airside HVAC equipment model analytical verification test cases for ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs. The analytical verification test method allows comparison of simulation results from a wide variety of building energy simulation programs with quasi-analytical solutions, further described below. Standard 140 is widely cited for evaluating software for use with performance-path energy efficiency analysis, in conjunction with well-known energy-efficiency standards including ASHRAE Standard 90.1, the International Energy Conservation Code, and other international standards. Airside HVAC equipment is a common area of modelling not previously explicitly tested by Standard 140. Integration of the completed test suite into Standard 140 is in progress.
The Relationship between SW-846, PBMS, and Innovative Analytical Technologies
This paper explains EPA's position regarding testing methods used within waste programs, documentation of EPA's position, the reasoning behind EPA's position, and the relationship between analytical method regulatory flexibility and the use of on-site...
Hazardous Waste Test Methods / SW-846
The Resource Conservation and Recovery Act Test Methods for Evaluating Solid Waste: Physical/Chemical Methods (SW-846) provide guidance to analytical scientists, enforcement officers and method developers across a variety of sectors.
Stephanson, N N; Signell, P; Helander, A; Beck, O
2017-08-01
The influx of new psychoactive substances (NPS) has created a need for improved methods for drug testing in toxicology laboratories. The aim of this work was to design, validate and apply a multi-analyte liquid chromatography-high-resolution mass spectrometry (LC-HRMS) method for screening of 148 target analytes belonging to the NPS class, plant alkaloids and new psychoactive therapeutic drugs. The analytical method used a fivefold dilution of urine with nine deuterated internal standards and injection of 2 μl. The LC system involved a 2.0 μm, 100 × 2.0 mm YMC-UltraHT Hydrosphere-C18 column and gradient elution with a flow rate of 0.5 ml/min and a total analysis time of 6.0 min. Solvent A consisted of 10 mmol/l ammonium formate and 0.005% formic acid, pH 4.8, and Solvent B was methanol with 10 mmol/l ammonium formate and 0.005% formic acid. The HRMS (Q Exactive, Thermo Scientific) used a heated electrospray interface and was operated in positive mode with 70 000 resolution. The scan range was 100-650 Da, and data for extracted ion chromatograms used ±10 ppm tolerance. Product ion monitoring was applied for confirmation analysis and, for some selected analytes, also for screening. Method validation demonstrated limited influence from urine matrix, linear response within the measuring range (typically 0.1-1.0 μg/ml) and acceptable imprecision in quantification (CV <15%). A few analytes were found to be unstable in urine upon storage. The method was successfully applied for routine drug testing of 17 936 unknown samples, of which 2715 (15%) contained 52 of the 148 analytes. It is concluded that the method design based on simple dilution of urine and using LC-HRMS in extracted ion chromatogram mode may offer an analytical system for urine drug testing that fulfils the requirement of a 'black box' solution and can replace immunochemical screening applied on autoanalyzers. Copyright © 2017 John Wiley & Sons, Ltd.
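The ±10 ppm extracted-ion-chromatogram tolerance translates to a mass window that scales with m/z. A minimal sketch of the conversion (an illustration, not the authors' code):

```python
def mz_window(mz, tol_ppm=10.0):
    """Return the (low, high) m/z bounds for an extracted ion
    chromatogram with a symmetric +/- tol_ppm mass tolerance."""
    delta = mz * tol_ppm * 1e-6
    return mz - delta, mz + delta

# At m/z 300, a 10 ppm tolerance spans only 0.006 Da in total.
low, high = mz_window(300.0)
print(low, high)
```

The narrowness of this window, enabled by the 70 000 resolution, is what lets the screen separate 148 targets without chromatographic resolution of every pair.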
Systems and methods for laser assisted sample transfer to solution for chemical analysis
Van Berkel, Gary J.; Kertesz, Vilmos; Ovchinnikova, Olga S.
2014-06-03
Systems and methods are described for laser ablation of an analyte from a specimen and capturing of the analyte in a dispensed solvent to form a testing solution. A solvent dispensing and extraction system can form a liquid microjunction with the specimen. The solvent dispensing and extraction system can include a surface sampling probe. The laser beam can be directed through the surface sampling probe. The surface sampling probe can also serve as an atomic force microscopy probe. The surface sampling probe can form a seal with the specimen. The testing solution including the analyte can then be analyzed using an analytical instrument or undergo further processing.
Systems and methods for laser assisted sample transfer to solution for chemical analysis
Van Berkel, Gary J.; Kertesz, Vilmos; Ovchinnikova, Olga S.
2015-09-29
Systems and methods for laser assisted sample transfer to solution for chemical analysis
Van Berkel, Gary J; Kertesz, Vilmos; Ovchinnikova, Olga S
2013-08-27
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program that is essentially based on modal synthesis combined with a central finite-difference numerical integration scheme. The methodology leads to a modular, building-block technique that is amenable to computer programming. To verify the analytical method, the turbine engine transient response analysis (TETRA) program was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time-dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of the working equations, their discretization, the numerical solution scheme, the modular concept of engine modelling, the program's logical structure, and some illustrative results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
Gasoline and Diesel Fuel Test Methods Additional Resources
Supporting documents on the Direct Final Rule that allows refiners and laboratories to use more current and improved fuel testing procedures for twelve American Society for Testing and Materials analytical test methods.
40 CFR 63.1208 - What are the test methods?
Code of Federal Regulations, 2010 CFR
2010-07-01
... test must be repeated. (5) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... 40 Protection of Environment 11 2010-07-01 2010-07-01 true What are the test methods? 63.1208... § 63.1208 What are the test methods? (a) [Reserved] (b) Test methods. You must use the following test...
40 CFR 63.1208 - What are the test methods?
Code of Federal Regulations, 2011 CFR
2011-07-01
... test must be repeated. (5) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... 40 Protection of Environment 11 2011-07-01 2011-07-01 false What are the test methods? 63.1208... § 63.1208 What are the test methods? (a) [Reserved] (b) Test methods. You must use the following test...
40 CFR 63.1208 - What are the test methods?
Code of Federal Regulations, 2012 CFR
2012-07-01
... test must be repeated. (5) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... 40 Protection of Environment 12 2012-07-01 2011-07-01 true What are the test methods? 63.1208... § 63.1208 What are the test methods? (a) [Reserved] (b) Test methods. You must use the following test...
Eckfeldt, J H; Copeland, K R
1993-04-01
Proficiency testing using stabilized control materials has been used for decades as a means of monitoring and improving performance in the clinical laboratory. Often, the commonly used proficiency testing materials exhibit "matrix effects" that cause them to behave differently from fresh human specimens in certain clinical analytic systems. Because proficiency testing is the primary method in which regulatory agencies have chosen to evaluate clinical laboratory performance, the College of American Pathologists (CAP) has proposed guidelines for investigating the influence of matrix effects on their Survey results. The purpose of this investigation was to determine the feasibility, usefulness, and potential problems associated with this CAP Matrix Effect Analytical Protocol, in which fresh patient specimens and CAP proficiency specimens are analyzed simultaneously by a field method and a definitive, reference, or other comparative method. The optimal outcome would be that both the fresh human and CAP Survey specimens agree closely with the comparative method result. However, this was not always the case. Using several different analytic configurations, we were able to demonstrate matrix and calibration biases for several of the analytes investigated.
Westenberger, Benjamin J; Ellison, Christopher D; Fussner, Andrew S; Jenney, Susan; Kolinski, Richard E; Lipe, Terra G; Lyon, Robbe C; Moore, Terry W; Revelle, Larry K; Smith, Anjanette P; Spencer, John A; Story, Kimberly D; Toler, Duckhee Y; Wokovich, Anna M; Buhse, Lucinda F
2005-12-08
This work investigated the use of non-traditional analytical methods to evaluate the quality of a variety of pharmaceutical products purchased via internet sites from foreign sources and compared the results with those obtained from conventional quality assurance methods. Traditional analytical techniques employing HPLC for potency, content uniformity, chromatographic purity and drug release profiles were used to evaluate the quality of five selected drug products (fluoxetine hydrochloride, levothyroxine sodium, metformin hydrochloride, phenytoin sodium, and warfarin sodium). Non-traditional techniques, such as near infrared spectroscopy (NIR), NIR imaging and thermogravimetric analysis (TGA), were employed to verify the results and investigate their potential as alternative testing methods. Two of 20 samples failed USP monographs for quality attributes. The additional analytical methods found 11 of 20 samples had different formulations when compared to the U.S. product. Seven of the 20 samples arrived in questionable containers, and 19 of 20 had incomplete labeling. Only 1 of the 20 samples had final packaging similar to the U.S. products. The non-traditional techniques complemented the traditional techniques used and highlighted additional quality issues for the products tested. For example, these methods detected suspect manufacturing issues (such as blending), which were not evident from traditional testing alone.
40 CFR 63.1208 - What are the test methods?
Code of Federal Regulations, 2013 CFR
2013-07-01
... test must be repeated. (5) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... 40 Protection of Environment 12 2013-07-01 2013-07-01 false What are the test methods? 63.1208... Compliance Provisions § 63.1208 What are the test methods? (a) [Reserved] (b) Test methods. You must use the...
40 CFR 63.1208 - What are the test methods?
Code of Federal Regulations, 2014 CFR
2014-07-01
... test must be repeated. (5) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... 40 Protection of Environment 12 2014-07-01 2014-07-01 false What are the test methods? 63.1208... Compliance Provisions § 63.1208 What are the test methods? (a) [Reserved] (b) Test methods. You must use the...
A transient laboratory method for determining the hydraulic properties of 'tight' rocks-I. Theory
Hsieh, P.A.; Tracy, J.V.; Neuzil, C.E.; Bredehoeft, J.D.; Silliman, Stephen E.
1981-01-01
Transient pulse testing has been employed increasingly in the laboratory to measure the hydraulic properties of rock samples with low permeability. Several investigators have proposed a mathematical model in terms of an initial-boundary value problem to describe fluid flow in a transient pulse test. However, the solution of this problem has not been available. In analyzing data from the transient pulse test, previous investigators have either employed analytical solutions that are derived with the use of additional, restrictive assumptions, or have resorted to numerical methods. In Part I of this paper, a general, analytical solution for the transient pulse test is presented. This solution is graphically illustrated by plots of dimensionless variables for several cases of interest. The solution is shown to contain, as limiting cases, the more restrictive analytical solutions that the previous investigators have derived. A method of computing both the permeability and specific storage of the test sample from experimental data will be presented in Part II. ?? 1981.
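For intuition, the most restrictive of the earlier analytical solutions reduces the pulse test to an exponential decay of the head difference, from which a decay constant (proportional to sample permeability) is recovered by a log-linear fit. The sketch below uses synthetic data under that restrictive assumption; it is not the general solution derived in the paper.

```python
import numpy as np

# Synthetic pulse-decay record: under the simplest assumptions (negligible
# storage in the sample) the head difference decays exponentially,
# dh(t) = dh0 * exp(-a * t), with decay constant a set by permeability.
t = np.linspace(0.0, 100.0, 21)       # time (s)
a_true = 0.04                          # decay constant (1/s), synthetic value
dh = 1.0 * np.exp(-a_true * t)         # head difference (m)

# Recover the decay constant from the slope of ln(dh) versus t.
slope, _ = np.polyfit(t, np.log(dh), 1)
a_est = -slope
print(a_est)
```

The paper's general solution matters precisely where this fit fails: when specific storage of the sample makes the decay non-exponential.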
Improvement of analytical dynamic models using modal test data
NASA Technical Reports Server (NTRS)
Berman, A.; Wei, F. S.; Rao, K. V.
1980-01-01
A method developed to determine minimum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model is an improved base for studies of physical changes and boundary condition changes, and for prediction of forced responses. The method features efficient procedures that do not require solution of the eigenvalue problem, and the ability to handle models with more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.
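A classical Berman-style minimal-change mass-matrix update from this family can be sketched as follows. The formula shown is an assumed textbook form for illustration; the paper's exact formulation may differ in weighting and detail. After the update, the measured modes are exactly mass-orthonormal.

```python
import numpy as np

def berman_mass_update(Ma, Phi):
    """Minimal-change update of the analytical mass matrix Ma so that the
    measured mode shapes Phi (one mode per column) become exactly
    mass-orthonormal: Phi.T @ M @ Phi = I.  Assumed classical form."""
    ma = Phi.T @ Ma @ Phi                 # analytical modal mass matrix
    ma_inv = np.linalg.inv(ma)
    I = np.eye(ma.shape[0])
    return Ma + Ma @ Phi @ ma_inv @ (I - ma) @ ma_inv @ Phi.T @ Ma

# Small synthetic example: 5-DOF diagonal mass model, 2 "measured" modes.
rng = np.random.default_rng(0)
Ma = np.diag(rng.uniform(1.0, 2.0, 5))
Phi = rng.standard_normal((5, 2))
M = berman_mass_update(Ma, Phi)
print(np.allclose(Phi.T @ M @ Phi, np.eye(2)))  # True
```

Note that no eigenvalue problem is solved and the model retains more degrees of freedom (5) than measured modes (2), matching the two efficiency features claimed in the abstract.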
Ozarda, Yesim; Ichihara, Kiyoshi; Aslan, Diler; Aybek, Hulya; Ari, Zeki; Taneli, Fatma; Coker, Canan; Akan, Pinar; Sisman, Ali Riza; Bahceci, Onur; Sezgin, Nurzen; Demir, Meltem; Yucel, Gultekin; Akbas, Halide; Ozdem, Sebahat; Polat, Gurbuz; Erbagci, Ayse Binnur; Orkmez, Mustafa; Mete, Nuriye; Evliyaoglu, Osman; Kiyici, Aysel; Vatansev, Husamettin; Ozturk, Bahadir; Yucel, Dogan; Kayaalp, Damla; Dogan, Kubra; Pinar, Asli; Gurbilek, Mehmet; Cetinkaya, Cigdem Damla; Akin, Okhan; Serdar, Muhittin; Kurt, Ismail; Erdinc, Selda; Kadicesme, Ozgur; Ilhan, Necip; Atali, Dilek Sadak; Bakan, Ebubekir; Polat, Harun; Noyan, Tevfik; Can, Murat; Bedir, Abdulkerim; Okuyucu, Ali; Deger, Orhan; Agac, Suret; Ademoglu, Evin; Kaya, Ayşem; Nogay, Turkan; Eren, Nezaket; Dirican, Melahat; Tuncer, GulOzlem; Aykus, Mehmet; Gunes, Yeliz; Ozmen, Sevda Unalli; Kawano, Reo; Tezcan, Sehavet; Demirpence, Ozlem; Degirmen, Elif
2014-12-01
A nationwide multicenter study was organized to establish reference intervals (RIs) in the Turkish population for 25 commonly tested biochemical analytes and to explore sources of variation in reference values, including regionality. Blood samples were collected nationwide in 28 laboratories from the seven regions (≥400 samples/region, 3066 in all). The sera were collectively analyzed in Uludag University in Bursa using Abbott reagents and analyzer. Reference materials were used for standardization of test results. After secondary exclusion using the latent abnormal values exclusion method, RIs were derived by a parametric method employing the modified Box-Cox formula and compared with the RIs by the non-parametric method. Three-level nested ANOVA was used to evaluate variations among sexes, ages and regions. Associations between test results and age, body mass index (BMI) and region were determined by multiple regression analysis (MRA). By ANOVA, differences of reference values among seven regions were significant in none of the 25 analytes. Significant sex-related and age-related differences were observed for 10 and seven analytes, respectively. MRA revealed BMI-related changes in results for uric acid, glucose, triglycerides, high-density lipoprotein (HDL)-cholesterol, alanine aminotransferase, and γ-glutamyltransferase. Their RIs were thus derived by applying stricter criteria excluding individuals with BMI >28 kg/m2. Ranges of RIs by non-parametric method were wider than those by parametric method especially for those analytes affected by BMI. With the lack of regional differences and the well-standardized status of test results, the RIs derived from this nationwide study can be used for the entire Turkish population.
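The parametric derivation step can be sketched with a simplified Box-Cox-type transform. This is a stand-in using only NumPy; the study's modified Box-Cox formula and its latent abnormal values exclusion step are not reproduced here.

```python
import numpy as np

def parametric_ri(values, lam=0.0):
    """Central 95% reference interval by a parametric method: apply a
    Box-Cox-type power transform (lam=0 gives the log transform), take
    mean +/- 1.96 SD in transformed space, then back-transform.
    Simplified stand-in for the study's modified Box-Cox procedure."""
    x = np.log(values) if lam == 0.0 else (values**lam - 1.0) / lam
    lo_t, hi_t = x.mean() - 1.96 * x.std(), x.mean() + 1.96 * x.std()
    if lam == 0.0:
        return np.exp(lo_t), np.exp(hi_t)
    return (lam * lo_t + 1.0) ** (1.0 / lam), (lam * hi_t + 1.0) ** (1.0 / lam)

# Synthetic right-skewed "analyte" values (log-normal), as is typical for
# enzymes and triglycerides; the transform restores approximate normality.
rng = np.random.default_rng(1)
data = np.exp(rng.normal(1.0, 0.3, 3000))
lo, hi = parametric_ri(data)
print(round(lo, 2), round(hi, 2))
```

For skewed analytes, this transformed interval is tighter and more symmetric on the transformed scale than raw percentiles, which is consistent with the study's observation that non-parametric RIs were wider for BMI-affected analytes.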
Experimental and Analytical Determinations of Spiral Bevel Gear-Tooth Bending Stress Compared
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.
2000-01-01
Spiral bevel gears are currently used in all main-rotor drive systems for rotorcraft produced in the United States. Applications such as these need spiral bevel gears to turn the corner from the horizontal gas turbine engine to the vertical rotor shaft. These gears must typically operate at extremely high rotational speeds and carry high power levels. With these difficult operating conditions, an improved analytical capability is paramount to increasing aircraft safety and reliability. Also, literature on the analysis and testing of spiral bevel gears has been very sparse in comparison to that for parallel axis gears. This is due to the complex geometry of this type of gear and to the specialized test equipment necessary to test these components. To develop an analytical model of spiral bevel gears, researchers use differential geometry methods to model the manufacturing kinematics. A three-dimensional spiral bevel gear modeling method was developed that uses finite elements for the structural analysis. This method was used to analyze the three-dimensional contact pattern between the test pinion and gear used in the Spiral Bevel Gear Test Facility at the NASA Glenn Research Center at Lewis Field. Results of this analysis are illustrated in the preceding figure. The development of the analytical method was a joint endeavor between NASA Glenn, the U.S. Army Research Laboratory, and the University of North Dakota.
Passive Magnetic Bearing With Ferrofluid Stabilization
NASA Technical Reports Server (NTRS)
Jansen, Ralph; DiRusso, Eliseo
1996-01-01
A new class of magnetic bearings is shown to exist analytically and is demonstrated experimentally. This class of magnetic bearings utilizes a ferrofluid/solid-magnet interaction to stabilize the axial degree of freedom of a permanent magnet radial bearing. Twenty-six permanent magnet bearing designs and twenty-two ferrofluid stabilizer designs are evaluated. Two types of radial bearing designs are tested to determine their force and stiffness using two methods. The first method uses frequency measurements to determine stiffness via an analytical model. The second method consists of loading the system and measuring displacement in order to determine stiffness. Two ferrofluid stabilizers are tested and force-displacement curves are measured. Two experimental test fixtures are designed and constructed in order to conduct the stiffness testing. Polynomial models of the data are generated and used to design the bearing prototype. The prototype was constructed, tested, and shown to be stable. Further testing shows the possibility of using this technology for vibration isolation. The project successfully demonstrated the viability of the passive magnetic bearing with ferrofluid stabilization both experimentally and analytically.
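The two stiffness-measurement methods described above reduce to simple formulas for a single-degree-of-freedom model. The sketch below is our own illustration, not the authors' analysis: it assumes the rotor on its bearing behaves as a mass on a linear spring.

```python
import math

def stiffness_from_frequency(mass_kg, f_hz):
    """First method: infer stiffness from a measured natural frequency
    of a single-DOF model, k = m * (2*pi*f)^2."""
    return mass_kg * (2.0 * math.pi * f_hz) ** 2

def stiffness_from_load(force_n, displacement_m):
    """Second method: static load-deflection estimate, k = F / x."""
    return force_n / displacement_m
```

Agreement between the two estimates is a quick consistency check on the linear-spring assumption.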
System and method for laser assisted sample transfer to solution for chemical analysis
Van Berkel, Gary J; Kertesz, Vilmos
2014-01-28
A system and method for laser desorption of an analyte from a specimen and capturing of the analyte in a suspended solvent to form a testing solution are described. The method can include providing a specimen supported by a desorption region of a specimen stage and desorbing an analyte from a target site of the specimen with a laser beam centered at a radiation wavelength (λ). The desorption region is transparent to the radiation wavelength (λ) and the sampling probe and a laser source emitting the laser beam are on opposite sides of a primary surface of the specimen stage. The system can also be arranged where the laser source and the sampling probe are on the same side of a primary surface of the specimen stage. The testing solution can then be analyzed using an analytical instrument or undergo further processing.
IMPROVED METHOD FOR THE STORAGE OF GROUND WATER SAMPLES CONTAINING VOLATILE ORGANIC ANALYTES
The sorption of volatile organic analytes from water samples by the Teflon septum surface used with standard glass 40-ml sample collection vials was investigated. Analytes tested included alkanes, isoalkanes, olefins, cycloalkanes, a cycloalkene, monoaromatics, a polynuclear arom...
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are among the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of this study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
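The resampling bootstrap idea tested above can be sketched as follows for the mean incremental life expectancy between two arms. This is a generic percentile bootstrap over hypothetical per-patient survival values, not the authors' RPSFT-adjusted pipeline.

```python
import random
import statistics

def bootstrap_ci(control, treated, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the incremental mean survival.
    Resampling patients propagates the sampling uncertainty that a
    single switching-adjusted point estimate ignores."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        c = [rng.choice(control) for _ in control]
        t = [rng.choice(treated) for _ in treated]
        diffs.append(statistics.mean(t) - statistics.mean(c))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

The width of (lo, hi) is exactly the extra uncertainty that the no-adjustment approach understates.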
NASA Astrophysics Data System (ADS)
Schlager, Kenneth J.; Ruchti, Timothy L.
1995-04-01
TAMM (Transcutaneous Analyte Measuring Method) is a near-infrared spectroscopic technique for the noninvasive measurement of human blood chemistry. A near-infrared indium gallium arsenide (InGaAs) photodiode array spectrometer has been developed and tested on over 1,000 patients as part of an SBIR program sponsored by the Naval Medical Research and Development Command. Nine (9) blood analytes have been measured and evaluated during pre-clinical testing: sodium, chloride, calcium, potassium, bicarbonate, BUN, glucose, hematocrit and hemoglobin. A reflective rather than a transmissive approach to measurement has been taken to avoid variations resulting from skin color and sensor positioning. The current status of the instrumentation, neural network pattern recognition algorithms and test results will be discussed.
Analysis methods for Kevlar shield response to rotor fragments
NASA Technical Reports Server (NTRS)
Gerstle, J. H.
1977-01-01
Several empirical and analytical approaches to rotor burst shield sizing are compared and principal differences in metal and fabric dynamic behavior are discussed. The application of transient structural response computer programs to predict Kevlar containment limits is described. For preliminary shield sizing, present analytical methods are useful if insufficient test data for empirical modeling are available. To provide other information useful for engineering design, analytical methods require further developments in material characterization, failure criteria, loads definition, and post-impact fragment trajectory prediction.
Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...
2016-07-05
Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.
Galy, Bertrand; Lan, André
2018-03-01
Among the many occupational risks construction workers encounter every day, falling from a height is the most dangerous. The objective of this article is to propose a simple analytical design method for horizontal lifelines (HLLs) that considers anchorage flexibility. The article presents a short review of the standards and regulations/acts/codes concerning HLLs in Canada, the USA and Europe. A static analytical approach is proposed that accounts for anchorage flexibility. The analytical results are compared with a series of 42 dynamic fall tests and a SAP2000 numerical model. The experimental results show that the analytical method is slightly conservative and overestimates the line tension in most cases, by a maximum of 17%. The static SAP2000 results show a maximum 2.1% difference from the analytical method. The analytical method is accurate enough to safely design HLLs, and quick design abaci are provided to allow the engineer to make quick on-site verifications if needed.
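A static HLL calculation of the kind the article proposes can be sketched from force equilibrium alone. The model below is our own simplification: a two-segment line with a midspan arrest load, plus a crude fixed-point treatment in which anchor compliance adds T/k of extra sag; the article's actual formulation may differ.

```python
import math

def line_tension(F, span, sag):
    """Static tension in a two-segment horizontal lifeline with a
    midspan arrest load F: 2*T*sin(theta) = F, with sin(theta) taken
    from the deflected geometry."""
    half = span / 2.0
    return F * math.sqrt(half ** 2 + sag ** 2) / (2.0 * sag)

def tension_flexible_anchors(F, span, sag0, k_anchor, iters=50):
    """Crude fixed-point model: anchor compliance adds T/k_anchor of
    extra sag, which in turn lowers the line tension, so flexible
    anchorages are less severely loaded than rigid ones."""
    sag = sag0
    for _ in range(iters):
        T = line_tension(F, span, sag)
        sag = sag0 + T / k_anchor
    return line_tension(F, span, sag)
```

The shallow-sag geometry is why HLL tensions far exceed the arrest force itself, and why neglecting anchorage flexibility is conservative.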
An analytical approach to obtaining JWL parameters from cylinder tests
NASA Astrophysics Data System (ADS)
Sutton, B. D.; Ferguson, J. W.; Hodgson, A. N.
2017-01-01
An analytical method for determining parameters for the JWL Equation of State from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated pressure-relative volume (p-Vr) curves agree with those produced by hydro-code modelling. The average calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-relative volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-Vr curve. The calculated energy is within 1.6% of that predicted by the model.
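The p-Vr curve and the energy integral mentioned above can be reproduced for any JWL parameter set. The sketch below uses the standard JWL principal-isentrope form with illustrative TNT-like coefficients from the open literature, not the EDC37 fit derived in the paper.

```python
import math

# Illustrative JWL isentrope coefficients (TNT-like values quoted in
# the open literature; NOT the EDC37 parameters from this work).
A, B, C = 371.2, 3.231, 1.045   # GPa
R1, R2, OMEGA = 4.15, 0.95, 0.30

def jwl_isentrope(v):
    """Pressure (GPa) on the principal isentrope at relative volume v:
    p(v) = A*exp(-R1*v) + B*exp(-R2*v) + C*v**-(1+omega)."""
    return A * math.exp(-R1 * v) + B * math.exp(-R2 * v) + C * v ** (-(1.0 + OMEGA))

def expansion_energy(v_end=7.0, n=10000):
    """Specific expansion energy from v = 1 to v_end, by trapezoidal
    integration under the p-v curve (units: GPa * relative volume)."""
    h = (v_end - 1.0) / n
    total = 0.5 * (jwl_isentrope(1.0) + jwl_isentrope(v_end))
    for i in range(1, n):
        total += jwl_isentrope(1.0 + i * h)
    return total * h
```

Integrating out to 7 relative volumes matches the commonly reported expansion range used for comparing parameter sets.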
The National Shipbuilding Research Program. Environmental Studies and Testing (Phase V)
2000-11-20
development of an analytical procedure for toxic organic compounds, including TBT (tributyltin), whose turnaround time would be in the order of minutes... Cost of the Subtask was $20,000. Subtask #33 - Turnaround Analytical Method for TBT This Subtask performed a preliminary investigation leading to the... "Quick TBT Analytical Method" that will yield reliable results in 15 minutes, a veritable breakthrough in sampling technology. The Subtask was managed by
AmO 2 Analysis for Analytical Method Testing and Assessment: Analysis Support for AmO 2 Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, Kevin John; Bland, Galey Jean; Fulwyler, James Brent
Americium oxide samples will be measured for various analytes to support AmO2 production. The key analytes currently requested by the Am production customer at LANL include total Am content, Am isotopics, Pu assay, Pu isotopics, and trace element content including 237Np content. Multiple analytical methods will be utilized depending on the sensitivity, accuracy and precision needs of the Am matrix. Traceability to the National Institute of Standards and Technology (NIST) will be achieved, where applicable, by running NIST-traceable quality control materials, given that there are no suitable AmO2 reference materials currently available for the requested analytes. The primary objective is to demonstrate the suitability of actinide analytical chemistry methods to support AmO2 production operations.
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vartio, Eric; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.
2007-01-01
Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Shimko, Anthony; Kvaternik, Raymond G.; Eure, Kenneth W.; Scott, Robert C.
2006-01-01
Aeroservoelastic (ASE) analytical models of a SensorCraft wind-tunnel model are generated using measured data. The data were acquired during the ASE wind-tunnel test of the HiLDA (High Lift-to-Drag Active) Wing model, tested in the NASA Langley Transonic Dynamics Tunnel (TDT) in late 2004. Two time-domain system identification techniques are applied to the development of the ASE analytical models: the impulse response (IR) method and the Generalized Predictive Control (GPC) method. Using measured control surface inputs (frequency sweeps) and associated sensor responses, the IR method is used to extract corresponding input/output impulse response pairs. These impulse responses are then transformed into state-space models for use in ASE analyses. Similarly, the GPC method transforms measured random control surface inputs and associated sensor responses into an AutoRegressive with eXogenous input (ARX) model. The ARX model is then used to develop the gust load alleviation (GLA) control law. For the IR method, comparisons of measured and simulated responses are presented to investigate the accuracy of the ASE analytical models developed. For the GPC method, comparisons of simulated open-loop and closed-loop (GLA) time histories are presented.
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood
ERIC Educational Resources Information Center
Karabatsos, George
2017-01-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon…
Ishibashi, Midori
2015-01-01
Cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and drug efficacy. To assure the quality of laboratory tests, providing high-quality laboratory tests is mandatory. For adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Beyond these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference interval are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
Buckling Testing and Analysis of Space Shuttle Solid Rocket Motor Cylinders
NASA Technical Reports Server (NTRS)
Weidner, Thomas J.; Larsen, David V.; McCool, Alex (Technical Monitor)
2002-01-01
A series of full-scale buckling tests were performed on the space shuttle Reusable Solid Rocket Motor (RSRM) cylinders. The tests were performed to determine the buckling capability of the cylinders and to provide data for analytical comparison. A nonlinear ANSYS Finite Element Analysis (FEA) model was used to represent and evaluate the testing. Analytical results demonstrated excellent correlation to test results, predicting the failure load within 5%. The analytical value was on the conservative side, predicting a lower failure load than was applied to the test. The resulting study and analysis indicated the important parameters for FEA to accurately predict buckling failure. The resulting method was subsequently used to establish the pre-launch buckling capability of the space shuttle system.
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.
Fluid mechanics of dynamic stall. II - Prediction of full scale characteristics
NASA Technical Reports Server (NTRS)
Ericsson, L. E.; Reding, J. P.
1988-01-01
Analytical extrapolations are made from experimental subscale dynamics to predict full scale characteristics of dynamic stall. The method proceeds by establishing analytic relationships between dynamic and static aerodynamic characteristics induced by viscous flow effects. The method is then validated by predicting dynamic test results on the basis of corresponding static test data obtained at the same subscale flow conditions, and the effect of Reynolds number on the static aerodynamic characteristics is determined from subscale to full scale flow conditions.
Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju
2018-06-01
To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and the method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and across five different methods of comparing position and velocity gains during head-movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analysis method that compares the areas under the velocity curves (AUC) of the head and eye movements during head movements yielded lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
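The two gain definitions compared above (AUC ratio versus peak-velocity ratio) can be written down directly. The sketch below assumes sampled head- and eye-velocity traces over the head-movement interval; function names are ours.

```python
def vhit_gain_auc(t, head_vel, eye_vel):
    """VOR gain as the ratio of the areas under the eye- and
    head-velocity curves over the head-movement interval
    (trapezoidal rule on possibly non-uniform samples)."""
    def auc(v):
        return sum((v[i] + v[i + 1]) * (t[i + 1] - t[i]) / 2.0
                   for i in range(len(v) - 1))
    return auc(eye_vel) / auc(head_vel)

def vhit_gain_peak(head_vel, eye_vel):
    """VOR gain as the ratio of peak eye to peak head velocity."""
    return max(eye_vel) / max(head_vel)
```

Because the AUC ratio averages over the whole impulse rather than a single sample, it is plausible that it shows the smaller variance reported in the study.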
Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M
2018-06-01
A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
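The mask-and-score evaluation used above is easy to reproduce in outline. The sketch below masks a random fraction of one analyte's series, imputes with a simple median fill (a stand-in for the MICE/GP/3D-MICE models), and scores with a range-normalized RMSE; the exact normalization used in the paper may differ.

```python
import math
import random
import statistics

def nrmse(true_vals, pred_vals):
    """Root-mean-square error normalized by the range of the true
    values, the style of metric used to compare imputation methods."""
    rng = max(true_vals) - min(true_vals)
    mse = sum((a - b) ** 2 for a, b in zip(true_vals, pred_vals)) / len(true_vals)
    return math.sqrt(mse) / rng

def mask_and_score(series, frac=0.2, impute=statistics.median, seed=7):
    """Randomly mask a fraction of observed results, impute each masked
    point from the remaining ones, and score the predictions."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(series)), int(len(series) * frac))
    observed = [v for i, v in enumerate(series) if i not in idx]
    fill = impute(observed)
    return nrmse([series[i] for i in idx], [fill] * len(idx))
```

A real 3D-MICE-style imputer would replace the constant `fill` with per-point predictions that combine cross-sectional and longitudinal information.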
Prediction of turning stability using receptance coupling
NASA Astrophysics Data System (ADS)
Jasiewicz, Marcin; Powałka, Bartosz
2018-01-01
This paper addresses machining stability prediction for the dynamic "lathe - workpiece" system using the receptance coupling method. Dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from the results of an impact test. The variable element of the "machine tool - holder - workpiece" system is therefore the machined part, which can easily be modelled analytically. The receptance coupling method enables a synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology for synthesizing the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. Finally, the experimental verification results are presented and discussed.
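Receptance coupling joins substructure frequency-response functions without testing the assembly. A minimal sketch for rigid point coupling of two single-DOF receptances is shown below; the real spindle and tailstock models are matrices of measured FRFs, so this is only the scalar skeleton of the method.

```python
def sdof_receptance(omega, m, c, k):
    """Point receptance of a single-DOF substructure:
    h(omega) = 1 / (k - m*omega**2 + 1j*c*omega)."""
    return 1.0 / (k - m * omega ** 2 + 1j * c * omega)

def couple(h_a, h_b):
    """Rigid point coupling of two substructure receptances. Stiffnesses
    add at the joint, so receptances combine like parallel resistances:
    g = h_a * h_b / (h_a + h_b)."""
    return h_a * h_b / (h_a + h_b)
```

In the paper's setting, `h_a` would come from impact tests on the spindle or tailstock and `h_b` from the analytical workpiece model, evaluated frequency by frequency.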
An Analytical Approach to Obtaining JWL Parameters from Cylinder Tests
NASA Astrophysics Data System (ADS)
Sutton, Ben; Ferguson, James
2015-06-01
An analytical method for determining parameters for the JWL equation of state (EoS) from cylinder test data is described. This method is applied to four datasets obtained from two 20.3 mm diameter EDC37 cylinder tests. The calculated parameters and pressure-volume (p-V) curves agree with those produced by hydro-code modelling. The calculated Chapman-Jouguet (CJ) pressure is 38.6 GPa, compared to the model value of 38.3 GPa; the CJ relative volume is 0.729 for both. The analytical pressure-volume curves produced agree with the one used in the model out to the commonly reported expansion of 7 relative volumes, as do the predicted energies generated by integrating under the p-V curve. The calculated and model energies are 8.64 GPa and 8.76 GPa respectively.
Validation of the enthalpy method by means of analytical solution
NASA Astrophysics Data System (ADS)
Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika
2014-05-01
Numerical simulations have moved in recent years from describing the cold-temperate transition surface (CTS) explicitly towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g. representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The method is rather new in glaciology and, to our knowledge, has not been verified and validated against analytical solutions. Unfortunately, analytical solutions are still lacking for sufficiently complex thermo-mechanically coupled polythermal ice flow. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary-condition scheme and the corresponding basal melt-rate calculation. Depending on the thermal situation that occurs at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients: a formerly cold ice body that runs through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice-flow models (COMIce, ISSM, TIMFD3) are presented.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The SQNA model is developed in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
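The sequential probability ratio test mentioned above can be illustrated with a minimal sketch: a generic Wald SPRT for a Gaussian mean shift, not the patent's actual implementation, with all parameter values assumed for illustration.

```python
import math
import random

# Generic Wald sequential probability ratio test for a Gaussian mean shift
# (illustrative only; not the patent's implementation). Decide between
# H0: mean = 0 and H1: mean = m1 for residuals x ~ N(mean, sigma^2) by
# accumulating the log-likelihood ratio until a Wald threshold is crossed.
def sprt(samples, m1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    upper = math.log((1 - beta) / alpha)    # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))    # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += (m1 / sigma**2) * (x - m1 / 2.0)   # Gaussian log-LR increment
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

random.seed(0)
shifted = [random.gauss(1.0, 1.0) for _ in range(200)]   # mean has drifted
decision, n_used = sprt(shifted)
```

Because the log-likelihood ratio drifts upward by about 0.5 per sample when the shift is real, the test typically terminates after only a handful of observations, which is why SPRT-style checks suit on-line monitoring.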
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
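The idea behind a polynomial least squares approach can be sketched as follows. This is a hypothetical toy version for one pantograph-type equation, y'(t) = -y(t) + 0.5·y(t/2) with y(0) = 1; it is not the authors' PLSM code, and the equation coefficients are chosen arbitrarily.

```python
import numpy as np

# Toy sketch of a polynomial least squares approach for the pantograph
# equation y'(t) = -y(t) + 0.5*y(t/2), y(0) = 1 (coefficients assumed for
# illustration). Approximate y(t) ~ sum_k c_k t^k and minimize the squared
# ODE residual on a grid, with the initial condition as a weighted equation.
deg = 6
t = np.linspace(0.0, 1.0, 50)

def residual_matrix(t, deg):
    """Row i gives the ODE residual at t[i] as a linear function of c."""
    A = np.zeros((len(t), deg + 1))
    for k in range(deg + 1):
        dterm = k * t**(k - 1) if k > 0 else np.zeros_like(t)
        # contribution of c_k to the residual y' + y - 0.5*y(t/2)
        A[:, k] = dterm + t**k - 0.5 * (t / 2.0)**k
    return A

A = residual_matrix(t, deg)
w = 1e6                                   # weight enforcing y(0) = 1
A_aug = np.vstack([A, w * np.eye(1, deg + 1)])
b_aug = np.concatenate([np.zeros(len(t)), [w]])
c, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

y = np.polyval(c[::-1], t)                # approximate solution on the grid
```

Because the residual is linear in the coefficients here, one linear least squares solve suffices; for the strongly nonlinear equations treated in the paper the corresponding minimization is nonlinear.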
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2013 CFR
2013-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.
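The rigid-body principle underlying the DIM method can be illustrated with a minimal sketch using synthetic numbers, not NASA's test procedure: below the first flexible mode, the accelerance FRF of a freely suspended body driven through its center of gravity is approximately flat at |a/F| = 1/m, so the mass can be read off an average over that band.

```python
import numpy as np

# Minimal illustration (synthetic numbers, not NASA's procedure) of the
# rigid-body idea behind DIM: below the first flexible mode the accelerance
# FRF |a/F| of a freely suspended body is flat at 1/m, so averaging the
# measured accelerance over that band recovers the mass.
m_true = 1500.0                            # kg, assumed "iron bird" mass
freqs = np.linspace(5.0, 20.0, 40)         # Hz, band below first flexible mode
accelerance = (1.0 / m_true) * (1 + 0.01 * np.sin(freqs))  # synthetic ripple

m_est = 1.0 / accelerance.mean()           # estimated mass, kg
```

Moments and products of inertia follow the same logic but require the full 6x6 rigid-body FRF relation between forces/moments and translational/rotational accelerations, which is where the DIM method's sensitivity issues arise.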
Overview of mycotoxin methods, present status and future needs.
Gilbert, J
1999-01-01
This article reviews current requirements for the analysis of mycotoxins in foods and identifies legislative as well as other factors that are driving the development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to use only validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research methods. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance, such as participation in proficiency testing and reference materials, are identified.
Documentation of spreadsheets for the analysis of aquifer-test and slug-test data
Halford, Keith J.; Kuniansky, Eve L.
2002-01-01
Several spreadsheets have been developed for the analysis of aquifer-test and slug-test data. Each spreadsheet incorporates analytical solution(s) of the partial differential equation for ground-water flow to a well for a specific type of condition or aquifer. The derivations of the analytical solutions were previously published. Thus, this report abbreviates the theoretical discussion but includes practical information about each method and the important assumptions for the application of each method. These spreadsheets were written in Microsoft Excel 9.0 (use of trade names does not constitute endorsement by the USGS). Storage properties should not be estimated with many of the spreadsheets because most are for analyzing single-well tests. Estimation of storage properties from single-well tests is generally discouraged because single-well tests are affected by wellbore storage and by well construction. These non-ideal effects frequently cause estimates of storage to be erroneous by orders of magnitude. Additionally, single-well tests are not sensitive to aquifer-storage properties. Single-well tests include all slug tests (the Bouwer and Rice method; the Cooper, Bredehoeft, and Papadopulos method; and the van der Kamp method), the Cooper-Jacob straight-line method, Theis recovery-data analysis, the Jacob-Lohman method for flowing wells in a confined aquifer, and the step-drawdown test. Multi-well test spreadsheets included in this report are the Hantush-Jacob leaky aquifer method and distance-drawdown methods. The distance-drawdown method is an equilibrium or steady-state method; thus storage cannot be estimated.
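As an example of the single-well analyses listed above, the Cooper-Jacob straight-line method can be sketched in a few lines, here with synthetic drawdown data and an assumed pumping rate and well distance, not values from the report's spreadsheets.

```python
import numpy as np

# Illustrative Cooper-Jacob straight-line analysis with synthetic drawdown
# data and an assumed pumping rate / well distance (not from the report).
# Fit drawdown s against log10(t); transmissivity follows from the slope,
# storativity from the zero-drawdown time intercept.
Q = 0.01                                   # pumping rate, m^3/s (assumed)
r = 30.0                                   # distance to observation well, m (assumed)
t = np.array([60.0, 120.0, 300.0, 600.0, 1200.0, 3600.0])    # s since pumping began
s = np.array([0.30, 0.39, 0.51, 0.60, 0.69, 0.83])           # drawdown, m (synthetic)

slope, intercept = np.polyfit(np.log10(t), s, 1)   # drawdown per log cycle, m
T = 2.3 * Q / (4.0 * np.pi * slope)                # transmissivity, m^2/s
t0 = 10.0 ** (-intercept / slope)                  # time-axis intercept (s = 0)
S = 2.25 * T * t0 / r**2                           # storativity (dimensionless)
```

Note that the storativity line is shown only for the multi-well case; as the report stresses, storage estimates from single-well data are unreliable because of wellbore storage and construction effects.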
NASA Technical Reports Server (NTRS)
Neustein, Joseph; Schafer, Louis J., Jr.
1946-01-01
Several methods of predicting the compressible-flow pressure loss across a baffled aircraft-engine cylinder were analytically related and were experimentally investigated on a typical air-cooled aircraft-engine cylinder. Tests with and without heat transfer covered a wide range of cooling-air flows and simulated altitudes from sea level to 40,000 feet. Both the analysis and the test results showed that the method based on the density determined by the static pressure and the stagnation temperature at the baffle exit gave results comparable with those obtained from methods derived by one-dimensional-flow theory. The method based on a characteristic Mach number, although related analytically to one-dimensional-flow theory, was found impractical in the present tests because of the difficulty encountered in defining the proper characteristic state of the cooling air. Accurate predictions of altitude pressure loss can apparently be made by these methods, provided that they are based on the results of sea-level tests with heat transfer.
CENTRIFUGAL VIBRATION TEST OF RC PILE FOUNDATION
NASA Astrophysics Data System (ADS)
Higuchi, Shunichi; Tsutsumiuchi, Takahiro; Otsuka, Rinna; Ito, Koji; Ejiri, Joji
To evaluate the seismic performance of underground structures or foundation structures, their nonlinear responses must be clarified by soil-structure interaction analysis. In this research, centrifuge shake table tests of a reinforced concrete pile foundation installed in liquefied ground were conducted. Finite element analyses of the tests were then conducted to confirm the applicability of the analytical method by comparing the experimental and analytical results.
40 CFR 63.786 - Test methods and procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... level of sample dilution must be factored in. (2) Repeatability. First, at the 0.1-5 percent analyte... percent analyte range the results would be suspect if duplicates vary by more than 5 percent relative and...) Reproducibility. First, at the 0.1-5 percent analyte range the results would be suspect if lab to lab variation...
Effects of Analytical and Holistic Scoring Patterns on Scorer Reliability in Biology Essay Tests
ERIC Educational Resources Information Center
Ebuoh, Casmir N.
2018-01-01
Literature revealed that the patterns/methods of scoring essay tests had been criticized for not being reliable and this unreliability is more likely to be more in internal examinations than in the external examinations. The purpose of this study is to find out the effects of analytical and holistic scoring patterns on scorer reliability in…
NASA Astrophysics Data System (ADS)
Cucu, Daniela; Woods, Mike
2008-08-01
The paper aims to present a practical approach for testing laboratories to ensure the quality of their test results. It is based on experience gained in assessing a large number of testing laboratories, discussing with management and staff, reviewing results obtained in national and international PTs and ILCs, and exchanging information in the EA laboratory committee. According to EN ISO/IEC 17025, an accredited laboratory has to implement a programme to ensure the quality of its test results for each measurand. Pre-analytical, analytical and post-analytical measures shall be applied in a systematic manner. They shall include both quality control and quality assurance measures. When designing the quality assurance programme, a laboratory should consider pre-analytical activities (such as personnel training, selection and validation of test methods, and qualifying equipment), analytical activities (sampling, sample preparation, and instrumental analysis), and post-analytical activities (such as decoding, calculation, use of statistical tests or packages, and management of results). Designed on different levels (analyst, quality manager and technical manager) and including a variety of measures, the programme shall ensure the validity and accuracy of test results and the adequacy of the management system, prove the laboratory's competence in performing tests under accreditation and, last but not least, show the comparability of test results. Laboratory management should establish performance targets and review QC/QA results against them periodically, implementing appropriate measures in case of non-compliance.
Preusser, Matthias; Berghoff, Anna S.; Manzl, Claudia; Filipits, Martin; Weinhäusel, Andreas; Pulverer, Walter; Dieckmann, Karin; Widhalm, Georg; Wöhrer, Adelheid; Knosp, Engelbert; Marosi, Christine; Hainfellner, Johannes A.
2014-01-01
Testing of the MGMT promoter methylation status in glioblastoma is relevant for clinical decision making and research applications. Two recent and independent phase III therapy trials confirmed a prognostic and predictive value of the MGMT promoter methylation status in elderly glioblastoma patients. Several methods for MGMT promoter methylation testing have been proposed, but seem to be of limited test reliability. Therefore, and also due to feasibility reasons, translation of MGMT methylation testing into routine use has been protracted so far. Pyrosequencing after prior DNA bisulfite modification has emerged as a reliable, accurate, fast and easy-to-use method for MGMT promoter methylation testing in tumor tissues (including formalin-fixed and paraffin-embedded samples). We performed an intra- and inter-laboratory ring trial which demonstrates a high analytical performance of this technique. Thus, pyrosequencing-based assessment of MGMT promoter methylation status in glioblastoma meets the criteria of high analytical test performance and can be recommended for clinical application, provided that strict quality control is performed. Our article summarizes clinical indications, practical instructions and open issues for MGMT promoter methylation testing in glioblastoma using pyrosequencing. PMID:24359605
Genetics-based methods for detection of Salmonella spp. in foods.
Mozola, Mark A
2006-01-01
Genetic methods are now at the forefront of foodborne pathogen testing. The sensitivity, specificity, and inclusivity advantages offered by deoxyribonucleic acid (DNA) probe technology have driven an intense effort in methods development over the past 20 years. DNA probe-based methods for Salmonella spp. and other pathogens have progressed from time-consuming procedures involving the use of radioisotopes to simple, high throughput, automated assays. The analytical sensitivity of nucleic acid amplification technology has facilitated a reduction in analysis time by allowing enriched samples to be tested for previously undetectable quantities of analyte. This article will trace the evolution of the development of genetic methods for detection of Salmonella in foods, review the basic assay formats and their advantages and limitations, and discuss method performance characteristics and considerations for selection of methods.
Materials Compatibility Testing in Concentrated Hydrogen Peroxide
NASA Technical Reports Server (NTRS)
Boxwell, R.; Bromley, G.; Mason, D.; Crockett, D.; Martinez, L.; McNeal, C.; Lyles, G. (Technical Monitor)
2000-01-01
Materials test methods from the 1960s have been used as a starting point in evaluating materials for today's space launch vehicles. These established test methods have been modified to incorporate today's analytical laboratory equipment. The Orbital test objective was to test a wide range of materials so as to incorporate the revolution in polymer and composite materials that has occurred since the 1960s. Testing is accomplished in three stages, from rough screening to detailed analytical tests. Several interesting test observations were made during this testing and are included in the paper. A summary of the set-up, testing and evaluation of long-term storage sub-scale tanks is also included. This sub-scale tank test lasted for a 7-month duration before being stopped due to a polar boss material breakdown. Chemical evaluations of the hydrogen peroxide and the residue left on the polar boss surface identify the material breakdown quite clearly. The paper concludes with recommendations for future testing and a specific effort underway within the industry to standardize the test methods used in evaluating materials.
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Walpurgis, Katja; Geyer, Hans; Schänzer, Wilhelm
2016-01-01
The aim of improving anti-doping efforts is predicated on several different pillars, including, amongst others, optimized analytical methods. These commonly result from exploiting most recent developments in analytical instrumentation as well as research data on elite athletes' physiology in general, and pharmacology, metabolism, elimination, and downstream effects of prohibited substances and methods of doping, in particular. The need for frequent and adequate adaptations of sports drug testing procedures has been incessant, largely due to the uninterrupted emergence of new chemical entities but also due to the apparent use of established or even obsolete drugs for reasons other than therapeutic means, such as assumed beneficial effects on endurance, strength, and regeneration capacities. Continuing the series of annual banned-substance reviews, literature concerning human sports drug testing published between October 2014 and September 2015 is summarized and reviewed in reference to the content of the 2015 Prohibited List as issued by the World Anti-Doping Agency (WADA), with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.
2010-01-01
The reliability of impact simulations for aircraft components made with triaxial-braided carbon-fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Methods to characterize the material properties used in the analytical models from a systematically obtained set of test data are also lacking. A macroscopic finite element based analytical model to analyze the impact response of these materials has been developed. The stiffness and strength properties utilized in the material model are obtained from a set of quasi-static in-plane tension, compression and shear coupon level tests. Full-field optical strain measurement techniques are applied in the testing, and the results are used to help in characterizing the model. The unit cell of the braided composite is modeled as a series of shell elements, where each element is modeled as a laminated composite. The braided architecture can thus be approximated within the analytical model. The transient dynamic finite element code LS-DYNA is utilized to conduct the finite element simulations, and an internal LS-DYNA constitutive model is utilized in the analysis. Methods to obtain the stiffness and strength properties required by the constitutive model from the available test data are developed. Simulations of quasi-static coupon tests and impact tests of a represented braided composite are conducted. Overall, the developed method shows promise, but improvements that are needed in test and analysis methods for better predictive capability are examined.
Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.
Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling
2016-03-01
A nationwide multicenter study was conducted in China to explore sources of variation of reference values and establish reference intervals for 28 common biochemical analytes, as a part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area in China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 and 12 analytes, respectively. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established by use of the latest methodology. Reference intervals of 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China.
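The parametric derivation of reference intervals can be sketched as follows. This is a simplified illustration using the standard Box-Cox transform on synthetic data; the study's modified Box-Cox formula and latent abnormal values exclusion method are not reproduced here.

```python
import numpy as np
from scipy import special, stats

# Simplified parametric reference interval (illustration only): transform
# synthetic, right-skewed "analyte" values toward normality with a standard
# Box-Cox fit, take mean +/- 1.96 SD in the transformed space, and map the
# bounds back to the original scale.
rng = np.random.default_rng(1)
values = rng.lognormal(mean=3.0, sigma=0.25, size=500)   # synthetic analyte

z, lam = stats.boxcox(values)                 # maximum-likelihood Box-Cox fit
m, sd = z.mean(), z.std(ddof=1)
lo = special.inv_boxcox(m - 1.96 * sd, lam)   # lower reference limit
hi = special.inv_boxcox(m + 1.96 * sd, lam)   # upper reference limit
```

Working in the transformed space and back-transforming the limits keeps the interval asymmetric on the original scale, which matches the skewed distributions typical of biochemical analytes.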
40 CFR 260.21 - Petitions for equivalent testing or analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) SOLID WASTES (CONTINUED) HAZARDOUS WASTE MANAGEMENT SYSTEM: GENERAL Rulemaking Petitions § 260.21... method; (2) A description of the types of wastes or waste matrices for which the proposed method may be... will be incorporated by reference in § 260.11 and added to “Test Methods for Evaluating Solid Waste...
Bjorgaard, J. A.; Velizhanin, K. A.; Tretiak, S.
2015-08-06
This study describes variational energy expressions and analytical excited state energy gradients for time-dependent self-consistent field methods with polarizable solvent effects. Linear response, vertical excitation, and state-specific solvent models are examined. Enforcing a variational ground state energy expression in the state-specific model is found to reduce it to the vertical excitation model. Variational excited state energy expressions are then provided for the linear response and vertical excitation models, and analytical gradients are formulated. Using semiempirical model chemistry, the variational expressions are verified by numerical and analytical differentiation with respect to a static external electric field. Lastly, analytical gradients are further tested by performing microcanonical excited state molecular dynamics with p-nitroaniline.
Floré, Katelijne M J; Delanghe, Joris R
2009-01-01
Current point-of-care testing (POCT) glucometers are based on various test principles. Two major method groups dominate the market: glucose oxidase-based systems and glucose dehydrogenase-based systems using pyrroloquinoline quinone (GDH-PQQ) as a cofactor. The GDH-PQQ-based glucometers are replacing the older glucose oxidase-based systems because of their lower sensitivity for oxygen. On the other hand, the GDH-PQQ test method results in falsely elevated blood glucose levels in peritoneal dialysis patients receiving solutions containing icodextrin (e.g., Extraneal; Baxter, Brussels, Belgium). Icodextrin is metabolized in the systemic circulation into different glucose polymers, but mainly maltose, which interferes with the GDH-PQQ-based method. Clinicians should be aware of this analytical interference. The POCT glucometers based on the GDH-PQQ method should preferably not be used in this high-risk population and POCT glucose results inconsistent with clinical suspicion of hypoglycemic coma should be retested with another testing system.
Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W
2016-12-30
Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot satisfactorily be explained by charge transfer reactions. Further investigation of the reasons causing these effects can be useful to improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu) 4 ]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as the test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts respectively in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions. The results suggest that the generation of gas-phase ions from charged matrix/analyte clusters is an important ionization step in MALDI-MS. 
Copyright © 2016 John Wiley & Sons, Ltd.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-23
... Methods for Permit Applications and Reporting AGENCY: Environmental Protection Agency (EPA). ACTION... System (NPDES) program, only ``sufficiently sensitive'' analytical test methods can be used when... methods with respect to measurement of mercury and extend the approach outlined in that guidance to the...
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). Standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the major interest was the determination of oligosaccharide losses within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After the validation procedure was finished, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.
ANALYSIS OF ALDEHYDES AND KETONES IN THE GAS PHASE
The development and testing of a 2,4-dinitrophenylhydrazine-acetonitrile (DNPH-ACN) method for the analysis of aldehydes and ketones in ambient air are described. A discussion of interferences, preparation of calibration standards, analytical testing, fluorescence methods and car...
Piezocone Penetration Testing Device
DOT National Transportation Integrated Search
2017-01-03
Hydraulic characteristics of soils can be estimated from piezocone penetration test (called PCPT hereinafter) by performing dissipation test or on-the-fly using advanced analytical techniques. This research report presents a method for fast estimatio...
Properties of water as a novel stationary phase in capillary gas chromatography.
Gallant, Jonathan A; Thurbide, Kevin B
2014-09-12
A novel method of separation that uses water as a stationary phase in capillary gas chromatography (GC) is presented. By applying a water phase to the interior walls of a stainless steel capillary, good separations were obtained for a large variety of analytes in this format. Carrier gas humidification and backpressure were found to be key factors in promoting stable operation over time at various temperatures. For example, with these measures in place, the retention time of an acetone test analyte decreased by only 44 s after 100 min of operation at a column temperature of 100°C. In terms of efficiency, under optimum conditions the method produced about 20,000 plates for an acetone test analyte on a 250 μm i.d. × 30 m column. Overall, retention on the stationary phase generally increased with analyte water solubility and polarity, but correlated relatively little with analyte volatility. Conversely, non-polar analytes were essentially unretained in the system. These features were applied to the direct analysis of different polar analytes in both aqueous and organic samples. Results suggest that this approach could provide an interesting alternative tool in capillary GC separations. Copyright © 2014 Elsevier B.V. All rights reserved.
SRC-I demonstration plant analytical laboratory methods manual. Final technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klusaritz, M.L.; Tewari, K.C.; Tiedge, W.F.
1983-03-01
This manual is a compilation of analytical procedures required for operation of a Solvent-Refined Coal (SRC-I) demonstration or commercial plant. Each method reproduced in full includes a detailed procedure, a list of equipment and reagents, safety precautions, and, where possible, a precision statement. Procedures for the laboratory's environmental and industrial hygiene modules are not included. Required American Society for Testing and Materials (ASTM) methods are cited, and ICRC's suggested modifications to these methods for handling coal-derived products are provided.
NASA Technical Reports Server (NTRS)
Yang, Charles; Sun, Wenjun; Tomblin, John S.; Smeltzer, Stanley S., III
2007-01-01
A semi-analytical method for determining the strain energy release rate due to a prescribed interface crack in an adhesively-bonded, single-lap composite joint subjected to axial tension is presented. The field equations in terms of displacements within the joint are formulated by using first-order shear deformable, laminated plate theory together with kinematic relations and force equilibrium conditions. The stress distributions for the adherends and adhesive are determined after the appropriate boundary and loading conditions are applied and the equations for the field displacements are solved. Based on the adhesive stress distributions, the forces at the crack tip are obtained and the strain energy release rate of the crack is determined by using the virtual crack closure technique (VCCT). Additionally, the test specimen geometry from both the ASTM D3165 and D1002 test standards are utilized during the derivation of the field equations in order to correlate analytical models with future test results. The system of second-order differential field equations is solved to provide the adherend and adhesive stress response using the symbolic computation tool, Maple 9. Finite element analyses using J-integral as well as VCCT were performed to verify the developed analytical model. The finite element analyses were conducted using the commercial finite element analysis software ABAQUS. The results determined using the analytical method correlated well with the results from the finite element analyses.
The transfer of analytical procedures.
Ermer, J; Limberger, M; Lis, K; Wätzig, H
2013-11-01
Analytical method transfers are certainly among the most discussed topics in the GMP-regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in its draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. To facilitate this communication, procedures, flow charts, checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described so that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided: the success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower-risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
Mirmohseni, A; Abdollahi, H; Rostamizadeh, K
2007-02-28
A net analyte signal (NAS)-based method called HLA/GO was applied for the selective determination of binary mixtures of ethanol and water with a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of calibration and prediction sets in the concentration ranges 5.5-22.2 microg mL(-1) for ethanol and 7.01-28.07 microg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the NAS regression plot in each considered time window for each test sample. A moving-window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and minimum PRESS value. On the basis of the obtained results, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. Calculation of the net analyte signal using the HLA/GO method allows determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting non-linear regions of adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied to the determination of ethanol and water.
Ultraviolet, Visible, and Fluorescence Spectroscopy
NASA Astrophysics Data System (ADS)
Penner, Michael H.
Spectroscopy in the ultraviolet-visible (UV-Vis) range is one of the most commonly encountered laboratory techniques in food analysis. Diverse examples, such as the quantification of macrocomponents (total carbohydrate by the phenol-sulfuric acid method), quantification of microcomponents (thiamin by the thiochrome fluorometric procedure), estimates of rancidity (lipid oxidation status by the thiobarbituric acid test), and surveillance testing (enzyme-linked immunoassays), are presented in this text. In each of these cases, the analytical signal on which the assay is based is either the emission or absorption of radiation in the UV-Vis range. This signal may be inherent in the analyte, such as the absorbance of radiation in the visible range by pigments, or the result of a chemical reaction involving the analyte, such as the colorimetric copper-based Lowry method for the analysis of soluble protein.
NASA Astrophysics Data System (ADS)
Al Okab, Riyad Ahmed
2013-02-01
Green analytical methods using Cisapride (CPE) as a green analytical reagent were investigated in this work. Rapid, simple, and sensitive spectrophotometric methods for the determination of bromate in water samples, bread and flour additives were developed. The proposed methods are based on the oxidative coupling between phenoxazine and Cisapride in the presence of bromate to form a red colored product with λmax at 520 nm. Phenoxazine, Cisapride and their reaction products were found to be environmentally friendly under the optimum experimental conditions. The method obeys Beer's law in the concentration range 0.11-4.00 μg mL(-1), with a molar absorptivity of 1.41 × 10(4) L mol(-1) cm(-1). All variables were optimized, and the presented reaction sequences were applied to the analysis of bromate in water, bread and flour additive samples. The performance of these methods was evaluated in terms of Student's t-test and the variance ratio F-test to establish the significance of the proposed methods relative to the reference method. The use of pharmaceutical drug reagents at low concentrations makes for a distinctly green chemical analysis.
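The Beer-Lambert relation behind this calibration can be illustrated numerically. The molar absorptivity below comes from the abstract; the path length and absorbance reading are assumed values for illustration only, not data from the study.

```python
# Beer-Lambert law: A = epsilon * l * c  =>  c = A / (epsilon * l)
epsilon = 1.41e4       # molar absorptivity, L mol^-1 cm^-1 (from the abstract)
path_cm = 1.0          # assumed 1 cm cuvette
absorbance = 0.35      # hypothetical reading at 520 nm

conc_mol_per_L = absorbance / (epsilon * path_cm)
print(f"bromate product concentration: {conc_mol_per_L:.2e} mol/L")
```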
A general statistical test for correlations in a finite-length time series.
Hanson, Jeffery A; Yang, Haw
2008-06-07
The statistical properties of the autocorrelation function of a time series composed of independently and identically distributed stochastic variables have been studied. Analytical expressions for the autocorrelation function's variance have been derived. It has been found that two common ways of calculating the autocorrelation, moving-average and Fourier transform, exhibit different uncertainty characteristics. For periodic time series, the Fourier transform method is preferred because it gives smaller uncertainties that are uniform through all time lags. Based on these analytical results, a statistically robust method has been proposed to test for the existence of correlations in a time series. The statistical test is verified by computer simulations, and an application to single-molecule fluorescence spectroscopy is discussed.
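The two estimation routes mentioned in the abstract can be sketched as follows. This is a generic illustration (the test series and normalization are assumed, not the authors' code): with both estimators normalized by n and the FFT zero-padded to avoid circular wrap-around, the two agree exactly for a finite series.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1024)          # i.i.d. test series (assumed, for illustration)
xc = x - x.mean()

def acf_direct(y, nlags):
    # Moving-average style estimator, normalized by n at every lag
    n, var = len(y), np.dot(y, y) / len(y)
    return np.array([np.dot(y[:n - k], y[k:]) / (n * var) for k in range(nlags)])

def acf_fft(y, nlags):
    # Wiener-Khinchin route: power spectrum -> inverse FFT,
    # zero-padded to 2n so the implied convolution is linear, not circular
    n = len(y)
    f = np.fft.fft(y, n=2 * n)
    acov = np.fft.ifft(f * np.conj(f)).real[:nlags] / n
    return acov / acov[0]

r_direct, r_fft = acf_direct(xc, 20), acf_fft(xc, 20)
```

For an i.i.d. series, the nonzero-lag values should scatter around zero within roughly 2/sqrt(n), which is the kind of finite-length uncertainty the paper quantifies.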
NASA Astrophysics Data System (ADS)
Ribera, Javier; Tahboub, Khalid; Delp, Edward J.
2015-03-01
Video surveillance systems are widely deployed for public safety. Real-time monitoring and alerting are among the key requirements for building an intelligent video surveillance system. Real-life settings introduce many challenges that can impact the performance of real-time video analytics, so video analytics should be resilient to adverse and changing scenarios. In this paper we present various approaches to characterize the uncertainty of a classifier and to incorporate crowdsourcing at the times when the method is uncertain about making a particular decision. Incorporating crowdsourcing when a real-time video analytic method is uncertain about a particular decision is known as online active learning from crowds. We evaluate our proposed approach by testing a method we developed previously for crowd flow estimation. We present three different approaches to characterize the uncertainty of the classifier in the automatic crowd flow estimation method and test them by introducing video quality degradations. Criteria to aggregate crowdsourcing results are also proposed and evaluated. An experimental evaluation is conducted using a publicly available dataset.
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocs. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if a set of international consensus-derived analytical standards exists. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate the purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review; in fact, ISO 21572, the "protein standard", has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards, an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
[Detection of rubella virus RNA in clinical material by real time polymerase chain reaction method].
Domonova, É A; Shipulina, O Iu; Kuevda, D A; Larichev, V F; Safonova, A P; Burchik, M A; Butenko, A M; Shipulin, G A
2012-01-01
Development of a reagent kit for detection of rubella virus RNA in clinical material by real-time PCR (PCR-RT). During development and determination of analytical specificity and sensitivity, DNA and RNA of 33 different microorganisms, including 4 rubella strains, were used. Comparison of the analytical sensitivity of virological and molecular-biological methods was performed using rubella virus strains Wistar RA 27/3, M-33, "Orlov", and Judith. Evaluation of the diagnostic informativity of rubella virus RNA isolation from various clinical materials by the PCR-RT method was performed in comparison with determination of virus-specific serum antibodies by enzyme immunoassay. A reagent kit for the detection of rubella virus RNA in clinical material by PCR-RT was developed. Analytical specificity was 100%; analytical sensitivity, 400 virus RNA copies per ml. The analytical sensitivity of the developed technique exceeds that of the Vero E6 cell culture infection method by 1 lg and 3 lg for rubella virus strains Wistar RA 27/3 and "Orlov", respectively, and is analogous for the M-33 and Judith strains. Diagnostic specificity is 100%. Diagnostic sensitivity for samples obtained within 5 days of rash onset: peripheral blood sera - 20.9%, saliva - 92.5%, nasopharyngeal swabs - 70.1%, saliva and nasopharyngeal swabs - 97%. Positive and negative predictive values of the results were shown depending on the type of clinical material tested. Application of the reagent kit will make it possible to improve the effectiveness of rubella diagnostics at early stages of the infectious process, to perform timely and reliable differential diagnostics of exanthema diseases, and to support anti-epidemic measures.
Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.
Lo, Y C; Armbruster, David A
2012-04-01
Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Historical literature citation intervals are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for genders combined, and gender-specific or combined-gender intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to adapt the intervals for their facilities using the reference interval transference technique based on a method comparison study.
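The 95% central range used to define a reference interval can be computed nonparametrically from a set of reference results. The data below are simulated for illustration; the analyte, distribution, and sample size are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated results for one analyte from 240 healthy reference individuals
# (the distribution and n are illustrative assumptions, not study data)
results = rng.normal(loc=140.0, scale=3.0, size=240)

# Nonparametric reference interval: the central 95% of the reference
# distribution, i.e. the 2.5th and 97.5th percentiles
lower, upper = np.percentile(results, [2.5, 97.5])
print(f"reference interval: {lower:.1f}-{upper:.1f}")
```

The CLSI/IFCC guideline cited in the abstract recommends at least 120 reference individuals per partition (e.g. per gender) for this nonparametric approach.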
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. 
Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
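The definition above, a deviation too large to be explained by a linear combination of the process measurement uncertainty and the method bias, can be sketched as a simple flagging rule. The coverage factor k and all numeric values are assumptions for illustration; the abstract does not specify them.

```python
def is_irregular_error(result, reference_value, process_uncertainty,
                       method_bias, k=3.0):
    """Flag a result whose deviation from the reference measurement
    procedure exceeds what the process measurement uncertainty (scaled
    by an assumed coverage factor k) plus the method bias can explain."""
    allowed = k * process_uncertainty + abs(method_bias)
    return abs(result - reference_value) > allowed

# Hypothetical values: routine result 110, reference procedure result 100,
# process SD 2 units, known method bias 1 unit -> deviation 10 > 3*2 + 1
flagged = is_irregular_error(110.0, 100.0, 2.0, 1.0)
```

A result flagged this way points to a sample-specific effect (matrix interference or a single processing error) rather than to the batch-level variation that quality control samples monitor.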
Addressing the Analytic Challenges of Cross-Sectional Pediatric Pneumonia Etiology Data.
Hammitt, Laura L; Feikin, Daniel R; Scott, J Anthony G; Zeger, Scott L; Murdoch, David R; O'Brien, Katherine L; Deloria Knoll, Maria
2017-06-15
Despite tremendous advances in diagnostic laboratory technology, identifying the pathogen(s) causing pneumonia remains challenging because the infected lung tissue cannot usually be sampled for testing. Consequently, to obtain information about pneumonia etiology, clinicians and researchers test specimens distant to the site of infection. These tests may lack sensitivity (eg, blood culture, which is only positive in a small proportion of children with pneumonia) and/or specificity (eg, detection of pathogens in upper respiratory tract specimens, which may indicate asymptomatic carriage or a less severe syndrome, such as upper respiratory infection). While highly sensitive nucleic acid detection methods and testing of multiple specimens improve sensitivity, multiple pathogens are often detected and this adds complexity to the interpretation as the etiologic significance of results may be unclear (ie, the pneumonia may be caused by none, one, some, or all of the pathogens detected). Some of these challenges can be addressed by adjusting positivity rates to account for poor sensitivity or incorporating test results from controls without pneumonia to account for poor specificity. However, no classical analytic methods can account for measurement error (ie, sensitivity and specificity) for multiple specimen types and integrate the results of measurements for multiple pathogens to produce an accurate understanding of etiology. We describe the major analytic challenges in determining pneumonia etiology and review how the common analytical approaches (eg, descriptive, case-control, attributable fraction, latent class analysis) address some but not all challenges. We demonstrate how these limitations necessitate a new, integrated analytical approach to pneumonia etiology data. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
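One of the classical adjustments mentioned above, correcting an observed positivity rate for imperfect sensitivity and specificity, is commonly done with the Rogan-Gladen estimator. This sketch illustrates that standard correction only; it is not the integrated multi-pathogen method the authors argue is needed, and the numbers are hypothetical.

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Correct an observed positivity rate for imperfect test sensitivity
    and specificity (the classical Rogan-Gladen estimator)."""
    est = (apparent_prevalence + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)   # clamp to a valid probability

# Hypothetical numbers: 30% of specimens test positive with an assay
# assumed to be 70% sensitive and 95% specific
adjusted = rogan_gladen(0.30, 0.70, 0.95)
```

The estimator handles one test for one pathogen; combining many such imperfect measurements across specimen types is exactly where it breaks down and a latent-variable approach becomes necessary.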
Analytical control test plan and microbiological methods for the water recovery test
NASA Technical Reports Server (NTRS)
Traweek, M. S. (Editor); Tatara, J. D. (Editor)
1994-01-01
Qualitative and quantitative laboratory results are important to the decision-making process. In some cases, they may represent the only basis for deciding between two or more given options or processes. Therefore, it is essential that laboratory samples be handled, and analytical operations performed, with a deliberate level of conscientious effort. Reporting erroneous results can lead to faulty interpretations and misinformed decisions. This document provides analytical control specifications which will govern future test procedures related to all Water Recovery Test (WRT) Phase 3 activities to be conducted at the National Aeronautics and Space Administration/Marshall Space Flight Center (NASA/MSFC). It describes the process that will be used to verify analytical data generated throughout the test period; identifies the responsibilities of key personnel and participating laboratories and the chains of communication to be followed; and ensures that approved methodology and procedures are used during WRT activities. This document does not outline specifics, but provides a minimum guideline by which sampling protocols, analysis methodologies, test site operations, and laboratory operations should be developed.
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
[Blood sampling using "dried blood spot": a clinical biology revolution underway?].
Hirtz, Christophe; Lehmann, Sylvain
2015-01-01
Blood testing using the dried blood spot (DBS) has been used since the 1960s in clinical analysis, mainly in the framework of neonatal screening (the Guthrie test). Since then, numerous analytes such as nucleic acids, small molecules and lipids have been successfully measured on DBS. While this pre-analytical method represents an interesting alternative to classic blood sampling, its routine use is still limited. We review here the different clinical applications of blood sampling on DBS and assess its future place, supported by new methods of analysis such as liquid chromatography-mass spectrometry (LC-MS).
NASA Technical Reports Server (NTRS)
Corrigan, J. C.; Cronkhite, J. D.; Dompka, R. V.; Perry, K. S.; Rogers, J. P.; Sadler, S. G.
1989-01-01
Under a research program designated Design Analysis Methods for VIBrationS (DAMVIBS), existing analytical methods are used for calculating coupled rotor-fuselage vibrations of the AH-1G helicopter for correlation with flight test data from an AH-1G Operational Load Survey (OLS) test program. The analytical representation of the fuselage structure is based on a NASTRAN finite element model (FEM), which has been developed, extensively documented, and correlated with ground vibration tests. One procedure that was used for predicting coupled rotor-fuselage vibrations using the advanced Rotorcraft Flight Simulation Program C81 and NASTRAN is summarized. Detailed descriptions of the analytical formulation of rotor dynamics equations, fuselage dynamic equations, coupling between the rotor and fuselage, and solutions to the total system of equations in C81 are included. Analytical predictions of hub shears for main rotor harmonics 2p, 4p, and 6p generated by C81 are used in conjunction with 2p OLS measured control loads and a 2p lateral tail rotor gearbox force, representing downwash impingement on the vertical fin, to excite the NASTRAN model. NASTRAN is then used to correlate with measured OLS flight test vibrations. Blade load comparisons predicted by C81 showed good agreement. In general, the fuselage vibration correlations show good agreement between analysis and test in vibration response through 15 to 20 Hz.
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We analytically find the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods, using publicly available datasets, with some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.
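The abstract does not reproduce the analytic metric tensor itself. As a loosely related illustration of metric-based preprocessing (our own simplification, not Suvrel's derivation; all names are invented), one can weight features by the ratio of between-class to within-class scatter before classification:

```python
import numpy as np

def scatter_weights(X, y, eps=1e-9):
    """Diagonal feature weights from between-class over within-class scatter.

    A crude stand-in for a learned metric tensor: features that separate the
    classes get large weights; noisy, uninformative features get small ones.
    """
    mu = X.mean(axis=0)
    s_w = np.zeros(X.shape[1])   # within-class scatter, per feature
    s_b = np.zeros(X.shape[1])   # between-class scatter, per feature
    for c in np.unique(y):
        Xc = X[y == c]
        s_w += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        s_b += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
    return np.sqrt(s_b / (s_w + eps))  # preprocessing = scale features by these

# Two classes separated along feature 0 only; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal([4.0, 0.0], 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
w = scatter_weights(X, y)
print(w)  # the weight on feature 0 should dominate
```

Multiplying each feature by its weight is the linear transformation; a nearest-neighbour classifier on the transformed data then effectively uses the weighted metric.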
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2017-01-01
Doping issues have had immense visibility on the international stage over the past 12 months, and the complexity of doping controls has been reiterated on various occasions. Hence, analytical test methods that are continuously updated, expanded, and improved to provide specific, sensitive, and comprehensive test results in line with the World Anti-Doping Agency's (WADA) 2016 Prohibited List represent one of several critical cornerstones of doping controls. This enterprise necessitates expediting the (combined) exploitation of newly generated information on novel and/or superior target analytes for sports drug testing assays, drug elimination profiles, alternative test matrices, and recent advances in instrumental developments. This paper is a continuation of the series of annual banned-substance reviews appraising the literature published between October 2015 and September 2016 concerning human sports drug testing in the context of WADA's 2016 Prohibited List. Copyright © 2016 John Wiley & Sons, Ltd.
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Resolution Gas Chromatography (HRGC) with High Resolution Mass Spectrometry (HRMS) is the method of choice... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas...
40 CFR 766.16 - Developing the analytical test method.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Resolution Gas Chromatography (HRGC) with High Resolution Mass Spectrometry (HRMS) is the method of choice... meet the requirements of the chemical matrix. (d) Analysis. The method of choice is High Resolution Gas...
Agut, C; Caron, A; Giordano, C; Hoffman, D; Ségalini, A
2011-09-10
In 2001, a multidisciplinary team of analytical scientists and statisticians at Sanofi-aventis published a methodology which has governed, since that time, the transfer of release monographs from R&D sites to Manufacturing sites. This article provides an overview of the recent adaptations brought to this original methodology, taking advantage of our experience and the new regulatory framework, and, in particular, the risk management perspective introduced by ICH Q9. Although some alternative strategies have been introduced in our practices, the comparative testing strategy, based on equivalence testing as the statistical approach, remains the standard for assays bearing on very critical quality attributes. This is conducted with the aim of controlling the most important consumer's risks involved at two levels in analytical decisions in the frame of transfer studies: the risk, for the receiving laboratory, of making poor release decisions with the analytical method, and the risk, for the sending laboratory, of accrediting such a receiving laboratory despite insufficient performance with the method. Among the enhancements to the comparative studies, the manuscript presents the process settled within our company for a better integration of the transfer study into the method life-cycle, as well as proposals of generic acceptance criteria and designs for assay and related-substances methods. While maintaining the rigor and selectivity of the original approach, these improvements tend towards an increased efficiency in the transfer operations. Copyright © 2011 Elsevier B.V. All rights reserved.
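Comparative transfer studies of this kind commonly implement equivalence testing as two one-sided tests (TOST). The sketch below is a generic TOST on illustrative assay data, not Sanofi's validated procedure; the margin, data and function name are our own assumptions:

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin, alpha=0.05):
    """Two one-sided tests: conclude equivalence if the (1 - 2*alpha)
    confidence interval for mean(a) - mean(b) lies inside [-margin, +margin]."""
    na, nb = len(a), len(b)
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / na + np.var(b, ddof=1) / nb)
    df = na + nb - 2                     # simple pooled-df approximation
    t_crit = stats.t.ppf(1 - alpha, df)
    lo, hi = diff - t_crit * se, diff + t_crit * se
    return -margin < lo and hi < margin

rng = np.random.default_rng(1)
sending = rng.normal(100.0, 1.0, 12)     # % label claim, sending laboratory
receiving = rng.normal(100.3, 1.0, 12)   # receiving laboratory, small true shift
print(tost_equivalence(sending, receiving, margin=2.0))
```

The margin of 2.0 (% label claim) is purely illustrative; in practice it would come from the acceptance criteria the article discusses.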
Lin, Shan-Yang; Wang, Shun-Li
2012-04-01
The solid-state chemistry of drugs has seen growing importance in the pharmaceutical industry for the development of useful APIs (active pharmaceutical ingredients) and stable dosage forms. The stability of drugs in various solid dosage forms is an important issue because solid dosage forms are the most common pharmaceutical formulation in clinical use. In solid-state stability studies of drugs, an ideal accelerated method must not only be selected from among various complicated methods, but must also detect the formation of degradation products. In this review article, an analytical technique combining differential scanning calorimetry and Fourier-transform infrared (DSC-FTIR) microspectroscopy simulates the accelerated stability test and simultaneously detects the decomposition products in real time. The pharmaceutical dipeptides aspartame hemihydrate, lisinopril dihydrate, and enalapril maleate, either with or without Eudragit E, were used as test examples. This one-step simultaneous DSC-FTIR technique for real-time detection of diketopiperazine (DKP) directly evidenced the dehydration process and DKP formation, an impurity common in pharmaceutical dipeptides. Reports of DKP formation in various dipeptides determined by different analytical methods have been collected and compiled. Although many analytical methods have been applied, the combined DSC-FTIR technique is an easy and fast analytical method which not only can simulate accelerated drug stability testing but at the same time enables exploration of phase transformation as well as degradation due to thermally related reactions. This technique offers quick and proper interpretations. Copyright © 2012 Elsevier B.V. All rights reserved.
Literature review : an analysis of laboratory fatigue tests.
DOT National Transportation Integrated Search
1975-01-01
This report discusses the various types of fatigue tests, grouped by the type of specimen (beam, plate, Marshall, etc.) used. The discussion under each type of specimen covers the test, and the analytical methods used in evaluating the data. The test...
This research program was initiated with the objective of developing, codifying and testing a group of chemical analytical methods for measuring toxic compounds in the exhaust of distillate-fueled engines (i.e. diesel, gas turbine, Stirling, or Rankine cycle powerplants). It is a ...
Analytical concepts for health management systems of liquid rocket engines
NASA Technical Reports Server (NTRS)
Williams, Richard; Tulpule, Sharayu; Hawman, Michael
1990-01-01
Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.
NASA Technical Reports Server (NTRS)
Mikes, F.
1984-01-01
Silane primers for use as thermal protection on external tanks were subjected to various analytic techniques to determine the most effective testing method for silane lot evaluation. The analytic methods included high performance liquid chromatography, gas chromatography, thermogravimetry (TGA), and Fourier transform infrared spectroscopy (FTIR). It is suggested that FTIR be used as the method for silane lot evaluation. Chromatograms, TGA profiles, bar graphs showing IR absorbances, and FTIR spectra are presented.
Flight and Analytical Methods for Determining the Coupled Vibration Response of Tandem Helicopters
NASA Technical Reports Server (NTRS)
Yeates, John E., Jr.; Brooks, George W.; Houbolt, John C.
1957-01-01
Chapter one presents a discussion of flight-test and analysis methods for some selected helicopter vibration studies. The use of a mechanical shaker in flight to determine the structural response is reported. A method for the analytical determination of the natural coupled frequencies and mode shapes of vibrations in the vertical plane of tandem helicopters is presented in Chapter two. The coupled mode shapes and frequencies are then used to calculate the response of the helicopter to applied oscillating forces.
Survey of NASA research on crash dynamics
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Carden, H. D.; Hayduk, R. J.
1984-01-01
Ten years of structural crash dynamics research activities conducted on general aviation aircraft by the National Aeronautics and Space Administration (NASA) are described. Thirty-two full-scale crash tests were performed at Langley Research Center, and pertinent data on airframe and seat behavior were obtained. Concurrent with the experimental program, analytical methods were developed to help predict structural behavior during impact. The effects of flight parameters at impact on cabin deceleration pulses at the seat/occupant interface, experimental and analytical correlation of data on load-limiting subfloor and seat configurations, airplane section test results for computer modeling validation, and data from emergency-locator-transmitter (ELT) investigations to determine probable cause of false alarms and nonactivations are assessed. Computer programs which provide designers with analytical methods for predicting accelerations, velocities, and displacements of collapsing structures are also discussed.
Sampling and sample processing in pesticide residue analysis.
Lehotay, Steven J; Cook, Jo Marie
2015-05-13
Proper sampling and sample processing in pesticide residue analysis of food and soil have always been essential to obtain accurate results, but the subject is becoming a greater concern as approximately 100 mg test portions are being analyzed with automated high-throughput analytical methods by agrochemical industry and contract laboratories. As global food trade and the importance of monitoring increase, the food industry and regulatory laboratories are also considering miniaturized high-throughput methods. In conjunction with a summary of the symposium "Residues in Food and Feed - Going from Macro to Micro: The Future of Sample Processing in Residue Analytical Methods" held at the 13th IUPAC International Congress of Pesticide Chemistry, this is an opportune time to review sampling theory and sample processing for pesticide residue analysis. If collected samples and test portions do not adequately represent the actual lot from which they came and provide meaningful results, then all costs, time, and efforts involved in implementing programs using sophisticated analytical instruments and techniques are wasted and can actually yield misleading results. This paper is designed to briefly review the often-neglected but crucial topic of sample collection and processing and put the issue into perspective for the future of pesticide residue analysis. It also emphasizes that analysts should demonstrate the validity of their sample processing approaches for the analytes/matrices of interest and encourages further studies on sampling and sample mass reduction to produce a test portion.
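The point that undersized test portions misrepresent a heterogeneous lot can be illustrated with a toy simulation (entirely our own construction, not from the paper): the relative standard deviation of the measured mean shrinks as the test portion grows.

```python
import numpy as np

# A toy "lot": 1% of particles carry a high residue level, the rest none,
# so the lot mean is about 1.0 (arbitrary units). Numbers are invented.
rng = np.random.default_rng(6)
lot = np.where(rng.random(100_000) < 0.01, 100.0, 0.0)

def portion_rsd(portion_size, reps=500):
    """Relative standard deviation of the mean across repeated test portions."""
    means = rng.choice(lot, (reps, portion_size)).mean(axis=1)
    return means.std() / means.mean()

rsds = {size: portion_rsd(size) for size in (100, 1000, 10000)}
for size, rsd in rsds.items():
    print(size, round(rsd, 3))  # RSD shrinks roughly as 1/sqrt(portion size)
```

This is the statistical core of the sample mass reduction issue: with ~100 mg test portions, the sampling error can dominate the analytical error unless the material is comminuted finely enough.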
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
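The core SQNA idea, augmenting a known analytic model with a data-driven term for the unknown residual dynamics, can be sketched crudely as follows. Here a least-squares polynomial stands in for the neural network, and everything is our own simplification of the patent's approach:

```python
import numpy as np

def analytic(t):
    """Known physics part of the process model (invented for illustration)."""
    return 3.0 * t + 1.0

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0, 200)
# "True" process = analytic part + an unknown nonlinearity + sensor noise.
y = analytic(x) + 0.5 * np.sin(3.0 * x) + rng.normal(0.0, 0.02, x.size)

# Fit only the residual y - analytic(x); a degree-5 polynomial stands in
# for the neural-network augmentation.
coeffs = np.polyfit(x, y - analytic(x), 5)

def augmented(t):
    return analytic(t) + np.polyval(coeffs, t)

rmse_analytic = np.sqrt(np.mean((y - analytic(x)) ** 2))
rmse_augmented = np.sqrt(np.mean((y - augmented(x)) ** 2))
print(rmse_analytic, rmse_augmented)  # the augmented model fits much better
```

The residual of the augmented model is then what a monitoring scheme (such as the sequential probability ratio test mentioned above) would watch for statistically significant change.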
Exact test-based approach for equivalence test with parameter margin.
Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua
2017-01-01
The equivalence test has a wide range of applications in pharmaceutical statistics, wherever we need to test for similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, where σ_R is the reference product variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic in the equivalence test for the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact test-based method and compare it with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
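The inflation that arises when the estimated margin ±f × S_R is plugged into a classic t-test can be seen in a small Monte Carlo (an illustrative setup of our own, not the paper's exact design). The true mean difference is placed exactly on the true margin, so equivalence is false and every "pass" is a Type I error, nominally capped at 5%:

```python
import numpy as np
from scipy import stats

def passes_equivalence(test, ref, f=1.5, alpha=0.05):
    """Naive TOST that plugs the *estimated* margin f * S_R into the test."""
    n_t, n_r = len(test), len(ref)
    margin = f * np.std(ref, ddof=1)      # estimated, not true, margin
    diff = np.mean(test) - np.mean(ref)
    se = np.sqrt(np.var(test, ddof=1) / n_t + np.var(ref, ddof=1) / n_r)
    t_crit = stats.t.ppf(1 - alpha, n_t + n_r - 2)
    return (diff - t_crit * se > -margin) and (diff + t_crit * se < margin)

rng = np.random.default_rng(3)
sigma_r, f, n, reps = 1.0, 1.5, 10, 4000
# True difference sits exactly on the TRUE margin f * sigma_r, so any
# "pass" is a Type I error; the nominal target is 5%.
hits = sum(
    passes_equivalence(rng.normal(f * sigma_r, sigma_r, n),
                       rng.normal(0.0, sigma_r, n), f=f)
    for _ in range(reps)
)
print(hits / reps)
```

With small samples the observed rate exceeds the nominal level, which is the motivation for the exact, Wald, constrained Wald and GPQ alternatives the paper compares.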
Mechanical and analytical screening of braided composites for transport fuselage applications
NASA Technical Reports Server (NTRS)
Fedro, Mark J.; Gunther, Christian; Ko, Frank K.
1991-01-01
Progress in the mechanics of materials, in support of the goal of understanding the application of braided composites in a transport aircraft fuselage, is summarized. Composites consisting of both 2-D and 3-D braid patterns are investigated. Both consolidation of commingled graphite/PEEK and resin transfer molding of graphite-epoxy braided composite processes are studied. Mechanical tests were used to examine unnotched tension, open-hole tension, compression, compression after impact, in-plane shear, out-of-plane tension, bearing, and crippling. Analytical methods are also developed and applied to predict the stiffness and strength of test specimens. A preliminary study using the test data and analytical results is performed to assess the applicability of braided composites to a commercial aircraft fuselage.
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response computed with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity in the values of the model parameters. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
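Particle swarm optimization itself is easy to sketch. The toy below minimizes a quadratic misfit over two parameters, standing in for the horizontal/vertical resistivity inversion; this is a generic PSO with conventional parameter values, not the authors' implementation:

```python
import numpy as np

def pso(cost, bounds, n_particles=40, iters=200, w=0.7, c1=1.5, c2=1.5, seed=4):
    """Minimal particle swarm optimizer (generic sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Toy "inversion": recover two parameters from a quadratic misfit, standing
# in for fitting horizontal/vertical resistivities (values invented).
true_params = np.array([30.0, 120.0])
misfit = lambda p: np.sum((p - true_params) ** 2)
best, best_val = pso(misfit, (np.array([1.0, 1.0]), np.array([500.0, 500.0])))
print(best, best_val)
```

A real resistivity inversion would replace `misfit` with the data residual between observed and forward-modelled apparent resistivities.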
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of the equivalence acceptance criterion and quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σ_R, where σ_R is the reference product variability estimated by the sample standard deviation S_R from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄_R ± K × σ_R, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of test product lots must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation S_R underestimates the true reference product variability σ_R. As a result, substituting S_R for σ_R in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed.
A biosimilar is a generic version of the original biological drug product. A key component of a biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on application of statistical methods to establish a similarity margin and appropriate test for equivalence between the two products. This paper discusses statistical issues with demonstration of analytical similarity and provides alternate approaches to potentially mitigate these problems. © PDA, Inc. 2016.
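The paper's central claim, that correlated reference lots make the sample standard deviation underestimate the true variability, is easy to check numerically. The sketch below uses an AR(1) correlation structure of our own choosing, not necessarily the paper's:

```python
import numpy as np

def ar1_lots(n, rho, sigma, rng):
    """n lot values from an AR(1) chain with marginal std sigma and
    lot-to-lot correlation rho (our assumed correlation structure)."""
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    for i in range(1, n):
        x[i] = rho * x[i - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - rho ** 2))
    return x

rng = np.random.default_rng(5)
sigma, rho, n, reps = 1.0, 0.8, 10, 5000
mean_s = np.mean([ar1_lots(n, rho, sigma, rng).std(ddof=1) for _ in range(reps)])
print(mean_s)  # well below the true marginal sigma = 1.0
```

Since every lot has true marginal standard deviation 1.0, the shortfall in the average S_R is exactly the downward bias the paper warns about when S_R is substituted into the equivalence margin or quality range.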
Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce
2018-05-30
Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each assay's dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate the concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with the original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT.
The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)
2016-01-01
A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.
NASA Technical Reports Server (NTRS)
Jordon, D. E.; Patterson, W.; Sandlin, D. R.
1985-01-01
The XV-15 Tilt Rotor Research Aircraft download phenomenon was analyzed. This phenomenon is a direct result of the two rotor wakes impinging on the wing upper surface when the aircraft is in the hover configuration. For this study the analysis proceeded along two lines. The first was a method whereby results from actual hover tests of the XV-15 aircraft were combined with drag coefficient results from wind tunnel tests of a wing representative of the aircraft wing. Second, an analytical method was used that modeled the airflow caused by the two rotors. Formulas were developed so that a computer program could be used to calculate the axial velocities; these velocities were then used in conjunction with the aforementioned wind tunnel drag coefficient results to produce download values. An attempt was made to validate the analytical results by modeling a model rotor system for which direct download values were determined.
NASA Astrophysics Data System (ADS)
Chang, Ya-Chi; Yeh, Hund-Der
2010-06-01
Constant-head pumping tests are usually employed to determine aquifer parameters, and they can be performed in fully or partially penetrating wells. Generally, the Dirichlet condition is prescribed along the well screen and the Neumann-type no-flow condition is specified over the unscreened part of the test well. The mathematical model describing the aquifer response to a constant-head test performed in a fully penetrating well can be easily solved by the conventional integral transform technique under the uniform Dirichlet-type condition along the rim of the wellbore. However, the boundary condition for a test well with partial penetration should be considered as a mixed-type condition. This mixed boundary value problem in a confined aquifer system of infinite radial extent and finite vertical extent is solved by the Laplace and finite Fourier transforms in conjunction with the triple series equations method. This approach provides analytical results for the drawdown in a partially penetrating well for an arbitrary location of the well screen in a finite-thickness aquifer. From a computational point of view, the semi-analytical solutions are particularly useful for practical applications.
Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette
2014-08-15
The purpose of the study was to perform a comparative analysis of the technical performance, respective costs and environmental effects of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performance of the three techniques. Statistical inter-method correlation analysis was performed using non-parametric rank correlation tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations between each spectroscopic technique and HPLC were found. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior for technical, economic and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.
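Non-parametric rank correlation between two methods, as used in the inter-method comparison here, can be computed with Spearman's rho; the paired readings below are invented for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

# Paired readings of the same preparations by two methods (invented numbers,
# arbitrary concentration units).
hplc = np.array([0.52, 1.01, 2.03, 4.10, 7.95, 16.2])
raman = np.array([0.49, 1.05, 1.97, 4.25, 8.10, 15.8])
rho, p = spearmanr(hplc, raman)
print(rho, p)  # rho is 1.0 here: both methods rank every sample identically
```

Because Spearman's rho depends only on ranks, it tolerates the nonlinearity and non-normality that can make Pearson correlation misleading in inter-method comparisons.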
Andersson, Maria; Stephanson, Nikolai; Ohman, Inger; Terzuoli, Tommy; Lindh, Jonatan D; Beck, Olof
2014-04-01
Opiates comprise a class of abused drugs that is of primary interest in clinical and forensic urine drug testing. Distinguishing heroin intake, codeine intake, or multi-drug ingestion is complicated, since both heroin and codeine can lead to urinary excretion of free and conjugated morphine. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) offers advantages over gas chromatography-mass spectrometry by simplifying sample preparation, but it increases the number of analytes. A method based on direct injection of five-fold diluted urine for confirmation of morphine, morphine-3-glucuronide, morphine-6-glucuronide, codeine, codeine-6-glucuronide and 6-acetylmorphine was validated using LC-MS/MS in positive electrospray mode, monitoring two transitions using selected reaction monitoring. The method was applied to the analysis of 3155 unknown urine samples which were positive for opiates in immunochemical screening. A linear response was observed for all compounds, with calibration curves covering more than three orders of magnitude. The cut-off was set to 2 ng/ml for 6-acetylmorphine and 150 ng/ml for the other analytes. 6-Acetylmorphine was found to be effective (sensitivity 82%) in identifying samples reflecting heroin intake. Morphine-3-glucuronide and codeine-6-glucuronide were the predominant components of total morphine and codeine, at 84% and 93%, respectively. The authors have validated a robust LC-MS/MS method for rapid qualitative and quantitative analysis of opiates in urine. 6-Acetylmorphine has been demonstrated to be a sensitive and important parameter for heroin intake. A possible interpretation strategy to conclude the source of the detected analytes was proposed. The method might be further developed by reducing the number of analytes to morphine-3-glucuronide, codeine-6-glucuronide and 6-acetylmorphine without compromising test performance. Copyright © 2013 John Wiley & Sons, Ltd.
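Using the cut-offs reported in the abstract (2 ng/ml for 6-acetylmorphine, 150 ng/ml for the other analytes), a deliberately naive flagging rule can be sketched; this is our own toy, not the authors' full interpretation strategy:

```python
# Cut-offs from the abstract: 2 ng/ml for 6-acetylmorphine, 150 ng/ml for
# the other opiate analytes. The rule below is a naive sketch only.
CUTOFFS_NG_ML = {
    "6-acetylmorphine": 2.0,
    "morphine": 150.0,
    "morphine-3-glucuronide": 150.0,
    "morphine-6-glucuronide": 150.0,
    "codeine": 150.0,
    "codeine-6-glucuronide": 150.0,
}

def flag_opiates(sample):
    """Return the analytes at or above cut-off in a {analyte: ng/ml} dict."""
    return sorted(a for a, c in sample.items()
                  if c >= CUTOFFS_NG_ML.get(a, float("inf")))

def suggests_heroin(sample):
    """6-acetylmorphine above cut-off points specifically to heroin intake."""
    return "6-acetylmorphine" in flag_opiates(sample)

print(suggests_heroin({"6-acetylmorphine": 3.1,
                       "morphine-3-glucuronide": 900.0}))
```

A real interpretation strategy would also weigh analyte ratios and detection windows, which this sketch deliberately ignores.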
Thermal/structural design verification strategies for large space structures
NASA Technical Reports Server (NTRS)
Benton, David
1988-01-01
Requirements for space structures of increasing size, complexity, and precision have engendered a search for thermal design verification methods that do not impose unreasonable costs, that fit within the capabilities of existing facilities, and that still adequately reduce technical risk. This requires a combination of analytical and testing methods, pursued along two lines. The first is to limit thermal testing to sub-elements of the total system, tested only in a compact configuration (i.e., not fully deployed). The second is to use a simplified environment to correlate analytical models with test results. These models can then be used to predict flight performance. In practice, a combination of these approaches is needed to verify the thermal/structural design of future very large space systems.
Gosselin, Robert C; Adcock, Dorothy M; Bates, Shannon M; Douxfils, Jonathan; Favaloro, Emmanuel J; Gouin-Thibault, Isabelle; Guillermo, Cecilia; Kawai, Yohko; Lindhoff-Last, Edelgard; Kitchen, Steve
2018-03-01
This guidance document was prepared on behalf of the International Council for Standardization in Haematology (ICSH) for providing haemostasis-related guidance documents for clinical laboratories. This inaugural coagulation ICSH document was developed by an ad hoc committee, comprised of international clinical and laboratory direct acting oral anticoagulant (DOAC) experts. The committee developed consensus recommendations for laboratory measurement of DOACs (dabigatran, rivaroxaban, apixaban and edoxaban), which would be germane for laboratories assessing DOAC anticoagulation. This guidance document addresses all phases of laboratory DOAC measurements, including pre-analytical (e.g. preferred time sample collection, preferred sample type, sample stability), analytical (gold standard method, screening and quantifying methods) and post analytical (e.g. reporting units, quality assurance). The committee addressed the use and limitations of screening tests such as prothrombin time, activated partial thromboplastin time as well as viscoelastic measurements of clotting blood and point of care methods. Additionally, the committee provided recommendations for the proper validation or verification of performance of laboratory assays prior to implementation for clinical use, and external quality assurance to provide continuous assessment of testing and reporting method. Schattauer GmbH Stuttgart.
Selection and authentication of botanical materials for the development of analytical methods.
Applequist, Wendy L; Miller, James S
2013-05-01
Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.
A numerical test of the topographic bias
NASA Astrophysics Data System (ADS)
Sjöberg, L. E.; Joud, M. S. S.
2018-02-01
In 1962 A. Bjerhammar introduced the method of analytical continuation in physical geodesy, implying that surface gravity anomalies are downward continued into the topographic masses down to an internal sphere (the Bjerhammar sphere). The method also includes analytical upward continuation of the potential to the surface of the Earth to obtain the quasigeoid. One can show that the common remove-compute-restore technique for geoid determination also includes an analytical continuation as long as the complete density distribution of the topography is not known. The analytical continuation implies that the downward-continued gravity anomaly and/or potential are in error by the so-called topographic bias, for which L. E. Sjöberg postulated a simple formula in 2007. Here we numerically test the postulated formula by comparing it with the bias obtained by analytical downward continuation of the external potential of a homogeneous ellipsoid to an inner sphere. The result shows that the postulated formula holds: at the equator of the ellipsoid, where the external potential is downward continued 21 km, the computed and postulated topographic biases agree to within less than a millimetre (when the potential is scaled to the unit of metre).
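The abstract does not spell out the postulated formula. The form commonly quoted in the geodesy literature (reproduced here as an assumption, not taken from the record above) gives the bias of the downward-continued potential for constant topographic density as:

```latex
% Commonly quoted form of Sjoberg's (2007) topographic bias of the
% downward-continued potential (reproduced as an assumption):
V_{\mathrm{bias}} = 2\pi G \rho \left( H^{2} + \frac{2H^{3}}{3R} \right)
```

where G is the gravitational constant, rho the (assumed constant) topographic density, H the topographic height, and R the radius of the internal sphere.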
Live load test and failure analysis for the steel deck truss bridge over the New River in Virginia.
DOT National Transportation Integrated Search
2009-01-01
This report presents the methods used to model a steel deck truss bridge over the New River in Hillsville, Virginia. These methods were evaluated by comparing analytical results with data recorded from 14 members during live load testing. The researc...
Becalski, Adam; Lau, Benjamin P Y; Lewis, David; Seaman, Stephen W; Sun, Wing F
2005-01-01
Recent concerns surrounding the presence of acrylamide in many types of thermally processed food have brought about the need for the development of analytical methods suitable for the determination of acrylamide in diverse matrices, with the goals of improving overall confidence in analytical results and better understanding method capabilities. Consequently, results are presented of acrylamide testing in commercially available food products--potato fries, potato chips, crispbread, instant coffee, coffee beans, cocoa, chocolate and peanut butter--obtained by using the same sample extract. The results obtained by using LC-MS/MS, GC/MS (EI) and GC/HRMS (EI)--with or without derivatization--and the use of different analytical columns are discussed and compared with respect to matrix-borne interferences, detection limits and method complexities.
Bao, Yijun; Gaylord, Thomas K
2016-11-01
Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.
Zhang, Lei; Yue, Hong-Shui; Ju, Ai-Chun; Ye, Zheng-Liang
2016-10-01
Currently, near infrared spectroscopy (NIRS) has been considered an efficient tool for achieving process analytical technology (PAT) in the manufacture of traditional Chinese medicine (TCM) products. In this article, the NIRS-based process analytical system for the production of salvianolic acid for injection was introduced. The design of the process analytical system was described in detail, including the selection of monitored processes and testing modes, and potential risks that should be avoided. Moreover, the development of the related technologies was also presented, including the establishment of monitoring methods for the elution steps of the polyamide resin and macroporous resin chromatography processes, as well as the rapid analysis method for finished products. Based on the author's experience of research and work, several issues in the application of NIRS to process monitoring and control in TCM production were then raised, and some potential solutions were also discussed. The issues include building a technical team for the process analytical system, the design of the process analytical system in the manufacture of TCM products, standardization of NIRS-based analytical methods, and improving the management of the process analytical system. Finally, the prospects for the application of NIRS in the TCM industry were put forward. Copyright © by the Chinese Pharmaceutical Association.
Vandenabeele-Trambouze, O; Claeys-Bruno, M; Dobrijevic, M; Rodier, C; Borruat, G; Commeyras, A; Garrelly, L
2005-02-01
The need for criteria to compare different analytical methods for measuring extraterrestrial organic matter at ultra-trace levels in relatively small and unique samples (e.g., fragments of meteorites, micrometeorites, planetary samples) is discussed. We emphasize the need to standardize the description of future analyses, and take the first step toward a proposed international laboratory network for performance testing.
Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience
Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK
2015-01-01
Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle, including the pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our settings to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). Results: A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90). Conclusion: Errors are embedded within our total testing process, especially the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified. PMID:25745569
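The headline figure above is simple arithmetic on the quoted counts; a minimal check using the overall totals stated in the abstract (589,510 tests, 27,520 errors):

```python
# Overall laboratory error rate from the totals quoted in the abstract:
# 27,520 errors observed across 589,510 tests over three years.
total_tests = 589_510
total_errors = 27_520

overall_rate = 100.0 * total_errors / total_tests
print(f"overall error rate: {overall_rate:.1f}%")  # -> 4.7%
```

The per-phase denominators printed in the abstract do not reproduce the stated percentages exactly and are left as quoted above.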
NASA Astrophysics Data System (ADS)
Ivanova, V.; Surleva, A.; Koleva, B.
2018-06-01
An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters was described. An automated Metrohm 850 IC Professional system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. Validation of the method was performed for the simultaneous determination of all studied analytes, and the results showed that the validated method fits the requirements of current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability, intermediate precision, and recovery. The trueness of the method was estimated by analysis of a certified reference material for soft drinking water. A recovery test was performed on spiked drinking water samples. Measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
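The abstract does not state how the detection and quantification limits were computed; a common convention (assumed here, with invented calibration numbers, not the paper's data) derives LOD and LOQ from the noise of low-level measurements and the calibration slope:

```python
# Hypothetical illustration of the common 3.3*s/slope and 10*s/slope
# convention for LOD/LOQ. The numbers below are invented, not taken
# from the validation study above.
s_blank = 0.002   # std. dev. of blank/low-level signal (arbitrary units)
slope = 0.050     # calibration slope (signal units per mg/L)

lod = 3.3 * s_blank / slope    # limit of detection, mg/L
loq = 10.0 * s_blank / slope   # limit of quantification, mg/L
print(f"LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```

By construction the LOQ is roughly three times the LOD under this convention.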
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restrictions and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions, but implementing an implicit solver for nonlinear equations, including the Navier-Stokes equations, is not straightforward. Newton-Krylov subspace methods (NKMs) are among the most advanced iterative methods for solving nonlinear equations such as implicit discretizations of the Navier-Stokes equations. The efficiency of NKMs depends heavily on the Jacobian formation method: automatic differentiation, for example, is very expensive, and matrix-free methods slow down as the mesh is refined. An analytical Jacobian is inexpensive, but its derivation for the Navier-Stokes equations on a staggered grid is challenging. An NKM with a novel analytical Jacobian was developed and validated against the Taylor-Green vortex and pulsatile flow in a 90-degree bend. The developed method successfully handled complex geometries such as an intracranial aneurysm with multiple overset grids and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation, depending on the grid size and the flow problem. The developed methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested.
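The abstract's point is that hand-derived (analytical) Jacobians avoid the cost of automatic differentiation and the slowdown of matrix-free approximations. A toy sketch of Newton iteration with an analytical Jacobian on a small nonlinear system (this is not the authors' Navier-Stokes solver; the system and starting point are invented):

```python
# Toy Newton iteration with a hand-derived (analytical) Jacobian,
# illustrating the idea behind Newton-Krylov solvers on a 2x2 system.
def F(x, y):
    # Nonlinear residuals: a circle and a hyperbola.
    return (x * x + y * y - 4.0, x * y - 1.0)

def J(x, y):
    # Analytical Jacobian of F, derived by hand.
    return ((2.0 * x, 2.0 * y),
            (y, x))

x, y = 2.0, 0.5  # starting guess
for _ in range(20):
    f1, f2 = F(x, y)
    (a, b), (c, d) = J(x, y)
    det = a * d - b * c
    # Solve J * delta = -F by Cramer's rule (fine for a 2x2 system).
    dx = (-f1 * d - b * (-f2)) / det
    dy = (a * (-f2) - c * (-f1)) / det
    x, y = x + dx, y + dy

print(x, y)  # converges to roughly (1.93185, 0.51764)
```

Production solvers replace the direct 2x2 solve with a Krylov iteration (e.g. GMRES) on the large sparse Jacobian, but the role of the analytical Jacobian is the same.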
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
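PRESS, the criterion minimised above, is the sum of squared leave-one-out prediction errors over the calibration set. A minimal sketch for a simple univariate calibration line (the data are invented; the paper applies the idea to multivariate spectra over a moving spectral window):

```python
# Minimal illustration of PRESS (predicted residual error sum of
# squares): each calibration point is left out in turn, the line is
# refit, and the held-out point is predicted. Data below are invented.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope  # intercept, slope

x = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. concentrations
y = [2.1, 3.9, 6.2, 7.8, 10.1]  # e.g. instrument responses

press = 0.0
for i in range(len(x)):
    xs, ys = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
    a, b = fit_line(xs, ys)
    press += (y[i] - (a + b * x[i])) ** 2  # held-out prediction error

print(f"PRESS = {press:.4f}")
```

Minimising PRESS as a function of the spectral window, as done in the paper, simply repeats this computation for each candidate window and keeps the best one.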
Warth, Arne; Muley, Thomas; Meister, Michael; Weichert, Wilko
2015-01-01
Preanalytic sampling techniques and the preparation of tissue specimens strongly influence analytical results in lung tissue diagnostics, both on the morphological and on the molecular level. However, in contrast to analytics, where tremendous achievements in the last decade have led to a whole new portfolio of test methods, developments in preanalytics have been minimal. This is specifically unfortunate in lung cancer, where usually only small amounts of tissue are at hand and optimization of all processing steps is mandatory in order to increase the diagnostic yield. In the following, we provide a comprehensive overview of some aspects of preanalytics in lung cancer, from the method of sampling through tissue processing to its impact on analytical test results. We specifically discuss the role of preanalytics in novel technologies such as next-generation sequencing and in state-of-the-art cytology preparations. In addition, we point out specific problems in preanalytics that hamper further developments in the field of lung tissue diagnostics.
40 CFR 1065.720 - Liquefied petroleum gas.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration....720—Test Fuel Specifications for Liquefied Petroleum Gas Property Value Reference procedure 1 Propane... methods yield different results, use the results from ASTM D1267. 3 The test fuel must not yield a...
A NEW METHOD OF SWEAT TESTING: THE CF QUANTUM® SWEAT TEST
Rock, Michael J.; Makholm, Linda; Eickhoff, Jens
2015-01-01
Background Conventional methods of sweat testing are time-consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and tests the diagnostic accuracy and analytic validity of the CFQT. Methods Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. Results The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97–0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94–100%) and 96% (95% confidence interval: 89–99%), respectively. In one center in this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher for the CFQT method (16.5%) than for conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. PMID:24862724
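Sensitivity and specificity follow from a standard 2x2 comparison against the reference diagnosis; a minimal sketch with invented counts (chosen only to land near the percentages quoted above, not the study's actual data):

```python
# Sensitivity and specificity from a 2x2 confusion table, as computed
# when a diagnostic test is compared against a reference method.
# Counts below are invented for illustration.
tp, fn = 58, 0    # CF patients: test positive / test negative
tn, fp = 77, 3    # non-CF patients: test negative / test positive

sensitivity = tp / (tp + fn)  # fraction of true cases detected
specificity = tn / (tn + fp)  # fraction of non-cases correctly cleared
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```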
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz), with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. 
With the slow development of the tamponade, the SBM model and the simulated biosignals are seen to diverge in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamic, monitoring.
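The abstract describes SBM only at a high level. Its core idea, a kernel-weighted reconstruction of the current observation from stored "normal" exemplars with the residual used as a deterioration signal, can be sketched as follows. The exemplars, observations, and kernel width are all invented for illustration:

```python
# Toy sketch of similarity-based modeling (SBM): the current observation
# is reconstructed as a similarity-weighted blend of stored "normal"
# exemplars; a large residual flags a departure from normal behaviour.
import math

memory = [  # exemplar vectors of (heart_rate, mean_arterial_pressure)
    [70.0, 90.0], [75.0, 88.0], [80.0, 85.0], [72.0, 92.0],
]

def estimate(obs):
    # Gaussian similarity kernel between obs and each exemplar.
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(obs, m)) / 50.0)
               for m in memory]
    total = sum(weights)
    # Weighted blend of the exemplars, component by component.
    return [sum(w * m[k] for w, m in zip(weights, memory)) / total
            for k in range(len(obs))]

def residual(obs):
    est = estimate(obs)
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)))

print(residual([74.0, 89.0]))   # near normal -> small residual
print(residual([105.0, 60.0]))  # decompensating -> large residual
```

An alarm threshold on the residual then gives the first-tier detector described above, firing before any single variable leaves its conventional normal range.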
HEATED PURGE AND TRAP METHOD DEVELOPMENT AND TESTING
The goal of the research was to develop a heated purge and trap method that could be used in conjunction with SW-846 method 8240 for the analysis of volatile, water soluble Appendix VIII analytes. The developed method was validated according to a partial single laboratory method ...
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES... to a wide variety of sample types ranging from reagent (blank) water containing analyte to wastewater... times the standard deviation of replicate instrumental measurements of the analyte in reagent water. (c...
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context that includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
Concept Development for Future Domains: A New Method of Knowledge Elicitation
2005-06-01
Procedure: U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) examined methods to generate, refine, test, and validate new...generate, elaborate, refine, describe, test, and validate new Future Force concepts relating to doctrine, tactics, techniques, procedures, unit and team...System (Harvey, 1993), and the Job Element Method (Primoff & Eyde, 1988). Figure 1 provides a more comprehensive list of task analytic methods. Please see
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention
Fisher, Brian; Smith, Jennifer; Pike, Ian
2017-01-01
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology—group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders’ observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of ‘common ground’ among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders’ verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve ‘common ground’ among diverse stakeholders about health data and their implications. PMID:28895928
Collaborative Visual Analytics: A Health Analytics Approach to Injury Prevention.
Al-Hajj, Samar; Fisher, Brian; Smith, Jennifer; Pike, Ian
2017-09-12
Background: Accurate understanding of complex health data is critical in order to deal with wicked health problems and make timely decisions. Wicked problems refer to ill-structured and dynamic problems that combine multidimensional elements, which often preclude the conventional problem solving approach. This pilot study introduces visual analytics (VA) methods to multi-stakeholder decision-making sessions about child injury prevention; Methods: Inspired by the Delphi method, we introduced a novel methodology, group analytics (GA). GA was pilot-tested to evaluate the impact of collaborative visual analytics on facilitating problem solving and supporting decision-making. We conducted two GA sessions. Collected data included stakeholders' observations, audio and video recordings, questionnaires, and follow up interviews. The GA sessions were analyzed using the Joint Activity Theory protocol analysis methods; Results: The GA methodology triggered the emergence of 'common ground' among stakeholders. This common ground evolved throughout the sessions to enhance stakeholders' verbal and non-verbal communication, as well as coordination of joint activities and ultimately collaboration on problem solving and decision-making; Conclusions: Understanding complex health data is necessary for informed decisions. Equally important, in this case, is the use of the group analytics methodology to achieve 'common ground' among diverse stakeholders about health data and their implications.
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping
2017-11-01
A novel solution method is presented which leads to an analytical model for advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors that are rarely noted in the literature. The results show that the concentration waves resulting from a periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficient. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
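For orientation, one classical special case that generalized solutions of this kind reduce to is the Ogata-Banks result for a constant-concentration inlet, zero initial concentration, and no zero-order production (stated here from the standard literature, not taken from the paper above):

```latex
% Ogata-Banks solution: constant inlet concentration C_0, zero initial
% condition, pore velocity v, dispersion coefficient D.
C(x,t) = \frac{C_0}{2}\left[
  \operatorname{erfc}\!\left(\frac{x - vt}{2\sqrt{Dt}}\right)
  + \exp\!\left(\frac{vx}{D}\right)
    \operatorname{erfc}\!\left(\frac{x + vt}{2\sqrt{Dt}}\right)\right]
```

The sensitivity to D noted in the abstract enters through the 2*sqrt(Dt) spreading scale in the complementary error function arguments.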
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test-set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test-set samples were determined. The model statistics of the two models were good, and a 100% rate of correct predictions of the test set was achieved.
An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer.
Zhang, Qiang; Shi, Shengjun; Chen, Weishan
2016-03-01
An electromechanical coupling model of a bending vibration type piezoelectric ultrasonic transducer is proposed. The transducer is a Langevin-type transducer composed of an exponential horn, four groups of PZT ceramics and a back beam. The exponential horn can focus the vibration energy and can efficiently enlarge vibration amplitude and velocity. A bending vibration model of the transducer is first constructed, and subsequently an electromechanical coupling model is constructed based on the vibration model. In order to obtain the most suitable excitation position of the PZT ceramics, the effective electromechanical coupling coefficient is optimized by means of the quadratic interpolation method. When the effective electromechanical coupling coefficient reaches its peak value of 42.59%, the optimal excitation position (L1=22.52 mm) is found. The FEM method and the experimental method are used to validate the developed analytical model. Two groups of FEM models are constructed (in Group A the center bolt is not considered, while in Group B it is) and separately compared with the analytical model and the experimental model. Four prototype transducers around the peak value are fabricated and tested to validate the analytical model. A scanning laser Doppler vibrometer is employed to test the bending vibration shape and resonance frequency. Finally, the electromechanical coupling coefficient is tested indirectly through an impedance analyzer. Comparisons of the analytical results, FEM results and experimental results are presented, and the results show good agreement. Copyright © 2015 Elsevier B.V. All rights reserved.
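The quadratic interpolation method named above fits a parabola through three sampled points and takes its vertex as the optimum estimate. A minimal sketch (the three sample values are invented; in the paper the objective is the effective electromechanical coupling coefficient versus PZT excitation position):

```python
# Quadratic interpolation: fit a parabola through three sampled points
# and take its vertex as the optimum estimate.
def quadratic_peak(x0, y0, x1, y1, x2, y2):
    # Coefficients of the parabola a*x^2 + b*x + c through the three
    # points (Lagrange form); the vertex lies at x = -b / (2a).
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    return -b / (2.0 * a)

# Invented samples: coupling coefficient (%) at three positions (mm).
x_opt = quadratic_peak(20.0, 41.8, 22.5, 42.6, 25.0, 41.5)
print(f"estimated optimal excitation position: {x_opt:.2f} mm")
```

In practice the sampling and refit are repeated around the new estimate until the vertex position converges.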
Koscho, Michael E; Grubbs, Robert H; Lewis, Nathan S
2002-03-15
Arrays of vapor detectors have been formed through addition of varying mass fractions of the plasticizer diethylene glycol dibenzoate to carbon black-polymer composites of poly(vinyl acetate) (PVAc) or of poly(N-vinylpyrrolidone). Addition of plasticizer in 5% mass fraction increments produced 20 compositionally different detectors from each polymer composite. Differences in vapor sorption and permeability that effected changes in the dc electrical resistance response of these compositionally different detectors allowed identification and classification of various test analytes using standard chemometric methods. Glass transition temperatures, Tg, were measured using differential scanning calorimetry for plasticized polymers having a mass fraction of 0, 0.10, 0.20, 0.30, 0.40, or 0.50 of plasticizer in the composite. The plasticized PVAc composites with Tg < 25 degrees C showed rapid responses at room temperature to all of the test analyte vapors studied in this work, whereas composites with Tg > 25 degrees C showed response times that were highly dependent on the polymer/analyte combination. These composites showed a discontinuity in the temperature dependence of their resistance, and this discontinuity provided a simple method for determining the Tg of the composite and for determining the temperature or plasticizer mass fraction above which rapid resistance responses could be obtained for all members of the test set of analyte vapors. The plasticization approach provides a method for achieving rapid detector response times as well as for producing a large number of chemically different vapor detectors from a limited number of initial chemical feedstocks.
Arantes de Carvalho, Gabriel G; Kondaveeti, Stalin; Petri, Denise F S; Fioroto, Alexandre M; Albuquerque, Luiza G R; Oliveira, Pedro V
2016-12-01
Analytical methods for the determination of rare earth elements (REE) in natural waters by plasma spectrochemical techniques often require sample preparation procedures to preconcentrate the analytes and to remove matrix constituents that may interfere with the analytical measurements. In the present work, calcium alginate (CA) beads were used for the first time for Ce, La and Nd preconcentration from groundwater samples prior to determination by inductively coupled plasma optical emission spectrometry (ICP OES). Test samples were analyzed in batch mode by transferring a 40 mL test portion (pH = 5 ± 0.2) into a 50 mL polyethylene flask containing 125 mg of CA beads. After 15 min of contact, the analytes were quantitatively extracted from the loaded CA beads with 2.0 mL of 1.0 mol L(-1) HCl solution for determination by ICP OES, using the Ce (II) 456.236, La (II) 379.478 and Nd (II) 430.358 nm emission lines. The proposed approach is a reliable alternative for single-stage REE preconcentration from aqueous samples, as it provided accurate results based on addition and recovery analysis of groundwater. The results obtained by the proposed method were also compared with those from a reference method based on inductively coupled plasma mass spectrometry (ICP-MS), and no significant differences were observed after applying the Student's t-test at the 95% confidence level. Copyright © 2016 Elsevier B.V. All rights reserved.
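A method comparison of the kind described (proposed method vs. ICP-MS, Student's t-test at 95% confidence) is commonly run as a paired t-test on samples analyzed by both methods. A minimal sketch with invented Ce concentrations, not the paper's data:

```python
import math
import statistics as st

def paired_t(a, b):
    """Paired Student's t statistic (and degrees of freedom) for two
    methods run on the same set of samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    t = st.mean(d) / (st.stdev(d) / math.sqrt(n))
    return t, n - 1

# Hypothetical Ce results (mg/L) by the proposed method vs. the reference
proposed  = [1.02, 0.98, 1.05, 0.97, 1.01, 1.00]
reference = [1.00, 0.99, 1.03, 0.98, 1.02, 0.99]
t, df = paired_t(proposed, reference)
# Two-sided critical value for df = 5 at 95% confidence is 2.571;
# |t| below that means no significant difference between the methods.
no_difference = abs(t) < 2.571
```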
Experimental investigation of elastic mode control on a model of a transport aircraft
NASA Technical Reports Server (NTRS)
Abramovitz, M.; Heimbaugh, R. M.; Nomura, J. K.; Pearson, R. M.; Shirley, W. A.; Stringham, R. H.; Tescher, E. L.; Zoock, I. E.
1981-01-01
A 4.5 percent scale DC-10 derivative flexible model with active controls is fabricated, developed, and tested to investigate the ability to suppress flutter and reduce gust loads with actively controlled surfaces. The model is analyzed and tested in both semispan and complete model configurations. Analytical methods are refined, and control laws are developed and successfully tested on both versions of the model. A 15 to 25 percent increase in flutter speed due to the active system is demonstrated, as is the capability of an active control system to significantly reduce wing bending moments due to turbulence. Good correlation is obtained between test and analytical prediction.
Piérard, Gérald E; Courtois, Justine; Ritacco, Caroline; Humbert, Philippe; Fanian, Ferial; Piérard-Franchimont, Claudine
2015-01-01
Background In cosmetic science, noninvasive sampling of the upper part of the stratum corneum is conveniently performed using strippings with adhesive-coated discs (SACD) and cyanoacrylate skin surface strippings (CSSSs). Methods Under controlled conditions, it is possible to scrutinize SACD and CSSS with objectivity using appropriate methods of analytical morphology. These procedures apply to a series of clinical conditions including xerosis grading, comedometry, corneodynamics, corneomelametry, corneosurfametry, corneoxenometry, and dandruff assessment. Results With any of the analytical evaluations, SACD and CSSS provide specific salient information that is useful in the field of cosmetology. In particular, both methods appear valuable and complementary in assessing the human skin compatibility of personal skincare products. Conclusion A set of quantitative analytical methods applicable to the minimally invasive and low-cost SACD and CSSS procedures allow for a sound assessment of cosmetic effects on the stratum corneum. Under regular conditions, both methods are painless and do not induce adverse events. Globally, CSSS appears more precise and informative than the regular SACD stripping. PMID:25767402
Field demonstration of on-site analytical methods for TNT and RDX in ground water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, H.; Ferguson, G.; Markos, A.
1996-12-31
A field demonstration was conducted to assess the performance of eight commercially available and emerging colorimetric, immunoassay, and biosensor on-site analytical methods for the explosives 2,4,6-trinitrotoluene (TNT) and hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in ground water and leachate at the Umatilla Army Depot Activity, Hermiston, Oregon and US Naval Submarine Base, Bangor, Washington, Superfund sites. Ground water samples were analyzed by each of the on-site methods and the results compared to laboratory analysis by high performance liquid chromatography (HPLC) using EPA SW-846 Method 8330. The commercial methods evaluated include the EnSys, Inc., TNT and RDX colorimetric test kits (EPA SW-846 Methods 8515 and 8510) with a solid phase extraction (SPE) step, the DTECH/EM Science TNT and RDX immunoassay test kits (EPA SW-846 Methods 4050 and 4051), and the Ohmicron TNT immunoassay test kit. The emerging methods tested include the antibody-based Naval Research Laboratory (NRL) Continuous Flow Immunosensor (CFI) for TNT and RDX, and the Fiber Optic Biosensor (FOB) for TNT. Accuracy of the on-site methods was evaluated using linear regression analysis and relative percent difference (RPD) comparison criteria. Over the range of conditions tested, the colorimetric methods showed the highest accuracy for TNT and RDX. The colorimetric method was selected for routine ground water monitoring at the Umatilla site, and further field testing of the NRL CFI and FOB biosensors will continue at both Superfund sites.
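The accuracy evaluation described (linear regression of on-site against laboratory HPLC results, plus relative percent difference) can be sketched with ordinary least squares and the standard RPD formula. The functions and data below are generic illustrations, not the study's software or results.

```python
def linreg(x, y):
    """Ordinary least-squares slope and intercept, e.g. for regressing
    on-site TNT results against EPA Method 8330 lab results."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def rpd(a, b):
    """Relative percent difference between a paired field and lab result."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical field-vs-lab pairs (ug/L); slope near 1 and small
# intercept indicate good agreement.
slope, intercept = linreg([10, 50, 100, 250], [11, 48, 103, 244])
agreement = rpd(110.0, 90.0)
```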
NASA Technical Reports Server (NTRS)
Geer, Richard D.
1989-01-01
To assure the quality of potable water (PW) on the Space Station (SS) a number of chemical and physical tests must be conducted routinely. After reviewing the requirements for potable water, both direct and indirect analytical methods are evaluated that could make the required tests and improvements compatible with the Space Station operation. A variety of suggestions are made to improve the analytical techniques for SS operation. The most important recommendations are: (1) the silver/silver chloride electrode (SB) method of removing I sub 2/I (-) biocide from the water, since it may interfere with analytical procedures for PW and also its end uses; (2) the orbital reactor (OR) method of carrying out chemistry and electrochemistry in microgravity by using a disk shaped reactor on an orbital table to impart artificial G force to the contents, allowing solution mixing and separation of gases and liquids; and (3) a simple ultra low volume highly sensitive electrochemical/conductivity detector for use with a capillary zone electrophoresis apparatus. It is also recommended, since several different conductivity and resistance measurements are made during the analysis of PW, that the bipolar pulse measuring circuit be used in all these applications for maximum compatibility and redundancy of equipment.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2014-01-01
Monitoring the misuse of drugs and the abuse of substances and methods potentially or evidently improving athletic performance by analytical chemistry strategies is one of the main pillars of modern anti-doping efforts. Owing to continuously growing knowledge in medicine, pharmacology, and (bio)chemistry, new chemical entities are frequently established and developed, many of which present a temptation for sportsmen and women due to the assumed or attributed beneficial effects of such substances and preparations on, for example, endurance, strength, and regeneration. By means of new technologies, expanded existing test protocols, and new insights into the metabolism, distribution, and elimination of compounds prohibited by the World Anti-Doping Agency (WADA), analytical assays have been further improved in agreement with the content of the 2013 Prohibited List. In this annual banned-substance review, literature concerning human sports drug testing published between October 2012 and September 2013 is summarized and reviewed, with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2013 John Wiley & Sons, Ltd.
40 CFR 80.47 - Performance-based Analytical Test Method Approach.
Code of Federal Regulations, 2014 CFR
2014-07-01
... chemistry and statistics, or at least a bachelor's degree in chemical engineering, from an accredited... be compensated for any known chemical interferences using good laboratory practices. (3) The test... section, individual test results shall be compensated for any known chemical interferences using good...
Analytical Approaches to Verify Food Integrity: Needs and Challenges.
Stadler, Richard H; Tran, Lien-Anh; Cavin, Christophe; Zbinden, Pascal; Konings, Erik J M
2016-09-01
A brief overview of the main analytical approaches and practices to determine food authenticity is presented, also addressing the food supply chain and future requirements to more effectively mitigate food fraud. Food companies are introducing procedures and mechanisms that allow them to identify vulnerabilities in their food supply chain under the umbrella of a food fraud prevention management system. A key step and first line of defense is thorough supply chain mapping and full transparency, assessing the likelihood of fraudsters penetrating the chain at any point. More vulnerable chains, such as those where ingredients and/or raw materials are purchased through traders or auctions, may require a higher degree of sampling, testing, and surveillance. Access to analytical tools is therefore pivotal, requiring continuous development and possibly sophistication in identifying chemical markers, data acquisition, and modeling. Significant progress in portable technologies is evident already today, for instance in the rapid testing now available at the agricultural level. In the near future, consumers may also have the ability to scan products in stores or at home to authenticate labels and food content. For food manufacturers, targeted analytical methods complemented by untargeted approaches are end control measures at the factory gate when the material is delivered. In essence, testing for food adulterants is an integral part of routine QC, ideally tailored to the risks in the individual markets and/or geographies or supply chains. The development of analytical methods is a first step in verifying the compliance and authenticity of food materials. A next, more challenging step is the successful establishment of global consensus reference methods, as exemplified by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals initiative, which can serve as an approach that could also be applied to methods for contaminants and adulterants in food.
The food industry has taken these many challenges aboard, working closely with all stakeholders and continuously communicating on progress in a fully transparent manner.
Engel, A; Plöger, M; Mulac, D; Langer, K
2014-01-30
Nanoparticles composed of poly(DL-lactide-co-glycolide) (PLGA) represent promising colloidal drug carriers for improved drug targeting. Although most research activities focus on intravenous application of these carriers, peroral administration has been described to improve the bioavailability of poorly soluble drugs. Based on these insights, the manuscript describes a model tablet formulation for PLGA nanoparticles and, in particular, its analytical characterisation with regard to a nanosized drug carrier. Besides physico-chemical tablet characterisation according to the pharmacopoeias, the main goal of the study was the development of a suitable analytical method for the quantification of nanoparticle release from tablets. An analytical flow field-flow fractionation (AF4) method was established and validated which enables determination of the nanoparticle content in solid dosage forms as well as quantification of particle release during dissolution testing. For particle detection, a multi-angle light scattering (MALS) detector was coupled to the AF4 system. After dissolution testing, the presence of unaltered PLGA nanoparticles was successfully proved by dynamic light scattering and scanning electron microscopy. Copyright © 2013 Elsevier B.V. All rights reserved.
Flight Test Experiment Design for Characterizing Stability and Control of Hypersonic Vehicles
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2008-01-01
A maneuver design method that is particularly well-suited for determining the stability and control characteristics of hypersonic vehicles is described in detail. Analytical properties of the maneuver design are explained. The importance of these analytical properties for maximizing information content in flight data is discussed, along with practical implementation issues. Results from flight tests of the X-43A hypersonic research vehicle (also called Hyper-X) are used to demonstrate the excellent modeling results obtained using this maneuver design approach. A detailed design procedure for generating the maneuvers is given to allow application to other flight test programs.
NASA Astrophysics Data System (ADS)
Khodaei, Mohammad; Fathi, Mohammadhossein; Meratian, Mahmood; Savabi, Omid
2018-05-01
Reducing the elastic modulus and improving biological fixation to the bone are possible by using porous scaffolds. In the present study, porous titanium scaffolds with different porosities were fabricated using the space holder method. The pore distribution, formed phases and mechanical properties of the titanium scaffolds were studied by scanning electron microscopy (SEM), X-ray diffraction (XRD) and cold compression testing. The results of the compression tests were then compared to the Gibson-Ashby model. Both the experimentally measured and the analytically calculated elastic moduli of the porous titanium scaffolds decreased with increasing porosity, and the agreement between the measured and calculated values also improved with increasing porosity.
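The Gibson-Ashby comparison referenced above rests on the well-known foam scaling law E/E_s ≈ C(ρ/ρ_s)^n with relative density ρ/ρ_s = 1 − porosity. A minimal sketch; the constants C = 1 and n = 2 and the ~110 GPa solid-titanium modulus are textbook defaults used for illustration, not the study's fitted values.

```python
def gibson_ashby_modulus(E_solid, porosity, C=1.0, n=2.0):
    """Gibson-Ashby scaling law for cellular solids:
    E/E_s = C * (rho/rho_s)^n, with relative density = 1 - porosity.
    C ~ 1 and n ~ 2 are the commonly quoted open-cell values; in
    practice they are fitted to experimental data."""
    return E_solid * C * (1.0 - porosity) ** n

# Illustrative check: the scaffold modulus drops steeply with porosity,
# consistent with the trend reported in the abstract (E_s ~ 110 GPa).
E50 = gibson_ashby_modulus(110.0, 0.50)  # 50 % porosity
E70 = gibson_ashby_modulus(110.0, 0.70)  # 70 % porosity
```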
Errors in clinical laboratories or errors in laboratory medicine?
Plebani, Mario
2006-01-01
Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes in pre- and post-examination steps must be minimized to guarantee the total quality of laboratory services.
NASA Astrophysics Data System (ADS)
Nunes, Josane C.
1991-02-01
This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments and iodine-131 is used to test thyroid function. More recent medical procedures under development and which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever there is handling of radioactive material. Determination of absorbed doses in the examples of the preceding paragraph requires considering boundaries of interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated.
Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate experimental results to within 30%. This is encouraging, but it remains to be determined whether this method performs equally well for other source energies.
On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.
2003-01-01
A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for determining specific aging mechanisms.
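Time-based superposition of the kind referenced above shifts isothermal creep or relaxation data along the log-time axis to build a master curve at a reference temperature. A minimal Arrhenius-type sketch under one common sign convention; the activation energy and temperatures below are illustrative, not values from this work.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def log_shift_factor(T, T_ref, Ea):
    """log10 of the horizontal shift factor a_T used to slide an
    isothermal curve measured at T (K) onto the master curve at T_ref (K);
    Ea is an apparent activation energy in J/mol."""
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

def reduced_time(t, T, T_ref, Ea):
    """Reduced (equivalent) time t/a_T at the reference temperature:
    testing hotter than T_ref compresses real time, so the reduced
    time exceeds the laboratory time -- the basis of accelerated testing."""
    return t / (10.0 ** log_shift_factor(T, T_ref, Ea))

# Illustrative: 1 h at 120 C maps to many hours of equivalent aging at 80 C
t_equiv = reduced_time(1.0, T=393.0, T_ref=353.0, Ea=1.2e5)
```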
Modeling and analysis of a novel planar eddy current damper
NASA Astrophysics Data System (ADS)
Zhang, He; Kou, Baoquan; Jin, Yinxi; Zhang, Lu; Zhang, Hailin; Li, Liyi
2014-05-01
In this paper, a novel 2-DOF permanent magnet planar eddy current damper is proposed, of which the stator is made of a copper plate and the mover is composed of two orthogonal 1-D permanent magnet arrays with a double sided structure. The main objective of the planar eddy current damper is to provide two orthogonal damping forces for dynamic systems like the 2-DOF high precision positioning system. Firstly, the basic structure and the operating principle of the planar damper are introduced. Secondly, the analytical model of the planar damper is established where the magnetic flux density distribution of the permanent magnet arrays is obtained by using the equivalent magnetic charge method and the image method. Then, the analytical expressions of the damping force and damping coefficient are derived. Lastly, to verify the analytical model, the finite element method (FEM) is adopted for calculating the flux density and a planar damper prototype is manufactured and thoroughly tested. The results from FEM and experiments are in good agreement with the ones from the analytical expressions indicating that the analytical model is reasonable and correct.
NASA Astrophysics Data System (ADS)
Wang, Xi; Yang, Bintang; Yu, Hu; Gao, Yulong
2017-04-01
The impulse excitation of a mechanism causes transient vibration. In order to achieve adaptive transient vibration control, a method that can accurately model the response is needed. This paper presents an analytical model to obtain the response of a primary system fitted with a dynamic vibration absorber (DVA) under impulse excitation. The impulse excitation, which can be divided into single-impulse and multi-impulse excitation, is simplified as a sinusoidal wave to establish the analytical model. To decouple the differential governing equations, a transform matrix is applied to convert the response from physical coordinates to modal coordinates. The analytical response in physical coordinates can then be obtained by inverse transformation. The numerical Runge-Kutta method and experimental tests have demonstrated the effectiveness of the proposed analytical model. The wavelet transform of the response indicates that the transient vibration consists of components with multiple frequencies, and the modeling results coincide with the experiments. Optimizing simulations based on a genetic algorithm and experimental tests demonstrate that the transient vibration of the primary system can be decreased by changing the stiffness of the DVA. The results presented in this paper are the foundation for developing an adaptive transient vibration absorber in the future.
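The modal decoupling step described above starts from the coupled primary-plus-DVA equations of motion, whose undamped characteristic equation is a quadratic in ω². A minimal sketch with invented masses and stiffnesses (not the paper's parameters):

```python
import math

def natural_frequencies(m1, k1, m2, k2):
    """Undamped natural frequencies (rad/s) of a primary system (m1, k1)
    with an attached dynamic vibration absorber (m2, k2), from the
    characteristic equation of the coupled 2-DOF model:
        m1*m2*w^4 - (m1*k2 + m2*(k1 + k2))*w^2 + k1*k2 = 0
    """
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    disc = math.sqrt(b * b - 4.0 * a * c)
    w2_lo = (-b - disc) / (2.0 * a)
    w2_hi = (-b + disc) / (2.0 * a)
    return math.sqrt(w2_lo), math.sqrt(w2_hi)

# Hypothetical values: a tuned absorber splits the original resonance
# sqrt(k1/m1) = 20 rad/s into two coupled modes that straddle it.
w1, w2 = natural_frequencies(m1=10.0, k1=4000.0, m2=1.0, k2=400.0)
```

Changing k2, as the abstract's genetic-algorithm optimization does, moves these two modal frequencies and hence the transient response.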
Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R
2008-01-01
In recent years a number of point of care testing (POCT) glucometers were introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared the results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.
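Total error of the kind calculated from daily QC data is commonly estimated with the Westgard model TE = |bias| + z·CV. A sketch with illustrative numbers: the z = 1.65 multiplier is one common convention, and the 6.9% desirable specification for glucose is a frequently quoted biological-variation figure, not this study's limit.

```python
def total_error(bias_pct, cv_pct, z=1.65):
    """Total analytical error estimate (Westgard model):
    TE = |bias| + z * CV, with z = 1.65 giving a one-sided 95 % limit."""
    return abs(bias_pct) + z * cv_pct

# Illustrative: a glucometer with 3 % bias and 4 % imprecision fails a
# desirable glucose specification of ~6.9 % total allowable error.
te = total_error(bias_pct=3.0, cv_pct=4.0)
meets_spec = te <= 6.9
```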
Assessment of Automated Measurement and Verification (M&V) Methods; Performance Metrics and Objective Testing Methods for Energy Baseline Modeling Software
Granderson, J., et al.
Lawrence Berkeley National Laboratory
2017-01-01
Background Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen collection, specimen transportation and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and the types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. Results The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%).
Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
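The proportional Z test used above to compare error frequencies between hospitals is the standard two-proportion z-test. A minimal sketch with invented counts, not the survey's data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for comparing two error rates (pooled-proportion form):
    x1 of n1 samples rejected at lab A vs. x2 of n2 at lab B."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p * (1.0 - p) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# Hypothetical: 90 hemolyzed samples of 1000 at lab A vs. 60 of 1000 at lab B
z = two_proportion_z(90, 1000, 60, 1000)
significant = abs(z) > 1.96  # two-sided test at 95 % confidence
```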
García-Blanco, Ana; Peña-Bautista, Carmen; Oger, Camille; Vigor, Claire; Galano, Jean-Marie; Durand, Thierry; Martín-Ibáñez, Nuria; Baquero, Miguel; Vento, Máximo; Cháfer-Pericás, Consuelo
2018-07-01
Lipid peroxidation plays an important role in Alzheimer Disease, so the corresponding metabolites found in urine samples could be potential biomarkers. The aim of this work is to develop a reliable ultra-performance liquid chromatography-tandem mass spectrometry analytical method to determine a new set of lipid peroxidation compounds in urine samples. Excellent sensitivity was achieved, with limits of detection between 0.08 and 17 nmol L(-1), which renders this method suitable to monitor analyte concentrations in real samples. The method's precision was satisfactory, with coefficients of variation around 5-17% (intra-day) and 8-19% (inter-day). The accuracy of the method was assessed by analysis of spiked urine samples, obtaining recoveries between 70% and 120% for most of the analytes. The utility of the described method was tested by analyzing urine samples from patients early diagnosed with mild cognitive impairment or mild dementia Alzheimer Disease following the clinical standard criteria. As preliminary results, some analytes (17(RS)-10-epi-SC-Δ15-11-dihomo-IsoF, PGE2) and total parameters (neuroprostanes, isoprostanes, isofurans) show differences between the control and the clinical groups. Thus, these analytes could be potential early Alzheimer Disease biomarkers assessing the patients' pro-oxidant condition. Copyright © 2018 Elsevier B.V. All rights reserved.
Analytical validation of a new point-of-care assay for serum amyloid A in horses.
Schwartz, D; Pusterla, N; Jacobsen, S; Christopher, M M
2018-01-17
Serum amyloid A (SAA) is a major acute phase protein in horses. A new point-of-care (POC) test for SAA (Stablelab) is available, but studies evaluating its analytical accuracy are lacking. To evaluate the analytical performance of the SAA POC test by 1) determining linearity and precision, 2) comparing results in whole blood with those in serum or plasma, and 3) comparing POC results with those obtained using a previously validated turbidimetric immunoassay (TIA). Assay validation. Analytical validation of the POC test was done in accordance with American Society of Veterinary Clinical Pathology guidelines using residual equine serum/plasma and whole blood samples from the Clinical Pathology Laboratory at the University of California-Davis. A TIA was used as the reference method. We also evaluated the effect of haematocrit (HCT). The POC test was linear for SAA concentrations of up to at least 1000 μg/mL (r = 0.991). Intra-assay CVs were 13, 18 and 15% at high (782 μg/mL), intermediate (116 μg/mL) and low (64 μg/mL) concentrations. Inter-assay (inter-batch) CVs were 45, 14 and 15% at high (1372 μg/mL), intermediate (140 μg/mL) and low (56 μg/mL) concentrations. SAA results in whole blood were significantly lower than those in serum/plasma (P = 0.0002), but were positively correlated (r = 0.908) and not affected by HCT (P = 0.261); proportional negative bias was observed in samples with SAA>500 μg/mL. The difference between methods exceeded the 95% confidence interval of the combined imprecision of both methods (15%). Analytical validation could not be performed in whole blood, the sample most likely to be used stall side. The POC test has acceptable accuracy and precision in equine serum/plasma with SAA concentrations of up to at least 1000 μg/mL. Low inter-batch precision at high concentrations may affect serial measurements, and the use of the same test batch and sample type (serum/plasma or whole blood) is recommended. 
Comparison of results between the POC test and the TIA is not recommended. © 2018 EVJ Ltd.
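The intra- and inter-assay precision figures quoted in this validation are coefficients of variation (CV% = SD/mean × 100). A minimal sketch with invented replicate SAA readings, not the study's measurements:

```python
import statistics as st

def cv_percent(replicates):
    """Coefficient of variation (%) for replicate measurements: the
    sample standard deviation expressed as a percentage of the mean."""
    return st.stdev(replicates) / st.mean(replicates) * 100.0

# Hypothetical replicate SAA readings (ug/mL) at a low concentration;
# a CV near 10-15 % would be in line with the precision reported above.
low_replicates = [56, 64, 72, 60, 68]
cv_low = cv_percent(low_replicates)
```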
Propfan experimental data analysis
NASA Technical Reports Server (NTRS)
Vernon, David F.; Page, Gregory S.; Welge, H. Robert
1984-01-01
A data reduction method, consistent with the performance prediction methods used for analysis of new aircraft designs, is defined and compared to the method currently used by NASA using data obtained from an Ames Research Center 11-foot transonic wind tunnel test. Pressure and flow visualization data from the Ames test for both the powered straight underwing nacelle and an unpowered contoured overwing nacelle installation are used to determine the flow phenomena present for a wing-mounted turboprop installation. The test data are compared to analytic methods, showing the analytic methods to be suitable for design and analysis of new configurations. The data analysis indicated that designs with zero interference drag levels are achievable with proper wing and nacelle tailoring. A new overwing contoured nacelle design and a modification to the wing leading edge extension for the current wind tunnel model design are evaluated. Hardware constraints of the current model parts prevent obtaining any significant performance improvement from the modified nacelle contouring. A new aspect ratio wing design for an up-outboard-rotation turboprop installation is defined, and an advanced contoured nacelle is provided.
Farajzadeh, Mir Ali; Bamorowat, Mahdi; Mogaddam, Mohammad Reza Afshar
2016-11-01
An efficient, reliable, sensitive, rapid, and green analytical method for the extraction and determination of neonicotinoid insecticides in aqueous samples has been developed using ionic liquid phase microextraction coupled with high performance liquid chromatography with diode array detection. In this method, a few microliters of 1-hexyl-3-methylimidazolium hexafluorophosphate (the extractant) are added onto a Ringer tablet, which is then transferred into a conical test tube containing the aqueous phase of the analytes. On manual shaking, the Ringer tablet dissolves and the extractant is released into the aqueous phase as very tiny droplets, producing a cloudy solution. After centrifugation, the analytes extracted into the ionic liquid are collected at the bottom of the conical test tube. Under the optimum extraction conditions, the method showed low limits of detection and quantification of 0.12-0.33 and 0.41-1.11 ng mL(-1), respectively. Extraction recoveries ranged from 66% to 84% and enrichment factors from 655 to 843. Finally, different aqueous samples were successfully analyzed using the proposed method. Copyright © 2016 Elsevier B.V. All rights reserved.
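In dispersive microextraction work, the enrichment factor (EF) and extraction recovery (ER%) are conventionally related by ER% = EF × (V_sed / V_aq) × 100, where V_sed is the settled extractant-phase volume and V_aq the aqueous sample volume. A sketch under assumed volumes (20 µL and 20 mL, which are not stated in the abstract) showing that an EF range of 655-843 is consistent with 66-84% recoveries:

```python
def enrichment_factor(c_sed, c0):
    # EF: analyte concentration in the settled ionic-liquid phase
    # relative to its initial aqueous concentration
    return c_sed / c0

def extraction_recovery_pct(ef, v_sed, v_aq):
    # ER% = EF * (V_sed / V_aq) * 100
    return ef * (v_sed / v_aq) * 100.0

# Assumed volumes for illustration: 20 uL settled phase, 20 mL sample
v_sed, v_aq = 0.020, 20.0
for ef in (655.0, 843.0):
    print(ef, extraction_recovery_pct(ef, v_sed, v_aq))
```

With a volume ratio of 1/1000 the two reported ranges line up, which also shows why EF is a dimensionless ratio rather than a percentage.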
ERIC Educational Resources Information Center
White, Charles E., Jr.
The purpose of this study was to develop and implement a hypertext documentation system in an industrial laboratory and to evaluate its usefulness by participative observation and a questionnaire. Existing word-processing test method documentation was converted directly into a hypertext format or "hyperdocument." The hyperdocument was designed and…
Detection of molecular interactions
Groves, John T [Berkeley, CA]; Baksh, Michael M [Fremont, CA]; Jaros, Michal [Brno, CZ]
2012-02-14
A method and assay are described for measuring the interaction between a ligand and an analyte. The assay can include a suspension of colloidal particles that are associated with a ligand of interest. The colloidal particles are maintained in the suspension at or near a phase transition state from a condensed phase to a dispersed phase. An analyte to be tested is then added to the suspension. If the analyte binds to the ligand, a phase change occurs to indicate that the binding was successful.
Process-Hardened, Multi-Analyte Sensor for Characterizing Rocket Plume Constituents
NASA Technical Reports Server (NTRS)
Goswami, Kisholoy
2011-01-01
A multi-analyte sensor was developed that enables simultaneous detection of rocket engine combustion-product molecules in a launch-vehicle ground test stand. The sensor was developed using a pin-printing method by incorporating multiple sensor elements on a single chip. It demonstrated accurate and sensitive detection of analytes such as carbon dioxide, carbon monoxide, kerosene, isopropanol, and ethylene from a single measurement. The use of pin-printing technology enables high-volume fabrication of the sensor chip, which will ultimately eliminate the need for individual sensor calibration since many identical sensors are made in one batch. Tests were performed using a single-sensor chip attached to a fiber-optic bundle. The use of a fiber bundle allows placement of the opto-electronic readout device at a place remote from the test stand. The sensors are rugged for operation in harsh environments.
Paoloni, Angela; Alunni, Sabrina; Pelliccia, Alessandro; Pecorelli, Ivan
2016-01-01
A simple and straightforward method for the simultaneous determination of residues of 13 pesticides in honey samples (acrinathrin, bifenthrin, bromopropylate, cyhalothrin-lambda, cypermethrin, chlorfenvinphos, chlorpyrifos, coumaphos, deltamethrin, fluvalinate-tau, malathion, permethrin and tetradifon) from different pesticide classes has been developed and validated. The analytical method involves dissolution of honey in water and extraction of pesticide residues with n-hexane, followed by clean-up on a Florisil SPE column. The extract was evaporated and taken up in a solution of an injection internal standard (I-IS), ethion, and finally analyzed by capillary gas chromatography with electron capture detection (GC-µECD). Identification for qualitative purposes was conducted by gas chromatography with a triple quadrupole mass spectrometer (GC-MS/MS). A matrix-matched calibration curve was constructed for quantitative purposes by plotting the area ratio (analyte/I-IS) against concentration using the GC-µECD instrument. According to document No. SANCO/12571/2013, the method was validated by testing the following parameters: linearity, matrix effect, specificity, precision, trueness (bias) and measurement uncertainty. The analytical process was validated by analyzing blank honey samples spiked at levels equal to and greater than 0.010 mg/kg (the limit of quantification). All parameters compared satisfactorily with the values established by document No. SANCO/12571/2013. The analytical performance was verified by participation in eight multi-residue proficiency tests organized by BIPEA, obtaining satisfactory z-scores in all 70 determinations. Measurement uncertainty was estimated according to the top-down approaches described in Appendix C of the SANCO document, using the within-laboratory reproducibility relative standard deviation combined with laboratory bias from the proficiency test data.
Steuer, Andrea E; Forss, Anna-Maria; Dally, Annika M; Kraemer, Thomas
2014-11-01
In the context of driving under the influence of drugs (DUID), not only common drugs of abuse may have an influence, but also medications with similar mechanisms of action. Simultaneous quantification of a variety of drugs and medications relevant in this context allows faster and more effective analyses. Therefore, multi-analyte approaches have gained more and more popularity in recent years. Usually, calibration curves for such procedures contain a mixture of all analytes, which might lead to mutual interferences. In this study we investigated whether the use of such mixtures leads to reliable results for authentic samples containing only one or two analytes. Five hundred microliters of whole blood were extracted by routine solid-phase extraction (SPE, HCX). Analysis was performed on an ABSciex 3200 QTrap instrument with ESI+ in scheduled MRM mode. The method was fully validated according to international guidelines including selectivity, recovery, matrix effects, accuracy and precision, stabilities, and limit of quantification. The selected SPE provided recoveries >60% for all analytes except 6-monoacetylmorphine (MAM) with coefficients of variation (CV) below 15% or 20% for quality controls (QC) LOW and HIGH, respectively. Ion suppression >30% was found for benzoylecgonine, hydrocodone, hydromorphone, MDA, oxycodone, and oxymorphone at QC LOW, however CVs were always below 10% (n=6 different whole blood samples). Accuracy and precision criteria were fulfilled for all analytes except for MAM. Systematic investigation of accuracy determined for QC MED in a multi-analyte mixture compared to samples containing only single analytes revealed no relevant differences for any analyte, indicating that a multi-analyte calibration is suitable for the presented method. Comparison of approximately 60 samples to a former GC-MS method showed good correlation. The newly validated method was successfully applied to more than 1600 routine samples and 3 proficiency tests. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Mitchell, J M; Griffiths, M W; McEwen, S A; McNab, W B; Yee, A J
1998-06-01
This paper presents a historical review of antimicrobial use in food animals, the causes of residues in meat and milk, the types of residues found, their regulation in Canada, tests used for their detection, and test performance parameters, with an emphasis on immunoassay techniques. The development of residue detection methods began shortly after the introduction of antimicrobials to food animal production in the late 1940s. From the initial technical concerns expressed by the dairy industry to the present public health and international trade implications, there has been an ongoing need for reliable, sensitive, and economical methods for the detection of antimicrobial residues in food animal products such as milk and meat. Initially there were microbial growth inhibition tests, followed by more sensitive and specific methods based on receptor-binding, immunochemical, and chromatographic principles. An understanding of basic test performance parameters and their implications is essential when choosing an analytical strategy for residue testing. While each test format has its own attributes, no single test will meet all the required analytical needs. Therefore, the use of a tiered or integrated system employing assays designated for screening and confirmation is necessary to ensure that foods containing violative residues are not introduced into the food chain.
System identification of analytical models of damped structures
NASA Technical Reports Server (NTRS)
Fuh, J.-S.; Chen, S.-Y.; Berman, A.
1984-01-01
A procedure is presented for identifying a linear nonproportionally damped system. The system damping is assumed to be representable by a real symmetric matrix. Analytical mass, stiffness and damping matrices which constitute an approximate representation of the system are assumed to be available, along with an incomplete set of measured natural frequencies, damping ratios and complex mode shapes of the structure, normally obtained from test data. A method is developed to find the smallest changes in the analytical model so that the improved model exactly predicts the measured modal parameters. The method uses the orthogonality relationship to improve the mass and damping matrices and the dynamic equation to find the improved stiffness matrix.
NASA Astrophysics Data System (ADS)
Li, Zhao; Wang, Dazhi; Zheng, Di; Yu, Linxin
2017-10-01
Rotational permanent magnet eddy current couplers are promising devices for torque and speed transmission without any mechanical contact. In this study, flux-concentration disk-type permanent magnet eddy current couplers with a double conductor rotor are investigated. Given the computational cost of the accurate three-dimensional finite element method, this paper proposes a mixed two-dimensional analytical modeling approach. Based on this approach, closed-form expressions of the magnetic field, eddy current, electromagnetic force and torque for such devices are obtained. Finally, a three-dimensional finite element method is employed to validate the analytical results, and a prototype is manufactured and tested for the torque-speed characteristic.
NASA Astrophysics Data System (ADS)
Avitabile, Daniele; Bridges, Thomas J.
2010-06-01
Numerical integration of complex linear systems of ODEs depending analytically on an eigenvalue parameter is considered. Complex orthogonalization, which is required to stabilize the numerical integration, results in non-analytic systems. It is shown that properties of eigenvalues are still efficiently recoverable by extracting information from a non-analytic characteristic function. The orthonormal systems are constructed using the geometry of Stiefel bundles. Different forms of continuous orthogonalization in the literature are shown to correspond to different choices of connection one-form on the Stiefel bundle. For the numerical integration, Gauss-Legendre Runge-Kutta (GLRK) algorithms are the principal choice for preserving orthogonality, and performance results are shown for a range of GLRK methods. The theory and methods are tested by application to example boundary value problems including the Orr-Sommerfeld equation in hydrodynamic stability.
Comparisons of Exploratory and Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Daniel, Larry G.
Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
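The synthetic likelihood the abstract relies on (in the sense of Wood) replaces an intractable likelihood with a multivariate Gaussian fitted to simulated summary statistics. A minimal, generic sketch; the toy simulator and summaries below are illustrative stand-ins, not the cancellation-axiom test itself:

```python
import numpy as np

def synthetic_loglik(theta, s_obs, simulate, n_sims=2000, seed=0):
    # Synthetic log-likelihood: simulate summary statistics under theta,
    # fit a Gaussian to them, evaluate its log-density at the observed
    # summaries (additive constant dropped; fine for comparisons)
    rng = np.random.default_rng(seed)
    sims = np.array([simulate(theta, rng) for _ in range(n_sims)])
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False) + 1e-9 * np.eye(sims.shape[1])
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

# Toy simulator (assumed): summaries are the mean and SD of 50 draws
# from N(theta, 1)
def simulate(theta, rng):
    x = rng.normal(theta, 1.0, size=50)
    return np.array([x.mean(), x.std(ddof=1)])

s_obs = np.array([0.05, 1.02])   # pretend observed summaries, near theta = 0
ll_near = synthetic_loglik(0.0, s_obs, simulate)
ll_far = synthetic_loglik(3.0, s_obs, simulate)
print(ll_near > ll_far)
```

An importance sampler, as in the article, would weight parameter draws by this synthetic likelihood to approximate the posterior.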
Monneret, Denis; Mestari, Fouzi; Atlan, Gregory; Corlouer, Camille; Ramani, Zo; Jaffre, Jeremy; Dever, Sylvie; Fressart, Veronique; Alkouri, Rana; Lamari, Foudil; Devilliers, Catherine; Imbert-Bismut, Françoise; Bonnefont-Rousselot, Dominique
2015-04-01
To determine the hemolysis interference on biochemical tests and immunoassays performed on Roche Diagnostics analyzers, according to different maximum allowable limits. Heparinized plasma and serum pools, free of interferences, were overloaded with increasing amounts of a hemoglobin-titrated hemolysate. This interference was evaluated for 45 analytes using Modular(®) and Cobas(®) analyzers. For each parameter, the hemolysis index (HI) corresponding to the traditional ± 10% change of concentrations from baseline (± 10%Δ) was determined, as well as those corresponding to the analytical change limit (ACL) and to the reference change value (RCV). Then, the relative frequency distribution (% RFD) of hemolyzed tests performed in a hospital laboratory over a 25-day period was established for each HI as allowable limit. Considering the ± 10%Δ, the analyte concentrations enhanced by hemolysis were, in decreasing order: lactate dehydrogenase (LDH), aspartate aminotransferase (AST), folate, potassium, creatine kinase, phosphorus, iron, alanine aminotransferase, lipase, magnesium and triglycerides. The analyte concentrations decreased by hemolysis were: haptoglobin, high-sensitivity troponin T and alkaline phosphatase. Over the 25-day period, the % RFD of tests impacted more than 10%Δ by hemolysis were < 7% for LDH; < 5% for AST, folate and iron; and < 1% for the other analytes. Considering the ACL, HIs were lower, substantially increasing the % RFD for many analytes, whereas only four analytes remained sensitive to hemolysis when considering the RCV. This study proposes new HIs based on different allowable limits, and can therefore serve as a starting point for the future harmonization of hemolysis interference evaluation needed in routine laboratory practice.
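The two allowable-limit families mentioned, the analytical change limit (ACL) and the reference change value (RCV), are commonly computed as z·√2·CV_A and z·√2·√(CV_A² + CV_I²) respectively, where CV_A is the analytical and CV_I the within-subject biological coefficient of variation. A sketch with assumed CVs, not the study's own figures:

```python
import math

def acl(cv_a, z=1.96):
    # Analytical change limit: smallest difference between two results
    # attributable to assay imprecision alone
    return z * math.sqrt(2.0) * cv_a

def rcv(cv_a, cv_i, z=1.96):
    # Reference change value: also folds in within-subject biological variation
    return z * math.sqrt(2.0) * math.sqrt(cv_a**2 + cv_i**2)

# Illustrative CVs (assumed): 3% analytical, 8% within-subject
print(round(acl(3.0), 1), round(rcv(3.0, 8.0), 1))
```

Because the RCV is always at least as large as the ACL for the same analyte, fewer analytes exceed it under hemolysis, which matches the pattern reported above.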
Humbert, H; Machinal, C; Labaye, Ivan; Schrotter, J C
2011-01-01
The determination of the virus retention capabilities of UF units during operation is essential for the operators of drinking water treatment facilities in order to guarantee an efficient and stable removal of viruses through time. In previous studies, an effective method (MS2-phage challenge tests) was developed by the Water Research Center of Veolia Environnement for the measurement of the virus retention rates (Log Removal Rate, LRV) of commercially available hollow fiber membranes at lab scale. In the present work, the protocol for monitoring membrane performance was transferred from lab scale to pilot scale. Membrane performances were evaluated during pilot trial and compared to the results obtained at lab scale with fibers taken from the pilot plant modules. PFU culture method was compared to RT-PCR method for the calculation of LRV in both cases. Preliminary tests at lab scale showed that both methods can be used interchangeably. For tests conducted on virgin membrane, a good consistency was observed between lab and pilot scale results with the two analytical methods used. This work intends to show that a reliable determination of the membranes performances based on RT-PCR analytical method can be achieved during the operation of the UF units.
Verification and application of the Iosipescu shear test method
NASA Technical Reports Server (NTRS)
Walrath, D. E.; Adams, D. F.
1984-01-01
Finite element models were used to study the effects of notch angle variations on the stress state within an Iosipescu shear test specimen. These analytical results were also studied to determine the feasibility of using strain gage rosettes and a modified extensometer to measure shear strains in this test specimen. Analytical results indicate that notch angle variations produced only small differences in simulated shear properties. Both strain gage rosettes and the modified extensometer were shown to be feasible shear strain transducers for the test method. The Iosipescu shear test fixture was redesigned to incorporate several improvements: accommodation of a 50 percent larger specimen for easier measurement of shear strain, a clamping mechanism to relax strict tolerances on specimen width, and a self-contained alignment tool for use during specimen installation. A set of in-plane and interlaminar shear properties was measured for three graphite fabric/epoxy composites made from T300/934 composite material. The three weave patterns were Oxford, 5-harness satin, and 8-harness satin.
A Comparison of Lifting-Line and CFD Methods with Flight Test Data from a Research Puma Helicopter
NASA Technical Reports Server (NTRS)
Bousman, William G.; Young, Colin; Toulmay, Francois; Gilbert, Neil E.; Strawn, Roger C.; Miller, Judith V.; Maier, Thomas H.; Costes, Michel; Beaumier, Philippe
1996-01-01
Four lifting-line methods were compared with flight test data from a research Puma helicopter and the accuracy assessed over a wide range of flight speeds. Hybrid Computational Fluid Dynamics (CFD) methods were also examined for two high-speed conditions. A parallel analytical effort was performed with the lifting-line methods to assess the effects of modeling assumptions and this provided insight into the adequacy of these methods for load predictions.
DROMO formulation for planar motions: solution to the Tsien problem
NASA Astrophysics Data System (ADS)
Urrutxua, Hodei; Morante, David; Sanjurjo-Rivo, Manuel; Peláez, Jesús
2015-06-01
The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo-Stiefel methods.
Borai, Anwar; Ichihara, Kiyoshi; Al Masaud, Abdulaziz; Tamimi, Waleed; Bahijri, Suhad; Armbuster, David; Bawazeer, Ali; Nawajha, Mustafa; Otaibi, Nawaf; Khalil, Haitham; Kawano, Reo; Kaddam, Ibrahim; Abdelaal, Mohamed
2016-05-01
This study is part of the IFCC global study to derive reference intervals (RIs) for 28 chemistry analytes in Saudis. Healthy individuals (n=826) aged ≥18 years were recruited using the global study protocol. All specimens were measured using an Architect analyzer. RIs were derived by both parametric and non-parametric methods for comparative purposes. The need for secondary exclusion of reference values based on the latent abnormal values exclusion (LAVE) method was examined. The magnitude of variation attributable to gender, age and region was calculated as the standard deviation ratio (SDR). Sources of variation (age, BMI, physical exercise and smoking levels) were investigated using multiple regression analysis. SDRs for gender, age and regional differences were significant for 14, 8 and 2 analytes, respectively. BMI-related changes in test results were conspicuous for CRP. For some metabolism-related parameters the RIs from the non-parametric method were wider than those from the parametric method, and RIs derived using the LAVE method differed significantly from those derived without it. RIs were derived with and without gender partition (BMI, drugs and supplements were considered). RIs applicable to Saudis were established for the majority of chemistry analytes, with gender, regional and age partitioning required for some analytes. The elevated upper limits of metabolic analytes reflect the high prevalence of metabolic syndrome in the Saudi population.
A new method for flight test determination of propulsive efficiency and drag coefficient
NASA Technical Reports Server (NTRS)
Bull, G.; Bridges, P. D.
1983-01-01
A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.
Two-condition within-participant statistical mediation analysis: A path-analytic framework.
Montoya, Amanda K; Hayes, Andrew F
2017-03-01
Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
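The path-analytic estimator described, the indirect effect as the product of the mean mediator difference (a) and the slope of the outcome difference on the mediator difference (b), with a percentile bootstrap interval, can be sketched on synthetic data. All variable names and simulated effect sizes below are illustrative assumptions, not the article's data:

```python
import numpy as np

def indirect_effect(m1, m2, y1, y2):
    # Two-condition within-participant design (after Judd et al. 2001 as
    # recast by Montoya & Hayes 2017): a = mean difference in the mediator,
    # b = slope of the Y difference on the M difference, controlling for
    # the mean-centred M average
    mdiff, ydiff = m2 - m1, y2 - y1
    mavg = (m1 + m2) / 2.0
    mavg_c = mavg - mavg.mean()
    a = mdiff.mean()
    X = np.column_stack([np.ones_like(mdiff), mdiff, mavg_c])
    b = np.linalg.lstsq(X, ydiff, rcond=None)[0][1]
    return a * b

def bootstrap_ci(m1, m2, y1, y2, n_boot=500, seed=0):
    # Percentile bootstrap CI for the indirect effect (resample participants)
    rng = np.random.default_rng(seed)
    n = len(m1)
    est = [indirect_effect(m1[i], m2[i], y1[i], y2[i])
           for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(est, [2.5, 97.5])

# Synthetic data (assumed): the manipulation shifts M by ~1 unit,
# and M carries half of that shift into Y (true indirect effect ~0.5)
rng = np.random.default_rng(42)
n = 200
m1 = rng.normal(0.0, 1.0, n)
m2 = m1 + 1.0 + rng.normal(0.0, 0.3, n)
y1 = 0.5 * m1 + rng.normal(0.0, 0.3, n)
y2 = 0.5 * m2 + rng.normal(0.0, 0.3, n)
ab = indirect_effect(m1, m2, y1, y2)
lo, hi = bootstrap_ci(m1, m2, y1, y2)
print(ab, (lo, hi))
```

A bootstrap interval excluding zero supports mediation without the discrete component-wise hypothesis tests the article argues against.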
Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique
2018-03-01
Point-of-care blood gas test results may benefit therapeutic decision making by their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. Obtained data were compared against preset quality specifications. Interference of potential pre-analytical confounders on co-oximetry and electrolyte concentrations was studied. The analytical performance was acceptable for all parameters tested. Method comparison demonstrated good agreement with the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate and tHb; i-STAT CG8+: pO2, Na+, iCa2+ and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue on co-oximetry results was found. In contrast, significant interference of benzalkonium and hemolysis on electrolyte measurements was found, for which the user is notified by an interferent-specific flag. Identification of sample errors from pre-analytical sources, such as interferences, and automatic corrective actions, along with the analytical performance, ease of use and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
ANALYTICAL METHOD DEVELOPMENTS TO SUPPORT PARTITIONING INTERWELL TRACER TESTING
Partitioning Interwell Tracer Testing (PITT) uses alcohol tracer compounds in estimating subsurface contamination from non-polar pollutants. PITT uses the analysis of water samples for various alcohols as part of the overall measurement process. The water samples may contain many...
Code of Federal Regulations, 2013 CFR
2013-07-01
... performance evaluation or test for a semi-regenerative catalytic reforming unit catalyst regenerator vent, you... properties. Examples of analytical methods include, but are not limited to: (i) Use of material balances...
Code of Federal Regulations, 2014 CFR
2014-07-01
... performance evaluation or test for a semi-regenerative catalytic reforming unit catalyst regenerator vent, you... properties. Examples of analytical methods include, but are not limited to: (i) Use of material balances...
Code of Federal Regulations, 2012 CFR
2012-07-01
... performance evaluation or test for a semi-regenerative catalytic reforming unit catalyst regenerator vent, you... properties. Examples of analytical methods include, but are not limited to: (i) Use of material balances...
Abdel Massih, M; Planchon, V; Polet, M; Dierick, K; Mahillon, J
2016-02-01
Based on the results of 19 food microbiology proficiency testing (PT) schemes, this study aimed to assess laboratory performance, to highlight the main sources of unsatisfactory analytical results and to suggest areas of improvement. The 2009-2015 results of REQUASUD and IPH PT, involving a total of 48 laboratories, were analysed. On average, the laboratories failed to detect or enumerate foodborne pathogens in 3·0% of the tests. Thanks to a close collaboration with the PT participants, the causes of outliers could be identified in 74% of the cases. The main causes of erroneous PT results were pre-analytical (handling of the samples, timing of analysis), analytical (unsuitable methods, confusion of samples, errors in colony counting or confirmation) or post-analytical mistakes (calculation and encoding of results). PT schemes are a privileged observation post from which to highlight analytical problems that would otherwise remain unnoticed. In this perspective, this comprehensive study of PT results provides insight into the sources of systematic errors encountered during the analyses. The study draws the attention of laboratories to the main causes of analytical errors and, for educational purposes, suggests practical solutions to avoid them. The observations support the hypothesis that regular participation in PT, when followed by feedback and appropriate corrective actions, can play a key role in quality improvement and provide more confidence in laboratory testing results. © 2015 The Society for Applied Microbiology.
NASA Technical Reports Server (NTRS)
Tegart, J. R.; Aydelott, J. C.
1978-01-01
The design of surface tension propellant acquisition systems using fine-mesh screen must take into account all factors that influence the liquid pressure differentials within the system. One of those factors is spacecraft vibration. Analytical models to predict the effects of vibration have been developed. A test program to verify the analytical models and to allow a comparative evaluation of the parameters influencing the response to vibration was performed. Screen specimens were tested under conditions simulating the operation of an acquisition system, considering the effects of such parameters as screen orientation and configuration, screen support method, screen mesh, liquid flow and liquid properties. An analytical model, based on empirical coefficients, was most successful in predicting the effects of vibration.
Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van
2018-04-01
In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectroscopy. The procedure used the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and the two codes showed good agreement. Finally, the validity of the developed procedure was confirmed by a proficiency test calculating the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
For any analytical system the population mean (mu) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (sigma = square root of mu). For a preponderance of analytical methods, sigma is very small relative to mu due to...
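For Poisson-distributed counts the stated relation sigma = sqrt(mu) implies a relative standard deviation of 1/sqrt(mu), which is why sigma is negligible when the tested portion contains many entities but dominates near the detection limit. A quick numerical illustration:

```python
import math

def relative_sd(mu):
    # Poisson counts: sigma = sqrt(mu), so sigma / mu = 1 / sqrt(mu)
    return math.sqrt(mu) / mu

# Relative SD falls as the mean count rises
for mu in (4, 100, 10_000):
    print(mu, relative_sd(mu))
```

At a mean of 4 entities per tested portion the counting noise alone is 50% of the signal; at 10,000 it is 1%.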
RECENT DEVELOPMENTS IN ANALYTICAL METHODS FOR FIBROUS AMPHIBOLE IN VERMICULITE ATTIC INSULATION
The U.S. Environmental Protection Agency has developed a test method for the analysis of fibrous amphibole in vermiculite attic insulation. This method was developed to provide the Agency with monitoring tools to study the occurrence and potential for exposure to fibrous amphibo...
Engineering of a miniaturized, robotic clinical laboratory.
Nourse, Marilyn B; Engel, Kate; Anekal, Samartha G; Bailey, Jocelyn A; Bhatta, Pradeep; Bhave, Devayani P; Chandrasekaran, Shekar; Chen, Yutao; Chow, Steven; Das, Ushati; Galil, Erez; Gong, Xinwei; Gessert, Steven F; Ha, Kevin D; Hu, Ran; Hyland, Laura; Jammalamadaka, Arvind; Jayasurya, Karthik; Kemp, Timothy M; Kim, Andrew N; Lee, Lucie S; Liu, Yang Lily; Nguyen, Alphonso; O'Leary, Jared; Pangarkar, Chinmay H; Patel, Paul J; Quon, Ken; Ramachandran, Pradeep L; Rappaport, Amy R; Roy, Joy; Sapida, Jerald F; Sergeev, Nikolay V; Shee, Chandan; Shenoy, Renuka; Sivaraman, Sharada; Sosa-Padilla, Bernardo; Tran, Lorraine; Trent, Amanda; Waggoner, Thomas C; Wodziak, Dariusz; Yuan, Amy; Zhao, Peter; Young, Daniel L; Robertson, Channing R; Holmes, Elizabeth A
2018-01-01
The ability to perform laboratory testing near the patient and with smaller blood volumes would benefit patients and physicians alike. We describe our design of a miniaturized clinical laboratory system with three components: a hardware platform (i.e., the miniLab) that performs preanalytical and analytical processing steps using miniaturized sample manipulation and detection modules, an assay-configurable cartridge that provides consumable materials and assay reagents, and a server that communicates bidirectionally with the miniLab to manage assay-specific protocols and analyze, store, and report results (i.e., the virtual analyzer). The miniLab can detect analytes in blood using multiple methods, including molecular diagnostics, immunoassays, clinical chemistry, and hematology. Analytical performance results show that our qualitative Zika virus assay has a limit of detection of 55 genomic copies/ml. For our anti-herpes simplex virus type 2 immunoglobulin G, lipid panel, and lymphocyte subset panel assays, the miniLab has low imprecision, and method comparison results agree well with those from United States Food and Drug Administration-cleared devices. With its small footprint and versatility, the miniLab has the potential to provide testing of a range of analytes in decentralized locations.
Feigenbaum, A; Scholler, D; Bouquant, J; Brigot, G; Ferrier, D; Franzl, R; Lillemarktt, L; Riquet, A M; Petersen, J H; van Lierop, B; Yagoubi, N
2002-02-01
The results of a research project (EU AIR Research Programme CT94-1025) aimed at introducing control of migration into good manufacturing practice and into enforcement work are reported. Representative polymer classes were defined on the basis of chemical structure, technological function, migration behaviour and market share. These classes were characterized by analytical methods. Analytical techniques were investigated for the identification of potential migrants. High-temperature gas chromatography was shown to be a powerful method, and 1H-nuclear magnetic resonance provided a convenient fingerprint of plastic materials. Volatile compounds were characterized by headspace techniques, where it proved essential to differentiate volatile compounds desorbed from the sample from those generated during the thermal desorption itself. For metal trace analysis, microwave mineralization followed by atomic absorption was employed. These different techniques were introduced into a systematic testing scheme that is envisaged as being suitable both for industrial control and for enforcement laboratories. Guidelines will be proposed in the second part of this paper.
Evaluation on Bending Properties of Biomaterial GUM Metal Meshed Plates for Bone Graft Applications
NASA Astrophysics Data System (ADS)
Suzuki, Hiromichi; He, Jianmei
2017-11-01
There are three bone graft methods for bone defects caused by diseases such as cancer and accident injuries: autogenous bone grafts, allografts, and artificial bone grafts. In this study, meshed GUM Metal plates with low elasticity, high strength and high biocompatibility are introduced to solve the excess stiffness and weight problems of currently used metal implants. Basic mesh shapes were designed and applied to GUM Metal plates using 3D CAD modeling tools. The bending properties of prototype meshed GUM Metal plates were evaluated experimentally and analytically. Meshed plate specimens of 180°, 120° and 60° axis-symmetrical types were fabricated for 3-point bending tests. The pseudo bending elastic moduli of the meshed plate specimens obtained from the 3-point bending tests ranged from 4.22 GPa to 16.07 GPa, within the elasticity range of natural cortical bone (2.0 GPa to 30.0 GPa). The analytical approach was validated by comparing the analytical and experimental results for the bending properties of the meshed plates.
NASA Technical Reports Server (NTRS)
Smith, C. B.
1982-01-01
The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper-phase shift limit of 5 pi/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.
Low thermal flux glass-fiber tubing for cryogenic service
NASA Technical Reports Server (NTRS)
Hall, C. A.; Spond, D. E.
1977-01-01
This paper describes analytical techniques, fabrication development, and test results for composite tubing that has many applications in aerospace and commercial cryogenic installations. Metal liner fabrication is discussed in detail with attention given to resistance-welded liners, fusion-welded liners, chem-milled tubing liners, joining tube liners and end fittings, heat treatment and leak checks. Composite overwrapping, a second method of tubing fabrication, is also discussed. Test programs and analytical correlation are considered along with composite tubing advantages such as minimum weight, thermal efficiency and safety and reliability.
Low level vapor verification of monomethyl hydrazine
NASA Technical Reports Server (NTRS)
Mehta, Narinder
1990-01-01
The vapor scrubbing system and the coulometric test procedure for the low level vapor verification of monomethyl hydrazine (MMH) are evaluated. Experimental data on precision, efficiency of the scrubbing liquid, instrument response, detection and reliable quantitation limits, stability of the vapor scrubbed solution, and interference were obtained to assess the applicability of the method for the low ppb level detection of the analyte vapor in air. The results indicated that the analyte vapor scrubbing system and the coulometric test procedure can be utilized for the quantitative detection of low ppb level vapor of MMH in air.
A research program to reduce interior noise in general aviation airplanes. [test methods and results]
NASA Technical Reports Server (NTRS)
Roskam, J.; Muirhead, V. U.; Smith, H. W.; Peschier, T. D.; Durenberger, D.; Vandam, K.; Shu, T. C.
1977-01-01
Analytical and semi-empirical methods for determining the transmission of sound through isolated panels and predicting panel transmission loss are described. Test results presented include the influence of plate stiffness and mass and the effects of pressurization and vibration damping materials on sound transmission characteristics. Measured and predicted results are presented in tables and graphs.
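Semi-empirical panel transmission-loss predictions of this kind are commonly anchored by the field-incidence mass law. The sketch below illustrates that standard approximation only, not the authors' actual method; the panel surface density and frequency are made-up values.

```python
# Field-incidence mass law for a limp panel (a common semi-empirical
# approximation): TL ~ 20*log10(m*f) - 47 dB, with m in kg/m^2, f in Hz.
from math import log10

def mass_law_tl(surface_density_kg_m2, frequency_hz):
    """Approximate field-incidence transmission loss in dB."""
    return 20 * log10(surface_density_kg_m2 * frequency_hz) - 47

# Illustrative case: a 10 kg/m^2 panel at 1 kHz
tl = mass_law_tl(10.0, 1000.0)
```

The mass-law trend (about 6 dB more transmission loss per doubling of mass or frequency) is the baseline against which measured effects of stiffness, pressurization, and damping are usually compared.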
Lightning Effects in the Payload Changeout Room
NASA Technical Reports Server (NTRS)
Thomas, Garland L.; Fisher, Franklin A.; Collier, Richard S.; Medelius, Pedro J.
1997-01-01
Analytical and empirical studies have been performed to provide better understanding of the electromagnetic environment inside the Payload Changeout Room and Orbiter payload bay resulting from lightning strikes to the launch pad lightning protection system. The analytical studies consisted of physical and mathematical modeling of the pad structure and the Payload Changeout Room. Empirical testing was performed using a lightning simulator to simulate controlled (8 kA) lightning strikes to the catenary wire lightning protection system. In addition to the analyses and testing listed above, an analysis of the configuration with the vehicle present was conducted, in lieu of testing, by the Finite Difference, Time Domain method.
Adulterants in Urine Drug Testing.
Fu, S
Urine drug testing plays an important role in monitoring licit and illicit drug use for both medico-legal and clinical purposes. One of the major challenges of urine drug testing is adulteration, a practice involving manipulation of a urine specimen with chemical adulterants to produce a false negative test result. This problem is compounded by the number of easily obtained chemicals that can effectively adulterate a urine specimen. Common adulterants include some household chemicals such as hypochlorite bleach, laundry detergent, table salt, and toilet bowl cleaner and many commercial products such as UrinAid (glutaraldehyde), Stealth® (containing peroxidase and peroxide), Urine Luck (pyridinium chlorochromate, PCC), and Klear® (potassium nitrite) available through the Internet. These adulterants can invalidate a screening test result, a confirmatory test result, or both. To counteract urine adulteration, drug testing laboratories have developed a number of analytical methods to detect adulterants in a urine specimen. While these methods are useful in detecting urine adulteration when such activities are suspected, they do not reveal what types of drugs are being concealed. This is particularly the case when oxidizing urine adulterants are involved as these oxidants are capable of destroying drugs and their metabolites in urine, rendering the drug analytes undetectable by any testing technology. One promising approach to address this current limitation has been the use of unique oxidation products formed from reaction of drug analytes with oxidizing adulterants as markers for monitoring drug misuse and urine adulteration. This novel approach will ultimately improve the effectiveness of the current urine drug testing programs. © 2016 Elsevier Inc. All rights reserved.
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4), ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected using a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. Using a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Application of multiplex arrays for cytokine and chemokine profiling of bile.
Kemp, Troy J; Castro, Felipe A; Gao, Yu-Tang; Hildesheim, Allan; Nogueira, Leticia; Wang, Bing-Sheng; Sun, Lu; Shelton, Gloriana; Pfeiffer, Ruth M; Hsing, Ann W; Pinto, Ligia A; Koshiol, Jill
2015-05-01
Gallbladder disease is highly related to inflammation, but the inflammatory processes are not well understood. Bile provides a direct substrate in assessing the local inflammatory response that develops in the gallbladder. To assess the reproducibility of measuring inflammatory markers in bile, we designed a methods study of 69 multiplexed immune-related markers measured in bile obtained from gallstone patients. To evaluate assay performance, a total of 18 bile samples were tested twice within the same plate for each analyte, and the 18 bile samples were tested on two different days for each analyte. We used the following performance parameters: detectability, coefficient of variation (CV), intraclass correlation coefficient (ICC), and percent agreement (concordance among replicate measures above and below detection limit). Furthermore, we examined the association of analyte levels with gallstone characteristics such as type, numbers, and size. All but 3 analytes (Stem Cell Factor, SCF; Thrombopoietin, TPO; sIL-1RI) were detectable in bile. 52 of 69 (75.4%) analytes had detectable levels for at least 50% of the subjects tested. The within-plate CVs were ⩽25% for 53 of 66 (80.3%) detectable analytes, and across-plate CVs were ⩽25% for 32 of 66 (48.5%) detectable analytes. Moreover, 64 of 66 (97.0%) analytes had ICC values of at least 0.8. Lastly, the percent agreement was high between replicates for all of the analytes (median; within plate, 97.2%; across plate, 97.2%). In exploratory analyses, we assessed analyte levels by gallstone characteristics and found that levels for several analytes decreased with increasing size of the largest gallstone per patient. Our data suggest that multiplex assays can be used to reliably measure cytokines and chemokines in bile. In addition, gallstone size was inversely related to the levels of select analytes, which may aid in identifying critical pathways and mechanisms associated with the pathogenesis of gallbladder diseases. 
Copyright © 2015 Elsevier Ltd. All rights reserved.
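The assay-performance parameters above (CV and percent agreement between replicates) can be sketched as follows. This is an illustration of the standard definitions, not the study's code, and the replicate values are invented.

```python
# Hedged sketch of two replicate-performance metrics: percent CV and
# percent agreement relative to a detection limit. Data are illustrative.

def coefficient_of_variation(replicates):
    """Percent CV: 100 * sample SD / mean over replicate measurements."""
    n = len(replicates)
    mean = sum(replicates) / n
    var = sum((x - mean) ** 2 for x in replicates) / (n - 1)
    return 100.0 * var ** 0.5 / mean

def percent_agreement(rep1, rep2, detection_limit):
    """Share of sample pairs where both replicates fall on the same
    side of the assay detection limit."""
    agree = sum((a >= detection_limit) == (b >= detection_limit)
                for a, b in zip(rep1, rep2))
    return 100.0 * agree / len(rep1)

# Duplicate measurements (pg/ml) of one hypothetical analyte, 5 samples
run1 = [12.0, 48.5, 3.1, 220.0, 75.2]
run2 = [13.1, 45.9, 2.8, 205.0, 80.0]

cv_first_sample = coefficient_of_variation([run1[0], run2[0]])
agreement = percent_agreement(run1, run2, detection_limit=5.0)
```

In the study's terms, a within-plate CV of 25% or less and agreement near 100% would count as acceptable replicate performance.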
This guidance deals with the self-qualification of analytical test methods at a testing facility for measuring Reid Vapor Pressure (RVP) of gasoline to meet precision requirements codified in regulations.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2015-01-01
Within the mosaic display of international anti-doping efforts, analytical strategies based on up-to-date instrumentation as well as the most recent information about the physiology, pharmacology, metabolism, etc., of prohibited substances and methods of doping are indispensable. The continuous emergence of new chemical entities and the identification of arguably beneficial effects of established or even obsolete drugs on endurance, strength, and regeneration necessitate frequent and adequate adaptations of sports drug testing procedures. These largely rely on exploiting new technologies, extending the substance coverage of existing test protocols, and generating new insights into the metabolism, distribution, and elimination of compounds prohibited by the World Anti-Doping Agency (WADA). With reference to the content of the 2014 Prohibited List, literature concerning human sports drug testing published between October 2013 and September 2014 is summarized and reviewed in this annual banned-substance review, with particular emphasis on analytical approaches and their contribution to enhanced doping controls. Copyright © 2014 John Wiley & Sons, Ltd.
Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions
NASA Astrophysics Data System (ADS)
Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus
2017-10-01
We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
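Of the regression models the site offers, simple Deming regression has a compact closed form. The sketch below shows that textbook formula (error-variance ratio lambda = 1), not the website's own R implementation; the measurement values are invented.

```python
# Hedged sketch of simple Deming regression for method comparison.
# Closed-form slope for error-variance ratio lam (see Linnet,
# Clin Chem 1993); lam = 1 is orthogonal regression.

def deming(x, y, lam=1.0):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = ((syy - lam * sxx
              + ((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2) ** 0.5)
             / (2 * sxy))
    intercept = my - slope * mx
    return slope, intercept

# Two hypothetical methods measuring the same specimens
method_a = [1.0, 2.0, 3.0, 4.0, 5.0]
method_b = [1.1, 2.0, 3.2, 3.9, 5.1]
slope, intercept = deming(method_a, method_b)
```

A slope near 1 and an intercept near 0 indicate the absence of proportional and constant bias, respectively, which is what such a comparison report summarizes.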
Novak, Ana; Gutiérrez-Zamora, Mercè; Domenech, Lluís; Suñé-Negre, Josep M; Miñarro, Montserrat; García-Montoya, Encarna; Llop, Josep M; Ticó, Josep R; Pérez-Lozano, Pilar
2018-02-01
A simple analytical method for simultaneous determination of phytosterols, cholesterol and squalene in lipid emulsions was developed owing to increased interest in their clinical effects. Method development was based on commonly used stationary phases (C18, C8 and phenyl) and mobile phases (mixtures of acetonitrile, methanol and water) under isocratic conditions. Differences in stationary phases resulted in peak overlapping or coelution of different peaks. The best separation of all analyzed compounds was achieved on Zorbax Eclipse XDB C8 (150 × 4.6 mm, 5 μm; Agilent) with ACN-H2O-MeOH, 80:19.5:0.5 (v/v/v). In order to achieve a shorter time of analysis, the method was further optimized and a gradient separation was established. The optimized analytical method was validated and tested for routine use in lipid emulsion analyses. Copyright © 2017 John Wiley & Sons, Ltd.
An isotope-dilution standard GC/MS/MS method for steroid hormones in water
Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.
2013-01-01
An isotope-dilution quantification method was developed for 20 natural and synthetic steroid hormones and additional compounds in filtered and unfiltered water. Deuterium- or carbon-13-labeled isotope-dilution standards (IDSs) are added to the water sample, which is passed through an octadecylsilyl solid-phase extraction (SPE) disk. Following extract cleanup using Florisil SPE, method compounds are converted to trimethylsilyl derivatives and analyzed by gas chromatography with tandem mass spectrometry. Validation matrices included reagent water, wastewater-affected surface water, and primary (no biological treatment) and secondary wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100%, with an overall relative standard deviation of 28%. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples analyzed in 2009–2010 ranged from 84–104%, with relative standard deviations of 6–36%. Detection levels estimated using ASTM International's D6091–07 procedure range from 0.4 to 4 ng/L for 17 analytes. Higher censoring levels of 100 ng/L for bisphenol A and 200 ng/L for cholesterol and 3-beta-coprostanol are used to prevent bias and false positives associated with the presence of these analytes in blanks. Absolute method recoveries of the IDSs provide sample-specific performance information and guide data reporting. Careful selection of labeled compounds for use as IDSs is important because both inexact IDS-analyte matches and deuterium label loss affect an IDS's ability to emulate analyte performance. Six IDS compounds initially tested and applied in this method exhibited deuterium loss and are not used in the final method.
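The isotope-dilution principle underlying the method can be sketched as follows: because the labeled standard is chemically nearly identical to the analyte, losses during SPE and cleanup affect both in the same proportion and cancel in the area ratio. This is an illustration of the general principle, not the study's calibration; all numbers and the unit response factor are invented.

```python
# Hedged sketch of isotope-dilution quantification: analyte amount is
# computed from the analyte/IDS peak-area ratio and the known spiked
# amount of the labeled standard. Values are illustrative only.

def isotope_dilution_conc(area_analyte, area_ids, ids_amount_ng,
                          response_factor, sample_volume_l):
    """Analyte concentration (ng/L) from the analyte/IDS area ratio.
    response_factor corrects for unequal instrument response."""
    analyte_ng = (area_analyte / area_ids) * ids_amount_ng / response_factor
    return analyte_ng / sample_volume_l

# 100 ng of labeled standard spiked into a hypothetical 1-L sample
conc = isotope_dilution_conc(area_analyte=5000, area_ids=20000,
                             ids_amount_ng=100.0, response_factor=1.0,
                             sample_volume_l=1.0)
```

This cancellation of losses is also why deuterium label loss matters: if the IDS sheds label, it no longer tracks the analyte and the ratio is biased.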
Some important considerations in the development of stress corrosion cracking test methods.
NASA Technical Reports Server (NTRS)
Wei, R. P.; Novak, S. R.; Williams, D. P.
1972-01-01
Some of the precautions involved in the development of fracture-mechanics-based test methods for studying stress corrosion cracking are discussed. Following a review of pertinent analytical fracture mechanics considerations and of basic test methods, the implications for stress corrosion cracking studies of the crack-growth kinetics that determine time-to-failure and life are examined. It is shown that the basic assumptions of the linear-elastic fracture mechanics analyses must be clearly recognized and satisfied in experimentation, and that the effects of incubation and nonsteady-state crack growth must also be properly taken into account in determining the crack growth kinetics, if valid data are to be obtained from fracture-mechanics-based test methods.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-20
... of animals in regulatory testing is anticipated to occur in parallel with an increased ability to... phylogenetically lower animal species (e.g., fish, worms), as well as high throughput whole genome analytical... result in test methods for toxicity testing that are more scientifically and economically efficient and...
Borkhoff, Cornelia M; Johnston, Patrick R; Stephens, Derek; Atenafu, Eshetu
2015-07-01
Aligning the method used to estimate sample size with the planned analytic method ensures the sample size needed to achieve the planned power. When using generalized estimating equations (GEE) to analyze a paired binary primary outcome with no covariates, many use an exact McNemar test to calculate sample size. We reviewed the approaches to sample size estimation for paired binary data and compared the sample size estimates on the same numerical examples. We used the hypothesized sample proportions for the 2 × 2 table to calculate the correlation between the marginal proportions to estimate sample size based on GEE. We solved the interior cell proportions based on the correlation and the marginal proportions to estimate sample size based on the exact McNemar, asymptotic unconditional McNemar, and asymptotic conditional McNemar tests. The asymptotic unconditional McNemar test is a good approximation of the GEE method of Pan. The exact McNemar test is too conservative and yields unnecessarily large sample size estimates compared with all other methods. In the special case of a 2 × 2 table, even when a GEE approach to binary logistic regression is the planned analytic method, the asymptotic unconditional McNemar test can be used to estimate sample size. We do not recommend using an exact McNemar test. Copyright © 2015 Elsevier Inc. All rights reserved.
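The asymptotic McNemar sample-size calculation discussed above has a simple closed form in terms of the discordant-cell proportions. The sketch below uses the standard formula (Connor, Biometrics 1987); the discordant proportions are illustrative, not taken from the paper's examples.

```python
# Hedged sketch of the asymptotic (unconditional) McNemar sample-size
# formula for a paired binary outcome. p10 and p01 are the hypothesized
# discordant-cell proportions of the 2x2 table.
from math import sqrt, ceil

def mcnemar_sample_size(p10, p01, alpha=0.05, power=0.80):
    """Number of pairs needed to detect p10 != p01."""
    # Hard-coded normal quantiles for the common alpha/power, to keep
    # the sketch dependency-free (scipy.stats.norm.ppf is the general way)
    z = {0.025: 1.959964, 0.80: 0.841621}
    za = z[alpha / 2]
    zb = z[power]
    psi = p10 + p01          # total discordant proportion
    delta = abs(p10 - p01)   # effect size
    n = (za * sqrt(psi) + zb * sqrt(psi - delta ** 2)) ** 2 / delta ** 2
    return ceil(n)

# Illustrative design: 25% discordant one way, 10% the other
n_pairs = mcnemar_sample_size(p10=0.25, p01=0.10)
```

Note that only the discordant cells enter the formula, which is why solving for the interior cell proportions from the marginals and correlation, as the authors describe, is the necessary bridge from a GEE-style specification to a McNemar-style one.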
Static penetration resistance of soils
NASA Technical Reports Server (NTRS)
Durgunoglu, H. T.; Mitchell, J. K.
1973-01-01
Model test results were used to define the failure mechanism associated with the static penetration resistance of cohesionless and low-cohesion soils. Knowledge of this mechanism has permitted the development of a new analytical method for calculating the ultimate penetration resistance which explicitly accounts for penetrometer base apex angle and roughness, soil friction angle, and the ratio of penetration depth to base width. Curves relating the bearing capacity factors to the soil friction angle are presented for failure in general shear. Strength parameters and penetrometer interaction properties of a fine sand were determined and used as the basis for prediction of the penetration resistance encountered by wedge, cone, and flat-ended penetrometers of different surface roughness using the proposed analytical method. Because of the close agreement between predicted values and values measured in laboratory tests, it appears possible to deduce in-situ soil strength parameters and their variation with depth from the results of static penetration tests.
Chernyak, Dimitri A; Campbell, Charles E
2003-11-01
Now that excimer laser systems can be programmed to correct complex aberrations of the eye on the basis of wave-front measurements, a method is needed to test the accuracy of the system from measurement through treatment. A closed-loop test method was developed to ensure that treatment plans generated by a wavefront measuring system were accurately transferred to and executed by the excimer laser. A surface was analytically defined, and a Shack-Hartmann-based wave-front system was used to formulate a treatment plan, which was downloaded to an excimer laser system. A plastic lens was ablated by the laser and then returned to the wave-front device, where it was measured and compared with the analytically defined wave-front surface. The two surfaces agreed up to 6th-order Zernike terms, validating the accuracy of the system.
New Material for Surface-Enhanced Raman Spectroscopy
NASA Technical Reports Server (NTRS)
Farquharson, Stuart; Nelson, Chad; Lee, Yuan
2004-01-01
A chemical method of synthesis and application of coating materials that are especially suitable for surface-enhanced Raman spectroscopy (SERS) has been developed. The purpose of this development is to facilitate the utilization of the inherently high sensitivity of SERS to detect chemicals of interest (analytes) in trace amounts, without need for lengthy sample preparation. Up to now, the use of SERS has not become routine because the methods available have not been able to reproduce sampling conditions and provide quantitative measurements. In contrast, the coating materials of the present method enable analysis with minimum preparation of samples, and SERS measurements made using these materials are reproducible and reversible. Moreover, unlike in methods investigated in prior efforts to implement SERS, sampling is not restricted to such specific environments as electrolytes or specific solvents. The coating materials of this method are porous glasses, formed in sol-gel processes, that contain small particles of gold or silver metal. Materials of this type can be applied to the sample-contact surfaces of a variety of sampling and sensing devices, including glass slides, glass vials, fiber-optic probes, and glass tubes. Glass vials with their insides coated according to this method are particularly convenient for SERS to detect trace chemicals in solutions: One simply puts a sample solution containing the analyte(s) into a vial, then puts the vial into a Raman spectrometer for analysis. The chemical ingredients and the physical conditions of the sol-gel process have been selected so that the porous glass formed incorporates particles of the desired metal with size(s) to match the wavelength(s) of the SERS excitation laser in order to optimize the generation of surface plasmons. 
The ingredients and processing conditions have further been chosen to tailor the porosity and polarity of the glass to optimize the sample flow and the interaction between the analyte(s) and the plasmon field that generates Raman photons. The porous silica network of a sol-gel glass creates a unique environment for stabilizing SERS-active metal particles. Relative to other material structures that could be considered for SERS, the porous silica network offers higher specific surface area and thus greater interaction between analyte molecules and metal particles. Efforts to perform SERS measurements with the help of sampling devices coated by this method have been successful. In tests, numerous organic and inorganic chemicals were analyzed in several solvents, including water. The results of the tests indicate that the SERS measurements were reproducible within 10 percent and linear over five orders of magnitude. One measure of the limits of detectability of chemicals in these tests was found to be a concentration of 300 parts per billion. Further development may eventually make it possible to realize the full potential sensitivity of SERS for detecting some analytes in quantities as small as a single molecule.
Zhang, Dabing; Guo, Jinchao
2011-07-01
As the worldwide commercialization of genetically modified organisms (GMOs) increases and consumers grow concerned about the safety of GMOs, many countries and regions are issuing labeling regulations for GMOs and their products. Analytical methods and their standardization for GM ingredients in foods and feed are essential for the implementation of labeling regulations. To date, GMO testing methods are mainly based on the inserted DNA sequences and newly produced proteins in GMOs. This paper presents an overview of GMO testing methods as well as their standardization. © 2011 Institute of Botany, Chinese Academy of Sciences.
Study designs appropriate for the workplace.
Hogue, C J
1986-01-01
Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta-analysis can be utilized to narrow the field of potential causal hypotheses. Passive and active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
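The PSO update rule at the core of such a calibration combines an inertia term with pulls toward each particle's personal best and the swarm's global best. The toy sketch below shows that generic loop on a stand-in quadratic objective; the parameter values are common defaults, not the SAG calibration settings.

```python
# Hedged toy sketch of Particle Swarm Optimization: particles move with
# inertia (w) plus cognitive (c1) and social (c2) pulls. The objective
# here is a stand-in bowl, not a galaxy-formation likelihood.
import random

def pso(objective, bounds, n_particles=20, n_iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Minimize a simple quadratic with its optimum at (3, -2)
best, best_val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
                     bounds=[(-10, 10), (-10, 10)])
```

The cost advantage over MCMC reported above comes from this structure: every function evaluation directly informs the swarm's search direction, rather than being accepted or rejected as a posterior sample.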
Bozzolino, Cristina; Leporati, Marta; Gani, Federica; Ferrero, Cinzia; Vincenti, Marco
2018-02-20
A fast analytical method for the simultaneous detection of 24 β2-agonists in human urine was developed and validated. The method covers the therapeutic drugs most commonly administered, but also potentially abused β2-agonists. The procedure is based on enzymatic deconjugation with β-glucuronidase followed by SPE clean-up using mixed-phase cartridges with both ion-exchange and lipophilic properties. Instrumental analysis conducted by UHPLC-MS/MS allowed high peak resolution and rapid chromatographic separation, with reduced time and costs. The method was fully validated according to ISO 17025:2005 principles. The following parameters were determined for each analyte: specificity, selectivity, linearity, limit of detection, limit of quantification, precision, accuracy, matrix effect, recovery and carry-over. The method was tested on real samples obtained from patients subjected to clinical treatment under chronic or acute therapy with either formoterol, indacaterol, salbutamol, or salmeterol. The drugs were administered using pressurized metered dose inhalers. All β2-agonists administered to the patients were detected in the real samples. The method proved adequate to accurately measure the concentration of these analytes in the real samples. The observed analytical data are discussed with reference to the administered dose and the duration of the therapy. Copyright © 2017 Elsevier B.V. All rights reserved.
Scotter, M J; Castle, L; Roberts, D P T; Macarthur, R; Brereton, P A; Hasnip, S K; Katz, N
2009-05-01
A method for the determination of cyclamate has been developed and single-laboratory validated for a range of foodstuffs including carbonated and fruit-juice drinks, fruit preserves, spreads, and dairy desserts. The method uses the peroxide oxidation of cyclamate to cyclohexylamine followed by derivatization with trinitrobenzenesulfonic acid and analysis by a modified reversed-phase high-performance liquid chromatography method with ultraviolet detection (HPLC-UV). Cycloheptylamine is used as an internal standard. The limits of detection were in the range 1-20 mg kg(-1) and the analysis was linear up to 1300 mg kg(-1) cyclamic acid in foods and up to 67 mg l(-1) in beverages. Analytical recovery was between 82% and 123%, and results were recovery corrected. Precision was within experimentally predicted levels for all of the matrices tested, and HorRat values for the combined standard uncertainty associated with the measurement of cyclamate were between 0.4 (water-based drinks) and 1.7 (spreads). The method was used successfully to test three soft drink samples for homogeneity before analytical performance assessment. The method is recommended for use in monitoring compliance and for formal testing by collaborative trial.
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
40 CFR 63.90 - Program overview.
Code of Federal Regulations, 2012 CFR
2012-07-01
... calibration gases or test cells; (4) Use of an analytical technology that differs from that specified by a... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as...
40 CFR 63.90 - Program overview.
Code of Federal Regulations, 2013 CFR
2013-07-01
... calibration gases or test cells; (4) Use of an analytical technology that differs from that specified by a... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as...
40 CFR 63.90 - Program overview.
Code of Federal Regulations, 2014 CFR
2014-07-01
... calibration gases or test cells; (4) Use of an analytical technology that differs from that specified by a... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as...
Determination of noscapine, hexylresorcinol and anethole in cough lozenges by liquid chromatography.
Lucangioli, S; Fernández Otero, G; Rodríguez, V; Carducci, C N
1996-06-01
A liquid chromatographic method was developed for the simultaneous separation and determination of noscapine hydrochloride, hexylresorcinol and anethole in cough lozenges. Analysis was performed on a phenyl column with phosphate buffer-acetonitrile as mobile phase, and the separated components were detected at 282 nm. Recoveries obtained for the analytes were 94.6% for noscapine hydrochloride, 99.1% for hexylresorcinol and 96.3% for anethole. The values of the relative standard deviation were 0.8% for noscapine hydrochloride, 1.5% for hexylresorcinol and 1.1% for anethole. The analytical method was validated and a system suitability test was accomplished for the chromatographic method.
Field validation of the dnph method for aldehydes and ketones. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Workman, G.S.; Steger, J.L.
1996-04-01
A stationary source emission test method for selected aldehydes and ketones has been validated. The method employs a sampling train with impingers containing 2,4-dinitrophenylhydrazine (DNPH) to derivatize the analytes. The resulting hydrazones are recovered and analyzed by high performance liquid chromatography. Nine analytes were studied; the method was validated for formaldehyde, acetaldehyde, propionaldehyde, acetophenone and isophorone. Acrolein, methyl ethyl ketone, methyl isobutyl ketone, and quinone did not meet the validation criteria. The study employed the validation techniques described in EPA Method 301, which uses train spiking to determine bias and collocated sampling trains to determine precision. The studies were carried out at a plywood veneer dryer and a polyester manufacturing plant.
NASA Technical Reports Server (NTRS)
Baker, L. R., Jr.; Tevepaugh, J. A.; Penny, M. M.
1973-01-01
Variations of nozzle performance characteristics of the model nozzles used in the Space Shuttle IA12B, IA12C, IA36 power-on launch vehicle test series are shown by comparison between experimental and analytical data. The experimental data are nozzle wall pressure distributions and schlieren photographs of the exhaust plume shapes. The exhaust plume shapes were simulated experimentally with cold flow while the analytical data were generated using a method-of-characteristics solution. Exhaust plume boundaries, boundary shockwave locations and nozzle wall pressure measurements calculated analytically agree favorably with the experimental data from the IA12C and IA36 test series. For the IA12B test series condensation was suspected in the exhaust plumes at the higher pressure ratios required to simulate the prototype plume shapes. Nozzle calibration tests for the series were conducted at pressure ratios where condensation either did not occur or if present did not produce a noticeable effect on the plume shapes. However, at the pressure ratios required in the power-on launch vehicle tests condensation probably occurs and could significantly affect the exhaust plume shapes.
A multi-analyte serum test for the detection of non-small cell lung cancer
Farlow, E C; Vercillo, M S; Coon, J S; Basu, S; Kim, A W; Faber, L P; Warren, W H; Bonomi, P; Liptay, M J; Borgia, J A
2010-01-01
Background: In this study, we appraised a wide assortment of biomarkers previously shown to have diagnostic or prognostic value for non-small cell lung cancer (NSCLC) with the intent of establishing a multi-analyte serum test capable of identifying patients with lung cancer. Methods: Circulating levels of 47 biomarkers were evaluated against patient cohorts consisting of 90 NSCLC and 43 non-cancer controls using commercial immunoassays. Multivariate statistical methods were used on all biomarkers achieving statistical relevance to define an optimised panel of diagnostic biomarkers for NSCLC. The resulting biomarkers were fashioned into a classification algorithm and validated against serum from a second patient cohort. Results: A total of 14 analytes achieved statistical relevance upon evaluation. Multivariate statistical methods then identified a panel of six biomarkers (tumour necrosis factor-α, CYFRA 21-1, interleukin-1ra, matrix metalloproteinase-2, monocyte chemotactic protein-1 and sE-selectin) as being the most efficacious for diagnosing early stage NSCLC. When tested against a second patient cohort, the panel successfully classified 75 of 88 patients. Conclusions: Here, we report the development of a serum algorithm with high specificity for classifying patients with NSCLC against cohorts of various 'high-risk' individuals. A high rate of false positives was observed within the cohort in which patients had non-neoplastic lung nodules, possibly as a consequence of the inflammatory nature of these conditions. PMID:20859284
Lim, Wei Yin; Goh, Boon Tong; Khor, Sook Mei
2017-08-15
Clinicians, working in the health-care diagnostic systems of developing countries, currently face the challenges of rising costs, increased number of patient visits, and limited resources. A significant trend is using low-cost substrates to develop microfluidic devices for diagnostic purposes. Various fabrication techniques, materials, and detection methods have been explored to develop these devices. Microfluidic paper-based analytical devices (μPADs) have gained attention for sensing multiplex analytes, confirming diagnostic test results, rapid sample analysis, and reducing the volume of samples and analytical reagents. μPADs, which can provide accurate and reliable direct measurement without sample pretreatment, can reduce patient medical burden and yield rapid test results, aiding physicians in choosing appropriate treatment. The objectives of this review are to provide an overview of the strategies used for developing paper-based sensors with enhanced analytical performances and to discuss the current challenges, limitations, advantages, disadvantages, and future prospects of paper-based microfluidic platforms in clinical diagnostics. μPADs, with validated and justified analytical performances, can potentially improve the quality of life by providing inexpensive, rapid, portable, biodegradable, and reliable diagnostics. Copyright © 2017 Elsevier B.V. All rights reserved.
Complex Langevin simulation of a random matrix model at nonzero chemical potential
NASA Astrophysics Data System (ADS)
Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.
2018-03-01
In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Consistent approach to describing aircraft HIRF protection
NASA Technical Reports Server (NTRS)
Rimbey, P. R.; Walen, D. B.
1995-01-01
The high intensity radiated fields (HIRF) certification process as currently implemented comprises an inconsistent combination of factors that tend to emphasize worst-case scenarios in assessing commercial airplane certification requirements. By examining these factors, which include the process definition, the external HIRF environment, the aircraft coupling and corresponding internal fields, and methods of measuring equipment susceptibilities, activities leading to an approach for appraising airplane vulnerability to HIRF are proposed. This approach utilizes technically based criteria to evaluate the nature of the threat, including the probability of encountering the external HIRF environment. No single test or analytic method comprehensively addresses the full HIRF threat frequency spectrum. Additional tools such as statistical methods must be adopted to arrive at more realistic requirements that reflect commercial aircraft vulnerability to the HIRF threat. Test and analytic data are provided to support the conclusions of this report. This work was performed under NASA contract NAS1-19360, Task 52.
Cembrowski, G S; Hackney, J R; Carey, N
1993-04-01
The Clinical Laboratory Improvement Act of 1988 (CLIA 88) has dramatically changed proficiency testing (PT) practices by mandating (1) satisfactory PT for certain analytes as a condition of laboratory operation, (2) fixed PT limits for many of these "regulated" analytes, and (3) an increased number of PT specimens (n = 5) for each testing cycle. For many of these analytes, the fixed limits are much broader than the previously employed Standard Deviation Index (SDI) criteria. Paradoxically, there may be less incentive to identify and evaluate analytically significant outliers to improve the analytical process. Previously described "control rules" to evaluate these PT results are unworkable, as they consider only two or three results. We used Monte Carlo simulations of Kodak Ektachem analyzers participating in PT to determine optimal control rules for the identification of PT results that are inconsistent with those from other laboratories using the same methods. The analysis of three representative analytes, potassium, creatine kinase, and iron, was simulated with varying intrainstrument and interinstrument standard deviations (si and sg, respectively) obtained from the College of American Pathologists (Northfield, Ill) Quality Assurance Services data and Proficiency Test data, respectively. Analytical errors were simulated in each of the analytes and evaluated in terms of multiples of the interlaboratory SDI. Simple control rules for detecting systematic and random error were evaluated with power function graphs, i.e., graphs of the probability of error detection vs the magnitude of error. Based on the simulation results, we recommend screening all analytes for the occurrence of two or more observations exceeding the same +/- 1 SDI limit. For any analyte satisfying this condition, the mean of the observations should be calculated. For analytes with sg/si ratios between 1.0 and 1.5, a significant systematic error is signaled by the mean exceeding 1.0 SDI.
Significant random error is signaled by one observation exceeding the +/- 3-SDI limit or the range of the observations exceeding 4 SDIs. For analytes with higher sg/si, significant systematic or random error is signaled by violation of the screening rule (having at least two observations exceeding the same +/- 1 SDI limit). Random error can also be signaled by one observation exceeding the +/- 1.5-SDI limit or the range of the observations exceeding 3 SDIs. We present a practical approach to the workup of apparent PT errors.
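The screening and error-signaling rules described above can be sketched as a small function over the five PT results expressed in SDI units. The thresholds follow the text; the function name, argument names, and return convention are illustrative assumptions:

```python
def screen_pt_results(sdi, sg_si_ratio):
    """Flag apparent PT errors for one analyte from five results in SDI units.

    A sketch of the recommended rules: screen for >= 2 observations beyond
    the same +/- 1 SDI limit, then signal systematic/random error depending
    on the analyte's sg/si ratio.
    """
    flags = []
    n_high = sum(1 for x in sdi if x > 1.0)
    n_low = sum(1 for x in sdi if x < -1.0)
    if n_high < 2 and n_low < 2:
        return flags  # screening rule not triggered
    mean = sum(sdi) / len(sdi)
    spread = max(sdi) - min(sdi)
    if 1.0 <= sg_si_ratio <= 1.5:
        if abs(mean) > 1.0:
            flags.append("systematic")
        if any(abs(x) > 3.0 for x in sdi) or spread > 4.0:
            flags.append("random")
    else:
        # Higher sg/si: the screening violation itself signals an error.
        flags.append("systematic-or-random")
        if any(abs(x) > 1.5 for x in sdi) or spread > 3.0:
            flags.append("random")
    return flags
```

For example, five results of 1.5, 1.4, 0.8, 0.9 and 1.0 SDI at sg/si = 1.2 trigger the screen (two results above +1 SDI) and signal systematic error (mean 1.12 SDI exceeds 1.0 SDI) with no random-error flag.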
Analytical challenges in sports drug testing.
Thevis, Mario; Krug, Oliver; Geyer, Hans; Walpurgis, Katja; Baume, Norbert; Thomas, Andreas
2018-03-01
Analytical chemistry represents a central aspect of doping controls. Routine sports drug testing approaches are primarily designed to address the question whether a prohibited substance is present in a doping control sample and whether prohibited methods (for example, blood transfusion or sample manipulation) have been conducted by an athlete. As some athletes have availed themselves of the substantial breadth of research and development in the pharmaceutical arena, proactive and preventive measures are required such as the early implementation of new drug candidates and corresponding metabolites into routine doping control assays, even though these drug candidates are to date not approved for human use. Beyond this, analytical data are also cornerstones of investigations into atypical or adverse analytical findings, where the overall picture provides ample reason for follow-up studies. Such studies have been of most diverse nature, and tailored approaches have been required to probe hypotheses and scenarios reported by the involved parties concerning the plausibility and consistency of statements and (analytical) facts. In order to outline the variety of challenges that doping control laboratories are facing besides providing optimal detection capabilities and analytical comprehensiveness, selected case vignettes involving the follow-up of unconventional adverse analytical findings, urine sample manipulation, drug/food contamination issues, and unexpected biotransformation reactions are thematized.
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing the applicability of SRD contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, SRD was also tested as a discriminant method alternative to existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores, the most commonly used PT performance statistic. SRD was further developed to handle identical rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, as well as revealing groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in the) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly.
The SRD technique is general in nature, i.e., it can be applied to any experimental problem in which multianalyte results obtained either by several analytical procedures, analysts, instruments, or laboratories need to be compared.
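The core SRD computation can be sketched as follows, assuming the common formulation: rank each laboratory's concentration series and the benchmark series (averaging ranks for ties, as the abstract's tie-handling extension requires), then sum the absolute rank differences. The function names are illustrative:

```python
def rank(values):
    # Rank positions (1 = smallest value); tied values share the average rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def srd(lab, reference):
    """Sum of absolute rank differences between one laboratory's analyte
    concentrations and the benchmark (assigned or average) concentrations."""
    lab_ranks, ref_ranks = rank(lab), rank(reference)
    return sum(abs(a - b) for a, b in zip(lab_ranks, ref_ranks))
```

A laboratory whose concentrations preserve the benchmark's ordering scores SRD = 0, so smaller SRD means closer overall agreement with the reference; the randomization test mentioned above then checks whether an observed SRD is distinguishable from the SRD of random rankings.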
Analytical validation of a psychiatric pharmacogenomic test.
Jablonski, Michael R; King, Nina; Wang, Yongbao; Winner, Joel G; Watterson, Lucas R; Gunselman, Sandra; Dechairo, Bryan M
2018-05-01
The aim of this study was to validate the analytical performance of a combinatorial pharmacogenomics test designed to aid in the appropriate medication selection for neuropsychiatric conditions. Genomic DNA was isolated from buccal swabs. Twelve genes (65 variants/alleles) associated with psychotropic medication metabolism, side effects, and mechanisms of actions were evaluated by bead array, MALDI-TOF mass spectrometry, and/or capillary electrophoresis methods (GeneSight Psychotropic, Assurex Health, Inc.). The combinatorial pharmacogenomics test has a dynamic range of 2.5-20 ng/μl of input genomic DNA, with comparable performance for all assays included in the test. Both the precision and accuracy of the test were >99.9%, with individual gene components between 99.4 and 100%. This study demonstrates that the combinatorial pharmacogenomics test is robust and reproducible, making it suitable for clinical use.
Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis
2006-01-01
The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
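The variance partitioning reported above can be reproduced arithmetically. Assuming the three steps are independent (so component variances add), the total variance and each step's percentage share follow directly from the reported components:

```python
# Variance components reported for a 10 kg sample, 50 g subsample, and HPLC
# analysis at 10 ng/g total aflatoxin (values taken from the abstract).
s2_sampling, s2_prep, s2_analytical = 174.40, 0.74, 0.27

# Independent steps: total variance is the sum of the component variances.
s2_total = s2_sampling + s2_prep + s2_analytical

# Percentage of total variability contributed by each step.
shares = [round(100 * v / s2_total, 1)
          for v in (s2_sampling, s2_prep, s2_analytical)]
```

Running this gives shares of 99.4%, 0.4% and 0.2% for sampling, sample preparation and analysis, matching the abstract and showing that sampling dominates the uncertainty of the aflatoxin test procedure.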
Kim, Min-A; Sim, Hye-Min; Lee, Hye-Seong
2016-11-01
As reformulations and processing changes are increasingly needed in the food industry to produce healthier, more sustainable, and cost-effective products while maintaining superior quality, reliable measurements of consumers' sensory perception and discrimination are becoming more critical. Consumer discrimination methods using a preferred-reference duo-trio test design have been shown to improve discrimination performance by customizing sample presentation sequences. However, this design can add complexity to the discrimination task for some consumers, resulting in more errors in sensory discrimination. The objective of the present study was to investigate the effects of different types of test instructions using the preferred-reference duo-trio test design, in which a paired-preference test is followed by 6 repeated preferred-reference duo-trio tests, in comparison to the analytical method using the balanced-reference duo-trio. Analyses of d' estimates (a product-related measure) and probabilistic sensory discriminators in momentary numbers of subjects showing statistical significance (a subject-related measure) revealed that only the preferred-reference duo-trio test using affective reference-framing, either by providing no information about the reference or information on a previously preferred sample, improved sensory discrimination more than the analytical method did. No decrease in discrimination performance was observed with any type of instruction, confirming that consumers could handle the test methods. These results suggest that when repeated tests are feasible, using the affective discrimination method would be operationally more efficient as well as ecologically more reliable for measuring consumers' sensory discrimination ability. Copyright © 2016 Elsevier Ltd. All rights reserved.
A new method of sweat testing: the CF Quantum®sweat test.
Rock, Michael J; Makholm, Linda; Eickhoff, Jens
2014-09-01
Conventional methods of sweat testing are time-consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and tests the diagnostic accuracy and analytic validity of the CFQT. Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference of the CFQT and conventional sweat testing. The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. At one center in this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher for the CFQT method (16.5%) compared to conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
Zakaria, Rosita; Allen, Katrina J.; Koplin, Jennifer J.; Roche, Peter
2016-01-01
Introduction Through the introduction of advanced analytical techniques and improved throughput, the scope of dried blood spot testing utilising mass spectrometric methods has broadly expanded. Clinicians and researchers have become very enthusiastic about the potential applications of dried blood spot based mass spectrometric methods. Analysts, on the other hand, face challenges of sensitivity, reproducibility and overall accuracy of dried blood spot quantification. In this review, we aim to bring together these two facets to discuss the advantages and current challenges of non-newborn screening applications of dried blood spot quantification by mass spectrometry. Methods To address these aims we performed a keyword search of the PubMed and MEDLINE online databases in conjunction with individual manual searches to gather information. Keywords for the initial search were "blood spot" and "mass spectrometry", excluding "newborn" and "neonate". In addition, searches were restricted to English-language, human-specific studies. No time period limit was applied. Results As a result of these selection criteria, 194 references were identified for review. For presentation, this information is divided into (1) clinical applications and (2) analytical considerations across the total testing process: pre-analytical, analytical and post-analytical. Conclusions DBS analysis using MS applications is now broadly applied, with drug monitoring for both therapeutic and toxicological analysis being the most extensively reported. Several parameters can affect the accuracy of DBS measurement, and further bridging experiments are required to develop adjustment rules for comparability between dried blood spot measures and the equivalent serum/plasma values. Likewise, the establishment of independent reference intervals for the dried blood spot sample matrix is required. PMID:28149263
Hu, B.X.; He, C.
2008-01-01
An iterative inverse method, the sequential self-calibration method, is developed for mapping the spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.
Zhang, Meiyu; Li, Erfen; Su, Yijuan; Song, Xuqin; Xie, Jingmeng; Zhang, Yingxia; He, Limin
2018-06-01
Seven drugs from different classes, namely fluoroquinolones (enrofloxacin, ciprofloxacin, sarafloxacin), sulfonamides (sulfadimidine, sulfamonomethoxine), and macrolides (tilmicosin, tylosin), were used as test compounds administered orally to chickens. A simple extraction step after cryogenic freezing, with vortex mixing, allowed the effective extraction of multi-class veterinary drug residues from minced chicken muscle. On the basis of the optimized freeze-thaw approach, a convenient, selective, and reproducible liquid chromatography with tandem mass spectrometry method was developed. At three spiking levels in blank chicken and medicated chicken muscles, average recoveries of the analytes were in the range of 71-106 and 63-119%, respectively. All relative standard deviations were <20%. The limits of quantification of the analytes were 0.2-5.0 ng/g. Regardless of the residue levels, there were no significant differences (P > 0.05) in the average contents of almost any of the analytes in medicated chickens between this method and specific methods in the literature for the determination of specific analytes. Finally, the developed method was successfully extended to the monitoring of residues of 55 common veterinary drugs in food animal muscles. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liew, Deborah; Linge, Kathryn L; Joll, Cynthia A; Heitz, Anna; Charrois, Jeffrey W A
2012-06-08
Simultaneous quantitation of 6 halonitromethanes (HNMs) and 5 haloacetamides (HAAms) was achieved with a simplified liquid-liquid extraction (LLE) method, followed by gas chromatography-mass spectrometry. Stability tests showed that brominated tri-HNMs degraded immediately in the presence of ascorbic acid, sodium sulphite and sodium borohydride, and also degraded in samples treated with ammonium chloride or left unpreserved. Both ammonium chloride and ascorbic acid were suitable for the preservation of HAAms. Ammonium chloride was most suitable for preserving both HNMs and HAAms, although it is recommended that samples be analysed as soon as possible after collection. While groundwater samples exhibited a greater analytical bias compared with other waters, the good recoveries (>90%) of most analytes in tap water suggest that the method is well suited to determining these analytes in treated drinking waters. Application of the method to water from three drinking water treatment plants in Western Australia indicated that N-DBP formation did occur, with increased detections after chlorination. The method is recommended for low-cost, rapid screening of both HNMs and HAAms in drinking water. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Takeda, M.; Nakajima, H.; Zhang, M.; Hiratsuka, T.
2008-04-01
To obtain reliable diffusion parameters for diffusion testing, multiple experiments should not only be cross-checked but the internal consistency of each experiment should also be verified. In the through- and in-diffusion tests with solution reservoirs, test interpretation of different phases often makes use of simplified analytical solutions. This study explores the feasibility of steady, quasi-steady, equilibrium and transient-state analyses using simplified analytical solutions with respect to (i) valid conditions for each analytical solution, (ii) potential error, and (iii) experimental time. For increased generality, a series of numerical analyses are performed using unified dimensionless parameters and the results are all related to dimensionless reservoir volume (DRV) which includes only the sorptive parameter as an unknown. This means the above factors can be investigated on the basis of the sorption properties of the testing material and/or tracer. The main findings are that steady, quasi-steady and equilibrium-state analyses are applicable when the tracer is not highly sorptive. However, quasi-steady and equilibrium-state analyses become inefficient or impractical compared to steady state analysis when the tracer is non-sorbing and material porosity is significantly low. Systematic and comprehensive reformulation of analytical models enables the comparison of experimental times between different test methods. The applicability and potential error of each test interpretation can also be studied. These can be applied in designing, performing, and interpreting diffusion experiments by deducing DRV from the available information for the target material and tracer, combined with the results of this study.
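The steady- and transient-state interpretations mentioned above reduce, in their simplest through-diffusion form, to short closed-form expressions. As an illustrative sketch only (the formulas below are the textbook steady-flux and time-lag relations, not this study's unified dimensionless formulation, and all numbers are made up):

```python
def effective_diffusion_from_flux(flux, thickness, conc_upstream, conc_downstream=0.0):
    """Steady-state through-diffusion: De = J * L / (C_up - C_down).

    flux           -- steady tracer flux per unit area (mol m^-2 s^-1)
    thickness      -- sample thickness L (m)
    conc_upstream  -- source-reservoir concentration (mol m^-3)
    """
    return flux * thickness / (conc_upstream - conc_downstream)


def rock_capacity_from_time_lag(thickness, time_lag, de):
    """Transient-state time-lag analysis: t_lag = alpha * L^2 / (6 * De),
    so the rock capacity factor alpha = 6 * De * t_lag / L^2."""
    return 6.0 * de * time_lag / thickness**2


# Illustrative (made-up) numbers for a lab-scale test
de = effective_diffusion_from_flux(flux=2.0e-9, thickness=0.01, conc_upstream=1.0)
alpha = rock_capacity_from_time_lag(thickness=0.01, time_lag=5.0e4, de=de)
```

The point of the study is precisely that such simplified solutions are only valid under certain conditions (e.g., low tracer sorptivity), which the dimensionless-reservoir-volume analysis quantifies.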
2014-01-01
In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the search methods for the critical slip surface is the Genetic Algorithm (GA), while the method to calculate the slope safety factor is Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is only an approximate method, like the finite element method. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is computed from an analytical solution, and the critical slip surface is searched with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method. The Genetic-Traversal Random Method uses random selection to implement mutation. A computer program was developed to automate the Genetic-Traversal Random search. Comparison with other methods, such as the slope/w software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
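For context, Fellenius' slices method referred to above sums resisting and driving contributions slice by slice along the circular surface. A minimal sketch, with made-up slice geometry and soil parameters:

```python
import math


def fellenius_factor_of_safety(slices, cohesion, phi_deg):
    """Ordinary (Fellenius) method of slices for a circular slip surface.

    slices   -- list of (weight, base_length, base_angle_deg) per slice
    cohesion -- soil cohesion c (kPa, with weights in kN per unit width)
    phi_deg  -- friction angle (degrees)

    FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a))
    """
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = driving = 0.0
    for weight, base_len, angle_deg in slices:
        a = math.radians(angle_deg)
        resisting += cohesion * base_len + weight * math.cos(a) * tan_phi
        driving += weight * math.sin(a)
    return resisting / driving


# Illustrative three-slice example (hypothetical geometry, not from the paper)
slices = [(100.0, 2.2, -10.0), (180.0, 2.0, 15.0), (120.0, 2.4, 40.0)]
fs = fellenius_factor_of_safety(slices, cohesion=10.0, phi_deg=20.0)
```

In a real search, this evaluation would be repeated for many candidate slip circles, keeping the minimum FS; the paper's contribution is replacing both the FS formula (with an analytical solution) and the search (with the Genetic-Traversal Random Method).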
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1983-01-01
Publications prior to March 1981 were surveyed to determine inlet flow dynamic distortion prediction methods and to catalog experimental and analytical information concerning inlet flow dynamics at the engine-inlet interface of conventional aircraft (excluding V/STOL). The sixty-five publications found are briefly summarized and tabulated according to topic and are cross-referenced according to content and nature of the investigation (e.g., predictive, experimental, analytical, and types of tests). Three appendices include lists of references, authors, and the organizations and agencies conducting the studies. Also included are selected materials (summaries, introductions, and conclusions) from the reports. Few reports were found covering methods for predicting the probable maximum distortion. The three predictive methods found are those of Melick, Jacox, and Motycka. The latter two require extensive high-response pressure measurements at the compressor face, while the Melick technique can function with as few as one or two measurements.
Rational use and interpretation of urine drug testing in chronic opioid therapy.
Reisfield, Gary M; Salazar, Elaine; Bertholf, Roger L
2007-01-01
Urine drug testing (UDT) has become an essential feature of pain management, as physicians seek to verify adherence to prescribed opioid regimens and to detect the use of illicit or unauthorized licit drugs. Results of urine drug tests have important consequences in regard to therapeutic decisions and the trust between physician and patient. However, reliance on UDT to confirm adherence can be problematic if the results are not interpreted correctly, and evidence suggests that many physicians lack an adequate understanding of the complexities of UDT and the factors that can affect test results. These factors include metabolic conversion between drugs, genetic variations in drug metabolism, the sensitivity and specificity of the analytical method for a particular drug or metabolite, and the effects of intentional and unintentional interferants. In this review, we focus on the technical features and limitations of analytical methods used for detecting drugs or their metabolites in urine, the statistical constructs that are pertinent to ordering UDT and interpreting test results, and the application of these concepts to the clinical monitoring of patients maintained on chronic opioid therapy.
Comparison of chemiluminescence methods for analysis of hydrogen peroxide and hydroxyl radicals
NASA Astrophysics Data System (ADS)
Pehrman, R.; Amme, M.; Cachoir, C.
2006-01-01
Assessment of the influence of alpha radiolysis on the chemistry of geologically disposed spent fuel demands analytical methods for radiolytic product determination at trace levels. Several chemiluminescence methods for the detection of the radiolytic oxidants hydrogen peroxide and hydroxyl radical are tested. Two of the hydrogen peroxide methods use luminol, catalyzed by either μ-peroxidase or hemin; one uses 10-methyl-9-(p-formylphenyl)-acridinium carboxylate trifluoromethanesulfonate; and one uses potassium periodate. All recipes are tested as batch systems under basic conditions. For hydroxyl radical detection, the selected luminophores are 3-hydroxyphthalic hydrazide and rutin. Both methods are tested as batch systems. The results are compared and the applicability of the methods for near-field dissolution studies is discussed.
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
Electrodialytic in-line preconcentration for ionic solute analysis.
Ohira, Shin-Ichi; Yamasaki, Takayuki; Koda, Takumi; Kodama, Yuko; Toda, Kei
2018-04-01
Preconcentration is an effective way to improve analytical sensitivity. Many types of methods are used for enrichment of ionic solute analytes. However, current methods are batchwise and include procedures such as trapping and elution. In this manuscript, we propose in-line electrodialytic enrichment of ionic solutes. The method can enrich ionic solutes within seconds by quantitative transfer of analytes from the sample solution to the acceptor solution under an electric field. Because of quantitative ion transfer, the enrichment factor (the ratio of the analyte concentration in the obtained acceptor solution to that in the sample) depends only on the flow rate ratio of the sample solution to the acceptor solution. The ratios of the concentrations and flow rates are equal for ratios up to 70, 20, and 70 for the tested ionic solutes of inorganic cations, inorganic anions, and heavy metal ions, respectively. The sensitivity of ionic solute determinations is also improved according to the enrichment factor. The method can also simultaneously achieve matrix isolation and enrichment. The method was successfully applied to determine trace concentrations of chloroacetic acids in tap water. The regulated concentration levels cannot be determined by conventional high-performance liquid chromatography with ultraviolet detection (HPLC-UV) without enrichment; however, enrichment with the present method improves the limits of detection of HPLC-UV enough to make it effective for assessing tap water quality. The standard addition test with real tap water samples shows good recoveries (94.9-109.6%). Copyright © 2017 Elsevier B.V. All rights reserved.
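The mass-balance argument behind the enrichment factor can be sketched in a few lines; the flow rates and concentration below are hypothetical, chosen only to illustrate the ratio:

```python
def enrichment_factor(sample_flow, acceptor_flow):
    """With quantitative (100%) electrodialytic transfer, all analyte in the
    sample stream ends up in the acceptor stream, so mass balance gives
    C_acceptor * Q_acceptor = C_sample * Q_sample, i.e. EF = Q_sample / Q_acceptor.
    Flows must share the same units (e.g. uL/min)."""
    return sample_flow / acceptor_flow


# Hypothetical flows: 700 uL/min sample against 10 uL/min acceptor
ef = enrichment_factor(700.0, 10.0)
c_acceptor = 0.5 * ef  # a 0.5 ug/L analyte would reach 35 ug/L in the acceptor
```

This is why the abstract can state that concentration ratios and flow rate ratios are equal up to the quoted limits: the factor holds only as long as the ion transfer remains quantitative.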
El-Masry, Amal A; Hammouda, Mohammed E A; El-Wasseef, Dalia R; El-Ashry, Saadia M
2018-02-15
Two simple, sensitive, rapid, validated, and cost-effective spectroscopic methods were established for the quantification of the antihistaminic drug azelastine (AZL) in bulk powder as well as in pharmaceutical dosage forms. In the first method (A), the absorbance difference between acidic and basic solutions was measured at 228 nm, whereas in the second method (B), the binary complex formed between AZL and Eosin Y in acetate buffer solution (pH 3) was measured at 550 nm. Different parameters with a critical influence on absorption intensity were studied in depth and optimized to achieve the highest absorbance. The proposed methods obeyed Beer's law in the concentration ranges of 2.0-20.0 μg·mL-1 and 0.5-15.0 μg·mL-1, with % recovery ± S.D. of (99.84 ± 0.87) and (100.02 ± 0.78) for methods (A) and (B), respectively. Furthermore, the proposed methods were easily applied for quality control of pharmaceutical preparations without any interference from co-formulated additives, and the analytical results were compatible with those obtained by the comparison method, with no significant difference as confirmed by Student's t-test and the variance ratio F-test. Validation of the proposed methods was performed according to the ICH guidelines in terms of linearity, limit of quantification, limit of detection, accuracy, precision, and specificity, and the analytical results were satisfactory. Copyright © 2017 Elsevier B.V. All rights reserved.
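The linearity and detection-limit figures reported in abstracts like this one typically follow the ICH Q2(R1) treatment: a least-squares calibration line, with LOD = 3.3σ/S and LOQ = 10σ/S (σ = residual standard deviation, S = slope). A minimal sketch with hypothetical absorbance data, not the paper's values:

```python
import math


def linear_fit(x, y):
    """Ordinary least squares y = slope*x + intercept, plus the residual
    standard deviation (n - 2 degrees of freedom)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    resid_sd = math.sqrt(sum((yi - (slope * xi + intercept)) ** 2
                             for xi, yi in zip(x, y)) / (n - 2))
    return slope, intercept, resid_sd


# Hypothetical calibration: concentrations (ug/mL) vs. absorbance
conc = [2.0, 5.0, 8.0, 12.0, 16.0, 20.0]
absb = [0.082, 0.201, 0.322, 0.481, 0.645, 0.799]
slope, intercept, sd = linear_fit(conc, absb)

lod = 3.3 * sd / slope   # ICH Q2(R1) detection limit
loq = 10.0 * sd / slope  # ICH Q2(R1) quantitation limit
```

An unknown's concentration is then read back as `(absorbance - intercept) / slope`, valid only inside the Beer's-law linear range established above.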
Webster, Gregory K; Marsden, Ian; Pommerening, Cynthia A; Tyrakowski, Christina M
2010-05-01
With the changing development paradigms in the pharmaceutical industry, laboratories are challenged to release materials for clinical studies with rapid turnaround times. To minimize cost demands, many businesses are looking to develop ways of using early Good Manufacturing Practice (GMP) materials of active pharmaceutical ingredients (API) for Good Laboratory Practice (GLP) toxicology studies. To make this happen, the analytical laboratory releases the material by one of three scenarios: (1) holding the GLP release until full GMP testing is ready, (2) issuing a separate lot number for a portion of the GMP material and releasing the material for GLP use, or (3) releasing the lot of material for GLP using alternate (equivalent) method(s) not specified for GMP release testing. Many companies are finding the third scenario to be advantageous in terms of cost and efficiency through the use of quantitative nuclear magnetic resonance (q-NMR). The use of q-NMR has proved to be a single-point replacement for routine early development testing that previously combined elements of identity testing, chromatographic assay, moisture analysis, residual solvent analysis, and elemental analysis. This study highlights that q-NMR can be validated to meet current regulatory analytical method guidelines for routine pharmaceutical analysis.
Point-of-care test (POCT) INR: hope or illusion?
Dusse, Luci Maria Sant'Ana; Oliveira, Nataly Carvalho; Rios, Danyelle Romana Alves; Marcolino, Milena Soriano
2012-01-01
In the last decade, point-of-care tests were developed to provide rapid generation of test results. These tests have increasingly broad applications. In the area of hemostasis, the international normalized ratio point-of-care test (POCT INR) is the main test of this new proposal. This test has great potential benefit in situations where a quick INR result influences clinical decision making, as in acute ischemic stroke, before surgical procedures, and during cardiac surgery. The POCT INR also has the potential to be used for self-monitoring of oral anticoagulation in patients under anticoagulant therapy. However, the precision and accuracy of the POCT INR still need to be enhanced to increase the effectiveness and efficiency of the test. Additionally, RDC/ANVISA No. 302 makes clear that POCT testing must be supervised by the technical manager of the clinical laboratory in the pre-analytical, analytical, and post-analytical phases. In practice, the clinical laboratory does not participate in the implementation of POCT testing or the release of results. Clinicians have high expectations for the incorporation of the POCT INR into clinical practice, despite the limitations of the method. These professionals are willing to train patients to perform the test, but are not legally responsible for its quality and are not prepared for the maintenance of the equipment. Responsibility for the test must be defined in a way that ensures quality control.
Strotmann, Uwe; Reuschenbach, Peter; Schwarz, Helmut; Pagga, Udo
2004-01-01
Well-established biodegradation tests use biogenously evolved carbon dioxide (CO2) as an analytical parameter to determine the ultimate biodegradability of substances. A newly developed analytical technique based on the continuous online measurement of conductivity showed its suitability over other techniques. It could be demonstrated that the method met all criteria of established biodegradation tests, gave continuous biodegradation curves, and was more reliable than other tests. In parallel experiments, only small variations in the biodegradation pattern occurred. When comparing the new online CO2 method with existing CO2 evolution tests, growth rates and lag periods were similar and only the final degree of biodegradation of aniline was slightly lower. A further test development was the unification and parallel measurement of all three important summary parameters for biodegradation—i.e., CO2 evolution, determination of the biochemical oxygen demand (BOD), and removal of dissolved organic carbon (DOC)—in a multicomponent biodegradation test system (MCBTS). The practicability of this test method was demonstrated with aniline. This test system had advantages for poorly water-soluble and highly volatile compounds and allowed the determination of the carbon fraction integrated into biomass (heterotrophic yield). The integrated online measurements of CO2 and BOD systems produced continuous degradation curves, which better met the stringent criteria of ready biodegradability (60% biodegradation in a 10-day window). Furthermore the data could be used to calculate maximal growth rates for the modeling of biodegradation processes. PMID:15294794
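The "60% biodegradation in a 10-day window" criterion cited above can be checked mechanically once a continuous degradation curve is available, which is exactly what the online CO2 and BOD measurements provide. A minimal sketch with a made-up curve (the trigger, pass level, and window length follow the OECD ready-biodegradability convention):

```python
def passes_ten_day_window(days, percent_degraded,
                          window_trigger=10.0, pass_level=60.0, window_len=10.0):
    """Ready-biodegradability check: the 10-day window opens when degradation
    first reaches 10% (end of the lag phase); the substance passes if it
    reaches the pass level (60% for CO2-based tests) within 10 days of that."""
    start = next((d for d, p in zip(days, percent_degraded)
                  if p >= window_trigger), None)
    if start is None:
        return False
    return any(p >= pass_level for d, p in zip(days, percent_degraded)
               if start <= d <= start + window_len)


# Hypothetical daily readings from a continuous CO2 curve (percent of ThCO2)
days = list(range(0, 29))
curve = [0, 1, 3, 8, 14, 25, 38, 50, 58, 64, 68, 70, 71] + [72] * 16
ready = passes_ten_day_window(days, curve)
```

With discrete batch sampling, the window start can easily be missed; a continuous curve, as produced by the online conductivity method, pins it down precisely.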
Hammack, Thomas S; Valentin-Bon, Iris E; Jacobson, Andrew P; Andrews, Wallace H
2004-05-01
Soak and rinse methods were compared for the recovery of Salmonella from whole cantaloupes. Cantaloupes were surface inoculated with Salmonella cell suspensions and stored for 4 days at 2 to 6 degrees C. Cantaloupes were placed in sterile plastic bags with a nonselective preenrichment broth at a 1:1.5 cantaloupe weight-to-broth volume ratio. The cantaloupe broths were shaken for 5 min at 100 rpm after which 25-ml aliquots (rinse) were removed from the bags. The 25-ml rinses were preenriched in 225-ml portions of the same uninoculated broth type at 35 degrees C for 24 h (rinse method). The remaining cantaloupe broths were incubated at 35 degrees C for 24 h (soak method). The preenrichment broths used were buffered peptone water (BPW), modified BPW, lactose (LAC) broth, and Universal Preenrichment (UP) broth. The Bacteriological Analytical Manual Salmonella culture method was compared with the following rapid methods: the TECRA Unique Salmonella method, the VIDAS ICS/SLM method, and the VIDAS SLM method. The soak method detected significantly more Salmonella-positive cantaloupes (P < 0.05) than did the rinse method: 367 Salmonella-positive cantaloupes of 540 test cantaloupes by the soak method and 24 Salmonella-positive cantaloupes of 540 test cantaloupes by the rinse method. Overall, BPW, LAC, and UP broths were equivalent for the recovery of Salmonella from cantaloupes. Both the VIDAS ICS/SLM and TECRA Unique Salmonella methods detected significantly fewer Salmonella-positive cantaloupes than did the culture method: the VIDAS ICS/SLM method detected 23 of 50 Salmonella-positive cantaloupes (60 tested) and the TECRA Unique Salmonella method detected 16 of 29 Salmonella-positive cantaloupes (60 tested). The VIDAS SLM and culture methods were equivalent: both methods detected 37 of 37 Salmonella-positive cantaloupes (60 tested).
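The soak-versus-rinse comparison (367/540 vs. 24/540 positives) can be illustrated with a standard pooled two-proportion z-test; this is a generic sketch of how such counts are compared, not necessarily the statistical procedure used in the study:

```python
import math


def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for comparing two detection rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se


# Soak vs. rinse detections reported in the abstract: 367/540 vs. 24/540
z = two_proportion_z(367, 540, 24, 540)
```

A z statistic this far above the ~1.96 threshold for P < 0.05 makes the reported significance of the soak method's advantage unsurprising.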
A simple analytical aerodynamic model of Langley Winged-Cone Aerospace Plane concept
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.
1994-01-01
A simple three-DOF analytical aerodynamic model of the Langley Winged-Cone Aerospace Plane concept is presented in a form suitable for simulation, trajectory optimization, and guidance and control studies. The analytical model is especially suitable for methods based on variational calculus. Analytical expressions are presented for lift, drag, and pitching moment coefficients from subsonic to hypersonic Mach numbers and angles of attack up to +/- 20 deg. The analytical model has break points at Mach numbers of 1.0, 1.4, 4.0, and 6.0. Across these Mach number break points, the lift, drag, and pitching moment coefficients are made continuous, but their derivatives are not. There are no break points in angle of attack. The effect of control surface deflection is not considered. The present analytical model compares well with the APAS calculations and wind tunnel test data for most angles of attack and Mach numbers.
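The "continuous value, discontinuous derivative" behaviour at the Mach break points is what simple linear interpolation between break-point values produces. A sketch with hypothetical coefficient values (not the model's actual expressions or numbers):

```python
import bisect


def piecewise_linear(breakpoints, values, x):
    """Linear interpolation through (breakpoint, value) pairs: continuous in
    value at each break point, but the slope (derivative) jumps there,
    mirroring the C0-but-not-C1 behaviour described for the aero model."""
    if x <= breakpoints[0]:
        return values[0]
    if x >= breakpoints[-1]:
        return values[-1]
    i = bisect.bisect_right(breakpoints, x) - 1
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return values[i] + t * (values[i + 1] - values[i])


# Hypothetical lift-curve-slope values at the model's Mach break points
mach_breaks = [0.0, 1.0, 1.4, 4.0, 6.0]
cl_alpha = [0.045, 0.060, 0.052, 0.030, 0.024]  # per degree, illustrative
cl_at_m2 = piecewise_linear(mach_breaks, cl_alpha, 2.0)
```

The real model uses closed-form expressions per Mach segment rather than straight lines, but the matching condition at each break point is the same: equal values, unconstrained slopes.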
NASA Technical Reports Server (NTRS)
Aniversario, R. B.; Harvey, S. T.; Mccarty, J. E.; Parsons, J. T.; Peterson, D. C.; Pritchett, L. D.; Wilson, D. R.; Wogulis, E. R.
1983-01-01
The horizontal stabilizer of the 737 transport was redesigned. Five shipsets were fabricated using composite materials. Weight reduction greater than the 20% goal was achieved. Parts and assemblies were readily produced on production-type tooling. Quality assurance methods were demonstrated. Repair methods were developed and demonstrated. Strength and stiffness analytical methods were substantiated by comparison with test results. Cost data was accumulated in a semiproduction environment. FAA certification was obtained.
ERIC Educational Resources Information Center
Helms, LuAnn Sherbeck
This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…
Statistical Estimation of Heterogeneities: A New Frontier in Well Testing
NASA Astrophysics Data System (ADS)
Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.
2001-12-01
Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.
Mori, Masanobu; Nakano, Koji; Sasaki, Masaya; Shinozaki, Haruka; Suzuki, Shiho; Okawara, Chitose; Miró, Manuel; Itabashi, Hideyuki
2016-02-01
A dynamic flow-through microcolumn extraction system based on extractant re-circulation is herein proposed as a novel analytical approach for simplifying bioaccessibility tests of trace elements in sediments. On-line metal leaching is undertaken in the format of all-injection (AI) analysis, a sequel of flow injection analysis that involves extraction under steady-state conditions. The minimum circulation times and flow rates required to determine the maximum bioaccessible pools of the target metals (viz., Cu, Zn, Cd, and Pb) from lake and river sediment samples were estimated using Tessier's sequential extraction scheme and an acid single-extraction test. The on-line AI method was successfully validated by mass balance studies of a CRM and real sediment samples. Tessier's test in on-line AI format was shown to be completed in one third of the extraction time (6 h against more than 17 h by the conventional method), with better analytical precision (<9.2% against >15% by the conventional method) and a significant decrease in blank readouts compared with the manual batch counterpart. Copyright © 2015 Elsevier B.V. All rights reserved.
40 CFR 455.50 - Identification of test procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methods cited and described in Table IG at 40 CFR 136.3(a). Pesticide manufacturers may not use the analytical method cited in Table IB, Table IC, or Table ID of 40 CFR 136.3(a) to make these determinations (except where the method cited in those tables is identical to the method specified in Table IG at 40 CFR...
NASA Technical Reports Server (NTRS)
Gazda, Daniel B.; Schultz, John R.; Clarke, Mark S.
2007-01-01
Phase separation is one of the most significant obstacles encountered during the development of analytical methods for water quality monitoring in spacecraft environments. Removing air bubbles from water samples prior to analysis is a routine task on Earth; however, in the absence of gravity, this routine task becomes extremely difficult. This paper details the development and initial ground testing of liquid metering centrifuge sticks (LMCS), devices designed to collect and meter a known volume of bubble-free water in microgravity. The LMCS uses centrifugal force to eliminate entrapped air and reproducibly meter liquid sample volumes for analysis with Colorimetric Solid Phase Extraction (C-SPE). C-SPE is a sorption-spectrophotometric platform that is being developed as a potential spacecraft water quality monitoring system. C-SPE utilizes solid phase extraction membranes impregnated with analyte-specific colorimetric reagents to concentrate and complex target analytes in spacecraft water samples. The mass of analyte extracted from the water sample is determined using diffuse reflectance (DR) data collected from the membrane surface and an analyte-specific calibration curve. The analyte concentration can then be calculated from the mass of extracted analyte and the volume of the sample analyzed. Previous flight experiments conducted in microgravity conditions aboard the NASA KC-135 aircraft demonstrated that the inability to collect and meter a known volume of water using a syringe was a limiting factor in the accuracy of C-SPE measurements. Herein, results obtained from ground-based C-SPE experiments using ionic silver as a test analyte and either the LMCS or syringes for sample metering are compared to evaluate the performance of the LMCS. These results indicate very good agreement between the two sample metering methods and clearly illustrate the potential of utilizing centrifugal forces to achieve phase separation and metering of water samples in microgravity.
Microbiological methods for the water recovery systems test, revision 1.1
NASA Technical Reports Server (NTRS)
Rhoads, Tim; Kilgore, M. V., Jr.; Mikell, A. T., Jr.
1990-01-01
Current microbiological parameters specified to verify microbiological quality of Space Station Freedom water quality include the enumeration of total bacteria, anaerobes, aerobes, yeasts and molds, enteric bacteria, gram positives, gram negatives, and E. coli. In addition, other parameters have been identified as necessary to support the Water Recovery Test activities to be conducted at the NASA/MSFC later this year. These other parameters include aerotolerant eutrophic mesophiles, legionellae, and an additional method for heterotrophic bacteria. If inter-laboratory data are to be compared to evaluate quality, analytical methods must be eliminated as a variable. Therefore, each participating laboratory must utilize the same analytical methods and procedures. Without this standardization, data can be neither compared nor validated between laboratories. Multiple laboratory participation represents a conservative approach to insure quality and completeness of data. Invariably, sample loss will occur in transport and analyses. Natural variance is a reality on any test of this magnitude and is further enhanced because biological entities, capable of growth and death, are specific parameters of interest. The large variation due to the participation of human test subjects has been noted with previous testing. The resultant data might be dismissed as 'out of control' unless intra-laboratory controls are included as part of the method and participating laboratories are available for verification. The purpose of this document is to provide standardized laboratory procedures for the enumeration of certain microorganisms in water and wastewater specific to the water recovery systems test. The document consists of ten separate cultural methods and one direct count procedure. It is neither intended nor implied to be a complete microbiological methods manual.
Kaufmann, A; Maden, K; Leisser, W; Matera, M; Gude, T
2005-11-01
Inorganic polyphosphates (di-, tri- and higher polyphosphates) can be used to treat fish, fish fillets and shrimps in order to improve their water-binding capacity. The practical relevance of this treatment is a significant gain of weight caused by the retention/uptake of water and natural juice into the fish tissues. This practice is legal; however, the use of phosphates has to be declared. The routine control testing of fish for the presence of polyphosphates produced some results that were difficult to explain. One of the two analytical methods used determined low diphosphate concentrations in a number of untreated samples, while the other ion chromatography (IC) method did not detect them. This initiated a number of investigations: results showed that polyphosphates in fish and shrimp tissue undergo a rapid enzymatic degradation, producing the ubiquitous orthophosphate. This led to the conclusion that sensitive analytical methods are required in order to detect previous polyphosphate treatment of a sample. The polyphosphate concentrations detected by one of the analytical methods could not be explained by the degradation of endogenous high-energy nucleotides like ATP into diphosphate, but by a coeluting compound. Further investigations by LC-MS-MS proved that the substance responsible for the observed peak was inosine monophosphate (IMP) and not, as thought, the inorganic diphosphate. The method producing the false-positive result was modified and both methods were ultimately able to detect polyphosphates well separated from natural nucleotides. Polyphosphates could no longer be detected (<0.5 mg kg-1) after modification of the analytical methodology. The relevance of these findings lies in the fact that similar analytical methods are employed in various control laboratories, which might lead to false interpretation of measurements.
NASA Technical Reports Server (NTRS)
Mitchell, William S.; Throckmorton, David (Technical Monitor)
2002-01-01
The purpose of this research was to further the understanding of a crack initiation problem in a highly strained pressure containment housing. Finite Element Analysis methods were used to model the behavior of shot peened materials undergoing plastic deformation. Analytical results are in agreement with laboratory tensile tests that simulated the actual housing load conditions. These results further validate the original investigation finding that the shot peened residual stress had reversed, changing from compressive to tensile, and demonstrate that analytical finite element methods can be used to predict this behavior.
Llorente Ballesteros, M T; Navarro Serrano, I; López Colón, J L
2015-01-01
The aim of this report is to propose a scheme for validation of an analytical technique according to ISO 17025. The fundamental parameters tested were: selectivity, calibration model, precision, accuracy, uncertainty of measurement, and analytical interference. A protocol has been developed and applied successfully to quantify zinc in serum by atomic absorption spectrometry. It is demonstrated that our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
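The validation parameters named above can be illustrated numerically. The sketch below (hypothetical calibration and control data, not the authors' zinc-in-serum protocol) computes three of them: the calibration model (least-squares line and coefficient of determination), accuracy as percent recovery of a control sample, and precision as relative standard deviation of replicates.

```python
import numpy as np

# Illustrative sketch of three ISO 17025 validation parameters on
# hypothetical data: calibration model, accuracy (recovery), precision (RSD).

def validate(cal_conc, cal_signal, control_measured, control_nominal):
    # Calibration model: least-squares line and coefficient of determination
    slope, intercept = np.polyfit(cal_conc, cal_signal, 1)
    fitted = slope * np.asarray(cal_conc) + intercept
    ss_res = np.sum((cal_signal - fitted) ** 2)
    ss_tot = np.sum((cal_signal - np.mean(cal_signal)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Accuracy: mean recovery of a control sample, in percent
    recovery = 100.0 * np.mean(control_measured) / control_nominal
    # Precision: relative standard deviation of the control replicates
    rsd = 100.0 * np.std(control_measured, ddof=1) / np.mean(control_measured)
    return slope, intercept, r2, recovery, rsd

cal_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # mg/L, hypothetical
cal_signal = np.array([0.01, 0.12, 0.23, 0.45, 0.89])  # absorbance, hypothetical
control = np.array([1.02, 0.98, 1.01, 0.99, 1.00])     # replicates of a 1.0 mg/L control
slope, intercept, r2, recovery, rsd = validate(cal_conc, cal_signal, control, 1.0)
```

Acceptance criteria (e.g., R² above some threshold, recovery within 100 ± x%, RSD below y%) would then be set by the laboratory's own validation plan.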
NASA Technical Reports Server (NTRS)
Townsend, J. C.
1980-01-01
In order to provide experimental data for comparison with newly developed finite difference methods for computing supersonic flows over aircraft configurations, wind tunnel tests were conducted on four arrow wing models. The models were machined under numeric control to precisely duplicate analytically defined shapes. They were heavily instrumented with pressure orifices at several cross sections ahead of and in the region where there is a gap between the body and the wing trailing edge. The test Mach numbers were 2.36, 2.96, and 4.63. Tabulated pressure data for the complete test series are presented along with selected oil flow photographs. Comparisons of some preliminary numerical results at zero angle of attack show good to excellent agreement with the experimental pressure distributions.
Calculus domains modelled using an original Boolean algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain with a Boolean algebra that uses solid and hollow polygons. The general calculus relations for the geometrical characteristics widely used in mechanical engineering are tested on several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of original software consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers that the spline approximation did, which in that case required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
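The solid/hollow-polygon idea can be sketched with the classical shoelace formulas (a minimal illustration, not the authors' 1700-line implementation): geometrical characteristics of the domain are obtained from its solid polygons, with hollow polygons contributing with negative weight.

```python
# Minimal sketch of "solid + hollow polygon" geometrical characteristics
# (area and centroid) via the shoelace formulas; not the authors' code.

def polygon_props(vertices):
    """Signed area and centroid of a simple polygon given as (x, y) tuples."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    # Centroid formula divides by the signed area, so it is
    # orientation-independent.
    return a, cx / (6.0 * a), cy / (6.0 * a)

def domain_props(solids, hollows):
    """Net area and centroid of a calculus domain: solids minus hollows."""
    area = mx = my = 0.0
    for sign, polys in ((1.0, solids), (-1.0, hollows)):
        for poly in polys:
            a, cx, cy = polygon_props(poly)
            area += sign * abs(a)
            mx += sign * abs(a) * cx
            my += sign * abs(a) * cy
    return area, mx / area, my / area

# 2x2 square with a 1x1 square hole in its lower-left corner
outer = [(0, 0), (2, 0), (2, 2), (0, 2)]
hole = [(0, 0), (1, 0), (1, 1), (0, 1)]
area, cx, cy = domain_props([outer], [hole])
```

Curved boundaries would be handled, as in the paper, by approximating them with polygon nodes; the accuracy then depends on the number of nodes used.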
Cottenet, Geoffrey; Blancpain, Carine; Sonnard, Véronique; Chuah, Poh Fong
2013-08-01
Considering the increase in the total cultivated land area dedicated to genetically modified organisms (GMO), consumers' perception of GMO, and the need to comply with various local GMO legislations, efficient and accurate analytical methods are needed for their detection and identification. Considered the gold standard for GMO analysis, real-time polymerase chain reaction (RTi-PCR) technology was optimised to produce a high-throughput GMO screening method. Based on 24 simultaneous multiplex RTi-PCR assays run on a ready-to-use 384-well plate, this new procedure allows the detection and identification of 47 targets in seven samples in duplicate. To comply with GMO analytical quality requirements, a negative and a positive control were analysed in parallel. In addition, an internal positive control was included in each reaction well for the detection of potential PCR inhibition. Tested on non-GM materials, on different GM events and on proficiency test samples, the method offered high specificity and sensitivity, with an absolute limit of detection between 1 and 16 copies depending on the target. Easy to use, fast and cost efficient, this multiplex approach fits the purpose of GMO testing laboratories.
Zhao, Xiaoyan; Qureshi, Ferhan; Eastman, P Scott; Manning, William C; Alexander, Claire; Robinson, William H; Hesterberg, Lyndal K
2012-04-30
Variability in pre-analytical blood sampling and handling can significantly impact results obtained in quantitative immunoassays. Understanding the impact of these variables is critical for accurate quantification and validation of biomarker measurements. Particularly, in the design and execution of large clinical trials, even small differences in sample processing and handling can have dramatic effects in analytical reliability, results interpretation, trial management and outcome. The effects of two common blood sampling methods (serum vs. plasma) and two widely-used serum handling methods (on the clot with ambient temperature shipping, "traditional", vs. centrifuged with cold chain shipping, "protocol") on protein and autoantibody concentrations were examined. Matched serum and plasma samples were collected from 32 rheumatoid arthritis (RA) patients representing a wide range of disease activity status. Additionally, a set of matched serum samples with two sample handling methods was collected. One tube was processed per manufacturer's instructions and shipped overnight on cold packs (protocol). The matched tube, without prior centrifugation, was simultaneously shipped overnight at ambient temperatures (traditional). Upon delivery, the traditional tube was centrifuged. All samples were subsequently aliquoted and frozen prior to analysis of protein and autoantibody biomarkers. Median correlation between paired serum and plasma across all autoantibody assays was 0.99 (0.98-1.00) with a median % difference of -3.3 (-7.5 to 6.0). In contrast, observed protein biomarker concentrations were significantly affected by sample types, with median correlation of 0.99 (0.33-1.00) and a median % difference of -10 (-55 to 23). When the two serum collection/handling methods were compared, the median correlation between paired samples for autoantibodies was 0.99 (0.91-1.00) with a median difference of 4%. 
In contrast, significant increases were observed in protein biomarker concentrations among certain biomarkers in samples processed with the 'traditional' method. Autoantibody quantification appears robust to both sample type (plasma vs. serum) and pre-analytical sample collection/handling methods (protocol vs. traditional). In contrast, for non-antibody protein biomarker concentrations, sample type had a significant impact; plasma samples generally exhibit decreased protein biomarker concentrations relative to serum. Similarly, sample handling significantly impacted the variability of protein biomarker concentrations. When biomarker concentrations are combined algorithmically into a single test score such as a multi-biomarker disease activity test for rheumatoid arthritis (MBDA), changes in protein biomarker concentrations may result in a bias of the score. These results illustrate the importance of characterizing pre-analytical methodology, sample type, sample processing and handling procedures for clinical testing in order to ensure test accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
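The paired-sample comparison described above (per-assay correlation between matched samples plus a median percent difference) can be sketched as follows, on synthetic data rather than the study's measurements:

```python
import numpy as np

# Sketch of the paired serum-vs-plasma comparison (synthetic data, not the
# study's measurements): for each biomarker assay, correlate matched values
# across patients and summarize the percent difference of plasma vs. serum.

def paired_summary(serum, plasma):
    """serum, plasma: arrays of shape (n_assays, n_patients)."""
    corrs, pct_diffs = [], []
    for s, p in zip(serum, plasma):
        corrs.append(np.corrcoef(s, p)[0, 1])
        # percent difference of plasma relative to serum, median over patients
        pct_diffs.append(np.median(100.0 * (p - s) / s))
    return np.median(corrs), np.median(pct_diffs)

rng = np.random.default_rng(0)
serum = rng.uniform(10.0, 100.0, size=(5, 32))             # 5 assays, 32 patients
plasma = serum * 0.97 + rng.normal(0.0, 0.1, serum.shape)  # ~3% lower, small noise
med_corr, med_pct = paired_summary(serum, plasma)
```

With the synthetic 3% offset, the median correlation stays near 1 while the median percent difference sits near −3%, mirroring the kind of summary statistics reported in the abstract.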
NASA Technical Reports Server (NTRS)
Bunin, Bruce L.
1985-01-01
A program was conducted to develop the technology for critical structural joints in composite wing structure that meets all the design requirements of a 1990 commercial transport aircraft. The results of four large composite multirow bolted joint tests are presented. The tests were conducted to demonstrate the technology for critical joints in highly loaded composite structure and to verify the analytical methods that were developed throughout the program. The test consisted of a wing skin-stringer transition specimen representing a stringer runout and skin splice on the wing lower surface at the side of the fuselage attachment. All tests were static tension tests. The composite material was Toray T-300 fiber with Ciba-Geigy 914 resin in 10 mil tape form. The splice members were metallic, using combinations of aluminum and titanium. Discussions are given of the test article, instrumentation, test setup, test procedures, and test results for each of the four specimens. Some of the analytical predictions are also included.
A review of the analytical simulation of aircraft crash dynamics
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.
1990-01-01
A large number of full scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was also conducted on seat dynamic performance, load-limiting seats, load-limiting subfloor designs, and emergency locator transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact load attenuating concepts. Analytical simulation of metal and composite aircraft crash dynamics is addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.
NASA Technical Reports Server (NTRS)
Brinson, R. F.
1985-01-01
A method for lifetime or durability predictions for laminated fiber-reinforced plastics is given. The procedure is similar to, but not the same as, the well-known time-temperature superposition principle for polymers. The method is better described as an analytical adaptation of time-stress superposition methods. The analytical constitutive modeling is based upon a nonlinear viscoelastic constitutive model developed by Schapery. Time-dependent failure models are discussed and related to the constitutive models. Finally, results of an incremental lamination analysis using the constitutive and failure models are compared to experimental results. Favorable agreement between theory and experiment is shown using data from creep tests of about two months' duration.
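The superposition idea can be shown with a toy model (not Schapery's full nonlinear formulation): if creep compliance is a power law in reduced time t / a_σ, where a_σ is a stress-dependent shift factor, then curves measured at different stress levels collapse onto one master curve once their time axes are rescaled. The exponential shift-factor form and all numeric values below are assumptions for illustration.

```python
import math

# Toy illustration of time-stress superposition (not Schapery's model):
# creep compliance follows a power law in reduced time t / a_sigma, so
# curves at different stress levels collapse onto one master curve.

D0, D1, n = 1.0, 0.5, 0.2          # hypothetical compliance parameters
C, SIGMA_REF = 0.05, 10.0          # hypothetical shift-factor constants

def shift_factor(sigma):
    """Stress shift factor a_sigma (assumed exponential form)."""
    return math.exp(-C * (sigma - SIGMA_REF))

def compliance(t, sigma):
    """Creep compliance at time t under stress sigma."""
    return D0 + D1 * (t / shift_factor(sigma)) ** n

def master_curve(reduced_time):
    """Compliance at the reference stress vs. reduced time."""
    return D0 + D1 * reduced_time ** n

# A curve measured at sigma = 20 collapses onto the master curve once its
# time axis is divided by a_sigma:
sigma, t = 20.0, 100.0
collapsed = master_curve(t / shift_factor(sigma))
```

In practice the payoff is that short-term tests at elevated stress predict long-term compliance at the service stress, which is the basis of the durability predictions discussed above.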
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze the analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, used especially to simplify the relative comparison of objects on the basis of their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable for comparing analytical methods, or simply as a tool for optimizing an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate the analytical performance taking all indicators into account simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the reference laboratory, and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
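The core of the partial-order idea can be sketched in a few lines (illustrative only, not the authors' software): describe each laboratory by an indicator profile, here the bias |mean − true value|, the standard deviation, and |skewness|, and rank laboratory A above B only when A is no worse on every indicator. Laboratories that trade off one indicator against another remain incomparable, which is exactly what a single linear scale cannot express.

```python
# Sketch of partial-order evaluation of laboratories (illustrative values):
# lab A dominates lab B only if A is no worse on all indicators and
# strictly better on at least one; otherwise the labs are incomparable.

def profile(mean, sd, skew, true_value):
    return (abs(mean - true_value), abs(sd), abs(skew))

def dominates(a, b):
    """True if profile a is at least as good as b everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

TRUE = 50.0
labs = {
    "ref":  profile(50.0, 0.5, 0.0, TRUE),   # reference laboratory
    "lab1": profile(50.2, 0.8, 0.1, TRUE),
    "lab2": profile(49.0, 0.6, 0.3, TRUE),   # better sd than lab1, worse bias
    "lab3": profile(50.1, 1.5, 0.9, TRUE),
}

# Dominance relations of the partial order
better_than = {a: [b for b in labs if a != b and dominates(labs[a], labs[b])]
               for a in labs}
```

Here the reference laboratory dominates all others, while lab1, lab2 and lab3 are mutually incomparable; a Hasse diagram of `better_than` would display exactly this structure.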
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
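The standard trick for applying the (right-censoring) Kaplan-Meier estimator to left-censored concentrations can be sketched directly (illustrative only, not the authors' S-language routines): subtract each value from a constant larger than all the data, so nondetects become right-censored observations, fit the product-limit estimator, then flip the time axis back.

```python
# Flipping trick for left-censored data with the Kaplan-Meier product-limit
# estimator (illustrative sketch, not the authors' S-language software).

def kaplan_meier(times, observed):
    """Product-limit survival estimate; returns [(t, S(t))] at event times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, out = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = drop = 0
        while i < len(order) and times[order[i]] == t:
            if observed[order[i]]:
                deaths += 1          # uncensored observation at t
            drop += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            out.append((t, surv))
        at_risk -= drop
    return out

# Concentrations: (value, is_detect); nondetects are "< detection limit".
data = [(0.5, False), (0.7, True), (1.0, False), (1.2, True), (3.0, True)]
FLIP = 10.0                          # any constant above the largest value
flipped = [(FLIP - v, det) for v, det in data]
km = kaplan_meier([t for t, _ in flipped], [d for _, d in flipped])
# In the flipped scale, S(FLIP - c) estimates P(X < c) for the concentrations
cdf = [(FLIP - t, s) for t, s in km]
```

Note that, consistent with the abstract, the resulting empirical distribution function is defined only within the observed data range; no extrapolation below the smallest detect is performed.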
Revisitation of the dipole tracer test for heterogeneous porous formations
NASA Astrophysics Data System (ADS)
Zech, Alraune; D'Angelo, Claudia; Attinger, Sabine; Fiori, Aldo
2018-05-01
In this paper, a new analytical solution for interpreting dipole tests in heterogeneous media is derived by associating the shape of the tracer breakthrough curve with the log-conductivity variance. Three illustrative examples show how the solution can be used to interpret dipole field tests in view of geostatistical aquifer characterization. The analytical solution for the tracer breakthrough curve at the pumping well in a dipole tracer test is developed by considering a perfectly stratified formation. The analysis makes use of the travel time of a generic solute particle from the injection to the pumping well. Injection conditions are adapted to different possible field settings. Solutions are presented for resident and flux-proportional injection modes, as well as for an instantaneous pulse of solute and continuous solute injection. The analytical form of the solution allows a detailed investigation of the impact of heterogeneity, the tracer input conditions, and ergodicity conditions at the well. The impact of heterogeneity manifests as a significant spreading of solute particles that adds to the natural tendency toward spreading induced by the dipole setup. Furthermore, with increasing heterogeneity the number of layers needed to reach ergodic conditions becomes larger. Thus, dipole tests in highly heterogeneous aquifers might take place under non-ergodic conditions, with the result that the log-conductivity variance is underestimated. The method is a promising geostatistical analysis tool, being the first analytical solution for dipole tracer test interpretation that takes the heterogeneity of hydraulic conductivity into account.
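The stratified-formation picture behind the solution can be illustrated with a small Monte Carlo sketch (the paper's closed form itself is not reproduced here, and the unit geometry factor is an assumption): each layer carries a lognormal conductivity K, the travel time from injection to pumping well scales as 1/K, and under flux-proportional injection each layer's contribution to the breakthrough curve is weighted by its flux, proportional to K.

```python
import numpy as np

# Monte Carlo sketch of a dipole breakthrough curve in a perfectly stratified
# formation (illustrative; the paper's analytical solution is not reproduced).

def breakthrough(n_layers, sigma2, rng):
    """Flux-weighted cumulative breakthrough for a given log-K variance."""
    log_k = rng.normal(0.0, np.sqrt(sigma2), n_layers)
    k = np.exp(log_k)
    t = 1.0 / k                  # travel time per layer (geometry factor = 1)
    w = k / k.sum()              # flux-proportional injection weights
    order = np.argsort(t)
    return t[order], np.cumsum(w[order])

rng = np.random.default_rng(42)
t_lo, btc_lo = breakthrough(5000, 0.1, rng)   # mildly heterogeneous
t_hi, btc_hi = breakthrough(5000, 2.0, rng)   # strongly heterogeneous

# Heterogeneity stretches the breakthrough curve: the spread of arrival
# times grows with the log-conductivity variance.
spread_lo = t_lo[-1] / t_lo[0]
spread_hi = t_hi[-1] / t_hi[0]
```

The widening of the simulated curve with increasing log-K variance is the qualitative effect the analytical solution exploits when inferring the variance from a measured breakthrough curve.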
Sixty-five years since the New York heat wave: advances in sweat testing for cystic fibrosis.
Collie, Jake T B; Massie, R John; Jones, Oliver A H; LeGrys, Vicky A; Greaves, Ronda F
2014-02-01
The sweat test remains important as a diagnostic test for cystic fibrosis (CF) and has contributed greatly to our understanding of CF as a disease of epithelial electrolyte transport. The standardization of the sweat test, by Gibson and Cooke [Gibson and Cooke (1959) Pediatrics 1959;23:5], followed observations of excessive dehydration amongst patients with CF and confirmed the utility as a diagnostic test. Quantitative pilocarpine iontophoresis remains the gold standard for sweat induction, but there are a number of collection and analytical methods. The pathophysiology of electrolyte transport in sweat was described by Quinton [Quinton (1983) Nature 1983;301:421-422], and this complemented the developments in genetics that discovered the cystic fibrosis transmembrane conductance regulator (CFTR), an epithelial-based electrolyte transport protein. Knowledge of CF has since increased rapidly and further developments in sweat testing include: new collection methods, further standardization of the technique with international recommendations and age related reference intervals. More recently, sweat chloride values have been used as proof of effect for the new drugs that activate CFTR. However, there remain issues with adherence to sweat test guidelines in many countries and there are gaps in our knowledge, including reference intervals for some age groups and stability of sweat samples in transport. Furthermore, modern methods of elemental quantification need to be explored as alternatives to the original analytical methods for sweat electrolyte measurement. The purpose of this review is therefore to describe the development of the sweat test and consider future directions. © 2013 Wiley Periodicals, Inc.
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
Horbowy, Jan; Tomczak, Maciej T
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for the periods before surveys were conducted. In this paper, methods were developed and tested for the backward extension of analytical biomass assessments into years for which only total catch volumes are available. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock-density dependent if stock dynamics are governed by classical stock-production models. The other approach uses a modified form of the Schaefer production model that allows backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method employing a constant SPR, estimated as the average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant-SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
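The backward-extension idea is easiest to see with the simplest variant, a constant SPR (illustrative values only, not the paper's stock data). With surplus production S_t = B_{t+1} − B_t + C_t and SPR r = S_t / B_t, biomass steps backward from the first assessed year as B_t = (B_{t+1} + C_t) / (1 + r).

```python
# Sketch of backward biomass extension with a constant surplus production
# rate r (the simplest of the approaches discussed; synthetic values).
# Backward recursion: B_t = (B_{t+1} + C_t) / (1 + r)

def backward_biomass(b_first_assessed, catches, spr):
    """Reconstruct biomass for the years covered by `catches` (oldest->newest)."""
    biomass = [b_first_assessed]
    for c in reversed(catches):
        biomass.append((biomass[-1] + c) / (1.0 + spr))
    biomass.reverse()
    return biomass       # oldest year ... first assessed year

# Self-check on synthetic data: simulate forward with r = 0.2, reconstruct back.
r, catches = 0.2, [100.0, 120.0, 90.0]
b = [1000.0]
for c in catches:
    b.append(b[-1] * (1.0 + r) - c)   # forward: B_{t+1} = B_t (1 + r) - C_t
reconstructed = backward_biomass(b[-1], catches, r)
```

The paper's density-dependent SPR variants replace the constant r with a function of B_t, which makes the backward step implicit and is one reason performance becomes stock specific.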
Direct structural parameter identification by modal test results
NASA Technical Reports Server (NTRS)
Chen, J.-C.; Kuo, C.-P.; Garba, J. A.
1983-01-01
A direct identification procedure is proposed to obtain the mass and stiffness matrices from test-measured eigenvalues and eigenvectors. The method is based on the theory of matrix perturbation, in which the correct mass and stiffness matrices are expanded in terms of the analytical values plus a modification matrix. The simplicity of the procedure enables real-time operation during structural testing.
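The abstract does not give its formulas, but the "analytical matrix plus modification matrix" idea can be illustrated with the classical Berman-Nagy direct mass-matrix update (one well-known instance of this family, not necessarily the authors' exact procedure). Given the analytical mass matrix M and test-measured mode shapes Φ, the update M_u = M + M Φ A⁻¹ (I − A) A⁻¹ Φᵀ M with A = Φᵀ M Φ restores mass orthonormality Φᵀ M_u Φ = I.

```python
import numpy as np

# Berman-Nagy direct mass-matrix update: one concrete instance of the
# "analytical matrix + modification matrix" idea (illustrative; the
# abstract's own formulas are not given).

def berman_mass_update(M, Phi):
    """Update analytical M so the measured modes Phi become mass-orthonormal."""
    A = Phi.T @ M @ Phi
    A_inv = np.linalg.inv(A)
    I = np.eye(A.shape[0])
    # Modification matrix added to the analytical mass matrix
    return M + M @ Phi @ A_inv @ (I - A) @ A_inv @ Phi.T @ M

rng = np.random.default_rng(1)
M = np.diag([2.0, 1.0, 3.0, 1.5])   # analytical (here lumped) mass matrix
Phi = rng.normal(size=(4, 2))       # two "measured" mode shapes
M_u = berman_mass_update(M, Phi)
```

The identity Φᵀ M_u Φ = I follows by substitution, and the correction is symmetric, so M_u remains a valid mass matrix candidate; a companion stiffness update (Baruch-type) uses the measured eigenvalues in the same spirit.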
Patel, Jayshree; Mulhall, Brian; Wolf, Heinz; Klohr, Steven; Guazzo, Dana Morton
2011-01-01
A leak test performed according to ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method was developed and validated for container-closure integrity verification of a lyophilized product in a parenteral vial package system. This nondestructive leak test method is intended for use in manufacturing as an in-process package integrity check, and for testing product stored on stability in lieu of sterility tests. Method development and optimization challenge studies incorporated artificially defective packages representing a range of glass vial wall and sealing surface defects, as well as various elastomeric stopper defects. Method validation required 3 days of random-order replicate testing of a test sample population of negative-control, no-defect packages and positive-control, with-defect packages. Positive-control packages were prepared using vials each with a single hole laser-drilled through the glass vial wall. Hole creation and hole size certification was performed by Lenox Laser. Validation study results successfully demonstrated the vacuum decay leak test method's ability to accurately and reliably detect those packages with laser-drilled holes greater than or equal to approximately 5 μm in nominal diameter. Total test time is less than 1 min per package. All development and validation studies were performed at Whitehouse Analytical Laboratories in Whitehouse, NJ, under the direction of consultant Dana Guazzo of RxPax, LLC, using a VeriPac 455 Micro Leak Test System by Packaging Technologies & Inspection (Tuckahoe, NY). Bristol Myers Squibb (New Brunswick, NJ) fully subsidized all work.
Learn about the EPA guide (Selected Analytical Methods for Environmental Remediation and Recovery) that helps labs around the country quickly select the appropriate environmental testing and analysis methods to use after a wide-scale chemical event.
77 FR 14814 - Tobacco Product Analysis; Scientific Workshop; Request for Comments
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-13
... work to develop tobacco reference products that are not currently available for laboratory use. Discuss... methods used to analyze tobacco products. FDA will invite speakers to address scientific and technical matters relating to the testing of tobacco reference products and the analytical methods used to measure...
Park, Jeong Mee; Yong, Sang Yeol; Kim, Jong Heon; Kim, Hee; Park, Sang-Yoo
2014-01-01
Objective: To compare the diagnostic rates of the two widely used test positions for measuring vestibular evoked myogenic potentials (VEMP), and to select the most appropriate analytical method for diagnostic criteria in patients with vertigo. Methods: Thirty-two patients with vertigo were tested in two comparative testing positions: turning the head to the side opposite the evaluated side and bowing while seated, and bowing while supine. Abnormalities were determined by prolonged latency of p13 or n23, shortening of the interpeak latency, and absence of VEMP formation. Results: Using the three criteria above for determining abnormalities, the seated and supine positions showed no significant difference in diagnostic rates; however, the concordance correlation of the two positions was low. When using only the prolonged latency of p13 or n23 in the two positions, diagnostic rates were not significantly different and their concordance correlation was high. On the other hand, using only the shortened interpeak latency in both positions showed no significant difference in diagnostic rates, and the degree of agreement between the two positions was low. Conclusion: Bowing while seated with the head turned in the direction opposite to the area being evaluated is found to be the best VEMP test position, due to the consistent level of sternocleidomastoid muscle tension and the high level of compliance. Also, among the diagnostic analysis methods, using prolonged latency of p13 or n23 as the criterion is found to be the most appropriate method of analysis for the VEMP test. PMID:24855617
NASA Astrophysics Data System (ADS)
Tarasenko, Alexander
2018-01-01
Diffusion of particles adsorbed on a homogeneous one-dimensional lattice is investigated using a theoretical approach and MC simulations. The analytical dependencies calculated in the framework of this approach are tested against the numerical data. The perfect coincidence of the data obtained by these different methods demonstrates the correctness of the approach, which is based on the theory of the non-equilibrium statistical operator.
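A minimal Monte Carlo sketch of the model class studied above can be written in a few lines (illustrative only; the paper's analytical machinery, the non-equilibrium statistical operator, is not reproduced here): hard-core particles hop to nearest-neighbour sites on a one-dimensional ring, and a hop is rejected if the target site is occupied.

```python
import random

# Minimal MC sketch of hard-core lattice-gas diffusion on a 1D ring
# (illustrative; not the paper's analytical treatment).

def simulate(n_sites, n_particles, n_steps, seed=0):
    rng = random.Random(seed)
    occ = [1] * n_particles + [0] * (n_sites - n_particles)
    rng.shuffle(occ)
    for _ in range(n_steps):
        site = rng.randrange(n_sites)
        if not occ[site]:
            continue                                     # picked an empty site
        target = (site + rng.choice((-1, 1))) % n_sites  # nearest-neighbour hop
        if not occ[target]:                              # hard-core exclusion
            occ[site], occ[target] = 0, 1
    return occ

occ = simulate(n_sites=100, n_particles=30, n_steps=10_000)
```

In a study like the one above, tracer and collective diffusion coefficients would be extracted from such trajectories (e.g., via mean-square displacement) and compared with the analytical dependencies.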
Students' science process skill and analytical thinking ability in chemistry learning
NASA Astrophysics Data System (ADS)
Irwanto, Rohaeti, Eli; Widjajanti, Endang; Suyanta
2017-08-01
Science process skill and analytical thinking ability are needed in chemistry learning in the 21st century. Analytical thinking is related to science process skill, which is used by students to solve complex and unstructured problems. Thus, this research aims to determine the science process skill and analytical thinking ability of senior high school students in chemistry learning. The research was conducted in Tiga Maret Yogyakarta Senior High School, Indonesia, in the middle of the first semester of the 2015/2016 academic year, using the survey method. The survey involved 21 grade XI students as participants. Students were given a set of test questions consisting of 15 essay questions. The results indicated that the science process skill and analytical thinking ability were relatively low, i.e., 30.67%. Therefore, teachers need to improve the students' cognitive and psychomotor domains effectively in the learning process.
21 CFR 177.1960 - Vinyl chloride-hexene-1 copolymers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... determined by any suitable analytical procedure of generally accepted applicability. (ii) Inherent viscosity... D1243-79, “Standard Test Method for Dilute Solution Viscosity of Vinyl Chloride Polymers,” which is...
Big data in sleep medicine: prospects and pitfalls in phenotyping
Bianchi, Matt T; Russo, Kathryn; Gabbidon, Harriett; Smith, Tiaundra; Goparaju, Balaji; Westover, M Brandon
2017-01-01
Clinical polysomnography (PSG) databases are a rich resource in the era of “big data” analytics. We explore the uses and potential pitfalls of clinical data mining of PSG using statistical principles and analysis of clinical data from our sleep center. We performed retrospective analysis of self-reported and objective PSG data from adults who underwent overnight PSG (diagnostic tests, n=1835). Self-reported symptoms overlapped markedly between the two most common categories, insomnia and sleep apnea, with the majority reporting symptoms of both disorders. Standard clinical metrics routinely reported on objective data were analyzed for basic properties (missing values, distributions), pairwise correlations, and descriptive phenotyping. Of 41 continuous variables, including clinical and PSG derived, none passed testing for normality. Objective findings of sleep apnea and periodic limb movements were common, with 51% having an apnea–hypopnea index (AHI) >5 per hour and 25% having a leg movement index >15 per hour. Different visualization methods are shown for common variables to explore population distributions. Phenotyping methods based on clinical databases are discussed for sleep architecture, sleep apnea, and insomnia. Inferential pitfalls are discussed using the current dataset and case examples from the literature. The increasing availability of clinical databases for large-scale analytics holds important promise in sleep medicine, especially as it becomes increasingly important to demonstrate the utility of clinical testing methods in management of sleep disorders. Awareness of the strengths, as well as caution regarding the limitations, will maximize the productive use of big data analytics in sleep medicine. PMID:28243157
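The normality screening mentioned above ("none passed testing for normality") can be sketched with a standard test applied per variable; the data here are synthetic stand-ins, not the clinical PSG dataset, and the variable names are hypothetical.

```python
import numpy as np
from scipy import stats

# Sketch of per-variable normality screening (synthetic variables, not the
# clinical PSG data): Shapiro-Wilk test on each continuous variable.
# Heavily right-skewed clinical metrics such as the AHI typically fail.

rng = np.random.default_rng(7)
variables = {
    "roughly_normal": rng.normal(50.0, 10.0, 500),
    "ahi_like": rng.exponential(10.0, 500),   # heavy right skew, like an AHI
}
p_values = {name: stats.shapiro(x).pvalue for name, x in variables.items()}
non_normal = [name for name, p in p_values.items() if p < 0.05]
```

When most variables fail such a screen, as in the dataset described above, nonparametric summaries (medians, rank correlations) are usually the safer choice for descriptive phenotyping.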
Musijowski, Jacek; Trojanowicz, Marek; Szostek, Bogdan; da Costa Lima, José Luis Fontes; Lapa, Rui; Yamashita, Hiroki; Takayanagi, Toshio; Motomizu, Shoji
2007-09-26
Considering recent reports on the widespread occurrence of, and concerns about, perfluoroalkyl substances (PFAS) in environmental and biological systems, the analysis of these compounds has gained much attention in recent years. The majority of analyte-specific methods are based on LC/MS/MS or GC/MS detection; however, many environmental or biological studies would benefit from a total organic fluorine (TOF) determination. The presented work aimed at developing a method for TOF determination. TOF is determined as the amount of inorganic fluoride obtained after a defluorination reaction conducted off-line using sodium biphenyl reagent directly on the sorbent, without elution of the retained analytes. Recovered fluoride was analyzed using a flow-injection system with either fluorimetric or potentiometric detection. The TOF method was tested using perfluorocarboxylic acids (PFCA), including perfluorooctanoic acid (PFOA), as model compounds. Considering the low concentrations of PFAS in natural samples, solid-phase extraction was evaluated as a preconcentration procedure. Several carbon-based sorbents were tested, namely multi-wall carbon nanotubes, carbon nanofibres, and activated carbon. Good sorption of all analytes was achieved, and the defluorination reaction could be carried out directly on the sorbent bed. Recoveries obtained for PFCAs adsorbed on an activated carbon sorbent and measured as TOF were 99.5±1.7%, 110±9.4%, 95±26%, 120±32%, and 110±12% for C4, C6, C8, C10, and C12-PFCA, respectively. Two flow systems that would enable the defluorination reaction and fluoride determination in a single system were designed and tested.
Park, Jin-A; Abd El-Aty, A M; Zheng, Weijia; Kim, Seong-Kwan; Choi, Jeong-Min; Hacımüftüoğlu, Ahmet; Shim, Jae-Han; Shin, Ho-Chul
2018-06-01
In this work, a method was developed for the simultaneous determination of residual metoserpate, buquinolate and diclofenac in pork, milk, and eggs. Samples were extracted with 0.1% formic acid in acetonitrile, defatted with n-hexane, and filtered prior to analysis using liquid chromatography-tandem mass spectrometry. The analytes were separated on a C18 column using 0.1% acetic acid and methanol as the mobile phase. The matrix-matched calibration curves showed good linearity over a concentration range of 5-50 ng/g with coefficients of determination (R²) ≥0.991. The intra- and inter-day accuracies (expressed as recovery percentage values) calculated using three spiking levels (5, 10, and 20 μg/kg) were 80-108.65% and 74.06-107.15%, respectively, and the precisions (expressed as relative standard deviation) were 2.86-13.67% and 0.05-11.74%, respectively, for the tested drugs determined in various matrices. The limits of quantification (1 and 2 μg/kg) were below the uniform residual level (0.01 mg/kg) set for compounds that have no specific maximum residue limit (MRL). The developed method was tested using market samples, and none of the target analytes was detected in any of the samples. The validated method proved to be practicable for detection of the tested analytes in pork, milk, and eggs. Copyright © 2018 John Wiley & Sons, Ltd.
Gordeev, Konstantin; Shinkarev, Sergey; Ilyin, Leonid; Bouville, André; Hoshi, Masaharu; Luckyanov, Nickolas; Simon, Steven L
2006-02-01
A short analysis was made of all 111 atmospheric events conducted at the Semipalatinsk Test Site (STS) in 1949-1962 with regard to significant off-site exposure (more than 5 mSv of effective dose during the first year after the explosion). The analytical method used to assess external exposure to the residents living in settlements near the STS is described. This method makes use of archival data on the radiological conditions, including measurements of exposure rate. Special attention was given to the residents of Dolon and Kanonerka villages, exposed mainly as a result of the first test, detonated on August 29, 1949. For the residents of those settlements born in 1935, the dose estimates calculated according to the analytical method are compared to those derived from thermoluminescence measurements in bricks and electron paramagnetic resonance measurements in teeth. The methods described in this paper were used for external dose assessment for the cohort members at an initial stage of an ongoing epidemiological study conducted by the U.S. National Cancer Institute in the Republic of Kazakhstan. Recently revised methods and estimates of external exposure for that cohort are given in another paper (Simon et al.) in this conference.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2015-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology demonstration via flight testing. Hypersonic Inflatable Aerodynamic Decelerator (HIAD) architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. This publication summarizes results comparing analytical predictions with test data for two concepts subjected to static loading representative of entry conditions. The level of agreement and the ability to predict the load distribution are considered sufficient to enable analytical predictions to be used in the design process.
Parrinello, Christina M.; Grams, Morgan E.; Couper, David; Ballantyne, Christie M.; Hoogeveen, Ron C.; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef
2016-01-01
Background Equivalence of laboratory tests over time is important for longitudinal studies. Even a small systematic difference (bias) can result in substantial misclassification. Methods We selected 200 Atherosclerosis Risk in Communities Study participants attending all 5 study visits over 25 years. Eight analytes were re-measured in 2011–13 from stored blood samples from multiple visits: creatinine, uric acid, glucose, total cholesterol, HDL-cholesterol, LDL-cholesterol, triglycerides, and high-sensitivity C-reactive protein. Original values were recalibrated to re-measured values using Deming regression. Differences >10% were considered to reflect substantial bias, and correction equations were applied to affected analytes in the total study population. We examined trends in chronic kidney disease (CKD) pre- and post-recalibration. Results Repeat measures were highly correlated with original values (Pearson’s r>0.85 after removing outliers [median 4.5% of paired measurements]), but 2 of 8 analytes (creatinine and uric acid) had differences >10%. Original values of creatinine and uric acid were recalibrated to current values using correction equations. CKD prevalence differed substantially after recalibration of creatinine (visits 1, 2, 4 and 5 pre-recalibration: 21.7%, 36.1%, 3.5%, 29.4%; post-recalibration: 1.3%, 2.2%, 6.4%, 29.4%). For HDL-cholesterol, the current direct enzymatic method differed substantially from magnesium dextran precipitation used during visits 1–4. Conclusions Analytes re-measured in samples stored for ~25 years were highly correlated with original values, but two of the 8 analytes showed substantial bias at multiple visits. Laboratory recalibration improved reproducibility of test results across visits and resulted in substantial differences in CKD prevalence. We demonstrate the importance of consistent recalibration of laboratory assays in a cohort study. PMID:25952043
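The recalibration step described above fits original to re-measured values with Deming regression, which allows for measurement error in both variables. A minimal sketch, using the standard closed-form Deming estimator (variance ratio δ = 1, i.e. orthogonal regression) and hypothetical paired creatinine values rather than the study's data, might look like:

```python
import statistics
from math import sqrt

def deming_fit(x, y, delta=1.0):
    """Deming regression slope/intercept when both x and y carry
    measurement error with error-variance ratio `delta` (1.0 = orthogonal)."""
    n = len(x)
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, ybar - slope * xbar

# Hypothetical paired creatinine values (mg/dL): original vs. re-measured.
original = [0.8, 1.0, 1.2, 1.5, 2.0, 2.4]
remeasured = [0.95, 1.18, 1.41, 1.75, 2.32, 2.78]
slope, intercept = deming_fit(original, remeasured)
recalibrated = [slope * v + intercept for v in original]
print(f"slope={slope:.3f} intercept={intercept:.3f}")
```

Applying `slope * x + intercept` to original values yields the recalibrated series; in practice δ would be set from the assays' known imprecision rather than defaulted to 1.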
Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield
NASA Technical Reports Server (NTRS)
Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)
2001-01-01
New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem due to the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses were performed to predict the behavior of the sunshield under the ground test conditions. Modal analyses were performed to predict the frequencies and mode shapes of the test article, and transient response analyses were completed to simulate impulse excitation tests. Comparison was made between analytical predictions and test measurements for the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequency and mode shapes for the significant structural modes.
CAP/ACMG proficiency testing for biochemical genetics laboratories: a summary of performance.
Oglesbee, Devin; Cowan, Tina M; Pasquali, Marzia; Wood, Timothy C; Weck, Karen E; Long, Thomas; Palomaki, Glenn E
2018-01-01
Purpose: Testing for inborn errors of metabolism is performed by clinical laboratories worldwide, each utilizing laboratory-developed procedures. We sought to summarize performance in the College of American Pathologists' (CAP) proficiency testing (PT) program and identify opportunities for improving laboratory quality. When evaluating PT data, we focused on a subset of laboratories that have participated in at least one survey since 2010. Methods: An analysis of laboratory performance (2004 to 2014) on the Biochemical Genetics PT Surveys, a program administered by CAP and the American College of Medical Genetics and Genomics. Analytical and interpretive performance was evaluated for four tests: amino acids, organic acids, acylcarnitines, and mucopolysaccharides. Results: Since 2010, 150 laboratories have participated in at least one of four PT surveys. Analytic sensitivities ranged from 88.2 to 93.4%, while clinical sensitivities ranged from 82.4 to 91.0%. Performance was higher for US participants and for more recent challenges. Performance was lower for challenges with subtle findings or complex analytical patterns. Conclusion: US clinical biochemical genetics laboratory proficiency is satisfactory, with a minority of laboratories accounting for the majority of errors. Our findings underscore the complex nature of clinical biochemical genetics testing and highlight the necessity of continuous quality management.
NASA Technical Reports Server (NTRS)
Andreadis, Dean; Drake, Alan; Garrett, Joseph L.; Gettinger, Christopher D.; Hoxie, Stephen S.
2003-01-01
The development and ground test of a rocket-based combined cycle (RBCC) propulsion system is being conducted as part of the NASA Marshall Space Flight Center (MSFC) Integrated System Test of an Airbreathing Rocket (ISTAR) program. The eventual flight vehicle (X-43B) is designed to support an air-launched self-powered Mach 0.7 to 7.0 demonstration of an RBCC engine through all of its airbreathing propulsion modes - air augmented rocket (AAR), ramjet (RJ), and scramjet (SJ). Through the use of analytical tools, numerical simulations, and experimental tests the ISTAR program is developing and validating a hydrocarbon-fueled RBCC combustor design methodology. This methodology will then be used to design an integrated RBCC propulsion system that produces robust ignition and combustion stability characteristics while maximizing combustion efficiency and minimizing drag losses. First order analytical and numerical methods used to design hydrocarbon-fueled combustors are discussed with emphasis on the methods and determination of requirements necessary to establish engine operability and performance characteristics.
NASA Technical Reports Server (NTRS)
Andreadis, Dean; Drake, Alan; Garrett, Joseph L.; Gettinger, Christopher D.; Hoxie, Stephen S.
2002-01-01
The development and ground test of a rocket-based combined cycle (RBCC) propulsion system is being conducted as part of the NASA Marshall Space Flight Center (MSFC) Integrated System Test of an Airbreathing Rocket (ISTAR) program. The eventual flight vehicle (X-43B) is designed to support an air-launched self-powered Mach 0.7 to 7.0 demonstration of an RBCC engine through all of its airbreathing propulsion modes - air augmented rocket (AAR), ramjet (RJ), and scramjet (SJ). Through the use of analytical tools, numerical simulations, and experimental tests the ISTAR program is developing and validating a hydrocarbon-fueled RBCC combustor design methodology. This methodology will then be used to design an integrated RBCC propulsion system that produces robust ignition and combustion stability characteristics while maximizing combustion efficiency and minimizing drag losses. First order analytical and numerical methods used to design hydrocarbon-fueled combustors are discussed with emphasis on the methods and determination of requirements necessary to establish engine operability and performance characteristics.
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
An analytical technique was developed to predict the behavior of a rotor system subjected to sudden unbalance. The technique is implemented in the Turbine Engine Transient Rotor Analysis (TETRA) computer program using the component element method. The analysis was particularly aimed toward blade-loss phenomena in gas turbine engines. A dual-rotor, casing, and pylon structure can be modeled by the computer program. Blade tip rubs, Coriolis forces, and mechanical clearances are included. The analytical system was verified by modeling and simulating actual test conditions for a rig test as well as a full-engine, blade-release demonstration.
Smalley, James; Marino, Anthony M; Xin, Baomin; Olah, Timothy; Balimane, Praveen V
2007-07-01
Caco-2 cells, the human colon carcinoma cells, are typically used for screening compounds for their permeability characteristics and P-glycoprotein (P-gp) interaction potential during discovery and development. The P-gp inhibition of test compounds is assessed by performing bi-directional permeability studies with digoxin, a well established P-gp substrate probe. Studies performed with digoxin alone, as well as digoxin in the presence of test compounds as putative inhibitors, constitute the P-gp inhibition assay used to assess the potential liability of discovery compounds. Radiolabeled ³H-digoxin is commonly used in such studies, followed by liquid scintillation counting. This manuscript describes the development of a sensitive, accurate, and reproducible LC-MS/MS method for analysis of digoxin and its internal standard digitoxin using on-line extraction turbulent flow chromatography coupled to tandem mass spectrometric detection that is amenable to high throughput with use of 96-well plates. The standard curve for digoxin was linear between 10 nM and 5000 nM with a regression coefficient (R²) of 0.99. The applicability and reliability of the analysis method were evaluated by successful demonstration of an efflux ratio (permeability B to A over permeability A to B) greater than 10 for digoxin in Caco-2 cells. Additional evaluations were performed on 13 marketed compounds by conducting inhibition studies in Caco-2 cells using classical P-gp inhibitors (ketoconazole, cyclosporin, verapamil, quinidine, saquinavir, etc.) and comparing the results to historical data from ³H-digoxin studies. Similarly, P-gp inhibition studies with the LC-MS/MS analytical method for digoxin were also performed for 21 additional test compounds classified as negative, moderate, and potent P-gp inhibitors spanning multiple chemotypes, and the results were compared with the historical P-gp inhibition data from the ³H-digoxin studies. A very good correlation coefficient (R²) of 0.89 between the results from the two analytical methods affords an attractive LC-MS/MS analytical option for labs that need to conduct the P-gp inhibition assay without using radiolabeled compounds.
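The efflux-ratio criterion used in this assay can be illustrated with the standard apparent-permeability formula, Papp = (dQ/dt)/(A·C0). The transport rates and inhibition calculation below are hypothetical illustrations, not values from the study:

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0): flux normalized by membrane area
    and initial donor concentration."""
    return dq_dt / (area_cm2 * c0)

def efflux_ratio(papp_ab, papp_ba):
    """Basolateral-to-apical over apical-to-basolateral permeability."""
    return papp_ba / papp_ab

def inhibition_percent(er_control, er_with_inhibitor):
    """Percent reduction of the control efflux ratio toward unity
    (unity = no net efflux)."""
    return 100.0 * (er_control - er_with_inhibitor) / (er_control - 1.0)

# Hypothetical digoxin transport rates (amount/s), 1.12 cm2 insert,
# 5 uM donor concentration; none of these numbers comes from the paper.
papp_ab = apparent_permeability(2.8e-6, 1.12, 5.0)
papp_ba = apparent_permeability(3.4e-5, 1.12, 5.0)
er = efflux_ratio(papp_ab, papp_ba)
print(f"efflux ratio = {er:.1f}  P-gp substrate flag (ER > 10): {er > 10}")
```

With a putative inhibitor co-dosed, the same calculation on the inhibited rates gives the reduced efflux ratio fed into `inhibition_percent`.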
NASA Technical Reports Server (NTRS)
Naumann, E. C.; Catherines, D. S.; Walton, W. C., Jr.
1971-01-01
Experimental and analytical investigations of the vibratory behavior of ring-stiffened truncated-cone shells are described. Vibration tests were conducted on 60 deg conical shells having up to four ring stiffeners, with free-free and clamped-free edge constraints, and on 9 deg conical shells of two thicknesses, each with two angle rings, with free-free, free-clamped, and clamped-clamped edge constraints. The analytical method is based on linear thin shell theory, employing the Rayleigh-Ritz method. Discrete rings are represented as composed of one or more segments, each of which is a short truncated-cone shell of uniform thickness. Equations of constraint are used to join a ring and shell along a circumferential line connection. Excellent agreement was obtained in comparisons of experimental and calculated frequencies.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research represents a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples represented by tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method are used for this comparison as the most widely used methods for lipidomic analysis, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method showing promising results in metabolomics analyses. The nontargeted analysis of pooled samples is performed using all tested methods, and 610 lipid species within 23 lipid classes are identified. The DI method provides the most comprehensive results due to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, within a 10 min method time. The sample consumption of the DI method is 125 times higher than for the other methods, while only 40 μL of organic solvent is used for one sample analysis, compared to 3.5 mL and 4.9 mL in the case of the UHPLC and UHPSFC methods, respectively. The methods are validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set during model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M
2014-12-01
The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures-personality, cognitive style, motivated cognition-predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.
Forlines, Clifton L.; Miller, Sarah M.; Regan, John R.; Irvine, John M.
2014-01-01
The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures—personality, cognitive style, motivated cognition—predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated “top-down” cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains. PMID:25983670
Development and Preparation of Lead-Containing Paint Films and Diagnostic Test Materials
Lead in paint continues to be a threat to children’s health in cities across the United States, which means there is an ongoing need for testing and analysis of paint. This ongoing analytical effort and especially development of new methods continue to drive the need for diagnost...
Applications of flight control system methods to an advanced combat rotorcraft
NASA Technical Reports Server (NTRS)
Tischler, Mark B.; Fletcher, Jay W.; Morris, Patrick M.; Tucker, George T.
1989-01-01
Advanced flight control system design, analysis, and testing methodologies developed at the Ames Research Center are applied in an analytical and flight test evaluation of the Advanced Digital Optical Control System (ADOCS) demonstrator. The primary objectives are to describe the knowledge gained about the implications of digital flight control system design for rotorcraft, and to illustrate the analysis of the resulting handling qualities in the context of the proposed new handling-qualities specification for rotorcraft. Topics covered in depth are digital flight control design and analysis methods, flight testing techniques, ADOCS handling-qualities evaluation results, and correlation of flight test results with analytical models and the proposed handling-qualities specification. The evaluation of the ADOCS demonstrator indicates desirable response characteristics based on equivalent damping and frequency, but undesirably large effective time delays (exceeding 240 msec in all axes). Piloted handling qualities are found to be desirable or adequate for all low, medium, and high pilot gain tasks, but handling qualities are inadequate for ultra-high gain tasks such as slope and running landings.
Improvement of sampling plans for Salmonella detection in pooled table eggs by use of real-time PCR.
Pasquali, Frédérique; De Cesare, Alessandra; Valero, Antonio; Olsen, John Emerdhal; Manfreda, Gerardo
2014-08-01
Eggs and egg products have been described as the most critical food vehicles of salmonellosis. The prevalence and level of contamination of Salmonella on table eggs are low, which severely affects the sensitivity of sampling plans applied voluntarily in some European countries, where one to five pools of 10 eggs are tested by the culture-based reference method ISO 6579:2004. In the current study we compared the testing sensitivity of the reference culture method ISO 6579:2004 and an alternative real-time PCR method on Salmonella-contaminated egg pools of different sizes (4-9 uninfected eggs mixed with one contaminated egg) and contamination levels (10⁰-10¹, 10¹-10², and 10²-10³ CFU/eggshell). Two hundred and seventy samples, corresponding to 15 replicates per pool size and inoculum level, were tested. At the lowest contamination level, real-time PCR detected Salmonella in 40% of contaminated pools vs 12% using ISO 6579. The results were used in a Monte Carlo simulation to estimate the lowest number of sample units needed to be tested in order to have 95% certainty of not falsely accepting a contaminated lot. According to this simulation, at least 16 pools of 10 eggs each need to be tested by ISO 6579 in order to obtain this confidence level, while the minimum number of pools to be tested was reduced to 8 pools of 9 eggs each when real-time PCR was applied as the analytical method. This result underlines the importance of including analytical methods with higher sensitivity in order to improve the efficiency of sampling and reduce the number of samples to be tested. Copyright © 2013 Elsevier B.V. All rights reserved.
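The pool-count question answered by the Monte Carlo simulation has a simple closed-form analogue: if each contaminated pool is detected with probability p, the smallest n satisfying 1 - (1 - p)^n >= 0.95 gives the required confidence of at least one positive pool. The sketch below uses the per-pool detection rates reported in the abstract (40% for real-time PCR, 12% for ISO 6579 at the lowest contamination level) but deliberately ignores the additional factors the authors' simulation accounted for, so its pool counts are illustrative only and will not match the paper's 16 and 8:

```python
from math import ceil, log

def min_pools(per_pool_sensitivity, confidence=0.95):
    """Smallest n with P(at least one positive pool) >= confidence
    when testing n pools from a contaminated lot, i.e. the smallest
    n satisfying 1 - (1 - p)^n >= confidence."""
    p = per_pool_sensitivity
    return ceil(log(1.0 - confidence) / log(1.0 - p))

# Detection rates at the lowest contamination level, from the abstract.
for label, p in [("culture (ISO 6579)", 0.12), ("real-time PCR", 0.40)]:
    print(f"{label}: p={p:.2f} -> {min_pools(p)} pools for 95% confidence")
```

The qualitative conclusion matches the abstract: the more sensitive per-pool method cuts the number of pools required by a large factor.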
Three-dimensional eddy current solution of a polyphase machine test model (abstract)
NASA Astrophysics Data System (ADS)
Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado
1994-05-01
This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from those of the analytical solution. Also the local distribution of the losses is at any given point less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.
Laminar flow control perforated wing panel development
NASA Technical Reports Server (NTRS)
Fischler, J. E.
1986-01-01
Many structural concepts for a wing leading edge laminar flow control hybrid panel were analytically investigated. After many small, medium, and large tests, the selected design was verified. New analytic methods were developed to combine porous titanium sheet bonded to a substructure of fiberglass and carbon/epoxy cloth. At -65 and +160 F test conditions, the critical bond of the porous titanium to the composite failed at lower than anticipated test loads. New cure cycles, design improvements, and test improvements significantly improved the strength and reduced the deflections from thermal and lateral loadings. The wave tolerance limits for turbulence were not exceeded. Consideration of the beam column midbay deflections from the combinations of the axial and lateral loadings and thermal bowing at -65 F, room temperature, and +160 F were included. Many lap shear tests were performed at several cure cycles. Results indicate that sufficient verification was obtained to fabricate a demonstration vehicle.
Design, fabrication and test of graphite/epoxy metering truss structure components, phase 3
NASA Technical Reports Server (NTRS)
1974-01-01
The design, materials, tooling, manufacturing processes, quality control, test procedures, and results associated with the fabrication and test of graphite/epoxy metering truss structure components exhibiting a near zero coefficient of thermal expansion are described. Analytical methods were utilized, with the aid of a computer program, to define the most efficient laminate configurations in terms of thermal behavior and structural requirements. This was followed by an extensive material characterization and selection program, conducted for several graphite/graphite/hybrid laminate systems to obtain experimental data in support of the analytical predictions. Mechanical property tests as well as the coefficient of thermal expansion tests were run on each laminate under study, the results of which were used as the selection criteria for the single most promising laminate. Further coefficient of thermal expansion measurement was successfully performed on three subcomponent tubes utilizing the selected laminate.
Immunochemical analytical methods for the determination of peanut proteins in foods.
Whitaker, Thomas B; Williams, Kristina M; Trucksess, Mary W; Slate, Andrew B
2005-01-01
Peanut proteins can cause allergic reactions with respiratory and circulatory effects, sometimes leading to shock and death. The determination of peanut proteins in foods by analytical methods can reduce the risk of serious reactions in highly sensitized individuals by allowing these proteins to be detected at various stages of the manufacturing process. The method performance of 4 commercially available enzyme-linked immunosorbent assay (ELISA) kits was evaluated for the detection of peanut proteins in milk chocolate, ice cream, cookies, and breakfast cereals: ELISA-TEK Peanut Protein Assay, now known as "Bio-Kit" for peanut proteins, from ELISA Technologies Inc.; Veratox for Peanut Allergens from Neogen Corp.; RIDASCREEN Peanut Kit from R-Biopharm GmbH; and ProLisa from Canadian Food Technology Ltd. The 4 test kits were evaluated for accuracy (recovery) and precision using known concentrations of peanut or peanut proteins in the 4 food matrixes. Two different techniques, incurred and spiked, were used to prepare samples at 4 known concentrations of peanut protein: defatted peanut flour was added for the incurred samples, and water-soluble peanut proteins were added for the spiked samples. The incurred levels were 0.0, 10, 20, and 100 microg whole peanut per g food; the spiked levels were 0.0, 5, 10, and 20 microg peanut protein per g food. Performance varied by test kit, protein concentration, and food matrix. The Veratox kit had the best accuracy, i.e., the lowest percent difference between measured and incurred levels (15.7%) when averaged across all incurred levels and food matrixes. Recoveries associated with the Veratox kit varied from 93 to 115% for all food matrixes except cookies; recoveries for all kits were about 50% for cookies. The analytical precision, as measured by the variance, increased with an increase in protein concentration.
However, the coefficient of variation (CV) was stable across the 4 incurred protein levels and averaged 7.0% across the 4 food matrixes and analytical kits. The R-Biopharm test kit had the best precision, with a CV of 4.2% when averaged across all incurred levels and food matrixes. Because measured protein values varied by test kit and food matrix, a method was developed to normalize, or transform, measured protein concentrations to an adjusted protein value equal to the known protein concentration. The normalization method adjusts measured protein values to equal the true protein value regardless of the type of test kit or food matrix.
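The abstract does not give the normalization formula itself; the sketch below shows one plausible form of such an adjustment, a per-kit, per-matrix scale factor fitted by least squares (the helper functions and all numeric values are hypothetical illustrations, not the study's data):

```python
import numpy as np

def fit_normalization(measured, known):
    # Least-squares scale factor mapping measured concentrations to the
    # known (incurred or spiked) concentrations for one kit/matrix pair.
    measured = np.asarray(measured, dtype=float)
    known = np.asarray(known, dtype=float)
    return measured.dot(known) / measured.dot(measured)

def normalize(measured, factor):
    # Adjust measured values so they approximate the true values.
    return np.asarray(measured, dtype=float) * factor

# Hypothetical kit recovering roughly 50% in a cookie matrix
known = [5.0, 10.0, 20.0]        # spiked peanut protein, microg/g
measured = [2.6, 5.1, 9.8]       # values the kit reports
f = fit_normalization(measured, known)
adjusted = normalize(measured, f)
```

With a single multiplicative factor per kit/matrix combination, the adjusted values can be compared across kits on a common scale.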
Lubrication and cooling for high speed gears
NASA Technical Reports Server (NTRS)
Townsend, D. P.
1985-01-01
The problems and failures occurring with the operation of high speed gears are discussed. The gearing losses associated with high speed gearing such as tooth mesh friction, bearing friction, churning, and windage are discussed with various ways shown to help reduce these losses and thereby improve efficiency. Several different methods of oil jet lubrication for high speed gearing are given such as into mesh, out of mesh, and radial jet lubrication. The experiments and analytical results for the various methods of oil jet lubrication are shown with the strengths and weaknesses of each method discussed. The analytical and experimental results of gear lubrication and cooling at various test conditions are presented. These results show the very definite need of improved methods of gear cooling at high speed and high load conditions.
Complex Langevin simulation of a random matrix model at nonzero chemical potential
Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...
2018-03-06
In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
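The paper derives its own analytical minimum-variance bound; the textbook shot-noise-limited localization precision, sigma over the square root of the photon count, is a simpler stand-in that reproduces the order of magnitude quoted above (the PSF width and photon number below are illustrative assumptions, not the paper's values):

```python
import math

def localization_precision(psf_sigma_nm, photons):
    # Shot-noise-limited localization precision, sigma / sqrt(N).
    # A textbook lower bound, not the paper's full variance expression.
    return psf_sigma_nm / math.sqrt(photons)

# e.g. a 250 nm PSF width and 1000 detected photons gives a bound
# of a few nanometres, the same regime as the measured ~8 nm
sigma = localization_precision(250.0, 1000)
```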
Streby, Ashleigh; Mull, Bonnie J; Levy, Karen; Hill, Vincent R
2015-05-01
Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices.
Parboosing, Raveen; Siyaca, Ntombizandile; Moodley, Pravikrishnen
2016-01-01
Background Poor quality dried blood spot (DBS) specimens are usually rejected by virology laboratories, affecting early infant diagnosis of HIV. The practice of combining two incompletely-filled DBS in one specimen preparation tube during pre-analytical specimen processing (i.e., the two-spot method) has been implemented to reduce the number of specimens being rejected for insufficient volume. Objectives This study analysed laboratory data to describe the quality of DBS specimens and the use of the two-spot method over a one-year period, then validated the two-spot method against the standard (one-spot) method. Methods Data on HIV-1 PCR test requests submitted in 2014 to the Department of Virology at Inkosi Albert Luthuli Central Hospital in KwaZulu-Natal province, South Africa were analysed to describe reasons for specimen rejection, as well as results of the two-spot method. The accuracy, lower limit of detection and precision of the two-spot method were assessed. Results Of the 88 481 specimens received, 3.7% were rejected for pre-analytical problems. Of those, 48.9% were rejected as a result of insufficient specimen volume. Two health facilities had significantly more specimen rejections than other facilities. The two-spot method prevented 10 504 specimen rejections. The Pearson correlation coefficient comparing the standard to the two-spot method was 0.997. Conclusions The two-spot method was comparable with the standard method of pre-analytical specimen processing. Two health facilities were identified for targeted retraining on specimen quality. The two-spot method of DBS specimen processing can be used as an adjunct to retraining, to reduce the number of specimens rejected and improve linkage to care. PMID:28879108
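A comparison statistic like the one reported above (Pearson correlation coefficient of 0.997 between the standard and two-spot methods) can be computed for any set of paired results; a minimal sketch with hypothetical paired values (not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between paired method results.
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical paired results, standard (one-spot) vs two-spot processing
one_spot = np.array([24.1, 27.3, 30.2, 33.5, 36.0])
two_spot = np.array([24.3, 27.1, 30.5, 33.8, 36.2])
r = pearson_r(one_spot, two_spot)   # close to 1 when the methods agree
```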
Location of Biomarkers and Reagents within Agarose Beads of a Programmable Bio-nano-chip
Jokerst, Jesse V.; Chou, Jie; Camp, James P.; Wong, Jorge; Lennart, Alexis; Pollard, Amanda A.; Floriano, Pierre N.; Christodoulides, Nicolaos; Simmons, Glennon W.; Zhou, Yanjie; Ali, Mehnaaz F.
2012-01-01
The slow development of cost-effective medical microdevices with strong analytical performance characteristics is due to a lack of selective and efficient analyte capture and signaling. The recently developed programmable bio-nano-chip (PBNC) is a flexible detection device with analytical behavior rivaling established macroscopic methods. The PBNC system employs ≈300 μm-diameter bead sensors composed of agarose “nanonets” that populate a microelectromechanical support structure with integrated microfluidic elements. The beads are an efficient and selective protein-capture medium suitable for the analysis of complex fluid samples. Microscopy and computational studies probe the 3D interior of the beads. The relative contributions that the capture and detection of moieties, analyte size, and bead porosity make to signal distribution and intensity are reported. Agarose pore sizes ranging from 45 to 620 nm are examined and those near 140 nm provide optimal transport characteristics for rapid (<15 min) tests. The system exhibits efficient (99.5%) detection of bead-bound analyte along with low (≈2%) nonspecific immobilization of the detection probe for carcinoembryonic antigen assay. Furthermore, the role analyte dimensions play in signal distribution is explored, and enhanced methods for assay building that consider the unique features of biomarker size are offered. PMID:21290601
Athar Masood, M; Veenstra, Timothy D
2017-08-26
Urine drug testing (UDT) is an important analytical/bioanalytical technique that has become an integral part of testing programs for diagnostic purposes. This manuscript presents the development and validation of a tailor-made LC-MS/MS quantitative assay for a custom group of 33 pain-panel drugs and their metabolites belonging to different classes (opiates, opioids, benzodiazepines, illicit drugs, amphetamines, etc.) that are prescribed in pain management and depressant therapies. The LC-MS/MS method incorporates two experiments to enhance the sensitivity of the assay and has a run time of about 7 min at a flow rate of 0.7 mL/min, with no prior purification of the samples required. The method also includes second-stage metabolites for some drugs that belong to different classes but share similar first-stage metabolic pathways, which enables correct identification of the drug or flagging of a result that might be due to specimen tampering. Real case examples and peak-picking difficulties are presented for some of the analytes in subject samples. Finally, the method was evaluated with randomly selected de-identified clinical subject samples, and data from "direct dilute-and-shoot analysis" were compared with data obtained after "glucuronide hydrolysis". This method is now used routinely to run more than 100 clinical subject samples on a daily basis. This article is protected by copyright. All rights reserved.
SS-HORSE method for studying resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blokhintsev, L. D.; Mazur, A. I.; Mazur, I. A., E-mail: 008043@pnu.edu.ru
A new method for analyzing resonance states based on the Harmonic-Oscillator Representation of Scattering Equations (HORSE) formalism and analytic properties of partial-wave scattering amplitudes is proposed. The method is tested by applying it to the model problem of neutral-particle scattering and can be used to study resonance states on the basis of microscopic calculations performed within various versions of the shell model.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative spectrophotometry. In this quantitative spectral analysis, neither of the two proposed analytical methods requires a chemical separation step. In the first step, several wavelet families were tested to find an optimal CWT for processing the overlapping signals of the analyzed compounds. The coiflets (COIF-CWT) method with dilation parameter a = 400 was found to give suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures of chlortetracycline and benzocaine, and both were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
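The zero-crossing calibration idea shared by both methods can be illustrated with the classical-derivative (CDS) variant: differentiate the spectra and read the mixture's derivative amplitude at a wavelength where the interfering component's derivative crosses zero, so the reading depends only on the analyte of interest. A sketch with synthetic Gaussian bands (the band shapes and Savitzky-Golay settings are illustrative assumptions, not the paper's spectra):

```python
import numpy as np
from scipy.signal import savgol_filter

def derivative_spectrum(absorbance, window=11, poly=3):
    # First-derivative spectrum via Savitzky-Golay smoothing-differentiation.
    return savgol_filter(absorbance, window, poly, deriv=1)

# Synthetic overlapping absorption bands for two hypothetical analytes
x = np.linspace(0.0, 100.0, 501)
band_a = np.exp(-((x - 45.0) / 8.0) ** 2)   # analyte A at unit concentration
band_b = np.exp(-((x - 55.0) / 8.0) ** 2)   # analyte B at unit concentration

# Zero-crossing point: wavelength where B's derivative vanishes (its maximum)
d_b = derivative_spectrum(band_b)
zc = int(np.argmin(np.abs(d_b[200:300]))) + 200

# At that point, the mixture's derivative amplitude tracks only analyte A
amps = []
for c_a in (0.5, 1.0, 2.0):
    mix = c_a * band_a + 1.3 * band_b       # B held at a fixed concentration
    amps.append(derivative_spectrum(mix)[zc])
```

Because differentiation is linear, the amplitude at the zero-crossing point scales with the concentration of analyte A regardless of how much B is present, which is what makes the calibration functions described above possible without chemical separation.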
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
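The second-order procedure above subtracts the test data matrix from each standard-addition matrix and then calibrates externally; its scalar (zeroth-order) ancestor is much easier to show and conveys the same intercept logic. A minimal sketch of classical standard addition with made-up response values (not the paper's data, and not its multivariate algorithm):

```python
import numpy as np

def standard_addition_estimate(added, signal):
    # Classical (zeroth-order) standard addition: fit signal vs added
    # concentration; the analyte level is the x-axis intercept magnitude.
    slope, intercept = np.polyfit(added, signal, 1)
    return intercept / slope

added = np.array([0.0, 1.0, 2.0, 4.0])   # added analyte, arbitrary units
signal = 0.8 * (1.5 + added)             # linear response, true level 1.5
c0 = standard_addition_estimate(added, signal)
```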
Analytical methods for toxic gases from thermal degradation of polymers
NASA Technical Reports Server (NTRS)
Hsu, M.-T. S.
1977-01-01
Toxic gases evolved from the thermal oxidative degradation of synthetic or natural polymers in small laboratory chambers or in large-scale fire tests are measured by several different analytical methods. Gas detector tubes are used for fast on-site detection of suspect toxic gases. Infrared spectroscopy provides excellent qualitative and quantitative analysis of some toxic gases. Permanent gases, such as carbon monoxide, carbon dioxide, methane, and ethylene, can be quantitatively determined by gas chromatography. Highly toxic and corrosive gases, such as nitrogen oxides, hydrogen cyanide, hydrogen fluoride, hydrogen chloride, and sulfur dioxide, should be passed into a scrubbing solution for subsequent analysis by either specific-ion electrodes or spectrophotometric methods. Low-concentration toxic organic vapors can be concentrated in a cold trap and then analyzed by gas chromatography and mass spectrometry. The limitations of the different methods are discussed.
Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.
Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F
1995-02-01
Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters.
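The CLS baseline used here amounts to solving an ordinary least-squares problem against a library of pure-component spectra. A minimal sketch with synthetic Gaussian "spectra" standing in for the paper's Raman data:

```python
import numpy as np

def cls_unmix(pure_spectra, mixture):
    # Classical least squares: solve mixture ~= pure_spectra @ c for the
    # concentration vector c.
    coeffs, *_ = np.linalg.lstsq(pure_spectra, mixture, rcond=None)
    return coeffs

# Hypothetical pure-component spectra (Gaussian bands, not real Raman data)
x = np.linspace(0.0, 10.0, 200)
pure = np.column_stack([
    np.exp(-((x - 3.0) ** 2)),   # component 1
    np.exp(-((x - 6.0) ** 2)),   # component 2
])
mixture = 0.7 * pure[:, 0] + 0.3 * pure[:, 1]
conc = cls_unmix(pure, mixture)
```

CLS recovers the mixing coefficients exactly when the mixture truly is a linear combination of the library spectra; its weakness, which motivates the ANN comparison above, is handling novel, shifted, or noisy spectra outside that model.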
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi-one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm, but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid-converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws, including effects of thermochemical nonequilibrium in the Navier-Stokes equations, is developed.
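The Roe discretization used as the comparison baseline can be sketched for the inviscid Burgers equation, where the Roe-averaged wave speed is simply (uL + uR)/2; the grid, CFL number, and initial data below are illustrative assumptions, not the paper's test cases:

```python
import numpy as np

def roe_flux_burgers(uL, uR):
    # First-order Roe interface flux for u_t + (u^2/2)_x = 0.
    # The Roe average of f'(u) = u is simply (uL + uR) / 2.
    a = 0.5 * (uL + uR)
    return 0.5 * (0.5 * uL**2 + 0.5 * uR**2) - 0.5 * np.abs(a) * (uR - uL)

def step(u, dt, dx):
    # One explicit finite-volume update with transmissive end cells.
    up = np.concatenate(([u[0]], u, [u[-1]]))
    F = roe_flux_burgers(up[:-1], up[1:])   # fluxes at every cell interface
    return u - dt / dx * (F[1:] - F[:-1])

# Right-moving shock test: uL = 1, uR = 0, exact shock speed 0.5
x = np.linspace(0.0, 1.0, 201)
u = np.where(x < 0.3, 1.0, 0.0)
dx = x[1] - x[0]
dt = 0.4 * dx                               # CFL = 0.4 for |u| <= 1
for _ in range(100):
    u = step(u, dt, dx)
# After t = 0.2 the captured shock sits near x = 0.4
```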
Multi-center evaluation of analytical performance of the Beckman Coulter AU5822 chemistry analyzer.
Zimmerman, M K; Friesen, L R; Nice, A; Vollmer, P A; Dockery, E A; Rankin, J D; Zmuda, K; Wong, S H
2015-09-01
Our three academic institutions, Indiana University, Northwestern Memorial Hospital, and Wake Forest, were among the first in the United States to implement the Beckman Coulter AU5822 series chemistry analyzers. We undertook this post-hoc multi-center study by merging our data to determine performance characteristics and the impact of methodology changes on analyte measurement. We independently completed performance validation studies including precision, linearity/analytical measurement range, method comparison, and reference range verification. Complete data sets were available from at least one institution for 66 analytes, in the following groups: 51 from all three institutions and 15 from 1 or 2 institutions, for a total sample size of 12,064. Precision was similar among institutions: coefficients of variation (CV) were <10% for 97% of analytes; those with CVs >10% included direct bilirubin and digoxin. All analytes exhibited linearity over the analytical measurement range. Method comparison data showed slopes between 0.900 and 1.100 for 87.9% of the analytes. Slopes for amylase, tobramycin, and urine amylase were <0.8, and the slope for lipase was >1.5, due to known methodology or standardization differences; consequently, the reference ranges for amylase, urine amylase, and lipase required only minor or no modification. The four AU5822 analyzers independently evaluated at three sites showed consistent precision, linearity, and correlation results. Since installation, the test results have been well received by clinicians at all three institutions. Copyright © 2015. Published by Elsevier Inc.
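The precision and method-comparison statistics quoted above (percent CV, regression slope) are straightforward to compute; a minimal sketch with hypothetical replicate and paired-method data (ordinary least squares stands in for whatever regression model the study actually used):

```python
import numpy as np

def cv_percent(values):
    # Coefficient of variation (sample SD / mean), in percent.
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def comparison_slope(reference, candidate):
    # Ordinary least-squares fit: candidate = slope * reference + intercept.
    slope, intercept = np.polyfit(reference, candidate, 1)
    return slope, intercept

# Hypothetical replicate measurements of one analyte
cv = cv_percent([4.9, 5.1, 5.0, 5.2, 4.8])

# Hypothetical paired results: comparison analyzer vs AU5822
ref = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
new = 1.02 * ref + 0.05                    # a slight proportional bias
slope, icept = comparison_slope(ref, new)
```

A slope near 1.0 (as for 87.9% of the analytes above) indicates agreement between the old and new methods; slopes well away from 1, as for lipase, signal a methodology or standardization difference.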
Baghdady, Mariam T; Carnahan, Heather; Lam, Ernest W N; Woods, Nicole N
2014-09-01
There has been much debate surrounding diagnostic strategies and the most appropriate training models for novices in oral radiology. It has been argued that an analytic approach, using a step-by-step analysis of the radiographic features of an abnormality, is ideal. Alternative research suggests that novices can successfully employ non-analytic reasoning. Many of these studies do not take instructional methodology into account. This study evaluated the effectiveness of non-analytic and analytic strategies in radiographic interpretation and explored the relationship between instructional methodology and diagnostic strategy. Second-year dental and dental hygiene students were taught four radiographic abnormalities using basic science instruction or a step-by-step algorithm. The students were tested on diagnostic accuracy and memory immediately after learning and one week later. A total of seventy-three students completed both the immediate and delayed sessions and were included in the analysis. Students were randomly divided into two instructional conditions: one group provided a diagnostic hypothesis for the image and then identified specific features to support it, while the other group first identified features and then provided a diagnosis. Participants in the diagnosis-first condition (non-analytic reasoning) had higher diagnostic accuracy than those in the features-first condition (analytic reasoning), regardless of their learning condition. No main effect of learning condition or interaction with diagnostic strategy was observed. Educators should be mindful of the potential influence of analytic and non-analytic approaches on the effectiveness of the instructional method.
Validated LC–MS-MS Method for Multiresidual Analysis of 13 Illicit Phenethylamines in Amniotic Fluid
Burrai, Lucia; Nieddu, Maria; Carta, Antonio; Trignano, Claudia; Sanna, Raimonda; Boatto, Gianpiero
2016-01-01
A multi-residue analytical method was developed for the determination in amniotic fluid (AF) of 13 illicit phenethylamines, including 12 compounds never investigated in this matrix before. Samples were subject to solid-phase extraction using hydrophilic–lipophilic balance cartridges, which gave good recoveries and low matrix effects on analysis of the extracts. The quantification was performed by liquid chromatography electrospray tandem mass spectrometry. The water–acetonitrile mobile phase containing 0.1% formic acid, used with a C18 reversed-phase column, provided adequate separation, resolution and signal-to-noise ratio for the analytes and the internal standard. The final optimized method was validated according to international guidelines. A monitoring campaign to assess fetal exposure to these 13 substances of abuse was performed on AF test samples obtained from pregnant women. All mothers (n = 194) reported no use of drugs of abuse during pregnancy, and this was confirmed by the analytical data. PMID:26755540
Locating bomb factories by detecting hydrogen peroxide.
Romolo, Francesco Saverio; Connell, Samantha; Ferrari, Carlotta; Suarez, Guillaume; Sauvain, Jean-Jacques; Hopf, Nancy B
2016-11-01
The analytical capability to detect hydrogen peroxide vapour can play a key role in localizing a site where an H2O2-based Improvised Explosive (IE) is manufactured. In security work it is very important to obtain information quickly; for this reason, an analytical method used in security activities needs portable devices. The authors have developed the first analytical method based on a portable luminometer specifically designed and validated to locate IE manufacturing sites using quantitative on-site vapour analysis for H2O2. The method was tested both indoors and outdoors. The results demonstrate that the detection of H2O2 vapours could allow police forces to locate such a site while terrorists are preparing an attack. The collected data are also very important for developing new sensors able to give an early alarm if located at a proper distance from a site where an H2O2-based IE is prepared. Copyright © 2016 Elsevier B.V. All rights reserved.
Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions
2014-12-05
test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with...the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates...assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions
Measuring salivary analytes from free-ranging monkeys
Higham, James P.; Vitale, Alison; Rivera, Adaris Mas; Ayala, James E.; Maestripieri, Dario
2014-01-01
Studies of large free-ranging mammals have been revolutionized by non-invasive methods for assessing physiology, which usually involve the measurement of fecal or urinary biomarkers. However, such techniques are limited by numerous factors. To expand the range of physiological variables measurable non-invasively from free-ranging primates, we developed techniques for sampling monkey saliva by offering monkeys ropes with oral swabs sewn on the ends. We evaluated different attractants for encouraging individuals to offer samples, and proportions of individuals in different age/sex categories willing to give samples. We tested the saliva samples we obtained in three commercially available assays: cortisol, Salivary Alpha Amylase, and Secretory Immunoglobulin A. We show that habituated free-ranging rhesus macaques will give saliva samples voluntarily without training, with 100% of infants, and over 50% of adults willing to chew on collection devices. Our field methods are robust even for analytes that show poor recovery from cotton, and/or that have concentrations dependent on salivary flow rate. We validated the cortisol and SAA assays for use in rhesus macaques by showing aspects of analytical validation, such as that samples dilute linearly and in parallel to assay standards. We also found that values measured correlated with biologically meaningful characteristics of sampled individuals (age and dominance rank). The SIgA assay tested did not react to samples. Given the wide range of analytes measurable in saliva but not in feces or urine, our methods considerably improve our ability to study physiological aspects of the behavior and ecology of free-ranging primates, and are also potentially adaptable to other mammalian taxa. PMID:20837036
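The dilution-linearity/parallelism check mentioned in the validation can be screened numerically: back-calculate each diluted result and verify that the values agree across the dilution series. A sketch with hypothetical readings (the analyte, units, and numbers are made up for illustration):

```python
import numpy as np

def dilution_linearity_cv(dilution_factors, measured):
    # Percent CV of back-calculated concentrations across a serial
    # dilution; a small value suggests the sample dilutes linearly and
    # in parallel to the assay standards.
    back = np.asarray(measured, dtype=float) * np.asarray(dilution_factors, dtype=float)
    return 100.0 * back.std(ddof=1) / back.mean()

# Hypothetical salivary cortisol readings from a 1x/2x/4x/8x series
factors = [1.0, 2.0, 4.0, 8.0]
measured = [12.1, 6.0, 3.05, 1.49]    # raw assay readings
cv = dilution_linearity_cv(factors, measured)
```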
WIPP waste characterization program sampling and analysis guidance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
The Waste Isolation Pilot Plant (WIPP) Waste Characterization Program Sampling and Analysis Guidance Manual (Guidance Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Quality Assurance Program Plan (QAPP) for the WIPP Experimental-Waste Characterization Program (the Program). This Guidance Manual includes all of the sampling and testing methodologies accepted by the WIPP Project Office (DOE/WPO) for use in implementing the Program requirements specified in the QAPP. This includes methods for characterizing representative samples of transuranic (TRU) wastes at DOE generator sites with respect to the gas generation controlling variables defined in the WIPP bin-scale and alcove test plans, as well as waste container headspace gas sampling and analytical procedures to support waste characterization requirements under the WIPP test program and the Resource Conservation and Recovery Act (RCRA). The procedures in this Guidance Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site specific procedures. The use of these procedures is intended to provide the necessary sensitivity, specificity, precision, and comparability of analyses and test results. The solutions to achieving specific program objectives will depend upon facility constraints, compliance with DOE Orders and DOE facilities' operating contractor requirements, and the knowledge and experience of the TRU waste handlers and analysts. With some analytical methods, such as gas chromatography/mass spectrometry, the Guidance Manual procedures may be used directly. With other methods, such as nondestructive/destructive characterization, the Guidance Manual provides guidance rather than a step-by-step procedure.
Mengoli, Carlo; Springer, Jan; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Klingspor, Lena; Lagrou, Katrien; Melchers, Willem J. G.; Morton, C. Oliver; Barnes, Rosemary A.; Donnelly, J. Peter; White, P. Lewis
2015-01-01
The use of serum or plasma for Aspergillus PCR testing facilitates automated and standardized technology. Recommendations for serum testing are available, and while serum and plasma are regularly considered interchangeable for use in fungal diagnostics, differences in galactomannan enzyme immunoassay (GM-EIA) performance have been reported and are attributed to clot formation. Therefore, it is important to assess plasma PCR testing to determine if previous recommendations for serum are applicable and also to compare analytical performance with that of serum PCR. Molecular methods testing serum and plasma were compared through multicenter distribution of quality control panels, with additional studies to investigate the effect of clot formation and blood fractionation on DNA availability. Analytical sensitivity and time to positivity (TTP) were compared, and a regression analysis was performed to identify variables that enhanced plasma PCR performance. When testing plasma, sample volume, preextraction-to-postextraction volume ratio, PCR volume, duplicate testing, and the use of an internal control for PCR were positively associated with performance. When whole-blood samples were spiked and then fractionated, the analytical sensitivity and TTP were superior when testing plasma. Centrifugation had no effect on DNA availability, whereas the presence of clot material significantly lowered the concentration (P = 0.028). Technically, there are no major differences in the molecular processing of serum and plasma, but the formation of clot material potentially reduces available DNA in serum. During disease, Aspergillus DNA burdens in blood are often at the limits of PCR performance. Using plasma might improve performance while maintaining the methodological simplicity of serum testing. PMID:26085614
Loeffler, Juergen; Mengoli, Carlo; Springer, Jan; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Klingspor, Lena; Lagrou, Katrien; Melchers, Willem J G; Morton, C Oliver; Barnes, Rosemary A; Donnelly, J Peter; White, P Lewis
2015-09-01
The use of serum or plasma for Aspergillus PCR testing facilitates automated and standardized technology. Recommendations for serum testing are available, and while serum and plasma are regularly considered interchangeable for use in fungal diagnostics, differences in galactomannan enzyme immunoassay (GM-EIA) performance have been reported and are attributed to clot formation. Therefore, it is important to assess plasma PCR testing to determine if previous recommendations for serum are applicable and also to compare analytical performance with that of serum PCR. Molecular methods testing serum and plasma were compared through multicenter distribution of quality control panels, with additional studies to investigate the effect of clot formation and blood fractionation on DNA availability. Analytical sensitivity and time to positivity (TTP) were compared, and a regression analysis was performed to identify variables that enhanced plasma PCR performance. When testing plasma, sample volume, preextraction-to-postextraction volume ratio, PCR volume, duplicate testing, and the use of an internal control for PCR were positively associated with performance. When whole-blood samples were spiked and then fractionated, the analytical sensitivity and TTP were superior when testing plasma. Centrifugation had no effect on DNA availability, whereas the presence of clot material significantly lowered the concentration (P = 0.028). Technically, there are no major differences in the molecular processing of serum and plasma, but the formation of clot material potentially reduces available DNA in serum. During disease, Aspergillus DNA burdens in blood are often at the limits of PCR performance. Using plasma might improve performance while maintaining the methodological simplicity of serum testing. Copyright © 2015 Loeffler et al.
Accelerated characterization of graphite/epoxy composites
NASA Technical Reports Server (NTRS)
Griffith, W. I.; Morris, D. H.; Brinson, H. F.
1980-01-01
A method to predict the long term compliance of unidirectional off-axis laminates from short term laboratory tests is presented. The method uses an orthotropic transformation equation and the time-stress-temperature superposition principle. Short term tests are used to construct master curves for two off-axis unidirectional laminates with fiber angles of 10 and 90 degrees. Analytical predictions of long term compliance for 30 and 60 degrees laminates are made. Comparisons with experimental data are also given.
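The time-stress-temperature superposition principle used above builds a long-term "master curve" by shifting short-term compliance curves along the log-time axis. The sketch below illustrates only that shifting idea on a synthetic power-law compliance; the exponent and shift factors are invented, not the paper's graphite/epoxy data.

```python
# Toy illustration of time-temperature superposition: a compliance
# curve measured under an accelerated condition (shift factor a_T)
# collapses onto the reference curve when time is scaled by a_T.

def compliance(t, a_T):
    """Compliance at time t for a condition with shift factor a_T;
    a power-law viscoelastic model stands in for real material data."""
    return 0.5 * (t / a_T) ** 0.1

# A short test under an accelerated condition (a_T = 1e-3) predicts
# the reference-condition compliance at 1000x longer times:
ref_long = compliance(1e6, 1.0)    # long-term reference compliance
shifted = compliance(1e3, 1e-3)    # short accelerated test, shifted
print(abs(ref_long - shifted) < 1e-6)
```

The real method adds an orthotropic transformation to carry the prediction from the tested fiber angles (10 and 90 degrees) to other laminate orientations.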
A Method for Direct-Measurement of the Energy of Rupture of Impact Specimens
1953-01-01
CONTENTS SECTION A - Foreword SECTION B - Objectives of the Current Investigation SECTION C - Basic Elements of an Impact Testing System ...SECTION D - Discussion 1. Linear System 2. Rotary System 3. Methods for Measuring the Energy of Rupture SECTION E - The Energy Measuring System ...has followed and to summarize our technical findings. C. BASIC ELEMENTS OF AN IMPACT TESTING SYSTEM For the analytical purposes of this
Cylinder Expansion Experiments and Measured Product Isentropes for XTX-8004 Explosive
NASA Astrophysics Data System (ADS)
Jackson, Scott
2015-06-01
We present cylinder expansion data from full-scale (25.4-mm inner diameter) and half-scale (12.7-mm inner diameter) experiments with XTX-8004 explosive, composed of 80% RDX explosive and 20% Sylgard 182 silicone elastomer. An analytic method is reviewed and used to recover detonation product isentropes from the experimental data, which are presented in the standard JWL form. The cylinder expansion data was found to scale well, indicating ideal detonation behavior across the test scales. The analytically determined product JWLs were found to agree well with those produced via iterative hydrocode methods, but required significantly less computational effort.
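The "standard JWL form" named in this abstract is a fixed functional form for the product isentrope. The sketch below evaluates it with hypothetical coefficients; the fitted XTX-8004 values are in the paper, not here.

```python
import math

# The JWL principal isentrope has the standard form
#   p_s(V) = A*exp(-R1*V) + B*exp(-R2*V) + C*V**-(1 + omega)
# where V is the relative volume v/v0 of the detonation products.

def jwl_isentrope(V, A, B, C, R1, R2, omega):
    """Pressure on the principal isentrope at relative volume V
    (units follow the coefficients, e.g. GPa)."""
    return A * math.exp(-R1 * V) + B * math.exp(-R2 * V) + C * V ** -(1.0 + omega)

# Placeholder coefficients: pressure should fall as products expand.
p1 = jwl_isentrope(1.0, A=600.0, B=10.0, C=1.0, R1=4.5, R2=1.2, omega=0.3)
p2 = jwl_isentrope(4.0, A=600.0, B=10.0, C=1.0, R1=4.5, R2=1.2, omega=0.3)
print(p1 > p2 > 0)
```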
21 CFR 118.8 - Testing methodology for Salmonella Enteritidis (SE).
Code of Federal Regulations, 2010 CFR
2010-04-01
... Salmonella Web site is located at http://www.fda.gov/Food/ScienceResearch/LaboratoryMethods/ucm114716.htm... you may examine a copy at the Center for Food Safety and Applied Nutrition's Library, 5100 Paint... Edition, is located at http://www.fda.gov/Food/ScienceResearch/LaboratoryMethods/BacteriologicalAnalytical...
A new experimental method for the accelerated characterization of composite materials
NASA Technical Reports Server (NTRS)
Yeow, Y. T.; Morris, D. H.; Brinson, H. F.
1978-01-01
The use of composite materials for a variety of practical structural applications is presented and the need for an accelerated characterization procedure is assessed. A new experimental and analytical method is presented which allows the prediction of long term properties from short term tests. Some preliminary experimental results are presented.
USDA-ARS?s Scientific Manuscript database
A qualitative botanical identification method (BIM) is an analytical procedure which returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) mate...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
... of semivolatile organic compounds in finished drinking water. The method analytes are extracted and... semivolatile organic contaminants: Alachlor, atrazine, polychlorinated biphenyls (PCBs), benzo[a]pyrene... approved EPA Method 525.2, Revision 2.0 for each of the 17 regulated semivolatile organic contaminants. EPA...
Govender, Kerusha; Parboosing, Raveen; Siyaca, Ntombizandile; Moodley, Pravikrishnen
2016-01-01
Poor quality dried blood spot (DBS) specimens are usually rejected by virology laboratories, affecting early infant diagnosis of HIV. The practice of combining two incompletely-filled DBS in one specimen preparation tube during pre-analytical specimen processing (i.e., the two-spot method) has been implemented to reduce the number of specimens being rejected for insufficient volume. This study analysed laboratory data to describe the quality of DBS specimens and the use of the two-spot method over a one-year period, then validated the two-spot method against the standard (one-spot) method. Data on HIV-1 PCR test requests submitted in 2014 to the Department of Virology at Inkosi Albert Luthuli Central Hospital in KwaZulu-Natal province, South Africa were analysed to describe reasons for specimen rejection, as well as results of the two-spot method. The accuracy, lower limit of detection and precision of the two-spot method were assessed. Of the 88 481 specimens received, 3.7% were rejected for pre-analytical problems. Of those, 48.9% were rejected as a result of insufficient specimen volume. Two health facilities had significantly more specimen rejections than other facilities. The two-spot method prevented 10 504 specimen rejections. The Pearson correlation coefficient comparing the standard to the two-spot method was 0.997. The two-spot method was comparable with the standard method of pre-analytical specimen processing. Two health facilities were identified for targeted retraining on specimen quality. The two-spot method of DBS specimen processing can be used as an adjunct to retraining, to reduce the number of specimens rejected and improve linkage to care.
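Part of the validation above is a Pearson correlation (r = 0.997) between the two-spot and standard methods on paired results. A minimal sketch of that comparison follows; the paired values are invented, not the study's data.

```python
import math

# Pearson correlation for a paired method comparison (two-spot vs.
# one-spot DBS processing). Values near 1 indicate the methods agree.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

one_spot = [3.1, 4.8, 5.2, 6.9, 8.4]   # hypothetical paired results
two_spot = [3.0, 4.9, 5.1, 7.0, 8.3]
print(round(pearson_r(one_spot, two_spot), 3))
```

A full validation would add accuracy, lower limit of detection, and precision, as the abstract describes.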
Immunogenicity of therapeutics: a matter of efficacy and safety.
Nechansky, Andreas; Kircheis, Ralf
2010-11-01
The unwanted immunogenicity of therapeutic proteins is a major concern regarding patient safety. Furthermore, pharmacokinetic, pharmacodynamic and clinical efficacy can be seriously affected by the immunogenicity of therapeutic proteins. Authorities have fully recognized this issue and demand appropriate and well-characterized assays to detect anti-drug antibodies (ADAs). We provide an overview of the immunogenicity topic in general, the regulatory background and insight into underlying immunological mechanisms and the limited ability to predict clinical immunogenicity a priori. Furthermore, we comment on the analytical testing approach and the status quo of appropriate method validation. The review provides insight regarding the analytical approach that is expected by regulatory authorities overseeing immunogenicity testing requirements. Additionally, the factors influencing immunogenicity are summarized and key references regarding immunogenicity testing approaches and method validation are discussed. The unwanted immunogenicity of protein therapeutics is of major concern because of its potential to affect patient safety and drug efficacy. Analytical testing is sophisticated and requires more than one assay. Because immunogenicity in humans is hardly predictable, assay development has to start in a timely fashion, and for clinical studies immunogenicity assay validation is mandatory prior to analyzing patient serum samples. Regarding ADAs, the question remains as to when such antibodies are regarded as clinically relevant and what levels, if any, are acceptable. In summary, the detection of ADAs should raise the awareness of the physician concerning patient safety and of the sponsor/manufacturer concerning the immunogenic potential of the drug product.
Bidny, Sergei; Gago, Kim; Chung, Phuong; Albertyn, Desdemona; Pasin, Daniel
2017-04-01
An analytical method using ultra performance liquid chromatography (UPLC) quadrupole time-of-flight mass spectrometry (QTOF-MS) was developed and validated for the targeted toxicological screening and quantification of commonly used pharmaceuticals and drugs of abuse in postmortem blood using a 100 µL sample. It screens for more than 185 drugs and metabolites and quantifies more than 90 drugs. The selected compounds include classes of pharmaceuticals and drugs of abuse such as: antidepressants, antipsychotics, analgesics (including narcotic analgesics), anti-inflammatory drugs, benzodiazepines, beta-blockers, amphetamines, new psychoactive substances (NPS), cocaine and metabolites. Compounds were extracted into acetonitrile using a salting-out assisted liquid-liquid extraction (SALLE) procedure. The extracts were analyzed using a Waters ACQUITY UPLC coupled with a XEVO QTOF mass spectrometer. Separation of the analytes was achieved by gradient elution using a Waters ACQUITY HSS C18 column (2.1 mm × 150 mm, 1.8 μm). The mass spectrometer was operated in both positive and negative electrospray ionization modes. The high-resolution mass spectrometry (HRMS) data were acquired using a patented Waters MSE acquisition mode which collected low and high energy spectra alternately during the same acquisition. Positive identification of target analytes was based on accurate mass measurements of the molecular ion, product ion, peak area ratio and retention times. Calibration curves were linear over the concentration range 0.05-2 mg/L for basic and neutral analytes and 0.1-6 mg/L for acidic analytes, with correlation coefficients (r²) > 0.96 for most analytes. The limits of detection (LOD) were between 0.001-0.05 mg/L for all analytes. Good recoveries were achieved, ranging from 80% to 100% for most analytes using the SALLE method. The method was validated for sensitivity, selectivity, accuracy, precision, stability, carryover and matrix effects.
The developed method was tested on a number of authentic forensic samples producing consistent results that correlated with results obtained from other validated methods. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
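Quantification in the screening method above rests on linear calibration curves (r² > 0.96) from which unknowns are back-calculated. A minimal least-squares calibration sketch follows; the standard concentrations and detector responses are invented, not the paper's data.

```python
# Ordinary least-squares fit of a linear calibration curve
# (response = slope * concentration + intercept), then
# back-calculation of an unknown sample from its response.

def linear_fit(x, y):
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

conc = [0.05, 0.2, 0.5, 1.0, 2.0]                 # mg/L standards
resp = [510.0, 2010.0, 5010.0, 10010.0, 20010.0]  # made-up peak areas

m, b = linear_fit(conc, resp)
unknown = (7510.0 - b) / m        # back-calculate a sample response
print(round(unknown, 3))   # → 0.75
```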
Analytical and multibody modeling for the power analysis of standing jumps.
Palmieri, G; Callegari, M; Fioretti, S
2015-01-01
Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jumps by processing the data obtained by a three-dimensional (3D) motion capture system and the dynamometric measurements obtained by the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analyses of 3D human body models. The study is focused on two of the typical tests used to evaluate the muscular activity of the lower limbs: the countermovement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and the mean power exerted in standing jumps.
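One plausible reading of the analytical formulation above: from the centre-of-gravity height at three instants (crouch, take-off, apex) one can recover the take-off velocity from ballistic flight and estimate the mean push-off power. The paper's exact formulation may differ, and all numbers are invented.

```python
# Sketch of a simple analytical jump-power estimate from three
# centre-of-gravity heights. Assumes ballistic flight after take-off.

G = 9.81  # m/s^2

def mean_jump_power(mass, h_crouch, h_takeoff, h_apex, push_time):
    """Mean mechanical power (W) over the push-off phase."""
    v_takeoff = (2.0 * G * (h_apex - h_takeoff)) ** 0.5   # from flight height
    work = mass * G * (h_takeoff - h_crouch) + 0.5 * mass * v_takeoff ** 2
    return work / push_time

# 70 kg subject: CoG rises 0.30 m during push-off, flies 0.35 m higher.
print(round(mean_jump_power(70.0, 0.85, 1.15, 1.50, 0.30), 1))
```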
Paseiro-Cerrato, R; de Quirós, A Rodríguez-Bernaldo; Sendón, Raquel; Bustos, Juana; Ruíz, E; Cruz, J M; Paseiro-Losada, P
2011-10-07
This paper describes the development of a multi-analyte method for the determination of polyfunctional amines commonly used as monomers in the manufacture of food contact materials. Amines were analyzed by high-performance liquid chromatography with diode-array detection (HPLC-DAD) after derivatization with dansyl chloride. The chromatographic analysis and the derivatization conditions were optimized. The proposed method was validated in terms of linearity, limits of detection and repeatability. The method showed an excellent sensitivity (LOD ≤ 0.05 μg/mL) and appropriate repeatability (RSD (n = 7) ≤ 5%). LC-MS/MS was used as a confirmatory technique. The stability of the amines in five food simulants (distilled water, 3% acetic acid, 10% ethanol, 50% ethanol and olive oil) under the most common testing conditions (10 days at 40 °C) was also studied. Results showed that the amines had acceptable stability in aqueous simulants, but in olive oil a loss of 100% was observed for all analytes. Copyright © 2011. Published by Elsevier B.V.
Precise determination of N-acetylcysteine in pharmaceuticals by microchip electrophoresis.
Rudašová, Marína; Masár, Marián
2016-01-01
A novel microchip electrophoresis method for the rapid and high-precision determination of N-acetylcysteine, a pharmaceutically active ingredient, in mucolytics has been developed. Isotachophoresis separations were carried out at pH 6.0 on a microchip with conductivity detection. The methods of external calibration and internal standard were used to evaluate the results. The internal standard method effectively eliminated variations in various working parameters, mainly run-to-run fluctuations of the injected volume. The repeatability and accuracy of N-acetylcysteine determination in all mucolytic preparations tested (Solmucol 90 and 200, and ACC Long 600) were more than satisfactory, with relative standard deviation and relative error values <0.7 and <1.9%, respectively. A recovery range of 99-101% of N-acetylcysteine in the analyzed pharmaceuticals further demonstrates the suitability of the proposed method for accurate analysis. This work, in general, indicates the analytical possibilities of microchip isotachophoresis for the quantitative analysis of simplified samples such as pharmaceuticals that contain the analyte(s) at relatively high concentrations. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
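The internal standard method credited above with eliminating injection-volume fluctuations works by ratioing the analyte signal to that of a co-injected standard, so that volume changes cancel. A minimal sketch with invented response values (not the paper's data):

```python
# Internal-standard quantification: analyte concentration is read
# from the analyte/IS signal ratio, so an injection-volume change
# that scales both signals equally drops out.

def conc_by_internal_standard(a_signal, is_signal, rf, is_conc):
    """Analyte concentration from the analyte/IS signal ratio.

    rf: response factor = (A_analyte / A_IS) per (C_analyte / C_IS),
    determined beforehand from a calibration run.
    """
    return (a_signal / is_signal) / rf * is_conc

# A 20% injection-volume drop scales both signals and cancels out:
c1 = conc_by_internal_standard(1000.0, 500.0, rf=1.25, is_conc=50.0)
c2 = conc_by_internal_standard(800.0, 400.0, rf=1.25, is_conc=50.0)
print(c1 == c2)   # → True
```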
NASA Astrophysics Data System (ADS)
Saito, Tatsuhito; Kondo, Keiichiro; Koseki, Takafumi
A DC-electrified railway system that is fed by diode rectifiers at a substation is unable to return electric power to the AC grid. Accordingly, the braking cars have to restrict regenerative braking power when the power consumption of the powering cars is not sufficient. However, the characteristics of a DC-electrified railway system, including the powering cars, are not known, and a mathematical model for designing a controller has not been established yet. Hence, the object of this study is to obtain a mathematical model for an analytical design method of the regenerative braking control system. In the first part of this paper, the static characteristics of this system are presented to show the position of the equilibrium point. The linearization of this system at the equilibrium point is then performed to describe the dynamic characteristics of the system. An analytical design method is then proposed on the basis of these characteristics. The proposed design method is verified by experimental tests with a 1 kW-class miniature model and by numerical simulations.
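The linearization-about-an-equilibrium step described above is generic: given a nonlinear system x' = f(x) with f(x_eq) = 0, the dynamics near x_eq are x' ≈ A (x − x_eq) with A the Jacobian. The sketch below computes A by finite differences for a stand-in system (a damped pendulum), not the railway feeder model itself.

```python
import math

# Finite-difference linearization of x' = f(x) about an equilibrium.

def jacobian(f, x_eq, eps=1e-6):
    """Jacobian matrix A with A[i][j] = d f_i / d x_j at x_eq."""
    n = len(x_eq)
    fx = f(x_eq)
    cols = []
    for j in range(n):
        xp = list(x_eq)
        xp[j] += eps
        cols.append([(fi - f0) / eps for fi, f0 in zip(f(xp), fx)])
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Damped pendulum about its stable equilibrium (theta, omega) = (0, 0);
# the exact linearization is [[0, 1], [-1, -0.5]].
f = lambda x: [x[1], -math.sin(x[0]) - 0.5 * x[1]]
A = jacobian(f, [0.0, 0.0])
print([[round(v, 3) for v in row] for row in A])   # → [[0.0, 1.0], [-1.0, -0.5]]
```

The eigenvalues of A then determine local stability, which is what a controller design builds on.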
Results of the first provisional technical secretariat interlaboratory comparison test
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuff, J.R.; Hoffland, L.
1995-06-01
The principal task of this laboratory in the first Provisional Technical Secretariat (PTS) Interlaboratory Comparison Test was to verify and test the extraction and preparation procedures outlined in the Recommended Operating Procedures for Sampling and Analysis in the Verification of Chemical Disarmament in addition to our laboratory extraction methods and our laboratory analysis methods. Sample preparation began on 16 May 1994 and analysis was completed on 12 June 1994. The analytical methods used included NMR ({sup 1}H and {sup 31}P) GC/AED, GC/MS (EI and methane CI), GC/IRD, HPLC/IC, HPLC/TSP/MS, MS/MS(Electrospray), and CZE.
Blade loss transient dynamic analysis of turbomachinery
NASA Technical Reports Server (NTRS)
Stallone, M. J.; Gallardo, V.; Storace, A. F.; Bach, L. J.; Black, G.; Gaffney, E. F.
1982-01-01
This paper reports on work completed to develop an analytical method for predicting the transient non-linear response of a complete aircraft engine system due to the loss of a fan blade, and to validate the analysis by comparing the results against actual blade loss test data. The solution, which is based on the component element method, accounts for rotor-to-casing rubs, high damping and rapid deceleration rates associated with the blade loss event. A comparison of test results and predicted response show good agreement except for an initial overshoot spike not observed in test. The method is effective for analysis of large systems.
NASA Astrophysics Data System (ADS)
Ravi, J. T.; Nidhan, S.; Muthu, N.; Maiti, S. K.
2018-02-01
An analytical method for the determination of the dimensions of a longitudinal crack in monolithic beams, based on frequency measurements, has been extended to model L and inverted T cracks. Such cracks, including longitudinal cracks, arise in beams made of layered isotropic or composite materials. A new formulation for modelling cracks in bi-material beams is presented. Longitudinal crack segment sizes, for L and inverted T cracks, varying from 2.7% to 13.6% of the length of Euler-Bernoulli beams are considered. Both forward and inverse problems have been examined. In the forward problems, the analytical results are compared with finite element (FE) solutions. In the inverse problems, the accuracy of prediction of crack dimensions is verified using FE results as input for virtual testing. The analytical results show good agreement with the actual crack dimensions. Further, experimental studies have been done to verify the accuracy of the analytical method for prediction of the dimensions of the three types of crack in isotropic and bi-material beams. The results show that the proposed formulation is reliable and can be employed for crack detection in slender beam-like structures in practice.
Analysis of structural dynamic data from Skylab. Volume 2: Skylab analytical and test model data
NASA Technical Reports Server (NTRS)
Demchak, L.; Harcrow, H.
1976-01-01
The orbital configuration test modal data, analytical test correlation modal data, and analytical flight configuration modal data are presented. Tables showing the generalized mass contributions (GMCs) for each of the thirty test modes are given along with the two-dimensional mode shape plots and tables of GMCs for the test-correlated analytical modes. The two-dimensional mode shape plots for the analytical modes and the uncoupled and coupled modes of the orbital flight configuration at three development phases of the model are included.
Analytical investigation of aerodynamic characteristics of highly swept wings with separated flow
NASA Technical Reports Server (NTRS)
Reddy, C. S.
1980-01-01
Many modern aircraft designed for supersonic speeds employ highly swept-back and low-aspect-ratio wings with sharp or thin edges. Flow separation occurs near the leading and tip edges of such wings at moderate to high angles of attack. Attempts have been made over the years to develop analytical methods for predicting the aerodynamic characteristics of such aircraft. Before any method can really be useful, it must be tested against a standard set of data to determine its capabilities and limitations. The present work undertakes such an investigation. Three methods are considered: the free-vortex-sheet method (Weber et al., 1975), the vortex-lattice method with suction analogy (Lamar and Gloss, 1975), and the quasi-vortex lattice method of Mehrotra (1977). Both flat and cambered wings of different configurations, for which experimental data are available, are studied and comparisons made.
Sokoliess, Torsten; Köller, Gerhard
2005-06-01
A chiral capillary electrophoresis system allowing the determination of the enantiomeric purity of an investigational new drug was developed using a generic method development approach for basic analytes. The method was optimized in terms of type and concentration of both cyclodextrin (CD) and electrolyte, buffer pH, temperature, voltage, and rinsing procedure. Optimal chiral separation of the analyte was obtained using an electrolyte with 2.5% carboxymethyl-beta-CD in 25 mM NaH2PO4 (pH 4.0). Interchanging the inlet and outlet vials after each run improved the method's precision. To assure the method's suitability for the control of enantiomeric impurities in pharmaceutical quality control, its specificity, linearity, precision, accuracy, and robustness were validated according to the requirements of the International Conference on Harmonization. The usefulness of our generic method development approach for the validation of robustness was demonstrated.
Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika
2011-02-15
A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares (PLS) multivariate calibration of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of analytes could be obtained with HPLC-DAD coupled with PLS calibration. This method was tested by analyzing a synthetic mixture of PG, GU, DP and CP. As a comparison method, a classical HPLC method was used. The proposed methods were applied to syrup samples containing the four drugs and the results obtained were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method are emphasized: a simpler mobile phase, a shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.
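The core idea above is resolving co-eluting, overlapped peaks by multivariate calibration rather than peak-area integration. As a simpler stand-in for PLS, the sketch below uses classical least squares: the mixture signal is modelled as a linear combination of known pure-component profiles. The profiles and concentrations are invented.

```python
import numpy as np

# Classical least squares on heavily overlapped "chromatographic"
# profiles: recover component concentrations from a mixture signal
# without integrating individual peak areas.

t = np.linspace(0, 10, 200)
gauss = lambda c, w: np.exp(-((t - c) / w) ** 2)

# Two strongly overlapping pure-component profiles (columns of P).
P = np.column_stack([gauss(4.8, 1.0), gauss(5.4, 1.0)])

true_conc = np.array([2.0, 3.0])
mixture = P @ true_conc            # noiseless mixture signal

est, *_ = np.linalg.lstsq(P, mixture, rcond=None)
print(np.allclose(est, true_conc))   # → True
```

PLS goes further than this sketch: it builds latent variables from calibration mixtures and so tolerates unknown interferents, which plain least squares does not.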
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. 
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal Jacobian when the stretching factor was increased, respectively. The NKM with a diagonal analytical Jacobian and matrix-free method with an analytical preconditioner are the fastest methods and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
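The central contrast in this abstract, Newton iterations driven by a hand-derived analytical Jacobian rather than costly automatic differentiation, can be illustrated in miniature. The toy system below stands in for the discretized Navier-Stokes residual; it is not the paper's solver.

```python
import math

# Newton's method on a 2x2 nonlinear system F(x, y) = 0 using an
# analytical Jacobian, solved per step with Cramer's rule.

def F(x, y):                      # residual to drive to zero
    return (x * x + y * y - 4.0,  # circle of radius 2
            math.exp(x) + y - 1.0)

def J(x, y):                      # hand-derived Jacobian of F
    return ((2.0 * x, 2.0 * y),
            (math.exp(x), 1.0))

def newton(x, y, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) + abs(f2) < tol:
            break
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c       # solve J * delta = -F
        x -= (d * f1 - b * f2) / det
        y -= (-c * f1 + a * f2) / det
    return x, y

x, y = newton(1.0, 1.0)
f1, f2 = F(x, y)
print(abs(f1) < 1e-9 and abs(f2) < 1e-9)
```

In the paper's setting the linear solve inside each Newton step is handled by a Krylov method (hence Newton-Krylov), with the analytical Jacobian also serving as a preconditioner.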
Asgharzadeh, Hafez; Borazjani, Iman
2016-01-01
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. 
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future. PMID:28042172
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-01
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90-degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem.
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with a parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.
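The diagonal-Jacobian Newton idea in the abstract above can be sketched on a toy problem (this is not the authors' Navier-Stokes solver; the nonlinear system, matrix, and tolerance below are invented for illustration): each Newton update divides the residual by the diagonal of the analytical Jacobian instead of solving a full linear system.

```python
import numpy as np

# Hypothetical stand-in for a discretized nonlinear PDE residual:
# F(u) = A u + u^3 - b, with a diagonally dominant matrix A.
n = 5
A = 4.0 * np.eye(n) + 0.1 * (np.ones((n, n)) - np.eye(n))
b = np.ones(n)

def residual(u):
    return A @ u + u**3 - b

# Newton iteration using only the diagonal of the analytical Jacobian:
# J = A + diag(3 u^2), so diag(J) = diag(A) + 3 u^2.
u = np.zeros(n)
for k in range(100):
    F = residual(u)
    if np.linalg.norm(F) < 1e-10:
        break
    u -= F / (np.diag(A) + 3.0 * u**2)

print(k, np.linalg.norm(residual(u)))
```

For a diagonally dominant system such as this one, the diagonal update converges in a handful of iterations; in a real flow solver the diagonal would more likely serve as a cheap preconditioner inside a Krylov iteration, as the paper describes.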
Xu, Xiaoming; Gupta, Abhay; Sayeed, Vilayat A; Khan, Mansoor A
2013-05-01
Various adverse events, including esophageal irritation, have been reported with the use of alendronate tablets, likely attributable to rapid tablet disintegration in the mouth or esophagus. Accordingly, the disintegration of six alendronate tablet drug products was studied using a newly developed testing device equipped with in-line sensors, in addition to the official compendial procedure for measuring the disintegration time. The in-line sensors were used to monitor the particle count and solution pH change to assess the onset and duration of disintegration. A relatively large variation was observed in the disintegration time of the tested drug products using the compendial method. The data collected using the in-line sensors suggested that all tested drug products exhibited almost instantaneous onset of disintegration, under 2 s, and a sharp drop in solution pH. The drop in pH was slower for tablets with slower disintegration. The in-house prepared alendronate test tablets also showed similar trends, suggesting that rapid solubilization of the drug contributed to the fast tablet disintegration. This research highlights the usefulness of the newly developed in-line analytical method in combination with the compendial method in providing a better understanding of the disintegration and the accompanying drug solubilization processes for fast-disintegrating tablet drug products. Copyright © 2013 Wiley Periodicals, Inc.
Contacts in the Office of Pesticide Programs, Biological and Economic Analysis Division
BEAD provides pesticide use-related information and economic analyses in support of pesticide regulatory activities. BEAD's laboratories validate analytical methods and test public health antimicrobials to ensure that they work as intended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelms, Benjamin; Stambaugh, Cassandra; Hunt, Dylan
2015-08-15
Purpose: The authors designed data, methods, and metrics that can serve as a standard, independent of any software package, to evaluate dose-volume histogram (DVH) calculation accuracy and detect limitations. The authors use simple geometrical objects at different orientations combined with dose grids of varying spatial resolution with linear 1D dose gradients; when combined, ground truth DVH curves can be calculated analytically in closed form to serve as the absolute standards. Methods: DICOM RT structure sets containing a small sphere, cylinder, and cone were created programmatically with axial plane spacing varying from 0.2 to 3 mm. Cylinders and cones were modeled in two different orientations with respect to the IEC 1217 Y axis. The contours were designed to stringently but methodically test voxelation methods required for DVH. Synthetic RT dose files were generated with a 1D linear dose gradient and with grid resolution varying from 0.4 to 3 mm. Two commercial DVH algorithms, PINNACLE (Philips Radiation Oncology Systems) and PlanIQ (Sun Nuclear Corp.), were tested against analytical values using custom, noncommercial analysis software. In Test 1, axial contour spacing was constant at 0.2 mm while dose grid resolution varied. In Tests 2 and 3, the dose grid resolution was matched to varying subsampled axial contours with spacing of 1, 2, and 3 mm, and difference analysis and metrics were employed: (1) histograms of the accuracy of various DVH parameters (total volume, Dmax, Dmin, and doses to % volume: D99, D95, D5, D1, D0.03 cm³) and (2) volume errors extracted along the DVH curves were generated and summarized in tabular and graphical forms. Results: In Test 1, PINNACLE produced 52 deviations (15%) while PlanIQ produced 5 (1.5%). In Test 2, PINNACLE and PlanIQ differed from analytical by >3% in 93 (36%) and 18 (7%) instances, respectively.
Excluding Dmin and Dmax as least clinically relevant would result in 32 (15%) vs 5 (2%) scored deviations for PINNACLE vs PlanIQ in Test 1, while Test 2 would yield 53 (25%) vs 17 (8%). In Test 3, statistical analyses of volume errors extracted continuously along the curves show PINNACLE to have more errors and higher variability (relative to PlanIQ), primarily due to PINNACLE's lack of sufficient 3D grid supersampling. Another major driver for PINNACLE errors is an inconsistency in the implementation of the "end-capping": the additional volume resulting from expanding the superior and inferior contours halfway to the next slice is included in the total volume calculation, but dose voxels in this expanded volume are excluded from the DVH. PlanIQ had fewer deviations, and most were associated with a rotated cylinder modeled by rectangular axial contours; for coarser axial spacing, the limited number of cross-sectional rectangles hinders the ability to render the true structure volume. Conclusions: The method is applicable to any DVH-calculating software capable of importing DICOM RT structure set and dose objects (the authors' examples are available for download). It includes a collection of tests that probe the design of the DVH algorithm, measure its accuracy, and identify failure modes. Merits and applicability of each test are discussed.
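The closed-form ground truth described above is easy to reproduce for the simplest case. The sketch below (hypothetical numbers, not the authors' data or software) computes the analytical cumulative DVH of a unit sphere in a linear 1D dose gradient, where the volume receiving at least a given dose is a spherical cap, and checks it against a brute-force voxel count:

```python
import numpy as np

# A unit sphere in a linear dose gradient D(z) = a + g*z has a closed-form
# cumulative DVH: the volume receiving dose >= d is a spherical cap.
# Hypothetical dose parameters:
R, a, g = 1.0, 10.0, 5.0

def analytic_fraction(d):
    """Fraction of the sphere volume receiving dose >= d (exact)."""
    z0 = np.clip((d - a) / g, -R, R)          # base plane of the cap
    cap = np.pi * (R - z0)**2 * (2.0*R + z0) / 3.0
    return cap / (4.0/3.0 * np.pi * R**3)

# Brute-force voxel estimate on a 0.02-spaced grid (voxel centers).
h = 0.02
x, y, z = np.meshgrid(*[np.arange(-R, R + h, h)]*3, indexing="ij")
inside = x**2 + y**2 + z**2 <= R**2
dose = a + g * z

def voxel_fraction(d):
    return np.mean(dose[inside] >= d)

for d in (8.0, 10.0, 12.0):
    print(d, analytic_fraction(d), voxel_fraction(d))
```

The voxel estimate converges to the analytical curve as the grid is refined, which is exactly the property the paper exploits to score commercial DVH implementations.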
NASA Astrophysics Data System (ADS)
Zhou, Xuhong; Cao, Liang; Chen, Y. Frank; Liu, Jiepeng; Li, Jiang
2016-01-01
The developed pre-stressed cable reinforced concrete truss (PCT) floor system is a relatively new floor structure, which can be applied to various long-span structures such as buildings, stadiums, and bridges. Due to its lighter mass and longer span, floor vibration is a serviceability concern for such systems. In this paper, field testing and theoretical analysis of the PCT floor system were conducted. Specifically, heel-drop impact and walking tests were performed on the PCT floor system to capture the dynamic properties, including natural frequencies, mode shapes, damping ratios, and acceleration response. The PCT floor system was found to be a low-frequency (<10 Hz) and low-damping (damping ratio <2%) structural system. Nevertheless, the comparison of the experimental results with the AISC's limiting values indicates that the investigated PCT system exhibits satisfactory vibration perceptibility. The analytical solution obtained from the weighted residual method agrees well with the experimental results and thus validates the proposed analytical expression. Sensitivity studies using the analytical solution were also conducted to investigate the vibration performance of the PCT floor system.
Clarkson, Douglas McG; Manna, Avinish; Hero, Mark
2014-02-01
We describe the use of an analytical weighing balance with a measurement accuracy of 0.00001 g for determination of concentrations of perfluoropropane (C3F8) gas used in ophthalmic surgical vitrectomy procedures. A range of test eyes corresponding to an eye volume of 6.1 ml were constructed using 27 gauge needle exit ducts and, separately, 20 gauge (straight) and 23 gauge (angled) entrance ports. This method allowed determination of concentration levels in the sample preparation syringe and also levels in test eyes. It was determined that a key factor influencing gas concentration accuracy related to the method of gas fill and the value of dead space of the gas preparation/delivery system, with a significant contribution arising from the use of the particle filter. The weighing balance technique was identified as an appropriate technique for estimation of gas concentrations. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
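The weighing principle can be sketched as follows (a simplified illustration, not the authors' protocol; the density values are approximate room-temperature, atmospheric-pressure figures assumed for the example): the mass of gas in a known volume fixes the C3F8 fraction of a C3F8/air mixture.

```python
# Approximate gas densities at ~25 C and 1 atm (assumed for illustration).
RHO_C3F8 = 7.7e-3   # g/ml, pure perfluoropropane
RHO_AIR = 1.18e-3   # g/ml, air

def c3f8_fraction(mass_g, volume_ml):
    """Volume fraction of C3F8 inferred from the weighed gas mass."""
    rho_mix = mass_g / volume_ml
    return (rho_mix - RHO_AIR) / (RHO_C3F8 - RHO_AIR)

# Round-trip check with a hypothetical 15% fill in a 6.1 ml test eye:
f = 0.15
m = 6.1 * (f * RHO_C3F8 + (1 - f) * RHO_AIR)
print(c3f8_fraction(m, 6.1))
```

In practice the paper's accuracy analysis hinges on exactly the terms this toy model omits: dead-space volume in the delivery system and the particle filter.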
NASA Astrophysics Data System (ADS)
Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad
2015-11-01
In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are selected as the root-mean-square (RMS) of the sprung-mass acceleration and the expectation of generated power. Actual road roughness is considered as the stochastic excitation, defined by the ISO 8608:1995 standard road profiles, and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in real time based on the optimization rule. A test bed is utilized, and experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.
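The ride-comfort metric mentioned above, the RMS of the sprung-mass acceleration, has a one-line discrete estimate; a minimal sketch with an invented sinusoidal acceleration signal (not the paper's measured data):

```python
import numpy as np

# Discrete RMS estimate of a sampled acceleration signal.  A pure sine of
# amplitude A has RMS A/sqrt(2), which the estimate reproduces when the
# signal is sampled over whole periods.
def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)   # one period of a 1 Hz signal
a = 2.5 * np.sin(2.0 * np.pi * t)                 # hypothetical acceleration, m/s^2
print(rms(a))
```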
Control research in the NASA high-alpha technology program
NASA Technical Reports Server (NTRS)
Gilbert, William P.; Nguyen, Luat T.; Gera, Joseph
1990-01-01
NASA is conducting a focused technology program, known as the High-Angle-of-Attack Technology Program, to accelerate the development of flight-validated technology applicable to the design of fighters with superior stall and post-stall characteristics and agility. A carefully integrated effort is underway combining wind tunnel testing, analytical predictions, piloted simulation, and full-scale flight research. A modified F-18 aircraft has been extensively instrumented for use as the NASA High-Angle-of-Attack Research Vehicle used for flight verification of new methods and concepts. This program stresses the importance of providing improved aircraft control capabilities both by powered control (such as thrust-vectoring) and by innovative aerodynamic control concepts. The program is accomplishing extensive coordinated ground and flight testing to assess and improve available experimental and analytical methods and to develop new concepts for enhanced aerodynamics and for effective control, guidance, and cockpit displays essential for effective pilot utilization of the increased agility provided.
Experimental performance and acoustic investigation of modern, counterrotating blade concepts
NASA Technical Reports Server (NTRS)
Hoff, G. E.
1990-01-01
The aerodynamic, acoustic, and aeromechanical performance of counterrotating blade concepts were evaluated both theoretically and experimentally. Analytical methods development and design are addressed. Utilizing the analytical methods that evolved during the conduct of this work, aerodynamic and aeroacoustic predictions were developed and compared to NASA and GE wind tunnel test results. The detailed mechanical design and fabrication of five different composite shell/titanium spar counterrotating blade set configurations are presented. Design philosophy, analysis methods, and material geometry are addressed, as well as the influence of aerodynamics, aeromechanics, and aeroacoustics on the design procedures. Blade fabrication and quality control procedures are detailed; bench testing procedures and results of blade integrity verification are presented; and instrumentation associated with the bench testing is also identified. Additional hardware to support specialized testing is described, as are operating blade instrumentation and the associated stress limits. The five counterrotating blade concepts were scaled to a tip diameter of 2 feet so they could be incorporated into model propulsion simulators (MPS). Aerodynamic and aeroacoustic performance testing was conducted in the NASA Lewis 8 x 6 supersonic and 9 x 15 V/STOL (vertical or short takeoff and landing) wind tunnels and in the GE freejet anechoic test chamber (Cell 41) to generate an experimental data base for these counterrotating blade designs. Test facility and MPS vehicle matrices are provided, and test procedures are presented. Effects on performance of rotor-to-rotor spacing, angle-of-attack, pylon proximity, blade number, reduced-diameter aft blades, and mismatched rotor speeds are addressed. Counterrotating blade and specialized aeromechanical hub stability test results are also furnished.
An Investigation of the Raudenbush (1988) Test for Studying Variance Heterogeneity.
ERIC Educational Resources Information Center
Harwell, Michael
1997-01-01
The meta-analytic method proposed by S. W. Raudenbush (1988) for studying variance heterogeneity was studied. Results of a Monte Carlo study indicate that the Type I error rate of the test is sensitive to even modestly platykurtic score distributions and to the ratio of study sample size to the number of studies. (SLD)
Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J
2017-07-01
The sensitivity and specificity of next-generation sequencing laboratory-developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess the variant calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test that targets 70 genes, called NEO1. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth set genes that were assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories or proficiency testing organizations using whole-genome NIST RMs for testing. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest (HOI). For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI was defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury, and acute myocardial infarction. The definitions varied by the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three different performance metrics were used: (i) the area under the Receiver Operating Characteristic (ROC) curve (AUC) as a measure of a method's ability to distinguish between positive and negative test cases, (ii) a measure of bias based on the distribution of observed effect estimates for the negative test pairs, where the true relative risk can be assumed to be one (no effect), and (iii) the Minimal Detectable Relative Risk (MDRR) as a measure of whether there is sufficient power to generate effect estimates.
In the three outcomes studied, different definitions of outcomes show comparable ability to differentiate true from false control cases (AUC) and a similar bias estimation. However, broader definitions generating larger outcome cohorts allowed more drugs to be studied with sufficient statistical power. Broader definitions are preferred since they allow studying drugs with lower prevalence than the more precise or narrow definitions while showing comparable performance characteristics in differentiation of signal vs. no signal as well as effect size estimation.
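The AUC metric used in this evaluation reduces to the Mann-Whitney statistic: the probability that a randomly chosen positive control receives a larger effect estimate than a randomly chosen negative control. A minimal sketch with invented effect estimates (not the study's data):

```python
import numpy as np

# Rank-based AUC: fraction of (positive, negative) pairs in which the
# positive control's effect estimate exceeds the negative control's,
# with ties counted as half.
def auc(pos, neg):
    pos, neg = np.asarray(pos, dtype=float), np.asarray(neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

positives = [1.8, 2.4, 3.1]   # hypothetical effect estimates, positive controls
negatives = [0.9, 1.1, 2.0]   # hypothetical effect estimates, negative controls
print(auc(positives, negatives))
```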
One-year test-retest reliability of intrinsic connectivity network fMRI in older adults
Guo, Cong C.; Kurth, Florian; Zhou, Juan; Mayer, Emeran A.; Eickhoff, Simon B.; Kramer, Joel H.; Seeley, William W.
2014-01-01
“Resting-state” or task-free fMRI can assess intrinsic connectivity network (ICN) integrity in health and disease, suggesting a potential for use of these methods as disease-monitoring biomarkers. Numerous analytical options are available, including model-driven ROI-based correlation analysis and model-free independent component analysis (ICA). High test-retest reliability will be a necessary feature of a successful ICN biomarker, yet available reliability data remain limited. Here, we examined ICN fMRI test-retest reliability in 24 healthy older subjects scanned roughly one year apart. We focused on the salience network, a disease-relevant ICN not previously subjected to reliability analysis. Most ICN analytical methods proved reliable (intraclass correlation coefficients > 0.4) and could be further improved by wavelet analysis. Seed-based ROI correlation analysis showed high map-wise reliability, whereas graph theoretical measures and temporal concatenation group ICA produced the most reliable individual unit-wise outcomes. Including global signal regression in ROI-based correlation analyses reduced reliability. Our study provides a direct comparison between the most commonly used ICN fMRI methods and potential guidelines for measuring intrinsic connectivity in aging control and patient populations over time. PMID:22446491
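The intraclass correlation coefficient used as the reliability criterion above can be computed in a few lines. This sketch uses the one-way random-effects ICC(1,1) formula as a generic example; the paper's analyses involve further choices (ICC variants, wavelet filtering) not modeled here:

```python
import numpy as np

# One-way random-effects ICC(1,1): agreement of repeated measurements on
# the same subjects, from between- and within-subject mean squares.
def icc_1_1(data):
    """data: n_subjects x k_sessions array of measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * np.sum((data.mean(axis=1) - grand)**2) / (n - 1)
    ms_within = np.sum((data - data.mean(axis=1, keepdims=True))**2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfectly reproducible measurements across two sessions give ICC = 1:
print(icc_1_1([[0.3, 0.3], [0.5, 0.5], [0.8, 0.8]]))
```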
NASA Astrophysics Data System (ADS)
Sepulveda, N.; Rohrer, K.
2008-05-01
The permeability of the semiconfining layers of the highly productive Floridan Aquifer System may be large enough to invalidate the assumptions of the leaky aquifer theory. These layers are the intermediate confining and the middle semiconfining units. The analysis of aquifer-test data with analytical solutions of the ground-water flow equation developed with the approximation of a low hydraulic conductivity ratio between the semiconfining layer and the aquifer may lead to inaccurate hydraulic parameters. An analytical solution is presented here for the flow in a confined leaky aquifer, the overlying storative semiconfining layer, and the unconfined aquifer, generated by a partially penetrating well in a two-aquifer system, and allowing vertical and lateral flow components to occur in the semiconfining layer. The equations describing flow caused by a partially penetrating production well are solved analytically to provide a method to accurately determine the hydraulic parameters in the confined aquifer, semiconfining layer, and unconfined aquifer from aquifer-test data. Analysis of the drawdown data from an aquifer test performed in central Florida showed that the flow solution presented here for the semiconfining layer provides a better match and a more unique identification of the hydraulic parameters than an analytical solution that considers only vertical flow in the semiconfining layer.
Analytical performances of the Diazyme ADA assay on the Cobas® 6000 system.
Delacour, Hervé; Sauvanet, Christophe; Ceppa, Franck; Burnat, Pascal
2010-12-01
To evaluate the analytical performance of the Diazyme ADA assay on the Cobas® 6000 system for pleural fluid sample analysis, imprecision, linearity, calibration curve stability, interference, and correlation studies were completed. The Diazyme ADA assay demonstrated excellent precision (CV < 4%) over the analytical measurement range (0.5-117 U/L). Bilirubin above 50 μmol/L and haemoglobin above 177 μmol/L interfered with the test, inducing a negative and a positive interference, respectively. The Diazyme ADA assay correlated well with the Giusti method (r² = 0.93) but exhibited a negative bias (approximately -30%). The Diazyme ADA assay on the Cobas® 6000 system represents a rapid, accurate, precise, and reliable method for the determination of ADA activity in pleural fluid samples. Copyright © 2010 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J
2016-01-15
This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. Published by Elsevier B.V.
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J.
2016-01-01
This paper details the sampling methods used with the field portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high flow rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications. PMID:26726934
[The Scope, Quality and Safety Requirements of Drug Abuse Testing].
Küme, Tuncay; Karakükcü, Çiğdem; Pınar, Aslı; Coşkunol, Hakan
2017-01-01
The aim of this review is to describe the scope and requirements of drug abuse testing. Drug abuse testing is one of the tools for the determination of drug use. It must fulfill quality and safety requirements when used to support legal and administrative decisions. Drug abuse testing must meet requirements such as selection of the appropriate test matrix, an appropriate screening test panel, sampling within the detection window, patient consent, identification of the donor, an appropriate collection site, sample collection under observation, identification and control of the sample, and a specimen chain of custody in the preanalytical phase; analysis in authorized laboratories, specimen validity tests, reliable testing methods, strict quality control, and two-step analysis in the analytical phase; and storage of the split specimen, confirmation of the split specimen in case of objection, a result chain of custody, appropriate cut-off concentrations, and appropriate interpretation of the result in the postanalytical phase. The workflow and analytical processes of drug abuse testing are explained in the latest regulation of the Department of Medical Laboratory Services, Ministry of Health in Turkey. Clinical physicians have to know and apply the quality and safety requirements in drug abuse testing according to the latest regulations in Turkey.
Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna
2018-06-05
Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires characterization of the matrix effect, i.e., the influence of endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of the matrix effect. The CVs (%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors also require neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with the two calculation methods. After normalization with the internal standard, the CV (%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem; still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
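The EMA-style calculation compared in this study can be sketched as follows (peak areas invented for illustration): the matrix factor is the analyte response in post-extraction spiked matrix divided by the response in neat solution; normalizing it by the internal standard's matrix factor and taking the CV(%) across matrix lots quantifies the matrix effect.

```python
import numpy as np

# Hypothetical mean peak areas in neat solution:
neat_analyte, neat_is = 100.0, 50.0
# Hypothetical peak areas in three post-extraction spiked matrix lots:
lots_analyte = np.array([95.0, 102.0, 98.0])
lots_is = np.array([48.0, 51.0, 49.0])        # internal standard areas

mf_analyte = lots_analyte / neat_analyte      # matrix factor, analyte
mf_is = lots_is / neat_is                     # matrix factor, internal standard
is_normalized_mf = mf_analyte / mf_is         # IS-normalized matrix factor

# CV(%) across lots (sample standard deviation, ddof=1):
cv = 100.0 * is_normalized_mf.std(ddof=1) / is_normalized_mf.mean()
print(round(cv, 2))
```

The relative-matrix-effect approach of Matuszewski et al. differs mainly in dispensing with the neat-solution measurements, which is why the paper can compare the two on the same spiked-matrix data.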
East Europe Report, Economic and Industrial Affairs.
1984-06-07
undeniable that even objectivized norms often lack the required quality. Many of them have not been determined by dependable analytical methods ...incentive methods, such as, for example, contract wages and their various modifications, are applied only to a limited extent. Many economic managers...discipline. Method of the Future: Work Team Khozrashchet. We have been testing and gradually expanding work team forms of labor organization and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, M.D.
Analytical Chemistry of PCBs offers a review of the physical, chemical, commercial, environmental and biological properties of PCBs. It also defines and discusses six discrete steps of analysis: sampling, extraction, cleanup, determination, data reduction, and quality assurance. The final chapter provides a discussion of collaborative testing, the ultimate step in method evaluation. Dr. Erickson also provides a bibliography of over 1200 references, critical reviews of the primary literature, and five appendices which present ancillary material on PCB nomenclature, physical properties, composition of commercial mixtures, mass spectra characteristics, and GC/ECD chromatograms.
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation by the International Standards Organization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The different guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Besides, the required performance characteristics tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. Therefore, we propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal absorption spectrometry. The fundamental parameters tested were, selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
Methodological evaluation and comparison of five urinary albumin measurements.
Liu, Rui; Li, Gang; Cui, Xiao-Fan; Zhang, Dong-Ling; Yang, Qing-Hong; Mu, Xiao-Yan; Pan, Wen-Jie
2011-01-01
Microalbuminuria is an indicator of kidney damage and a risk factor for the progression of kidney disease, cardiovascular disease, and so on. Therefore, accurate and precise measurement of urinary albumin is critical. However, there are no reference measurement procedures and reference materials for urinary albumin. Nephelometry, turbidimetry, colloidal gold method, radioimmunoassay, and chemiluminescence immunoassay were performed for methodological evaluation, based on imprecision test, recovery rate, linearity, haemoglobin interference rate, and verified reference interval. Then we tested 40 urine samples from diabetic patients by each method, and compared the results between assays. The results indicate that nephelometry is the method with the best analytical performance among the five methods, with an average intraassay coefficient of variation (CV) of 2.6%, an average interassay CV of 1.7%, a mean recovery of 99.6%, a linearity of R=1.00 from 2 to 250 mg/l, and an interference rate of <10% at haemoglobin concentrations of <1.82 g/l. The correlation (r) between assays was from 0.701 to 0.982, and the Bland-Altman plots indicated each assay provided significantly different results from each other. Nephelometry is the clinical urinary albumin method with the best analytical performance in our study. © 2011 Wiley-Liss, Inc.
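The Bland-Altman comparison used above reduces to the bias (mean difference between paired assay results) and its 95% limits of agreement. A minimal sketch with made-up paired values (not data from the study):

```python
import numpy as np

def bland_altman(assay_a, assay_b):
    """Bias and 95% limits of agreement between two assays measuring
    the same samples (Bland-Altman analysis)."""
    diff = np.asarray(assay_a, float) - np.asarray(assay_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)               # SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```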
Glycidyl fatty acid esters in food by LC-MS/MS: method development.
Becalski, A; Feng, S Y; Lau, B P-Y; Zhao, T
2012-07-01
An improved method based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) for the analysis of glycidyl fatty acid esters in oils was developed. The method incorporates stable isotope dilution analysis (SIDA) for quantifying the five target analytes: glycidyl esters of palmitic (C16:0), stearic (C18:0), oleic (C18:1), linoleic (C18:2) and linolenic acid (C18:3). For the analysis, a 10 mg sample of edible oil or fat is dissolved in acetone, spiked with deuterium labelled analogs of glycidyl esters and purified by a two-step chromatography on C18 and normal silica solid phase extraction (SPE) cartridges using methanol and 5% ethyl acetate in hexane, respectively. If the concentration of analytes is expected to be below 0.5 mg/kg, a 0.5 g sample of oil is pre-concentrated first using a silica column. The dried final extract is re-dissolved in 250 μL of a mixture of methanol/isopropanol (1:1, v/v), 15 μL is injected on the analytical C18 LC column and analytes are eluted with 100% methanol. Detection of target glycidyl fatty acid esters is accomplished by LC-MS/MS using positive ion atmospheric pressure chemical ionization operating in multiple reaction monitoring mode, monitoring two ion transitions for each analyte. The method was tested on replicates of a virgin olive oil which was free of glycidyl esters. The method detection limit was calculated to be in the range of 70-150 μg/kg for each analyte using a 10 mg sample and 1-3 μg/kg using a 0.5 g sample of oil. Average recoveries of 5 glycidyl esters spiked at 10, 1 and 0.1 mg/kg were in the range 84% to 108%. The major advantage of our method is the use of SIDA for all analytes using commercially available internal standards and detection limits that are lower by a factor of 5-10 than published methods when a 0.5 g sample of oil is used. Additionally, MS/MS mass chromatograms offer greater specificity than liquid chromatography-mass spectrometry operated in selected ion monitoring mode.
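The isotope-dilution arithmetic behind SIDA can be sketched in one line: the analyte amount is the labeled-spike amount scaled by the analyte/label peak-area ratio and a response factor. The function name, response factor, and numbers below are illustrative assumptions, not values from the paper:

```python
def sida_concentration(area_analyte, area_label, spike_amount_ug,
                       sample_mass_g, response_factor=1.0):
    """Stable isotope dilution quantitation sketch: the analyte amount
    equals the labeled-spike amount scaled by the analyte/label peak-area
    ratio and a response factor obtained by calibrating unlabeled against
    labeled standard. Returns concentration in ug/g (equivalently mg/kg)."""
    return (area_analyte / area_label) * response_factor \
        * spike_amount_ug / sample_mass_g
```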
The method will be applied to the survey of glycidyl fatty acid esters in food products on the Canadian market.
Microfluidic-Based Robotic Sampling System for Radioactive Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jack D. Law; Julia L. Tripp; Tara E. Smith
A novel microfluidic-based robotic sampling system has been developed for sampling and analysis of liquid solutions in nuclear processes. This system couples the use of a microfluidic sample chip with a robotic system designed to allow remote, automated sampling of process solutions in-cell and facilitates direct coupling of the microfluidic sample chip with analytical instrumentation. This system provides the capability for near real time analysis, reduces analytical waste, and minimizes the potential for personnel exposure associated with traditional sampling methods. A prototype sampling system was designed, built and tested. System testing demonstrated operability of the microfluidic-based sample system and identified system modifications to optimize performance.
Jenke, Dennis; Sadain, Salma; Nunez, Karen; Byrne, Frances
2007-01-01
The performance of an ion chromatographic method for measuring citrate and phosphate in pharmaceutical solutions is evaluated. Performance characteristics examined include accuracy, precision, specificity, response linearity, robustness, and the ability to meet system suitability criteria. In general, the method is found to be robust within reasonable deviations from its specified operating conditions. Analytical accuracy is typically 100 +/- 3%, and short-term precision is not more than 1.5% relative standard deviation. The instrument response is linear over a range of 50% to 150% of the standard preparation target concentrations (12 mg/L for phosphate and 20 mg/L for citrate), and the results obtained using a single-point standard versus a calibration curve are essentially equivalent. A small analytical bias is observed and ascribed to the relative purity of the differing salts, used as raw materials in tested finished products and as reference standards in the analytical method. The assay is specific in that no phosphate or citrate peaks are observed in a variety of method-related solutions and matrix blanks (with and without autoclaving). The assay with manual preparation of the eluents is sensitive to the composition of the eluent in the sense that the eluent must be effectively degassed and protected from CO2 ingress during use. In order for the assay to perform effectively, extensive system equilibration and conditioning are required. However, a properly conditioned and equilibrated system can be used to test a number of samples via chromatographic runs that include many (> 50) injections.
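The reported equivalence between a single-point standard and a calibration curve holds exactly for an ideal detector whose response is linear through the origin, as a quick sketch illustrates (function names and numbers are illustrative, not from the study):

```python
import numpy as np

def single_point_quant(sample_resp, std_resp, std_conc):
    """Single-point standard: assumes the response is linear through zero."""
    return sample_resp / std_resp * std_conc

def curve_quant(sample_resp, cal_concs, cal_resps):
    """Linear calibration curve fitted by least squares (with intercept)."""
    slope, intercept = np.polyfit(cal_concs, cal_resps, 1)
    return (sample_resp - intercept) / slope
```

With a nonzero intercept or curvature in the real response, the two approaches diverge, which is why linearity over 50-150% of target must be demonstrated first.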
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.
2017-09-01
In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method and the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On one hand, as compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, as compared to the phase field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. It is also shown in numerical tests that the FESF model can effectively simulate problems with high density/viscosity ratio, high Reynolds number and severe topological interfacial changes.
Accelerated characterization of graphite/epoxy composites
NASA Technical Reports Server (NTRS)
Griffith, W. I.; Morris, D. H.; Brinson, H. F.
1980-01-01
A method to predict the long-term compliance of unidirectional off-axis laminates from short-term laboratory tests is presented. The method uses an orthotropic transformation equation and the time-stress-temperature superposition principle. Short-term tests are used to construct master curves for two off-axis unidirectional laminates with fiber angles of 10 deg and 90 deg. In addition, analytical predictions of long-term compliance for 30 deg and 60 deg laminates are made. Comparisons with experimental data are also given.
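A master curve is built by shifting each short-term compliance curve along the log-time axis until it superposes on the curve at the reference condition. A brute-force sketch of finding one shift factor (illustrative of the superposition idea, not the authors' procedure):

```python
import numpy as np

def shift_to_master(master_logt, master_D, logt, D,
                    shifts=np.linspace(-5.0, 5.0, 2001)):
    """Find the horizontal (log-time) shift that superposes a short-term
    compliance curve (logt, D) onto the master curve, by searching over
    candidate shift factors and comparing on the overlapping region."""
    best_shift, best_err = 0.0, np.inf
    for a in shifts:
        lt = logt + a
        lo = max(lt.min(), master_logt.min())
        hi = min(lt.max(), master_logt.max())
        if hi <= lo:
            continue  # no overlap with the master curve at this shift
        grid = np.linspace(lo, hi, 50)
        err = np.mean((np.interp(grid, lt, D)
                       - np.interp(grid, master_logt, master_D)) ** 2)
        if err < best_err:
            best_shift, best_err = a, err
    return best_shift
```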
Preliminary Tests For Development Of A Non-Pertechnetate Analysis Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diprete, D.; McCabe, D.
2016-09-28
The objective of this task was to develop a non-pertechnetate analysis method that the 222-S lab could easily implement. The initial scope involved working with 222-S laboratory personnel to adapt the existing Tc analytical method to fractionate the non-pertechnetate and pertechnetate. SRNL then developed and tested a method using commercial sorbents containing Aliquat® 336 to extract the pertechnetate (thereby separating it from non-pertechnetate), followed by oxidation, extraction, and stripping steps, and finally analysis by beta counting and mass spectrometry. Several additional items were partially investigated, including impacts of a 137Cs removal step. The method was initially tested on SRS tank waste samples to determine its viability. Although SRS tank waste does not contain non-pertechnetate, testing with it was useful to investigate the compatibility, separation efficiency, interference removal efficacy, and method sensitivity.
40 CFR Appendix 2 to Subpart A of... - Drilling Fluids Toxicity Test (EPA Method 1619)
Code of Federal Regulations, 2014 CFR
2014-07-01
... any excess air removed by flushing the storage containers with nitrogen under pressure anytime the... on an analytical balance, adding the chemical to a 100-milliliter volumetric flask, and bringing the...
40 CFR Appendix 2 to Subpart A of... - Drilling Fluids Toxicity Test (EPA Method 1619)
Code of Federal Regulations, 2012 CFR
2012-07-01
... any excess air removed by flushing the storage containers with nitrogen under pressure anytime the... on an analytical balance, adding the chemical to a 100-milliliter volumetric flask, and bringing the...
40 CFR Appendix 2 to Subpart A of... - Drilling Fluids Toxicity Test (EPA Method 1619)
Code of Federal Regulations, 2013 CFR
2013-07-01
... any excess air removed by flushing the storage containers with nitrogen under pressure anytime the... on an analytical balance, adding the chemical to a 100-milliliter volumetric flask, and bringing the...
40 CFR 60.234 - Test methods and procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... equation: P = Mp Rp where: Mp total mass flow rate of phosphorus-bearing feed, Mg/hr (ton/hr). Rp=P2O5... mass flow rate (Mp) of the phosphorus-bearing feed. (ii) The Association of Official Analytical...
40 CFR 60.234 - Test methods and procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... equation: P = Mp Rp where: Mp total mass flow rate of phosphorus-bearing feed, Mg/hr (ton/hr). Rp=P2O5... mass flow rate (Mp) of the phosphorus-bearing feed. (ii) The Association of Official Analytical...
Learn about the EPA chemists' efforts to develop methods for detecting extremely low concentrations of nerve agents, such as sarin, VX, soman and cyclohexyl sarin, and the blister agent sulfur mustard.
Analytical Method to Estimate the Complex Permittivity of Oil Samples.
Su, Lijuan; Mata-Contreras, Javier; Vélez, Paris; Fernández-Prieto, Armando; Martín, Ferran
2018-03-26
In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated from the measurement of the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
In order to take full advantage of SAR images, it is necessary to obtain high-precision locations for the imagery. During the geometric correction of images, precise image geolocation is important to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. This method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, aimed at solving the Range-Doppler (RD) model. Tests will be conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high-precision orthophoto, results indicate an accuracy of 50 m is attainable with this method. Error sources will be analyzed and some recommendations about improving image location accuracy in future spaceborne SARs will be given.
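The RD model solved by AGM-type methods pairs a range equation with a Doppler equation for each pixel. A heavily simplified sketch, solving a flat-scene, zero-Doppler geometry with Newton's method instead of the ellipsoidal Earth model used in practice (all names and numbers are illustrative):

```python
import numpy as np

def geolocate_zero_doppler(sat_pos, sat_vel, slant_range, iters=50):
    """Newton solve of simplified Range-Doppler equations for a target
    T = (x, y, 0) on a flat scene, zero-Doppler geometry:
        |S - T| = R0   and   V . (S - T) = 0
    (a stand-in for the ellipsoidal Earth model used in practice)."""
    sat_pos = np.asarray(sat_pos, float)
    sat_vel = np.asarray(sat_vel, float)
    # initial guess: ground range along the cross-track (look) direction
    h = sat_pos[2]
    gr = np.sqrt(max(slant_range ** 2 - h ** 2, 1.0))
    look = np.cross(sat_vel, [0.0, 0.0, 1.0])[:2]
    t = sat_pos[:2] + gr * look / np.linalg.norm(look)
    for _ in range(iters):
        d = sat_pos - np.array([t[0], t[1], 0.0])
        r = np.linalg.norm(d)
        g = np.array([r - slant_range, sat_vel @ d])   # residuals
        J = np.array([[-d[0] / r, -d[1] / r],          # Jacobian wrt (x, y)
                      [-sat_vel[0], -sat_vel[1]]])
        step = np.linalg.solve(J, -g)
        t = t + step
        if np.linalg.norm(step) < 1e-9:
            break
    return t
```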
NASA Astrophysics Data System (ADS)
Fales, B. Scott; Shu, Yinan; Levine, Benjamin G.; Hohenstein, Edward G.
2017-09-01
A new complete active space configuration interaction (CASCI) method was recently introduced that uses state-averaged natural orbitals from the configuration interaction singles method (configuration interaction singles natural orbital CASCI, CISNO-CASCI). This method has been shown to perform as well or better than state-averaged complete active space self-consistent field for a variety of systems. However, further development and testing of this method have been limited by the lack of available analytic first derivatives of the CISNO-CASCI energy as well as the derivative coupling between electronic states. In the present work, we present a Lagrangian-based formulation of these derivatives as well as a highly efficient implementation of the resulting equations accelerated with graphical processing units. We demonstrate that the CISNO-CASCI method is practical for dynamical simulations of photochemical processes in molecular systems containing hundreds of atoms.
Llorente-Mirandes, Toni; Rubio, Roser; López-Sánchez, José Fermín
2017-01-01
Here we review recent developments in analytical proposals for the assessment of inorganic arsenic (iAs) content in food products. Interest in the determination of iAs in products for human consumption such as food commodities, wine, and seaweed among others is fueled by the wide recognition of its toxic effects on humans, even at low concentrations. Currently, the need for robust and reliable analytical methods is recognized by various international safety and health agencies, and by organizations in charge of establishing acceptable tolerance levels of iAs in food. This review summarizes the state of the art of analytical methods while highlighting tools for quality assessment of the results, such as the production and evaluation of certified reference materials (CRMs) and the availability of specific proficiency testing (PT) programmes. Because the number of studies dedicated to the subject of this review has increased considerably over recent years, the sources consulted and cited here are limited to those from 2010 to the end of 2015.
Active Control of Inlet Noise on the JT15D Turbofan Engine
NASA Technical Reports Server (NTRS)
Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.
1999-01-01
This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. 
The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
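The feedforward Filtered-X LMS algorithm used in these tests filters the reference signal through a model of the secondary path (control source to error sensor) before the weight update. A single-channel sketch under idealized assumptions (a perfect secondary-path model, one source and one error sensor; signal names and parameters are illustrative, whereas the engine tests used multiple sources and sensors):

```python
import numpy as np

def fxlms(ref, disturbance, sec_path, n_taps=16, mu=0.001):
    """Single-channel feedforward Filtered-X LMS sketch.

    ref         : reference signal (e.g. from an inlet-mounted probe)
    disturbance : noise at the error sensor with no control applied
    sec_path    : FIR model of the secondary path (control source -> error
                  sensor); here the plant and its model are identical.
    Returns the error-sensor signal over time.
    """
    L = len(sec_path)
    w = np.zeros(n_taps)        # adaptive control filter
    x_hist = np.zeros(n_taps)   # reference history feeding the control filter
    y_hist = np.zeros(L)        # control outputs entering the secondary path
    xs_hist = np.zeros(L)       # reference history feeding the path model
    fx_hist = np.zeros(n_taps)  # filtered-reference history for the update
    err = np.empty(len(ref))
    for n in range(len(ref)):
        x_hist = np.roll(x_hist, 1); x_hist[0] = ref[n]
        y = w @ x_hist                                   # anti-noise command
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        err[n] = disturbance[n] + sec_path @ y_hist      # residual at sensor
        xs_hist = np.roll(xs_hist, 1); xs_hist[0] = ref[n]
        fx_hist = np.roll(fx_hist, 1); fx_hist[0] = sec_path @ xs_hist
        w -= mu * err[n] * fx_hist                       # LMS gradient step
    return err
```

For a tonal disturbance correlated with the reference, the error converges toward cancellation, which is the behavior exploited for fan-tone control.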
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srinivasan, M.G.; Kot, C.A.; Mojtahed, M.
The paper describes the analytical modeling, calculations, and results of the posttest nonlinear simulation of high-level seismic testing of the VKL piping system at the HDR Test Facility in Germany. One of the objectives of the tests was to evaluate analytical methods for calculating the nonlinear response of realistic piping systems subjected to high-level seismic excitation that would induce significant plastic deformation. Two out of the six different pipe-support configurations (ranging from a stiff system with struts and snubbers to a very flexible system with practically no seismic supports), subjected to simulated earthquakes, were tested at very high levels. The posttest nonlinear calculations cover the KWU configuration, a reasonably compliant system with only rigid struts. Responses for 800% safe-shutdown-earthquake loading were calculated using the NONPIPE code. The responses calculated with NONPIPE were found generally to have the same time trends as the measurements but contained under-, over-, and correct estimates of peak values, almost in equal proportions. The only exceptions were the peak strut forces, which were underestimated as a group. The scatter in the peak value estimate of displacements and strut forces was smaller than that for the strains. The possible reasons for the differences and the effort on further analysis are discussed.
USDA-ARS?s Scientific Manuscript database
In this study, optimization, extension, and validation of a streamlined, qualitative and quantitative multiclass, multiresidue method was conducted to monitor greater than 100 veterinary drug residues in meat using ultrahigh-performance liquid chromatography – tandem mass spectrometry (UHPLC-MS/MS). I...
Nondestructive assessment of timber bridges using a vibration-based method
Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw
2005-01-01
This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...
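For a single-span, simply supported beam, simple beam theory ties the first natural frequency to the flexural stiffness EI, so a frequency measured by forced vibration can be inverted for global stiffness. A sketch under those assumptions (the numbers are illustrative, not from the bridge tests):

```python
import math

def flexural_stiffness_from_frequency(f1, span, mass_per_length):
    """Back-calculate the flexural stiffness EI of a single-span, simply
    supported beam from its measured first natural frequency f1 (Hz):

        f1 = (pi / 2) * sqrt(EI / (m * L^4))  =>  EI = (2*f1/pi)^2 * m * L^4
    """
    return (2.0 * f1 / math.pi) ** 2 * mass_per_length * span ** 4
```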
ERIC Educational Resources Information Center
Cheng, Anran; Tyne, Rebecca; Kwok, Yu Ting; Rees, Louis; Craig, Lorraine; Lapinee, Chaipat; D'Arcy, Mitch; Weiss, Dominik J.; Salau¨n, Pascal
2016-01-01
Testing water samples for arsenic contamination has become an important water quality issue worldwide. Arsenic usually occurs in very small concentrations, and a sensitive analytical method is needed. We present here a 1-day laboratory module developed to introduce Earth Sciences and/or Chemistry undergraduate students to key aspects of this…
Measurement of Enzyme Kinetics by Use of a Blood Glucometer: Hydrolysis of Sucrose and Lactose
ERIC Educational Resources Information Center
Heinzerling, Peter; Schrader, Frank; Schanze, Sascha
2012-01-01
An alternative analytical method for measuring the kinetic parameters of the enzymes invertase and lactase is described. Invertase hydrolyzes sucrose to glucose and fructose and lactase hydrolyzes lactose to glucose and galactose. In most enzyme kinetics studies, photometric methods or test strips are used to quantify the derivatives of the…
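Kinetic parameters in such experiments are commonly estimated from initial-rate data, e.g. via a Lineweaver-Burk double-reciprocal fit. A sketch with synthetic Michaelis-Menten data (not values from the module):

```python
import numpy as np

def michaelis_menten_fit(S, v):
    """Estimate Vmax and Km from initial-rate data via a Lineweaver-Burk
    (double-reciprocal) line: 1/v = (Km/Vmax)*(1/S) + 1/Vmax."""
    slope, intercept = np.polyfit(1.0 / np.asarray(S, float),
                                  1.0 / np.asarray(v, float), 1)
    vmax = 1.0 / intercept
    km = slope * vmax
    return vmax, km
```

Nonlinear least squares on the untransformed data is preferred with noisy measurements, since the reciprocal transform inflates errors at low rates; on clean data both give the same answer.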
Evaluation of the analytical variability of dipstick protein pads in canine urine.
Giraldi, Marco; Paltrinieri, Saverio; Zatelli, Andrea
2018-06-01
The dipstick is a first-line and inexpensive test that can exclude the presence of proteinuria in dogs. However, no information is available about the analytical variability of canine urine dipstick analysis. The aim of this study was to assess the analytical variability in 2 dipsticks and the inter-operator variability in dipstick interpretation. Canine urine supernatants (n = 174) were analyzed with 2 commercially available dipsticks. Two observers evaluated each result blinded to the other observer and to the results of the other dipstick. Intra- and inter-assay variability was assessed in 5 samples (corresponding to the 5 different semi-quantitative results) tested 10 consecutive times over 5 consecutive days. The agreement between observers and between dipsticks was evaluated with Cohen's k test. Intra-assay repeatability was good (≤3/10 errors), whereas inter-assay variability was higher (from 1/5 to 4/5 discordant results). The concordance between the operators (k = 0.68 and 0.79 for the 2 dipsticks) and that of the dipsticks (k = 0.66 and 0.74 for the 2 operators) was good. However, 1 observer and 1 dipstick overestimated the results compared with the second observer or dipstick. In any case, discordant results accounted for a single unit of the semi-quantitative scale. As for any other method, analytic variability may affect the semi-quantitation of urinary proteins when using the dipstick method. Subjective interpretation of the pad and, to a lesser extent, intrinsic staining properties of the pads could affect the results. Further studies are warranted to evaluate the effect of this variability on clinical decisions. © 2018 American Society for Veterinary Clinical Pathology.
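Cohen's kappa, used above for inter-operator and inter-dipstick agreement, compares observed agreement with the agreement expected by chance. A minimal sketch (the example ratings are made up):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for inter-observer (or inter-dipstick) agreement on a
    semi-quantitative scale; 1 = perfect agreement, 0 = chance level."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    p_exp = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1.0 - p_exp)
```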
Zill, Oliver A.; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A.; Divers, Stephen G.; Hoon, Dave S. B.; Kopetz, E. Scott; Lee, Jeeyun; Nikolinakos, Petros G.; Baca, Arthur M.; Kermani, Bahram G.; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient’s cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing. PMID:26474073
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-31
...This action announces the U.S. Environmental Protection Agency's (EPA's) approval of alternative testing methods for use in measuring the levels of contaminants in drinking water and determining compliance with national primary drinking water regulations. The Safe Drinking Water Act (SDWA) authorizes EPA to approve the use of alternative testing methods through publication in the Federal Register. EPA is using this streamlined authority to make 84 additional methods available for analyzing drinking water samples. This expedited approach provides public water systems, laboratories, and primacy agencies with more timely access to new measurement techniques and greater flexibility in the selection of analytical methods, thereby reducing monitoring costs while maintaining public health protection.
El-Yazbi, Amira F
2017-07-01
Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection with enhanced antiviral potency compared with earlier analogs. Nevertheless, all current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures that were suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.
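The one-way ANOVA used to compare the five methods reduces to an F statistic, the ratio of between-method to within-method mean squares. A minimal sketch with made-up measurement groups (not data from the study):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square. groups is a list of lists of measurements."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

The F value is then compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the P-value; a P-value > 0.05 means the method means are statistically indistinguishable.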
Build-Up Approach to Updating the Mock Quiet Spike Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
When a new aircraft is designed or a modification is made to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flight worthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEM) that have been validated by or correlated to ground vibration test (GVT) data. Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of the structure. In the past, the method used to update a model to test data was "trial and error." This is an inefficient method: running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (close enough being defined by the analyst's updating requirements), then the updating process is completed. If the match does not meet the updating requirements, then the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEMs and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process. DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from GVT to create the target model. It also reads the current analytical model, as well as the design variables and their upper and lower limits.
It performs a modal analysis on this model and modifies it to create an updated model that has mode shapes and frequencies similar to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criterion (MAC) values that allow for a quantified comparison of the updated model versus the target model. A recent application of this code is the F-15B supersonic flight testing platform: NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated. A GVT is done on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test article FEM is correlated to this final GVT.
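The modal assurance criterion used in the comparison above has a standard definition; a minimal sketch, assuming real-valued mode shape vectors sampled at matching degrees of freedom, is:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical mode shape and
    an experimental (GVT) mode shape: 1.0 means the shapes are fully
    correlated, 0.0 means they are orthogonal."""
    phi_a = np.asarray(phi_a, dtype=float)
    phi_e = np.asarray(phi_e, dtype=float)
    num = np.dot(phi_a, phi_e) ** 2
    den = np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e)
    return num / den
```

A mode-matching optimizer of the kind described would drive the diagonal entries of the MAC matrix toward 1 while simultaneously matching the analytical frequencies to the GVT frequencies.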
Block, Darci R; Algeciras-Schimnich, Alicia
2013-01-01
Requests for testing various analytes in serous fluids (e.g., pleural, peritoneal, pericardial effusions) are submitted daily to clinical laboratories. Testing of these fluids deviates from assay manufacturers' specifications, as most laboratory assays are optimized for testing blood or urine specimens. These requests add a burden to clinical laboratories, which need to validate assay performance characteristics in these fluids to exclude matrix interferences (given the different composition of body fluids) while maintaining regulatory compliance. Body fluid testing for a number of analytes has been reported in the literature; however, understanding the clinical utility of these analytes is critical because laboratories must address the analytic and clinical validation requirements, while educating clinicians on proper test utilization. In this article, we review the published data to evaluate the clinical utility of testing for numerous analytes in body fluid specimens. We also highlight the pre-analytic and analytic variables that need to be considered when reviewing published studies in body fluid testing. Finally, we provide guidance on how published studies might (or might not) guide interpretation of test results in today's clinical laboratories.
Compensating for Effects of Humidity on Electronic Noses
NASA Technical Reports Server (NTRS)
Homer, Margie; Ryan, Margaret A.; Manatt, Kenneth; Zhou, Hanying; Manfreda, Allison
2004-01-01
A method of compensating for the effects of humidity on the readouts of electronic noses has been devised and tested. The method is especially appropriate for use in environments in which humidity is not or cannot be controlled, for example, in the vicinity of a chemical spill, which can be accompanied by large local changes in humidity. Heretofore, it has been common practice to treat water vapor as merely another analyte, the concentration of which is determined, along with that of the other analytes, in a computational process based on deconvolution. This practice works well, but leaves room for improvement: changes in humidity can give rise to large changes in electronic-nose responses. If corrections for humidity are not made, the large humidity-induced responses may swamp smaller responses associated with low concentrations of analytes. The present method offers an improvement. The underlying concept is simple: one augments an electronic nose with separate humidity and temperature sensors. The outputs of the humidity and temperature sensors are used to generate values that are subtracted from the readings of the other sensors in an electronic nose to correct for the temperature-dependent contributions of humidity to those readings. Hence, in principle, what remains after correction is the contribution of the analytes only. Laboratory experiments on a first-generation electronic nose have shown that this method is effective and improves the success rate of identification of analyte/water mixtures. Work on a second-generation device was in progress at the time of reporting the information for this article.
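The subtraction scheme described above can be sketched as follows. The three-sensor array and its sensitivity coefficients are hypothetical placeholders, since the article does not specify the actual correction model; in practice the coefficients would be calibrated by exposing the array to clean air at known humidity and temperature set points.

```python
import numpy as np

# Hypothetical sensitivities of a three-sensor array to humidity (%RH)
# and temperature (degC); placeholder values for illustration only.
HUMIDITY_COEF = np.array([0.020, 0.035, 0.010])  # response per %RH
TEMP_COEF = np.array([0.004, 0.001, 0.006])      # response per degC

def compensate(raw, rh, temp, rh_ref=0.0, temp_ref=25.0):
    """Subtract the humidity- and temperature-dependent contribution
    from each chemical sensor reading, leaving (ideally) only the
    analyte response."""
    correction = HUMIDITY_COEF * (rh - rh_ref) + TEMP_COEF * (temp - temp_ref)
    return np.asarray(raw, dtype=float) - correction
```

With the humidity-correlated baseline removed, the remaining responses can be passed to the usual deconvolution step without water vapor swamping the low-concentration analytes.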
Carbon dioxide gas purification and analytical measurement for leading edge 193nm lithography
NASA Astrophysics Data System (ADS)
Riddle Vogt, Sarah; Landoni, Cristian; Applegarth, Chuck; Browning, Matt; Succi, Marco; Pirola, Simona; Macchi, Giorgio
2015-03-01
The use of purified carbon dioxide (CO2) has become a reality for leading edge 193 nm immersion lithography scanners. Traditionally, both dry and immersion 193 nm lithographic processes have constantly purged the optics stack with ultrahigh purity compressed dry air (UHPCDA). CO2 has been utilized for a similar purpose as UHPCDA. Airborne molecular contamination (AMC) purification technologies and analytical measurement methods have been extensively developed to support the lithography tool manufacturers' purity requirements. This paper covers the analytical tests and characterizations carried out to assess impurity removal from 3.0 N CO2 (beverage grade) for its final utilization in 193 nm and EUV scanners.
Kavvalakis, Matthaios P; Tzatzarakis, Manolis N; Theodoropoulou, Eleftheria P; Barbounis, Emmanouil G; Tsakalof, Andreas K; Tsatsakis, Aristidis M
2013-11-01
Imidacloprid (IMI) is a relatively new neuro-active neonicotinoid insecticide and nowadays one of the largest selling insecticides worldwide. In the present study, an LC–APCI–MS based method was developed and validated for the quantification of imidacloprid and its main metabolite 6-chloronicotinic acid (6-ClNA) in urine and hair specimens. The method was tested in biomonitoring of intentionally exposed animals and subsequently applied for biomonitoring of the Cretan urban and rural population. The developed analytical method comprises two main steps: isolation of the analytes from the specimen (solid–liquid extraction with methanol for hair, liquid–liquid extraction with methanol for urine) and subsequent instrumental analysis by LC–APCI–MS. The developed method was applied for the monitoring of IMI and 6-ClNA in hair and urine of laboratory animals (rabbits) intentionally fed with the insecticide at low or high doses (40 and 80 mg kg(-1) weight d(-1), respectively) for 24 weeks. The analytes were detected in the regularly acquired hair and urine specimens, and the levels found were proportional to the feeding dose and time of exposure, with the exception of a slight decline of IMI levels in high-dose-fed rabbits after 24 weeks of feeding. This decline can be explained by the induction of IMI-metabolizing enzymes by the substrate. After testing on animal models, the method was applied for pilot biomonitoring of the Cretan urban (n = 26) and rural (n = 32) population. The rural, but not the urban, population is exposed to IMI, with 21 positive samples (65.6%) and a median concentration of 0.03 ng mg(-1). The maximum concentration detected was 27 ng mg(-1).
Cordeiro, Fernando; Robouch, Piotr; de la Calle, Maria Beatriz; Emteborg, Håkan; Charoud-Got, Jean; Schmitz, Franz
2011-01-01
A collaborative study, International Evaluation Measurement Programme-25a, was conducted in accordance with international protocols to determine the performance characteristics of an analytical method for the determination of dissolved bromate in drinking water. The method should fulfill the analytical requirements of Council Directive 98/83/EC (referred to in this work as the Drinking Water Directive; DWD). The new draft standard method under investigation is based on ion chromatography followed by post-column reaction and UV detection. The collaborating laboratories used the Draft International Organization for Standardization (ISO)/Draft International Standard (DIS) 11206 document. The existing standard method (ISO 15061:2001) is based on ion chromatography using suppressed conductivity detection, in which a preconcentration step may be required for the determination of bromate concentrations as low as 3 to 5 microg/L. The new method includes a dilution step that reduces the matrix effects, thus allowing the determination of bromate concentrations down to 0.5 microg/L. Furthermore, the method aims to minimize any potential interference of chlorite ions. The collaborative study investigated different types of drinking water, such as soft, hard, and mineral water. Other types of water, such as raw water (untreated), swimming pool water, a blank (named river water), and a bromate standard solution, were included as test samples. All test matrixes except the swimming pool water were spiked with high-purity potassium bromate to obtain bromate concentrations ranging from 1.67 to 10.0 microg/L. Swimming pool water was not spiked, as this water was incurred with bromate. Test samples were dispatched to 17 laboratories from nine different countries. Sixteen participants reported results. The repeatability RSD (RSD(r)) ranged from 1.2 to 4.1%, while the reproducibility RSD (RSDR) ranged from 2.3 to 5.9%. 
These precision characteristics compare favorably with those of ISO 15061. A thorough comparison of the performance characteristics is presented in this report. All method performance characteristics obtained in the frame of this collaborative study indicate that the draft ISO/DIS 11206 standard method meets the requirements set down by the DWD. It can, therefore, be considered fit for its intended analytical purpose.
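Repeatability and reproducibility RSDs of the kind reported above are commonly derived from a one-way (laboratory) analysis of variance. A minimal sketch for a balanced design is below; the balance assumption and the example data are illustrative only, since the study's raw results are not reproduced here.

```python
import numpy as np

def precision_rsds(replicates_by_lab):
    """Repeatability and reproducibility RSDs (in %) from a balanced
    one-way layout, ISO 5725 style: each inner list holds one
    laboratory's replicate results for the same test sample."""
    data = np.asarray(replicates_by_lab, dtype=float)
    p, n = data.shape                              # labs, replicates/lab
    lab_means = data.mean(axis=1)
    grand_mean = data.mean()
    # Within-lab (repeatability) variance component.
    s_r2 = ((data - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
    # Between-lab variance component (truncated at zero).
    s_L2 = max(lab_means.var(ddof=1) - s_r2 / n, 0.0)
    s_R2 = s_r2 + s_L2                             # reproducibility variance
    return 100 * np.sqrt(s_r2) / grand_mean, 100 * np.sqrt(s_R2) / grand_mean
```

By construction the reproducibility RSD is at least as large as the repeatability RSD, matching the pattern of the RSDr and RSDR ranges quoted in the abstract.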
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional that is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
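As an illustration of the general plug-in idea, the AMISE-optimal bandwidth for a Gaussian kernel is h = [R(K) / (n mu2(K)^2 R(f''))]^(1/5), where R(f'') is the roughness of the pdf's second derivative. A minimal "normal reference" sketch, which replaces the unknown R(f'') by its value for a normal density (a deliberate simplification; the paper's analytical approximation is more refined than this), is:

```python
import numpy as np

def plugin_bandwidth(x):
    """'Normal reference' plug-in bandwidth for a Gaussian kernel:
    the unknown roughness R(f'') in the AMISE-optimal formula is
    replaced by its value for a normal density of the sample's scale."""
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    sigma = min(x.std(ddof=1), iqr / 1.349)          # robust scale estimate
    r_k = 1.0 / (2.0 * np.sqrt(np.pi))               # R(K), Gaussian kernel
    r_f2 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)   # R(f'') under normality
    return (r_k / (n * r_f2)) ** 0.2
```

Algebraically this collapses to sigma * (4 / (3n))^(1/5); proper plug-in methods instead estimate R(f'') from the data (iteratively, or analytically as in the paper), which is what distinguishes them from this rule-of-thumb baseline.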
Ledermüller, Katrin; Schütz, Martin
2014-04-28
A multistate local CC2 response method for the calculation of analytic energy gradients with respect to nuclear displacements is presented for ground and electronically excited states. The gradient enables the search for equilibrium geometries of extended molecular systems. Laplace transform is used to partition the eigenvalue problem in order to obtain an effective singles eigenvalue problem and adaptive, state-specific local approximations. This leads to an approximation in the energy Lagrangian, which however is shown (by comparison with the corresponding gradient method without Laplace transform) to be of no concern for geometry optimizations. The accuracy of the local approximation is tested and the efficiency of the new code is demonstrated by application calculations devoted to a photocatalytic decarboxylation process of present interest.
Annual banned-substance review: analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans; Schänzer, Wilhelm
2010-04-01
The annual update of the list of prohibited substances and doping methods as issued by the World Anti-Doping Agency (WADA) allows the implementation of most recent considerations of performance manipulation and emerging therapeutics into human sports doping control programmes. The annual banned-substance review for human doping controls critically summarizes recent innovations in analytical approaches that support the efforts of convicting cheating athletes by improved or newly established methods that focus on known as well as newly outlawed substances and doping methods. In the current review, literature published between October 2008 and September 2009 reporting on new and/or enhanced procedures and techniques for doping analysis, as well as aspects relevant to the doping control arena, was considered to complement the 2009 annual banned-substance review.
Bogdanovska-Todorovska, Magdalena; Petrushevska, Gordana; Janevska, Vesna; Spasevska, Liljana; Kostadinova-Kunovska, Slavica
2018-05-20
Accurate assessment of human epidermal growth factor receptor 2 (HER-2) is crucial in selecting patients for targeted therapy. Commonly used methods for HER-2 testing are immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH). Here we present the implementation, optimization and standardization of two FISH protocols using breast cancer samples and assess the impact of pre-analytical and analytical factors on HER-2 testing. Formalin-fixed paraffin-embedded (FFPE) tissue samples from 70 breast cancer patients were tested for HER-2 using the PathVysion™ HER-2 DNA Probe Kit and two different paraffin pretreatment kits, the Vysis/Abbott Paraffin Pretreatment Reagent Kit (40 samples) and the DAKO Histology FISH Accessory Kit (30 samples). The concordance between FISH and IHC results was determined. Pre-analytical and analytical factors (i.e., fixation, baking, digestion, and post-hybridization washing) affected the efficiency and quality of hybridization. The overall hybridization success in our study was 98.6% (69/70); the failure rate was 1.4%. The DAKO pretreatment kit was more time-efficient and resulted in more uniform signals that were easier to interpret, compared to the Vysis/Abbott kit. The overall concordance between IHC and FISH was 84.06%, kappa coefficient 0.5976 (p < 0.0001). The greatest discordance (82%) between IHC and FISH was observed in the IHC 2+ group. A standardized FISH protocol for HER-2 assessment, with high hybridization efficiency, is necessary due to variability in tissue processing and individual tissue characteristics. Differences in the pre-analytical and analytical steps can affect the hybridization quality and efficiency. The use of the DAKO pretreatment kit is time-saving and cost-effective.
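The kappa coefficient quoted above measures chance-corrected agreement between the two assays. A minimal sketch from a square confusion matrix is below; the example matrices are illustrative, since the study's actual IHC-versus-FISH cross-tabulation is not reproduced here.

```python
def cohens_kappa(confusion):
    """Cohen's kappa (chance-corrected agreement) between two raters
    or assays, from a square confusion matrix: rows index one method's
    categories, columns the other's."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    p_observed = sum(confusion[i][i] for i in range(k)) / total
    p_expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(k)
    ) / total ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)
```

A kappa of 0.5976, as reported, sits in the "moderate agreement" band of the common Landis-Koch interpretation scale, consistent with the large discordance concentrated in the equivocal IHC 2+ group.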
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results, owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT).
Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics
Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality by design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results, owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723
Merli, Daniele; Zamboni, Daniele; Protti, Stefano; Pesavento, Maria; Profumo, Antonella
2014-12-01
Lysergic acid diethylamide (LSD) is hardly detectable and quantifiable in biological samples because of its low active dose. Although several analytical tests are available, routine analysis of this drug is rarely performed. In this article, we report a simple and accurate method for the determination of LSD, based on adsorptive stripping voltammetry in DMF/tetrabutylammonium perchlorate, with a linear range of 1-90 ng L(-1) for a deposition time of 50 s. An LOD of 1.4 ng L(-1) and an LOQ of 4.3 ng L(-1) were found. The method can also be applied to biological samples after a simple extraction with 1-chlorobutane. Copyright © 2014 Elsevier B.V. All rights reserved.
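Figures of merit like the LOD and LOQ above are often estimated from the residual scatter of the calibration line, using the ICH-style 3.3·s/slope and 10·s/slope convention. The sketch below uses that convention with made-up calibration data for illustration; the paper's exact estimation procedure may differ.

```python
import numpy as np

def calibration_lod_loq(conc, signal, k_lod=3.3, k_loq=10.0):
    """Least-squares calibration line; LOD and LOQ are estimated as
    k * s / slope, where s is the residual standard deviation of the
    fit (ICH-style convention, used here for illustration)."""
    conc = np.asarray(conc, dtype=float)
    signal = np.asarray(signal, dtype=float)
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = signal - (slope * conc + intercept)
    s = resid.std(ddof=2)          # two fitted parameters consumed
    return slope, intercept, k_lod * s / slope, k_loq * s / slope
```

Note that under this convention the LOQ is always 10/3.3 times the LOD, consistent with the roughly threefold gap between the reported 1.4 and 4.3 ng L(-1) values.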
Modeling and Analysis of Large Amplitude Flight Maneuvers
NASA Technical Reports Server (NTRS)
Anderson, Mark R.
2004-01-01
Analytical methods for stability analysis of large amplitude aircraft motion have been slow to develop because many nonlinear system stability assessment methods are restricted to a state-space dimension of less than three. The proffered approach is to create regional cell-to-cell maps for strategically located two-dimensional subspaces within the higher-dimensional model state-space. These regional solutions capture nonlinear behavior better than linearized point solutions. They also avoid the computational difficulties that emerge when attempting to create a cell map for the entire state-space. Example stability results are presented for a general aviation aircraft and a micro-aerial vehicle configuration. The analytical results are consistent with characteristics that were discovered during previous flight testing.
Mass Spectrometry for Paper-Based Immunoassays: Toward On-Demand Diagnosis.
Chen, Suming; Wan, Qiongqiong; Badu-Tawiah, Abraham K
2016-05-25
Current analytical methods, either point-of-care or centralized detection, are not able to meet recent demands of patient-friendly testing and increased reliability of results. Here, we describe a two-point separation on-demand diagnostic strategy based on a paper-based mass spectrometry immunoassay platform that adopts stable and cleavable ionic probes as mass reporters; these probes make possible sensitive, interruptible, storable, and restorable on-demand detection. In addition, a new touch paper spray method was developed for on-chip, sensitive, and cost-effective analyte detection. This concept is successfully demonstrated via (i) the detection of Plasmodium falciparum histidine-rich protein 2 antigen and (ii) multiplexed and simultaneous detection of cancer antigen 125 and carcinoembryonic antigen.
Commutability of food microbiology proficiency testing samples.
Abdelmassih, M; Polet, M; Goffaux, M-J; Planchon, V; Dierick, K; Mahillon, J
2014-03-01
Food microbiology proficiency testing (PT) is a useful tool to assess the analytical performances among laboratories. PT items should be close to routine samples to accurately evaluate the acceptability of the methods. However, most PT providers distribute exclusively artificial samples such as reference materials or irradiated foods. This raises the issue of the suitability of these samples because the equivalence, or 'commutability', between results obtained on artificial vs. authentic food samples has not been demonstrated. In the clinical field, the use of noncommutable PT samples has led to erroneous evaluation of the performances when different analytical methods were used. This study aimed to provide a first assessment of the commutability of samples distributed in food microbiology PT. REQUASUD and IPH organized 13 food microbiology PTs including 10-28 participants. Three types of PT items were used: genuine food samples, sterile food samples and reference materials. The commutability of the artificial samples (reference materials or sterile samples) was assessed by plotting the distribution of the results on natural and artificial PT samples. This comparison highlighted matrix-correlated issues when nonfood matrices, such as reference materials, were used. Artificially inoculated food samples, on the other hand, raised only isolated commutability issues. In the organization of a PT scheme, authentic or artificially inoculated food samples are necessary to accurately evaluate the analytical performances. Reference materials, used as PT items because of their convenience, may present commutability issues leading to inaccurate penalizing conclusions for methods that would have provided accurate results on food samples. For the first time, the commutability of food microbiology PT samples was investigated. The nature of the samples provided by the organizer turned out to be an important factor because matrix effects can impact the analytical results.
© 2013 The Society for Applied Microbiology.
Cretini, Kari F.; Visser, Jenneke M.; Krauss, Ken W.; Steyer, Gregory D.
2011-01-01
This document identifies the main objectives of the Coastwide Reference Monitoring System (CRMS) vegetation analytical team, which are to provide (1) collection and development methods for vegetation response variables and (2) the ways in which these response variables will be used to evaluate restoration project effectiveness. The vegetation parameters (that is, response variables) collected in CRMS and other coastal restoration projects funded under the Coastal Wetlands Planning, Protection and Restoration Act (CWPPRA) are identified, and the field collection methods for these parameters are summarized. Existing knowledge on community and plant responses to changes in environmental drivers (for example, flooding and salinity) from published literature and from the CRMS and CWPPRA monitoring dataset is used to develop a suite of indices to assess wetland condition in coastal Louisiana. Two indices, the floristic quality index (FQI) and a productivity index, are described for herbaceous and forested vegetation. The FQI for herbaceous vegetation is tested with a long-term dataset from a CWPPRA marsh creation project. Example graphics for this index are provided and discussed. The other indices, an FQI for forest vegetation (that is, trees and shrubs) and productivity indices for herbaceous and forest vegetation, are proposed but not tested. New response variables may be added or current response variables removed as data become available and as our understanding of restoration success indicators develops. Once the indices are fully developed, each will be used by the vegetation analytical team to assess and evaluate CRMS/CWPPRA project and program effectiveness. The vegetation analytical team plans to summarize its results in the form of written reports and/or graphics and present these items to CRMS Federal and State sponsors, restoration project managers, landowners, and other data users for their input.