Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette
2014-08-15
The purpose of the study was to perform a comparative analysis of the technical performance, respective costs, and environmental impact of two invasive analytical methods (HPLC and UV/visible-FTIR) compared with a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performances of the three techniques. Statistical inter-method correlation analysis was performed using non-parametric rank correlation tests. The study's economic component combined calculations of equipment depreciation and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations were found between each of the two spectroscopic techniques and HPLC. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Overall, Raman spectroscopy appears superior on technical, economic and environmental grounds compared with the invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students.
Yune, So Jung; Lee, Sang Yeoup; Im, Sun Ju; Kam, Bee Sung; Baek, Sun Yong
2018-06-05
Task-specific checklists, holistic rubrics, and analytic rubrics are often used for performance assessments. We examined what factors evaluators consider important in holistic scoring of clinical performance assessment, and compared the usefulness of holistic and analytic rubrics, applied separately and in addition to task-specific checklists based on traditional standards. We compared the usefulness of a holistic rubric versus an analytic rubric in measuring the clinical skill performances of 126 third-year medical students who participated in a clinical performance assessment conducted by Pusan National University School of Medicine. We conducted a questionnaire survey of 37 evaluators who used all three evaluation methods (holistic rubric, analytic rubric, and task-specific checklist) for each student. The relationship between the scores on the three evaluation methods was analyzed using Pearson's correlation. Inter-rater agreement was analyzed by the Kappa index. The effect of holistic and analytic rubric scores on the task-specific checklist score was analyzed using multiple regression analysis. Evaluators perceived accuracy and proficiency to be major factors in objective structured clinical examination evaluation, and history taking and physical examination to be major factors in clinical performance examination evaluation. Holistic rubric scores were highly correlated with the scores of the task-specific checklist and the analytic rubric. Relatively low inter-rater agreement was found in clinical performance examinations compared to objective structured clinical examinations. Meanwhile, the holistic and analytic rubric scores explained 59.1% of the task-specific checklist score in objective structured clinical examinations and 51.6% in clinical performance examinations. 
The results show the usefulness of holistic and analytic rubrics in clinical performance assessment, which can be used in conjunction with task-specific checklists for more efficient evaluation.
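The statistical comparisons named above, Pearson correlation between rubric scores and a kappa index for inter-rater agreement, can be sketched in plain Python; the five student scores below are hypothetical, not data from the study.

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two score lists
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def cohen_kappa(r1, r2, categories):
    # Agreement between two raters, corrected for chance agreement
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 5-point scores for five students under two rubrics
holistic = [4, 3, 5, 2, 4]
analytic = [4, 3, 4, 2, 5]
r = pearson_r(holistic, analytic)
kappa = cohen_kappa(holistic, analytic, [1, 2, 3, 4, 5])
```

The regression step in the study (rubric scores predicting checklist scores) would follow the same pattern with an ordinary least-squares fit.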
Analysis of a virtual memory model for maintaining database views
NASA Technical Reports Server (NTRS)
Kinsley, Kathryn C.; Hughes, Charles E.
1992-01-01
This paper presents an analytical model for predicting the performance of a new support strategy for database views. This strategy, called the virtual method, is compared with traditional methods for supporting views. The analytical model's predictions of improved performance by the virtual method are then validated by comparing these results with those achieved in an experimental implementation.
Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.
1998-01-01
Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semi-annually for round-robin laboratory performance comparisons. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to identify possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of the different analytical methods used in the SRS analyses.
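A minimal sketch of the kind of round-robin comparison such a program performs, assuming a simple z-score evaluation against the inter-laboratory consensus; the lab results and the |z| > 2 review threshold are illustrative, not the SRS Project's actual scoring rules.

```python
import statistics

def srs_z_scores(results, assigned_value=None):
    # Score each lab's result as its deviation from the consensus value
    # (here the median, absent an assigned value), in units of the
    # inter-laboratory standard deviation.
    consensus = assigned_value if assigned_value is not None else statistics.median(results)
    s = statistics.stdev(results)
    return [(r - consensus) / s for r in results]

# Hypothetical chloride results (mg/L) from six labs for one SRS
labs = [24.8, 25.1, 25.0, 24.9, 26.2, 25.0]
z = srs_z_scores(labs)
flagged = [i for i, zi in enumerate(z) if abs(zi) > 2]  # labs warranting review
```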
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
European Multicenter Study on Analytical Performance of DxN Veris System HCV Assay.
Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Gismondo, Maria Rita; Hofmann, Jörg; Izopet, Jacques; Kühn, Sebastian; Lombardi, Alessandra; Marcos, Maria Angeles; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W
2017-04-01
The analytical performance of the Veris HCV Assay for use on the new and fully automated Beckman Coulter DxN Veris Molecular Diagnostics System (DxN Veris System) was evaluated at 10 European virology laboratories. Precision, analytical sensitivity, specificity, performance with negative samples, linearity, and performance with hepatitis C virus (HCV) genotypes were evaluated. Precision for all sites showed a standard deviation (SD) of 0.22 log10 IU/ml or lower for each level tested. Analytical sensitivity determined by probit analysis was between 6.2 and 9.0 IU/ml. Specificity on 94 unique patient samples was 100%, and performance with 1,089 negative samples demonstrated 100% not-detected results. Linearity using patient samples was shown from 1.34 to 6.94 log10 IU/ml. The assay demonstrated linearity upon dilution with all HCV genotypes. The Veris HCV Assay demonstrated analytical performance comparable to that of currently marketed HCV assays when tested across multiple European sites. Copyright © 2017 American Society for Microbiology.
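Analytical sensitivity "determined by probit analysis" typically means fitting a normal-CDF hit-rate curve to a dilution panel and reporting the concentration detected 95% of the time. A rough stdlib sketch with a hypothetical panel and a coarse grid-search maximum-likelihood fit (a real analysis would use a proper probit regression):

```python
import math

def phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def probit_lod95(conc, hits, trials):
    # Model: P(detect) = Phi((log10(c) - mu) / sigma).  Fit (mu, sigma) by
    # maximum likelihood over a coarse grid, then return the concentration
    # with a 95% detection rate: 10 ** (mu + 1.645 * sigma).
    logc = [math.log10(c) for c in conc]
    best_ll, best = -math.inf, None
    for mu in (m / 100 for m in range(-100, 201)):         # mu in [-1.0, 2.0]
        for sigma in (s / 100 for s in range(5, 101, 5)):  # sigma in [0.05, 1.0]
            ll = 0.0
            for x, k, n in zip(logc, hits, trials):
                p = min(max(phi((x - mu) / sigma), 1e-9), 1.0 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best_ll, best = ll, (mu, sigma)
    mu, sigma = best
    return 10.0 ** (mu + 1.6448536 * sigma)

# Hypothetical dilution panel: 24 replicates per concentration (IU/ml)
conc = [2.0, 4.0, 6.0, 8.0, 12.0]
hits = [6, 14, 20, 23, 24]
trials = [24, 24, 24, 24, 24]
lod = probit_lod95(conc, hits, trials)
```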
NASA Technical Reports Server (NTRS)
Tirres, Lizet
1991-01-01
An evaluation of the aerodynamic performance of the solid version of an Allison-designed cooled radial turbine was conducted at NASA Lewis' Warm Turbine Test Facility. The resulting pressure and temperature measurements were used to calculate vane, rotor, and overall stage performance. These performance results were then compared with the analytical results obtained using NASA's MTSB (MERIDL-TSONIC-BLAYER) code.
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
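The analytic derivation the abstract refers to rests on the standard identity SE(θ) = 1/√I(θ), with I(θ) the test information summed over items. A sketch under a 2PL item response model with hypothetical item parameters:

```python
import math

def item_info_2pl(theta, a, b):
    # Fisher information of a 2PL item at ability theta:
    # I(theta) = a^2 * P * (1 - P), with P the logistic response probability
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def se_of_theta(theta, items):
    # Analytic standard error of the ability estimate:
    # 1 / sqrt(total test information at theta)
    info = sum(item_info_2pl(theta, a, b) for a, b in items)
    return 1.0 / math.sqrt(info)

# Hypothetical 10-item module: (discrimination a, difficulty b) per item
items = [(1.2, -1.0), (0.8, -0.5), (1.5, 0.0), (1.0, 0.2), (1.3, 0.5),
         (0.9, -0.2), (1.1, 0.8), (1.4, -0.8), (1.0, 1.0), (1.2, 0.3)]

# SE across theta levels, as in the proposed MST test information method
se_curve = {t / 2: se_of_theta(t / 2, items) for t in range(-6, 7)}
```

The curve behaves as expected: precision is best where item difficulties cluster and degrades at extreme abilities, which is the pattern a simulation would otherwise have to demonstrate.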
An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and the turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. The exact agreement between analytical performance and experimental performance is contingent upon the proper selection of a blading-loss parameter.
Median of patient results as a tool for assessment of analytical stability.
Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György
2015-06-15
In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor long-term analytical stability as a supplement to internal control procedures is often needed. Patient data from daily internal control schemes were used for monthly appraisal of analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes measured on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and the desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable source of material, as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of analytical practice. Copyright © 2015 Elsevier B.V. All rights reserved.
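A minimal sketch of the patient-median approach, assuming monthly medians are compared with a percentage bias limit derived from biological variation; the sodium values and the ~0.3% limit below are illustrative only.

```python
import statistics

def monthly_median_drift(monthly_results, baseline_median, allowable_bias_pct):
    # Flag months whose patient-result median deviates from the baseline
    # median by more than the allowable analytical bias (a specification
    # derived from biological variation).
    flags = {}
    for month, results in monthly_results.items():
        med = statistics.median(results)
        bias_pct = 100.0 * (med - baseline_median) / baseline_median
        flags[month] = abs(bias_pct) > allowable_bias_pct
    return flags

# Hypothetical sodium results (mmol/L); a tight ~0.3% bias spec is assumed
data = {
    "2015-01": [139, 140, 141, 138, 140, 139, 141],
    "2015-02": [142, 143, 141, 144, 142, 143, 142],
}
flags = monthly_median_drift(data, baseline_median=140.0, allowable_bias_pct=0.3)
```

In practice each month would contain hundreds of patient results, which is what makes the median robust against individual outliers.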
Holistic versus Analytic Evaluation of EFL Writing: A Case Study
ERIC Educational Resources Information Center
Ghalib, Thikra K.; Al-Hattami, Abdulghani A.
2015-01-01
This paper investigates the performance of holistic and analytic scoring rubrics in the context of EFL writing. Specifically, the paper compares EFL students' scores on a writing task using holistic and analytic scoring rubrics. The data for the study was collected from 30 participants attending an English undergraduate program in a Yemeni…
NASA Astrophysics Data System (ADS)
Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili
2012-04-01
In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified against the time-stepping finite element method: the performance of the PMV machine predicted analytically is quantitatively compared with the finite element results, and the two agree well. Finally, experimental results are given to further show the validity of the analysis.
O'Neal, Wanda K; Anderson, Wayne; Basta, Patricia V; Carretta, Elizabeth E; Doerschuk, Claire M; Barr, R Graham; Bleecker, Eugene R; Christenson, Stephanie A; Curtis, Jeffrey L; Han, Meilan K; Hansel, Nadia N; Kanner, Richard E; Kleerup, Eric C; Martinez, Fernando J; Miller, Bruce E; Peters, Stephen P; Rennard, Stephen I; Scholand, Mary Beth; Tal-Singer, Ruth; Woodruff, Prescott G; Couper, David J; Davis, Sonia M
2014-01-08
As part of the longitudinal Chronic Obstructive Pulmonary Disease (COPD) study, Subpopulations and Intermediate Outcome Measures in COPD Study (SPIROMICS), blood samples are being collected from 3200 subjects with the goal of identifying blood biomarkers for sub-phenotyping patients and predicting disease progression. To determine the most reliable sample type for measuring specific blood analytes in the cohort, a pilot study was performed on a subset of 24 subjects comparing serum, ethylenediaminetetraacetic acid (EDTA) plasma, and EDTA plasma with proteinase inhibitors (P100). 105 analytes, chosen for potential relevance to COPD and arranged in 12 multiplex and one simplex platform (Myriad-RBM), were evaluated in duplicate in the three sample types from the 24 subjects. The reliability coefficient and the coefficient of variation (CV) were calculated. The performance of each analyte and mean analyte levels were evaluated across sample types. 20% of analytes were not consistently detectable in any sample type. Higher reliability and/or smaller CV were determined for 12 analytes in EDTA plasma compared to serum, and for 11 analytes in serum compared to EDTA plasma. While reliability measures were similar for EDTA plasma and P100 plasma for a majority of analytes, CV was modestly increased in P100 plasma for eight analytes. Each analyte within a multiplex produced independent measurement characteristics, complicating the selection of sample type for individual multiplexes. There were notable detectability and measurability differences between serum and plasma. Multiplexing may not be ideal if large reliability differences exist across analytes measured within the multiplex, especially if values differ based on sample type. For some analytes, the large CV should be considered during experimental design, and the use of duplicate and/or triplicate samples may be necessary. 
These results should prove useful for studies evaluating selection of samples for evaluation of potential blood biomarkers.
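The two statistics the pilot study computed per analyte, a duplicate-based CV and a reliability coefficient, can be sketched as follows, assuming an ICC-style variance decomposition from the duplicate pairs (the actual SPIROMICS computation may differ in detail):

```python
import statistics

def duplicate_cv_and_reliability(pairs):
    # pairs: one (replicate 1, replicate 2) tuple per subject, one analyte.
    n = len(pairs)
    grand_mean = sum(a + b for a, b in pairs) / (2 * n)
    # Within-subject variance estimated from the duplicate differences
    within_var = sum((a - b) ** 2 / 2.0 for a, b in pairs) / n
    # Between-subject variance: variance of subject means minus within/2
    subject_means = [(a + b) / 2.0 for a, b in pairs]
    between_var = max(statistics.variance(subject_means) - within_var / 2.0, 0.0)
    cv_pct = 100.0 * within_var ** 0.5 / grand_mean
    reliability = between_var / (between_var + within_var)
    return cv_pct, reliability

# Hypothetical duplicate measurements for four subjects
pairs = [(10.0, 10.2), (20.0, 19.8), (30.1, 29.9), (40.0, 40.2)]
cv_pct, reliability = duplicate_cv_and_reliability(pairs)
```

A high reliability with low CV, as in this synthetic example, is the profile that would favor one sample type over another in the study's comparison.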
Analytical Study of 90Sr Betavoltaic Nuclear Battery Performance Based on p-n Junction Silicon
NASA Astrophysics Data System (ADS)
Rahastama, Swastya; Waris, Abdul
2016-08-01
Previously, an analytical calculation for a 63Ni p-n junction betavoltaic battery was published. As a baseline, we reproduced the analytical simulation of the 63Ni betavoltaic battery and compared it with the previous results using the same battery design. We then calculated its maximum power output and radiation-electricity conversion efficiency using a semiconductor analysis method. The same method was applied to calculate and analyse the performance of a 90Sr betavoltaic battery. The aim of this project is to compare the analytical performance of the 90Sr betavoltaic battery with that of the 63Ni battery, and to assess the influence of source activity on performance. Since it has a higher power density, the 90Sr betavoltaic battery yields more power than the 63Ni battery but a lower radiation-electricity conversion efficiency. However, beta particles emitted from the 90Sr source travel further inside the silicon, corresponding to the stopping range of the beta particles; thus the 90Sr betavoltaic battery could be made thicker than the 63Ni battery to achieve a higher conversion efficiency.
A European multicenter study on the analytical performance of the VERIS HBV assay.
Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Izopet, Jacques; Lombardi, Alessandra; Mancon, Alessandro; Marcos, Maria Angeles; Sauné, Karine; O Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel
Hepatitis B viral load monitoring is an essential part of managing patients with chronic Hepatitis B infection. Beckman Coulter has developed the VERIS HBV Assay for use on the fully automated Beckman Coulter DxN VERIS Molecular Diagnostics System. OBJECTIVES: To evaluate the analytical performance of the VERIS HBV Assay at multiple European virology laboratories. Precision, analytical sensitivity, negative sample performance, linearity, and performance with major HBV genotypes/subtypes were evaluated for the VERIS HBV Assay. Precision showed an SD of 0.15 log10 IU/mL or less for each level tested. Analytical sensitivity determined by probit analysis was between 6.8 and 8.0 IU/mL. Clinical specificity on 90 unique patient samples was 100.0%. Performance with 754 negative samples demonstrated 100.0% not-detected results, and a carryover study showed no cross-contamination. Linearity using clinical samples was shown from 1.23 to 8.23 log10 IU/mL, and the assay detected and showed linearity with major HBV genotypes/subtypes. The VERIS HBV Assay demonstrated analytical performance comparable to other currently marketed assays for HBV DNA monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.
Loeffler, Juergen; Mengoli, Carlo; Springer, Jan; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Klingspor, Lena; Lagrou, Katrien; Melchers, Willem J G; Morton, C Oliver; Barnes, Rosemary A; Donnelly, J Peter; White, P Lewis
2015-09-01
The use of serum or plasma for Aspergillus PCR testing facilitates automated and standardized technology. Recommendations for serum testing are available, and while serum and plasma are regularly considered interchangeable for use in fungal diagnostics, differences in galactomannan enzyme immunoassay (GM-EIA) performance have been reported and are attributed to clot formation. Therefore, it is important to assess plasma PCR testing to determine if previous recommendations for serum are applicable and also to compare analytical performance with that of serum PCR. Molecular methods testing serum and plasma were compared through multicenter distribution of quality control panels, with additional studies to investigate the effect of clot formation and blood fractionation on DNA availability. Analytical sensitivity and time to positivity (TTP) were compared, and a regression analysis was performed to identify variables that enhanced plasma PCR performance. When testing plasma, sample volume, preextraction-to-postextraction volume ratio, PCR volume, duplicate testing, and the use of an internal control for PCR were positively associated with performance. When whole-blood samples were spiked and then fractionated, the analytical sensitivity and TTP were superior when testing plasma. Centrifugation had no effect on DNA availability, whereas the presence of clot material significantly lowered the concentration (P = 0.028). Technically, there are no major differences in the molecular processing of serum and plasma, but the formation of clot material potentially reduces available DNA in serum. During disease, Aspergillus DNA burdens in blood are often at the limits of PCR performance. Using plasma might improve performance while maintaining the methodological simplicity of serum testing. Copyright © 2015 Loeffler et al.
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.
Teachable, high-content analytics for live-cell, phase contrast movies.
Alworth, Samuel V; Watanabe, Hirotada; Lee, James S J
2010-09-01
CL-Quant is a new solution platform for broad, high-content, live-cell image analysis. Powered by novel machine learning technologies and teach-by-example interfaces, CL-Quant provides a platform for the rapid development and application of scalable, high-performance, and fully automated analytics for a broad range of live-cell microscopy imaging applications, including label-free phase contrast imaging. The authors used CL-Quant to teach off-the-shelf universal analytics, called standard recipes, for cell proliferation, wound healing, cell counting, and cell motility assays using phase contrast movies collected on the BioStation CT and BioStation IM platforms. Similar to application modules, standard recipes are intended to work robustly across a wide range of imaging conditions without requiring customization by the end user. The authors validated the performance of the standard recipes by comparing their performance with truth created manually, or by custom analytics optimized for each individual movie (and therefore yielding the best possible result for the image), and validated by independent review. The validation data show that the standard recipes' performance is comparable with the validated truth with low variation. The data validate that the CL-Quant standard recipes can provide robust results without customization for live-cell assays in broad cell types and laboratory settings.
Fischer, David J.; Hulvey, Matthew K.; Regel, Anne R.; Lunte, Susan M.
2012-01-01
The fabrication and evaluation of different electrode materials and electrode alignments for microchip electrophoresis with electrochemical (EC) detection is described. The influences of electrode material, both metal and carbon-based, on sensitivity and limits of detection (LOD) were examined. In addition, the effects of working electrode alignment on analytical performance (in terms of peak shape, resolution, sensitivity, and LOD) were directly compared. Using dopamine (DA), norepinephrine (NE), and catechol (CAT) as test analytes, it was found that pyrolyzed photoresist electrodes with end-channel alignment yielded the lowest limit of detection (35 nM for DA). In addition to being easier to implement, end-channel alignment also offered better analytical performance than off-channel alignment for the detection of all three analytes. In-channel electrode alignment resulted in a 3.6-fold reduction in peak skew and reduced peak tailing by a factor of 2.1 for catechol in comparison to end-channel alignment. PMID:19802847
Bias Assessment of General Chemistry Analytes using Commutable Samples.
Koerbin, Gus; Tate, Jillian R; Ryan, Julie; Jones, Graham Rd; Sikaris, Ken A; Kanowski, David; Reed, Maxine; Gill, Janice; Koumantakis, George; Yen, Tina; St John, Andrew; Hickman, Peter E; Simpson, Aaron; Graham, Peter
2014-11-01
Harmonisation of reference intervals for routine general chemistry analytes has been a goal for many years, and analytical bias may prevent it. To determine whether analytical bias is present when comparing methods, commutable samples (samples that have the same properties as the clinical samples routinely analysed) should be used as reference samples, to eliminate the possibility of matrix effects. The use of commutable samples has improved the identification of unacceptable analytical performance in the Netherlands and Spain. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) has undertaken a pilot study using commutable samples in an attempt not only to determine country-specific reference intervals but to make them comparable between countries. Australia and New Zealand, through the Australasian Association of Clinical Biochemists (AACB), have also undertaken an assessment of analytical bias using commutable samples and determined that, of the 27 general chemistry analytes studied, 19 showed between-method biases sufficiently small as not to prevent harmonisation of reference intervals. Application of evidence-based approaches, including the determination of analytical bias using commutable material, is necessary when seeking to harmonise reference intervals.
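The "sufficiently small" bias judgement is commonly made against the desirable specification derived from biological variation, bias < 0.25·√(CVi² + CVg²). A sketch with hypothetical glucose-like CVs; the numbers are illustrative, not the AACB study's data:

```python
def desirable_bias_limit(cv_within_pct, cv_between_pct):
    # Desirable specification for analytical bias from biological variation:
    # bias < 0.25 * sqrt(CVi^2 + CVg^2), where CVi and CVg are the within-
    # and between-subject biological variations (as percentages).
    return 0.25 * (cv_within_pct ** 2 + cv_between_pct ** 2) ** 0.5

# Hypothetical CVs for a glucose-like analyte
limit = desirable_bias_limit(5.6, 7.5)
observed_bias_pct = 1.8
harmonisable = abs(observed_bias_pct) < limit
```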
Batt, Angela L; Furlong, Edward T; Mash, Heath E; Glassmeyer, Susan T; Kolpin, Dana W
2017-02-01
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs which allowed us to assess and compare performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not seem suitable for these analytes are overviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. Published by Elsevier B.V.
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was built using the OpenMDAO framework. Pycycle provides analytic derivatives, allowing efficient use of gradient-based optimization methods on engine cycle models without requiring finite-difference derivative approximations. To demonstrate this, a gradient-based design optimization was performed on a turbofan engine model. The results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
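As a minimal, self-contained sketch of why analytic derivatives help (the model function here is a made-up stand-in, not an actual Pycycle component): a hand-coded derivative is exact, while a forward finite-difference approximation carries truncation and round-off error and costs an extra function evaluation per design variable.

```python
import math

def thrust_model(x):
    # Hypothetical smooth cycle-performance function of one design variable
    return x * math.exp(-x / 5.0)

def d_thrust_analytic(x):
    # Exact derivative, analogous to the analytic derivatives Pycycle supplies
    return math.exp(-x / 5.0) * (1.0 - x / 5.0)

def d_thrust_fd(x, h=1e-6):
    # Forward finite-difference approximation: one extra model evaluation,
    # and an O(h) truncation error on top of round-off
    return (thrust_model(x + h) - thrust_model(x)) / h

x = 2.0
print(abs(d_thrust_analytic(x) - d_thrust_fd(x)))  # small but nonzero error
```

A gradient-based optimizer fed `d_thrust_analytic` converges on exact gradients, whereas the finite-difference version both costs more evaluations and injects noise into the line search, which is the trade-off the abstract reports.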
The Use and Abuse of Limits of Detection in Environmental Analytical Chemistry
Brown, Richard J. C.
2008-01-01
The limit of detection (LoD) serves as an important method performance measure that is useful for the comparison of measurement techniques and the assessment of likely signal-to-noise performance, especially in environmental analytical chemistry. However, the LoD is only truly related to the precision characteristics of the analytical instrument employed for the analysis and the content of analyte in the blank sample. This article discusses how other criteria, such as sampling volume, can serve to artificially distort the quoted LoD and make comparisons between analytical methods inequitable. To compare LoDs between methods properly, it is necessary to state clearly all of the input parameters relating to the measurements that have been used in the calculation of the LoD. Additionally, the article argues that using LoDs in contexts other than the comparison of the attributes of analytical methods, in particular when reporting analytical results, may be confusing, less informative than quoting the actual result with an accompanying statement of uncertainty, and may act to bias descriptive statistics. PMID:18690384
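As an illustration of how an LoD is commonly computed from blank measurements (a textbook convention, not the specific calculation debated in the article; the blank values below are invented):

```python
import statistics

def limit_of_detection(blank_signals, k=3.0):
    """Classic LoD estimate: mean blank signal plus k standard deviations.

    k = 3 is the conventional choice; other conventions (e.g. 3.29 under
    ISO/CLSI assumptions about alpha and beta errors) change only the
    multiplier, which is exactly why the inputs must be stated when
    comparing LoDs between methods."""
    mean_blank = statistics.mean(blank_signals)
    sd_blank = statistics.stdev(blank_signals)
    return mean_blank + k * sd_blank

# Invented replicate blank measurements (arbitrary signal units)
blanks = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16, 0.13]
print(round(limit_of_detection(blanks), 3))  # ~0.183
```

Dividing such a signal-domain LoD by different assumed sampling volumes is one way the quoted concentration LoD can be made to look better without any change in instrument precision.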
Reliability-based structural optimization: A proposed analytical-experimental study
NASA Technical Reports Server (NTRS)
Stroud, W. Jefferson; Nikolaidis, Efstratios
1993-01-01
An analytical and experimental study for assessing the potential of reliability-based structural optimization is proposed and described. In the study, competing designs obtained by deterministic and reliability-based optimization are compared. The experimental portion of the study is practical because the structure selected is a modular, actively and passively controlled truss that consists of many identical members, and because the competing designs are compared in terms of their dynamic performance and are not destroyed if failure occurs. The analytical portion of this study is illustrated on a 10-bar truss example. In the illustrative example, it is shown that reliability-based optimization can yield a design that is superior to an alternative design obtained by deterministic optimization. These analytical results provide motivation for the proposed study, which is underway.
Eckfeldt, J H; Copeland, K R
1993-04-01
Proficiency testing using stabilized control materials has been used for decades as a means of monitoring and improving performance in the clinical laboratory. Often, the commonly used proficiency testing materials exhibit "matrix effects" that cause them to behave differently from fresh human specimens in certain clinical analytic systems. Because proficiency testing is the primary method in which regulatory agencies have chosen to evaluate clinical laboratory performance, the College of American Pathologists (CAP) has proposed guidelines for investigating the influence of matrix effects on their Survey results. The purpose of this investigation was to determine the feasibility, usefulness, and potential problems associated with this CAP Matrix Effect Analytical Protocol, in which fresh patient specimens and CAP proficiency specimens are analyzed simultaneously by a field method and a definitive, reference, or other comparative method. The optimal outcome would be that both the fresh human and CAP Survey specimens agree closely with the comparative method result. However, this was not always the case. Using several different analytic configurations, we were able to demonstrate matrix and calibration biases for several of the analytes investigated.
Stochastic modelling of the hydrologic operation of rainwater harvesting systems
NASA Astrophysics Data System (ADS)
Guo, Rui; Guo, Yiping
2018-07-01
Rainwater harvesting (RWH) systems are an effective low impact development practice that provides both water supply and runoff reduction benefits. A stochastic modelling approach is proposed in this paper to quantify the water supply reliability and stormwater capture efficiency of RWH systems. The input rainfall series is represented as a marked Poisson process and two typical water use patterns are analytically described. The stochastic mass balance equation is solved analytically, and based on this, explicit expressions relating system performance to system characteristics are derived. The performances of a wide variety of RWH systems located in five representative climatic regions of the United States are examined using the newly derived analytical equations. Close agreements between analytical and continuous simulation results are shown for all the compared cases. In addition, an analytical equation is obtained expressing the required storage size as a function of the desired water supply reliability, average water use rate, as well as rainfall and catchment characteristics. The equations developed herein constitute a convenient and effective tool for sizing RWH systems and evaluating their performances.
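The paper's closed-form expressions are not reproduced here, but the continuous-simulation counterpart the authors validate against can be sketched as a daily mass balance (the operating rule, parameter names, and numbers below are illustrative assumptions, not the paper's formulation):

```python
def simulate_rwh(rainfall_mm, area_m2, storage_m3, demand_m3, runoff_coeff=0.9):
    """Daily mass-balance simulation of a rainwater harvesting tank.

    Operating rule: withdraw demand from carry-over storage, then add the
    day's inflow and spill any excess. Returns water-supply reliability
    (fraction of days demand is fully met) and stormwater capture
    efficiency (fraction of roof runoff retained)."""
    stored = 0.0
    days_met = 0
    captured = 0.0
    total_runoff = 0.0
    for rain in rainfall_mm:
        inflow = runoff_coeff * area_m2 * rain / 1000.0  # m3 of roof runoff
        total_runoff += inflow
        supplied = min(stored, demand_m3)                # meet demand from storage
        stored -= supplied
        if supplied >= demand_m3:
            days_met += 1
        accepted = min(inflow, storage_m3 - stored)      # fill tank, spill the rest
        stored += accepted
        captured += accepted
    reliability = days_met / len(rainfall_mm)
    capture_eff = captured / total_runoff if total_runoff else 1.0
    return reliability, capture_eff
```

Running such a simulation over a long rainfall record for many storage sizes is the brute-force way to produce the storage-reliability relationship that the paper's analytical equations deliver directly.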
Pereira, Jorge; Câmara, José S; Colmsjö, Anders; Abdel-Rehim, Mohamed
2014-06-01
Sample preparation is an important analytical step regarding the isolation and concentration of desired components from complex matrices and greatly influences their reliable and accurate analysis and data quality. It is the most labor-intensive and error-prone process in analytical methodology and, therefore, may influence the analytical performance of the target analytes quantification. Many conventional sample preparation methods are relatively complicated, involving time-consuming procedures and requiring large volumes of organic solvents. Recent trends in sample preparation include miniaturization, automation, high-throughput performance, on-line coupling with analytical instruments and low-cost operation through extremely low volume or no solvent consumption. Micro-extraction techniques, such as micro-extraction by packed sorbent (MEPS), have these advantages over the traditional techniques. This paper gives an overview of MEPS technique, including the role of sample preparation in bioanalysis, the MEPS description namely MEPS formats (on- and off-line), sorbents, experimental and protocols, factors that affect the MEPS performance, and the major advantages and limitations of MEPS compared with other sample preparation techniques. We also summarize MEPS recent applications in bioanalysis. Copyright © 2014 John Wiley & Sons, Ltd.
Real-Time Analytics for the Healthcare Industry: Arrhythmia Detection.
Agneeswaran, Vijay Srinivas; Mukherjee, Joydeb; Gupta, Ashutosh; Tonpay, Pranay; Tiwari, Jayati; Agarwal, Nitin
2013-09-01
It is time for the healthcare industry to move from the era of "analyzing our health history" to the age of "managing the future of our health." In this article, we illustrate the importance of real-time analytics across the healthcare industry by providing a generic mechanism to reengineer traditional analytics expressed in the R programming language into Storm-based real-time analytics code. This is a powerful abstraction, since most data scientists write their analytics in R but are unclear on how to make them work in real time on high-velocity data. Our paper focuses on the applications of a healthcare analytics scenario, specifically the importance of electrocardiogram (ECG) monitoring. A physician can use our framework to compare ECG reports by categorization and consequently detect arrhythmia. The framework reads ECG signals and uses a machine learning-based categorizer that runs within a Storm environment to compare different ECG signals. The paper also presents performance studies of the framework to illustrate the throughput and accuracy trade-off in real-time analytics.
Apanovich, V V; Bezdenezhnykh, B N; Sams, M; Jääskeläinen, I P; Alexandrov, YuI
2018-01-01
It has been proposed that Western cultures (USA, Western Europe) are mostly characterized by competitive forms of social interaction, whereas Eastern cultures (Japan, China, Russia) are mostly characterized by cooperative forms. It has also been stated that thinking in Eastern countries is predominantly holistic and in Western countries predominantly analytic. Based on this, we hypothesized that subjects with analytic vs. holistic thinking styles show differences in decision making under different types of social interaction. We investigated behavioural and brain-activity differences between subjects with analytic and holistic thinking during a choice reaction time (ChRT) task, wherein the subjects either cooperated, competed (in pairs), or performed the task without interaction with other participants. Healthy Russian subjects (N=78) were divided into two groups based on having analytic or holistic thinking, as determined with an established questionnaire. We measured reaction times as well as event-related brain potentials. There were significant differences in task performance between subjects with analytic and holistic thinking across the interaction conditions. Both behavioral performance and physiological measures exhibited higher variance in holistic than in analytic subjects. Differences in P300 amplitude and latency suggest that decision making was easier for the holistic subjects in the cooperation condition, in contrast to analytic subjects, for whom decision making based on these measures seemed easier in the competition condition. The P300 amplitude was higher in the individual condition than in the collective conditions. Overall, our results support the notion that the brains of analytic and holistic subjects work differently in different types of social interaction conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Westgard, Sten A
2016-06-01
To assess the analytical performance of instruments and methods through external quality assessment and proficiency testing data on the Sigma scale. A representative report from five different EQA/PT programs around the world (2 US, 1 Canadian, 1 UK, and 1 Australasian) was accessed. The instrument group standard deviations were used as surrogate estimates of instrument imprecision. Performance specifications from the US CLIA proficiency testing criteria were used to establish a common quality goal. Then Sigma-metrics were calculated to grade the analytical performance. Different methods have different Sigma-metrics for each analyte reviewed. Summary Sigma-metrics estimate the percentage of the chemistry analytes that are expected to perform above Five Sigma, which is where optimized QC design can be implemented. The range of performance varies from 37% to 88%, exhibiting significant differentiation between instruments and manufacturers. Median Sigmas for the different manufacturers in three analytes (albumin, glucose, sodium) showed significant differentiation. Chemistry tests are not commodities. Quality varies significantly from manufacturer to manufacturer, instrument to instrument, and method to method. The Sigma-assessments from multiple EQA/PT programs provide more insight into the performance of methods and instruments than any single program by itself. It is possible to produce a ranking of performance by manufacturer, instrument and individual method. Laboratories seeking optimal instrumentation would do well to consult this data as part of their decision-making process. To confirm that these assessments are stable and reliable, a longer term study should be conducted that examines more results over a longer time period. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
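The Sigma-metric referred to above follows the standard laboratory-QC definition: allowable total error minus absolute bias, divided by imprecision, all in percent. A minimal sketch with invented numbers (the CLIA allowable total error for glucose is taken as 10% purely for illustration):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric for a laboratory method: (TEa - |bias|) / CV,
    all expressed as percentages of the analyte concentration.
    TEa is the allowable total error, e.g. a CLIA proficiency-testing
    limit; CV is the method imprecision (here, the instrument-group
    SD used as a surrogate)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative only: a glucose method with 1% bias and 1.5% CV
# against a 10% allowable total error
print(round(sigma_metric(10.0, 1.0, 1.5), 2))  # → 6.0
```

A result above Five Sigma is the threshold the abstract mentions for simplified, optimized QC design; values near Three Sigma indicate a method that consumes most of its error budget on routine variation.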
Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T
2017-01-01
Measurement of inulin clearance is considered the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is, on the other hand, easier to calculate using various creatinine- and/or cystatin C (Cys C)-based formulas. However, different and non-interchangeable analytical methods exist for the determination of serum creatinine (Scr) and Cys C. Given that different analytical methods for the determination of creatinine and Cys C were used to validate the existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas, using either Scr and Cys C values determined by the analytical method originally employed for validation or values obtained by an alternative analytical method, to evaluate any possible effects on performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations for which this analytical method had originally been used for validation. These same values were then also used in GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference in the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. 
On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories, subjective and analytical, depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest that analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
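The Kruskal-Wallis comparison among hydrographers can be sketched as follows. The daily-error values are invented for illustration, and the H statistic (computed here without tie correction) is judged against the chi-square critical value for two degrees of freedom at the 0.05 level, 5.991:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction): rank all values
    jointly, then compare mean ranks across groups."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n_total * (n_total + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3.0 * (n_total + 1)

# Invented daily discharge-estimation errors (m3/s) for three hydrographers
a = [0.12, -0.05, 0.30, 0.18, -0.22]
b = [0.45, 0.38, 0.52, 0.41, 0.36]
c = [-0.10, 0.02, -0.15, 0.08, -0.04]
h = kruskal_wallis_h(a, b, c)
print(h > 5.991)  # exceeds chi-square critical value (df=2, alpha=0.05)?
```

A significant H, as in this invented example where one hydrographer's errors sit well above the others', is what the paper reports for two of the three sites.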
Structural assessment of a Space Station solar dynamic heat receiver thermal energy storage canister
NASA Technical Reports Server (NTRS)
Tong, M. T.; Kerslake, T. W.; Thompson, R. L.
1988-01-01
This paper assesses the structural performance of a Space Station thermal energy storage (TES) canister subject to orbital solar flux variation and engine cold start-up operating conditions. The impact of working fluid temperature and salt-void distribution on the canister structure are assessed. Both analytical and experimental studies were conducted to determine the temperature distribution of the canister. Subsequent finite-element structural analyses of the canister were performed using both analytically and experimentally obtained temperatures. The Arrhenius creep law was incorporated into the procedure, using secondary creep data for the canister material, Haynes-188 alloy. The predicted cyclic creep strain accumulations at the hot spot were used to assess the structural performance of the canister. In addition, the structural performance of the canister based on the analytically-determined temperature was compared with that based on the experimentally-measured temperature data.
Structural assessment of a space station solar dynamic heat receiver thermal energy storage canister
NASA Technical Reports Server (NTRS)
Thompson, R. L.; Kerslake, T. W.; Tong, M. T.
1988-01-01
The structural performance of a space station thermal energy storage (TES) canister subject to orbital solar flux variation and engine cold start up operating conditions was assessed. The impact of working fluid temperature and salt-void distribution on the canister structure are assessed. Both analytical and experimental studies were conducted to determine the temperature distribution of the canister. Subsequent finite element structural analyses of the canister were performed using both analytically and experimentally obtained temperatures. The Arrhenius creep law was incorporated into the procedure, using secondary creep data for the canister material, Haynes 188 alloy. The predicted cyclic creep strain accumulations at the hot spot were used to assess the structural performance of the canister. In addition, the structural performance of the canister based on the analytically determined temperature was compared with that based on the experimentally measured temperature data.
Analyses of ACPL thermal/fluid conditioning system
NASA Technical Reports Server (NTRS)
Stephen, L. A.; Usher, L. H.
1976-01-01
Results of engineering analyses are reported. Initial computations were made using a modified control transfer function where the systems performance was characterized parametrically using an analytical model. The analytical model was revised to represent the latest expansion chamber fluid manifold design, and systems performance predictions were made. Parameters which were independently varied in these computations are listed. Systems predictions which were used to characterize performance are primarily transient computer plots comparing the deviation between average chamber temperature and the chamber temperature requirement. Additional computer plots were prepared. Results of parametric computations with the latest fluid manifold design are included.
Hao, Bibo; Sun, Wen; Yu, Yiqin; Li, Jing; Hu, Gang; Xie, Guotong
2016-01-01
Recent advances in cloud computing and machine learning have made it more convenient for researchers to gain insights from massive healthcare data, yet performing analyses on healthcare data in current practice still lacks efficiency. Moreover, collaborating with other researchers and sharing analysis results remain challenging. In this paper, we developed a practice that makes the analytics process collaborative and analysis results reproducible by exploiting and extending Jupyter Notebook. After applying this practice to our use cases, we were able to perform analyses and deliver results with less effort and in less time compared to our previous practice.
B-52 control configured vehicles: Flight test results
NASA Technical Reports Server (NTRS)
Arnold, J. I.; Murphy, F. B.
1976-01-01
Recently completed B-52 Control Configured Vehicles (CCV) flight testing is summarized, and results are compared to analytical predictions. Results are presented for five CCV system concepts: ride control, maneuver load control, flutter mode control, augmented stability, and fatigue reduction. Test results confirm analytical predictions and show that CCV system concepts achieve performance goals when operated individually or collectively.
Empirical Evaluation of Meta-Analytic Approaches for Nutrient and Health Outcome Dose-Response Data
ERIC Educational Resources Information Center
Yu, Winifred W.; Schmid, Christopher H.; Lichtenstein, Alice H.; Lau, Joseph; Trikalinos, Thomas A.
2013-01-01
The objective of this study is to empirically compare alternative meta-analytic methods for combining dose-response data from epidemiological studies. We identified meta-analyses of epidemiological studies that analyzed the association between a single nutrient and a dichotomous outcome. For each topic, we performed meta-analyses of odds ratios…
Stability and Curving Performance of Conventional and Advanced Rail Transit Vehicles
DOT National Transportation Integrated Search
1984-01-01
Analytical studies are presented which compare the curving performance and speed capability of conventional rail transit trucks with self steering (cross-braced) and forced steering (linkages between carbody and wheelsets) radial trucks. Truck curvin...
Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders
2017-09-01
Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFR(cystatin C)) and a creatinine-based (eGFR(creatinine)) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFR(cystatin C) and eGFR(creatinine) plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where a low eGFR(cystatin C) compared to eGFR(creatinine) has been associated with higher mortality in adults. The present study was undertaken to elucidate whether this concept can also be applied in children. Using iohexol and inulin clearance as the gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the results of the averages of 9 pairs of eGFR(cystatin C) and eGFR(creatinine) estimates. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFR(cystatin C) and eGFR(creatinine) may help identify pediatric patients with Shrunken Pore Syndrome.
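The two comparisons described above can be sketched minimally as follows; the GFR values are invented, and the Shrunken Pore Syndrome cutoff of 0.6 for the cystatin C/creatinine eGFR ratio is one commonly cited choice, not necessarily the criterion used in this study:

```python
def egfr_mean(egfr_cystatin_c, egfr_creatinine):
    """Composite GFR estimate: arithmetic mean of a cystatin C-based and a
    creatinine-based estimate (mL/min/1.73 m2)."""
    return (egfr_cystatin_c + egfr_creatinine) / 2.0

def shrunken_pore_flag(egfr_cystatin_c, egfr_creatinine, threshold=0.6):
    """Shrunken Pore Syndrome screen: flag when eGFR(cystatin C) falls
    below a fraction of eGFR(creatinine). The 0.6 default is illustrative;
    published cutoffs vary (roughly 0.6-0.7)."""
    return egfr_cystatin_c / egfr_creatinine < threshold

print(egfr_mean(45.0, 75.0))           # mean of the two invented estimates
print(shrunken_pore_flag(45.0, 75.0))  # ratio 0.6 is not below the cutoff
```

The point of the averaging step is robustness: errors specific to one analyte (muscle mass for creatinine, inflammation or steroid use for cystatin C) are partially cancelled, while a persistent gap between the two estimates becomes diagnostic information in its own right.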
Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M
2018-06-01
A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
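The masking-based evaluation described above can be sketched as follows. Range normalization is one common NRMSE convention and may differ from the paper's exact definition, and the function names are illustrative:

```python
import math
import random

def nrmse(true_vals, imputed_vals):
    """Normalized root-mean-square error between measured and imputed
    values, normalized by the range of the measured values. Assumes the
    measured values are not all identical."""
    n = len(true_vals)
    mse = sum((t, p) in zip for t, p in []) if False else \
        sum((t - p) ** 2 for t, p in zip(true_vals, imputed_vals)) / n
    rng = max(true_vals) - min(true_vals)
    return math.sqrt(mse) / rng

def mask_for_evaluation(series, frac=0.1, seed=0):
    """Randomly hide a fraction of observed results (set to None) so that
    imputations can be scored against the held-out measurements."""
    rnd = random.Random(seed)
    idx = rnd.sample(range(len(series)), max(1, int(frac * len(series))))
    masked = list(series)
    for i in idx:
        masked[i] = None
    return masked, idx
```

After imputing the masked positions with each candidate method (MICE, GP, or a combined approach), `nrmse` over the held-out points gives the comparison figure of merit reported in the abstract.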
Evaluation of analytical performance of a new high-sensitivity immunoassay for cardiac troponin I.
Masotti, Silvia; Prontera, Concetta; Musetti, Veronica; Storti, Simona; Ndreu, Rudina; Zucchelli, Gian Carlo; Passino, Claudio; Clerico, Aldo
2018-02-23
The study aim was to evaluate and compare the analytical performance of the new chemiluminescent immunoassay for cardiac troponin I (cTnI), called Access hs-TnI, on the DxI platform with those of the Access AccuTnI+3 method and the high-sensitivity (hs) cTnI method for the ARCHITECT platform. The limits of blank (LoB), detection (LoD) and quantitation (LoQ) at 10% and 20% CV were evaluated according to international standardized protocols. For the evaluation of analytical performance and comparison of cTnI results, both heparinized plasma samples, collected from healthy subjects and patients with cardiac diseases, and quality control samples distributed in external quality assessment programs were used. The LoB, LoD, and LoQ at 20% and 10% CV of the Access hs-cTnI method were 0.6, 1.3, 2.1 and 5.3 ng/L, respectively. The Access hs-cTnI method showed analytical performance significantly better than that of the Access AccuTnI+3 method and similar to that of the hs ARCHITECT cTnI method. Moreover, the cTnI concentrations measured with the Access hs-cTnI method showed close linear regressions against both the Access AccuTnI+3 and ARCHITECT hs-cTnI methods, although there were systematic differences between these methods. There was no difference between cTnI values measured by Access hs-cTnI in heparinized plasma and serum samples, whereas there was a significant difference between cTnI values measured in EDTA and heparin plasma samples. Access hs-cTnI has analytical sensitivity parameters significantly improved compared to the Access AccuTnI+3 method and similar to those of the high-sensitivity method on the ARCHITECT platform.
NASA Astrophysics Data System (ADS)
Usta, Metin; Tufan, Mustafa Çağatay; Aydın, Güral; Bozkurt, Ahmet
2018-07-01
In this study, we have performed calculations of stopping power, depth dose, and range verification for proton beams using the dielectric and Bethe-Bloch theories and the FLUKA, Geant4 and MCNPX Monte Carlo codes. As analytical approaches, the Drude model was applied within dielectric theory, and an effective charge approach with Roothaan-Hartree-Fock charge densities was used in Bethe theory. In the simulations, different setup parameters were selected to evaluate the performance of the three Monte Carlo codes. Lung and breast tissue were investigated, as these are associated with the most common types of cancer worldwide. The results were compared with each other and with the available data in the literature, and were additionally verified against prompt gamma range data. For both stopping power values and depth-dose distributions, the Monte Carlo values gave better results than the analytical ones; the results agreeing best with ICRU stopping-power data were those of the effective charge approach among the analytical methods and of the FLUKA code among the MC packages. In the depth-dose distributions of the examined tissues, the Bragg curves for the Monte Carlo codes almost overlap, while the analytical ones show significant deviations that become more pronounced with increasing energy. Verification against prompt gamma photons was attempted for 100-200 MeV protons, which are regarded as important for proton therapy. The analytical results are within 2%-5% and the Monte Carlo values within 0%-2% of those of the prompt gammas.
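A first-order Bethe mass stopping power for protons can be sketched as follows. Shell, Barkas, and density-effect corrections are omitted and the material defaults are standard values for liquid water, so this is only an indicative counterpart to the study's analytical calculations, not the effective-charge formulation used there:

```python
import math

ME_C2 = 0.511      # electron rest energy, MeV
MP_C2 = 938.272    # proton rest energy, MeV
K = 0.307075       # 4*pi*N_A*r_e^2*m_e*c^2, MeV cm^2/g (with Z/A in mol/g)

def bethe_stopping_power(T_mev, z_over_a=0.5551, i_ev=75.0):
    """First-order Bethe mass stopping power -dE/dx for protons,
    in MeV cm^2/g. Defaults approximate liquid water (Z/A ~ 0.5551,
    mean excitation energy I ~ 75 eV)."""
    gamma = 1.0 + T_mev / MP_C2
    beta2 = 1.0 - 1.0 / gamma**2
    # Argument of the logarithm: 2 m_e c^2 beta^2 gamma^2 / I (dimensionless)
    arg = 2.0e6 * ME_C2 * beta2 * gamma**2 / i_ev
    return K * z_over_a / beta2 * (math.log(arg) - beta2)

print(round(bethe_stopping_power(100.0), 2))  # ~7.29 MeV cm^2/g in water
```

Even this uncorrected form lands within a few percent of tabulated values at therapeutic energies; the deviations the abstract reports for analytical methods grow from the correction terms and from converting stopping power into depth-dose curves.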
Rapid B-rep model preprocessing for immersogeometric analysis using analytic surfaces
Wang, Chenglong; Xu, Fei; Hsu, Ming-Chen; Krishnamurthy, Adarsh
2017-01-01
Computational fluid dynamics (CFD) simulations of flow over complex objects have traditionally been performed using fluid-domain meshes that conform to the shape of the object. However, creating shape-conforming meshes for complicated geometries such as automobiles requires extensive geometry preprocessing. This process is usually tedious and requires modifying the geometry, including specialized operations such as defeaturing and filling of small gaps. Hsu et al. (2016) developed a novel immersogeometric fluid-flow method that does not require the generation of a boundary-fitted mesh for the fluid domain. However, their method used the NURBS parameterization of the surfaces for generating the surface quadrature points to enforce the boundary conditions, which required the B-rep model to be converted completely to NURBS before analysis could be performed. This conversion usually produces poorly parameterized NURBS surfaces and can result in poorly trimmed or missing surface features. In addition, converting simple geometries such as cylinders to NURBS imposes a performance penalty, since these geometries then have to be handled as rational splines. As a result, the geometry has to be inspected again after conversion to ensure analysis compatibility, which can increase the computational cost. In this work, we have extended the immersogeometric method to generate surface quadrature points directly from analytic surfaces. We have developed quadrature rules for all four kinds of analytic surfaces: planes, cones, spheres, and toroids. We have also developed methods for performing adaptive quadrature on trimmed analytic surfaces. Since analytic surfaces are frequently used for constructing solid models, this method is also faster at generating quadrature points on real-world geometries than using only NURBS surfaces. 
To assess the accuracy of the proposed method, we perform simulations of a benchmark problem of flow over a torpedo shape made of analytic surfaces and compare those to immersogeometric simulations of the same model with NURBS surfaces. We also compare the results of our immersogeometric method with those obtained using boundary-fitted CFD of a tessellated torpedo shape, and quantities of interest such as drag coefficient are in good agreement. Finally, we demonstrate the effectiveness of our immersogeometric method for high-fidelity industrial scale simulations by performing an aerodynamic analysis of a truck that has a large percentage of analytic surfaces. Using analytic surfaces over NURBS avoids unnecessary surface type conversion and significantly reduces model-preprocessing time, while providing the same accuracy for the aerodynamic quantities of interest. PMID:29051678
Harvey, John J; Chester, Stephanie; Burke, Stephen A; Ansbro, Marisela; Aden, Tricia; Gose, Remedios; Sciulli, Rebecca; Bai, Jing; DesJardin, Lucy; Benfer, Jeffrey L; Hall, Joshua; Smole, Sandra; Doan, Kimberly; Popowich, Michael D; St George, Kirsten; Quinlan, Tammy; Halse, Tanya A; Li, Zhen; Pérez-Osorio, Ailyn C; Glover, William A; Russell, Denny; Reisdorf, Erik; Whyte, Thomas; Whitaker, Brett; Hatcher, Cynthia; Srinivasan, Velusamy; Tatti, Kathleen; Tondella, Maria Lucia; Wang, Xin; Winchell, Jonas M; Mayer, Leonard W; Jernigan, Daniel; Mawle, Alison C
2016-02-01
In this study, a multicenter evaluation of the Life Technologies TaqMan(®) Array Card (TAC) with 21 custom viral and bacterial respiratory assays was performed on the Applied Biosystems ViiA™ 7 Real-Time PCR System. The goal of the study was to demonstrate the analytical performance of this platform when compared to identical individual pathogen specific laboratory developed tests (LDTs) designed at the Centers for Disease Control and Prevention (CDC), equivalent LDTs provided by state public health laboratories, or to three different commercial multi-respiratory panels. CDC and Association of Public Health Laboratories (APHL) LDTs had similar analytical sensitivities for viral pathogens, while several of the bacterial pathogen APHL LDTs demonstrated sensitivities one log higher than the corresponding CDC LDT. When compared to CDC LDTs, TAC assays were generally one to two logs less sensitive depending on the site performing the analysis. Finally, TAC assays were generally more sensitive than their counterparts in three different commercial multi-respiratory panels. TAC technology allows users to spot customized assays and design TAC layout, simplify assay setup, conserve specimen, dramatically reduce contamination potential, and as demonstrated in this study, analyze multiple samples in parallel with good reproducibility between instruments and operators. Copyright © 2015 Elsevier B.V. All rights reserved.
Mitchell, Elizabeth O; Stewart, Greg; Bajzik, Olivier; Ferret, Mathieu; Bentsen, Christopher; Shriver, M Kathleen
2013-12-01
A multisite study was conducted to evaluate the performance of the Bio-Rad 4th generation GS HIV Combo Ag/Ab EIA versus the Abbott 4th generation ARCHITECT HIV Ag/Ab Combo. The performance of two 3rd generation EIAs, Ortho Diagnostics Anti-HIV 1+2 EIA and Siemens HIV 1/O/2, was also evaluated. The study objective was to compare analytical HIV-1 p24 antigen detection, sensitivity in HIV-1 seroconversion panels, and specificity in blood donors and two HIV false reactive panels. Analytical sensitivity was evaluated with International HIV-1 p24 antigen standards, the AFSSAPS (pg/mL) and WHO 90/636 (IU/mL) standards; sensitivity in acute infection was compared on 55 seroconversion samples, and specificity was evaluated on 1000 negative blood donors and two false reactive panels. GS HIV Combo Ag/Ab demonstrated better analytical HIV antigen sensitivity than ARCHITECT HIV Ag/Ab Combo: 0.41 IU/mL versus 1.2 IU/mL (WHO) and 12.7 pg/mL versus 20.1 pg/mL (AFSSAPS); GS HIV Combo Ag/Ab EIA also demonstrated slightly better specificity than ARCHITECT HIV Ag/Ab Combo (100% versus 99.7%). The 4th generation HIV Combo tests detected seroconversion 7-11 days earlier than the 3rd generation HIV antibody-only EIAs. Both 4th generation immunoassays demonstrated excellent sensitivity, with a reduction of the serological window period (7-11 days earlier detection than the 3rd generation HIV tests). However, GS HIV Combo Ag/Ab demonstrated improved HIV antigen analytical sensitivity and slightly better specificity when compared to the ARCHITECT HIV Ag/Ab Combo assay, with higher positive predictive values (PPV) for low-prevalence populations. Copyright © 2013 Elsevier B.V. All rights reserved.
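The specificity gap reported above (100% versus 99.7%) drives the PPV advantage in low-prevalence screening. A minimal sketch of the underlying Bayes computation, using a hypothetical perfect sensitivity and a 0.1% prevalence purely for illustration (neither number is from the study):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    tp = sensitivity * prevalence          # P(test+, disease+)
    fp = (1 - specificity) * (1 - prevalence)  # P(test+, disease-)
    return tp / (tp + fp)

# At 0.1% prevalence with assumed sensitivity 1.0:
# specificity 1.000 -> PPV = 1.0
# specificity 0.997 -> PPV ~ 0.25 (three of four positives are false)
```

Even a 0.3-percentage-point specificity difference therefore changes the fraction of true positives among reactive results several-fold when prevalence is low.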
Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M
2015-01-01
Objective Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares the analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. Setting All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Primary and secondary outcome measures Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, compared with the laboratory reference standard, were the primary outcome measures, analysed by calculating sensitivity, specificity, positive and negative predictive values, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. Results The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes, specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied from good to very good, and for the erythrocyte test from fair to good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice. 
Conclusions The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice. PMID:25986635
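The agreement statistic used above, Cohen's κ, corrects observed agreement for the agreement expected by chance. A self-contained sketch of the computation on a hypothetical POCT-versus-laboratory nitrite table (the counts are invented for illustration, not taken from the study):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table.
    table[i][j] = count of samples rated category i by method A
    and category j by method B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    observed = sum(table[i][i] for i in range(k)) / n
    # Chance agreement from the marginal proportions of each method.
    expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical nitrite results: rows = POCT analyser, columns = laboratory.
table = [[90, 2],   # POCT negative: 90 lab-negative, 2 lab-positive
         [3, 25]]   # POCT positive: 3 lab-negative, 25 lab-positive
```

By common conventions, κ above 0.8 (as in this invented example) is read as very good agreement, which is the scale behind "good to very good" in the abstract.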
Ringo: Interactive Graph Analytics on Big-Memory Machines
Perez, Yonathan; Sosič, Rok; Banerjee, Arijit; Puttagunta, Rohan; Raison, Martin; Shah, Pararth; Leskovec, Jure
2016-01-01
We present Ringo, a system for analysis of large graphs. Graphs provide a way to represent and analyze systems of interacting objects (people, proteins, webpages) with edges between the objects denoting interactions (friendships, physical interactions, links). Mining graphs provides valuable insights about individual objects as well as the relationships among them. In building Ringo, we take advantage of the fact that machines with large memory and many cores are widely available and also relatively affordable. This allows us to build an easy-to-use interactive high-performance graph analytics system. Graphs also need to be built from input data, which often resides in the form of relational tables. Thus, Ringo provides rich functionality for manipulating raw input data tables into various kinds of graphs. Furthermore, Ringo also provides over 200 graph analytics functions that can then be applied to constructed graphs. We show that a single big-memory machine provides a very attractive platform for performing analytics on all but the largest graphs as it offers excellent performance and ease of use as compared to alternative approaches. With Ringo, we also demonstrate how to integrate graph analytics with an iterative process of trial-and-error data exploration and rapid experimentation, common in data mining workloads. PMID:27081215
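The table-to-graph manipulation Ringo provides can be illustrated in plain Python: project a bipartite relational table into a graph, then run a simple analytic (degree) on it. This is an illustrative sketch only, not Ringo's actual API; the table contents are invented.

```python
from collections import defaultdict

# Hypothetical relational table of (user, page) visit records.
rows = [("alice", "p1"), ("bob", "p1"), ("alice", "p2"), ("carol", "p2")]

# Project the bipartite table into a user-user graph: connect users
# that share a page (the kind of raw-table-to-graph step the abstract
# describes).
by_page = defaultdict(set)
for user, page in rows:
    by_page[page].add(user)

edges = set()
for users in by_page.values():
    us = sorted(users)
    for i in range(len(us)):
        for j in range(i + 1, len(us)):
            edges.add((us[i], us[j]))

# A minimal graph analytic: node degree in the projected graph.
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
```

On a big-memory machine, the same pattern scales in-core to tables with billions of rows, which is the regime Ringo targets.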
Gauging the Success of Your Web Site
ERIC Educational Resources Information Center
Goldsborough, Reid
2005-01-01
Web analytics is a way to measure and optimize Web site performance, says Jason Burby, director of Web analytics for ZAAZ Inc., a Web design and development firm in Seattle with a countrywide client base. He compares it to using Evite, which is a useful, free web service that makes it easy to send out party and other invitations and,…
Improved explosive collection and detection with rationally assembled surface sampling materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chouyyok, Wilaiwan; Bays, J. Timothy; Gerasimenko, Aleksandr A.
Sampling and detection of trace explosives is a key analytical process in modern transportation safety. In this work we have explored some of the fundamental analytical processes for collection and detection of trace-level explosives on surfaces with the most widely utilized system, thermal desorption IMS. The performance of the standard muslin swipe material was compared with chemically modified fiberglass cloth. The fiberglass surface was modified to include phenyl functional groups. When compared to standard muslin, the phenyl-functionalized fiberglass sampling material showed better analyte release from the sampling material as well as improved response and repeatability from multiple uses of the same swipe. The improved sample release of the functionalized fiberglass swipes resulted in a significant increase in sensitivity. Various physical and chemical properties were systematically explored to determine optimal performance. The results herein have relevance to improving the detection of other explosive compounds and potentially to a wide range of other chemical sampling and field detection challenges.
The Empower project - a new way of assessing and monitoring test comparability and stability.
De Grande, Linde A C; Goossens, Kenneth; Van Uytfanghe, Katleen; Stöckl, Dietmar; Thienpont, Linda M
2015-07-01
Manufacturers and laboratories might benefit from using a modern integrated tool for quality management/assurance. The tool should not be confounded by commutability issues and should focus on the intrinsic analytical quality and comparability of assays as performed in routine laboratories. In addition, it should enable monitoring of long-term stability of performance, with the possibility of quasi real-time remedial action. Therefore, we developed the "Empower" project. The project comprises four pillars: (i) master comparisons with panels of frozen single-donation samples, (ii) monitoring of patient percentiles and (iii) internal quality control data, and (iv) conceptual and statistical education about analytical quality. In the pillars described here (i and ii), state-of-the-art as well as biologically derived specifications are used. In the 2014 master comparisons survey, 125 laboratories forming 8 peer groups participated. It showed not only good intrinsic analytical quality of assays but also assay biases/non-comparability. Although laboratory performance was mostly satisfactory, sometimes huge between-laboratory differences were observed. In patient percentile monitoring, currently, 100 laboratories participate with 182 devices. Particularly, laboratories with a high daily throughput and low patient population variation show a stable moving median in time with good between-instrument concordance. Shifts/drifts due to lot changes are sometimes revealed. There is evidence that outpatient medians mirror the calibration set-points shown in the master comparisons. The Empower project gives manufacturers and laboratories a realistic view on assay quality/comparability as well as stability of performance and/or the reasons for increased variation. Therefore, it is a modern tool for quality management/assurance toward improved patient care.
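The "stable moving median" used for patient percentile monitoring can be sketched simply: compute the median of the last N patient results as each new result arrives, and watch the series for shifts after reagent lot changes. A minimal sketch; the window size is an arbitrary assumption, not taken from the Empower protocol.

```python
from statistics import median

def moving_median(values, window=50):
    """Moving median over consecutive patient results. A sustained step
    in this series can flag a calibration shift (e.g. after a lot change)
    without needing commutable control material."""
    if len(values) < window:
        return []
    return [median(values[i - window:i]) for i in range(window, len(values) + 1)]
```

Because the median is insensitive to the occasional pathological result, it tracks the assay's set-point rather than individual patients, which is why high-throughput laboratories with stable populations show the smoothest series.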
Ultramicroelectrode Array Based Sensors: A Promising Analytical Tool for Environmental Monitoring
Orozco, Jahir; Fernández-Sánchez, César; Jiménez-Jorquera, Cecilia
2010-01-01
The particular analytical performance of ultramicroelectrode arrays (UMEAs) has attracted a high interest by the research community and has led to the development of a variety of electroanalytical applications. UMEA-based approaches have demonstrated to be powerful, simple, rapid and cost-effective analytical tools for environmental analysis compared to available conventional electrodes and standardised analytical techniques. An overview of the fabrication processes of UMEAs, their characterization and applications carried out by the Spanish scientific community is presented. A brief explanation of theoretical aspects that highlight their electrochemical behavior is also given. Finally, the applications of this transducer platform in the environmental field are discussed. PMID:22315551
Martin, Jeffrey D.; Norman, Julia E.; Sandstrom, Mark W.; Rose, Claire E.
2017-09-06
U.S. Geological Survey monitoring programs extensively used two analytical methods, gas chromatography/mass spectrometry and liquid chromatography/mass spectrometry, to measure pesticides in filtered water samples during 1992–2012. In October 2012, the monitoring programs began using direct aqueous-injection liquid chromatography tandem mass spectrometry as a new analytical method for pesticides. The change in analytical methods, however, has the potential to inadvertently introduce bias in analysis of datasets that span the change. A field study was designed to document performance of the new method in a variety of stream-water matrices and to quantify any potential changes in measurement bias or variability that could be attributed to changes in analytical methods. The goals of the field study were to (1) summarize performance (bias and variability of pesticide recovery) of the new method in a variety of stream-water matrices; (2) compare performance of the new method in laboratory blank water (laboratory reagent spikes) to that in a variety of stream-water matrices; (3) compare performance (analytical recovery) of the new method to that of the old methods in a variety of stream-water matrices; (4) compare pesticide detections and concentrations measured by the new method to those of the old methods in a variety of stream-water matrices; (5) compare contamination measured by field blank water samples in old and new methods; (6) summarize the variability of pesticide detections and concentrations measured by the new method in field duplicate water samples; and (7) identify matrix characteristics of environmental water samples that adversely influence the performance of the new method. Stream-water samples and a variety of field quality-control samples were collected at 48 sites in the U.S. Geological Survey monitoring networks during June–September 2012. 
Stream sites were located across the United States and included sites in agricultural and urban land-use settings, as well as sites on major rivers. The results of the field study identified several challenges for the analysis and interpretation of data analyzed by both old and new methods, particularly when data span the change in methods and are combined for analysis of temporal trends in water quality. The main challenges identified are large (greater than 30 percent), statistically significant differences in analytical recovery, detection capability, and (or) measured concentrations for selected pesticides. These challenges are documented and discussed, but specific guidance or statistical methods to resolve these differences in methods are beyond the scope of the report. The results of the field study indicate that the implications of the change in analytical methods must be assessed individually for each pesticide and method. Understanding the possible causes of the systematic differences in concentrations between methods that remain after recovery adjustment might be necessary to determine how to account for the differences in data analysis. Because recoveries for each method are independently determined from separate reference standards and spiking solutions, the differences might be due to an error in one of the reference standards or solutions or some other basic aspect of standard procedure in the analytical process. Further investigation of the possible causes is needed, which will lead to specific decisions on how to compensate for these differences in concentrations in data analysis. In the event that further investigations do not provide insight into the causes of systematic differences in concentrations between methods, the authors recommend continuing to collect and analyze paired environmental water samples by both old and new methods. 
This effort should be targeted to seasons, sites, and expected concentrations to supplement those concentrations already assessed and to compare the ongoing analytical recovery of old and new methods to those observed in the summer and fall of 2012.
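The "recovery adjustment" referred to above is, at its core, a simple rescaling: divide a measured concentration by the method's mean spike recovery. A minimal sketch of that arithmetic (the numeric values in the test are hypothetical, not from the study):

```python
def percent_recovery(measured, spiked):
    """Analytical recovery of a spiked sample, in percent."""
    return 100.0 * measured / spiked

def recovery_adjust(concentration, recovery_pct):
    """Rescale an environmental concentration by the method's mean spike
    recovery, the adjustment discussed for reconciling old and new methods.
    Systematic differences that persist after this step point to problems
    upstream, e.g. in the reference standards themselves."""
    return concentration * 100.0 / recovery_pct
```

If two methods report 0.08 and 0.10 units for the same sample but their recoveries are 80% and 100%, the adjusted values agree; residual disagreement after adjustment is the open problem the report describes.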
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
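The quantitative comparison above relies on a scaled root-mean-square difference between the analytical and Monte Carlo scatter estimates. A minimal sketch of one plausible form of that metric; the normalization (by the reference peak) is an assumption, and the paper's exact scaling may differ.

```python
import numpy as np

def scaled_rmsd(estimate, reference):
    """Root-mean-square difference between an analytical scatter estimate
    and a Monte Carlo reference, scaled by the reference peak value so the
    result is a dimensionless fraction."""
    est = np.asarray(estimate, dtype=float)
    ref = np.asarray(reference, dtype=float)
    diff = est - ref
    return np.sqrt(np.mean(diff ** 2)) / np.max(np.abs(ref))
```

Per-pixel metrics like this complement visual comparison of profiles: they summarize agreement over the whole detector in one number that can be tracked across phantoms and energies.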
Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives
NASA Technical Reports Server (NTRS)
Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.
2016-01-01
A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
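The advantage of analytic derivatives over finite differencing can be shown with a generic one-liner (this is an illustration of the numerical issue, not PyCycle's or OpenMDAO's API): a forward difference carries O(h) truncation error plus round-off, exactly the noise that destabilizes gradient-based optimizers on stiff cycle models.

```python
import math

def forward_diff(f, x, h=1e-6):
    """First-order forward-difference derivative approximation."""
    return (f(x + h) - f(x)) / h

# d/dx exp(x) = exp(x) exactly; the finite difference is off by roughly
# exp(x) * h / 2 from truncation alone.
err = abs(forward_diff(math.exp, 1.0) - math.exp(1.0))
```

An analytic derivative has no such step-size tradeoff: shrinking h reduces truncation error but amplifies floating-point cancellation, so finite differences can never reach the accuracy that analytic (or algorithmically differentiated) derivatives provide for free.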
Analytical and Clinical Performance Evaluation of the Abbott Architect PIVKA Assay.
Ko, Dae-Hyun; Hyun, Jungwon; Kim, Hyun Soo; Park, Min-Jeong; Kim, Jae-Seok; Park, Ji-Young; Shin, Dong Hoon; Cho, Hyoun Chan
2018-01-01
Protein induced by vitamin K absence (PIVKA) is measured using various assays and is used to help diagnose hepatocellular carcinoma. The present study evaluated the analytical and clinical performances of the recently released Abbott Architect PIVKA assay. Precision, linearity, and correlation tests were performed in accordance with the Clinical Laboratory Standardization Institute guidelines. Sample type suitability was assessed using serum and plasma samples from the same patients, and the reference interval was established using sera from 204 healthy individuals. The assay had coefficients of variation of 3.2-3.5% and intra-laboratory variation of 3.6-5.5%. Linearity was confirmed across the entire measurable range. The Architect PIVKA assay was comparable to the Lumipulse PIVKA assay, and the plasma and serum samples provided similar results. The lower reference limit was 13.0 mAU/mL and the upper reference limit was 37.4 mAU/mL. The ability of the Architect PIVKA assay to detect hepatocellular carcinoma was comparable to that of the alpha-fetoprotein test and the Lumipulse PIVKA assay. The Architect PIVKA assay provides excellent analytical and clinical performance, is simple for clinical laboratories to adopt, and has improved sample type suitability that could broaden the assay's utility. © 2018 by the Association of Clinical Scientists, Inc.
Ahmad, Rafiq; Tripathy, Nirmalya; Park, Jin-Ho; Hahn, Yoon-Bong
2015-08-04
We report a novel straightforward approach for simultaneous and highly-selective detection of multi-analytes (i.e. glucose, cholesterol and urea) using an integrated field-effect transistor (i-FET) array biosensor without any interference in each sensor response. Compared to analytically-measured data, performance of the ZnO nanorod based i-FET array biosensor is found to be highly reliable for rapid detection of multi-analytes in mice blood, and serum and blood samples of diabetic dogs.
Becker, J S; Boulyga, S F
2001-07-01
This paper describes an analytical procedure for determining the stoichiometry of BaxSr1-xTiO3 perovskite layers using inductively coupled plasma mass spectrometry (ICP-MS). The analytical results of mass spectrometry measurements are compared to those of X-ray fluorescence analysis (XRF). The performance and the limits of solid-state mass spectrometry analytical methods for the surface analysis of thin BaxSr1-xTiO3 perovskite layers--sputtered neutral mass spectrometry (SNMS)--are investigated and discussed.
Comparative Kinetic Analysis of Closed-Ended and Open-Ended Porous Sensors
NASA Astrophysics Data System (ADS)
Zhao, Yiliang; Gaur, Girija; Mernaugh, Raymond L.; Laibinis, Paul E.; Weiss, Sharon M.
2016-09-01
Efficient mass transport through porous networks is essential for achieving rapid response times in sensing applications utilizing porous materials. In this work, we show that open-ended porous membranes can overcome diffusion challenges experienced by closed-ended porous materials in a microfluidic environment. A theoretical model including both transport and reaction kinetics is employed to study the influence of flow velocity, bulk analyte concentration, analyte diffusivity, and adsorption rate on the performance of open-ended and closed-ended porous sensors integrated with flow cells. The analysis shows that open-ended pores enable analyte flow through the pores and greatly reduce the response time and analyte consumption for detecting large molecules with slow diffusivities compared with closed-ended pores for which analytes largely flow over the pores. Experimental confirmation of the results was carried out with open- and closed-ended porous silicon (PSi) microcavities fabricated in flow-through and flow-over sensor configurations, respectively. The adsorption behavior of small analytes onto the inner surfaces of closed-ended and open-ended PSi membrane microcavities was similar. However, for large analytes, PSi membranes in a flow-through scheme showed significant improvement in response times due to more efficient convective transport of analytes. The experimental results and theoretical analysis provide quantitative estimates of the benefits offered by open-ended porous membranes for different analyte systems.
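In the reaction-limited regime described above (flow-through pores, where convection keeps the wall concentration near the bulk value), the sensor response time follows from first-order Langmuir binding kinetics. A minimal sketch under that assumption; the rate constants in the test are hypothetical illustration values, not from the study.

```python
import math

def response_time(k_on, c_bulk, k_off=0.0, frac=0.95):
    """Time to reach a fraction `frac` of equilibrium coverage under
    first-order Langmuir binding, d(theta)/dt = k_on*c*(1-theta) - k_off*theta,
    assuming the analyte concentration at the pore wall equals c_bulk
    (i.e. transport is not limiting, as in the flow-through configuration).
    Solution: theta(t) = theta_eq * (1 - exp(-k*t)) with k = k_on*c + k_off."""
    k = k_on * c_bulk + k_off
    return -math.log(1.0 - frac) / k
</n```

In the closed-ended (flow-over) case the same binding step is preceded by slow diffusion into the pores, which for large, slowly diffusing analytes dominates the observed response time -- the effect the membrane geometry removes.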
McMillin, Gwendolyn A; Marin, Stephanie J; Johnson-Davis, Kamisha L; Lawlor, Bryan G; Strathmann, Frederick G
2015-02-01
The major objective of this research was to propose a simplified approach for the evaluation of medication adherence in chronic pain management patients, using liquid chromatography time-of-flight (TOF) mass spectrometry, performed in parallel with select homogeneous enzyme immunoassays (HEIAs). We called it a "hybrid" approach to urine drug testing. The hybrid approach was defined based on anticipated positivity rates, availability of commercial reagents for HEIAs, and assay performance, particularly analytical sensitivity and specificity for drug(s) of interest. Subsequent to implementation of the hybrid approach, time to result was compared with that observed with other urine drug testing approaches. Opioids, benzodiazepines, zolpidem, amphetamine-like stimulants, and methylphenidate metabolite were detected by TOF mass spectrometry to maximize specificity and sensitivity of these 37 drug analytes. Barbiturates, cannabinoid metabolite, carisoprodol, cocaine metabolite, ethyl glucuronide, methadone, phencyclidine, propoxyphene, and tramadol were detected by HEIAs that performed adequately and/or for which positivity rates were very low. Time to result was significantly reduced compared with the traditional approach. The hybrid approach to urine drug testing provides a simplified and analytically specific testing process that minimizes the need for secondary confirmation. Copyright © by the American Society for Clinical Pathology.
Mischak, Harald; Vlahou, Antonia; Ioannidis, John P A
2013-04-01
Mass spectrometry platforms have attracted a lot of interest in the last 2 decades as profiling tools for native peptides and proteins with clinical potential. However, limitations associated with reproducibility and analytical robustness, especially pronounced with the initial SELDI systems, hindered the application of such platforms in biomarker qualification and clinical implementation. The scope of this article is to give a short overview on data available on performance and on analytical robustness of the different platforms for peptide profiling. Using the CE-MS platform as a paradigm, data on analytical performance are described including reproducibility (short-term and intermediate repeatability), stability, interference, quantification capabilities (limits of detection), and inter-laboratory variability. We discuss these issues by using as an example our experience with the development of a 273-peptide marker for chronic kidney disease. Finally, we discuss pros and cons and means for improvement and emphasize the need to test in terms of comparative clinical performance and impact, different platforms that pass reasonably well analytical validation tests. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
A novel derivatizing agent, 5-chloro-2,2,3,3,4,4,5,5-octafluoropentyl chloroformate (ClOFPCF), was synthesized and tested as a reagent for direct water derivatization of highly polar and hydrophilic analytes. Its analytical performance satisfactorily compared to a perfluorinated ...
Schønning, Kristian; Pedersen, Martin Schou; Johansen, Kim; Landt, Bodil; Nielsen, Lone Gilmor; Weis, Nina; Westh, Henrik
2017-10-01
Chronic hepatitis C virus (HCV) infection can be effectively treated with directly acting antiviral (DAA) therapy. Measurement of HCV RNA is used to evaluate patient compliance and virological response during and after treatment. To compare the analytical performance of the Aptima HCV Quant Dx Assay (Aptima) and the COBAS Ampliprep/COBAS TaqMan HCV Test v2.0 (CAPCTMv2) for the quantification of HCV RNA in plasma samples, and compare the clinical utility of the two tests in patients undergoing treatment with DAA therapy. Analytical performance was evaluated on two sets of plasma samples: 125 genotyped samples and 172 samples referred for quantification of HCV RNA. Furthermore, performance was evaluated using dilutions series of four samples containing HCV genotype 1a, 2b, 3a, and 4a, respectively. Clinical utility was evaluated on 118 plasma samples obtained from 13 patients undergoing treatment with DAAs. Deming regression of results from 187 plasma samples with HCV RNA >2 Log IU/mL indicated that the Aptima assay quantified higher than the CAPCTMv2 test for HCV RNA >4.9 Log IU/mL. The linearity of the Aptima assay was excellent across dilution series of four HCV genotypes (slope of the regression line: 1.00-1.02). The Aptima assay detected significantly more replicates below targeted 2 Log IU/mL than the CAPCTMv2 test, and yielded clearly interpretable results when used to analyze samples from patients treated with DAAs. The analytical performance of the Aptima assay makes it well suited for monitoring patients with chronic HCV infection undergoing antiviral treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
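Deming regression is used above because, unlike ordinary least squares, it allows measurement error in both assays being compared. A self-contained sketch of the standard closed-form estimator (the error-variance ratio delta = 1 is an assumption; the test data are synthetic, not from the study):

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression slope and intercept for paired measurements with
    errors in both variables; delta is the assumed ratio of error
    variances var(err_y)/var(err_x). Requires a nonzero covariance."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx
```

A fitted slope above 1 at high concentrations is how one assay is shown to "quantify higher" than the other, as reported for HCV RNA above 4.9 Log IU/mL.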
Tomassetti, Mauro; Merola, Giovanni; Martini, Elisabetta; Campanella, Luigi; Sanzò, Gabriella; Favero, Gabriele; Mazzei, Franco
2017-01-01
In this research, we developed a direct-flow surface plasmon resonance (SPR) immunosensor for ampicillin to perform direct, simple, and fast measurements of this important antibiotic. In order to better evaluate the performance, it was compared with a conventional amperometric immunosensor, working with a competitive format with the aim of finding out experimental real advantages and disadvantages of two respective methods. Results showed that certain analytical features of the new SPR immunodevice, such as the lower limit of detection (LOD) value and the width of the linear range, are poorer than those of a conventional amperometric immunosensor, which adversely affects the application to samples such as natural waters. On the other hand, the SPR immunosensor was more selective to ampicillin, and measurements were more easily and quickly attained compared to those performed with the conventional competitive immunosensor. PMID:28394296
Park, Seungman; Kang, Youjin; Kim, Dong Geun; Kim, Eui-Chong; Park, Sung Sup; Seong, Moon-Woo
2013-08-01
The detection of high-risk (HR) HPV in cervical cancer screening is important for early diagnosis of cervical cancer or pre-cancerous lesions. We evaluated the analytical and clinical performances of 3 HR HPV assays in Gynecology patients. A total of 991 specimens were included in this study: 787 specimens for use with a Hybrid Capture 2 (HC2) and 204 specimens for a HPV DNA microarray (DNA Chip). All specimens were tested using an Abbott RealTime High Risk HPV assay (Real-time HR), PGMY PCR, and sequence analysis. Clinical sensitivities for severe abnormal cytology (severe than high-grade squamous intraepithelial lesion) were 81.8% for Real-time HR, 77.3% for HC2, and 66.7% for DNA Chip, and clinical sensitivities for severe abnormal histology (cervical intraepithelial neoplasia grade 2+) were 91.7% for HC2, 87.5% for Real-time HR, and 73.3% for DNA Chip. As compared to results of the sequence analysis, HC2, Real-time HR, and DNA Chip showed concordance rates of 94.3% (115/122), 90.0% (117/130), and 61.5% (16/26), respectively. The HC2 assay and Real-time HR assay showed comparable results to each other in both clinical and analytical performances, while the DNA Chip assay showed poor clinical and analytical performances. The Real-time HR assay can be a good alternative option for HR HPV testing with advantages of allowing full automation and simultaneous genotyping of HR types 16 and 18. Copyright © 2013 Elsevier Inc. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-04-01
... authorizations, records of authorized positions, and terminations 6 years. (g) Comparative or analytical..., detail drawings, and records of engineering studies that are part of or performed by the company within...
Code of Federal Regulations, 2010 CFR
2010-04-01
... authorizations, records of authorized positions, and terminations 6 years. (g) Comparative or analytical..., detail drawings, and records of engineering studies that are part of or performed by the company within...
Detonation Performance Analyses for Recent Energetic Molecules
NASA Astrophysics Data System (ADS)
Stiel, Leonard; Samuels, Philip; Spangler, Kimberly; Iwaniuk, Daniel; Cornell, Rodger; Baker, Ernest
2017-06-01
Detonation performance analyses were conducted for a number of evolving and potential high explosive materials. The calculations were completed for theoretical maximum densities of the explosives using the Jaguar thermo-chemical equation of state computer programs for performance evaluations and JWL/JWLB equation of state parameterizations. A number of recently synthesized materials were investigated for performance characterization and comparison to existing explosives, including TNT, RDX, HMX, and CL-20. The analytic cylinder model was utilized to establish cylinder and Gurney velocities as functions of the radial expansion of the cylinder for each explosive. The densities and heats of formation utilized in the calculations are primarily experimental values from Picatinny Arsenal and other sources. Several of the new materials considered were predicted to have enhanced detonation characteristics compared to conventional explosives. To confirm the accuracy of the Jaguar and analytic cylinder model results, available experimental detonation and Gurney velocities for representative energetic molecules and their formulations were compared with the corresponding calculated values. Close agreement was obtained with most of the data.
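The Gurney velocities mentioned above relate an explosive's energy to the terminal velocity of the metal it drives. As a hedged illustration (this is the standard textbook cylindrical Gurney relation, not the authors' Jaguar or analytic cylinder code, and the numeric values below are illustrative rather than taken from the study):

```python
import math

def gurney_velocity_cylinder(sqrt_2e_km_s: float, m_over_c: float) -> float:
    """Terminal metal velocity (km/s) for a cylindrical charge.

    Standard Gurney relation: V = sqrt(2E) * (M/C + 1/2)**-0.5, where
    sqrt(2E) is the Gurney constant of the explosive and M/C is the
    metal-to-charge mass ratio. Inputs here are illustrative only.
    """
    return sqrt_2e_km_s / math.sqrt(m_over_c + 0.5)

# Example with a hypothetical Gurney constant of 2.70 km/s and M/C = 1.
v = gurney_velocity_cylinder(2.70, 1.0)
```

Heavier confinement (larger M/C) lowers the metal velocity, which is the qualitative trend cylinder-expansion tests probe.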
Effect of steady flight loads on JT9D-7 performance deterioration
NASA Technical Reports Server (NTRS)
Jay, A.; Todd, E. S.
1978-01-01
Short-term engine deterioration occurs within the first 250 flights of a new engine and in the first flights following engine repair, while long-term deterioration primarily involves hot-section distress and compression-system losses, which occur at a somewhat slower rate. The causes of short-term deterioration are associated with clearance changes that occur in the flight environment. Analytical techniques used to examine the effects of flight loads and engine operating conditions on performance deterioration are presented. The roles of gyroscopic, gravitational, and aerodynamic loads are discussed, along with the effect of variations in engine build clearances. These analytical results are compared with engine test data, along with the correlation between analytically predicted and measured clearances and rub patterns. Conclusions are drawn and important issues are discussed.
How Should Blood Glucose Meter System Analytical Performance Be Assessed?
Simmons, David A
2015-08-31
Blood glucose meter system analytical performance is assessed by comparing pairs of meter system and reference instrument blood glucose measurements measured over time and across a broad array of glucose values. Consequently, no single, complete, and ideal parameter can fully describe the difference between meter system and reference results. Instead, a number of assessment tools, both graphical (eg, regression plots, modified Bland-Altman plots, and error grid analysis) and tabular (eg, International Organization for Standardization guidelines, mean absolute difference, and mean absolute relative difference) have been developed to evaluate meter system performance. The strengths and weaknesses of these methods of presenting meter system performance data, including a new method known as Radar Plots, are described here. © 2015 Diabetes Technology Society.
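Two of the tabular metrics named above, mean absolute difference (MAD) and mean absolute relative difference (MARD), are simple to compute from paired meter/reference readings. A minimal sketch with invented values (not data from any cited study):

```python
def mad_mard(meter, reference):
    """Mean absolute difference (input units) and mean absolute
    relative difference (%) for paired meter vs reference readings."""
    n = len(meter)
    diffs = [abs(m - r) for m, r in zip(meter, reference)]
    rels = [abs(m - r) / r * 100 for m, r in zip(meter, reference)]
    return sum(diffs) / n, sum(rels) / n

# Hypothetical paired glucose readings in mg/dL.
meter_vals = [102, 95, 180, 250, 60]
ref_vals = [100, 98, 170, 260, 63]
mad, mard = mad_mard(meter_vals, ref_vals)
```

MARD weights errors relative to the reference value, which is why it is favored over MAD when performance must be summarized across low and high glucose ranges.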
Mirasoli, Mara; Guardigli, Massimo; Michelini, Elisa; Roda, Aldo
2014-01-01
Miniaturization of analytical procedures through microchips, lab-on-a-chip, or micro total analysis systems is one of the most recent trends in chemical and biological analysis. These systems are designed to perform all the steps in an analytical procedure, with the advantages of low sample and reagent consumption, fast analysis, reduced costs, and the possibility of extra-laboratory application. A range of detection technologies has been employed in miniaturized analytical systems, but most applications have relied on fluorescence and electrochemical detection. Chemical luminescence (which includes chemiluminescence, bioluminescence, and electrogenerated chemiluminescence) represents an alternative detection principle that offers comparable (or better) analytical performance and easier implementation in miniaturized analytical devices. Nevertheless, chemical luminescence-based devices represent only a small fraction of the microfluidic devices reported in the literature, and until now no review has focused on them. Here we review the most relevant applications (since 2009) of miniaturized analytical devices based on chemical luminescence detection. After a brief overview of the main chemical luminescence systems and of the recent technological advancements regarding their implementation in miniaturized analytical devices, analytical applications are reviewed according to the nature of the device (microfluidic chips, microchip electrophoresis, lateral flow- and paper-based devices) and the type of application (micro-flow injection assays, enzyme assays, immunoassays, gene probe hybridization assays, cell assays, whole-cell biosensors). Copyright © 2013 Elsevier B.V. All rights reserved.
Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels
NASA Astrophysics Data System (ADS)
Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.
2017-05-01
This work deals with the determination of the transverse shear moduli of a Nomex® honeycomb core for sandwich panels. The out-of-plane shear characteristics of such panels depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically: numerical simulations were performed using a unit-cell model, and three analytical approaches were applied. The analytical calculations showed that two of the approaches provided reasonable predictions of the transverse shear modulus compared with experimental results, whereas the approach based on classical lamination theory showed large deviations from the experimental data. The numerical simulations showed a trend similar to that of the analytical models.
NASA Astrophysics Data System (ADS)
Seo, Sung-Won; Kim, Young-Hyun; Lee, Jung-Ho; Choi, Jang-Young
2018-05-01
This paper presents analytical torque calculation and experimental verification of synchronous permanent magnet couplings (SPMCs) with Halbach arrays. A Halbach array is composed of various numbers of segments per pole; we calculate and compare the magnetic torques for 2, 3, and 4 segments. Firstly, based on the magnetic vector potential, and using a 2D polar coordinate system, we obtain analytical solutions for the magnetic field. Next, through a series of processes, we perform magnetic torque calculations using the derived solutions and a Maxwell stress tensor. Finally, the analytical results are verified by comparison with the results of 2D and 3D finite element analysis and the results of an experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lumb, Matthew P.; Naval Research Laboratory, Washington, DC 20375; Steiner, Myles A.
The analytical drift-diffusion formalism is able to accurately simulate a wide range of solar cell architectures and was recently extended to include those with back surface reflectors. However, as solar cells approach the limits of material quality, photon recycling effects become increasingly important in predicting the behavior of these cells. In particular, the minority carrier diffusion length is significantly affected by photon recycling, with consequences for solar cell performance. In this paper, we outline an approach to account for photon recycling in the analytical Hovel model and compare analytical model predictions to GaAs-based experimental devices operating close to the fundamental efficiency limit.
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using the results of tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Multidisciplinary design optimization using multiobjective formulation techniques
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Pagaldipti, Narayanan S.
1995-01-01
This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels: optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.
Ammar, T A; Abid, K Y; El-Bindary, A A; El-Sonbati, A Z
2015-12-01
Most drinking water industries are closely examining options to maintain a certain level of disinfectant residual throughout the entire distribution system. Chlorine dioxide is a promising disinfectant usually used as a secondary disinfectant, but the selection of the proper monitoring analytical technique to ensure disinfection and regulatory compliance has been debated within the industry. This research objectively compared the performance of commercially available analytical techniques used for chlorine dioxide measurement (namely, chronoamperometry, DPD (N,N-diethyl-p-phenylenediamine), Lissamine Green B (LGB WET), and amperometric titration) to determine the superior technique. The techniques were evaluated over a wide range of chlorine dioxide concentrations, and the superior technique was identified against pre-defined criteria. To discern the effectiveness of this technique, various factors that might influence its performance, such as sample temperature, high ionic strength, and other interferences, were examined. Among the four techniques, chronoamperometry demonstrated a significant level of accuracy and precision. Furthermore, the various influencing factors studied did not diminish the technique's performance, which was adequate in all matrices. This study is a step towards proper disinfection monitoring, and it can assist engineers with chlorine dioxide disinfection system planning and management.
Implicit motor learning promotes neural efficiency during laparoscopy.
Zhu, Frank F; Poolton, Jamie M; Wilson, Mark R; Hu, Yong; Maxwell, Jon P; Masters, Rich S W
2011-09-01
An understanding of differences in expert and novice neural behavior can inform surgical skills training. Outside the surgical domain, electroencephalographic (EEG) coherence analyses have shown that during motor performance, experts display less coactivation between the verbal-analytic and motor planning regions than their less skilled counterparts. Reduced involvement of verbal-analytic processes suggests greater neural efficiency. The authors tested the utility of an implicit motor learning intervention specifically devised to promote neural efficiency by reducing verbal-analytic involvement in laparoscopic performance. In this study, 18 novices practiced a movement pattern on a laparoscopic trainer with either conscious awareness of the movement pattern (explicit motor learning) or suppressed awareness of the movement pattern (implicit motor learning). In a retention test, movement accuracy was compared between the conditions, and coactivation (EEG coherence) was assessed between the motor planning (Fz) region and both the verbal-analytic (T3) and the visuospatial (T4) cortical regions (T3-Fz and T4-Fz, respectively). Movement accuracy in the conditions was not different in a retention test (P = 0.231). Findings showed that the EEG coherence scores for the T3-Fz regions were lower for the implicit learners than for the explicit learners (P = 0.027), but no differences were apparent for the T4-Fz regions (P = 0.882). Implicit motor learning reduced EEG coactivation between verbal-analytic and motor planning regions, suggesting that verbal-analytic processes were less involved in laparoscopic performance. The findings imply that training techniques that discourage nonessential coactivation during motor performance may provide surgeons with more neural resources with which to manage other aspects of surgery.
European Multicenter Study on Analytical Performance of Veris HIV-1 Assay.
Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Hofmann, Jörg; Izopet, Jacques; Kalus, Ulrich; Lombardi, Alessandra; Marcos, Maria Angeles; Mileto, Davide; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W
2017-07-01
The analytical performance of the Veris HIV-1 assay for use on the new, fully automated Beckman Coulter DxN Veris molecular diagnostics system was evaluated at 10 European virology laboratories. The precision, analytical sensitivity, performance with negative samples, linearity, and performance with HIV-1 groups/subtypes were evaluated. The precision for the 1-ml assay showed a standard deviation (SD) of 0.14 log10 copies/ml or less and a coefficient of variation (CV) of ≤6.1% for each level tested. The 0.175-ml assay showed an SD of 0.17 log10 copies/ml or less and a CV of ≤5.2% for each level tested. The analytical sensitivities determined by probit analysis were 19.3 copies/ml for the 1-ml assay and 126 copies/ml for the 0.175-ml assay. Testing of 1,357 negative samples yielded 99.2% not-detected results. Linearity using patient samples was shown from 1.54 to 6.93 log10 copies/ml. The assay performed well, detecting and showing linearity with all HIV-1 genotypes tested. The Veris HIV-1 assay demonstrated analytical performance comparable to that of currently marketed HIV-1 assays. (DxN Veris products are Conformité Européenne [CE]-marked in vitro diagnostic products. The DxN Veris product line has not been submitted to the U.S. FDA and is not available in the U.S. market. The DxN Veris molecular diagnostics system is also known as the Veris MDx molecular diagnostics system and the Veris MDx system.) Copyright © 2017 American Society for Microbiology.
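The probit-derived analytical sensitivity quoted above follows a standard recipe: regress the probit (inverse normal CDF) of the observed hit rate on log10 concentration and solve for the concentration giving 95% detection probability. A generic least-squares sketch with hypothetical dilution-series data (not the study's raw results):

```python
import math
from statistics import NormalDist

def probit_lod95(concs, hit_rates):
    """Fit probit(hit rate) = a + b*log10(conc) by least squares and
    return the concentration at 95% detection probability (LOD95)."""
    nd = NormalDist()
    xs = [math.log10(c) for c in concs]
    ys = [nd.inv_cdf(p) for p in hit_rates]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
        (x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    return 10 ** ((nd.inv_cdf(0.95) - a) / b)

# Hypothetical hit rates from a dilution series (copies/ml).
lod = probit_lod95([5, 10, 20, 40], [0.30, 0.60, 0.90, 0.99])
```

In practice a maximum-likelihood probit fit with confidence limits would be used rather than this simple regression, but the shape of the calculation is the same.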
Brooks, Myron H.; Schroder, LeRoy J.; Willoughby, Timothy C.
1987-01-01
Four laboratories involved in the routine analysis of wet-deposition samples participated in an interlaboratory comparison program managed by the U.S. Geological Survey. The four participants were: the Illinois State Water Survey central analytical laboratory in Champaign, Illinois; the U.S. Geological Survey national water-quality laboratories in Atlanta, Georgia, and Denver, Colorado; and the Inland Waters Directorate national water-quality laboratory in Burlington, Ontario, Canada. Analyses of interlaboratory samples performed by the four laboratories from October 1983 through December 1984 were compared. Participating laboratories analyzed three types of interlaboratory samples--natural wet deposition, simulated wet deposition, and deionized water--for pH and specific conductance, and for dissolved calcium, magnesium, sodium, potassium, chloride, sulfate, nitrate, ammonium, and orthophosphate. Natural wet-deposition samples were aliquots of actual wet-deposition samples. Analyses of these samples by the four laboratories were compared using analysis of variance. Test results indicated that pH, calcium, nitrate, and ammonium results were not directly comparable among the four laboratories. Statistically significant differences between laboratory results probably were meaningful only for analyses of dissolved calcium. Simulated wet-deposition samples with known analyte concentrations were used to test each laboratory for analyte bias. Laboratory analyses of calcium, magnesium, sodium, potassium, chloride, sulfate, and nitrate were not significantly different from the known concentrations of these analytes when tested using analysis of variance. Deionized-water samples were used to test each laboratory for reporting of false positive values. The Illinois State Water Survey laboratory reported the smallest percentage of false positive values for most analytes. Analyte precision was estimated for each laboratory from results of replicate measurements.
In general, the Illinois State Water Survey laboratory achieved the greatest precision, whereas the U.S. Geological Survey laboratories achieved the least precision.
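The analysis-of-variance comparisons described above reduce to a one-way F statistic across laboratories: between-laboratory variation relative to within-laboratory variation. A minimal sketch with invented concentration values (not USGS data):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic for a list of per-laboratory
    measurement lists: between-group over within-group mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical dissolved-calcium results (mg/L) from three laboratories.
f_stat = one_way_f([[0.21, 0.22, 0.20], [0.25, 0.26, 0.24], [0.22, 0.23, 0.21]])
```

A large F relative to the F distribution's critical value (here with 2 and 6 degrees of freedom) is what flags results as "not directly comparable" among laboratories.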
Amplitudes of doping striations: comparison of numerical calculations and analytical approaches
NASA Astrophysics Data System (ADS)
Jung, T.; Müller, G.
1997-02-01
Transient, axisymmetric numerical calculations of the heat and species transport including convection were performed for a simplified vertical gradient freeze (Bridgman) process with bottom seeding for GaAs. Periodical oscillations were superimposed onto the transient heater temperature profile. The amplitudes of the resulting oscillations of the growth rate and the dopant concentration (striations) in the growing crystals are compared with the predictions of analytical models.
High-performance heat pipes for heat recovery applications
NASA Technical Reports Server (NTRS)
Saaski, E. W.; Hartl, J. H.
1980-01-01
Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.
Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique
2018-03-01
Point-of-care blood gas test results may benefit therapeutic decision making through their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. The data obtained were compared against preset quality specifications. Interference of potential pre-analytical confounders on co-oximetry and electrolyte concentrations was studied. The analytical performance was acceptable for all parameters tested. The method comparison demonstrated good agreement with the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate, and tHb; i-STAT CG8+: pO2, Na+, iCa2+, and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue on co-oximetry results was found. In contrast, significant interference of benzalkonium and hemolysis on electrolyte measurements was found, for which the user is notified by an interferent-specific flag. Identification of sample errors from pre-analytical sources, such as interferences, and automatic corrective actions, along with the analytical performance, ease of use, and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Wu, Xiaobin; Chao, Yan; Wan, Zemin; Wang, Yunxiu; Ma, Yan; Ke, Peifeng; Wu, Xinzhong; Xu, Jianhua; Zhuang, Junhua; Huang, Xianzhang
2016-10-15
Haemoglobin A1c (HbA1c) is widely used in the management of diabetes; therefore, the reliability and comparability of different analytical methods for its detection have become very important. A comparative evaluation of analytical performance (precision, linearity, accuracy, method comparison, and interferences, including bilirubin, triglyceride, cholesterol, labile HbA1c (LA1c), vitamin C, aspirin, fetal haemoglobin (HbF), and haemoglobin E (HbE)) was performed on the Capillarys 2 Flex Piercing (Capillarys 2FP) (Sebia, France), Tosoh HLC-723 G8 (Tosoh G8) (Tosoh, Japan), Premier Hb9210 (Trinity Biotech, Ireland), and Roche Cobas c501 (Roche c501) (Roche Diagnostics, Germany). Good precision was shown at both low and high HbA1c levels on all four systems, with all individual CVs below 2% (IFCC units) or 1.5% (NGSP units). Linearity analysis for each analyzer achieved a good correlation coefficient (R2 > 0.99) over the entire range tested. The analytical bias of the four systems against the IFCC targets was less than ±6% (NGSP units), indicating good accuracy. The method comparison showed great correlation and agreement between methods. Very high levels of triglycerides and cholesterol (≥15.28 and ≥8.72 mmol/L, respectively) led to falsely low HbA1c concentrations on the Roche c501. Elevated HbF induced false HbA1c detection on the Capillarys 2FP (>10%), Tosoh G8 (>30%), Premier Hb9210 (>15%), and Roche c501 (>5%). On the Tosoh G8, HbE induced an extra peak on the chromatogram, and significantly lower results were reported. The four HbA1c methods commonly used with commercial analyzers showed good reliability and comparability, although some interferences may falsely alter the result.
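The precision and accuracy criteria used above (CV below 2% in IFCC units, bias within ±6% of target) are straightforward to compute from replicate measurements. A generic sketch with invented replicates (not data from the evaluated analyzers):

```python
def cv_percent(values):
    """Coefficient of variation (%): sample standard deviation
    of replicate measurements divided by their mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return (var ** 0.5) / mean * 100

def bias_percent(measured_mean, target):
    """Relative bias (%) of a method mean against an assigned target."""
    return (measured_mean - target) / target * 100

# Hypothetical HbA1c replicates (mmol/mol, IFCC units) and target value.
reps = [48.0, 48.5, 47.6, 48.2, 47.9]
cv = cv_percent(reps)
bias = bias_percent(sum(reps) / len(reps), 48.0)
```

An analyzer passes this kind of check when `cv` stays under the 2% precision limit and `bias` stays within the ±6% accuracy window at each tested level.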
Dousty, Faezeh; O'Brien, Rob
2015-06-15
As with positive ion atmospheric pressure photoionization (PI-APPI), the addition of dopants significantly improves the sensitivity of negative ion APPI (NI-APPI). However, research on dopant-assisted NI-APPI has been quite limited compared with studies on dopant-assisted PI-APPI. This work presents the potential of isoprene as a novel dopant for NI-APPI. Thirteen compounds possessing gas-phase ion energetic properties suitable for forming stable negative ions were selected. Dopants were continuously introduced into a tee junction prior to the ion source through a fused-silica capillary, while analytes were directly injected into the same tee. Both were then mixed with the continuous solvent flow from high-performance liquid chromatography (HPLC), nebulized, and introduced into the source. The nebulized stream was analyzed by APPI tandem quadrupole mass spectrometry in the negative ion mode. The results obtained using isoprene were compared with those obtained using toluene as a dopant and with dopant-free NI-APPI. Isoprene enhanced the ionization intensities of the studied compounds and was found to be comparable to, and in some cases more effective than, toluene. The mechanisms leading to the observed set of negative analyte ions are also discussed. Because in NI-APPI the thermal electrons produced during photoionization of a dopant are considered the main reagent ions, both isoprene and toluene promoted the ionization of analytes through the same mechanisms, as expected. Isoprene was shown to perform well as a novel dopant for NI-APPI: it has a high photoabsorption cross section in the VUV region, so its photoionization leads to highly effective production of thermal electrons, which further promotes the ionization of analytes. In addition, isoprene is environmentally benign and less toxic than currently used dopants. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Joh, Daniel Y.; Hucknall, Angus M.; Wei, Qingshan; Mason, Kelly A.; Lund, Margaret L.; Fontes, Cassio M.; Hill, Ryan T.; Blair, Rebecca; Zimmers, Zackary; Achar, Rohan K.; Tseng, Derek; Gordan, Raluca; Freemark, Michael; Ozcan, Aydogan; Chilkoti, Ashutosh
2017-08-01
The ELISA is the mainstay for sensitive and quantitative detection of protein analytes. Despite its utility, ELISA is time-consuming, resource-intensive, and infrastructure-dependent, limiting its availability in resource-limited regions. Here, we describe a self-contained immunoassay platform (the “D4 assay”) that converts the sandwich immunoassay into a point-of-care test (POCT). The D4 assay is fabricated by inkjet printing assay reagents as microarrays on nanoscale polymer brushes on glass chips, so that all reagents are “on-chip,” and these chips show durable storage stability without cold storage. The D4 assay can interrogate multiple analytes from a drop of blood, is compatible with a smartphone detector, and displays analytical figures of merit that are comparable to standard laboratory-based ELISA in whole blood. These attributes of the D4 POCT have the potential to democratize access to high-performance immunoassays in resource-limited settings without sacrificing their performance.
Panev, T
1991-01-01
The purpose of the present work was to comparatively evaluate different types of detector tubes (analytical, long-term, and passive) for the determination of NO2, and to compare the results with those obtained by the spectrophotometric method using the Zaltsman reagent. Studies were performed in the hall of a repair garage for diesel buses during one working shift. The results indicate that the analytical tubes for NO2 agree well with the spectrophotometric method. The average-shift concentrations of NO2 measured with the long-term and passive tubes were compared with the average values obtained with the analytical tubes and with the analytical method.
Comparison of Three Different Commercial Kits for the Human Papilloma Virus Genotyping.
Lim, Yong Kwan; Choi, Jee-Hye; Park, Serah; Kweon, Oh Joo; Park, Ae Ja
2016-11-01
High-risk type human papilloma virus (HPV) is the most important cause of cervical cancer. Recently, real-time polymerase chain reaction and reverse blot hybridization assay-based HPV DNA genotyping kits have been developed. We therefore compared the performance of three HPV genotyping kits based on different analytical principles and methods. Two hundred positive and 100 negative cervical swab specimens were used. DNA was extracted, and all samples were tested with the MolecuTech REBA HPV-ID, Anyplex II HPV28 Detection, and HPVDNAChip. Direct sequencing was performed as the reference method for confirming high-risk HPV genotypes 16, 18, 45, 52, and 58. Although high-level agreement was observed for negative samples, the three kits showed decreased interassay agreement in a screening setting for positive samples. Comparing the genotyping results, the three assays showed acceptable sensitivity and specificity for the detection of HPV 16 and 18, but variable sensitivities for the detection of HPV 45, 52, and 58. The three assays had dissimilar HPV screening capacity and exhibited a moderate level of concordance in HPV genotyping. These discrepant results were unavoidable owing to differences in type-specific analytical sensitivity and a lack of standardization; we therefore suggest that efforts toward standardization of HPV genotyping kits and adjustment of analytical sensitivity will be important for the best clinical performance. © 2016 Wiley Periodicals, Inc.
Light-Frame Wall Systems: Performance and Predictability.
David S. Gromala
1983-01-01
This paper compares the results of all wall tests with analytical predictions of performance. Conventional wood-stud walls of one configuration failed at bending loads that were 4 to 6 times the design load. The computer model overpredicted wall strength by an average of 10 percent and deflection by an average of 6 percent.
Performance in Basic Mathematics of Indigenous Students
ERIC Educational Resources Information Center
Sicat, Lolita V.; David, Ma. Elena D.
2016-01-01
This analytical study analyzed the performance in Basic Mathematics of the indigenous students, the Aeta students (Grade 6) of Sta. Juliana Elementary School, Capas, Tarlac, and the APC students of Malaybalay City, Bukidnon. Results were compared with regular students in rural, urban, private, and public schools to analyze indigenous students'…
Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M
2015-05-18
Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares the analytical performance, agreement, and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. The analytical performance and agreement of the POCT analysers with the laboratory reference standard regarding nitrite, leucocytes, and erythrocytes were the primary outcome measures, analysed by calculating sensitivity, specificity, positive and negative predictive values, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini), and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with the laboratory reference standard, overall agreement was good but differed per parameter and per analyser. For the nitrite test, the most important test for clinical practice, all but one analyser showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes, specificity was high, but sensitivity was considerably lower. Agreement varied between good and very good for leucocytes, and between fair and good for erythrocytes. First-time users indicated that the analysers were easy to use, and expected higher productivity and accuracy when using these analysers in daily practice. The overall performance and user-friendliness of all six commercially available POCT urine analysers were sufficient to justify routine use in suspected urinary tract infections in general practice.
NASA Astrophysics Data System (ADS)
Chang, Chia-Ming; Keefe, Andrew; Carter, William B.; Henry, Christopher P.; McKnight, Geoff P.
2014-04-01
Structural assemblies incorporating negative stiffness elements have been shown to provide both tunable damping properties and simultaneous high stiffness and damping over prescribed displacement regions. In this paper we explore the design space for negative stiffness based assemblies using analytical modeling combined with finite element analysis. A simplified spring model demonstrates the effects of element stiffness, geometry, and preloads on the damping and stiffness performance. Simplified analytical models were validated for realistic structural implementations through finite element analysis. A series of complementary experiments was conducted to compare with modeling and determine the effects of each element on the system response. The measured damping performance follows the theoretical predictions obtained by analytical modeling. We applied these concepts to a novel sandwich core structure that exhibited combined stiffness and damping properties 8 times greater than existing foam core technologies.
NASA Astrophysics Data System (ADS)
Takahashi, D.; Sawaki, S.; Mu, R.-L.
2016-06-01
A new method for improving the sound insulation performance of double-glazed windows is proposed. This technique uses viscoelastic materials as connectors between the two glass panels to ensure that the appropriate spacing is maintained. An analytical model that makes it possible to discuss the effects of spacing, contact area, and viscoelastic properties of the connectors on the performance in terms of sound insulation is developed. The validity of the model is verified by comparing its results with measured data. The numerical experiments using this analytical model showed the importance of the ability of the connectors to achieve the appropriate spacing and their viscoelastic properties, both of which are necessary for improving the sound insulation performance. In addition, it was shown that the most effective factor is damping: the stronger the damping, the more the insulation performance increases.
International Space Station Model Correlation Analysis
NASA Technical Reports Server (NTRS)
Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael
2018-01-01
This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration ISS Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads and compare the 2015 DTF with previous tests. During the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems; Internal Wireless Instrumentation System (IWIS), External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters including frequency, damping, and mode shape information. Correlation and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration. These mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results of the first fundamental mode will be discussed, nonlinear results will be shown, and accelerometer placement will be assessed.
CENTRIFUGAL VIBRATION TEST OF RC PILE FOUNDATION
NASA Astrophysics Data System (ADS)
Higuchi, Shunichi; Tsutsumiuchi, Takahiro; Otsuka, Rinna; Ito, Koji; Ejiri, Joji
To evaluate the seismic performance of underground or foundation structures, the nonlinear responses of such structures must be clarified through soil-structure interaction analysis. In this research, centrifuge shake table tests of a reinforced concrete pile foundation installed in liquefied ground were conducted. Finite element analyses of the tests were then performed to confirm the applicability of the analytical method by comparing the experimental and analytical results.
ERIC Educational Resources Information Center
Dennis, Quincita
2014-01-01
This study examined the effectiveness of using laptops to teach and deliver instruction to students. The meta-analytic approach was employed to compare the means of End-of Course Test scores from North Carolina one-to-one high schools during the traditional teaching period and the laptop teaching period in order to determine if there are…
Vandenabeele-Trambouze, O; Claeys-Bruno, M; Dobrijevic, M; Rodier, C; Borruat, G; Commeyras, A; Garrelly, L
2005-02-01
The need for criteria to compare different analytical methods for measuring extraterrestrial organic matter at ultra-trace levels in relatively small and unique samples (e.g., fragments of meteorites, micrometeorites, planetary samples) is discussed. We emphasize the need to standardize the description of future analyses, and take the first step toward a proposed international laboratory network for performance testing.
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
La-oxides as tracers for PuO{sub 2} to simulate contaminated aerosol behavior
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, L.C.; Newton, G.J.; Cronenberg, A.W.
1994-04-01
An analytical and experimental study was performed on the use of lanthanide oxides (La-oxides) as surrogates for plutonium oxides (PuO{sub 2}) during simulated buried waste retrieval. This study determined how well the La-oxides move compared to PuO{sub 2} in aerosolized soils during retrieval scenarios. As part of the analytical study, physical properties of La-oxides and PuO{sub 2}, such as molecular diameter, diffusivity, density, and molecular weight, are compared. In addition, an experimental study was performed in which Idaho National Engineering Laboratory (INEL) soil, INEL soil with lanthanides, and INEL soil with plutonium were aerosolized and collected in filters. Comparison of particle size distribution parameters from this experimental study shows similarity between INEL soil, INEL soil with lanthanides, and INEL soil with plutonium.
Integrated pest management of "Golden Delicious" apples.
Simončič, A; Stopar, M; Velikonja Bolta, Š; Bavčar, D; Leskovšek, R; Baša Česnik, H
2015-01-01
Monitoring of plant protection product (PPP) residues in "Golden Delicious" apples was performed in 2011-2013, where 216 active substances were analysed with three analytical methods. Integrated pest management (IPM) production and improved IPM production were compared. Results were in favour of improved IPM production. Some active compounds determined in IPM production (boscalid, pyraclostrobin, thiacloprid and thiametoxam) were not found in improved IPM production. Besides that, in 2011 and 2012, captan residues were lower in improved IPM production. Risk assessment was also performed. Chronic exposure of consumers was low in general, but showed no major differences for IPM and improved IPM production for active substances determined in both types of production. Analytical results were compared with the European Union report of 2010 where 1.3% of apple samples exceeded maximum residue levels (MRLs), while MRL exceedances were not observed in this survey.
Design, fabrication, and test of a steel spar wind turbine blade
NASA Technical Reports Server (NTRS)
Sullivan, T. L.; Sirocky, P. J., Jr.; Viterna, L. A.
1979-01-01
The design and fabrication of wind turbine blades based on 60 foot steel spars are discussed. Performance and blade load information is given and compared to analytical prediction. In addition, performance is compared to that of the original MOD-O aluminum blades. Costs for building the two blades are given, and a projection is made for the cost in mass production. Design improvements to reduce weight and improve fatigue life are suggested.
Limitations and Tolerances in Optical Devices
NASA Astrophysics Data System (ADS)
Jackman, Neil Allan
The performance of optical systems is limited by the imperfections of their components. Many of the devices in optical systems, including optical fiber amplifiers, multimode transmission lines and multilayered media such as mirrors, windows and filters, are modeled by coupled line equations. This investigation includes: (i) a study of the limitations imposed on a wavelength-multiplexed unidirectional ring by the non-uniformities of the gain spectra of Erbium-doped optical fiber amplifiers. We find numerical solutions for non-linear coupled power differential equations and use these solutions to compare the signal-to-noise ratios and signal levels at different nodes. (ii) An analytical study of the tolerances of imperfect multimode media which support forward traveling modes. The complex mode amplitudes are related by linear coupled differential equations. We use analytical methods to derive extended equations for the expected mode powers and give heuristic limits for their regions of validity. These results compare favorably to exact solutions found for a special case. (iii) A study of the tolerances of multilayered media in the presence of optical thickness imperfections. We use analytical methods, including Kronecker products, to calculate the reflection and transmission statistics of the media. Monte Carlo simulations compare well to our analytical method.
Shameli, Seyed Mostafa; Glawdel, Tomasz; Ren, Carolyn L
2015-03-01
Counter-flow gradient electrofocusing allows the simultaneous concentration and separation of analytes by generating a gradient in the total velocity of each analyte that is the sum of its electrophoretic velocity and the bulk counter-flow velocity. In the scanning format, the bulk counter-flow velocity is varying with time so that a number of analytes with large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Studies have shown that nonlinear (such as a bilinear) velocity gradients along the separation channel can improve both peak capacity and separation resolution simultaneously, which cannot be realized by using a single linear gradient. Developing an effective separation system based on the scanning counter-flow nonlinear gradient electrofocusing technique usually requires extensive experimental and numerical efforts, which can be reduced significantly with the help of analytical models for design optimization and guiding experimental studies. Therefore, this study focuses on developing an analytical model to evaluate the separation performance of scanning counter-flow bilinear gradient electrofocusing methods. In particular, this model allows a bilinear gradient and a scanning rate to be optimized for the desired separation performance. The results based on this model indicate that any bilinear gradient provides a higher separation resolution (up to 100%) compared to the linear case. This model is validated by numerical studies. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Piezoresistive Cantilever Performance-Part I: Analytical Model for Sensitivity.
Park, Sung-Jin; Doll, Joseph C; Pruitt, Beth L
2010-02-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors.
NASA Technical Reports Server (NTRS)
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike;
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semianalytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. 
We discuss the potential biases, limitations and uncertainty in the approach, as well as additional qualitative considerations for algorithm selection for climate-change studies. Our classification has the potential to be routinely implemented, such that the performance of emerging algorithms can be compared with existing algorithms as they become available. In the long-term, such an approach will further aid algorithm development for ocean-colour studies.
An analytical study of various telecommunication networks using Markov models
NASA Astrophysics Data System (ADS)
Ramakrishnan, M.; Jayamani, E.; Ezhumalai, P.
2015-04-01
The main aim of this paper is to examine issues relating to the performance of various telecommunication networks, applying queuing theory for better design and improved efficiency. First, an analytical study of queues quantifies the phenomenon of waiting lines using representative performance measures such as average queue length (the average number of customers in the queue), average waiting time in the queue, and average facility utilization (the proportion of time the service facility is in use). Second, using a Matlab simulator, the findings of the investigations are summarized, with a methodology to (a) compare the waiting time and average number of messages in the queue for M/M/1 and M/M/2 queues, and (b) compare the performance of M/M/1 and M/D/1 queues and study the effect of increasing the number of servers on the blocking probability in the M/M/k/k queue model.
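The performance measures this abstract refers to are standard textbook results: for a stable M/M/1 queue, L = ρ/(1−ρ) and W = 1/(μ−λ), and for an M/M/k/k loss system the blocking probability is given by the Erlang-B formula. A sketch (not the paper's own code):

```python
# Textbook M/M/1 measures and the Erlang-B blocking probability for an
# M/M/k/k loss system; a sketch of the quantities the abstract compares.
from math import factorial

def mm1(lam, mu):
    """Return (L, Lq, W, Wq) for a stable M/M/1 queue."""
    rho = lam / mu
    assert rho < 1, "queue is unstable"
    L = rho / (1 - rho)         # mean number in system
    Lq = rho ** 2 / (1 - rho)   # mean number waiting in queue
    W = 1 / (mu - lam)          # mean time in system
    Wq = rho / (mu - lam)       # mean waiting time in queue
    return L, Lq, W, Wq

def erlang_b(a, k):
    """Blocking probability of an M/M/k/k system, offered load a = lam/mu."""
    terms = [a ** n / factorial(n) for n in range(k + 1)]
    return terms[-1] / sum(terms)

L, Lq, W, Wq = mm1(lam=4, mu=5)   # utilisation rho = 0.8
print(L, W, erlang_b(0.8, 5))
```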
NASA Technical Reports Server (NTRS)
Haas, J. E.
1982-01-01
Three stator configurations were studied to determine the effect of stator outer endwall contouring on stator performance. One configuration was a cylindrical stator design. Of the two contoured stator configurations, one had an S-shaped outer endwall and the other had a conical-convergent outer endwall. The experimental investigation consisted of annular surveys of stator exit total pressure and flow angle for each stator configuration over a range of stator pressure ratios. Radial variations in stator loss and aftermixed flow conditions were obtained. When these data were compared with the analytical results to assess the validity of the analysis, good agreement was found.
Measuring Link-Resolver Success: Comparing 360 Link with a Local Implementation of WebBridge
ERIC Educational Resources Information Center
Herrera, Gail
2011-01-01
This study reviewed link resolver success comparing 360 Link and a local implementation of WebBridge. Two methods were used: (1) comparing article-level access and (2) examining technical issues for 384 randomly sampled OpenURLs. Google Analytics was used to collect user-generated OpenURLs. For both methods, 360 Link out-performed the local…
An Evaluation of the IntelliMetric[SM] Essay Scoring System
ERIC Educational Resources Information Center
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine
2006-01-01
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Renz, Nora; Cabric, Sabrina; Morgenstern, Christian; Schuetz, Michael A; Trampuz, Andrej
2018-04-01
Bone healing disturbance following fracture fixation represents a continuing challenge. We evaluated a novel fully automated polymerase chain reaction (PCR) assay using sonication fluid from retrieved orthopedic hardware to diagnose infection. In this prospective diagnostic cohort study, explanted orthopedic hardware materials from consecutive patients were investigated by sonication and the resulting sonication fluid was analyzed by culture (standard procedure) and multiplex PCR (investigational procedure). Hardware-associated infection was defined as visible purulence, presence of a sinus tract, implant on view, inflammation in peri-implant tissue or positive culture. McNemar's chi-squared test was used to compare the performance of diagnostic tests. For the clinical performance all pathogens were considered, whereas for analytical performance only microorganisms were considered for which primers are included in the PCR assay. Among 51 patients, hardware-associated infection was diagnosed in 38 cases (75%) and non-infectious causes in 13 patients (25%). The sensitivity for diagnosing infection was 66% for peri-implant tissue culture, 84% for sonication fluid culture, 71% (clinical performance) and 77% (analytical performance) for sonication fluid PCR, the specificity of all tests was >90%. The analytical sensitivity of PCR was higher for gram-negative bacilli (100%), coagulase-negative staphylococci (89%) and Staphylococcus aureus (75%) than for Cutibacterium (formerly Propionibacterium) acnes (57%), enterococci (50%) and Candida spp. (25%). The performance of sonication fluid PCR for diagnosis of orthopedic hardware-associated infection was comparable to culture tests. The additional advantage of PCR was short processing time (<5 h) and fully automated procedure. With further improvement of the performance, PCR has the potential to complement conventional cultures. Copyright © 2018 Elsevier Ltd. All rights reserved.
An analytical model for the detection of levitated nanoparticles in optomechanics
NASA Astrophysics Data System (ADS)
Rahman, A. T. M. Anishur; Frangeskou, A. C.; Barker, P. F.; Morley, G. W.
2018-02-01
Interferometric position detection of levitated particles is crucial for the centre-of-mass (CM) motion cooling and manipulation of levitated particles. In combination with balanced detection and feedback cooling, this system has provided picometer scale position sensitivity, zeptonewton force detection, and sub-millikelvin CM temperatures. In this article, we develop an analytical model of this detection system and compare its performance with experimental results allowing us to explain the presence of spurious frequencies in the spectra.
NASA Astrophysics Data System (ADS)
Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke
2016-05-01
Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.
Improving laboratory results turnaround time by reducing the pre-analytical phase.
Khalifa, Mohamed; Khalid, Parwaiz
2014-01-01
Laboratory turnaround time is considered one of the most important indicators of work efficiency in hospitals. Physicians always need timely results to take effective clinical decisions, especially in the emergency department, where these results can guide physicians on whether to admit patients to the hospital, discharge them home or perform further investigations. A retrospective data analysis study was performed to identify the effects of ER and lab staff training on new routines for sample collection and transportation on the pre-analytical phase of turnaround time. Renal profile tests requested by the ER and performed in 2013 were selected as a sample, and data on 7,519 tests were retrieved and analyzed to compare turnaround time intervals before and after implementing the new routines. Results showed significant time reductions in the "Request to Sample Collection" and "Collection to In-Lab Delivery" intervals, with less significant improvement in the analytical phase of the turnaround time.
Analysis and testing of high entrainment single nozzle jet pumps with variable mixing tubes
NASA Technical Reports Server (NTRS)
Hickman, K. E.; Hill, P. G.; Gilbert, G. B.
1972-01-01
An analytical model was developed to predict the performance characteristics of axisymmetric single-nozzle jet pumps with variable area mixing tubes. The primary flow may be subsonic or supersonic. The computer program uses integral techniques to calculate the velocity profiles and the wall static pressures that result from the mixing of the supersonic primary jet and the subsonic secondary flow. An experimental program was conducted to measure mixing tube wall static pressure variations, velocity profiles, and temperature profiles in a variable area mixing tube with a supersonic primary jet. Static pressure variations were measured at four different secondary flow rates. These test results were used to evaluate the analytical model. The analytical results compared well to the experimental data. Therefore, the analysis is believed to be ready for use to relate jet pump performance characteristics to mixing tube design.
Calculated and measured fields in superferric wiggler magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, E.B.; Solomon, L.
1995-02-01
Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.
Corrigan, Damion K; Piletsky, Sergey; McCrossen, Sean
2009-01-01
This article compares the technical performances of several different commercially available swabbing materials for the purpose of cleaning verification. A steel surface was soiled with solutions of acetaminophen, nicotinic acid, diclofenac, and benzamidine and wiped with each swabbing material. The compounds were extracted with water or ethanol (depending on polarity of analyte) and their concentration in extract was quantified spectrophotometrically. The study also investigated swab debris on the wiped surface. The swab performances were compared and the best swab material was identified.
Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko
2017-07-10
This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transformation of the kernel function involving this convolution integral is analytically performed using a Bessel function expansion. The analytical solution can drastically reduce the calculation time and the memory usage without any cost, compared with the numerical method using fast Fourier transform to Fourier transform the kernel function. In this study, we present the analytical derivation, the efficient calculation of Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
Evaluation of an in-practice wet-chemistry analyzer using canine and feline serum samples.
Irvine, Katherine L; Burt, Kay; Papasouliotis, Kostas
2016-01-01
A wet-chemistry biochemical analyzer was assessed for in-practice veterinary use. Its small size may mean a cost-effective method for low-throughput in-house biochemical analyses for first-opinion practice. The objectives of our study were to determine imprecision, total observed error, and acceptability of the analyzer for measurement of common canine and feline serum analytes, and to compare clinical sample results to those from a commercial reference analyzer. Imprecision was determined by within- and between-run repeatability for canine and feline pooled samples, and manufacturer-supplied quality control material (QCM). Total observed error (TEobs) was determined for pooled samples and QCM. Performance was assessed for canine and feline pooled samples by sigma metric determination. Agreement and errors between the in-practice and reference analyzers were determined for canine and feline clinical samples by Bland-Altman and Deming regression analyses. Within- and between-run precision was high for most analytes, and TEobs(%) was mostly lower than total allowable error. Performance based on sigma metrics was good (σ > 4) for many analytes and marginal (σ > 3) for most of the remainder. Correlation between the analyzers was very high for most canine analytes and high for most feline analytes. Between-analyzer bias was generally attributed to high constant error. The in-practice analyzer showed good overall performance, with only calcium and phosphate analyses identified as significantly problematic. Agreement for most analytes was insufficient for transposition of reference intervals, and we recommend that in-practice-specific reference intervals be established in the laboratory. © 2015 The Author(s).
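The sigma-metric grading applied in this abstract (σ > 4 "good", σ > 3 "marginal") follows the standard definition σ = (TEa − |bias|)/CV, all expressed in percent. A short sketch, with hypothetical allowable-error, bias and CV figures not taken from the study:

```python
# Sigma-metric rule of thumb: sigma > 4 "good", sigma > 3 "marginal".
# The allowable error, bias and CV values below are hypothetical.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (allowable total error - |bias|) / imprecision, all in %."""
    return (tea_pct - abs(bias_pct)) / cv_pct

sigma = sigma_metric(tea_pct=10.0, bias_pct=2.0, cv_pct=1.8)
grade = "good" if sigma > 4 else "marginal" if sigma > 3 else "poor"
print(f"sigma={sigma:.2f} ({grade})")
```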
NASA Astrophysics Data System (ADS)
Yoon, Heonjun; Kim, Miso; Park, Choon-Su; Youn, Byeng D.
2018-01-01
Piezoelectric vibration energy harvesting (PVEH) has received much attention as a potential solution that could ultimately realize self-powered wireless sensor networks. Since most ambient vibrations in nature are inherently random and nonstationary, the output performances of PVEH devices also randomly change with time. However, little attention has been paid to investigating the randomly time-varying electroelastic behaviors of PVEH systems both analytically and experimentally. The objective of this study is thus to make a step forward towards a deep understanding of the time-varying performances of PVEH devices under nonstationary random vibrations. Two typical cases of nonstationary random vibration signals are considered: (1) randomly-varying amplitude (amplitude modulation; AM) and (2) randomly-varying amplitude with randomly-varying instantaneous frequency (amplitude and frequency modulation; AM-FM). In both cases, this study pursues well-balanced correlations of analytical predictions and experimental observations to deduce the relationships between the time-varying output performances of the PVEH device and two primary input parameters, such as a central frequency and an external electrical resistance. We introduce three correlation metrics to quantitatively compare analytical prediction and experimental observation, including the normalized root mean square error, the correlation coefficient, and the weighted integrated factor. Analytical predictions are in an excellent agreement with experimental observations both mechanically and electrically. This study provides insightful guidelines for designing PVEH devices to reliably generate electric power under nonstationary random vibrations.
Smart phone: a popular device supports amylase activity assay in fisheries research.
Thongprajukaew, Karun; Choodum, Aree; Sa-E, Barunee; Hayee, Ummah
2014-11-15
Colourimetric determinations of amylase activity were developed based on a standard dinitrosalicylic acid (DNS) staining method, using maltose as the analyte. Intensities and absorbances of red, green and blue (RGB) were obtained with iPhone imaging and Adobe Photoshop image analysis. The correlation between green intensity and analyte concentration was highly significant, and the developed method showed excellent analytical accuracy. The common iPhone has sufficient imaging ability for accurate quantification of maltose concentrations. Detection limits, sensitivity and linearity were comparable to those of a spectrophotometric method, and inter-day precision was better. In quantifying amylase specific activity from a commercial source (P>0.02) and fish samples (P>0.05), differences compared with spectrophotometric measurements were not significant. We have demonstrated that iPhone imaging with image analysis in Adobe Photoshop has potential for field and laboratory studies of amylase. Copyright © 2014 Elsevier Ltd. All rights reserved.
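The green-channel calibration described above is, at its core, an ordinary least-squares line of channel intensity against standard concentration, which is then inverted to read unknowns. A sketch with hypothetical intensity values (the actual calibration data are not given in the abstract):

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical standards: maltose (mg/mL) vs. mean green intensity after DNS staining;
# intensity falls as the red-brown colour develops
conc  = [0.0, 0.5, 1.0, 1.5, 2.0]
green = [210.0, 180.0, 150.0, 120.0, 90.0]
slope, intercept = fit_line(conc, green)

def maltose_from_green(g):
    """Invert the calibration line to estimate concentration from green intensity."""
    return (g - intercept) / slope

print(maltose_from_green(135.0))  # 1.25 mg/mL on this hypothetical line
```

The same fit also yields the sensitivity (the slope magnitude) and, with replicate blanks, the detection limit that the abstract compares against the spectrophotometric method.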
Sterkers, Yvon; Varlet-Marie, Emmanuelle; Cassaing, Sophie; Brenier-Pinchart, Marie-Pierre; Brun, Sophie; Dalle, Frédéric; Delhaes, Laurence; Filisetti, Denis; Pelloux, Hervé; Yera, Hélène; Bastien, Patrick
2010-01-01
Although screening for maternal toxoplasmic seroconversion during pregnancy is based on immunodiagnostic assays, the diagnosis of clinically relevant toxoplasmosis relies greatly upon molecular methods. A problem is that this molecular diagnosis is subject to performance variation, mainly due to the large diversity of PCR methods and primers and the lack of standardization. The present multicentric prospective study, involving eight laboratories proficient in the molecular prenatal diagnosis of toxoplasmosis, was a first step toward the harmonization of this diagnosis among university hospitals in France. Its aim was to compare the analytical performances of different PCR protocols used for Toxoplasma detection. Each center extracted the same concentrated Toxoplasma gondii suspension and tested serial dilutions of the DNA using its own assays. Differences in analytical sensitivities were observed between assays, particularly at low parasite concentrations (≤2 T. gondii genomes per reaction tube), with "performance scores" differing by a factor of 20 among laboratories. Our data stress the fact that differences do exist in the performances of molecular assays in spite of expertise in the matter; we propose that laboratories work toward a common detection threshold to ensure the best sensitivity of this diagnosis. Moreover, on the one hand, intralaboratory comparisons confirmed previous studies showing that rep529 is a more adequate DNA target for this diagnosis than the widely used B1 gene. On the other hand, interlaboratory comparisons showed differences that appear independent of the target, primers, or technology and hence rely essentially on proficiency and care in the optimization of PCR conditions. PMID:20610670
Lack of grading agreement among international hemostasis external quality assessment programs
Olson, John D.; Jennings, Ian; Meijer, Piet; Bon, Chantal; Bonar, Roslyn; Favaloro, Emmanuel J.; Higgins, Russell A.; Keeney, Michael; Mammen, Joy; Marlar, Richard A.; Meley, Roland; Nair, Sukesh C.; Nichols, William L.; Raby, Anne; Reverter, Joan C.; Srivastava, Alok; Walker, Isobel
2018-01-01
Laboratory quality programs rely on internal quality control and external quality assessment (EQA). EQA programs provide unknown specimens for the laboratory to test. The laboratory's result is compared with those of other (peer) laboratories performing the same test. EQA programs assign target values using a variety of statistical tools, and a performance assessment of 'pass' or 'fail' is made. EQA provider members of the international organization External Quality Assurance in Thrombosis and Hemostasis took part in a study to compare the outcome of performance analysis using the same data set of laboratory results. Eleven EQA organizations using eight different analytical approaches participated. Data for a normal and a prolonged activated partial thromboplastin time (aPTT) and a normal and a reduced factor VIII (FVIII) from 218 laboratories were sent to the EQA providers, who analyzed the data set using their method of evaluation for aPTT and FVIII, determining the performance for each laboratory record in the data set. Providers also summarized their statistical approach to assignment of target values and laboratory performance. Each laboratory record in the data set was graded pass/fail by all EQA providers for each of the four analytes. There was a lack of agreement of pass/fail grading among EQA programs. Discordance in the grading was 17.9 and 11% of normal and prolonged aPTT results, respectively, and 20.2 and 17.4% of normal and reduced FVIII results, respectively. All EQA programs in this study employed statistical methods compliant with the International Organization for Standardization (ISO) standard ISO 13528, yet the evaluation of laboratory results for all four analytes showed remarkable grading discordance. PMID:29232255
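A common (but not universal) way an EQA provider grades a result is a z-score against the peer-group mean and SD; the abstract's point is precisely that providers differ in target assignment and acceptance limits, so the 2-SD cut-off below is just one illustrative convention, with hypothetical peer data:

```python
import statistics

def grade(result, peer_results, z_limit=2.0):
    """Pass/fail a laboratory result against its peer group.
    Target = peer mean; acceptable window = z_limit peer SDs
    (one convention among the many this study compares)."""
    target = statistics.mean(peer_results)
    sd = statistics.stdev(peer_results)
    z = (result - target) / sd
    return ("pass" if abs(z) <= z_limit else "fail"), round(z, 2)

peers = [30.1, 31.0, 29.5, 30.8, 30.2, 29.9]  # hypothetical aPTT results (s)
print(grade(30.5, peers))  # near the peer mean -> pass
print(grade(36.0, peers))  # far above the peer mean -> fail
```

Swapping the target (consensus mean vs. robust mean vs. reference value) or the limit (fixed z, percentage deviation, total allowable error) changes which records pass, which is exactly the source of the grading discordance reported above.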
Yu, Honglian; Merib, Josias; Anderson, Jared L
2016-03-18
Neat crosslinked polymeric ionic liquid (PIL) sorbent coatings for solid-phase microextraction (SPME) compatible with high-performance liquid chromatography (HPLC) are reported for the first time. Six structurally different PILs were crosslinked to nitinol supports and applied for the determination of select pharmaceutical drugs, phenolics, and insecticides. Sampling conditions including sample solution pH, extraction time, desorption solvent, desorption time, and desorption solvent volume were optimized using design of experiment (DOE). The developed PIL sorbent coatings were stable when performing extractions under acidic pH and remained intact in various organic desorption solvents (i.e., methanol, acetonitrile, acetone). The PIL-based sorbent coating polymerized from the IL monomer 1-vinyl-3-(10-hydroxydecyl) imidazolium chloride [VC10OHIM][Cl] and IL crosslinker 1,12-di(3-vinylbenzylimidazolium) dodecane dichloride [(VBIM)2C12] 2[Cl] exhibited superior extraction performance compared to the other studied PILs. The extraction efficiency of pharmaceutical drugs and phenolics increased when the film thickness of the PIL-based sorbent coating was increased while many insecticides were largely unaffected. Satisfactory analytical performance was obtained with limits of detection (LODs) ranging from 0.2 to 2 μg L(-1) for the target analytes. The accuracy of the analytical method was examined by studying the relative recovery of analytes in real water samples, including tap water and lake water, with recoveries varying from 50.2% to 115.9% and from 48.8% to 116.6%, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shin, Kyung-Hun; Park, Hyung-Il; Kim, Kwan-Ho; Jang, Seok-Myeong; Choi, Jang-Young
2017-05-01
The shape of the magnet is essential in a slotless permanent magnet linear synchronous machine (PMLSM) because it directly determines key machine performance characteristics. This paper presents a reduction in the thrust ripple of a PMLSM through the use of arc-shaped magnets, based on electromagnetic field theory. The magnetic field solutions were obtained by considering the end effect using a magnetic vector potential and a two-dimensional Cartesian coordinate system. The analytical solution of each subdomain (PM, air gap, coil, and end region) is derived, and the field solution is obtained by applying the boundary and interface conditions between the subdomains. In particular, an analytical method was derived for the instantaneous thrust and thrust ripple reduction of a PMLSM with arc-shaped magnets. To validate the analytical results, they were compared with back electromotive force results from a finite element analysis and from experiments on the manufactured prototype. The optimal point for thrust ripple minimization is suggested.
David, Frank; Tienpont, Bart; Devos, Christophe; Lerch, Oliver; Sandra, Pat
2013-10-25
Laboratories focusing on residue analysis in food are continuously seeking to increase sample throughput by minimizing sample preparation. Generic sample extraction methods such as QuEChERS lack selectivity, and consequently extracts are not free from non-volatile material that contaminates the analytical system. Co-extracted matrix constituents interfere with target analytes, even if highly sensitive and selective GC-MS/MS is used. A number of GC approaches that can be used to increase laboratory productivity are described. These techniques include automated inlet liner exchange and column backflushing for preserving the performance of the analytical system, and heart-cutting two-dimensional GC for increasing sensitivity and selectivity. The application of these tools is illustrated by the analysis of pesticides in vegetables and fruits, PCBs in milk powder, and coplanar PCBs in fish. It is demonstrated that a considerable increase in productivity can be achieved by decreasing instrument down-time, while analytical performance is equal to or better than that of conventional trace contaminant analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact", approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic", approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve obtained by double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, thus providing valuable insight into the factors affecting the capacity factor. Moreover, several other wind turbine performance metrics may be derived from the analytical approach. The third, "approximate", approach, valid only for Rayleigh winds, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
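The approaches above all reduce to integrating the turbine power curve against a wind-speed probability density. A numerical Python sketch for the Rayleigh special case, with a hypothetical linear power curve between cut-in and rated speed (real curves are closer to cubic in that region, and manufacturers publish them in tabulated form):

```python
import math

def rayleigh_pdf(v, v_mean):
    """Rayleigh wind-speed density parameterized by the mean speed v_mean."""
    return (math.pi * v / (2 * v_mean ** 2)) * math.exp(
        -math.pi * v ** 2 / (4 * v_mean ** 2))

def power_fraction(v, v_in=3.0, v_rated=12.0, v_out=25.0):
    """P(v)/P_rated for a hypothetical turbine: zero below cut-in, linear ramp
    to rated speed, constant to cut-out, zero above."""
    if v < v_in or v > v_out:
        return 0.0
    if v < v_rated:
        return (v - v_in) / (v_rated - v_in)
    return 1.0

def capacity_factor(v_mean, dv=0.01):
    """CF = integral of (P(v)/P_rated) * f(v) dv, by the trapezoidal rule."""
    vs = [i * dv for i in range(int(30 / dv) + 1)]
    ys = [power_fraction(v) * rayleigh_pdf(v, v_mean) for v in vs]
    return sum((a + b) / 2 * dv for a, b in zip(ys, ys[1:]))

print(round(capacity_factor(7.0), 3))  # CF for a 7 m/s Rayleigh wind regime
```

With a closed-form power curve the same integral can be evaluated analytically, which is the essence of the second approach reviewed in the paper.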
Lupu, Stelian; Lete, Cecilia; Balaure, Paul Cătălin; Caval, Dan Ion; Mihailciuc, Constantin; Lakard, Boris; Hihn, Jean-Yves; del Campo, Francisco Javier
2013-01-01
Bio-composite coatings consisting of poly(3,4-ethylenedioxythiophene) (PEDOT) and tyrosinase (Ty) were successfully electrodeposited on conventional-size gold (Au) disk electrodes and microelectrode arrays using sinusoidal voltages. Electrochemical polymerization of the corresponding monomer was carried out in the presence of various Ty amounts in aqueous buffered solutions. The bio-composite coatings prepared using sinusoidal voltages and potentiostatic electrodeposition methods were compared in terms of morphology, electrochemical properties, and biocatalytic activity towards various analytes. The amperometric biosensors were tested in dopamine (DA) and catechol (CT) electroanalysis in aqueous buffered solutions. The analytical performance of the developed biosensors was investigated in terms of linear response range, detection limit, sensitivity, and repeatability. A semi-quantitative multi-analyte procedure for simultaneous determination of DA and CT was developed. The amperometric biosensor prepared using sinusoidal voltages showed much better analytical performance. The Au disk biosensor prepared with an alternating voltage amplitude of 50 mV displayed a linear response for DA concentrations ranging from 10 to 300 μM, with a detection limit of 4.18 μM. PMID:23698270
Nanomaterials-based biosensors for detection of microorganisms and microbial toxins.
Sutarlie, Laura; Ow, Sian Yang; Su, Xiaodi
2017-04-01
Detection of microorganisms and microbial toxins is important for health and safety. Due to their unique physical and chemical properties, nanomaterials have been extensively used to develop biosensors for rapid detection of microorganisms, with microbial cells and toxins as target analytes. In this paper, the design principles of nanomaterials-based biosensors for four selected analyte categories (bacterial cells, toxins, mycotoxins, and protozoa cells), closely associated with the target analytes' properties, are reviewed. Five signal transducing methods that are less equipment intensive (colorimetric, fluorimetric, surface enhanced Raman scattering, electrochemical, and magnetic relaxometry methods) are described and compared for their sensory performance (in terms of limit of detection, dynamic range, and response time) across all analyte categories. Finally, the suitability of these five sensing principles for on-site or field applications is discussed. With a comprehensive coverage of nanomaterials, design principles, sensing principles, and an assessment of sensory performance and suitability for on-site application, this review offers valuable insight and perspective for designing suitable nanomaterials-based microorganism biosensors for a given application. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Trujillo-Rodríguez, María J; Yu, Honglian; Cole, William T S; Ho, Tien D; Pino, Verónica; Anderson, Jared L; Afonso, Ana M
2014-04-01
The extraction performance of four polymeric ionic liquid (PIL)-based solid-phase microextraction (SPME) coatings has been studied and compared to that of commercial SPME coatings for the extraction of 16 volatile compounds in cheeses. The analytes include 2 free fatty acids, 2 aldehydes, 2 ketones and 10 phenols, and were determined by headspace (HS)-SPME coupled to gas chromatography (GC) with flame-ionization detection (FID). The PIL-based coatings produced by UV co-polymerization were more efficient than PIL-based coatings produced by thermal AIBN polymerization. Partition coefficients of the analytes between the sample and the coating (Kfs) were estimated for all PIL-based coatings and for the best-performing commercial fiber tested, carboxen-polydimethylsiloxane (CAR-PDMS). For the PIL-based fibers, the highest Kfs value (1.96 ± 0.03) was obtained for eugenol. The normalized calibration slope, which takes into account the SPME coating thickness, was also used as a simpler approximate tool to compare the nature of the coatings within the determinations, with results entirely comparable to those obtained with the estimated Kfs values. The PIL-based material obtained by UV co-polymerization containing the 1-vinyl-3-hexylimidazolium chloride IL monomer and 1,12-di(3-vinylimidazolium)dodecane dibromide IL crosslinker exhibited the best performance in the extraction of the selected analytes from cheeses. Despite a coating thickness of only 7 µm, this copolymeric sorbent coating was capable of quantitating analytes by HS-SPME over a 30 to 2000 µg L(-1) concentration range, with correlation coefficient (R) values higher than 0.9938, inter-day precision values (as relative standard deviation in %) varying from 6.1 to 20%, and detection limits down to 1.6 µg L(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
Shebanova, A S; Bogdanov, A G; Ismagulova, T T; Feofanov, A V; Semenyuk, P I; Muronets, V I; Erokhina, M V; Onishchenko, G E; Kirpichnikov, M P; Shaitan, K V
2014-01-01
This work presents the results of a study on the applicability of modern analytical transmission electron microscopy methods for the detection, identification, and visualization of the localization of titanium and cerium oxide nanoparticles in A549 cells, a human lung adenocarcinoma cell line. A comparative analysis was performed of images of the nanoparticles in cells obtained in bright-field transmission electron microscopy, dark-field scanning transmission electron microscopy, and high-angle annular dark-field scanning transmission electron microscopy. For identification of nanoparticles in the cells, two analytical techniques, energy-dispersive X-ray spectroscopy and electron energy loss spectroscopy, were compared in spectrum-acquisition and element-mapping modes. It was shown that electron tomography can confirm that nanoparticles are localized within the sample rather than in surface contamination. The possibilities and application areas of the different analytical transmission electron microscopy techniques for detection, visualization, and identification of nanoparticles in biological samples are discussed.
Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek
2016-05-01
This study presents an application of the Hasse diagram technique (HDT) as the assessment tool to select the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, which were described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed-the first one with metrological variables and the second one with "green" variables as input data. Both HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when benzo[a]pyrene in sediment samples is determined. The HDT can be used as a good decision support tool to choose the proper analytical procedure concerning green analytical chemistry principles and analytical performance merits.
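The core of the Hasse diagram technique is a dominance (partial-order) comparison: procedure A sits above procedure B only if A is at least as good on every variable and strictly better on at least one, while procedures that beat each other on different variables remain incomparable. A minimal Python sketch with hypothetical scores, rescaled so that higher is better on every criterion (the procedure names are illustrative, not the paper's):

```python
def dominates(a, b):
    """True if a >= b on every criterion and a > b on at least one
    (higher = better on all criteria)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def maximal(procedures):
    """Procedures dominated by no other: the top layer of the Hasse diagram."""
    return [name for name, s in procedures.items()
            if not any(dominates(t, s)
                       for other, t in procedures.items() if other != name)]

# Hypothetical greenness scores (low solvent use, low waste, low energy), higher = better
procs = {
    "GC-MS/SPE":  (3, 2, 4),
    "GC-MS/SPME": (4, 5, 4),
    "HPLC-FLD":   (2, 5, 5),
}
print(maximal(procs))  # SPE is dominated by SPME; SPME and HPLC-FLD are incomparable
```

Running the same ordering once on the metrological variables and once on the "green" variables, as the study does, can promote different procedures to the top layer, which is exactly the disagreement reported above.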
Poljak, Mario; Ostrbenk, Anja; Seme, Katja; Ucakar, Veronika; Hillemanns, Peter; Bokal, Eda Vrtacnik; Jancar, Nina; Klavs, Irena
2011-05-01
The clinical performance of the Abbott RealTime High Risk HPV (human papillomavirus) test (RealTime) and that of the Hybrid Capture 2 HPV DNA test (hc2) were prospectively compared in the population-based cervical cancer screening setting. In women >30 years old (n = 3,129), the clinical sensitivity of RealTime for detection of cervical intraepithelial neoplasia of grade 2 (CIN2) or worse (38 cases) and its clinical specificity for lesions of less than CIN2 (3,091 controls) were 100% and 93.3%, respectively, and those of hc2 were 97.4% and 91.8%, respectively. A noninferiority score test showed that the clinical specificity (P < 0.0001) and clinical sensitivity (P = 0.011) of RealTime were noninferior to those of hc2 at the recommended thresholds of 98% and 90%. In the total study population (women 20 to 64 years old; n = 4,432; 57 cases, 4,375 controls), the clinical sensitivity and specificity of RealTime were 98.2% and 89.5%, and those of hc2 were 94.7% and 87.7%, respectively. The analytical sensitivity and analytical specificity of RealTime in detecting targeted HPV types evaluated with the largest sample collection to date (4,479 samples) were 94.8% and 99.8%, and those of hc2 were 93.4% and 97.8%, respectively. Excellent analytical agreement between the two assays was obtained (kappa value, 0.84), while the analytical accuracy of RealTime was significantly higher than that of hc2. RealTime demonstrated high intralaboratory reproducibility and interlaboratory agreement with 500 samples retested 61 to 226 days after initial testing in two different laboratories. RealTime can be considered to be a reliable and robust HPV assay clinically comparable to hc2 for the detection of CIN2+ lesions in a population-based cervical cancer screening setting.
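The clinical sensitivity, specificity, and inter-assay kappa quoted above come from simple 2x2 counts; a Python sketch (the sensitivity call reproduces the abstract's arithmetic for the 38 CIN2+ cases in women >30, while the kappa example uses hypothetical agreement counts, since the full cross-tabulation is not given):

```python
def sensitivity(tp, fn):
    """Fraction of true disease cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of disease-free subjects the test correctly calls negative."""
    return tn / (tn + fp)

def cohen_kappa(both_pos, both_neg, only_a, only_b):
    """Chance-corrected agreement between two binary assays.
    only_a: assay A positive / B negative; only_b: the reverse."""
    n = both_pos + both_neg + only_a + only_b
    po = (both_pos + both_neg) / n                      # observed agreement
    pe = (((both_pos + only_a) / n) * ((both_pos + only_b) / n)
          + ((both_neg + only_b) / n) * ((both_neg + only_a) / n))  # chance agreement
    return (po - pe) / (1 - pe)

# RealTime, women >30: all 38 CIN2+ cases detected
print(sensitivity(38, 0))  # 1.0, i.e. the 100% reported above
# Hypothetical 2x2 HPV-positivity agreement counts on 4479 samples
print(round(cohen_kappa(400, 3900, 90, 89), 2))
```

Kappa above roughly 0.8 is conventionally read as excellent agreement, which is the interpretation the abstract gives its value of 0.84.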
Svagera, Zdeněk; Hanzlíková, Dagmar; Simek, Petr; Hušek, Petr
2012-03-01
Four disulfide-reducing agents, dithiothreitol (DTT), 2,3-dimercaptopropanesulfonate (DMPS), and the newly tested 2-mercaptoethanesulfonate (MESNA) and tris(hydroxypropyl)phosphine (THP), were investigated in detail for the release of sulfur amino acids in human plasma. After protein precipitation with trichloroacetic acid (TCA), the plasma supernatant was treated with methyl, ethyl, or propyl chloroformate via the well-proven derivatization-extraction technique, and the products were subjected to gas chromatographic-mass spectrometric (GC-MS) analysis. All the tested agents proved to be rapid and effective reducing agents for the assay of plasma thiols. Compared with DTT, the novel reducing agents DMPS, MESNA, and THP provided much cleaner extracts and improved analytical performance. Quantification of homocysteine, cysteine, and methionine was performed using their deuterated analogues, whereas the other analytes were quantified by means of 4-chlorophenylalanine. Precise and reliable assay of all examined analytes was achieved, irrespective of the chloroformate reagent used. Average relative standard deviations at each analyte level were ≤6%, quantification limits were 0.1-0.2 μmol L(-1), recoveries were 94-121%, and linearity extended over three orders of magnitude (r(2) equal to 0.997-0.998). Validation performed with the THP agent and propyl chloroformate derivatization demonstrated the robustness and reliability of this simple sample-preparation methodology.
Churchwell, Mona I; Twaddle, Nathan C; Meeker, Larry R; Doerge, Daniel R
2005-10-25
Recent technological advances have made available reverse phase chromatographic media with a 1.7 microm particle size along with a liquid handling system that can operate such columns at much higher pressures. This technology, termed ultra performance liquid chromatography (UPLC), offers significant theoretical advantages in resolution, speed, and sensitivity for analytical determinations, particularly when coupled with mass spectrometers capable of high-speed acquisitions. This paper explores the differences in LC-MS performance by conducting a side-by-side comparison of UPLC for several methods previously optimized for HPLC-based separation and quantification of multiple analytes with maximum throughput. In general, UPLC produced significant improvements in method sensitivity, speed, and resolution. Sensitivity increases with UPLC, which were found to be analyte-dependent, were as large as 10-fold and improvements in method speed were as large as 5-fold under conditions of comparable peak separations. Improvements in chromatographic resolution with UPLC were apparent from generally narrower peak widths and from a separation of diastereomers not possible using HPLC. Overall, the improvements in LC-MS method sensitivity, speed, and resolution provided by UPLC show that further advances can be made in analytical methodology to add significant value to hypothesis-driven research.
Hawkins, Robert C; Badrick, Tony
2015-08-01
In this study we aimed to compare the reporting unit size used by Australian laboratories for routine chemistry and haematology tests with the unit size used by learned authorities and in standard laboratory textbooks, and with the justified unit size based on measurement uncertainty (MU) estimates from quality assurance program data. MU was determined from Royal College of Pathologists of Australasia (RCPA) - Australasian Association of Clinical Biochemists (AACB) and RCPA Haematology Quality Assurance Program survey reports. The reporting unit size implicitly suggested in authoritative textbooks, the RCPA Manual, and the General Serum Chemistry program itself was noted. We also used published data on Australian laboratory practices. The best performing laboratories could justify their chemistry unit size for 55% of analytes, while comparable figures for the 50% and 90% laboratories were 14% and 8%, respectively. Reporting unit size was justifiable for all laboratories for red cell count, for >50% for haemoglobin, but only for the top 10% for haematocrit. Few, if any, could justify their mean cell volume (MCV) and mean cell haemoglobin concentration (MCHC) reporting unit sizes. The reporting unit size used by many laboratories is not justified by present analytical performance. Using MU estimates to determine the reporting interval for quantitative laboratory results ensures reporting practices match local analytical performance and recognises the inherent error of the measurement process.
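One way to operationalize a "justified" reporting unit size is to compare the unit to the analytical SD derived from QC data: a reporting step much finer than the measurement uncertainty conveys spurious precision. The threshold below (unit ≥ analytical SD) is an illustrative assumption, not necessarily the exact criterion used in the study, and the sodium figures are hypothetical:

```python
def unit_justified(reporting_unit, analytical_sd):
    """Illustrative rule: the reporting step should not be finer than the
    analytical SD estimated from quality assurance data."""
    return reporting_unit >= analytical_sd

def round_to_unit(result, unit):
    """Round a raw result to the chosen reporting unit size."""
    return round(result / unit) * unit

# Hypothetical sodium result: analytical SD 0.7 mmol/L from QA survey data
print(unit_justified(1.0, 0.7))   # True: whole mmol/L steps are defensible
print(unit_justified(0.1, 0.7))   # False: 0.1 mmol/L steps overstate precision
print(round_to_unit(139.4, 1.0))  # 139.0
```

Under this rule a laboratory whose MU shrinks (better method, tighter QC) can legitimately move to a finer reporting unit, which is the link between local analytical performance and reporting practice the abstract draws.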
Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E
2017-09-01
Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. This study assessed changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline. A survey on current immunohistochemistry assay validation practices and on the awareness and adoption of a recently published guideline was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing. The results were compared with an earlier survey of validation practices. Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent, and 79.9% (550 of 688) of those had already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty in finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. Dissemination of the 2014 evidence-based guideline validation practices had a positive impact on laboratory performance; some or all of the recommendations have been adopted by nearly 80% of respondents.
Boiano, J M; Wallace, M E; Sieber, W K; Groff, J H; Wang, J; Ashley, K
2000-08-01
A field study was conducted with the goal of comparing the performance of three recently developed or modified sampling and analytical methods for the determination of airborne hexavalent chromium (Cr(VI)). The study was carried out in a hard chrome electroplating facility and in a jet engine manufacturing facility where airborne Cr(VI) was expected to be present. The analytical methods evaluated included two laboratory-based procedures (OSHA Method ID-215 and NIOSH Method 7605) and a field-portable method (NIOSH Method 7703). These three methods employ an identical sampling methodology: collection of Cr(VI)-containing aerosol on a polyvinyl chloride (PVC) filter housed in a sampling cassette, which is connected to a personal sampling pump calibrated at an appropriate flow rate. All three methods involve extraction of the PVC filter in alkaline buffer solution, chemical isolation of the Cr(VI) ion, complexation of the Cr(VI) ion with 1,5-diphenylcarbazide, and spectrometric measurement of the violet chromium diphenylcarbazone complex at 540 nm. However, there are notable differences within the sample preparation procedures used in the three methods. To assess the comparability of the three measurement protocols, a total of 20 side-by-side air samples were collected, equally divided between a chromic acid electroplating operation and a spray paint operation where water-soluble forms of Cr(VI) were used. A range of Cr(VI) concentrations from 0.6 to 960 microg m(-3), with Cr(VI) mass loadings ranging from 0.4 to 32 microg, was measured at the two operations. The equivalence of the means of the log-transformed Cr(VI) concentrations obtained from the different analytical methods was assessed. Based on analysis of variance (ANOVA) results, no statistically significant differences were observed between mean values measured using each of the three methods.
Small but statistically significant differences were observed between results obtained from performance evaluation samples for the NIOSH field method and the OSHA laboratory method.
Pretorius, Carel J; Tate, Jillian R; Wilgen, Urs; Cullen, Louise; Ungerer, Jacobus P J
2018-05-01
We investigated the analytical performance, outlier rate, carryover and reference interval of the Beckman Coulter Access hsTnI in detail and compared it with historical and other commercial assays. We compared the imprecision, detection capability, analytical sensitivity, outlier rate and carryover against two previous Access AccuTnI assay versions. We established the reference interval with stored samples from a previous study and compared the concordances and variances with the Access AccuTnI+3 as well as with two other commercial assays. The Access hsTnI had excellent analytical sensitivity, with a calibration slope 5.6 times steeper than that of the Access AccuTnI+3. The detection capability was markedly improved, with an SD of the blank of 0.18-0.20 ng/L, LoB of 0.29-0.33 ng/L and LoD of 0.58-0.69 ng/L. All the reference interval samples had a result above the LoB value. At a mean concentration of 2.83 ng/L the SD was 0.28 ng/L (CV 9.8%). Carryover (0.005%) and outlier (0.046%) rates were similar to those of the Access AccuTnI+3. The combined male and female 99th percentile reference interval was 18.2 ng/L (90% CI 13.2-21.1 ng/L). Concordance amongst the assays was poor, with only 16.7%, 19.6% and 15.2% of samples identified by all 4 assays as above the 99th, 97.5th and 95th percentiles, respectively. Analytical imprecision was a minor contributor to the observed variances between assays. The Beckman Coulter Access hsTnI assay has excellent analytical sensitivity and precision at concentrations close to zero. This allows cTnI measurement in all healthy individuals and the capability to identify numerically small differences between serial samples as statistically significant. Concordance amongst assays in healthy individuals remains poor. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
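The LoB and LoD figures quoted above are consistent with the standard CLSI EP17-style formulas; a sketch using the abstract's own blank SD (≈0.18 ng/L) and assuming a zero blank mean and a low-level SD equal to the blank SD (both assumptions, since the abstract does not state them):

```python
def limit_of_blank(mean_blank, sd_blank):
    """LoB = mean_blank + 1.645 * SD_blank (95th percentile of blank results)."""
    return mean_blank + 1.645 * sd_blank

def limit_of_detection(lob, sd_low):
    """LoD = LoB + 1.645 * SD of a low-concentration sample."""
    return lob + 1.645 * sd_low

lob = limit_of_blank(0.0, 0.18)
print(round(lob, 2))                            # ~0.3 ng/L, within the reported 0.29-0.33
print(round(limit_of_detection(lob, 0.18), 2))  # ~0.59 ng/L, within the reported 0.58-0.69
```

That the computed values land inside the reported ranges supports reading the abstract's LoB/LoD as conventional EP17-style estimates rather than ad hoc thresholds.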
NASA Astrophysics Data System (ADS)
Dalipi, Rogerta; Marguí, Eva; Borgese, Laura; Bilo, Fabjola; Depero, Laura E.
2016-06-01
Recent technological improvements have led to widespread adoption of benchtop total reflection X-ray fluorescence (TXRF) systems for the analysis of liquid samples. However, benchtop TXRF systems usually offer limited sensitivity compared with high-scale instrumentation, which can restrict their application in some fields. The aim of the present work was to evaluate and compare the analytical capabilities of two TXRF systems, equipped with low-power Mo and W target X-ray tubes, for multielemental analysis of wine samples. Using the Mo-TXRF system, the detection limits for most elements were one order of magnitude lower than those attained using the W-TXRF system. For the detection of high-Z elements like Cd and Ag, however, W-TXRF remains a very good option due to the possibility of K-line detection. The accuracy and precision of the obtained results were evaluated by analyzing spiked real wine samples and comparing the TXRF results with those obtained by inductively coupled plasma optical emission spectroscopy (ICP-OES). In general, good agreement was obtained between ICP-OES and TXRF results for the analysis of both red and white wine samples, except for light elements (e.g., K), for which TXRF concentrations were underestimated. However, the analytical quality of TXRF results can be further improved if wine analysis is performed after dilution of the sample with de-ionized water.
Mechanics of additively manufactured porous biomaterials based on the rhombicuboctahedron unit cell.
Hedayati, R; Sadighi, M; Mohammadi-Aghdam, M; Zadpoor, A A
2016-01-01
Thanks to recent developments in additive manufacturing techniques, it is now possible to fabricate porous biomaterials with arbitrarily complex micro-architectures. The micro-architecture of such biomaterials determines their physical and biological properties, meaning that one could potentially improve the performance of such biomaterials through rational design of the micro-architecture. The relationship between the micro-architecture of porous biomaterials and their physical and biological properties has therefore received increasing attention recently. In this paper, we studied the mechanical properties of porous biomaterials made from a relatively unexplored unit cell, namely the rhombicuboctahedron. We derived analytical relationships that relate the micro-architecture of such porous biomaterials, i.e. the dimensions of the rhombicuboctahedron unit cell, to their elastic modulus, Poisson's ratio, and yield stress. Finite element models were also developed to validate the analytical solutions. Analytical and numerical results were compared with experimental data from one of our recent studies. It was found that the analytical solutions and numerical results show very good agreement, particularly for smaller values of apparent density. The elastic moduli predicted by the analytical and numerical models were also in very good agreement with experimental observations. While in excellent agreement with each other, the analytical and numerical models somewhat over-predicted the yield stress of the porous structures as compared to experimental data. As the ratio of the vertical struts to the inclined struts, α, approaches zero and infinity, the rhombicuboctahedron unit cell respectively approaches the octahedron (or truncated cube) and cube unit cells. 
For those limits, the analytical solutions presented here were found to approach the analytic solutions obtained for the octahedron, truncated cube, and cube unit cells, meaning that the presented solutions are generalizations of the analytical solutions obtained for several other types of porous biomaterials. Copyright © 2015 Elsevier Ltd. All rights reserved.
Determination of Perfluorinated Compounds in the Upper Mississippi River Basin
Despite ongoing efforts to develop robust analytical methods for the determination of perfluorinated compounds (PFCs) such as perfluorooctanesulfonate (PFOS) and perfluorooctanoic acid (PFOA) in surface water, comparatively little has been published on method performance, and the...
Conceptual Design Study on Bolts for Self-Loosing Preventable Threaded Fasteners
NASA Astrophysics Data System (ADS)
Noma, Atsushi; He, Jianmei
2017-11-01
Threaded fasteners using bolts are widely applied in industrial as well as many other fields. However, bolted fasteners suffer from self-loosening problems that cause many accidents. The purpose of this study is to obtain self-loosening preventable threaded fasteners by applying spring-characteristic effects to bolt structures. Helical-cutting bolt structures are introduced through three-dimensional (3D) CAD modeling tools. Analytical evaluations of the spring-characteristic effects of the helical-cutting bolt structures and of the self-loosening prevention performance of the threaded fasteners were performed using the finite element method, and the results are reported. Comparison of slackness test results with the analytical results, and more detailed evaluation of mechanical properties, will be carried out in a future study.
Ho, Sirikit; Lukacs, Zoltan; Hoffmann, Georg F; Lindner, Martin; Wetter, Thomas
2007-07-01
In newborn screening with tandem mass spectrometry, multiple intermediary metabolites are quantified in a single analytical run for the diagnosis of fatty-acid oxidation disorders, organic acidurias, and aminoacidurias. Published diagnostic criteria for these disorders normally incorporate a primary metabolic marker combined with secondary markers, often analyte ratios, for which the markers have been chosen to reflect metabolic pathway deviations. We applied a procedure to extract new markers and diagnostic criteria for newborn screening to the data of newborns with confirmed medium-chain acyl-CoA dehydrogenase deficiency (MCADD) and a control group from the newborn screening program, Heidelberg, Germany. We validated the results with external data from the screening center in Hamburg, Germany. We extracted new markers by performing a systematic search for analyte combinations (features) with high discriminatory performance for MCADD. To select feature thresholds, we applied automated procedures to separate controls and cases on the basis of the feature values. Finally, we built classifiers from these new markers to serve as diagnostic criteria in screening for MCADD. On the basis of χ2 scores, we identified approximately 800 of >628,000 new analyte combinations with superior discriminatory performance compared with the best published combinations. Classifiers built with the new features achieved diagnostic sensitivities and specificities approaching 100%. Feature construction methods provide ways to disclose information hidden in the set of measured analytes. Other diagnostic tasks based on high-dimensional metabolic data might also profit from this approach.
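The ratio-feature search with chi-square scoring described above can be sketched as follows; the toy analyte panel, case/control data, and threshold grid are invented for illustration and are not the Heidelberg or Hamburg screening data:

```python
import numpy as np

# Toy screening data: 4 "analytes", with analyte A0 elevated in affected
# newborns (playing the role of a primary marker such as C8 in MCADD).
rng = np.random.default_rng(0)
n_controls, n_cases = 500, 20
controls = rng.lognormal(mean=0.0, sigma=0.3, size=(n_controls, 4))
cases = rng.lognormal(mean=0.0, sigma=0.3, size=(n_cases, 4))
cases[:, 0] *= 6.0  # primary marker elevated in cases

def chi2_score(feature, labels, threshold):
    """Chi-square statistic of the 2x2 table (above/below threshold x case/control)."""
    above = feature > threshold
    table = np.array([[np.sum(above & (labels == 1)), np.sum(above & (labels == 0))],
                      [np.sum(~above & (labels == 1)), np.sum(~above & (labels == 0))]],
                     dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

X = np.vstack([controls, cases])
y = np.array([0] * n_controls + [1] * n_cases)

# candidate features: single analytes plus all pairwise ratios
features = {f"A{i}": X[:, i] for i in range(X.shape[1])}
features.update({f"A{i}/A{j}": X[:, i] / X[:, j]
                 for i in range(X.shape[1]) for j in range(X.shape[1]) if i != j})

scores = {}
for name, f in features.items():
    # pick the threshold (over sample quantiles) maximizing the chi-square score
    ts = np.quantile(f, np.linspace(0.5, 0.99, 25))
    scores[name] = max(chi2_score(f, y, t) for t in ts)

best = max(scores, key=scores.get)
print(best, round(scores[best], 1))
```

With well-separated cases, the top-ranked feature involves the elevated marker A0, mirroring how the systematic search surfaces discriminative analyte combinations.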
Morishita, Y
2001-05-01
From the perspective of a laboratory technician, the following points concern the effective use of so-called simplified analytical systems. 1. Data from simplified analytical systems should agree with those of designated reference methods, so that discrepancies do not arise between data from different laboratories. 2. The accuracy of results measured with simplified analytical systems is difficult to scrutinize thoroughly and correctly using quality control surveillance procedures based on stored pooled serum or partly processed blood. 3. Guidelines on the content of evaluations are needed to guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized between laboratory technicians and vendor technicians. 5. Attention should also be drawn to the fact that simplified analytical systems are considerably more expensive than routine methods with liquid reagents. 6. It is also hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, can be measured by simplified analytical systems.
UV-Vis as quantification tool for solubilized lignin following a single-shot steam process.
Lee, Roland A; Bédard, Charles; Berberi, Véronique; Beauchet, Romain; Lavoie, Jean-Michel
2013-09-01
In this short communication, UV-Vis spectroscopy was used as an analytical tool for the quantification of lignin concentrations in aqueous media. A significant correlation was determined between absorbance and the concentration of lignin in solution. For this study, lignin was produced from different types of biomass (willow, aspen, softwood, canary grass and hemp) using steam processes. Quantification was performed at 212, 225, 237, 270, 280 and 287 nm. UV-Vis quantification of lignin was found suitable for different types of biomass, making this a time-saving analytical system that could serve as a Process Analytical Tool (PAT) in biorefineries using steam processes or comparable approaches. Copyright © 2013 Elsevier Ltd. All rights reserved.
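The absorbance-concentration calibration underlying such UV-Vis quantification can be sketched in a few lines; the standard concentrations and absorbance readings below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical calibration standards: absorbance at one wavelength (e.g. 280 nm)
# for solubilized-lignin solutions of known concentration.
conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])            # mg/L, assumed
absorbance = np.array([0.002, 0.151, 0.298, 0.452, 0.601, 0.748])  # assumed

# Beer-Lambert behavior: fit A = m*c + b and check linearity (r^2)
m, b = np.polyfit(conc, absorbance, 1)
pred = m * conc + b
ss_res = np.sum((absorbance - pred) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# quantify an unknown sample from its measured absorbance
a_unknown = 0.375
c_unknown = (a_unknown - b) / m
print(f"r^2 = {r2:.4f}, unknown ~= {c_unknown:.1f} mg/L")
```

A high r² over the working range is what justifies using a single-wavelength reading as a fast process-analytical measurement.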
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1973-01-01
The flow characteristics of several tandem bladed compressor stators were analytically evaluated over a range of inlet incidence angles. The ratios of rear-segment to front-segment chord and camber were varied. Results were also compared to the analytical performance of a reference solid blade section. All tandem blade sections exhibited lower calculated losses than the solid stator. But no one geometric configuration exhibited clearly superior characteristics. The front segment accepts the major effect of overall incidence angle change. Rear- to front-segment camber ratios of 4 and greater appeared to be limited by boundary-layer separation from the pressure surface of the rear segment.
Validating a faster method for reconstitution of Crotalidae Polyvalent Immune Fab (ovine).
Gerring, David; King, Thomas R; Branton, Richard
2013-07-01
Reconstitution of CroFab® (Crotalidae Polyvalent Immune Fab [ovine]) lyophilized drug product was previously performed using 10 mL sterile water for injection (WFI) followed by up to 36 min of gentle swirling of the vial. CroFab has been clinically demonstrated to be most effective when administered within 6 h of snake envenomation, and improved clinical outcomes are correlated with quicker administration. An alternate reconstitution method was devised, using 18 mL 0.9% saline with manual inversion, with the goal of shortening reconstitution time while maintaining a high-quality, efficacious product. An analytical study was designed to compare the physicochemical properties of 3 separate batches of CroFab when reconstituted using the standard procedure (10 mL WFI with gentle swirling) and a modified rapid procedure using 18 mL 0.9% saline and manual inversion. The physical and chemical characteristics of the same 3 batches were assessed using various analytical methodologies associated with routine quality control release testing. In addition, further analytical methodologies were applied in order to elucidate possible structural changes that might be induced by the changed reconstitution procedure. Batches A, B, and C required mean reconstitution times of 25 min 51 s using the label method and 3 min 07 s (an 88.0% mean decrease) using the modified method. Physicochemical characteristics (color and clarity, pH, purity, protein content, potency) were found to be highly comparable. Characterization assays (dynamic light scattering, analytical ultracentrifugation, LC-MS, SDS-PAGE and circular dichroism spectroscopy) were also all found to be comparable between methods. When comparing CroFab batches that were reconstituted using the labeled and modified methods, the physicochemical and biological (potency) characteristics of CroFab were not significantly changed when challenged by the various standard analytical methodologies applied in routine quality control analysis. 
Additionally, no changes in the CroFab molecule regarding degradation, aggregation, purity, structure, or mass were observed. The analyses performed validated the use of the more rapid reconstitution method using 18 mL 0.9% saline in order to allow a significantly reduced time to administration of CroFab to patients in need. Copyright © 2013 Elsevier Ltd. All rights reserved.
Experimental and analytical assessment of the thermal behavior of spiral bevel gears
NASA Technical Reports Server (NTRS)
Handschuh, Robert F.; Kicher, Thomas P.
1995-01-01
An experimental and analytical study of spiral bevel gears operating in an aerospace environment has been performed. Tests were conducted within a closed loop test stand at NASA Lewis Research Center. Tests were conducted to 537 kW (720 hp) at 14,400 rpm. The effects of various operating conditions on spiral bevel gear steady state and transient temperature are presented. Also, a three-dimensional analysis of the thermal behavior was conducted using a nonlinear finite element analysis computer code. The analysis was compared to the experimental results attained in this study. The results agreed well with each other for the cases compared and were no more than 10 percent different in magnitude.
Experimental evaluation of expendable supersonic nozzle concepts
NASA Technical Reports Server (NTRS)
Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.
1990-01-01
Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.
Influence of Wake Models on Calculated Tiltrotor Aerodynamics
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.
High-frequency phase shift measurement greatly enhances the sensitivity of QCM immunosensors.
March, Carmen; García, José V; Sánchez, Ángel; Arnau, Antonio; Jiménez, Yolanda; García, Pablo; Manclús, Juan J; Montoya, Ángel
2015-03-15
In spite of being widely used for in-liquid biosensing applications, sensitivity improvement of conventional (5-20 MHz) quartz crystal microbalance (QCM) sensors remains an unsolved, challenging task. With the help of a new electronic characterization approach based on phase-change measurements at a constant fixed frequency, a highly sensitive and versatile high fundamental frequency (HFF) QCM immunosensor has successfully been developed and tested for use in pesticide (carbaryl and thiabendazole) analysis. The analytical performance of several immunosensors was compared in competitive immunoassays taking the carbaryl insecticide as the model analyte. The highest sensitivity was exhibited by the 100 MHz HFF-QCM carbaryl immunosensor. When results were compared with those reported for 9 MHz QCM, analytical parameters clearly showed an improvement of one order of magnitude in sensitivity (estimated as the I50 value) and two orders of magnitude in the limit of detection (LOD): I50 values of 30 μg/L vs. 0.66 μg/L and LODs of 11 μg/L vs. 0.14 μg/L for 9 and 100 MHz, respectively. For the fungicide thiabendazole, the I50 value was roughly the same as that previously reported for SPR under the same biochemical conditions, whereas the LOD improved by a factor of 2. The analytical performance achieved by high-frequency QCM immunosensors surpassed that of conventional QCM and SPR, closely approaching the most sensitive ELISAs. The developed 100 MHz QCM immunosensor strongly improves sensitivity in biosensing, and can therefore be considered a very promising new analytical tool for in-liquid applications where highly sensitive detection is required. Copyright © 2014 Elsevier B.V. All rights reserved.
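How I50 and LOD are read off a competitive immunoassay calibration curve can be illustrated with a four-parameter logistic model; the parameters below are assumed values for illustration, not the reported carbaryl data:

```python
# Four-parameter logistic (4PL) competitive-immunoassay curve with invented
# parameters: A = top, B = slope, C = I50 (ug/L), D = bottom.
A, B, C, D = 1.0, 1.2, 0.66, 0.05

def signal(x):
    """Normalized sensor response for analyte concentration x (ug/L)."""
    return D + (A - D) / (1.0 + (x / C) ** B)

def conc_at(frac):
    """Concentration giving the stated fraction of the dynamic range."""
    s = D + frac * (A - D)
    return C * ((A - D) / (s - D) - 1.0) ** (1.0 / B)

i50 = conc_at(0.5)   # 50% inhibition; equals the C parameter by construction
lod = conc_at(0.9)   # a common criterion: concentration at 90% of maximal signal
print(f"I50 = {i50:.2f} ug/L, LOD = {lod:.2f} ug/L")
```

Inverting the fitted curve at the 50% and 90% signal levels is how sensitivity (I50) and LOD figures of the kind quoted above are obtained.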
Wu, Yun; Wang, Fenrong; Ai, Yu; Ma, Wen; Bian, Qiaoxia; Lee, David Y-W; Dai, Ronghua
2015-06-01
A simple, sensitive and reliable ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method has been developed and validated for simultaneous quantitation of seven coumarins, the bioactive ingredients of Huo Luo Xiao Ling Dan (HLXLD), in rat plasma. A liquid-liquid extraction with ether-dichloromethane (2:1, v/v) was used to prepare the plasma samples. The analytes and the internal standard (IS), bifendate, were separated on a Shim-pack XR-ODS column (75 mm × 3.0 mm, 2.2 μm particles) using gradient elution with a mobile phase consisting of methanol and 0.05% formic acid in water at a flow rate of 0.4 mL/min. Detection was performed on a triple quadrupole (TQ) tandem mass spectrometer equipped with an electrospray ionization source in positive ionization and multiple reaction monitoring (MRM) mode. The lower limits of quantitation (LLOQ) were 0.03-0.25 ng/mL for all the analytes. Intra- and inter-day precision and accuracy of the seven analytes were well within acceptance criteria (15%). The matrix effects and mean extraction recoveries of the analytes and IS from rat plasma were all satisfactory. The validated method has been successfully applied to compare the pharmacokinetic profiles of the seven active ingredients in rat plasma between normal and arthritic rats after oral administration of HLXLD, Angelica pubescens extract and Notopterygium incisum extract, respectively. Results showed that there were remarkable differences in the pharmacokinetic properties of the analytes among the different groups. Copyright © 2015. Published by Elsevier B.V.
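The weighted-calibration and precision arithmetic typical of such bioanalytical validation can be sketched as follows; the concentrations, peak-area ratios, and QC replicates below are invented, not the HLXLD data:

```python
import numpy as np

# Hypothetical calibration: analyte/IS peak-area ratios vs. nominal concentration.
conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])                    # ng/mL
ratio = np.array([0.0103, 0.0201, 0.1005, 0.2010, 0.9950, 2.0100, 9.9000])

# 1/x^2-weighted linear fit (np.polyfit weights multiply the residuals,
# so pass sqrt of the desired weights, i.e. 1/x)
w = 1.0 / conc**2
slope, intercept = np.polyfit(conc, ratio, 1, w=np.sqrt(w))

back = (ratio - intercept) / slope        # back-calculated concentrations
accuracy = 100.0 * back / conc            # percent of nominal

# intra-day precision at one QC level (five invented replicates, ng/mL)
qc = np.array([0.97, 1.02, 1.05, 0.99, 1.01])
cv = 100.0 * qc.std(ddof=1) / qc.mean()
print(f"slope = {slope:.4f}, accuracy {accuracy.min():.1f}-{accuracy.max():.1f}%, CV = {cv:.1f}%")
```

The 1/x² weighting keeps relative accuracy tight at the low end of the curve, which is what lets an LLOQ near the lowest calibrator meet the 15% acceptance criterion.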
Kling, Maximilian; Seyring, Nicole; Tzanova, Polia
2016-09-01
Economic instruments provide significant potential for countries with low municipal waste management performance to decrease landfill rates and increase recycling rates for municipal waste. In this research, the strengths and weaknesses of landfill tax, pay-as-you-throw charging systems, deposit-refund systems and extended producer responsibility schemes are compared, focusing on conditions in countries with low waste management performance. In order to prioritise instruments for implementation in these countries, the analytic hierarchy process is applied, using the results of a literature review as input for the comparison. The assessment reveals that pay-as-you-throw is the most preferable instrument when utility-related criteria are considered (wb = 0.35; analytic hierarchy process distributive mode; absolute comparison), mainly owing to its waste prevention effect, closely followed by landfill tax (wb = 0.32). Deposit-refund systems (wb = 0.17) and extended producer responsibility (wb = 0.16) rank third and fourth, with marginal differences owing to their similar nature. When cost-related criteria are additionally included in the comparison, landfill tax seems to provide the highest utility-cost ratio. Data from the literature concerning costs (in contrast to utility-related criteria) are currently not sufficient for a robust ranking according to the utility-cost ratio. In general, the analytic hierarchy process is seen as a suitable method for assessing economic instruments in waste management. Independent of the chosen analytic hierarchy process mode, the results provide valuable indications for policy-makers on the application of economic instruments, as well as on their specific strengths and weaknesses. Nevertheless, the instruments need to be put in the country-specific context, along with the results of this analytic hierarchy process application, before practical decisions are made. © The Author(s) 2016.
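The AHP weighting step can be sketched as follows; the pairwise comparison matrix below is illustrative and not taken from the study's literature review:

```python
import numpy as np

# Saaty-scale pairwise comparisons of four economic instruments on
# utility-related criteria: A[i, j] = preference of instrument i over j.
instruments = ["PAYT", "landfill tax", "deposit-refund", "EPR"]
A = np.array([
    [1.0, 1.0, 3.0, 3.0],
    [1.0, 1.0, 2.0, 2.0],
    [1/3, 1/2, 1.0, 1.0],
    [1/3, 1/2, 1.0, 1.0],
])

# priority weights = normalized principal eigenvector
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# consistency ratio (random index RI = 0.90 for n = 4)
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90

for name, weight in sorted(zip(instruments, w), key=lambda t: -t[1]):
    print(f"{name:15s} {weight:.2f}")
print(f"CR = {cr:.3f}")
```

A consistency ratio below 0.1 is the usual acceptance threshold for the judgments; the eigenvector then yields a priority ranking of the instruments analogous to the wb values quoted above.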
Wang, Xiaogang; Qi, Meiling; Fu, Ruonong
2014-12-05
Here we report the separation performance of a new stationary phase of cucurbit[7]uril (CB7) incorporated into an ionic liquid-based sol-gel coating (CB7-SG) for capillary gas chromatography (GC). The CB7-SG stationary phase showed an average polarity of 455, indicating its polar nature. Abraham system constants revealed that its major interactions with analytes include H-bond basicity (a), dipole-dipole (s) and dispersive (l) interactions. The CB7-SG stationary phase achieved baseline separation for a wide range of analytes with symmetrical peak shapes and showed advantages over a conventional polar stationary phase that failed to resolve some critical analytes. It also exhibited retention behaviors different from the conventional stationary phase in terms of retention times and elution order. Most interestingly, in contrast to the conventional polar phase, the CB7-SG stationary phase exhibited longer retention for analytes of lower polarity but relatively comparable retention for polar analytes such as alcohols and phenols. The high resolving ability and unique retention behaviors of the CB7-SG stationary phase may stem from the combination of the aforementioned interactions and shape selectivity. Moreover, the CB7-SG column showed good peak shapes for analytes prone to peak tailing, good thermal stability up to 280°C, and separation repeatability with RSD values in the range of 0.01-0.11% for intra-day, 0.04-0.41% for inter-day and 2.5-6.0% for column-to-column measurements, respectively. As demonstrated, the proposed coating method can simultaneously address the solubility problem of CBs for the intended purpose and achieve outstanding GC separation performance. Copyright © 2014 Elsevier B.V. All rights reserved.
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
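The idea of verifying a Monte Carlo result against an exact analytical solution can be illustrated with a one-group, infinite-medium criticality problem, for which k-infinity = νΣf/Σa. The cross sections below are invented for illustration; this is not one of the MCNP benchmark problems:

```python
import random

# One-group, infinite homogeneous medium. Illustrative cross sections (1/cm).
nu = 2.5
sigma_s, sigma_c, sigma_f = 0.27, 0.04, 0.05   # scatter, capture, fission
sigma_a = sigma_c + sigma_f
sigma_t = sigma_s + sigma_a

k_analytic = nu * sigma_f / sigma_a            # exact solution

def track_neutron(rng):
    """Follow one neutron to absorption; return neutrons produced."""
    while True:
        xi = rng.random() * sigma_t
        if xi < sigma_s:
            continue            # scatter: energy/angle irrelevant in one group
        elif xi < sigma_s + sigma_c:
            return 0.0          # capture
        else:
            return nu           # fission

rng = random.Random(12345)
histories = 200_000
k_mc = sum(track_neutron(rng) for _ in range(histories)) / histories
print(f"analytic k-inf = {k_analytic:.4f}, MC estimate = {k_mc:.4f}")
```

Agreement of the stochastic estimate with the closed-form answer, within statistical uncertainty, is exactly the kind of check a verification suite automates over many such problems.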
An Efficient and Effective Design of InP Nanowires for Maximal Solar Energy Harvesting.
Wu, Dan; Tang, Xiaohong; Wang, Kai; He, Zhubing; Li, Xianqiang
2017-11-25
Solar cells based on subwavelength-dimension semiconductor nanowire (NW) arrays promise comparable or better performance than their planar counterparts by taking advantage of strong light coupling and light trapping. In this paper, we present an accurate and time-saving analytical design for optimal geometrical parameters of vertically aligned InP NWs for maximal solar energy absorption. Short-circuit current densities are calculated for each NW array with different geometrical dimensions under solar illumination. Optimal geometrical dimensions are quantitatively presented for single, double, and multiple diameters of the NW arrays, arranged both squarely and hexagonally, achieving a maximal short-circuit current density of 33.13 mA/cm2. At the same time, intensive finite-difference time-domain numerical simulations are performed to investigate the same NW arrays for the highest light absorption. Compared with time-consuming simulations and experimental results, the predicted maximal short-circuit current densities have tolerances of below 2.2% for all cases. These results unambiguously demonstrate that this analytical method provides a fast and accurate route to guide high-performance InP NW-based solar cell design.
A splay tree-based approach for efficient resource location in P2P networks.
Zhou, Wei; Tan, Zilong; Yao, Shaowen; Wang, Shipu
2014-01-01
Resource location in structured P2P systems has a critical influence on system performance. Existing analytical studies of the Chord protocol have shown some potential improvements in performance. In this paper a splay tree-based new Chord structure called SChord is proposed to improve the efficiency of locating resources. We consider a novel implementation of the Chord finger table (routing table) based on the splay tree. This approach extends the Chord finger table with additional routing entries. An adaptive routing algorithm is proposed for implementation, and it can be shown that hop count is significantly reduced without introducing any other protocol overheads. We analyze the hop count of the adaptive routing algorithm, as compared to Chord variants, and demonstrate sharp upper and lower bounds for both worst-case and average-case settings. In addition, we theoretically analyze the hop reduction in SChord and derive the fact that SChord can significantly reduce routing hops as compared to Chord. Several simulations are presented to evaluate the performance of the algorithm and support our analytical findings. The simulation results show the efficiency of SChord.
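For reference, plain Chord finger-table routing (the baseline that SChord improves on) can be sketched as follows; this is a generic illustration, not the SChord splay-tree algorithm:

```python
import bisect
import random

# Identifiers live on a 2^M ring; node n's k-th finger is the first live
# node at or after n + 2^k, giving O(log N) greedy lookups.
M = 10                      # identifier bits, ring size 2^M = 1024
RING = 1 << M

def successor(nodes, ident):
    """First live node clockwise from `ident` (nodes sorted ascending)."""
    i = bisect.bisect_left(nodes, ident % RING)
    return nodes[i % len(nodes)]

def fingers(nodes, n):
    return [successor(nodes, n + (1 << k)) for k in range(M)]

def lookup_hops(nodes, tables, start, key):
    """Greedy routing: hop to the farthest finger that does not pass the owner."""
    owner = successor(nodes, key)
    cur, hops = start, 0
    while cur != owner:
        d = (owner - cur) % RING
        # fingers that make progress without overshooting the key's owner
        candidates = [f for f in tables[cur] if 0 < (f - cur) % RING <= d]
        cur = max(candidates, key=lambda f: (f - cur) % RING)
        hops += 1
    return hops

rng = random.Random(7)
nodes = sorted(rng.sample(range(RING), 64))
tables = {n: fingers(nodes, n) for n in nodes}
trials = [lookup_hops(nodes, tables, rng.choice(nodes), rng.randrange(RING))
          for _ in range(500)]
print(f"avg hops = {sum(trials) / len(trials):.2f}, max = {max(trials)}")
```

Each hop at least halves the remaining ring distance, so hop counts stay within M; SChord's splay-tree finger table targets exactly this hop count with its additional, adaptively reorganized routing entries.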
NASA Astrophysics Data System (ADS)
Khazaee, I.
2015-05-01
In this study, the performance of a proton exchange membrane (PEM) fuel cell in mobile applications is investigated analytically. At present, the main uses and advantages of fuel cells bear particularly strongly on mobile applications such as vehicles, mobile computers and mobile telephones. External parameters such as the cell temperature (Tcell), the operating pressure of the gases (P) and the air stoichiometry (λair) affect the performance and voltage losses of the PEM fuel cell. Because many theoretical, empirical and semi-empirical models of the PEM fuel cell exist, it is necessary to compare their accuracy. Theoretical models derived from thermodynamic and electrochemical approaches are very exact but complex, so it is easier to use empirical and semi-empirical models to forecast fuel cell system performance in many applications, such as mobile applications. The main purpose of this study is to obtain the semi-empirical relation of a PEM fuel cell with the least voltage losses. The results are compared with existing experimental results in the literature, and good agreement is seen.
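A generic semi-empirical polarization model of the kind compared in such studies (Larminie-Dicks form) can be sketched as follows; all coefficients here are illustrative placeholders, not values identified in the paper:

```python
import math

# Illustrative semi-empirical PEM fuel cell parameters (assumed values).
E_rev = 1.2        # reversible cell voltage, V
A = 0.06           # Tafel slope, V
i0 = 1e-4          # exchange current density, A/cm^2
r = 0.2            # area-specific ohmic resistance, ohm*cm^2
m, n = 3e-5, 8.0   # empirical mass-transport constants (V, cm^2/A)

def cell_voltage(i):
    """Cell voltage (V) at current density i (A/cm^2)."""
    activation = A * math.log(i / i0)        # activation (Tafel) loss
    ohmic = r * i                            # membrane/contact resistance loss
    concentration = m * math.exp(n * i)      # mass-transport loss
    return E_rev - activation - ohmic - concentration

for i in (0.05, 0.2, 0.5, 0.8, 1.1):
    print(f"i = {i:4.2f} A/cm^2 -> V = {cell_voltage(i):.3f} V, "
          f"P = {i * cell_voltage(i):.3f} W/cm^2")
```

Fitting the few coefficients of such a relation to measured polarization data is what makes the semi-empirical route far quicker than a full thermodynamic/electrochemical model while still capturing the effect of operating conditions on voltage losses.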
Ai, Yu; Wu, Yun; Wang, Fenrong; Ma, Wen; Bian, Qiaoxia; Lee, David Y-W; Dai, Ronghua
2015-03-01
The objective of this study was to develop a sensitive and reliable ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method for simultaneous quantitation of three monoterpene glycosides (paeoniflorin, albiflorin and oxypaeoniflorin) and four alkaloids (tetrahydropalmatine, corydaline, dehydrocorydaline and berberine), the main active ingredients of Radix Paeoniae Rubra extract (RPE) and Corydalis yanhusuo extract (CYE) in Huo Luo Xiao Ling Dan (HLXLD), and to compare the pharmacokinetics of these active ingredients in normal and arthritic rats orally administered HLXLD or RPE/CYE alone. The analytes and the internal standard (IS), geniposide, were separated on an XBridge C18 column (150 × 4.6 mm, 3.5 µm) using gradient elution with a mobile phase consisting of methanol and 0.01% formic acid in water at a flow rate of 0.6 ml/min. Detection of the analytes was performed on an Acquity UPLC-MS/MS system with electrospray ionization in multiple reaction monitoring mode, with polarity switching between negative (for monoterpene glycosides) and positive (for alkaloids) ionization modes. The lower limits of quantification were 2.5, 1, 0.5, 0.2, 0.2, 0.02 and 0.01 ng/ml for paeoniflorin, albiflorin, oxypaeoniflorin, tetrahydropalmatine, corydaline, dehydrocorydaline and berberine, respectively. Intra-day and inter-day precision and accuracy of the analytes were well within acceptance criteria (15%). The mean extraction recoveries of the analytes and IS from rat plasma were all more than 83.1%. The validated method has been successfully applied to the determination of the analytes. Results showed that there were remarkable differences in the pharmacokinetic properties of the analytes between the herbal formula and single-herb groups, and between the normal and arthritic groups. Copyright © 2015 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batcheller, Thomas Aquinas; Taylor, Dean Dalton
Idaho Nuclear Technology and Engineering Center 300,000-gallon vessel WM-189 was filled in late 2001 with concentrated sodium bearing waste (SBW). Three airlifted liquid samples and a steam-jetted slurry sample were obtained for quantitative analysis and characterization of WM-189 liquid-phase SBW and tank heel sludge. Uncertainty estimates were provided for most of the reported data values, based on the greater of (a) analytical uncertainty and (b) variation of analytical results between nominally similar samples. A consistency check on the data was performed by comparing the total mass of dissolved solids in the liquid, as measured gravimetrically from a dried sample, with the corresponding value obtained by summing the masses of cations and anions in the liquid, based on the reported analytical data. After reasonable adjustments to the nitrate and oxygen concentrations, satisfactory consistency between the two results was obtained. A similar consistency check was performed on the reported compositional data for sludge solids from the steam-jetted sample. In addition to the compositional data, various other analyses were performed: the particle size distribution was measured for the sludge solids, sludge settling tests were performed, and viscosity measurements were made. WM-189 characterization results were compared with those for WM-180 and other Tank Farm Facility tank characterization data. A 2-liter batch of WM-189 simulant was prepared and a clear, stable solution was obtained, based on a general procedure for mixing SBW simulant developed by Dr. Jerry Christian. This WM-189 SBW simulant is considered suitable for laboratory testing for process development.
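The dissolved-solids consistency check described above amounts to simple mass-balance arithmetic; the ion concentrations and gravimetric value below are invented stand-ins, not the WM-189 analytical data:

```python
# Invented cation and anion concentrations (g/L) standing in for reported
# analytical data on the liquid-phase waste.
cations = {"Na+": 45.0, "K+": 7.2, "Al3+": 16.0, "H+": 2.0}
anions = {"NO3-": 150.0, "Cl-": 1.1, "SO4^2-": 5.0, "F-": 0.9}

summed = sum(cations.values()) + sum(anions.values())
gravimetric = 230.0   # dried-sample total dissolved solids, g/L (invented)
pct_diff = 100.0 * abs(summed - gravimetric) / gravimetric
print(f"summed = {summed:.1f} g/L, gravimetric = {gravimetric:.1f} g/L, "
      f"difference = {pct_diff:.1f}%")
```

A small percent difference between the summed ionic masses and the gravimetric total is what the report treats as satisfactory consistency; a large one points at a mis-measured major species, such as the nitrate and oxygen values that required adjustment.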
A hybrid approach to near-optimal launch vehicle guidance
NASA Technical Reports Server (NTRS)
Leung, Martin S. K.; Calise, Anthony J.
1992-01-01
This paper evaluates a proposed hybrid analytical/numerical approach to launch-vehicle guidance for ascent to orbit injection. The feedback-guidance approach is based on a piecewise nearly analytic zero-order solution evaluated using a collocation method. The zero-order solution is then improved through a regular perturbation analysis, wherein the neglected dynamics are corrected in the first-order term. For real-time implementation, the guidance approach requires solving a set of small-dimension nonlinear algebraic equations and performing quadrature. Assessment of performance and reliability is carried out through closed-loop simulation of a vertically launched two-stage heavy-lift vehicle ascending to low Earth orbit. The solutions are compared with optimal solutions generated from a multiple-shooting code. In this example the guidance approach delivers over 99.9 percent of optimal performance and terminal constraint accuracy.
Buelow, Daelynn; Sun, Yilun; Tang, Li; Gu, Zhengming; Pounds, Stanley; Hayden, Randall
2016-07-01
Monitoring of Epstein-Barr virus (EBV) load in immunocompromised patients has become integral to their care. An increasing number of reagents are available for quantitative detection of EBV; however, few comparative data have been published. Four real-time PCR systems (one using laboratory-developed reagents and three using analyte-specific reagents) were compared with one another for detection of EBV from whole blood. Whole blood specimens seeded with EBV were used to determine quantitative linearity, analytical measurement range, lower limit of detection, and CV for each assay. Retrospective testing of 198 clinical samples was performed in parallel with all methods; results were compared to determine relative quantitative and qualitative performance. All assays showed similar performance. No significant difference was found in limit of detection (3.12-3.49 log10 copies/mL; P = 0.37). A strong qualitative correlation was seen across all assays with clinical samples (positive detection rates of 89.5%-95.8%). Quantitative correlation of clinical samples across assays was also seen in pairwise regression analysis, with R(2) ranging from 0.83 to 0.95. Normalizing clinical sample results to IU/mL did not alter the quantitative correlation between assays. Quantitative EBV detection by real-time PCR can be performed over a wide linear dynamic range, using three different commercially available reagents and laboratory-developed methods. EBV was detected with comparable sensitivity and quantitative correlation for all assays. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration
NASA Technical Reports Server (NTRS)
Merritt, D. A.; Brand, W. A.; Hayes, J. M.
1994-01-01
In trial analyses of a series of n-alkanes, precise determinations of ¹³C contents were based on isotopic standards introduced by five different techniques, and the results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO₂ was supplied from a conventional inlet and mixed with the analyte in the ion source; or CO₂ was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO₂ standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (σ = 0.00006 at.% ¹³C, or 0.06%, for 250 pmol C as CO₂ reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ σ ≤ 0.0002 at.%, or 0.1 ≤ σ ≤ 0.2%).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandon, Lav; Kuhn, Kevin J; Drake, Lawrence R
Los Alamos National Laboratory's (LANL) Actinide Analytical Chemistry (AAC) group has been in existence since the Manhattan Project. It maintains a complete set of analytical capabilities for performing complete characterization (elemental assay; isotopic composition; metallic and non-metallic trace impurities) of uranium and plutonium samples in different forms. For a majority of the customers there are strong quality assurance (QA) and quality control (QC) objectives, including the highest accuracy and precision with well-defined uncertainties associated with the analytical results. Los Alamos participates in various international and national programs, such as the Plutonium Metal Exchange Program, New Brunswick Laboratory's (NBL's) Safeguards Measurement Evaluation Program (SME), and several other inter-laboratory round-robin exercises, to monitor and evaluate the data quality generated by AAC. These programs also provide independent verification of analytical measurement capabilities, and allow any technical problems with analytical measurements to be identified and corrected. This presentation will focus on key analytical capabilities for destructive analysis in AAC and also on comparative data between LANL and peer groups for Pu assay and isotopic analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.L.; Pool, K.H.; Evans, J.C.
1997-01-01
This report describes the analytical results of vapor samples taken from the headspace of waste storage tank 241-BY-108 (Tank BY-108) at the Hanford Site in Washington State. It is the second in a series comparing vapor sampling of the tank headspace using the Vapor Sampling System (VSS) and the In Situ Vapor Sampling (ISVS) system without high-efficiency particulate air (HEPA) prefiltration. The results include air concentrations of water (H₂O) and ammonia (NH₃), permanent gases, total non-methane organic compounds (TO-12), and individual organic analytes collected in SUMMA™ canisters and on triple sorbent traps (TSTs). Samples were collected by Westinghouse Hanford Company (WHC) and analyzed by Pacific Northwest National Laboratory (PNNL). Analyses were performed by the Vapor Analytical Laboratory (VAL) at PNNL. Analyte concentrations were based on analytical results and, where appropriate, sample volume measurements provided by WHC.
An Analytical-Numerical Model for Two-Phase Slug Flow through a Sudden Area Change in Microchannels
Momen, A. Mehdizadeh; Sherif, S. A.; Lear, W. E.
2016-01-01
In this article, two new analytical models have been developed to calculate two-phase slug flow pressure drop in microchannels through a sudden contraction. Even though many studies have been reported on two-phase flow in microchannels, considerable discrepancies still exist, mainly due to the difficulties in experimental setup and measurements. Numerical simulations were performed to support the new analytical models and to explore in more detail the physics of the flow in microchannels with a sudden contraction. Both analytical and numerical results were compared to the available experimental data and other empirical correlations. Results show that the models, which were developed based on the slug and semi-slug assumptions, agree well with experiments in microchannels. Moreover, in contrast to previous empirical correlations, which were tuned for a specific geometry, the new analytical models are capable of taking geometrical parameters as well as flow conditions into account.
NASA Technical Reports Server (NTRS)
Nayfeh, A. H.; Kaiser, J. E.; Marshall, R. L.; Hurst, L. J.
1978-01-01
The performance of sound suppression techniques in ducts that produce refraction effects due to axial velocity gradients was evaluated. A computer code based on the method of multiple scales was used to calculate the influence of axial variations due to slow changes in the cross-sectional area, as well as transverse gradients due to the wall boundary layers. An attempt was made to verify the analytical model through direct comparison of experimental and computational results and through analytical determination of the influence of axial gradients on optimum liner properties. However, the analytical studies were unable to examine the influence of non-parallel ducts on the optimum liner conditions. For liner properties not close to optimum, the analytical predictions and the experimental measurements were compared. The circumferential variations of pressure amplitudes and phases at several axial positions were examined in straight and variable-area ducts, in hard-wall and lined sections, with and without a mean flow. Reasonable agreement between the theoretical and experimental results was obtained.
Fujiyoshi, Tomoharu; Ikami, Takahito; Sato, Takashi; Kikukawa, Koji; Kobayashi, Masato; Ito, Hiroshi; Yamamoto, Atsushi
2016-02-19
The consequences of matrix effects in GC are a major concern in pesticide residue analysis. The aim of this study was to evaluate the applicability of an analyte protectant generator in pesticide residue analysis using a GC-MS system. The technique is based on continuous introduction of ethylene glycol into the carrier gas. Ethylene glycol as an analyte protectant effectively compensated for the matrix effects in agricultural product extracts. All peak intensities were increased by this technique without affecting GC-MS performance. Calibration curves for ethylene glycol in GC-MS systems with various degrees of pollution were compared, and similar response enhancements were observed. These results suggest a convenient multi-residue GC-MS method using an analyte protectant generator in place of the conventional compensation method for matrix-induced response enhancement, which adds a mixture of analyte protectants to both neat and sample solutions. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Goodall, John R
2014-01-01
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.
Lee, Sang Hun; Yoo, Myung Hoon; Park, Jun Woo; Kang, Byung Chul; Yang, Chan Joo; Kang, Woo Suk; Ahn, Joong Ho; Chung, Jong Woo; Park, Hong Ju
2018-06-01
To evaluate whether video head impulse test (vHIT) gains are dependent on the measuring device and method of analysis. Prospective study. vHIT was performed in 25 healthy subjects using two devices simultaneously. vHIT gains were compared between these instruments and using five different methods of comparing position and velocity gains during head movement intervals. The two devices produced different vHIT gain results with the same method of analysis. There were also significant differences in the vHIT gains measured using different analytical methods. The gain analytic method that compares the areas under the velocity curve (AUC) of the head and eye movements during head movements showed lower vHIT gains than a method that compared the peak velocities of the head and eye movements. The former method produced the vHIT gain with the smallest standard deviation among the five procedures tested in this study. vHIT gains differ in normal subjects depending on the device and method of analysis used, suggesting that it is advisable for each device to have its own normal values. Gain calculations that compare the AUC of the head and eye movements during the head movements show the smallest variance.
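The two gain conventions compared above, area under the velocity curve (AUC) versus peak velocity, can be sketched in a few lines. The following is an illustrative reimplementation with synthetic velocity traces, not the devices' software; the function names and waveforms are assumptions:

```python
import numpy as np

def _auc(v, dt):
    """Trapezoidal area under |v(t)| for a uniformly sampled trace."""
    v = np.abs(v)
    return dt * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)

def vhit_gain_auc(head_vel, eye_vel, dt):
    """Gain as the ratio of areas under the eye and head velocity curves."""
    return _auc(eye_vel, dt) / _auc(head_vel, dt)

def vhit_gain_peak(head_vel, eye_vel):
    """Gain as the ratio of peak eye velocity to peak head velocity."""
    return np.max(np.abs(eye_vel)) / np.max(np.abs(head_vel))

# Synthetic 150-ms impulse: the eye response has a lower peak but a broader
# shape, so the two conventions yield different gain values.
t = np.linspace(0.0, 0.15, 151)
dt = t[1] - t[0]
head = 250.0 * np.sin(np.pi * t / 0.15) ** 2   # deg/s
eye = 180.0 * np.sin(np.pi * t / 0.15)         # deg/s

gain_auc = vhit_gain_auc(head, eye, dt)   # ≈ 0.92
gain_peak = vhit_gain_peak(head, eye)     # = 0.72
```

The spread between the two values on the same traces illustrates why gains from different analysis methods should not be compared against a single normal range.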
Analytical sensitivity of current best-in-class malaria rapid diagnostic tests.
Jimenez, Alfons; Rees-Channer, Roxanne R; Perera, Rushini; Gamboa, Dionicia; Chiodini, Peter L; González, Iveth J; Mayor, Alfredo; Ding, Xavier C
2017-03-24
Rapid diagnostic tests (RDTs) are today the most widely used method for malaria diagnosis and are recommended, alongside microscopy, for the confirmation of suspected cases before the administration of anti-malarial treatment. The diagnostic performance of RDTs, as compared to microscopy or PCR, is well described, but the actual analytical sensitivity of current best-in-class tests is poorly documented. This value is however a key performance indicator and a benchmark needed to develop new RDTs of improved sensitivity. Thirteen RDTs detecting either the Plasmodium falciparum histidine-rich protein 2 (HRP2) or the plasmodial lactate dehydrogenase (pLDH) antigens were selected from the best-performing RDTs according to the WHO-FIND product testing programme. The analytical sensitivity of these products was evaluated using a range of reference materials, including P. falciparum and Plasmodium vivax whole-parasite samples as well as recombinant proteins. The best-performing HRP2-based RDTs could detect all P. falciparum cultured samples at concentrations as low as 0.8 ng/mL of HRP2. The limit of detection of the best-performing pLDH-based RDT specifically detecting P. vivax was 25 ng/mL of pLDH. The analytical sensitivity of P. vivax and pan-pLDH-based RDTs appears to vary considerably from product to product, and improvement of the limit of detection for P. vivax-detecting RDTs is needed to match the performance of HRP2- and Pf pLDH-based RDTs for P. falciparum. Different assays using different reference materials produce different values for antigen concentration in a given specimen, highlighting the need to establish universal reference assays.
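One simple convention for an analytical limit of detection like the 0.8 ng/mL figure above is the lowest concentration in a dilution series at which every replicate tests positive. The sketch below uses that convention with synthetic data; both the data and the all-replicates-positive rule are illustrative assumptions, not the WHO-FIND protocol:

```python
def limit_of_detection(results):
    """results maps antigen concentration (ng/mL) to a list of replicate
    RDT readings (True = test line visible). Returns the lowest
    concentration at which every replicate is positive, or None."""
    positive = [conc for conc, reads in results.items() if all(reads)]
    return min(positive) if positive else None

# Synthetic HRP2 dilution series, three replicates per concentration level
dilution_series = {
    100.0: [True, True, True],
    25.0:  [True, True, True],
    0.8:   [True, True, True],
    0.2:   [True, False, False],
}
lod = limit_of_detection(dilution_series)  # → 0.8 ng/mL
```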
Student Facing Dashboards: One Size Fits All?
ERIC Educational Resources Information Center
Teasley, Stephanie D.
2017-01-01
This emerging technology report reviews a new development in educational technology, student-facing dashboards, which provide comparative performance feedback to students calculated by Learning Analytics-based algorithms on data generated from university students' use of educational technology. Instructor- and advisor-facing dashboards emerged as…
Gao, Hongying; Deng, Shibing; Obach, R Scott
2015-12-01
An unbiased scanning methodology using ultra-high-performance liquid chromatography coupled with high-resolution mass spectrometry was used to bank data and plasma samples so that data generated on different dates could be compared. The method was applied to bank data generated earlier in animal samples and then to compare exposure to metabolites in animals versus humans for safety assessment. With neither authentic standards nor prior knowledge of the identities and structures of metabolites, full scans for precursor ions and all-ion fragments (AIF) were employed with a generic gradient LC method to analyze plasma samples at positive and negative polarity, respectively. Of 22 tested drugs and metabolites, 21 analytes were detected using this unbiased scanning method; naproxen was not detected, owing to low sensitivity at negative polarity and interference at positive polarity, and 4'- and 5-hydroxy diclofenac were not separated by the generic UPLC method. Statistical analysis of the peak area ratios of the analytes versus the internal standard in five repetitive analyses over approximately 1 year demonstrated that the analysis variation was significantly different from sample instability. The confidence limits for comparing exposure using peak area ratios of metabolites in animal versus human plasma measured approximately 1 year apart were comparable to those for analyses undertaken side by side on the same days. These statistical results showed that it is feasible to compare data generated on different dates with neither authentic standards nor prior knowledge of the analytes.
NASA Astrophysics Data System (ADS)
Cucu, Daniela; Woods, Mike
2008-08-01
The paper presents a practical approach for testing laboratories to ensure the quality of their test results. It is based on experience gained in assessing a large number of testing laboratories, discussing with management and staff, reviewing results obtained in national and international PTs and ILCs, and exchanging information in the EA laboratory committee. According to EN ISO/IEC 17025, an accredited laboratory has to implement a programme to ensure the quality of its test results for each measurand. Pre-analytical, analytical and post-analytical measures shall be applied in a systematic manner, and shall include both quality control and quality assurance measures. When designing the quality assurance programme, a laboratory should consider pre-analytical activities (such as personnel training, selection and validation of test methods, and qualifying equipment), analytical activities ranging from sampling and sample preparation to instrumental analysis, and post-analytical activities (such as decoding, calculation, use of statistical tests or packages, and management of results). Designed on different levels (analyst, quality manager and technical manager) and including a variety of measures, the programme shall ensure the validity and accuracy of test results and the adequacy of the management system, prove the laboratory's competence in performing tests under accreditation and, not least, show the comparability of test results. Laboratory management should establish performance targets and periodically review QC/QA results against them, implementing appropriate measures in case of non-compliance.
Murray, Ian; Walker, Glenn; Bereman, Michael S
2016-06-20
Two paper-based microfluidic techniques, photolithography and wax patterning, were investigated for their potential to improve the sensitivity, reproducibility, and versatility of paper spray mass spectrometry. The main limitation of photolithography was the significant signal (approximately three orders of magnitude) above background, which was attributed to the chemicals used in the photoresist process. Hydrophobic barriers created via wax patterning were found to have approximately 2 orders of magnitude less background signal than analogous barriers created using photolithography. A minimum printed wax barrier thickness of approximately 0.3 mm was necessary to consistently retain commonly used paper spray solvents (1 : 1 water : acetonitrile/methanol) and avoid leakage. Constricting capillary flow via wax-printed channels yielded significant increases in both signal and detection time for model analytes. This signal increase, attributed to restricting the radial flow of analyte/solvent on paper (i.e., a concentrating effect), afforded a significant increase in sensitivity (p ≪ 0.05) for the detection of pesticides spiked into residential tap water using a five-point calibration curve. Finally, unique mixing designs using wax patterning can be envisioned to perform on-paper analyte derivatization.
Experimental and analytical investigation of a fluidic power generator
NASA Technical Reports Server (NTRS)
Sarohia, V.; Bernal, L.; Beauchamp, R. B.
1981-01-01
A combined experimental and analytical investigation was performed to understand the various fluid processes associated with the conversion of flow energy into electric power in a fluidic generator. Experiments were performed under flight-simulated laboratory conditions and the results were compared with those obtained in free-flight conditions. It is concluded that the mean mass flow critically controlled the output of the fluidic generator. Cross-correlation of the transducer outputs indicates the presence of a standing wave in the tube; the mechanism of oscillation is an acoustic resonance tube phenomenon. A linearized model was constructed coupling the flow behavior of the jet, the jet layer, the tube, the cavity, and the holes of the fluidic generator. The analytical results also show that the mode of the fluidic power generator is an acoustic resonance phenomenon with the frequency of operation given by f ≈ a/4L, where f is the frequency of jet swallowing, a is the average speed of sound in the tube, and L is the length of the tube. Analytical results further indicated that oscillations in the fluidic generator are always damped, and consequently there is a forcing of the system in operation.
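The quarter-wave relation quoted in the abstract, f ≈ a/4L, is straightforward to evaluate. The numbers below are illustrative (air at room temperature, a 10-cm tube), not the paper's test conditions:

```python
def resonance_frequency(speed_of_sound, tube_length):
    """Quarter-wave acoustic resonance: f ≈ a / (4 L)."""
    return speed_of_sound / (4.0 * tube_length)

# Example: air at roughly 20 °C (a ≈ 343 m/s) in a 0.10 m tube
f = resonance_frequency(343.0, 0.10)  # → 857.5 Hz
```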
Strengthening of reinforced concrete beams with basalt-based FRP sheets: An analytical assessment
NASA Astrophysics Data System (ADS)
Nerilli, Francesca; Vairo, Giuseppe
2016-06-01
In this paper the effectiveness of the flexural strengthening of RC beams through basalt fiber-reinforced sheets is investigated. The non-linear flexural response of RC beams strengthened with FRP composites applied at the traction side is described via an analytical formulation. Validation results and some comparative analyses confirm soundness and consistency of the proposed approach, and highlight the good mechanical performances (in terms of strength and ductility enhancement of the beam) produced by basalt-based reinforcements in comparison with traditional glass or carbon FRPs.
Wardak, Cecylia; Grabarczyk, Malgorzata
2016-08-02
A simple, fast and cheap method for monitoring copper and nitrate in drinking water and food products using newly developed solid-contact ion-selective electrodes is proposed. Determination of copper and nitrate was performed by the multiple-standard-additions technique. The reliability of the obtained results was assessed by comparing them with results obtained by anodic stripping voltammetry or spectrophotometry for the same samples. In each case, satisfactory agreement of the results was obtained, which confirms the analytical usefulness of the constructed electrodes.
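The multiple-standard-additions technique mentioned above determines the unknown concentration by extrapolating a fit of response versus added standard to its x-intercept. The sketch below assumes a sensor with a linear response and synthetic numbers; real potentiometric ion-selective electrodes follow a Nernstian (logarithmic) response, so the working equation there differs, and this shows only the extrapolation idea:

```python
import numpy as np

def standard_additions(added, signal):
    """Fit signal = m*added + b; for a linear-response sensor the unknown
    concentration equals the magnitude of the x-intercept, b/m."""
    m, b = np.polyfit(added, signal, 1)
    return b / m

added = np.array([0.0, 1.0, 2.0, 3.0])       # added standard, mg/L
signal = np.array([0.50, 0.75, 1.00, 1.25])  # synthetic, perfectly linear response
c0 = standard_additions(added, signal)       # → 2.0 mg/L
```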
NASA Technical Reports Server (NTRS)
Abel, I.; Newsom, J. R.
1981-01-01
Two flutter suppression control laws were synthesized, implemented, and tested on a low speed aeroelastic wing model of a DC-10 derivative. The methodology used to design the control laws is described. Both control laws demonstrated increases in flutter speed in excess of 25 percent above the passive wing flutter speed. The effect of variations in gain and phase on the closed loop performance was measured and compared with analytical predictions. The analytical results are in good agreement with experimental data.
Status of internal quality control for thyroid hormones immunoassays from 2011 to 2016 in China.
Zhang, Shishi; Wang, Wei; Zhao, Haijian; He, Falin; Zhong, Kun; Yuan, Shuai; Wang, Zhiguo
2018-01-01
Internal quality control (IQC) plays a key role in the evaluation of precision performance in clinical laboratories. This report presents the precision status of thyroid hormone immunoassays from 2011 to 2016 in China. Through the Clinet-EQA reporting system, IQC information for triiodothyronine and thyroxine, in both free and total forms (FT3, TT3, FT4, TT4), as well as thyroid-stimulating hormone (TSH), was collected from participant laboratories submitting IQC data each February from 2011 to 2016. For each analyte, current CVs were compared among years and measurement systems. Percentages of laboratories meeting five allowable imprecision specifications (pass rates) were also calculated. An analysis of IQC practice was conducted to constitute a complete report. Current CVs decreased significantly, but pass rates increased only for FT3 over the 6 years. FT3, TT3, FT4, and TT4 had their highest pass rates against the 1/3 TEa imprecision specification, whereas TSH had its highest pass rate against the minimum imprecision specification derived from biological variation. Constituent ratios of the four mainstream measurement systems changed insignificantly. In 2016, the precision performance of the Abbott and Roche systems was better than that of the Beckman and Siemens systems for all analytes, except that for FT3 Siemens was also better than Beckman. Analysis of IQC practice demonstrated wide variation and great progress in IQC rules and control frequency. Despite changes in IQC practice, only FT3 showed improved precision performance over the 6 years, and the precision status of the five analytes in China remained unsatisfactory. Ongoing investigation and improvement of IQC have yet to be achieved. © 2017 Wiley Periodicals, Inc.
Macro elemental analysis of food samples by nuclear analytical technique
NASA Astrophysics Data System (ADS)
Syahfitri, W. Y. N.; Kurniawati, S.; Adventini, N.; Damastuti, E.; Lestiani, D. D.
2017-06-01
Energy-dispersive X-ray fluorescence (EDXRF) spectrometry is a non-destructive, rapid, multi-elemental, accurate, and environmentally friendly analysis method compared with other detection techniques, and is therefore applicable to food inspection. The macro elements calcium and potassium are important nutrients required by the human body for optimal physiological function, so the Ca and K content of various foods needs to be determined. The aim of this work is to demonstrate the applicability of EDXRF for food analysis. The analytical performance of non-destructive EDXRF was compared with that of two other analytical techniques, neutron activation analysis (NAA) and atomic absorption spectrometry (AAS). The comparison served to cross-check the analysis results and to overcome the limitations of the three methods. Results showed that Ca contents determined by EDXRF and AAS were not significantly different (p = 0.9687), nor were K contents determined by EDXRF and NAA (p = 0.6575). The correlation between the results was also examined: the Pearson correlations for Ca and K were 0.9871 and 0.9558, respectively. Method validation using SRM NIST 1548a Typical Diet was also performed. The results showed good agreement between methods; therefore the EDXRF method can be used as an alternative for the determination of Ca and K in food samples.
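The Pearson correlations reported above can be reproduced with a few lines of NumPy. The paired values below are synthetic stand-ins for the EDXRF/AAS Ca results, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two paired samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Synthetic paired Ca results (mg/100 g) from two methods in close agreement
ca_edxrf = [120.0, 98.5, 143.2, 110.1, 131.7]
ca_aas   = [118.4, 99.9, 141.0, 112.3, 130.2]
r = pearson_r(ca_edxrf, ca_aas)  # close to 1 for well-agreeing methods
```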
Hydrodynamic dispersion in porous media with macroscopic disorder of parameters
NASA Astrophysics Data System (ADS)
Goldobin, D. S.; Maryshev, B. S.
2017-10-01
We present an analytical derivation of the macroscopic hydrodynamic dispersion for flows in porous media with frozen disorder of macroscopic parameters: porosity and permeability. The parameter inhomogeneities generate inhomogeneities of the filtration flow, which perform fluid mixing and, on the large spatial scale, act as an additional effective diffusion (eddy diffusivity or hydrodynamic dispersion). The derivation is performed for the general case, where the only restrictions are (i) the spatial autocorrelation functions of the parameter inhomogeneities decay with the distance r not slower than 1/r^n with n > 1, and (ii) the amplitudes of the inhomogeneities are small compared to the mean values of the parameters. Our analytical findings are confirmed by the results of direct numerical simulation of the transport of a passive scalar in an inhomogeneous filtration flow.
Analytical and numerical performance models of a Heisenberg Vortex Tube
NASA Astrophysics Data System (ADS)
Bunge, C. D.; Cavender, K. A.; Matveev, K. I.; Leachman, J. W.
2017-12-01
Analytical and numerical investigations of a Heisenberg Vortex Tube (HVT) are performed to estimate the cooling potential with cryogenic hydrogen. The Ranque-Hilsch Vortex Tube (RHVT) is a device that tangentially injects a compressed fluid stream into a cylindrical geometry to promote enthalpy streaming and temperature separation between inner and outer flows. The HVT is the result of lining the inside of a RHVT with a hydrogen catalyst. This is the first concept to utilize the endothermic heat of para-orthohydrogen conversion to aid primary cooling. A review of first-order vortex tube models available in the literature is presented and adapted to accommodate cryogenic hydrogen properties. These first-order model predictions are compared with 2-D axisymmetric Computational Fluid Dynamics (CFD) simulations.
Statistical evaluation of forecasts
NASA Astrophysics Data System (ADS)
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
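The analytic random-predictor benchmark described above works because a predictor that is "on alarm" a fraction q of the time detects each of N independent events with probability q, so its hit count is Binomial(N, q). A hedged sketch of the resulting significance calculation follows; the variable names and illustrative numbers are ours, not the paper's:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def random_predictor_pvalue(hits, n_events, alarm_fraction):
    """Probability that a random predictor, on alarm a fraction q of the
    time, predicts at least as many of the events as the method did."""
    return binom_sf(hits, n_events, alarm_fraction)

# A method that predicted 8 of 10 events while on alarm 20% of the time:
# very unlikely for a random predictor, so the method beats chance.
p = random_predictor_pvalue(hits=8, n_events=10, alarm_fraction=0.2)
```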
Dervisevic, Muamer; Senel, Mehmet; Sagir, Tugba; Isik, Sevim
2017-04-15
The detection of cancer cells through important molecular recognition targets such as sialic acid is significant for clinical diagnosis and treatment. Many electrochemical cytosensors have been developed for cancer cell detection, but most have complicated fabrication processes that result in poor reproducibility and reliability. In this study, a simple, low-cost, and highly sensitive electrochemical cytosensor was designed based on boronic acid-functionalized polythiophene. Fabrication used a simple single-step procedure: coating a pencil graphite electrode (PGE) by electro-polymerization of 3-thienylboronic acid and thiophene. Electrochemical impedance spectroscopy and cyclic voltammetry were used as analytical methods to optimize and measure the analytical performance of the PGE/P(TBA0.5-Th0.5)-based electrode. The cytosensor showed very good analytical performance in the detection of cancer cells, with a linear range of 1×10^1 to 1×10^6 cells/mL, a low detection limit of 10 cells/mL, and an incubation time of 10 min. In addition to its excellent analytical performance, it showed high selectivity towards AGS cancer cells compared to HEK 293 normal cells and bone marrow mesenchymal stem cells (BM-hMSCs). This method is promising for future applications in early-stage cancer diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Parvin, C A
1993-03-01
The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
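As a concrete illustration of the kind of closed-form result available within a single run, the power of a simple mean rule can be written directly with the normal CDF (a generic sketch, not the paper's exact rule set): with n control observations and a systematic shift delta in SD units, the run is flagged when the control mean falls outside ±limit.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mean_rule_power(delta, limit, n=2):
    """Probability that the mean of n control observations exceeds
    +/- limit (in SD units) given a systematic shift delta.
    The mean is N(delta, 1/sqrt(n)), so no simulation is needed."""
    se = 1.0 / math.sqrt(n)
    return phi((-limit - delta) / se) + 1.0 - phi((limit - delta) / se)
```

With delta = 0 this gives the false-rejection rate, and plotting it against delta traces a power function of the sort summarized graphically in the study.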
NASA Technical Reports Server (NTRS)
Tegart, J. R.; Aydelott, J. C.
1978-01-01
The design of surface tension propellant acquisition systems using fine-mesh screen must take into account all factors that influence the liquid pressure differentials within the system. One of those factors is spacecraft vibration. Analytical models to predict the effects of vibration have been developed. A test program to verify the analytical models and to allow a comparative evaluation of the parameters influencing the response to vibration was performed. Screen specimens were tested under conditions simulating the operation of an acquisition system, considering the effects of such parameters as screen orientation and configuration, screen support method, screen mesh, liquid flow and liquid properties. An analytical model, based on empirical coefficients, was most successful in predicting the effects of vibration.
Analytical solution of groundwater flow in a sloping aquifer with stream-aquifer interaction.
NASA Astrophysics Data System (ADS)
Liu, X.; Zhan, H.
2017-12-01
This poster presents a new analytical solution for studying water exchange, hydraulic-head distribution, and water flow in a stream-unconfined aquifer interaction system with a sloping bed and streams of varying stage, in the presence of two thin vertical sedimentary layers. The formation of a clogging bed of fine-grained sediments allows the interfaces between the sloping aquifer and the two rivers to be treated as third-kind (Cauchy) boundary conditions. A numerical solution of the corresponding nonlinear Boussinesq equation is also developed to benchmark the performance of the analytical solution. The effects of precipitation recharge, bed slope, and the stage-variation rate of the two rivers on water flow in the sloping aquifer are discussed in the results.
The Development of Proofs in Analytical Mathematics for Undergraduate Students
NASA Astrophysics Data System (ADS)
Ali, Maselan; Sufahani, Suliadi; Hasim, Nurnazifa; Saifullah Rusiman, Mohd; Roslan, Rozaini; Mohamad, Mahathir; Khalid, Kamil
2018-04-01
Proofs in analytical mathematics are an essential part of mathematics, yet difficult to learn because their underlying concepts are not visible. This research consists of problems involving logic and proofs. In this study, a short overview was provided of how proofs in analytical mathematics are used by university students. From the results obtained, excellent students achieved better scores than average and poor students. The research instruments used in this study consisted of two parts, a test and an interview, so that an analysis of students' actual performance could be obtained. The results showed that the less able students had fragile conceptual and cognitive linkages, whereas the more able students used their strong conceptual linkages to produce effective solutions.
NASA Technical Reports Server (NTRS)
Dayton, J. A., Jr.; Kosmahl, H. G.; Ramins, P.; Stankiewicz, N.
1979-01-01
Experimental and analytical results are compared for two high performance, octave bandwidth TWT's that use depressed collectors (MDC's) to improve the efficiency. The computations were carried out with advanced, multidimensional computer programs that are described here in detail. These programs model the electron beam as a series of either disks or rings of charge and follow their multidimensional trajectories from the RF input of the ideal TWT, through the slow wave structure, through the magnetic refocusing system, to their points of impact in the depressed collector. Traveling wave tube performance, collector efficiency, and collector current distribution were computed and the results compared with measurements for a number of TWT-MDC systems. Power conservation and correct accounting of TWT and collector losses were observed. For the TWT's operating at saturation, very good agreement was obtained between the computed and measured collector efficiencies. For a TWT operating 3 and 6 dB below saturation, excellent agreement between computed and measured collector efficiencies was obtained in some cases but only fair agreement in others. However, deviations can largely be explained by small differences in the computed and actual spent beam energy distributions. The analytical tools used here appear to be sufficiently refined to design efficient collectors for this class of TWT. However, for maximum efficiency, some experimental optimization (e.g., collector voltages and aperture sizes) will most likely be required.
Saraji, Mohammad; Ghambari, Hoda
2018-06-21
In this work we seek clues to select the appropriate dispersive liquid-liquid microextraction mode for extracting three categories of compounds. For this purpose, three common dispersive liquid-liquid microextraction modes were compared under optimized conditions. Traditional dispersive liquid-liquid microextraction, in situ ionic liquid dispersive liquid-liquid microextraction and conventional ionic liquid dispersive liquid-liquid microextraction using chloroform, 1-butyl-3-methylimidazolium tetrafluoroborate, and 1-hexyl-3-methylimidazolium hexafluorophosphate as the extraction solvent, respectively, were considered in this work. Phenolic, neutral aromatic and amino compounds (each category included six members) were studied as analytes. The analytes in the extracts were determined by high-performance liquid chromatography with UV detection. For the analytes with polar functionalities, the in situ ionic liquid dispersive liquid-liquid microextraction mode mostly led to better results. In contrast, for neutral hydrocarbons without polar functionalities, traditional dispersive liquid-liquid microextraction using chloroform produced better results. In this case, where dispersion forces were the dominant interactions in the extraction, the refractive index of solvent and analyte predicted the extraction performance better than the octanol-water partition coefficient. It was also revealed that none of the methods were successful in extracting very hydrophilic analytes (compounds with the log octanol-water partition coefficient < 2). The results of this study could be helpful in selecting a dispersive liquid-liquid microextraction mode for the extraction of various groups of compounds. This article is protected by copyright. All rights reserved.
Evaluation of FUS-2000 urine analyzer: analytical properties and particle recognition.
Beňovská, Miroslava; Wiewiorka, Ondřej; Pinkavová, Jana
This study evaluates the performance of the microscopic part of the hybrid analyzer FUS-2000 (Dirui Industrial Co., Changchun, China), its analytical properties, and its particle recognition. The evaluation of trueness, repeatability, detection limit, carry-over, linearity range, and analytical stability was performed according to the Dirui protocol guidelines designed by the Dirui Company to guarantee the quality of the instrument. Trueness for low, medium, and high concentrations was calculated with biases of 15.5, 4.7, and -6.6%, respectively. A detection limit of 5 Ery/μl was confirmed. Coefficients of variation of 11.0, 5.2, and 3.8% were measured for within-run repeatability at low, medium, and high concentration. Between-run repeatability for daily quality control had a coefficient of variation of 3.0%. Carry-over did not exceed 0.05%. Linearity was confirmed over the range 0-16,000 particles/μl (R² = 0.9997). Analytical stability had a coefficient of variation of 4.3%. Of 1258 analyzed urine samples, the 362 positive samples were subjected to light-microscopy urine sediment analysis and compared with the analyzer results. Cohen's kappa coefficients were calculated to express the concordance. Squared (weighted) kappa coefficients were 0.927 (red blood cells), 0.888 (white blood cells), 0.908 (squamous epithelia), 0.634 (transitional epithelia), 0.628 (hyaline casts), 0.843 (granular casts), and 0.623 (bacteria). Single (unweighted) kappa coefficients were 0.885 (yeasts) and 0.756 (crystals), respectively. These results show good analytical performance of the analyzer and close agreement with light microscopy of urine sediment.
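The unweighted kappa used above corrects observed agreement for the agreement expected by chance; a generic implementation (with a made-up 2×2 agreement table, since the study's own tables are not reproduced here) looks like this:

```python
def cohens_kappa(table):
    """Unweighted Cohen's kappa for a square agreement table,
    where table[i][j] counts items rated i by method A and j by method B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n            # observed agreement
    row = [sum(table[i]) / n for i in range(k)]            # A's marginals
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]  # B's marginals
    pe = sum(row[i] * col[i] for i in range(k))            # chance agreement
    return (po - pe) / (1.0 - pe)
```

A weighted variant would additionally down-weight near-miss disagreements, which is why ordinal categories such as cell counts are often reported with weighted kappa.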
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparison of the assessment accuracies of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed better than the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm, a very significant improvement on previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be expected if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
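The NRMSE figure used for the comparison can be computed as below (a sketch; normalization conventions vary, and here RMSE is divided by the observed range):

```python
import math

def nrmse(pred, obs):
    """Root mean square error normalized by the observed range,
    one common convention for the 'N' in NRMSE."""
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))
    return rmse / (max(obs) - min(obs))
```

Comparing two retrieval algorithms then reduces to comparing their NRMSE values on the same validation set, as done for the TSA, FSA, and UMSA algorithms.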
NASA Astrophysics Data System (ADS)
Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.
2018-06-01
In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
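The analytical benchmark mentioned, the Lucas-Washburn equation, gives the imbibition length for a constant contact angle; a minimal sketch (gravity neglected, with illustrative parameter values chosen here, not taken from the paper):

```python
import math

def lucas_washburn_length(t, r, gamma, mu, theta_deg):
    """Lucas-Washburn imbibition length l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))
    for a capillary of radius r, surface tension gamma, viscosity mu,
    contact angle theta, at time t (SI units, gravity neglected)."""
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t / (2.0 * mu))
```

The characteristic square-root-of-time behavior, l(4t) = 2 l(t), is what a constant-contact-angle simulation should reproduce; the dynamic-contact-angle correction in the paper departs from it at early times.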
Analytical Model of Large Data Transactions in CoAP Networks
Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.
2014-01-01
We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge this is the first study that evaluates and compares analytically the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143
How Can Visual Analytics Assist Investigative Analysis? Design Implications from an Evaluation.
Youn-Ah Kang; Görg, Carsten; Stasko, John
2011-05-01
Despite the growing number of systems providing visual analytic support for investigative analysis, few empirical studies of the potential benefits of such systems have been conducted, particularly controlled, comparative evaluations. Determining how such systems foster insight and sensemaking is important for their continued growth and study, however. Furthermore, studies that identify how people use such systems and why they benefit (or not) can help inform the design of new systems in this area. We conducted an evaluation of the visual analytics system Jigsaw employed in a small investigative sensemaking exercise, and compared its use to three other more traditional methods of analysis. Sixteen participants performed a simulated intelligence analysis task under one of the four conditions. Experimental results suggest that Jigsaw assisted participants to analyze the data and identify an embedded threat. We describe different analysis strategies used by study participants and how computational support (or the lack thereof) influenced the strategies. We then illustrate several characteristics of the sensemaking process identified in the study and provide design implications for investigative analysis tools based thereon. We conclude with recommendations on metrics and techniques for evaluating visual analytics systems for investigative analysis.
Sensitivity of echo enabled harmonic generation to sinusoidal electron beam energy structure
Hemsing, E.; Garcia, B.; Huang, Z.; ...
2017-06-19
Here, we analytically examine the bunching factor spectrum of a relativistic electron beam with sinusoidal energy structure that then undergoes an echo-enabled harmonic generation (EEHG) transformation to produce high harmonics. The performance is found to be described primarily by a simple scaling parameter. The dependence of the bunching amplitude on fluctuations of critical parameters is derived analytically and compared with simulations. Where applicable, EEHG is also compared with high gain harmonic generation (HGHG), and we find that EEHG is generally less sensitive to several types of energy structure. In the presence of intermediate-frequency modulations like those produced by the microbunching instability, EEHG has a substantially narrower intrinsic bunching pedestal.
Evaluating supplier quality performance using analytical hierarchy process
NASA Astrophysics Data System (ADS)
Kalimuthu Rajoo, Shanmugam Sundram; Kasim, Maznah Mat; Ahmad, Nazihah
2013-09-01
This paper elaborates the importance of evaluating supplier quality performance to an organization. Supplier quality performance evaluation reflects the actual performance of the supplier exhibited at the customer's end. It is critical in enabling the organization to determine areas for improvement and then work with the supplier to close the gaps. The customer's success partly depends on the supplier's quality performance. Key criteria such as quality, cost, delivery, technology support, and customer service are categorized as the main factors contributing to supplier quality performance. Eighteen suppliers manufacturing automotive application parts were evaluated in 2010 using a weighted-point system. Several suppliers received identical ratings, which led to tied rankings. The Analytic Hierarchy Process (AHP), a user-friendly decision-making tool for complex, multi-criteria problems, was therefore used to evaluate the suppliers' quality performance against the weighted-point system applied to the 18 suppliers. The consistency ratio was checked for criteria and sub-criteria. The final AHP results contained no overlapping ratings, and AHP therefore yielded a better decision-making methodology than the weighted-point rating system.
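AHP's core computation, deriving priority weights from a pairwise comparison matrix and checking Saaty's consistency ratio, can be sketched as follows (an illustrative 3×3 matrix; the paper's actual criteria and judgments are not reproduced):

```python
def ahp_weights(M, iters=100):
    """Priority weights via power iteration on a pairwise comparison
    matrix M, plus Saaty's consistency ratio CR = CI / RI."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]                 # normalized principal eigenvector
    # principal eigenvalue estimate lambda_max
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    ci = (lam - n) / (n - 1)
    return w, (ci / ri if ri else 0.0)
```

A CR below 0.10 is conventionally taken to mean the pairwise judgments are acceptably consistent, which is the check the authors report for criteria and sub-criteria.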
Active member control of a precision structure with an H(infinity) performance objective
NASA Technical Reports Server (NTRS)
Fanson, J. L.; Chu, C.-C.; Smith, R. S.; Anderson, E. H.
1990-01-01
This paper addresses the noncollocated control of active structures using active structural elements. A top-level architecture for active structures is presented, and issues pertaining to robust control of structures are discussed. Controllers optimized for an H∞ performance specification are implemented on a test structure and the results are compared with analytical predictions. Directions for further research are identified.
Huynh, Benjamin Q; Li, Hui; Giger, Maryellen L
2016-07-01
Convolutional neural networks (CNNs) show potential for computer-aided diagnosis (CADx) by learning features directly from the image data instead of using analytically extracted features. However, CNNs are difficult to train from scratch for medical images due to small sample sizes and variations in tumor presentations. Instead, transfer learning can be used to extract tumor information from medical images via CNNs originally pretrained for nonmedical tasks, alleviating the need for large datasets. Our database includes 219 breast lesions (607 full-field digital mammographic images). We compared support vector machine classifiers based on the CNN-extracted image features and our prior computer-extracted tumor features in the task of distinguishing between benign and malignant breast lesions. Five-fold cross validation (by lesion) was conducted with the area under the receiver operating characteristic (ROC) curve as the performance metric. Results show that classifiers based on CNN-extracted features (with transfer learning) perform comparably to those using analytically extracted features, as measured by the area under the ROC curve.
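The AUC metric used here equals the probability that a randomly chosen malignant lesion scores higher than a randomly chosen benign one (the Mann-Whitney statistic); a minimal sketch with made-up scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Comparing two feature sets on the same cross-validation folds then reduces to comparing their AUC values, as done for the CNN-extracted and analytically extracted features.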
Dervisevic, Muamer; Şenel, Mehmet; Sagir, Tugba; Isik, Sevim
2017-05-15
A comparative study is reported in which folic acid (FA)- and boronic acid (BA)-based cytosensors and their analytical performance in cancer cell detection were analyzed using electrochemical impedance spectroscopy (EIS). The cytosensors were fabricated on the self-assembled monolayer principle by modifying an Au electrode with cysteamine (Cys) and immobilizing second-generation ferrocene-cored polyamidoamine dendrimers (Fc-PAMAM(G2)), after which the electrodes were modified with FA and BA. The Au/Fc-PAMAM(G2)/FA and Au/Fc-PAMAM(G2)/BA cytosensors showed very good analytical performance in cancer cell detection, with a linear range of 1×10² to 1×10⁶ cells ml⁻¹; the detection limit was 20 cells ml⁻¹ with an incubation time of 20 min for the FA-based electrode, and 28 cells ml⁻¹ with an incubation time of 10 min for the BA-based electrode. In addition to their excellent analytical performance, the cytosensors showed high selectivity towards cancer cells, as demonstrated in a selectivity study using human embryonic kidney 293 (HEK 293) cells as normal cells; the Au/Fc-PAMAM(G2)/FA electrode showed two times better selectivity than the BA-modified electrode. These cytosensors are promising for future applications in cancer cell diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
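The modeling difficulty the authors describe can be illustrated with a deliberately crude sketch: a single disk treated as an M/M/1 queue, and the fork-join response approximated by scaling with the k-th harmonic number (a textbook heuristic for the maximum of independent exponentials; this is not the validated model developed in the paper, which accounts for queueing and synchronization more carefully).

```python
def mm1_response(lmbda, mu):
    """Mean response time of an M/M/1 queue with arrival rate lmbda
    and service rate mu (requires lmbda < mu for stability)."""
    return 1.0 / (mu - lmbda)

def forkjoin_response_approx(lmbda, mu, k):
    """Crude fork-join approximation: a request completes when all k
    sub-requests do, so scale the single-queue response by the k-th
    harmonic number, as for the max of k independent exponentials."""
    h_k = sum(1.0 / i for i in range(1, k + 1))
    return h_k * mm1_response(lmbda, mu)
```

The gap between such heuristics and measured behavior is exactly why the paper validates each approximation in its model by simulation before using it to optimize the striping unit.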
Numerical Modeling of Pulse Detonation Rocket Engine Gasdynamics and Performance
NASA Technical Reports Server (NTRS)
Morris, C. I.
2003-01-01
Pulse detonation engines (PDEs) have generated considerable research interest in recent years as a chemical propulsion system potentially offering improved performance and reduced complexity compared to conventional gas turbines and rocket engines. The detonative mode of combustion employed by these devices offers a theoretical thermodynamic advantage over the constant-pressure deflagrative combustion mode used in conventional engines. However, the unsteady blowdown process intrinsic to all pulse detonation devices has made realistic estimates of the actual propulsive performance of PDEs problematic. The recent review article by Kailasanath highlights some of the progress that has been made in comparing the available experimental measurements with analytical and numerical models.
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...
2011-01-01
With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver; (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver; (iii) we show the excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations.
All of our results are validated on diverse heterogeneous multiclusters, platforms for which performance prediction is particularly challenging. Finally, we predict the scalability of the Spike algorithm up to 65,536 cores using our model. In this paper we extend the results presented at the Ninth International Symposium on Parallel and Distributed Computing.
Lísa, Miroslav; Cífková, Eva; Khalikova, Maria; Ovčačíková, Magdaléna; Holčapek, Michal
2017-11-24
Lipidomic analysis of biological samples in clinical research is a challenging task for analytical methods, given the large number of samples and their extreme complexity. In this work, we compare direct infusion (DI) and chromatography-mass spectrometry (MS) lipidomic approaches, represented by three analytical methods, in terms of comprehensiveness, sample throughput, and validation results for the lipidomic analysis of biological samples: tumor tissue, surrounding normal tissue, plasma, and erythrocytes of kidney cancer patients. The methods are compared in one laboratory using an identical analytical protocol to ensure comparable conditions. An ultrahigh-performance liquid chromatography/MS (UHPLC/MS) method in hydrophilic interaction liquid chromatography mode and a DI-MS method, the most widely used methods for lipidomic analysis, are used for this comparison, together with an ultrahigh-performance supercritical fluid chromatography/MS (UHPSFC/MS) method that has shown promising results in metabolomic analyses. Nontargeted analysis of pooled samples was performed using all tested methods, and 610 lipid species within 23 lipid classes were identified. The DI method provides the most comprehensive results owing to the identification of some polar lipid classes that are not identified by the UHPLC and UHPSFC methods. On the other hand, the UHPSFC method provides excellent sensitivity for less polar lipid classes and the highest sample throughput, with a 10 min method time. The sample consumption of the DI method is 125 times higher than that of the other methods, although only 40 μL of organic solvent is used per sample analysis, compared with 3.5 mL and 4.9 mL for the UHPLC and UHPSFC methods, respectively. The methods were validated for the quantitative lipidomic analysis of plasma samples with one internal standard for each lipid class. The results show the applicability of all tested methods for the lipidomic analysis of biological samples, depending on the analysis requirements.
Copyright © 2017 Elsevier B.V. All rights reserved.
Almeida, C; Nogueira, J M F
2014-06-27
In the present work, the development of an analytical methodology combining bar adsorptive microextraction with micro-liquid desorption followed by high-performance liquid chromatography-diode array detection (BAμE-μLD/HPLC-DAD) is proposed for the determination of trace levels of four parabens (methyl, ethyl, propyl, and butyl paraben) in real matrices. Comparing six polymer (P1, P2, P3, P4, P5 and P6) and five activated carbon (AC1, AC2, AC3, AC4 and AC5) coatings for BAμE, AC2 exhibited much higher selectivity and efficiency than all the other sorbent phases tested, even when compared with commercial stir bar sorptive extraction with polydimethylsiloxane. Assays performed with BAμE(AC2, 1.7 mg) on 25 mL of ultrapure water samples spiked at the 8.0 μg/L level yielded recoveries ranging from 85.6±6.3% to 100.6±11.8% under optimized experimental conditions. The analytical performance also showed convenient limits of detection (0.1 μg/L) and quantification (0.3 μg/L), as well as good linear dynamic ranges (0.5-28.0 μg/L) with remarkable determination coefficients (r² > 0.9982). Excellent repeatability was achieved in intraday (RSD < 10.2%) and interday (RSD < 10.0%) assays. By downsizing the analytical device to half-length (BAμE(AC2, 0.9 mg)), similar analytical data were obtained for the four parabens under optimized experimental conditions, showing that this analytical technology can be designed to operate with lower volumes of sample and desorption solvent, thus increasing sensitivity and effectiveness. Application of the proposed analytical approach, using the standard-addition methodology, to tap, underground, estuarine, swimming pool, and waste water samples, as well as to commercial cosmetic products and urine samples, revealed good sensitivity, an absence of matrix effects, and the occurrence of some parabens at measurable levels.
Moreover, the present methodology is easy to implement, reliable, and sensitive, requires little sample and a minimal volume of desorption solvent, and offers the possibility of tuning the most selective sorbent coating according to the target compounds involved. Copyright © 2014 Elsevier B.V. All rights reserved.
Oftedal, O T; Eisert, R; Barrell, G K
2014-01-01
Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. 
Some other alternative methods (low-temperature drying for water determination; the Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing-sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Taher, K. A.; Majumder, S. P.
2017-05-01
An analytical approach is developed to find the effect of cross-polarization (XPol)-induced crosstalk on the bit error rate (BER) performance of a polarization division multiplexed (PDM) quadrature phase shift keying (QPSK) optical transmission system with a polarization diversity receiver. Analytical expressions for the XPol-induced crosstalk and the signal-to-crosstalk-plus-noise ratio (SCNR) are developed at the output of the polarization diversity PDM-QPSK coherent optical homodyne receiver, conditioned on a given value of the mean misalignment angle. Assuming a Maxwellian distribution for the pdf of the misalignment angle, the average SCNR and average BER are derived. Results show significant deterioration in BER performance, and a power penalty, due to XPol-induced crosstalk. Signal power penalties are found to be 8.85 dB, 11.28 dB, and 12.59 dB for LO laser powers of -10 dBm, -5 dBm, and 0 dBm, respectively, at a data rate of 100 Gbps, a mean misalignment angle of 7.5 degrees, and a BER of 10⁻⁹, compared to the signal power without crosstalk.
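The way crosstalk enters such an analysis can be sketched in a much-simplified form (ideal Gray-coded coherent QPSK, with crosstalk treated as additive Gaussian noise; the paper's full Maxwellian averaging over the misalignment angle is not reproduced here):

```python
import math

def qpsk_ber(snr_linear):
    """Per-bit BER of ideal Gray-coded coherent QPSK:
    0.5 * erfc(sqrt(Eb/N0)), with the SNR given as a linear ratio."""
    return 0.5 * math.erfc(math.sqrt(snr_linear))

def scnr(snr_linear, signal_to_crosstalk_linear):
    """Signal-to-(crosstalk + noise) ratio when the noise and crosstalk
    powers simply add (a common simplifying assumption)."""
    return 1.0 / (1.0 / snr_linear + 1.0 / signal_to_crosstalk_linear)
```

At any fixed SNR, replacing SNR with SCNR raises the BER, and the extra signal power needed to restore the target BER is the power penalty reported above.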
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable loss, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with the number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without. However, not all redundant component capacity improves system resilience; the effectiveness of the redundancy depends on where the redundant capacity is located. PMID:28545135
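The max-flow building blocks are simple for the two pure structures: a series system is limited by its bottleneck component, while parallel capacities add. A minimal sketch, with Zobel's resilience measure written for the commonly assumed triangular loss profile (the exact loss profile used in the paper is not restated in the abstract):

```python
def series_capacity(caps):
    """Max flow through components in series: the bottleneck capacity."""
    return min(caps)

def parallel_capacity(caps):
    """Max flow through components in parallel: capacities add."""
    return sum(caps)

def zobel_resilience(loss_fraction, recovery_time, horizon):
    """Zobel-style resilience R = 1 - X*T / (2*T*), where X is the
    fractional performance (max-flow) loss, T the recovery time, and
    T* the observation horizon; a triangular loss profile is assumed."""
    return 1.0 - loss_fraction * recovery_time / (2.0 * horizon)

# Losing one of three identical parallel components of capacity 10:
full = parallel_capacity([10, 10, 10])          # 30
degraded = parallel_capacity([10, 10])          # 20
r = zobel_resilience((full - degraded) / full,  # X = 1/3
                     recovery_time=3.0, horizon=30.0)
```

For general networks (the road example), the same measure applies with the max flow computed by any standard algorithm instead of `min`/`sum`.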
Tague, Lauren; Wiggs, Justin; Li, Qianxi; McCarter, Robert; Sherwin, Elizabeth; Weinberg, Jacqueline; Sable, Craig
2018-05-17
Left ventricular hypertrophy (LVH) is a common finding on pediatric electrocardiography (ECG), leading to many referrals for echocardiography (echo). This study utilizes a novel analytics tool that combines ECG and echo databases to evaluate ECG as a screening tool for LVH. A SQL Server 2012 data warehouse incorporated ECG and echo databases for all patients from a single institution from 2006 to 2016. Customized queries identified patients 0-18 years old with LVH on ECG and an echo performed within 24 h. Using data visualization (Tableau) and analytic (Stata 14) software, ECG and echo findings were compared. Of 437,699 encounters, 4637 met inclusion criteria. ECG had high sensitivity (≥ 90%) but poor specificity (43%) and low positive predictive value (< 20%) for echo abnormalities. ECG performed only 11-22% better than chance (AROC = 0.50). 83% of subjects with LVH on ECG had normal left ventricle (LV) structure and size on echo. African-Americans with LVH were least likely to have an abnormal echo. There was a low correlation between the R wave in V6 on ECG and the echo-derived Z scores of LV diastolic diameter (r = 0.14) and LV mass index (r = 0.24). The data analytics client was able to mine a database of ECG and echo reports, comparing LVH by ECG against LV measurements and qualitative findings by echo, and identified an abnormal LV by echo in only 17% of cases with LVH on ECG. This novel tool is useful for rapid data mining for both clinical and research endeavors.
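The screening statistics reported above follow directly from a 2x2 confusion table of ECG result versus echo outcome. A small sketch with hypothetical counts chosen to mirror the reported pattern (high sensitivity, ~43% specificity, PPV below 20%); the counts are illustrative, not the study's:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from a
    2x2 table: rows = screening ECG (+/-), columns = echo (+/-)."""
    sensitivity = tp / (tp + fn)   # abnormal echoes caught by ECG
    specificity = tn / (tn + fp)   # normal echoes correctly cleared
    ppv = tp / (tp + fp)           # ECG positives that are true LVH
    return sensitivity, specificity, ppv

# Hypothetical counts: LVH is rare among ECG-screened patients.
sens, spec, ppv = screening_metrics(tp=90, fp=400, fn=10, tn=300)
```

With a low-prevalence target condition, even a sensitive test yields a low PPV, which is exactly the pattern the study found.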
Investigating the Mobility of Trilayer Graphene Nanoribbon in Nanoscale FETs
NASA Astrophysics Data System (ADS)
Rahmani, Meisam; Ghafoori Fard, Hassan; Ahmadi, Mohammad Taghi; Rahbarpour, Saeideh; Habibiyan, Hamidreza; Varmazyari, Vali; Rahmani, Komeil
2017-10-01
The aim of the present paper is to investigate the scaling behavior of charge carrier mobility, one of the most important characteristics for modeling nanoscale field-effect transistors (FETs). Many research groups in academia and industry are contributing to model development and experimental characterization of multi-layer graphene FET-based devices. The present work provides an analytical model for the carrier mobility of trilayer graphene nanoribbon (TGN) FETs. Analytical models of TGN carrier velocity and ballistic conductance are derived first. A charge carrier mobility model with a numerical solution is then derived analytically for the TGN FET, highlighting its dependence on carrier concentration, temperature, and channel length. Moreover, the variation of band gap and gate voltage during device operation, and its effect on carrier mobility, is investigated. To evaluate nanoscale FET performance, the carrier mobility model is also used to obtain the I-V characteristics of the device. To verify the accuracy of the proposed analytical model for TGN mobility, it is compared with existing experimental data, and satisfactory agreement is found under comparable ambient conditions. The proposed model is also compared with published data for single-layer and bi-layer graphene; the results demonstrate the importance of charge carrier mobility in high-performance TGN FETs. The work presented here is one step towards an applicable model for real-world nanoscale FETs.
Analysis of Advanced Rotorcraft Configurations
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2000-01-01
Advanced rotorcraft configurations are being investigated with the objectives of identifying vehicles that are larger, quieter, and faster than current-generation rotorcraft. A large rotorcraft, carrying perhaps 150 passengers, could do much to alleviate airport capacity limitations, and a quiet rotorcraft is essential for community acceptance of the benefits of VTOL operations. A fast, long-range, long-endurance rotorcraft, notably the tilt-rotor configuration, will improve rotorcraft economics through productivity increases. A major part of the investigation of advanced rotorcraft configurations consists of conducting comprehensive analyses of vehicle behavior for the purpose of assessing vehicle potential and feasibility, and establishing the analytical models required to support vehicle development. The analytical work of FY99 included applications to tilt-rotor aircraft. Tilt Rotor Aeroacoustic Model (TRAM) wind tunnel measurements are being compared with calculations performed using the comprehensive analysis tool CAMRAD II (Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics). The objective is to establish the wing and wake aerodynamic models that are required for tilt-rotor analysis and design. The TRAM test in the German-Dutch Wind Tunnel (DNW) produced extensive measurements. This is the first test to encompass air loads, performance, and structural load measurements on tilt rotors, as well as acoustic and flow visualization data. The correlation of measurements and calculations includes helicopter-mode operation (performance, air loads, and blade structural loads), hover (performance and air loads), and airplane-mode operation (performance).
NASA Technical Reports Server (NTRS)
1974-01-01
Technical information is presented covering the areas of: (1) analytical instrumentation useful in the analysis of physical phenomena; (2) analytical techniques used to determine the performance of materials; and (3) systems and component analyses for design and quality control.
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
The electric rail gun for space propulsion
NASA Technical Reports Server (NTRS)
Bauer, D. P.; Barber, J. P.; Vahlberg, C. J.
1981-01-01
An analytic feasibility investigation of an electric propulsion concept for space application is described. In this concept, quasistatic thrust, arising from the inertial reaction to pellets repetitively accelerated by an electric rail gun, is used to propel a spacecraft. The study encompasses the major subsystems required in an electric rail gun propulsion system. The mass, performance, and configuration of each subsystem are described. Based on an analytic model of the system mass and performance, the electric rail gun mission performance as a reusable orbital transfer vehicle (OTV) is analyzed and compared to a 30 cm ion thruster system (BIMOD) and a chemical propulsion system (IUS) for payloads with masses of 1150 kg and 2300 kg. For system power levels in the range from 25 kW(e) to 100 kW(e), an electric rail gun OTV is more attractive than a BIMOD system for low Earth orbit to geosynchronous orbit transfer durations in the range from 20 to 120 days.
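The quasistatic-thrust idea reduces to momentum bookkeeping: the time-averaged reaction force equals the momentum imparted per pellet times the firing rate. A minimal sketch (the pellet mass, muzzle velocity, and firing rate below are illustrative assumptions, not values from the study):

```python
def average_thrust(pellet_mass_kg, muzzle_velocity_m_s, shots_per_s):
    """Time-averaged reaction thrust from repetitive pellet launch:
    F = m * v * f (momentum per shot times firing rate)."""
    return pellet_mass_kg * muzzle_velocity_m_s * shots_per_s

def specific_impulse(muzzle_velocity_m_s, g0=9.80665):
    """Effective specific impulse if pellets are the only expelled mass:
    Isp = v / g0."""
    return muzzle_velocity_m_s / g0

# Illustrative numbers: 1 g pellets at 5 km/s, 10 shots per second.
thrust_n = average_thrust(0.001, 5000.0, 10.0)   # newtons
isp_s = specific_impulse(5000.0)                 # seconds
```

This is why such systems trade off with ion thrusters: muzzle velocity sets the effective specific impulse, while firing rate and pellet mass set the thrust level.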
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gittens, Alex; Devarakonda, Aditya; Racah, Evan
We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely-used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6 TB particle physics, 2.2 TB and 16 TB climate modeling, and 1.1 TB bioimaging data. The data matrices are tall-and-skinny, which enables the algorithms to map conveniently onto Spark's data-parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.
Transport composite fuselage technology: Impact dynamics and acoustic transmission
NASA Technical Reports Server (NTRS)
Jackson, A. C.; Balena, F. J.; Labarge, W. L.; Pei, G.; Pitman, W. A.; Wittlin, G.
1986-01-01
A program was performed to develop and demonstrate the impact dynamics and acoustic transmission technology for a composite fuselage which meets the design requirements of a 1990 large transport aircraft without substantial weight and cost penalties. The program developed the analytical methodology for the prediction of acoustic transmission behavior of advanced composite stiffened shell structures. The methodology predicted that the interior noise level in a composite fuselage due to turbulent boundary layer will be less than in a comparable aluminum fuselage. The verification of these analyses will be performed by NASA Langley Research Center using a composite fuselage shell fabricated by filament winding. The program also developed analytical methodology for the prediction of the impact dynamics behavior of lower fuselage structure constructed with composite materials. Development tests were performed to demonstrate that the composite structure designed to the same operating load requirement can have at least the same energy absorption capability as aluminum structure.
Comparison of analytical and experimental performance of a wind-tunnel diffuser section
NASA Technical Reports Server (NTRS)
Shyne, R. J.; Moore, R. D.; Boldman, D. R.
1986-01-01
Wind tunnel diffuser performance is evaluated by comparing experimental data with analytical results predicted by a one-dimensional integration procedure with a skin friction coefficient, a two-dimensional interactive boundary layer procedure for analyzing conical diffusers, and a two-dimensional, integral, compressible laminar and turbulent boundary layer code. Pressure, temperature, and velocity data for a 3.25 deg equivalent cone half-angle diffuser (37.3 in., 94.742 cm outlet diameter) were obtained from the one-tenth scale Altitude Wind Tunnel modeling program at the NASA Lewis Research Center. The comparison is performed at Mach numbers of 0.162 (Re = 3.097×10^6), 0.326 (Re = 6.2737×10^6), and 0.363 (Re = 7.0129×10^6). The Reynolds numbers are all based on an inlet diffuser diameter of 32.4 in. (82.296 cm), and reasonable quantitative agreement was obtained between the experimental data and the computational codes.
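The quoted Reynolds numbers are diameter-based, Re = V*D/nu. A quick sketch reproducing the order of magnitude; the flow velocity and air viscosity below are assumed illustrative values for the Mach 0.162 condition, not tunnel data:

```python
def reynolds_number(velocity_m_s, diameter_m, kinematic_viscosity_m2_s):
    """Diameter-based Reynolds number Re = V*D/nu, with D the
    diffuser inlet diameter as in the text."""
    return velocity_m_s * diameter_m / kinematic_viscosity_m2_s

# Assumed values: ~55 m/s (roughly Mach 0.162 in cool air) and
# nu ~ 1.46e-5 m^2/s for near-sea-level air.
re = reynolds_number(55.0, 0.82296, 1.46e-5)  # order 10^6, as quoted
```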
Mutual coupling, channel model, and BER for curvilinear antenna arrays
NASA Astrophysics Data System (ADS)
Huang, Zhiyong
This dissertation introduces a wireless communications system with an adaptive beam-former and investigates its performance with different antenna arrays. Mutual coupling, real antenna elements and channel models are included to examine the system performance. In a beamforming system, mutual coupling (MC) among the elements can significantly degrade the system performance. However, MC effects can be compensated if an accurate model of mutual coupling is available. A mutual coupling matrix model is utilized to compensate mutual coupling in the beamforming of a uniform circular array (UCA). Its performance is compared with other models in uplink and downlink beamforming scenarios. In addition, the predictions are compared with measurements and verified with results from full-wave simulations. In order to accurately investigate the minimum mean-square-error (MSE) of an adaptive array in MC, two different noise models, the environmental and the receiver noise, are modeled. The minimum MSEs with and without data domain MC compensation are analytically compared. The influence of mutual coupling on the convergence is also examined. In addition, the weight compensation method is proposed to attain the desired array pattern. Adaptive arrays with different geometries are implemented with the minimum MSE algorithm in the wireless communications system to combat interference at the same frequency. The bit-error-rate (BER) of systems with UCA, uniform rectangular array (URA) and UCA with center element are investigated in additive white Gaussian noise plus well-separated signals or random direction signals scenarios. The output SINR of an adaptive array with multiple interferers is analytically examined. The influence of the adaptive algorithm convergence on the BER is investigated. The UCA is then investigated in a narrowband Rician fading channel. The channel model is built and the space correlations are examined. 
The influence of the number of signal paths, number of the interferers, Doppler spread and convergence are investigated. The tracking mode is introduced to the adaptive array system, and it further improves the BER. The benefit of using faster data rate (wider bandwidth) is discussed. In order to have better performance in a 3D space, the geometries of uniform spherical array (USAs) are presented and different configurations of USAs are discussed. The LMS algorithm based on temporal a priori information is applied to UCAs and USAs to beamform the patterns. Their performances are compared based on simulation results. Based on the analytical and simulation results, it can be concluded that mutual coupling slightly influences the performance of the adaptive array in communication systems. In addition, arrays with curvilinear geometries perform well in AWGN and fading channels.
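The minimum-MSE adaptation referred to throughout is typically realized with the complex LMS update. A minimal sketch of that update (a generic textbook form, not the dissertation's exact implementation; the toy single-element check below is an illustrative assumption):

```python
def lms_weights(snapshots, desired, mu=0.05):
    """Complex LMS update for an N-element adaptive array:
    y[k] = w^H x[k];  e[k] = d[k] - y[k];  w <- w + mu * conj(e[k]) * x[k].
    Iteratively minimises the mean-square error between the array
    output and the reference signal."""
    n = len(snapshots[0])
    w = [0j] * n
    for x, d in zip(snapshots, desired):
        y = sum(wi.conjugate() * xi for wi, xi in zip(w, x))
        e = d - y
        w = [wi + mu * e.conjugate() * xi for wi, xi in zip(w, x)]
    return w

# Toy check: a single element whose input equals the reference
# signal should converge to a unit weight.
w = lms_weights([[1 + 0j]] * 400, [1 + 0j] * 400, mu=0.1)
```

The step size mu controls the convergence-rate/misadjustment trade-off discussed in the text; mutual-coupling compensation amounts to correcting the snapshots x[k] (or the converged weights) with the coupling matrix model before this update.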
Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy
2017-03-01
Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.
Analytical and clinical evaluation of the Abbott RealTime hepatitis B sequencing assay.
Huh, Hee Jae; Kim, Ji-Youn; Lee, Myoung-Keun; Lee, Nam Yong; Kim, Jong-Won; Ki, Chang-Seok
2016-12-01
Long-term nucleoside analogue (NA) treatment leads to selection for drug-resistant mutations in patients undergoing hepatitis B virus (HBV) therapy. The Abbott RealTime HBV Sequencing assay (Abbott assay; Abbott Molecular Inc., Des Plaines, IL, USA) targets the reverse transcriptase region of the polymerase gene and as such has the ability to detect NA resistance-associated mutations in HBV. We evaluated the analytical performance of the Abbott assay and compared its diagnostic performance to that of a laboratory-developed nested-PCR and sequencing method. The analytical sensitivity of the Abbott assay was determined using a serially-diluted WHO International Standard. To validate the clinical performance of the Abbott assay and the laboratory-developed assay, 89 clinical plasma samples with various levels of HBV DNA were tested using both assays. The limit of detection of the Abbott assay was 210 IU/ml, and it successfully detected mutations when the mutant types were present at levels ≥20%. Among 89 clinical specimens, 43 and 42 were amplification positive in the Abbott and laboratory-developed assays, respectively, with 87.6% overall agreement (78/89; 95% confidence interval [CI], 78.6-93.4). The Abbott assay failed to detect the minor mutant populations in two specimens, and therefore overall concordance was 85.3% (76/89), and the kappa value was 0.79 (95% CI, 0.67-0.90). The Abbott assay showed comparable diagnostic performance to laboratory-developed nested PCR followed by direct sequencing, and may be useful as a routine method for detecting HBV NA resistance-associated mutations in clinical laboratory settings. Copyright © 2016 Elsevier B.V. All rights reserved.
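The kappa statistic quoted above corrects raw percent agreement for the agreement expected by chance. A minimal Cohen's kappa for a 2x2 comparison of two assays (the counts in the usage line are illustrative, not the study's table, which is not fully restated in the abstract):

```python
def cohens_kappa(pos_pos, pos_neg, neg_pos, neg_neg):
    """Cohen's kappa for two binary raters/assays:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    n = pos_pos + pos_neg + neg_pos + neg_neg
    p_obs = (pos_pos + neg_neg) / n
    a_pos = (pos_pos + pos_neg) / n   # marginal positive rate, assay A
    b_pos = (pos_pos + neg_pos) / n   # marginal positive rate, assay B
    p_chance = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)
    return (p_obs - p_chance) / (1 - p_chance)

# Illustrative: strong but imperfect agreement between two assays.
k = cohens_kappa(40, 3, 2, 44)
```

Values near 1 indicate near-perfect agreement beyond chance; 0 means no better than chance, which is why a kappa of 0.79 alongside 85.3% concordance is read as substantial agreement.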
Rotteveel-de Groot, Dorien M; Ross, H Alec; Janssen, Marcel J R; Netea-Maier, Romana T; Oosting, Janine D; Sweep, Fred C G J; van Herwaarden, Antonius E
2016-08-01
Thyroglobulin (Tg) measurements are used to monitor for residual thyroid tissue in patients with differentiated thyroid cancer (DTC) after thyroidectomy and radioiodine ablative therapy. In recent years highly sensitive Tg assays have been developed. In this study the analytical performance of the new Roche Elecsys Tg II assay was evaluated and compared with the well documented Access2 Tg assay (Beckman-Coulter). Analytical performance was examined using various Clinical and Laboratory Standards Institute (CLSI) evaluation protocols. Tg negative patient sera were used to establish an upper reference limit (URL) for the Elecsys Tg II assay. Non-linearity, drift and carry-over according to CLSI EP10 and EP6 in a measuring range of 0.04-500 ng/mL were non-significant. Total precision according to CLSI EP5 was 10% at a Tg concentration of 0.08 ng/mL. A patient serum comparison performed according to a modified CLSI EP9 protocol showed a significant difference of a factor of approximately 1.4, despite using an identical CRM calibrator. The Elecsys Tg II assay measured Tg with a two-fold higher sensitivity than the Access2 assay. Finally, using human sera without Tg, an URL of 0.05 ng/mL was determined. In our hands the highly sensitive Elecsys Tg II assay shows a good analytical performance and a higher sensitivity compared to the Access2 Tg assay. An URL of 0.05 ng/mL for the Elecsys Tg II assay was determined which may improve the clinical utility of the assay for the detection of residual DTC or disease recurrence.
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are among the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs depends heavily on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation, depending on the grid (size) and the flow problem.
In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta method because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide the building of preconditioners for other techniques to improve their performance in the future.
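The Newton-versus-fixed-point trade-off described above can be seen in one dimension, where the analytical Jacobian reduces to an analytical derivative. A minimal sketch on the classic problem x = cos(x) (a toy stand-in, not the paper's Navier-Stokes solver): Newton converges quadratically in a handful of iterations, while fixed-point iteration converges only linearly.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration x <- g(x); linear convergence."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

def newton(f, dfdx, x0, tol=1e-12, max_iter=100):
    """Newton iteration with an analytical derivative (the 1-D
    analogue of an analytical Jacobian); quadratic convergence."""
    x = x0
    for k in range(1, max_iter + 1):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x, k
    return x, max_iter

# Solve x = cos(x): fixed point of cos, root of f(x) = x - cos(x).
x_fp, it_fp = fixed_point(math.cos, 1.0)
x_nt, it_nt = newton(lambda x: x - math.cos(x),
                     lambda x: 1.0 + math.sin(x), 1.0)
```

Each Newton step costs a derivative evaluation and a division (a Jacobian solve in many dimensions), which is the per-iteration expense the paper's cheap analytical Jacobian is designed to mitigate.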
Evaluation of a Novel Single-Tube Method for Extended Genotyping of Human Papillomavirus
Serrano, I.; Wennington, H.; Graham, C.; Cubie, H.; Boland, E.; Fu, G.; Cuschieri, K.
2017-01-01
The use of high-risk human papillomavirus (HPV) testing for surveillance and clinical applications is increasing globally, and it is important that tests are evaluated to ensure they are fit for this purpose. In this study, the performance of a new HPV genotyping test, the Papilloplex high-risk HPV (HR-HPV) test, was compared to that of two well-established genotyping tests. Preliminary clinical performance was also ascertained for the detection of CIN2+ in a disease-enriched retrospective cohort. A panel of 500 cervical liquid-based cytology samples with known clinical outcomes was tested by the Papilloplex HR-HPV test. Analytical concordance was compared to two assays: the Linear Array (LA) HPV genotyping test and the Optiplex HPV genotyping test. An initial assessment of clinical performance for the detection of CIN2+ was performed and compared to that of two clinically validated HPV tests: the RealTime High-Risk HPV test (RealTime) and the Hybrid Capture 2 HPV test (HC2). High agreement for HR-HPV was observed between the Papilloplex and the LA and Optiplex HPV tests (97% and 95%, respectively), with kappa values for HPV16 and HPV18 of 0.90 and 0.81 compared to LA and 0.70 and 0.82 compared to Optiplex. The sensitivity, specificity, positive predictive value, and negative predictive value of the Papilloplex test for the detection of CIN2+ were 92%, 54%, 33%, and 96%, respectively, very similar to the values observed with RealTime and HC2. The Papilloplex HR-HPV test demonstrated analytical performance similar to that of the two HPV genotyping tests at both the HR-HPV level and the type-specific level. The preliminary data on clinical performance are encouraging, although further longitudinal studies within screening populations are required to confirm these findings. PMID:29237790
Mechanical behavior of regular open-cell porous biomaterials made of diamond lattice unit cells.
Ahmadi, S M; Campoli, G; Amin Yavari, S; Sajadi, B; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A
2014-06-01
Cellular structures with highly controlled micro-architectures are promising materials for orthopedic applications that require bone-substituting biomaterials or implants. The availability of additive manufacturing techniques has enabled manufacturing of biomaterials made of one or multiple types of unit cells. The diamond lattice unit cell is one of the relatively new types of unit cells that are used in manufacturing of regular porous biomaterials. As opposed to many other types of unit cells, there is currently no analytical solution that could be used for prediction of the mechanical properties of cellular structures made of the diamond lattice unit cells. In this paper, we present new analytical solutions and closed-form relationships for predicting the elastic modulus, Poisson's ratio, critical buckling load, and yield (plateau) stress of cellular structures made of the diamond lattice unit cell. The mechanical properties predicted using the analytical solutions are compared with those obtained using finite element models. A number of solid and porous titanium (Ti6Al4V) specimens were manufactured using selective laser melting. A series of experiments were then performed to determine the mechanical properties of the matrix material and cellular structures. The experimentally measured mechanical properties were compared with those obtained using analytical solutions and finite element (FE) models. It has been shown that, for small apparent density values, the mechanical properties obtained using analytical and numerical solutions are in agreement with each other and with experimental observations. The properties estimated using an analytical solution based on the Euler-Bernoulli theory markedly deviated from experimental results for large apparent density values. The mechanical properties estimated using FE models and another analytical solution based on the Timoshenko beam theory better matched the experimental observations. Copyright © 2014 Elsevier Ltd.
All rights reserved.
A high-frequency servosystem for fuel control in hypersonic engines
NASA Technical Reports Server (NTRS)
Simon, Donald L.
1991-01-01
A hydrogen fuel-flow valve with an electrohydraulic servosystem is described. An analysis of the servosystem is presented along with a discussion of the limitations imposed on system performance by nonlinearities. The response of the valve to swept-frequency inputs is experimentally determined and compared with analytical results obtained from a computer model. The valve is found to perform favorably for frequencies up to 200 Hz.
Humbert, H; Machinal, C; Labaye, Ivan; Schrotter, J C
2011-01-01
The determination of the virus retention capabilities of UF units during operation is essential for the operators of drinking water treatment facilities in order to guarantee efficient and stable removal of viruses over time. In previous studies, an effective method (MS2-phage challenge tests) was developed by the Water Research Center of Veolia Environnement for the measurement of the virus retention rates (log removal value, LRV) of commercially available hollow fiber membranes at lab scale. In the present work, the protocol for monitoring membrane performance was transferred from lab scale to pilot scale. Membrane performance was evaluated during a pilot trial and compared to the results obtained at lab scale with fibers taken from the pilot plant modules. The PFU culture method was compared to the RT-PCR method for the calculation of LRV in both cases. Preliminary tests at lab scale showed that both methods can be used interchangeably. For tests conducted on virgin membrane, good consistency was observed between lab- and pilot-scale results with the two analytical methods used. This work shows that a reliable determination of membrane performance based on the RT-PCR analytical method can be achieved during the operation of the UF units.
Thickness dependences of solar cell performance
NASA Technical Reports Server (NTRS)
Sah, C. T.
1982-01-01
The significance of including factors such as the base resistivity loss for solar cells thicker than 100 microns and emitter and BSF layer recombination for thin cells in predicting the fill factor and efficiency of solar cells is demonstrated analytically. A model for a solar cell is devised with the inclusion of the dopant impurity concentration profile, variation of the electron and hole mobility with dopant concentration, the concentration and thermal capture and emission rates of the recombination center, device temperature, the AM1 spectra and the Si absorption coefficient. Device equations were solved by means of the transmission line technique. The analytical results were compared with those of low-level theory for cell performance. Significant differences in predictions of the fill factor resulted, and inaccuracies in the low-level approximations are discussed.
On cup anemometer rotor aerodynamics.
Pindado, Santiago; Pérez, Javier; Avila-Sanchez, Sergio
2012-01-01
The influence of anemometer rotor shape parameters, such as the cups' front area or their center rotation radius on the anemometer's performance was analyzed. This analysis was based on calibrations performed on two different anemometers (one based on magnet system output signal, and the other one based on an opto-electronic system output signal), tested with 21 different rotors. The results were compared to the ones resulting from classical analytical models. The results clearly showed a linear dependency of both calibration constants, the slope and the offset, on the cups' center rotation radius, the influence of the front area of the cups also being observed. The analytical model of Kondo et al. was proved to be accurate if it is based on precise data related to the aerodynamic behavior of a rotor's cup.
Huang, Yao-Hung; Chang, Jeng-Shian; Chao, Sheng D.; Wu, Kuang-Chong; Huang, Long-Sun
2014-01-01
A quartz crystal microbalance (QCM) serving as a biosensor to detect target biomolecules (analytes) often suffers from a time-consuming detection process, especially in the case of diffusion-limited reactions. In this experimental work, we modify the reaction chamber of a conventional QCM by integrating multi-microelectrodes that produce an electrothermal vortex flow, which efficiently drives the analytes toward the sensor surface, where they are captured by the immobilized ligands. The microelectrodes are placed on the top surface of the chamber, opposite the sensor, which is located on the bottom of the chamber. In addition, the height of the reaction chamber is reduced to ensure that the suspended analytes in the fluid can be effectively driven to the sensor surface by the induced electrothermal vortex flow, which also reduces sample consumption. A series of frequency-shift measurements, associated with the added mass due to the specific binding between the analytes in the fluid flow and the immobilized ligands on the QCM sensor surface, were performed with and without the electrothermal effect (ETE). The experimental results show that the electrothermal vortex flow effectively accelerates the specific binding and makes the frequency-shift measurement more sensitive. In addition, images of the binding surfaces of the sensors with and without the electrothermal effect were taken by scanning electron microscopy. Comparison of the images also clearly indicates that the ETE increases the specific binding of analytes and ligands and efficiently improves the performance of the QCM sensor. PMID:25538808
A developed nearly analytic discrete method for forward modeling in the frequency domain
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Lang, Chao; Yang, Hui; Wang, Wenshuai
2018-02-01
High-efficiency forward modeling methods play a fundamental role in full waveform inversion (FWI). In this paper, the developed nearly analytic discrete (DNAD) method is proposed to accelerate frequency-domain forward modeling processes. We first derive the discretization of frequency-domain wave equations via numerical schemes based on the nearly analytic discrete (NAD) method to obtain a linear system. The coefficients of numerical stencils are optimized to make the linear system easier to solve and to minimize computing time. Wavefield simulation and numerical dispersion analysis are performed to compare the numerical behavior of DNAD method with that of the conventional NAD method. The results demonstrate the superiority of our proposed method. Finally, the DNAD method is implemented in frequency-domain FWI, and high-resolution inverse results are obtained.
Dynamic multiplexed analysis method using ion mobility spectrometer
Belov, Mikhail E [Richland, WA
2010-05-18
A method for multiplexed analysis using ion mobility spectrometer in which the effectiveness and efficiency of the multiplexed method is optimized by automatically adjusting rates of passage of analyte materials through an IMS drift tube during operation of the system. This automatic adjustment is performed by the IMS instrument itself after determining the appropriate levels of adjustment according to the method of the present invention. In one example, the adjustment of the rates of passage for these materials is determined by quantifying the total number of analyte molecules delivered to the ion trap in a preselected period of time, comparing this number to the charge capacity of the ion trap, selecting a gate opening sequence; and implementing the selected gate opening sequence to obtain a preselected rate of analytes within said IMS drift tube.
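The gate-adjustment logic claimed above can be illustrated with a toy calculation: quantify the analyte flux delivered to the ion trap over a sampling window, compare it to the trap's charge capacity, and pick the widest gate-opening fraction that avoids overfilling. This is a hypothetical sketch of the idea only; all names, numbers, and the candidate-fraction scheme are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: choose a gate-opening fraction so that the number of
# analyte ions delivered per window does not exceed the trap's charge capacity.
def select_gate_fraction(ions_per_window, trap_capacity,
                         candidates=(1.0, 0.5, 0.25, 0.125)):
    # Try the widest openings first; return the first that fits the trap.
    for frac in sorted(candidates, reverse=True):
        if ions_per_window * frac <= trap_capacity:
            return frac
    # If even the narrowest candidate overfills, fall back to it anyway.
    return min(candidates)

# A dense sample (3.2M ions/window) into a 1M-charge trap forces a narrow gate.
print(select_gate_fraction(ions_per_window=3.2e6, trap_capacity=1.0e6))
```

A dilute sample would leave the gate fully open, which is the "automatic adjustment" behavior the abstract describes: the duty cycle tracks the incoming analyte flux.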
Application of Interface Technology in Progressive Failure Analysis of Composite Panels
NASA Technical Reports Server (NTRS)
Sleight, D. W.; Lotts, C. G.
2002-01-01
A progressive failure analysis capability using interface technology is presented. The capability has been implemented in the COMET-AR finite element analysis code developed at the NASA Langley Research Center and is demonstrated on composite panels. The composite panels are analyzed for damage initiation and propagation from initial loading to final failure using a progressive failure analysis capability that includes both geometric and material nonlinearities. Progressive failure analyses are performed on conventional models and interface technology models of the composite panels. Analytical results and the computational effort of the analyses are compared for the conventional models and interface technology models. The analytical results predicted with the interface technology models are in good correlation with the analytical results using the conventional models, while significantly reducing the computational effort.
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
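The partition/refine/compare workflow described above can be sketched with a deliberately tiny, self-contained example. The dataset, the single-feature threshold classifiers, and all numbers below are hypothetical illustrations of the process, not tools or data from the symposium paper.

```python
import random

random.seed(42)

# Hypothetical data: one continuous measurement per animal (e.g., feed intake)
# and a binary target outcome (0 = healthy, 1 = at risk).
data = [(random.gauss(8.0, 1.5), 0) for _ in range(200)] + \
       [(random.gauss(5.5, 1.5), 1) for _ in range(200)]
random.shuffle(data)

# Partition: build/refine on 300 records, hold out 100 naive records for
# the final accuracy comparison.
train, test = data[:300], data[300:]

def accuracy(classify, records):
    return sum(classify(x) == y for x, y in records) / len(records)

# Two candidate classifiers: a fixed rule, and a rule whose threshold is
# refined on the training partition via a simple grid search.
def fixed_rule(x):
    return 1 if x < 6.0 else 0

best_t = max((t / 10 for t in range(40, 90)),
             key=lambda t: accuracy(lambda x: 1 if x < t else 0, train))

def refined_rule(x):
    return 1 if x < best_t else 0

# Compare predictive accuracy on the naive (held-out) partition only.
for name, rule in [("fixed", fixed_rule), ("refined", refined_rule)]:
    print(name, round(accuracy(rule, test), 3))
```

The key point mirrored from the text: refinement happens only on the training partition, and the naive partition is touched once, to estimate overall accuracy.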
A Comparative Study of Multi-material Data Structures for Computational Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garimella, Rao Veerabhadra; Robey, Robert W.
The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or a few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance with a small test program covering the representative cases.
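A back-of-the-envelope version of such a performance model can be sketched by comparing a dense cell-by-material array against a compressed sparse layout. The layout, counts, and overhead terms below are illustrative assumptions, not the paper's actual data structures or results.

```python
# Illustrative model: memory entries needed for a mesh where most cells hold
# exactly one material and only a small fraction are mixed-material cells.
ncells, nmats = 1_000_000, 50
mixed_frac = 0.05        # assumed fraction of multi-material cells
mats_per_mixed = 3       # assumed average materials per mixed cell

# Dense model: one value stored per (cell, material) pair, mostly zeros.
dense_entries = ncells * nmats

# Sparse model: one entry per pure cell, a short list per mixed cell,
# plus a per-cell index array locating each cell's entries.
sparse_entries = (ncells * (1 - mixed_frac)
                  + ncells * mixed_frac * mats_per_mixed
                  + ncells)

print(f"dense/sparse entry ratio: {dense_entries / sparse_entries:.1f}x")
```

Under these assumptions the sparse layout stores roughly 20x fewer entries, which is the kind of gap that motivates modeling before choosing a data structure.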
Tellez, Jason A; Schmidt, Jason D
2011-08-20
The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
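The benefit of summing beams can be sketched with a small Monte Carlo experiment. This is an illustrative sketch, not the paper's model: it uses the common construction of a gamma-gamma irradiance as the product of two independent unit-mean gamma variates, takes the fully independent-path limit of the partially correlated case, and uses hypothetical parameter values.

```python
import random

random.seed(1)

def gamma_gamma_sample(alpha, beta):
    # Gamma-gamma irradiance as the product of two independent unit-mean
    # gamma variates (large- and small-scale turbulence factors).
    return (random.gammavariate(alpha, 1 / alpha)
            * random.gammavariate(beta, 1 / beta))

def scintillation_index(samples):
    # Normalized irradiance variance: var(I) / mean(I)^2.
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / len(samples)
    return var / m ** 2

alpha, beta, n_draws = 4.0, 2.0, 20000
single = [gamma_gamma_sample(alpha, beta) for _ in range(n_draws)]
# Sum of 4 beams through independent paths (uncorrelated limit).
summed = [sum(gamma_gamma_sample(alpha, beta) for _ in range(4))
          for _ in range(n_draws)]

# Single-beam theory: 1/alpha + 1/beta + 1/(alpha*beta); summing N
# independent beams reduces the index by roughly a factor of N.
print(round(scintillation_index(single), 3))
print(round(scintillation_index(summed), 3))
```

Partially correlated paths would fall between the single-beam and fully independent results, which is what the more general model in the abstract captures.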
Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Shimizu, Yoshihisa; Xia, Liangyu; Hoffmann, Mariza; Shah, Swarup; Matsha, Tandi; Wassung, Janette; Smit, Francois; Ruzhanskaya, Anna; Straseski, Joely; Bustos, Daniel N; Kimura, Shogo; Takahashi, Aki
2017-04-01
The intent of this study, based on a global multicenter study of reference values (RVs) for serum analytes, was to explore biological sources of variation (SVs) of the RVs among 12 countries around the world. As described in the first part of this paper, RVs of 50 major serum analytes from 13,396 healthy individuals living in 12 countries were obtained. Analyzed in this study were 23 clinical chemistry analytes and 8 analytes measured by immunoturbidimetry. Multiple regression analysis was performed for each gender, country by country, analyte by analyte, by setting four major SVs (age, BMI, and levels of drinking and smoking) as a fixed set of explanatory variables. For analytes with skewed distributions, log-transformation was applied. The association of each source of variation with RVs was expressed as the partial correlation coefficient (r_p). Obvious gender- and age-related changes in the RVs were observed in many analytes, almost consistently between countries. Compilation of age-related variations of RVs after adjusting for between-country differences revealed peculiar patterns specific to each analyte. Judged from the r_p, BMI-related changes were observed for many nutritional and inflammatory markers in almost all countries. However, the slope of linear regression of BMI vs. RV differed greatly among countries for some analytes. Alcohol- and smoking-related changes were observed less conspicuously, in a limited number of analytes. The features of sex-, age-, alcohol-, and smoking-related changes in RVs of the analytes were largely comparable worldwide. The finding of differences in BMI-related changes among countries for some analytes is quite relevant to understanding ethnic differences in susceptibility to nutritionally related diseases. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
De Keukeleire, Steven; Desmet, Stefanie; Lagrou, Katrien; Oosterlynck, Julie; Verhulst, Manon; Van Besien, Jessica; Saegeman, Veroniek; Reynders, Marijke
2017-03-01
The performance of Elecsys Syphilis was compared to Architect Syphilis TP and Reformulated Architect Syphilis TP. The overall sensitivity and specificity were 98.4% and 99.5%, 97.7% and 97.1%, and 99.2% and 99.7% respectively. The assays are comparable and considered adequate for syphilis screening. Copyright © 2016 Elsevier Inc. All rights reserved.
Dominko, Robert; Patel, Manu U M; Bele, Marjan; Pejovnik, Stane
2016-01-01
The electrochemical characteristics of sulfurized polyacrylonitrile composite (PAN/S) cathodes were compared with the commonly used carbon/S-based composite material. The difference in the working mechanism of these composites was examined. Analytical investigations were performed on both kinds of cathode electrode composites by using two reliable analytical techniques, in-situ UV-Visible spectroscopy and a four-electrode Swagelok cell. This study differentiates the working mechanisms of PAN/S composites from conventional elemental sulphur/carbon composite and also sheds light on factors that could be responsible for capacity fading in the case of PAN/S composites.
Accurate Sloshing Modes Modeling: A New Analytical Solution and its Consequences on Control
NASA Astrophysics Data System (ADS)
Gonidou, Luc-Olivier; Desmariaux, Jean
2014-06-01
This study addresses the issue of sloshing-mode modeling for GNC analysis purposes. On European launchers, equivalent mechanical systems are commonly used to model sloshing effects on launcher dynamics. The representativeness of such a methodology is discussed here. First, an exact analytical formulation of the launcher dynamics fitted with sloshing modes is proposed, and discrepancies with the equivalent-mechanical-system approach are emphasized. Then, preliminary comparative GNC analyses are performed using the different models of dynamics in order to evaluate the impact of the aforementioned discrepancies from a GNC standpoint. Special attention is paid to system stability.
Cogeneration Technology Alternatives Study (CTAS). Volume 2: Analytical approach
NASA Technical Reports Server (NTRS)
Gerlaugh, H. E.; Hall, E. W.; Brown, D. H.; Priestley, R. R.; Knightly, W. F.
1980-01-01
The use of various advanced energy conversion systems was compared with each other and with current technology systems in terms of savings in fuel energy, costs, and emissions, both in individual plants and at a national level. The ground rules established by NASA and the assumptions made by the General Electric Company in performing this cogeneration technology alternatives study are presented. The analytical methodology employed is described in detail and illustrated with numerical examples, together with a description of the computer program used in calculating over 7000 energy conversion system-industrial process applications. For Vol. 1, see 80N24797.
Human, Lauren J; Thorson, Katherine R; Woolley, Joshua D; Mendes, Wendy Berry
2017-04-01
Intranasal administration of the hypothalamic neuropeptide oxytocin (OT) has, in some studies, been associated with positive effects on social perception and cognition. Similarly, positive emotion inductions can improve a range of perceptual and performance-based behaviors. In this exploratory study, we examined how OT administration and positive emotion inductions interact in their associations with social and analytical performance. Participants (N=124) were randomly assigned to receive an intranasal spray of OT (40IU) or placebo and then viewed one of three videos designed to engender one of the following emotion states: social warmth, pride, or an affectively neutral state. Following the emotion induction, participants completed social perception and analytical tasks. There were no significant main effects of OT condition on social perception tasks (failing to replicate prior research) or on analytical performance. Further, OT condition and positive emotion inductions did not interact with each other in their associations with social perception performance. However, OT condition and positive emotion manipulations did significantly interact in their associations with analytical performance. Specifically, combining positive emotion inductions with OT administration was associated with worse analytical performance: the pride induction no longer benefited performance, and the warmth induction resulted in worse performance. In sum, we found little evidence for main or interactive effects of OT on social perception, but preliminary evidence that OT administration may impair analytical performance when paired with positive emotion inductions. Copyright © 2017 Elsevier Inc. All rights reserved.
Tesija Kuna, Andrea; Dukic, Kristina; Nikolac Gabaj, Nora; Miler, Marijana; Vukasovic, Ines; Langer, Sanja; Simundic, Ana-Maria; Vrkic, Nada
2018-03-08
To compare the analytical performances of the enzymatic method (EM) and capillary electrophoresis (CE) for hemoglobin A1c (HbA1c) measurement. Imprecision, carryover, stability, linearity, method comparison, and interferences were evaluated for HbA1c via EM (Abbott Laboratories, Inc) and CE (Sebia). Both methods showed overall within-laboratory imprecision of less than 3% for International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units (<2% National Glycohemoglobin Standardization Program [NGSP] units). Carryover effects were within acceptable criteria. The linearity of both methods proved to be excellent (R2 = 0.999). Significant proportional and constant differences were found for EM compared with CE, but they were not clinically relevant (<5 mmol/mol; NGSP <0.5%). At the clinically relevant HbA1c concentration, stability observed with both methods was acceptable (bias, <3%). Triglyceride levels of 8.11 mmol per L or greater were shown to interfere with EM, and fetal hemoglobin (HbF) of 10.6% or greater with CE. The enzymatic method proved to be comparable to the CE method in analytical performance; however, certain interferences can influence the measurements of each method.
Dong, Qi; Nasir, Muhammad Zafir Mohamad; Pumera, Martin
2017-10-18
As-synthesized single-walled carbon nanotubes (SWCNTs) contain both metallic and semiconducting nanotubes. For electronics, it is desirable to separate semiconducting SWCNTs (s-SWCNTs) from metallic ones, as s-SWCNTs provide the desirable electronic properties. Here we test whether ultrapure semiconducting single-walled carbon nanotubes (s-SWCNTs) provide advantageous electrochemical properties over as-prepared SWCNTs, which contain a mixture of semiconducting and metallic CNTs. We test them as a transducer platform that enhances the detection of target analytes (ascorbic acid, dopamine, uric acid) compared to a bare glassy carbon (GC) electrode. The two materials exhibit significantly different electrochemical properties and performances. A mixture of m-SWCNTs and s-SWCNTs demonstrated superior performance over ultrapure s-SWCNTs, with greater peak currents and a pronounced shift of peak potentials to lower values in cyclic and differential pulse voltammetry for the detection of target analytes. The mixture of m- and s-SWCNTs displayed about a 4-times improved heterogeneous electron transfer rate compared to bare GC and a 2-times greater heterogeneous electron transfer rate than s-SWCNTs, demonstrating that ultrapure s-SWCNTs do not provide any major enhancement over as-prepared SWCNTs.
Useful measures and models for analytical quality management in medical laboratories.
Westgard, James O
2016-02-01
The 2014 Milan Conference "Defining analytical performance goals 15 years after the Stockholm Conference" initiated a new discussion of issues concerning goals for precision, trueness or bias, total analytical error (TAE), and measurement uncertainty (MU). Goal-setting models are critical for analytical quality management, along with error models, quality-assessment models, quality-planning models, as well as comprehensive models for quality management systems. There are also critical underlying issues, such as an emphasis on MU to the possible exclusion of TAE and a corresponding preference for separate precision and bias goals instead of a combined total error goal. This opinion recommends careful consideration of the differences in the concepts of accuracy and traceability and the appropriateness of different measures, particularly TAE as a measure of accuracy and MU as a measure of traceability. TAE is essential to manage quality within a medical laboratory and MU and trueness are essential to achieve comparability of results across laboratories. With this perspective, laboratory scientists can better understand the many measures and models needed for analytical quality management and assess their usefulness for practical applications in medical laboratories.
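For concreteness, the measures contrasted above are commonly computed with simple conventional formulas: a total analytical error estimate combining bias and imprecision, and a sigma-metric relating an allowable total error (TEa) to observed bias and CV. The numbers below are illustrative, not from this opinion piece.

```python
# Conventional quality-management formulas (all quantities in percent).
def total_analytical_error(bias_pct, cv_pct, z=1.65):
    # TAE estimate: absolute bias plus a z-multiple of the imprecision.
    return abs(bias_pct) + z * cv_pct

def sigma_metric(tea_pct, bias_pct, cv_pct):
    # Sigma-metric: how many SDs of imprecision fit between the bias
    # and the allowable total error.
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: 1.0% bias, 2.0% CV, 10% allowable total error.
bias, cv, tea = 1.0, 2.0, 10.0
print(total_analytical_error(bias, cv))  # 1.0 + 1.65 * 2.0 = 4.3
print(sigma_metric(tea, bias, cv))       # (10 - 1) / 2 = 4.5
```

A TAE comfortably below TEa and a sigma above 4 would suggest the method's quality is adequate for its goal, which is the within-laboratory use of TAE the author argues for.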
Tarasov, Andrii; Rauhut, Doris; Jung, Rainer
2017-12-01
Analytical methods for the quantification of haloanisoles and halophenols in cork matrix are summarized in the current review. Sample-preparation and sample-treatment techniques are compared and discussed from the perspective of their efficiency, time and extractant optimization, and ease of performance. The primary interest of these analyses is usually 2,4,6-trichloroanisole (TCA), which is a major wine contaminant among haloanisoles. Two concepts of TCA determination are described in the review: releasable TCA and total TCA analyses. Chromatographic, bioanalytical and sensorial methods are compared according to their application in the cork industry and in scientific investigations. Finally, it is shown that modern analytical techniques can provide the required sensitivity, selectivity and repeatability for haloanisole and halophenol determination. Copyright © 2017 Elsevier B.V. All rights reserved.
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately, by a factor of 2/π (-1.96 dB), compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy that can be obtained with receive data processed by a hard-limiter with an unknown quantization level, using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
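The quoted low-SNR degradation factor of 2/π can be checked directly by converting it to decibels; a one-line verification:

```python
import math

# Low-SNR performance factor of a symmetric 1-bit quantizer relative to an
# ideal infinite-resolution converter, expressed in dB.
loss_factor = 2 / math.pi
loss_db = 10 * math.log10(loss_factor)
print(round(loss_db, 2))  # about -1.96 dB, matching the figure in the abstract
```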
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects based on their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods, or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate the analytical performance taking all indicators into account simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation, and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses yield information concerning (1) a partial ordering of the laboratories, (2) a "distance" to the reference laboratory, and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
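The core of such an ordinal analysis is a dominance relation over multi-indicator profiles. Below is a minimal sketch with hypothetical laboratory results (mean, SD, skewness); the names and numbers are illustrative, not from the study. Smaller is better on every indicator, and two labs are incomparable when neither is at least as good on all indicators.

```python
# Hypothetical lab reports: (mean, standard deviation, skewness).
true_value = 100.0
labs = {
    "ref":  (100.1, 0.5, 0.05),
    "lab1": (101.0, 0.9, 0.10),
    "lab2": (100.4, 1.5, 0.02),
}

def profile(mean, sd, skew):
    # Indicator profile where smaller is better on every component:
    # distance of the mean from the true value, SD, and |skewness|.
    return (abs(mean - true_value), sd, abs(skew))

def dominates(a, b):
    # a dominates b if it is no worse on every indicator and not identical.
    pa, pb = profile(*labs[a]), profile(*labs[b])
    return all(x <= y for x, y in zip(pa, pb)) and pa != pb

for a in labs:
    for b in labs:
        if a != b and dominates(a, b):
            print(f"{a} < {b} (at least as good on all indicators)")
```

Here "ref" dominates "lab1", but "ref" and "lab2" are incomparable (lab2 has the smaller skewness), which is exactly the situation where a single linear scale would hide information.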
NASA Astrophysics Data System (ADS)
Albajar, F.; Bertelli, N.; Bornatici, M.; Engelmann, F.
2007-01-01
On the basis of the electromagnetic energy balance equation, a quasi-exact analytical evaluation of the electron-cyclotron (EC) absorption coefficient is performed for arbitrary propagation (with respect to the magnetic field) in a (Maxwellian) magneto-plasma for the temperature range of interest for fusion reactors (in which EC radiation losses tend to be important in the plasma power balance). The calculation makes use of Bateman's expansion for the product of two Bessel functions, retaining the lowest-order contribution. The integration over electron momentum can then be carried out analytically, fully accounting for finite Larmor radius effects in this approximation. On the basis of the analytical expressions for the EC absorption coefficients of both the extraordinary and ordinary modes thus obtained, (i) for the case of perpendicular propagation simple formulae are derived for both modes and (ii) a numerical analysis of the angular distribution of EC absorption is carried out. An assessment of the accuracy of asymptotic expressions that have been given earlier is also performed, showing that these approximations can be usefully applied for calculating EC power losses from reactor-grade plasmas. Presented in part at the 14th Joint Workshop on Electron Cyclotron Emission and Electron Cyclotron Resonance Heating, Santorini, Greece, 9-12 May 2006.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Brian M.; Larson, Vincent E.
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
Dupuy, Anne Marie; Sebbane, Mustapha; Roubille, François; Coste, Thibault; Bargnoux, Anne Sophie; Badiou, Stéphanie; Kuster, Nils; Cristol, Jean Paul
2015-03-01
To report the analytical performance of the Radiometer AQT90 FLEX® cTnT assay (Neuilly-Plaisance, France) and to evaluate the concordance with hs-cTnT results from the central laboratory for the diagnosis of acute myocardial infarction (AMI) at baseline and during a short follow-up among unselected patients admitted to the emergency room or cardiology department. Analytical performance of the AQT90 FLEX® cTnT immunoassay included an imprecision study with determination of the concentrations at which the coefficient of variation equals 10% and 20%, linearity, and limit of detection. The concordance study was based on samples obtained from 170 consecutive patients with chest pain suggestive of acute coronary syndrome (ACS) admitted to the emergency room or cardiology department. The kinetic study (with 62 additional samples drawn 3 h later) was based on an absolute delta criterion and the combination of a relative change of 30% with an absolute change of 7 ng/L. The cTnT assay from Radiometer was evaluated as clinically usable, although less sensitive than the Roche hs-cTnT assay, as demonstrated by the concordance and kinetic studies. In a non-selected population, the cTnT assay on the AQT90 FLEX®, with kinetic change at 3 h, provides a similar clinical classification of patients, particularly for the AMI group, as compared to the central laboratory hs-cTnT assay, and could be suitable for clinical use. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Lemasson, Elise; Bertin, Sophie; Hennig, Philippe; Boiteux, Hélène; Lesellier, Eric; West, Caroline
2015-08-21
Impurity profiling of organic products that are synthesized as possible drug candidates requires complementary analytical methods to ensure that all impurities are identified. Supercritical fluid chromatography (SFC) is a very useful tool to achieve this objective, as an adequate selection of stationary phases can provide orthogonal separations so as to maximize the chances of seeing all impurities. In this series of papers, we have developed a method for achiral SFC-MS profiling of drug candidates, based on a selection of 160 analytes issued from Servier Research Laboratories. In the first part of this study, focusing on mobile phase selection, a gradient elution with carbon dioxide and methanol comprising 2% water and 20 mM ammonium acetate proved to be the best in terms of chromatographic performance, while also providing good MS response [1]. The objective of this second part was the selection of an orthogonal set of ultra-high-performance stationary phases, which was carried out in two steps. Firstly, a reduced set of analytes (20) was used to screen 23 columns. The columns selected were all 1.7-2.5 μm fully porous or 2.6-2.7 μm superficially porous particles, with a variety of stationary phase chemistries. Derringer desirability functions were used to rank the columns according to retention window, column efficiency evaluated with the peak widths of selected analytes, and the proportion of analytes successfully eluted with good peak shapes. The columns providing the worst performances were thus eliminated and a shorter selection of columns (11) was obtained. Secondly, based on the 160 tested analytes, the 11 columns were ranked again. The retention data obtained on these columns were then compared to define a reduced set of the best columns providing the greatest orthogonality, to maximize the chances of seeing all impurities within a limited number of runs. Two high-performance columns were thus selected: ACQUITY UPC(2) HSS C18 SB and Nucleoshell HILIC.
Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.
2017-08-01
This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three-dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. With this approach, a higher payload sway reduction is obtained, as the input shaper is designed based on a complete nonlinear model, in contrast to the analytical input shaping scheme derived from a linear second-order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are compared based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In the experiments, the superiority of the proposed approach over the analytical design is shown by 30-50% reductions in the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reductions for a 3D overhead crane with friction as compared to commonly designed input shapers.
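For reference, the analytical ZV shaper mentioned above is the classical two-impulse design from a linear second-order model; a minimal sketch follows (parameter values are illustrative, and the paper's PSO variant tunes the shaper against the full nonlinear crane model instead):

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse Zero Vibration shaper for a linear second-order system
    with natural frequency wn [rad/s] and damping ratio zeta."""
    wd = wn * math.sqrt(1.0 - zeta**2)               # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]          # amplitudes sum to 1
    times = [0.0, math.pi / wd]                      # half the damped period
    return amps, times

# Illustrative sway mode: wn = 2 rad/s, light damping.
amps, times = zv_shaper(wn=2.0, zeta=0.05)
print(amps, times)
```

Convolving the operator command with these two impulses cancels the residual oscillation of the linear model; on the real, nonlinear crane with friction this cancellation is only approximate, which motivates the PSO-based tuning.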
Wang, Xiaofan; Zhao, Xu; Gu, Liqiang; Lv, Chunxiao; He, Bosai; Liu, Zhenzhen; Hou, Pengyi; Bi, Kaishun; Chen, Xiaohui
2014-03-15
A simple and rapid ultra-high performance liquid chromatography-tandem mass spectrometry (uHPLC-MS/MS) method has been developed for the simultaneous determination of five free flavonoids (amentoflavone, isorhamnetin, naringenin, kaempferol and quercetin) and their total (free and conjugated) forms, and to compare the pharmacokinetics of these active ingredients in normal and hyperlipidemic rats. The free and total forms of these flavonoids were extracted by liquid-liquid extraction with ethyl acetate. The conjugated flavonoids were deconjugated by the enzymes β-glucuronidase and sulfatase. Chromatographic separation was accomplished on a ZORBAX Eclipse XDB-C8 USP L7 column using gradient elution. Detection was performed on a 4000Q uHPLC-MS/MS system from AB Sciex using negative ionization in multiple reaction monitoring (MRM) mode. The lower limits of quantification were 2.0-5.0 ng/mL for all the analytes. Intra-day and inter-day precision were less than 15%, accuracy ranged from -9.3% to 11.0%, and the mean extraction recoveries of the analytes and internal standard (IS) from rat plasma were all greater than 81.7%. The validated method was successfully applied to a comparative pharmacokinetic study of the five free and total analytes in rat plasma. The results indicated that the absorption of the five total flavonoids in the hyperlipidemic group was significantly higher than in the normal group, with similar concentration-time curves. Copyright © 2014 Elsevier B.V. All rights reserved.
Bourget, P; Amin, A; Vidal, F; Merlette, C; Troude, P; Corriol, O
2013-09-01
In France, central IV admixture of chemotherapy (CT) treatments at the hospital is now required by law. We have previously shown that the shaping of Therapeutic Objects (TOs) could profit from Analytical Quality Assurance (AQA), closely linked to batch release, for three key parameters: identity, purity, and initial concentration of the compound of interest. In the course of recent, diversified work, we showed the technical superiority of non-intrusive Raman spectroscopy (RS) over any other analytical option, especially over both HPLC and a vibrational method using UV/visible-FTIR coupling. An interconnected qualitative and economic assessment helps to enrich this work. The study compares, in an operational situation, the performance of three analytical methods used for the AQC of TOs. We used: a) a set of evaluation criteria, b) the depreciation tables of the machinery, c) the cost of disposables, d) the weight of equipment and technical installations, and e) the basic accounting unit (unit of work) and its composite costs (in Euros), which vary according to the technical options and the weight of both human resources and disposables; finally, different combinations are described. The unit of work can thus take 12 different values between 1 and 5.5 Euros, and we provide various recommendations. A qualitative evaluation grid consistently places the RS technology as superior or equal to the two other techniques currently available. Our results demonstrate: a) the major interest of non-intrusive AQC performed by RS, especially when it is not possible to analyze a TO with existing methods, e.g. elastomeric portable pumps, and b) the high potential for this technique to be a strong contributor to the security of the medication circuit and to the fight against the iatrogenic effects of drugs, especially in the hospital. It also contributes to the protection of all actors in healthcare and of their working environment.
Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.
1986-01-01
Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S.G.S. for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25°C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25°C. Performance-audit samples with known analyte concentrations were prepared by the U.S.G.S. and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated in their analysis of identical natural and simulated precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories.
(Author's abstract)
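The per-analyte precision estimate described above, pooling variances across repeated analyses of identical samples and weighting by degrees of freedom, can be sketched as follows (replicate values are invented for illustration):

```python
def variance(g):
    """Unbiased sample variance of one list of replicate measurements."""
    m = sum(g) / len(g)
    return sum((x - m) ** 2 for x in g) / (len(g) - 1)

def pooled_variance(groups):
    """Degrees-of-freedom-weighted pooled variance across replicate groups
    (e.g. one laboratory's repeated analyses of identical samples)."""
    num = sum((len(g) - 1) * variance(g) for g in groups)
    den = sum(len(g) - 1 for g in groups)
    return num / den

# Illustrative replicate results (mg/L) from two runs on identical samples:
print(round(pooled_variance([[1.0, 1.2, 1.1], [0.9, 1.0, 1.1]]), 4))  # 0.01
```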
Smalley, James; Marino, Anthony M; Xin, Baomin; Olah, Timothy; Balimane, Praveen V
2007-07-01
Caco-2 cells, the human colon carcinoma cells, are typically used for screening compounds for their permeability characteristics and P-glycoprotein (P-gp) interaction potential during discovery and development. The P-gp inhibition of test compounds is assessed by performing bi-directional permeability studies with digoxin, a well-established P-gp substrate probe. Studies performed with digoxin alone, as well as digoxin in the presence of test compounds as putative inhibitors, constitute the P-gp inhibition assay used to assess the potential liability of discovery compounds. Radiolabeled (3)H-digoxin is commonly used in such studies, followed by liquid scintillation counting. This manuscript describes the development of a sensitive, accurate, and reproducible LC-MS/MS method for analysis of digoxin and its internal standard digitoxin using on-line extraction turbulent flow chromatography coupled to tandem mass spectrometric detection that is amenable to high throughput with the use of 96-well plates. The standard curve for digoxin was linear between 10 nM and 5000 nM with a regression coefficient (R(2)) of 0.99. The applicability and reliability of the analysis method were evaluated by successful demonstration of an efflux ratio (permeability B to A over permeability A to B) greater than 10 for digoxin in Caco-2 cells. Additional evaluations were performed on 13 marketed compounds by conducting inhibition studies in Caco-2 cells using classical P-gp inhibitors (ketoconazole, cyclosporin, verapamil, quinidine, saquinavir, etc.) and comparing the results to historical data from (3)H-digoxin studies. Similarly, P-gp inhibition studies with the LC-MS/MS analytical method for digoxin were also performed for 21 additional test compounds classified as negative, moderate, and potent P-gp inhibitors spanning multiple chemotypes, and the results were compared with the historical P-gp inhibition data from the (3)H-digoxin studies.
A very good correlation coefficient (R(2)) of 0.89 between the results from the two analytical methods affords an attractive LC-MS/MS analytical option for labs that need to conduct the P-gp inhibition assay without using radiolabeled compounds.
Comprehensive characterizations of nanoparticle biodistribution following systemic injection in mice
NASA Astrophysics Data System (ADS)
Liao, Wei-Yin; Li, Hui-Jing; Chang, Ming-Yao; Tang, Alan C. L.; Hoffman, Allan S.; Hsieh, Patrick C. H.
2013-10-01
Various nanoparticle (NP) properties such as shape and surface charge have been studied in an attempt to enhance the efficacy of NPs in biomedical applications. When trying to determine the precise biodistribution of NPs within the target organs, the analytical method becomes the determining factor in measuring the precise quantity of distributed NPs. High performance liquid chromatography (HPLC) represents a more powerful tool in quantifying NP biodistribution compared to conventional analytical methods such as an in vivo imaging system (IVIS). This, in part, is due to better curve linearity offered by HPLC than IVIS. Furthermore, HPLC enables us to fully analyze each gram of NPs present in the organs without compromising the signals and the depth-related sensitivity as is the case in IVIS measurements. In addition, we found that changing physiological conditions improved large NP (200-500 nm) distribution in brain tissue. These results reveal the importance of selecting analytic tools and physiological environment when characterizing NP biodistribution for future nanoscale toxicology, therapeutics and diagnostics. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr03954d
Bratkowska, D; Fontanals, N; Cormack, P A G; Borrull, F; Marcé, R M
2012-02-17
A monolithic, hydrophilic stir bar coating based upon a copolymer of methacrylic acid and divinylbenzene [poly(MAA-co-DVB)] was synthesised and evaluated as a new polymeric phase for the stir bar sorptive extraction (SBSE) of polar compounds from complex environmental water samples. The experimental conditions for the extraction and liquid desorption in SBSE were optimised. Liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS) was used for the determination of a group of polar pharmaceuticals in environmental water matrices. The extraction performance of the poly(MAA-co-DVB) stir bar was compared to the extraction performance of a commercially available polydimethylsiloxane stir bar; it was found that the former gave rise to significantly higher extraction efficiency of polar analytes (% recovery values near to 100% for most of the studied analytes) than the commercial product. The developed method was applied to determine the studied analytes at low ng L⁻¹ in different complex environmental water samples. Copyright © 2011 Elsevier B.V. All rights reserved.
Vosough, Maryam; Rashvand, Masoumeh; Esfahani, Hadi M; Kargosha, Kazem; Salemi, Amir
2015-04-01
In this work, a rapid HPLC-DAD method has been developed for the analysis of six antibiotics (amoxicillin, metronidazole, sulfamethoxazole, ofloxacin, sulfadiazine and sulfamerazine) in sewage treatment plant influent and effluent samples. Decreasing the chromatographic run time to less than 4 min, as well as lowering the cost per analysis, was achieved through direct injection of the samples into the HPLC system followed by chemometric analysis. The problem of achieving complete separation of the analytes from each other and/or from the matrix ingredients was resolved a posteriori. The performance of MCR-ALS and U-PLS/RBL, as second-order algorithms, was studied, and comparable results were obtained from the application of these modeling methods. It was demonstrated that the proposed methods could promisingly be used as green analytical strategies for detection and quantification of the targeted pollutants in wastewater samples while avoiding more complicated, high-cost instrumentation. Copyright © 2014 Elsevier B.V. All rights reserved.
Ion Mobility Derived Collision Cross Sections to Support Metabolomics Applications
2015-01-01
Metabolomics is a rapidly evolving analytical approach in life and health sciences. The structural elucidation of the metabolites of interest remains a major analytical challenge in the metabolomics workflow. Here, we investigate the use of ion mobility as a tool to aid metabolite identification. Ion mobility allows for the measurement of the rotationally averaged collision cross-section (CCS), which gives information about the ionic shape of a molecule in the gas phase. We measured the CCSs of 125 common metabolites using traveling-wave ion mobility-mass spectrometry (TW-IM-MS). CCS measurements were highly reproducible on instruments located in three independent laboratories (RSD < 5% for 99%). We also determined the reproducibility of CCS measurements in various biological matrixes including urine, plasma, platelets, and red blood cells using ultra performance liquid chromatography (UPLC) coupled with TW-IM-MS. The mean RSD was < 2% for 97% of the CCS values, compared to 80% of retention times. Finally, as proof of concept, we used UPLC–TW-IM-MS to compare the cellular metabolome of epithelial and mesenchymal cells, an in vitro model used to study cancer development. Experimentally determined and computationally derived CCS values were used as orthogonal analytical parameters in combination with retention time and accurate mass information to confirm the identity of key metabolites potentially involved in cancer. Thus, our results indicate that adding CCS data to searchable databases and to routine metabolomics workflows will increase the identification confidence compared to traditional analytical approaches. PMID:24640936
Agut, C; Caron, A; Giordano, C; Hoffman, D; Ségalini, A
2011-09-10
In 2001, a multidisciplinary team of analytical scientists and statisticians at Sanofi-aventis published a methodology which has since governed the transfers of release monographs from R&D sites to Manufacturing sites. This article provides an overview of the recent adaptations brought to this original methodology, taking advantage of our experience and the new regulatory framework, in particular the risk management perspective introduced by ICH Q9. Although some alternate strategies have been introduced in our practices, the comparative testing strategy, based on equivalence testing as the statistical approach, remains the standard for assays bearing on very critical quality attributes. This is conducted with the aim of controlling the most important consumer's risks involved at two levels in analytical decisions in the frame of transfer studies: the risk, for the receiving laboratory, of taking poor release decisions with the analytical method, and the risk, for the sending laboratory, of accrediting such a receiving laboratory despite its insufficient performance with the method. Among the enhancements to the comparative studies, the manuscript presents the process settled within our company for a better integration of the transfer study into the method life-cycle, as well as proposals of generic acceptance criteria and designs for assay and related-substances methods. While maintaining the rigor and selectivity of the original approach, these improvements tend towards increased efficiency in transfer operations. Copyright © 2011 Elsevier B.V. All rights reserved.
Performance specifications and six sigma theory: Clinical chemistry and industry compared.
Oosterhuis, W P; Severens, M J M J
2018-04-11
Analytical performance specifications are crucial in test development and quality control. Although consensus has been reached on the use of biological variation to derive these specifications, no consensus has been reached on which model should be preferred. The Six Sigma concept is widely applied in industry for quality specifications of products and can well be compared with Six Sigma models in clinical chemistry. However, the models for measurement specifications differ considerably between the two fields: where the sigma metric is used in clinical chemistry, industry uses the Number of Distinct Categories instead. In this study the models in both fields are compared and discussed. Copyright © 2018. Published by Elsevier Inc.
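A minimal sketch of the sigma metric as commonly computed in clinical chemistry, assuming the usual definition in terms of allowable total error (TEa), bias and imprecision (CV); the numbers below are illustrative, not from the study:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: how many analytical standard deviations (CV) fit
    between the observed bias and the allowable total error (TEa)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# e.g. TEa = 10%, bias = 1%, CV = 1.5%  ->  six sigma performance
print(sigma_metric(10.0, 1.0, 1.5))  # 6.0
```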
Jędrkiewicz, Renata; Orłowski, Aleksander; Namieśnik, Jacek; Tobiszewski, Marek
2016-01-15
In this study, we rank analytical procedures for 3-monochloropropane-1,2-diol determination in soy sauces using the PROMETHEE method. Multicriteria decision analysis was performed for three different scenarios - metrological, economic and environmental - by applying different weights to the decision-making criteria. All three scenarios indicate the capillary electrophoresis-based procedure as the most preferable. Apart from that, the details of the ranking results differ between the three scenarios. A second run of rankings was done for scenarios that include only metrological, economic or environmental criteria, neglecting the others. These results show that the green analytical chemistry-based selection correlates with the economic one, while there is no correlation with the metrological one. This implies that green analytical chemistry can be brought into laboratories without analytical performance costs, and it is even supported by economic reasons. Copyright © 2015 Elsevier B.V. All rights reserved.
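A bare-bones PROMETHEE II sketch of the kind of weighted multicriteria ranking described above, using the "usual" preference function (1 if strictly better on a criterion, else 0); procedure names, scores and weights are invented for illustration, not the study's data:

```python
procedures = {            # (LOD, cost, solvent use) - lower is better
    "GC-MS": (0.5, 3.0, 2.0),
    "HPLC":  (0.8, 2.0, 3.0),
    "CE":    (0.6, 1.0, 1.0),
}
weights = (0.4, 0.3, 0.3)  # metrological / economic / environmental

def preference(a, b):
    """Weighted share of criteria on which profile a strictly beats b."""
    return sum(w for w, x, y in zip(weights, a, b) if x < y)

names = list(procedures)
n = len(names)
# PROMETHEE II net outranking flow: positive flow minus negative flow.
net_flow = {
    a: sum(preference(procedures[a], procedures[b])
           - preference(procedures[b], procedures[a])
           for b in names if b != a) / (n - 1)
    for a in names
}
print(max(net_flow, key=net_flow.get))  # CE
```

Changing the weight vector reproduces the paper's scenario analysis: each scenario is simply a different `weights` tuple applied to the same decision matrix.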
ERIC Educational Resources Information Center
Elder, Catherine; McNamara, Tim; Congdon, Peter
2003-01-01
Used Rasch analytic procedures to study item bias or differential item functioning in both dichotomous and scalar items on a test of English for academic purposes. Results for 139 college students on a pilot English language test model the approach and illustrate the measurement challenges posed by a diagnostic instrument to measure English…
Analytical approach on the stiffness of MR fluid filled spring
NASA Astrophysics Data System (ADS)
Sikulskyi, Stanislav; Kim, Daewon
2017-04-01
A solid mechanical spring generally exhibits uniform stiffness. This paper studies a mechanical spring filled with magnetorheological (MR) fluid to achieve controllable stiffness. The hollow spring filled with MR fluid is subjected to a controlled magnetic field in order to change the viscosity of the MR fluid and thereby the overall stiffness of the spring. The MR fluid is modeled as a Bingham viscoplastic linear material. The goal of this research is to study the feasibility of such a spring system by analytically computing the effects of the MR fluid on the overall spring stiffness. For this purpose, spring mechanics and MR fluid behavior are studied to increase the accuracy of the analysis. Numerical simulations are also performed to justify assumptions that simplify the calculations in the analytical part. The accuracy of the present approach is validated by comparing the analytical results to previously known experimental results. Overall stiffness variations of the spring are also discussed for different spring designs.
Ultra-high efficiency moving wire combustion interface for on-line coupling of HPLC
Thomas, Avi T.; Ognibene, Ted; Daley, Paul; Turteltaub, Ken; Radousky, Harry; Bench, Graham
2011-01-01
We describe a 100% efficient moving-wire interface for on-line coupling of high performance liquid chromatography which transmits 100% of the carbon in non-volatile analytes to a CO2-gas-accepting ion source. This interface accepts a flow of analyte in solvent, evaporates the solvent, combusts the remaining analyte, and directs the combustion products to the instrument of choice. Effluent is transferred to a periodically indented wire by a coherent jet to increase efficiency and maintain peak resolution. The combustion oven is plumbed such that gaseous combustion products are completely directed to an exit capillary, avoiding the loss of combustion products to the atmosphere. This system achieves near-complete transfer of analyte at HPLC flow rates up to 125 μL/min at a wire speed of 6 cm/s. This represents a 30× increase in efficiency and an 8× increase in maximum wire loading compared to the spray transfer technique used in earlier moving-wire interfaces. PMID:22004428
Analytical Model-Based Design Optimization of a Transverse Flux Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz
This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
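The search step above can be sketched with a bare-bones PSO over a bounded three-variable design space (stand-ins for stator pole length, magnet length, and rotor thickness); the objective below is a placeholder analytical function, not the MEC model, and all parameter values are illustrative:

```python
import random

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with a standard PSO:
    inertia w, cognitive pull c1 toward each particle's best, social pull
    c2 toward the swarm's best, positions clamped to the bounds."""
    random.seed(0)                          # reproducible sketch
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Placeholder objective (sphere function) over a 3-variable design box.
best, val = pso(lambda x: sum(xi * xi for xi in x), bounds=[(-5, 5)] * 3)
print(best, val)
```

In the paper's setting, the objective would instead evaluate the MEC model (e.g. negative torque density) for each candidate geometry, which is what makes the search tractable compared to FEA-in-the-loop optimization.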
Physics of thermo-acoustic sound generation
NASA Astrophysics Data System (ADS)
Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.
2013-09-01
We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
Roy, Rajarshi; Desai, Jaydev P.
2016-01-01
This paper outlines a comprehensive parametric approach for quantifying mechanical properties of spatially heterogeneous thin biological specimens such as human breast tissue using contact-mode Atomic Force Microscopy. Using inverse finite element (FE) analysis of spherical nanoindentation, the force response from hyperelastic material models is compared with the predicted force response from existing analytical contact models, and a sensitivity study is carried out to assess the uniqueness of the inverse FE solution. Furthermore, an automation strategy is proposed to analyze AFM force curves with varying levels of material nonlinearity with minimal user intervention. Implementation of our approach on an elastic map acquired from raster AFM indentation of breast tissue specimens indicates that a judicious combination of analytical and numerical techniques allows more accurate interpretation of AFM indentation data compared to relying on purely analytical contact models, while keeping the computational cost associated with an inverse FE solution within reasonable limits. The results reported in this study have several implications for performing unsupervised data analysis on AFM indentation measurements on a wide variety of heterogeneous biomaterials. PMID:25015130
Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis
2017-09-01
Technological innovations in vaccinology have recently brought about novel insights into the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance assessment was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.
Pacheco-Fernández, Idaira; Najafi, Ali; Pino, Verónica; Anderson, Jared L; Ayala, Juan H; Afonso, Ana M
2016-09-01
Several crosslinked polymeric ionic liquid (PIL)-based sorbent coatings of different nature were prepared by UV polymerization onto nitinol wires. They were evaluated in a direct-immersion solid-phase microextraction (DI-SPME) method in combination with high-performance liquid chromatography (HPLC) and diode array detection (DAD). The studied PIL coatings contained either vinyl alkyl or vinylbenzyl imidazolium-based (ViCnIm- or ViBCnIm-) IL monomers with different anions, as well as different dicationic IL crosslinkers. The analytical performance of these PIL-based SPME coatings was firstly evaluated for the extraction of a group of 10 different model analytes, including hydrocarbons and phenols, while exhaustively comparing the performance with commercial SPME fibers such as polydimethylsiloxane (PDMS), polyacrylate (PA) and polydimethylsiloxane/divinylbenzene (PDMS/DVB), and using all fibers under optimized conditions. Those fibers exhibiting a high selectivity for polar compounds were selected to carry out an analytical method for a group of 5 alkylphenols, including bisphenol-A (BPA) and nonylphenol (n-NP). Under optimum conditions, average relative recoveries of 108% and inter-day precision values (3 non-consecutive days) lower than 19% were obtained for a spiked level of 10 µg L(-1). Correlation coefficients for the overall method ranged between 0.990 and 0.999, and limits of detection were down to 1 µg L(-1). Tap water, river water, and bottled water were analyzed to evaluate matrix effects. Comparison with the PA fiber was also performed in terms of analytical performance. Partition coefficients (logKfs) of the alkylphenols to the SPME coating varied from 1.69 to 2.45 for the most efficient PIL-based fiber, and from 1.58 to 2.30 for the PA fiber. These results agree with those obtained by the normalized calibration slopes, pointing out the affinity of these PIL-based coatings. Copyright © 2016 Elsevier B.V. All rights reserved.
Quality performance of laboratory testing in pharmacies: a collaborative evaluation.
Zaninotto, Martina; Miolo, Giorgia; Guiotto, Adriano; Marton, Silvia; Plebani, Mario
2016-11-01
The quality performance and the comparability between results of pharmacy point-of-care testing (POCT) and institutional laboratories have been evaluated. Eight pharmacies participated in the project: a capillary specimen collected by the pharmacist and, simultaneously, a lithium-heparin sample drawn by a physician of laboratory medicine for the pharmacy customers (n=106) were analyzed in the pharmacy and in the laboratory, respectively. Glucose, cholesterol, HDL-cholesterol, triglycerides, creatinine, uric acid, aspartate aminotransferase, and alanine aminotransferase were measured using: Reflotron, n=5; Samsung, n=1; Cardiocheck PA, n=1; Cholestech LDX, n=1; and Cobas 8000. The POCT analytical performance alone (phase 2) was evaluated by testing, in the pharmacies and in the laboratory, lithium-heparin samples drawn daily for a week from a fasting female volunteer, and a control sample containing high concentrations of glucose, cholesterol and triglycerides. For all parameters except triglycerides, the slopes showed a satisfactory correlation. For triglycerides, a median value higher in POCT than in the laboratory (1.627 mmol/L vs. 0.950 mmol/L) was observed. The agreement in subject classification demonstrates that for glucose, 70% of the subjects show concentrations below the POCT recommended level (5.8-6.1 mmol/L), while 56% are below the laboratory limit (<5.6 mmol/L). Total cholesterol exhibits a similar trend, while POCT triglycerides show a greater percentage of increased values (21% vs. 9%). The reduction in triglycerides bias (phase 2) suggests that the differences between POCT and the central laboratory are attributable to a pre-analytical problem. The results confirm the acceptable analytical performance of POCT in pharmacies and specific critical issues in the pre- and post-analytical phases.
ERIC Educational Resources Information Center
O'Donnell, Mary E.; Musial, Beata A.; Bretz, Stacey Lowery; Danielson, Neil D.; Ca, Diep
2009-01-01
Liquid chromatography (LC) experiments for the undergraduate analytical laboratory course often illustrate the application of reversed-phase LC to solve a separation problem, but rarely compare LC retention mechanisms. In addition, a high-performance liquid chromatography instrument may be beyond what some small colleges can purchase. Solid-phase…
Young, Joshua E; Pan, Zhongli; Teh, Hui Ean; Menon, Veena; Modereger, Brent; Pesek, Joseph J; Matyska, Maria T; Dao, Lan; Takeoka, Gary
2017-04-01
The peels of different pomegranate cultivars (Molla Nepes, Parfianka, Purple Heart, Wonderful and Vkunsyi) were compared in terms of phenolic composition and total phenolics. Analyses were performed on two silica hydride based stationary phases: phenyl and undecanoic acid columns. Quantitation was accomplished by developing a liquid chromatography with mass spectrometry approach for separating different phenolic analytes, initially in the form of reference standards and then with pomegranate extracts. The high-performance liquid chromatography columns used in the separations had the ability to retain a wide polarity range of phenolic analytes, as well as offering beneficial secondary selectivity mechanisms for resolving the isobaric compounds, catechin and epicatechin. The Vkunsyi peel extract had the highest concentration of phenolics (as determined by liquid chromatography with mass spectrometry) and was the only cultivar to contain the important compound punicalagin. The liquid chromatography with mass spectrometry data were compared to the standard total phenolics content as determined by using the Folin-Ciocalteu assay. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Propfan experimental data analysis
NASA Technical Reports Server (NTRS)
Vernon, David F.; Page, Gregory S.; Welge, H. Robert
1984-01-01
A data reduction method, which is consistent with the performance prediction methods used for analysis of new aircraft designs, is defined and compared to the method currently used by NASA, using data obtained from an Ames Research Center 11-foot transonic wind tunnel test. Pressure and flow visualization data from the Ames test for both the powered straight underwing nacelle and an unpowered contoured overwing nacelle installation are used to determine the flow phenomena present for a wing-mounted turboprop installation. The test data are compared to analytic methods, showing the analytic methods to be suitable for design and analysis of new configurations. The data analysis indicated that designs with zero interference drag levels are achievable with proper wing and nacelle tailoring. A new overwing contoured nacelle design and a modification to the wing leading edge extension for the current wind tunnel model design are evaluated. Hardware constraints of the current model parts prevent obtaining any significant performance improvement due to a modified nacelle contouring. A new aspect ratio wing design for an up-outboard-rotation turboprop installation is defined, and an advanced contoured nacelle is provided.
Long-term Results of an Analytical Assessment of Student Compounded Preparations
Roark, Angie M.; Anksorus, Heidi N.
2014-01-01
Objective. To investigate the long-term (ie, 6-year) impact of a required remake vs an optional remake on student performance in a compounding laboratory course in which students’ compounded preparations were analyzed. Methods. The analysis data for several preparations made by students were compared for differences in the analyzed content of the active pharmaceutical ingredient (API) and the number of students who successfully compounded the preparation on the first attempt. Results. There was a consistent statistical difference in the API amount or concentration in 4 of the preparations (diphenhydramine, ketoprofen, metoprolol, and progesterone) in each optional remake year compared to the required remake year. As the analysis requirement was continued, the outcome for each preparation approached and/or attained the expected API result. Two preparations required more than 1 year to demonstrate a statistical difference. Conclusion. The analytical assessment resulted in a consistent, long-term improvement in student performance during the 5-year period after the optional remake policy was instituted. Our assumption is that investment in such an assessment would result in similar benefits at other colleges and schools of pharmacy. PMID:26056402
Comparison of two optimized readout chains for low light CIS
NASA Astrophysics Data System (ADS)
Boukhayma, A.; Peizerat, A.; Dupret, A.; Enz, C.
2014-03-01
We compare the noise performance of two optimized readout chains that are based on 4T pixels and feature the same bandwidth of 265 kHz (enough to read 1 Megapixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier and a single-slope analog-to-digital converter performing correlated double sampling (CDS). In one case the pixel operates in source-follower configuration, and in the other case in common-source configuration. Based on analytical noise calculation of both readout chains, an optimization methodology is presented. Analytical results are confirmed by transient simulations using a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e-. Both optimized readout chains show the same input-referred 1/f noise. The common-source based readout chain shows better performance for thermal noise and requires smaller silicon area. We discuss the possible drawbacks of the common-source configuration and provide the reader with a comparative table between the two readout chains. The table contains several variants (column amplifier gain, in-pixel transistor sizes and type).
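The conversion between output-referred voltage noise and input-referred electrons is a one-line division by the conversion gain. A minimal sketch using the abstract's 160 μV/e- figure; the 60 μV rms voltage-noise value below is an assumed, illustrative number, not one taken from the paper:

```python
# Convert output-referred voltage noise to input-referred electrons using the
# conversion gain quoted in the abstract (160 uV per electron).
CONVERSION_GAIN_UV_PER_E = 160.0  # from the abstract
v_noise_uv_rms = 60.0             # assumed illustrative value, NOT from the paper

noise_electrons = v_noise_uv_rms / CONVERSION_GAIN_UV_PER_E
# 60 / 160 = 0.375 e- rms, i.e. below the 0.4 e- figure the abstract reports
```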
Testing and Validation of the Dynamic Inertia Measurement Method
NASA Technical Reports Server (NTRS)
Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David
2015-01-01
The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.
Study of Spray Disintegration in Accelerating Flow Fields
NASA Technical Reports Server (NTRS)
Nurick, W. H.
1972-01-01
An analytical and experimental investigation was conducted to perform "proof of principle" experiments to establish the effects of propellant combustion gas velocity on propellant atomization characteristics. The propellants were gaseous oxygen (GOX) and Shell Wax 270. The fuel was thus the same fluid used in earlier primary cold-flow atomization studies using the frozen wax method. Experiments were conducted over a range in L* (30 to 160 inches) at two contraction ratios (2 and 6). Characteristic exhaust velocity (c*) efficiencies varied from 50 to 90 percent. The hot-fire experimental performance characteristics at a contraction ratio of 6.0, in conjunction with analytical predictions from the droplet heat-up version of the Distributed Energy Release (DER) combustion computer program, showed that the apparent initial dropsize compared well with cold-flow predictions (if adjusted for the gas velocity effects). The results also compared very well with the trend in performance as predicted with the model. Significant propellant wall impingement at the contraction ratio of 2.0 precluded complete evaluation of the effect of gross changes in combustion gas velocity on spray dropsize.
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the method proposed is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively.
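The core fitting idea, minimizing the squared difference between a measured spectrum and an analytical model by adjusting its parameters, can be sketched in miniature. This is not POEMA: the single Gaussian peak and the one scanned amplitude parameter below are illustrative stand-ins for the full parameterized spectrum model.

```python
import math

def model(amplitude, center=100.0, width=5.0, n=200):
    # Toy analytical "spectrum": one Gaussian peak whose amplitude stands in
    # for a concentration-dependent parameter (NOT the real EPMA model).
    return [amplitude * math.exp(-0.5 * ((i - center) / width) ** 2)
            for i in range(n)]

measured = model(3.0)  # synthetic "experimental" spectrum with true amplitude 3.0

def chi2(amplitude):
    # Quadratic difference between model prediction and measurement.
    return sum((m, s) == () or (m - s) ** 2 for m, s in zip(model(amplitude), measured))

def chi2(amplitude):
    return sum((m - s) ** 2 for m, s in zip(model(amplitude), measured))

# Brute-force parameter scan over amplitude in [0, 10] with step 0.01.
best = min((chi2(a / 100.0), a / 100.0) for a in range(0, 1001))[1]
# best recovers the true amplitude, 3.0
```

A real implementation would optimize many physical parameters at once (concentrations, background, detector response) with a proper minimizer rather than a grid scan.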
Long-term Results of an Analytical Assessment of Student Compounded Preparations.
Roark, Angie M; Anksorus, Heidi N; Shrewsbury, Robert P
2014-11-15
To investigate the long-term (ie, 6-year) impact of a required remake vs an optional remake on student performance in a compounding laboratory course in which students' compounded preparations were analyzed. The analysis data for several preparations made by students were compared for differences in the analyzed content of the active pharmaceutical ingredient (API) and the number of students who successfully compounded the preparation on the first attempt. There was a consistent statistical difference in the API amount or concentration in 4 of the preparations (diphenhydramine, ketoprofen, metoprolol, and progesterone) in each optional remake year compared to the required remake year. As the analysis requirement was continued, the outcome for each preparation approached and/or attained the expected API result. Two preparations required more than 1 year to demonstrate a statistical difference. The analytical assessment resulted in a consistent, long-term improvement in student performance during the 5-year period after the optional remake policy was instituted. Our assumption is that investment in such an assessment would result in similar benefits at other colleges and schools of pharmacy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Congrui; Davoodabadi, Ali; Li, Jianlin
Because of the development of novel micro-fabrication techniques to produce ultra-thin materials and increasing interest in thin biological membranes, in recent years the mechanical characterization of thin films has received a significant amount of attention. To provide a more accurate solution for the relationship among contact radius, load, and deflection, the fundamental and widely applicable problem of spherical indentation of a freestanding circular membrane has been revisited. The work presented here significantly extends the previous contributions by providing an exact analytical solution to the governing equations of a Föppl–Hencky membrane indented by a frictionless spherical indenter. In this study, experiments of spherical indentation have been performed, and the exact analytical solution presented in this article is compared against experimental data from the existing literature as well as our own experimental results.
Parsec-Scale Obscuring Accretion Disk with Large-Scale Magnetic Field in AGNs
NASA Technical Reports Server (NTRS)
Dorodnitsyn, A.; Kallman, T.
2017-01-01
A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc (parsec) -scale torus in AGNs (Active Galactic Nuclei). Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.
Jin, Congrui; Davoodabadi, Ali; Li, Jianlin; ...
2017-01-11
Because of the development of novel micro-fabrication techniques to produce ultra-thin materials and increasing interest in thin biological membranes, in recent years the mechanical characterization of thin films has received a significant amount of attention. To provide a more accurate solution for the relationship among contact radius, load, and deflection, the fundamental and widely applicable problem of spherical indentation of a freestanding circular membrane has been revisited. The work presented here significantly extends the previous contributions by providing an exact analytical solution to the governing equations of a Föppl–Hencky membrane indented by a frictionless spherical indenter. In this study, experiments of spherical indentation have been performed, and the exact analytical solution presented in this article is compared against experimental data from the existing literature as well as our own experimental results.
Steering particles by breaking symmetries
NASA Astrophysics Data System (ADS)
Bet, Bram; Samin, Sela; Georgiev, Rumen; Burak Eral, Huseyin; van Roij, René
2018-06-01
We derive general equations of motion for highly confined particles that perform quasi-two-dimensional motion in Hele-Shaw channels, which we solve analytically, aiming to derive design principles for self-steering particles. Based on the symmetry properties of a particle, its equations of motion can be simplified: we retrieve an earlier-known equation of motion for the orientation of dimer particles consisting of disks (Uspal et al 2013 Nat. Commun. 4), but now in full generality. Subsequently, these solutions are compared with particle trajectories that are obtained numerically. For mirror-symmetric particles, excellent agreement between the analytical and numerical solutions is found. For particles lacking mirror symmetry, the analytic solutions provide means to classify the motion based on particle geometry, while we find that taking the side-wall interactions into account is important to accurately describe the trajectories.
Parsec-scale Obscuring Accretion Disk with Large-scale Magnetic Field in AGNs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorodnitsyn, A.; Kallman, T.
A magnetic field dragged from the galactic disk, along with inflowing gas, can provide vertical support to the geometrically and optically thick pc-scale torus in AGNs. Using the Soloviev solution initially developed for Tokamaks, we derive an analytical model for a rotating torus that is supported and confined by a magnetic field. We further perform three-dimensional magneto-hydrodynamic simulations of X-ray irradiated, pc-scale, magnetized tori. We follow the time evolution and compare models that adopt initial conditions derived from our analytic model with simulations in which the initial magnetic flux is entirely contained within the gas torus. Numerical simulations demonstrate that the initial conditions based on the analytic solution produce a longer-lived torus that produces obscuration that is generally consistent with observed constraints.
Analytical validation of a psychiatric pharmacogenomic test.
Jablonski, Michael R; King, Nina; Wang, Yongbao; Winner, Joel G; Watterson, Lucas R; Gunselman, Sandra; Dechairo, Bryan M
2018-05-01
The aim of this study was to validate the analytical performance of a combinatorial pharmacogenomics test designed to aid in the appropriate medication selection for neuropsychiatric conditions. Genomic DNA was isolated from buccal swabs. Twelve genes (65 variants/alleles) associated with psychotropic medication metabolism, side effects, and mechanisms of action were evaluated by bead array, MALDI-TOF mass spectrometry, and/or capillary electrophoresis methods (GeneSight Psychotropic, Assurex Health, Inc.). The combinatorial pharmacogenomics test has a dynamic range of 2.5-20 ng/μl of input genomic DNA, with comparable performance for all assays included in the test. Both the precision and accuracy of the test were >99.9%, with individual gene components between 99.4 and 100%. This study demonstrates that the combinatorial pharmacogenomics test is robust and reproducible, making it suitable for clinical use.
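The headline precision/accuracy percentages come down to call-level concordance arithmetic. A minimal sketch of that calculation; the genotype calls below are hypothetical illustrative data, not results from the study:

```python
# Hypothetical replicate genotype calls vs. reference calls, illustrating the
# concordance arithmetic behind ">99.9% accuracy/precision" style claims.
reference = ["*1/*1", "*1/*2", "*2/*2", "*1/*1"]
replicates = [
    ["*1/*1", "*1/*2", "*2/*2", "*1/*1"],  # run 1
    ["*1/*1", "*1/*2", "*2/*2", "*1/*1"],  # run 2
]

def accuracy_percent(ref, runs):
    # Fraction of all (reference, replicate) call pairs that agree, as a %.
    calls = [(r, c) for run in runs for r, c in zip(ref, run)]
    return 100.0 * sum(r == c for r, c in calls) / len(calls)

acc = accuracy_percent(reference, replicates)  # 100.0 for this toy data
```

Precision would be computed analogously, but comparing replicate runs against each other rather than against the reference.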
Tan, Ming-Hui; Chong, Kok-Keong; Wong, Chee-Woon
2014-01-20
Optimization of the design of a nonimaging dish concentrator (NIDC) for a dense-array concentrator photovoltaic system is presented. A new algorithm has been developed to determine the configuration of facet mirrors in a NIDC. Analytical formulas were derived to analyze the optical performance of a NIDC and then compared with simulated results obtained from a numerical method. Comprehensive analysis of optical performance via the analytical method has been carried out based on facet dimension and focal distance of the concentrator with a total reflective area of 120 m2. The result shows that a facet dimension of 49.8 cm, focal distance of 8 m, and solar concentration ratio of 411.8 suns is the most optimized design for the lowest cost-per-output power, which is US$1.93 per watt.
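The geometric concentration ratio relates the quoted figures directly: it is the total reflective area divided by the receiver aperture area. A sketch that back-computes the implied receiver area from the abstract's numbers, purely for illustration (the actual receiver dimensions are not given in the abstract):

```python
# Geometric concentration ratio: C = A_reflective / A_receiver.
# Receiver area is back-computed from the abstract's quoted figures
# (120 m2 total reflective area, 411.8 suns) for illustration only.
total_reflective_area_m2 = 120.0
concentration_suns = 411.8

receiver_area_m2 = total_reflective_area_m2 / concentration_suns
# about 0.29 m2 of receiver aperture for the dense PV array
```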
Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors
Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech
2011-01-01
Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity and thereby obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is the approximate frequency band of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
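A BMFLC of the kind mentioned above is a fixed bank of sine/cosine references spanning the assumed frequency band, with weights adapted by LMS. A minimal sketch tracking a synthetic 9 Hz "tremor" signal; all numeric values (band, step sizes, sampling rate) are illustrative assumptions, not the paper's settings:

```python
import math

def bmflc_track(signal, fs, f_lo, f_hi, df=0.5, mu=0.01):
    # Band-limited Multiple Fourier Linear Combiner (sketch):
    # fixed sin/cos references over [f_lo, f_hi] at spacing df,
    # weights adapted by the LMS rule w <- w + 2*mu*e*x.
    freqs = [f_lo + k * df for k in range(int((f_hi - f_lo) / df) + 1)]
    w = [0.0] * (2 * len(freqs))
    est = []
    for n, y in enumerate(signal):
        t = n / fs
        x = [math.sin(2 * math.pi * f * t) for f in freqs] + \
            [math.cos(2 * math.pi * f * t) for f in freqs]
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = y - y_hat
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        est.append(y_hat)
    return est

fs = 200.0
sig = [math.sin(2 * math.pi * 9.0 * n / fs) for n in range(2000)]  # 9 Hz "tremor"
est = bmflc_track(sig, fs, f_lo=7.0, f_hi=12.0)
# mean squared tracking error over the last second of data
err = sum((a - b) ** 2 for a, b in zip(sig[-200:], est[-200:])) / 200
```

Because the combiner output is an explicit Fourier series, it can be integrated analytically term by term, which is what makes drift-free position estimation from the adapted weights possible.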
Experimental and analytical investigation of a modified ring cusp NSTAR engine
NASA Technical Reports Server (NTRS)
Sengupta, Anita
2005-01-01
A series of experimental measurements on a modified laboratory NSTAR engine were used to validate a zero-dimensional analytical discharge performance model of a ring cusp ion thruster. The model predicts the discharge performance of a ring cusp NSTAR thruster as a function of the magnetic field configuration, thruster geometry, and throttle level. Analytical formalisms for electron and ion confinement are used to predict the ionization efficiency for a given thruster design. Explicit determination of discharge loss and volume-averaged plasma parameters is also obtained. The model was used to predict the performance of the nominal and modified three- and four-ring cusp 30-cm ion thruster configurations operating at the full power (2.3 kW) NSTAR throttle level. Experimental measurements of the modified engine configuration discharge loss compare well with the predicted value for propellant utilizations from 80 to 95%. The theory, as validated by experiment, indicates that increasing the field strength of the minimum closed magnetic contour reduces Maxwellian electron diffusion and electrostatically confines the ion population, reducing subsequent loss to the anode wall. The theory also indicates that increasing the cusp strength and minimizing the cusp area improves primary electron confinement, increasing the probability of an ionization collision prior to loss at the cusp.
High-Performance Data Analytics Beyond the Relational and Graph Data Models with GEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Minutoli, Marco; Bhatt, Shreyansh
Graphs represent an increasingly popular data model for data analytics, since they can naturally represent relationships and interactions between entities. Relational databases and their pure table-based data model are not well suited to store and process sparse data. Consequently, graph databases have gained interest in the last few years and the Resource Description Framework (RDF) became the standard data model for graph data. Nevertheless, while RDF is well suited to analyze the relationships between entities, it is not efficient in representing their attributes and properties. In this work we propose the adoption of a new hybrid data model, based on attributed graphs, that aims at overcoming the limitations of the pure relational and graph data models. We present how we have re-designed the GEMS data-analytics framework to fully take advantage of the proposed hybrid data model. To improve analysts' productivity, in addition to a C++ API for applications development, we adopt GraQL as the input query language. We validate our approach by implementing a set of queries on net-flow data and we compare our framework's performance against Neo4j. Experimental results show significant performance improvement over Neo4j, up to several orders of magnitude when increasing the size of the input data.
Weykamp, Cas; Secchiero, Sandra; Plebani, Mario; Thelen, Marc; Cobbaert, Christa; Thomas, Annette; Jassam, Nuthar; Barth, Julian H; Perich, Carmen; Ricós, Carmen; Faria, Ana Paula
2017-02-01
Optimum patient care in relation to laboratory medicine is achieved when results of laboratory tests are equivalent, irrespective of the analytical platform used or the country where the laboratory is located. Standardization and harmonization minimize differences, and the success of efforts to achieve this can be monitored with international category 1 external quality assessment (EQA) programs. An EQA project with commutable samples, targeted with reference measurement procedures (RMPs), was organized by EQA institutes in Italy, the Netherlands, Portugal, UK, and Spain. Results of 17 general chemistry analytes were evaluated across countries and across manufacturers according to performance specifications derived from biological variation (BV). For K, uric acid, glucose, cholesterol and high-density lipoprotein (HDL) cholesterol, the minimum performance specification was met in all countries and by all manufacturers. For Na, Cl, and Ca, the minimum performance specifications were met by none of the countries and manufacturers. For enzymes, the situation was complicated, as standardization of results of enzymes toward RMPs was still not achieved in 20% of the laboratories and questionable in the remaining 80%. The overall performance of the measurement of 17 general chemistry analytes in European medical laboratories met the minimum performance specifications. In this general picture, there were no significant differences per country and no significant differences per manufacturer. There were major differences between the analytes. There were six analytes for which the minimum quality specifications were not met, and manufacturers should improve their performance for these analytes. Standardization of results of enzymes requires ongoing efforts.
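Performance specifications "derived from biological variation" are conventionally computed from the within-subject (CVi) and between-subject (CVg) variation using the Fraser-style formulas. A sketch of that arithmetic; the glucose CVi/CVg values below are approximate literature figures used only for illustration, not numbers from this EQA project:

```python
import math

def bv_specs(cv_i, cv_g, level="minimum"):
    # Fraser-style analytical performance specifications from biological
    # variation (CVi, CVg in %): allowable imprecision, bias, total error.
    k_imp = {"optimum": 0.25, "desirable": 0.50, "minimum": 0.75}[level]
    k_bias = {"optimum": 0.125, "desirable": 0.25, "minimum": 0.375}[level]
    imprecision = k_imp * cv_i
    bias = k_bias * math.sqrt(cv_i ** 2 + cv_g ** 2)
    total_error = 1.65 * imprecision + bias
    return imprecision, bias, total_error

# Illustrative, approximate literature values for glucose: CVi ~5.6%, CVg ~7.5%
imp, bias, tea = bv_specs(5.6, 7.5, "minimum")
```

Electrolytes such as Na, Cl, and Ca have very small CVi, which makes their BV-derived specifications extremely tight and helps explain why no country or manufacturer met the minimum specification for them.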
Hutchinson, Joseph P; Li, Jianfeng; Farrell, William; Groeber, Elizabeth; Szucs, Roman; Dicinoski, Greg; Haddad, Paul R
2011-03-25
The responses of four different types of aerosol detectors have been evaluated and compared to establish their potential use as a universal detector in conjunction with ultra high pressure liquid chromatography (UHPLC). Two charged-aerosol detectors, namely Corona CAD and Corona Ultra, and also two different types of light-scattering detectors (an evaporative light scattering detector, and a nano-quantity analyte detector [NQAD]) were evaluated. The responses of these detectors were systematically investigated under changing experimental and instrumental parameters, such as the mobile phase flow-rate, analyte concentration, mobile phase composition, nebulizer temperature, evaporator temperature, evaporator gas flow-rate and instrumental signal filtering after detection. It was found that these parameters exerted non-linear effects on the responses of the aerosol detectors and must therefore be considered when designing analytical separation conditions, particularly when gradient elution is performed. Identical reversed-phase gradient separations were compared on all four aerosol detectors and further compared with UV detection at 200 nm. The aerosol detectors were able to detect all 11 analytes in a test set comprising species having a variety of physicochemical properties, whilst UV detection was applicable only to those analytes containing chromophores. The reproducibility of the detector response for 11 analytes over 10 consecutive separations was found to be approximately 5% for the charged-aerosol detectors and approximately 11% for the light-scattering detectors. The tested analytes included semi-volatile species which exhibited a more variable response on the aerosol detectors. Peak efficiencies were generally better on the aerosol detectors in comparison to UV detection and particularly so for the light-scattering detectors which exhibited efficiencies of around 110,000 plates per metre. 
Limits of detection were calculated using different mobile phase compositions and the NQAD detector was found to be the most sensitive (LOD of 10 ng/mL), followed by the Corona CAD (76 ng/mL), then UV detection at 200 nm (178 ng/mL) using an injection volume of 25 μL. Copyright © 2011 Elsevier B.V. All rights reserved.
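Detection limits like those quoted above are often estimated from the blank noise and the calibration slope. A minimal sketch assuming the common ICH-style formula; the study's exact calculation method is not specified in the abstract, and the numbers below are hypothetical:

```python
def limit_of_detection(sd_blank, slope, k=3.3):
    """ICH-style estimate: LOD = k * (SD of blank response) / calibration slope.
    k = 3.3 gives the limit of detection; k = 10 gives the limit of
    quantification (LOQ)."""
    return k * sd_blank / slope

# hypothetical detector: blank noise 0.9 area units, slope 0.3 units per (ng/mL)
lod = limit_of_detection(0.9, 0.3)
```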
van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier
2016-01-01
Objective Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. Setting The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Primary and secondary outcome measures Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. The secondary outcome measure was the user-friendliness of the POCT analyser. Results Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as well as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis: κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for leucocytes is rather poor (0.256 for automated and 0.197 for visual, ranked as fair and poor, respectively). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and not considered prone to flaws. Conclusions Automated urinalysis performed as well as traditional visual urinalysis for reading of nitrite, leucocytes and erythrocytes in routine general practice.
Implementation of automated urinalysis in general practice is justified as automation is expected to reduce human errors in patient identification and transcribing of results. PMID:27503860
van Delft, Sanne; Goedhart, Annelijn; Spigt, Mark; van Pinxteren, Bart; de Wit, Niek; Hopstaken, Rogier
2016-08-08
Point-of-care testing (POCT) urinalysis might reduce errors in (subjective) reading, registration and communication of test results, and might also improve diagnostic outcome and optimise patient management. Evidence is lacking. In the present study, we have studied the analytical performance of automated urinalysis and visual urinalysis compared with a reference standard in routine general practice. The study was performed in six general practitioner (GP) group practices in the Netherlands. Automated urinalysis was compared with visual urinalysis in these practices. Reference testing was performed in a primary care laboratory (Saltro, Utrecht, The Netherlands). Analytical performance of automated and visual urinalysis compared with the reference laboratory method was the primary outcome measure, analysed by calculating sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) and Cohen's κ coefficient for agreement. The secondary outcome measure was the user-friendliness of the POCT analyser. Automated urinalysis by experienced and routinely trained practice assistants in general practice performs as well as visual urinalysis for nitrite, leucocytes and erythrocytes. Agreement for nitrite is high for automated and visual urinalysis: κ's are 0.824 and 0.803 (ranked as very good and good, respectively). Agreement with the central laboratory reference standard for leucocytes is rather poor (0.256 for automated and 0.197 for visual, ranked as fair and poor, respectively). κ's for erythrocytes are higher: 0.517 (automated) and 0.416 (visual), both ranked as moderate. The Urisys 1100 analyser was easy to use and not considered prone to flaws. Automated urinalysis performed as well as traditional visual urinalysis for reading of nitrite, leucocytes and erythrocytes in routine general practice.
Implementation of automated urinalysis in general practice is justified as automation is expected to reduce human errors in patient identification and transcribing of results.
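The agreement statistics reported in the urinalysis records above (e.g. κ = 0.824 for nitrite) are standard Cohen's kappa values computed from a 2x2 positive/negative cross-tabulation. A minimal sketch with hypothetical counts:

```python
def cohen_kappa(table):
    """Cohen's kappa for a 2x2 agreement table:
    table = [[both positive, A pos / B neg],
             [A neg / B pos, both negative]]"""
    (a, b), (c, d) = table
    n = a + b + c + d
    p_observed = (a + d) / n
    # chance agreement from the marginal positive/negative rates of each method
    p_chance = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (p_observed - p_chance) / (1 - p_chance)

# hypothetical nitrite readings: 40 concordant positives, 45 concordant negatives
kappa = cohen_kappa([[40, 5], [10, 45]])  # 0.70, conventionally "good" agreement
```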
The Earth Data Analytic Services (EDAS) Framework
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; Duffy, D.
2017-12-01
Faced with unprecedented growth in earth data volume and demand, NASA has developed the Earth Data Analytic Services (EDAS) framework, a high performance big data analytics framework built on Apache Spark. This framework enables scientists to execute data processing workflows combining common analysis operations close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted earth data analysis tools (ESMF, CDAT, NCO, etc.). EDAS utilizes a dynamic caching architecture, a custom distributed array framework, and a streaming parallel in-memory workflow for efficiently processing huge datasets within limited memory spaces with interactive response times. EDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using direct web service calls, a Python script, a Unix-like shell client, or a JavaScript-based web application. New analytic operations can be developed in Python, Java, or Scala (with support for other languages planned). Client packages in Python, Java/Scala, or JavaScript contain everything needed to build and submit EDAS requests. The EDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service enables decision makers to compare multiple reanalysis datasets and investigate trends, variability, and anomalies in earth system dynamics around the globe.
Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping
2018-02-01
An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and the recently described moving sum of outlier (movSO) patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it generally had the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that display a relatively small ratio of biological variation to CVa. In conclusion, the movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased imprecision adds only marginally to the total variation and is less likely to impact clinical care. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
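The movSD approach described above amounts to a rolling standard deviation over consecutive patient results, flagged against a control limit established from in-control data. A minimal sketch; the window size and control limit below are illustrative assumptions, not values from the study:

```python
from collections import deque
import statistics

def moving_sd(results, window=50):
    """Rolling sample SD of the most recent `window` patient results."""
    buf = deque(maxlen=window)
    series = []
    for x in results:
        buf.append(x)
        if len(buf) == window:
            series.append(statistics.stdev(buf))
    return series

def flag_imprecision(results, window=50, limit=1.5):
    """True once the rolling SD exceeds the control limit, signalling a
    possible increase in analytical imprecision (CVa)."""
    return any(sd > limit for sd in moving_sd(results, window))
```

In practice the limit would be set from the distribution of movSD values observed during a verified in-control period.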
Recent developments in nickel electrode analysis
NASA Technical Reports Server (NTRS)
Whiteley, Richard V.; Daman, M. E.; Kaiser, E. Q.
1991-01-01
Three aspects of nickel electrode analysis for Nickel-Hydrogen and Nickel-Cadmium battery cell applications are addressed: (1) the determination of active material; (2) charged state nickel (as NiOOH + CoOOH); and (3) potassium ion content in the electrode. Four deloading procedures are compared for completeness of active material removal, and deloading conditions for efficient active material analyses are established. Two methods for charged state nickel analysis are compared: the current NASA procedure and a new procedure based on the oxidation of sodium oxalate by the charged material. Finally, a method for determining potassium content in an electrode sample by flame photometry is presented along with analytical results illustrating differences in potassium levels from vendor to vendor and the effects of stress testing on potassium content in the electrode. The relevance of these analytical procedures to electrode performance is reviewed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liezers, Martin; Olsen, Khris B.; Mitroshkov, Alexandre V.
2010-08-11
The most time consuming process in uranium or plutonium isotopic analysis is performing the requisite chromatographic separation of the actinides. Filament preparation for thermal ionization mass spectrometry (TIMS) adds further delays, but is generally accepted due to its unmatched performance in trace isotopic analyses. Advances in Multi-Collector Inductively Coupled Plasma Mass Spectrometry (MC-ICP-MS) are beginning to rival the performance of TIMS. Methods such as Electrochemically Modulated Separations (EMS) can efficiently pre-concentrate U or Pu quite selectively from small solution volumes in a matrix of 0.5 M nitric acid. When performed in-line with ICP-MS, analyte release from the electrode is rapid, and large transient analyte signal enhancements of >100 fold can be achieved as compared to more conventional continuous nebulization of the original starting solution. This makes the approach ideal for very low level isotope ratio measurements. In this paper, some aspects of EMS performance are described. These include low level Pu isotope ratio behavior versus concentration by MC-ICP-MS and uranium rejection characteristics that are also important for reliable low level Pu isotope ratio determinations.
Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation
NASA Astrophysics Data System (ADS)
Coleman, Kenneth; Kosson, Robert
1989-07-01
Ram air heat exchangers used to cool liquids such as lube oils or Ethylene-Glycol/water solutions can be subject to congealing in very cold ambients, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold ambient performance predictions for the staggered fin core, using a one-dimensional approach.
NASA Technical Reports Server (NTRS)
Muraca, R. J.; Stephens, M. V.; Dagenhart, J. R.
1975-01-01
A general analysis capable of predicting performance characteristics of cross-wind axis turbines was developed, including the effects of airfoil geometry, support struts, blade aspect ratio, windmill solidity, blade interference and curved flow. The results were compared with available wind tunnel results for a catenary blade shape. A theoretical performance curve for an aerodynamically efficient straight blade configuration was also presented. In addition, a linearized analytical solution applicable for straight configurations was developed. A listing of the computer program developed for numerical solutions of the general performance equations is included in the appendix.
NASA Astrophysics Data System (ADS)
Zhou, Weimin; Anastasio, Mark A.
2018-03-01
It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
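The ROC/AUC comparison described above can be computed non-parametrically from the two sets of test statistics via the Wilcoxon-Mann-Whitney formulation, which is how the empirical AUC of the CNN test statistic and of the analytically computed IO could be compared. A minimal sketch:

```python
def empirical_auc(pos_scores, neg_scores):
    """Wilcoxon-Mann-Whitney estimate of the area under the ROC curve:
    the probability that a signal-present test statistic outscores a
    signal-absent one, counting ties as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            wins += 1.0 if p > q else (0.5 if p == q else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))
```

For large samples an O(n log n) rank-based version is preferred, but the quadratic form above states the definition directly.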
Streby, Ashleigh; Mull, Bonnie J; Levy, Karen; Hill, Vincent R
2015-05-01
Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices.
Streby, Ashleigh; Mull, Bonnie J.; Levy, Karen
2015-01-01
Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices. PMID:25855343
Abdollahpour, Assem; Heydari, Rouhollah; Shamsipur, Mojtaba
2017-07-01
Two chiral stationary phases (CSPs) based on crystalline degradation products (CDPs) of vancomycin by using different synthetic methods were prepared and compared. Crystalline degradation products of vancomycin were produced by hydrolytic loss of ammonia from vancomycin molecules. Performances of two chiral columns prepared with these degradation products were investigated using several acidic and basic drugs as model analytes. Retention and resolution of these analytes on the prepared columns, as two main parameters, in enantioseparation were studied. The results demonstrated that the stationary phase preparation procedure has a significant effect on the column performance. The resolving powers of prepared columns for enantiomers resolution were changed with the variation in vancomycin-CDP coverage on the silica support. Elemental analysis was used to monitor the surface coverage of silica support by vancomycin-CDP. The results showed that both columns can be successfully applied to chiral separation studies.
A predictive pilot model for STOL aircraft landing
NASA Technical Reports Server (NTRS)
Kleinman, D. L.; Killingsworth, W. R.
1974-01-01
An optimal control approach has been used to model pilot performance during STOL flare and landing. The model is used to predict pilot landing performance for three STOL configurations, each having a different level of automatic control augmentation. Model predictions are compared with flight simulator data. It is concluded that the model can be an effective design tool for analytically studying the effects of display modifications, different stability augmentation systems, and proposed changes in the landing area geometry.
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The theory was developed for three general cases of dimensional symmetry: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory was verified using small classical structures.
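A widely used statistic for correlating analytical and experimental mode shapes in this kind of test/analysis comparison is the Modal Assurance Criterion (MAC); it is not necessarily the statistic coded in the report above, but it illustrates the idea of comparing mode vectors. A minimal sketch for real-valued mode shapes:

```python
def mac(phi_analytical, phi_experimental):
    """Modal Assurance Criterion between two real mode shape vectors:
    MAC = (phi_a . phi_e)^2 / ((phi_a . phi_a) * (phi_e . phi_e)).
    1.0 means the shapes are parallel (consistent); 0.0 means orthogonal."""
    dot_ae = sum(a * e for a, e in zip(phi_analytical, phi_experimental))
    dot_aa = sum(a * a for a in phi_analytical)
    dot_ee = sum(e * e for e in phi_experimental)
    return dot_ae ** 2 / (dot_aa * dot_ee)
```

Evaluating MAC for every analytical/experimental mode pair yields a matrix whose near-unity entries identify the matched modes.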
Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
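The Signal Detection Theory baseline against which such models are compared has a standard analytic form: for an M-location localization task, P(correct) is the probability that the signal location's noisy response exceeds all M-1 distractor responses, i.e. the integral of phi(x - d') * Phi(x)**(M-1). The Guided Search extension itself is not given in this abstract; the sketch below shows only the standard SDT prediction, evaluated numerically:

```python
import math

def pc_mafc(d_prime, m, lo=-8.0, hi=8.0, n=4000):
    """SDT prediction of accuracy in an M-alternative localization task:
    P(correct) = integral of phi(x - d') * Phi(x)**(M-1) dx,
    evaluated here with the trapezoidal rule over [lo, hi]."""
    def phi(x):   # standard normal density
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    def Phi(x):   # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * phi(x - d_prime) * Phi(x) ** (m - 1)
    return total * h
```

With d' = 0 the prediction reduces to chance (1/M), and it rises toward 1 as detectability grows.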
Enhanced biosensor performance using an avidin-biotin bridge for antibody immobilization
NASA Astrophysics Data System (ADS)
Narang, Upvan; Anderson, George P.; King, Keeley D.; Liss, Heidi S.; Ligler, Frances S.
1997-05-01
Maintaining antibody function after immobilization is critical to the performance of a biosensor. The conventional methods to immobilize antibodies onto surfaces are via covalent attachment using a crosslinker or by adsorption. Often, these methods of immobilization result in partial denaturation of the antibody and conformational changes leading to reduced antibody activity. In this paper, we report on the immobilization of antibodies onto the surface of an optical fiber through an avidin-biotin bridge for the detection of ricin, ovalbumin, and Bacillus globigii (Bg). The assays are performed in a sandwich format. First, a capture antibody is immobilized, followed by the addition of the analyte. Finally, a fluorophore-labeled antibody is added for the specific detection of the analyte. The evanescent wave-induced fluorescence is coupled back through the same fiber to be detected using a photodiode. In all cases, we observe an improved performance of the biosensor, i.e., a lower limit of detection and a wider linear dynamic range, for the assays in which the antibody is immobilized via avidin-biotin bridges compared to the covalent attachment method.
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing the applicability of SRD contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, SRD was also tested as a discriminant method alternative to the existing average performance scores used to compare multianalyte PT results. SRD should be used along with z scores, the most commonly used PT performance statistic. SRD was further developed to handle identical rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, as well as revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in the) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly.
The SRD technique is general in nature, i.e., it can be applied to any experimental problem in which multianalyte results obtained either by several analytical procedures, analysts, instruments, or laboratories need to be compared.
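The SRD statistic itself is simple: rank the analytes by the reference values (assigned or averaged) and by each laboratory's values, then sum the absolute rank differences; the laboratory with the smallest sum tracks the reference most closely. A minimal sketch that ignores the tie-handling extension mentioned above:

```python
def srd(reference, candidate):
    """Sum of ranking differences: rank items by the reference values and by
    the candidate method's values, then sum absolute rank differences.
    0 means identical ordering; larger values mean greater disagreement."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    return sum(abs(a - b)
               for a, b in zip(ranks(reference), ranks(candidate)))
```

Validation against random rankings (the randomization test mentioned above) asks whether a laboratory's SRD is smaller than what random orderings of the same analytes would produce.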
ERIC Educational Resources Information Center
Pfeiffer, Mark G.; Scott, Paul G.
A fly-only group (N=16) of Navy replacement pilots undergoing fleet readiness training in the SH-3 helicopter was compared with groups pre-trained on Device 2F64C with: (1) visual only (N=13); (2) no visual/no motion (N=14); and (3) one visual plus motion group (N=19). Groups were compared for their SH-3 helicopter performance in the transition…
Gaze, David C; Prante, Christian; Dreier, Jens; Knabbe, Cornelius; Collet, Corinne; Launay, Jean-Marie; Franekova, Janka; Jabor, Antonin; Lennartz, Lieselotte; Shih, Jessie; del Rey, Jose Manuel; Zaninotto, Martina; Plebani, Mario; Collinson, Paul O
2014-06-01
Galectin-3 is secreted from macrophages and binds and activates fibroblasts forming collagen. Tissue fibrosis is central to the progression of chronic heart failure (CHF). We performed a European multicentered evaluation of the analytical performance of the two-step routine and Short Turn-Around-Time (STAT) galectin-3 immunoassay on the ARCHITECT i1000SR, i2000SR, and i4000SR (Abbott Laboratories). We evaluated the assay precision and dilution linearity for both routine and STAT assays and compared serum and plasma, and fresh vs. frozen samples. The reference interval and biological variability were also assessed. Measurable samples were compared between ARCHITECT instruments and between the routine and STAT assays and also to a galectin-3 ELISA (BG Medicine). The total assay coefficient of variation (CV%) was 2.3%-6.2% and 1.7%-7.4% for the routine and STAT assays, respectively. Both assays demonstrated linearity up to 120 ng/mL. Galectin-3 concentrations were higher in plasma samples than in serum samples and correlated well between fresh and frozen samples (R=0.997), between the routine and STAT assays, between the ARCHITECT i1000 and i2000 instruments and with the galectin-3 ELISA. The reference interval on 627 apparently healthy individuals (53% male) yielded upper 95th and 97.5th percentiles of 25.2 and 28.4 ng/mL, respectively. Values were significantly lower in subjects younger than 50 years. The galectin-3 routine and STAT assays on the Abbott ARCHITECT instruments demonstrated good analytical performance. Further clinical studies are required to demonstrate the diagnostic and prognostic potential of this novel marker in patients with CHF.
Colletes, T C; Garcia, P T; Campanha, R B; Abdelnur, P V; Romão, W; Coltro, W K T; Vaz, B G
2016-03-07
The analytical performance of paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on stamping paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied on the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance on analysis of a range of typical analytes when compared to conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second generation ethanol process. Moreover, PS-PB performed well for the quantification of glucose in hydrolysis liquors, with excellent linearity (R(2) = 0.99) and low limits of detection (2.77 mmol L(-1)) and quantification (9.27 mmol L(-1)). The results are better than for PS-NP and PS-RC. PS-PB also compared well with the HPLC-UV method for glucose quantification in hydrolysis liquor samples.
Balest, Lydia; Murgolo, Sapia; Sciancalepore, Lucia; Montemurro, Patrizia; Abis, Pier Paolo; Pastore, Carlo; Mascolo, Giuseppe
2016-06-01
An on-line solid phase extraction coupled with high-performance liquid chromatography in tandem with mass spectrometry (on-line SPE/HPLC/MS-MS) method for the determination of five microcystins and nodularin in surface waters at submicrogram per liter concentrations has been optimized. Maximum recoveries were achieved by carefully optimizing the extraction sample volume, loading solvent, wash solvent, and pH of the sample. The developed method was also validated according to both UNI EN ISO IEC 17025 and UNICHIM guidelines. Specifically, ten analytical runs were performed at three different concentration levels using a reference mix solution containing the six analytes. The method was applied for monitoring the concentrations of microcystins and nodularin in real surface water during a sampling campaign of 9 months in which the ELISA method was used as the standard official method. The results of the two methods were compared, showing good agreement at the highest MC concentrations found. Graphical abstract: An on-line SPE/HPLC/MS-MS method for the determination of five microcystins and nodularin in surface waters at sub-μg L(-1) levels was optimized and compared with the ELISA method for real samples.
Tempestilli, Massimo; Pucci, Luigia; Notari, Stefania; Di Caro, Antonino; Castilletti, Concetta; Rivelli, Maria Rosaria; Agrati, Chiara; Pucillo, Leopoldo Paolo
2015-11-01
Ebola virus, an enveloped virus, is the cause of the largest and most complex Ebola virus disease (EVD) outbreak in West Africa. Blood or body fluids of an infected person may represent a biohazard to laboratory workers. Laboratory tests of virus containing specimens should be conducted in referral centres at biosafety level 4, but based on the severity of clinical symptoms, basic laboratories might be required to execute urgent tests for patients suspected of EVD. The aim of this work was to compare the analytical performances of laboratory tests when Triton X-100, a chemical agent able to inactivate other enveloped viruses, was added to specimens. Results of clinical chemistry, coagulation and haematology parameters on samples before and after the addition of 0.1% (final concentration) of Triton X-100 and 1 h of incubation at room temperature were compared. Overall, results showed very good agreement by all statistical analyses. Triton X-100 at 0.1% did not significantly affect the results for the majority of the analytes tested. Triton X-100 at 0.1% can be used to reduce the biohazard in performing laboratory tests on samples from patients with EVD without affecting clinical decisions.
Truxene-Based Hyperbranched Conjugated Polymers: Fluorescent Micelles Detect Explosives in Water.
Huang, Wei; Smarsly, Emanuel; Han, Jinsong; Bender, Markus; Seehafer, Kai; Wacker, Irene; Schröder, Rasmus R; Bunz, Uwe H F
2017-01-25
We report two hyperbranched conjugated polymers (HCP) with truxene units as core and 1,4-didodecyl-2,5-diethynylbenzene as well as 1,4-bis(dodecyloxy)-2,5-diethynylbenzene as comonomers. Two analogous poly(para-phenyleneethynylene)s (PPE) are also prepared as comparison to demonstrate the difference between the truxene and the phenyl moieties in their optical properties and their sensing performance. The four polymers are tested for nitroaromatic analytes and display different fluorescence quenching responses. The quenching efficiencies are dependent upon the spectral overlap between the absorbance of the analyte and the emission of the fluorescent polymer. Optical fingerprints are obtained, based on the unique response patterns of the analytes toward the polymers. With this small sensor array, one can distinguish nine nitroaromatic analytes with 100% accuracy. The amphiphilic polymer F127 (a polyethylene glycol-polypropylene glycol block copolymer) carries the hydrophobic HCPs and self-assembles into micelles in water, forming highly fluorescent HCP micelles. The micelle-bound conjugated polymers detect nitroaromatic analytes effectively in water and show an increased sensitivity compared to the sensing of nitroaromatics in organic solvents. The nitroarenes are also discriminated in water using this four-element chemical tongue.
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
NASA Astrophysics Data System (ADS)
Kesharwani, Manoj K.; Sylvetsky, Nitai; Martin, Jan M. L.
2017-11-01
We show that the DCSD (distinguishable clusters with all singles and doubles) correlation method permits the calculation of vibrational spectra at near-CCSD(T) quality but at no more than CCSD cost, and with comparatively inexpensive analytical gradients. For systems dominated by a single reference configuration, even MP2.5 is a viable alternative, at MP3 cost. MP2.5 performance for vibrational frequencies is comparable to double hybrids such as DSD-PBEP86-D3BJ, but without resorting to empirical parameters. DCSD is also quite suitable for computing zero-point vibrational energies in computational thermochemistry.
Defining Higher-Order Turbulent Moment Closures with an Artificial Neural Network and Random Forest
NASA Astrophysics Data System (ADS)
McGibbon, J.; Bretherton, C. S.
2017-12-01
Unresolved turbulent advection and clouds must be parameterized in atmospheric models. Modern higher-order closure schemes depend on analytic moment closure assumptions that diagnose higher-order moments in terms of lower-order ones. These are then tested against Large-Eddy Simulation (LES) higher-order moment relations. However, these relations may not be neatly analytic in nature. Rather than rely on an analytic higher-order moment closure, can we use machine learning on LES data itself to define a higher-order moment closure? We assess the ability of a deep artificial neural network (NN) and random forest (RF) to perform this task using a set of observationally-based LES runs from the MAGIC field campaign. By training on a subset of 12 simulations and testing on the remaining simulations, we avoid over-fitting the training data. Performance of the NN and RF will be assessed and compared to the Analytic Double Gaussian 1 (ADG1) closure assumed by Cloudy Layers Unified By Binormals (CLUBB), a higher-order turbulence closure currently used in the Community Atmosphere Model (CAM). We will show that the RF outperforms the NN and the ADG1 closure for the MAGIC cases within this diagnostic framework. Progress and challenges in using a diagnostic machine learning closure within a prognostic cloud and turbulence parameterization will also be discussed.
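As a minimal sketch of the data-driven closure idea described in this abstract: a random forest can be trained to diagnose a higher-order moment from lower-order ones, then scored on held-out data. Everything below is illustrative — the features, the "true" closure relation, and the train/test split are synthetic stand-ins for the MAGIC LES data used in the study.

```python
# Hypothetical sketch: learn a higher-order moment closure from data with a
# random forest. Synthetic data stands in for LES output in the real study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Lower-order moments (e.g. variances, fluxes) as features
X = rng.normal(size=(2000, 4))
# A nonlinear "true" closure relation plus noise stands in for the LES target
y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=2000)

# Hold out a contiguous block (analogous to withholding whole simulations)
# rather than random rows, to avoid over-fitting the training data
X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
r2 = rf.score(X_test, y_test)  # skill of the learned closure on held-out data
```

The held-out R² plays the role of the diagnostic comparison against an analytic closure such as ADG1.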
Microchip integrating magnetic nanoparticles for allergy diagnosis.
Teste, Bruno; Malloggi, Florent; Siaugue, Jean-Michel; Varenne, Anne; Kanoufi, Frederic; Descroix, Stéphanie
2011-12-21
We report on the development of a simple and easy-to-use microchip dedicated to allergy diagnosis. This microchip combines the advantages of homogeneous immunoassays (species diffusion) and heterogeneous immunoassays (easy separation and preconcentration steps). In vitro allergy diagnosis is based on specific Immunoglobulin E (IgE) quantitation; accordingly, we have developed and integrated magnetic core-shell nanoparticles (MCSNPs) as an IgE capture nanoplatform in a microdevice, taking benefit from both their magnetic and colloidal properties. Integrating such an immunosupport allows the target analyte (IgE) to be captured in the colloidal phase, increasing the capture kinetics since both immunological partners diffuse during the immune reaction. This colloidal approach improves the analyte capture kinetics 1000-fold compared to conventional methods. Moreover, based on the MCSNPs' magnetic properties and on the magnetic chamber we previously developed, the MCSNPs and therefore the target can be confined and preconcentrated within the microdevice prior to the detection step. The MCSNP preconcentration factor achieved was about 35,000, which allows high sensitivity to be reached without catalytic amplification during the detection step. The developed microchip offers many advantages: the analytical procedure is fully integrated on-chip, analyses are performed in a short assay time (20 min), and sample and reagent consumption is reduced to a few microlitres (5 μL), while a low limit of detection can be achieved (about 1 ng mL(-1)).
Kumar, B Vinodh; Mohan, Thuthi
2018-01-01
Six Sigma is one of the most popular quality management tools employed for process improvement. Six Sigma methods are usually applied when the outcome of a process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating sigma metrics for each parameter, and to follow the Westgard guidelines for the appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. This is a retrospective study, and the data required were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are the IQC coefficient of variation percentage and the External Quality Assurance Scheme (EQAS) bias percentage for 16 biochemical parameters. For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma, and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma. For the level 2 IQC, the same four analytes as level 1 showed a performance of ≥6 sigma, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma. For all analytes below 6 sigma, the quality goal index (QGI) was <0.8, indicating imprecision as the area requiring improvement, except for cholesterol, whose QGI of >1.2 indicated inaccuracy. This study shows that sigma metrics are a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes.
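The sigma-metric and quality goal index (QGI) calculations this kind of study relies on follow standard formulas: sigma = (TEa − bias) / CV and QGI = bias / (1.5 × CV), all in percent. A minimal sketch, with illustrative TEa, bias, and CV values (not the study's data):

```python
# Sigma metric and QGI as commonly defined in clinical chemistry QC.
# The numeric inputs below are invented for illustration only.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - Bias) / CV, with all quantities in percent."""
    return (tea_pct - bias_pct) / cv_pct

def quality_goal_index(bias_pct: float, cv_pct: float) -> float:
    """QGI = Bias / (1.5 * CV); <0.8 points to imprecision, >1.2 to inaccuracy."""
    return bias_pct / (1.5 * cv_pct)

sigma = sigma_metric(tea_pct=30.0, bias_pct=3.0, cv_pct=4.5)  # -> 6.0
qgi = quality_goal_index(bias_pct=3.0, cv_pct=4.5)            # -> ~0.44
```

With these inputs the analyte would sit at the ideal ≥6 sigma level, and its QGI <0.8 would attribute any shortfall to imprecision rather than inaccuracy.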
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkins, T.F.; Thorne, P.G.; Myers, K.F.
Salting-out solvent extraction (SOE) was compared with cartridge and membrane solid-phase extraction (SPE) for preconcentration of nitroaromatics, nitramines, and aminonitroaromatics prior to determination by reversed-phase high-performance liquid chromatography. The solid phases used were manufacturer-cleaned materials: Porapak RDX for the cartridge method and Empore SDB-RPS for the membrane method. Thirty-three groundwater samples from the Naval Surface Warfare Center, Crane, Indiana, were analyzed using the direct analysis protocol specified in SW-846 Method 8330, and the results were compared with analyses conducted after preconcentration using SOE with acetonitrile, cartridge-based SPE, and membrane-based SPE. For high-concentration samples, analytical results from the three preconcentration techniques were compared with results from the direct analysis protocol; good recovery of all target analytes was achieved by all three preconcentration methods. For low-concentration samples, results from the two SPE methods were correlated with results from the SOE method; very similar data were obtained by the SOE and SPE methods, even at concentrations well below 1 microgram/L.
Validation of a multiplex electrochemiluminescent immunoassay platform in human and mouse samples
Bastarache, J.A.; Koyama, T.; Wickersham, N.E; Ware, L.B.
2014-01-01
Despite the widespread use of multiplex immunoassays, there are very few scientific reports that test the accuracy and reliability of a platform prior to publication of experimental data. Our laboratory has previously demonstrated the need for validation of a new assay platform prior to use of biologic samples from large studies, in order to optimize sample handling and assay performance. In this study, our goal was to test the accuracy and reproducibility of an electrochemiluminescent multiplex immunoassay platform (Meso Scale Discovery, MSD®) and compare this platform to validated singleplex immunoassays (R&D Systems®) using actual study subject samples (human plasma and mouse bronchoalveolar lavage fluid (BALF) and plasma). We found that the MSD platform performed well on intra- and inter-assay comparisons, spike and recovery, and cross-platform comparisons. The mean intra-assay CV% (and range) for MSD was 3.49 (0.0-10.4) for IL-6 and 2.04 (0.1-7.9) for IL-8. The correlation between values for identical samples measured on both MSD and R&D was R=0.97 for both analytes. The mouse MSD assay had a broader range of CV%, with means ranging from 9.5 to 28.5 depending on the analyte. The range of mean CV% was similar for singleplex ELISAs, at 4.3-23.7 depending on the analyte. Regardless of species or sample type, CV% was more variable at lower protein concentrations. In conclusion, we validated a multiplex electrochemiluminescent assay system and found that it has superior test characteristics in human plasma compared to mouse BALF and plasma. Both human and mouse MSD assays compared favorably to well-validated singleplex ELISAs. PMID:24768796
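The intra-assay CV% quoted above is the standard coefficient of variation of replicate measurements. A minimal sketch of that computation, with invented replicate values (duplicate wells for a hypothetical IL-6 sample):

```python
# Intra-assay CV%: 100 * SD / mean of replicate measurements of one sample.
# The replicate values are made up for illustration.
import statistics

def cv_percent(replicates):
    """Coefficient of variation, in percent, of a list of replicate values."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample SD (n - 1 denominator)
    return 100.0 * sd / mean

# Duplicate wells for one hypothetical IL-6 sample (pg/mL)
cv = cv_percent([102.0, 98.0])  # -> ~2.83%
```

Averaging this quantity over all samples on a plate gives the mean intra-assay CV% reported per analyte.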
Drug screening in medical examiner casework by high-resolution mass spectrometry (UPLC-MSE-TOF).
Rosano, Thomas G; Wood, Michelle; Ihenetu, Kenneth; Swift, Thomas A
2013-10-01
Postmortem drug findings yield important analytical evidence in medical examiner casework, and chromatography coupled with nominal mass spectrometry (MS) serves as the predominant general unknown screening approach. We report screening by ultra performance liquid chromatography (UPLC) coupled with hybrid quadrupole time-of-flight mass spectrometer (MS(E)-TOF), with comparison to previously validated nominal mass UPLC-MS and UPLC-MS-MS methods. UPLC-MS(E)-TOF screening for over 950 toxicologically relevant drugs and metabolites was performed in a full-spectrum (m/z 50-1,000) mode using an MS(E) acquisition of both molecular and fragment ion data at low (6 eV) and ramped (10-40 eV) collision energies. Mass error averaged 1.27 ppm for a large panel of reference drugs and metabolites. The limit of detection by UPLC-MS(E)-TOF ranges from 0.5 to 100 ng/mL and compares closely with UPLC-MS-MS. The influence of column recovery and matrix effect on the limit of detection was demonstrated with ion suppression by matrix components correlating closely with early and late eluting reference analytes. Drug and metabolite findings by UPLC-MS(E)-TOF were compared with UPLC-MS and UPLC-MS-MS analyses of postmortem blood in 300 medical examiner cases. Positive findings by all methods totaled 1,528, with a detection rate of 57% by UPLC-MS, 72% by UPLC-MS-MS and 80% by combined UPLC-MS and UPLC-MS-MS screening. Compared with nominal mass screening methods, UPLC-MS(E)-TOF screening resulted in a 99% detection rate and, in addition, offered the potential for the detection of nontargeted analytes via high-resolution acquisition of molecular and fragment ion data.
Martinuzzo, Marta E; Duboscq, Cristina; Lopez, Marina S; Barrera, Luis H; Vinuales, Estela S; Ceresetto, Jose; Forastiero, Ricardo R; Oyhamburu, Jose
2018-06-01
The oral anticoagulant rivaroxaban does not require laboratory monitoring, but in some situations measurement of plasma levels is useful. The objective of this paper was to verify the analytical performance of, and compare, two rivaroxaban-calibrated anti-Xa assay/coagulometer systems using specific or other-brand calibrators. In 59 samples drawn at trough or peak from patients taking rivaroxaban, plasma levels were measured by HemosIL Liquid Anti-Xa on the ACL TOP 300/500 and by STA Liquid Anti-Xa on the TCoag Destiny Plus. HemosIL and STA rivaroxaban calibrators and controls were used. CLSI guideline procedures EP15-A3 for precision and trueness, EP6 for linearity, and EP9 for method comparison were followed. Within-run and total coefficients of variation (CVR and CVWL, respectively) of plasma rivaroxaban were <4.2% and <4.85%, and bias was <7.4% and <6.5%, for the HemosIL-ACL TOP and STA-Destiny systems, respectively. Linearity was verified over 8-525 ng/mL. Deming regression for method comparison presented R = 0.963, 0.968 and 0.982, with a mean CV of 13.3% when using different systems and calibrations. The analytical performance of plasma rivaroxaban measurement was acceptable on both systems, and results from the reagent/coagulometer systems are comparable even when calibrating with different-brand material.
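Deming regression, the method-comparison technique named above (per CLSI EP9), has a closed-form solution when the ratio of error variances between the two methods is assumed to be 1. A minimal sketch, with invented paired rivaroxaban measurements (ng/mL) on two hypothetical systems:

```python
# Deming regression with an assumed error-variance ratio delta = 1.
# Data points are invented for illustration; they are not the study's samples.
import math

def deming(x, y, delta=1.0):
    """Return (slope, intercept) of the Deming regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    slope = (syy - delta * sxx
             + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)
             ) / (2 * sxy)
    return slope, my - slope * mx

# Paired levels from two hypothetical reagent/coagulometer systems
x = [10, 50, 120, 250, 400, 520]
y = [12, 48, 125, 245, 410, 515]
slope, intercept = deming(x, y)  # slope near 1 indicates comparable systems
```

Unlike ordinary least squares, Deming regression allows measurement error in both methods, which is why it is the conventional choice for comparing two assays.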
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
Xiao, Xiaohua; Song, Wei; Wang, Jiayue; Li, Gongke
2012-01-27
In this study, low-temperature vacuum microwave-assisted extraction, which performs microwave-assisted extraction (MAE) at low temperature and in a vacuum environment simultaneously, was proposed. The influencing parameters, including solid/liquid ratio, extraction temperature, extraction time, degree of vacuum and microwave power, were discussed. The advantage of low-temperature vacuum MAE was investigated by comparing the extraction yields of vitamin C, β-carotene, aloin A and astaxanthin in different foods with those of MAE and solvent extraction; increments of 5.2-243% were obtained. In addition, chemical kinetics for vitamin C and aloin A were proposed, comprising two steps: extraction of the analyte from the matrix into the solvent, and decomposition of the analyte in the extraction solvent. All of the decomposition rates (K(2)) for the selected analytes at low temperature, in vacuo and in a nitrogen atmosphere decreased significantly compared with those in conventional MAE, in agreement with the experimental results. Consequently, the present method was successfully applied to extract labile compounds from different food samples. These results show that a low-temperature and/or vacuum environment in a microwave-assisted extraction system is especially important for preventing the degradation of labile components, and has good potential for the extraction of labile compounds in foods, pharmaceuticals and natural products. Copyright © 2011 Elsevier B.V. All rights reserved.
Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer
2016-09-01
Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Resolution of seven-axis manipulator redundancy: A heuristic issue
NASA Technical Reports Server (NTRS)
Chen, I.
1990-01-01
An approach is presented for the resolution of the redundancy of a seven-axis manipulator arm from the AI and expert systems point of view. This approach is heuristic, analytical, and globally resolves the redundancy at the position level. When compared with other approaches, this approach has several improved performance capabilities, including singularity avoidance, repeatability, stability, and simplicity.
NLC Luminosity as a Function of Beam Parameters
NASA Astrophysics Data System (ADS)
Nosochkov, Y.
2002-06-01
Realistic calculation of NLC luminosity has been performed using particle tracking in DIMAD and beam-beam simulations in GUINEA-PIG code for various values of beam emittance, energy and beta functions at the Interaction Point (IP). Results of the simulations are compared with analytic luminosity calculations. The optimum range of IP beta functions for high luminosity was identified.
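For context, the analytic estimate that such tracking and beam-beam simulations are typically compared against is the standard Gaussian-beam luminosity formula (a textbook expression, not quoted from this abstract):

```latex
% L: luminosity; N: particles per bunch; f_rep: repetition rate;
% n_b: bunches per pulse; sigma*_{x,y}: IP beam sizes; H_D: disruption
% (pinch) enhancement factor extracted from beam-beam simulation.
\mathcal{L} = \frac{N^{2} f_{\mathrm{rep}}\, n_b}{4\pi\, \sigma_x^{*} \sigma_y^{*}}\, H_D,
\qquad
\sigma_{x,y}^{*} = \sqrt{\frac{\varepsilon_{x,y}\, \beta_{x,y}^{*}}{\gamma}}
```

Here the IP spot sizes depend on the normalized emittances and the IP beta functions, which is why the simulations scan emittance, energy, and beta functions to locate the optimum.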
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two estimation algorithms of recent usage, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms well established in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
The Influence of Delaying Judgments of Learning on Metacognitive Accuracy: A Meta-Analytic Review
ERIC Educational Resources Information Center
Rhodes, Matthew G.; Tauber, Sarah K.
2011-01-01
Many studies have examined the accuracy of predictions of future memory performance solicited through judgments of learning (JOLs). Among the most robust findings in this literature is that delaying predictions serves to substantially increase the relative accuracy of JOLs compared with soliciting JOLs immediately after study, a finding termed the…
A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.
Bartzsch, Stefan; Oelfke, Uwe
2013-11-01
The advent of widespread kV cone-beam computed tomography in image-guided radiation therapy, and special therapeutic applications of keV photons such as microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering, and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue, render these dose calculations far more challenging than those established for corresponding MeV beams. That is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed excellent agreement with the Monte Carlo method: calculated values deviate by less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.
Kostera, Joshua; Leckie, Gregor; Tang, Ning; Lampinen, John; Szostak, Magdalena; Abravaya, Klara; Wang, Hong
2016-12-01
Clinical management of drug-resistant tuberculosis patients continues to present significant challenges to global health. To tackle these challenges, the Abbott RealTime MTB RIF/INH Resistance assay was developed to accelerate the diagnosis of rifampicin- and/or isoniazid-resistant tuberculosis to within a day. This article summarizes the performance of the Abbott RealTime MTB RIF/INH Resistance assay, including reliability, analytical sensitivity, and clinical sensitivity/specificity as compared to Cepheid GeneXpert MTB/RIF version 1.0 and Hain MTBDRplus version 2.0. The limit of detection (LOD) of the Abbott RealTime MTB RIF/INH Resistance assay was determined to be 32 colony-forming units/milliliter (cfu/mL) using the Mycobacterium tuberculosis (MTB) strain H37Rv cell line. For rifampicin resistance detection, the Abbott RealTime MTB RIF/INH Resistance assay demonstrated statistically equivalent clinical sensitivity and specificity as compared to Cepheid GeneXpert MTB/RIF. For isoniazid resistance detection, the assay demonstrated statistically equivalent clinical sensitivity and specificity as compared to Hain MTBDRplus. The performance data presented herein demonstrate that the Abbott RealTime MTB RIF/INH Resistance assay is a sensitive, robust, and reliable test for real-time simultaneous detection of resistance to the first-line anti-tuberculosis antibiotics rifampicin and isoniazid in patient specimens. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
MODEL CORRELATION STUDY OF A RETRACTABLE BOOM FOR A SOLAR SAIL SPACECRAFT
NASA Technical Reports Server (NTRS)
Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakpczy, J. M.
2005-01-01
To realize design concepts, predict dynamic behavior, and develop appropriate control strategies for high-performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents the dynamic behavior of spacecraft of various sizes. Since motion of the vehicle is dominated by the retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted on the 30-meter, lightweight, retractable lattice boom at NASA MSFC, which is structurally and dynamically similar to those of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with mobile phase composition. In the present study this preliminary experimental work has been simplified: the analyte pKa values are calculated through equations whose coefficients vary depending on the functional group. This new approach also required further simplifications regarding the retention of the fully neutral and fully ionized species. After the simplifications were applied, new predicted values were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz
This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time-efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range, along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
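The bootstrap-based probabilistic sensitivity analysis described above can be sketched in a few lines: resample patient-level cost and effect data with replacement, recompute the incremental cost-effectiveness ratio (ICER) on each Monte Carlo draw, and summarize the resulting distribution. All numbers below are invented for illustration; the paper's actual model concerns Helicobacter pylori eradication.

```python
# Probabilistic sensitivity analysis via bootstrap Monte Carlo: resample
# patient-level (cost, effect) data and recompute the ICER on each draw.
# Data are invented placeholders, not from the paper's model.
import random

random.seed(1)
# Hypothetical per-patient costs and effects for two strategies
costs_a, effects_a = [900, 1100, 1000, 950, 1050], [0.8, 0.9, 0.85, 0.88, 0.82]
costs_b, effects_b = [600, 700, 650, 620, 680], [0.7, 0.75, 0.72, 0.74, 0.71]

def boot_mean(values):
    """Mean of one bootstrap resample (sampling with replacement)."""
    n = len(values)
    return sum(random.choice(values) for _ in range(n)) / n

icers = []
for _ in range(1000):  # Monte Carlo replications
    d_cost = boot_mean(costs_a) - boot_mean(costs_b)
    d_eff = boot_mean(effects_a) - boot_mean(effects_b)
    if d_eff != 0:
        icers.append(d_cost / d_eff)

# Report an uncertainty interval for the ICER instead of a single base case
icers.sort()
lo, hi = icers[int(0.025 * len(icers))], icers[int(0.975 * len(icers))]
```

Using the empirical bootstrap here avoids having to assume theoretical distributions for the model inputs, which is the advantage the abstract highlights.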
NASA Technical Reports Server (NTRS)
Storey, Jedediah Morse
2016-01-01
Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many experimental and numerical studies of water slosh have been conducted. However, slosh data for cryogenic liquids is lacking. Water and cryogenic liquid nitrogen are used in various ground-based tests with a spherical tank to characterize damping, slosh mode frequencies, and slosh forces. A single ring baffle is installed in the tank for some of the tests. Analytical models for slosh modes, slosh forces, and baffle damping are constructed based on prior work. Select experiments are simulated using a commercial CFD software, and the numerical results are compared to the analytical and experimental results for the purposes of validation and methodology-improvement.
21 CFR 809.30 - Restrictions on the sale, distribution and use of analyte specific reagents.
Code of Federal Regulations, 2010 CFR
2010-04-01
... other than providing diagnostic information to patients and practitioners, e.g., forensic, academic... include the statement for class I exempt ASR's: “Analyte Specific Reagent. Analytical and performance... and performance characteristics are not established”; and (4) Shall not make any statement regarding...
Summary of recent NASA propeller research
NASA Technical Reports Server (NTRS)
Mikkelson, D. C.; Mitchell, G. A.; Bober, L. J.
1984-01-01
Advanced high-speed propellers offer large performance improvements for aircraft that cruise in the Mach 0.7 to 0.8 speed regime. At these speeds, studies indicate a 15 to near 40 percent block fuel savings and associated operating cost benefits for advanced turboprops compared to equivalent-technology turbofan-powered aircraft. Recent wind tunnel results for five advanced eight- to ten-blade models are compared with analytical predictions. Test results show that blade sweep was important in achieving net efficiencies near 80 percent at Mach 0.8 and in reducing near-field cruise noise by about 6 dB. Lifting line and lifting surface aerodynamic analysis codes are under development, and some results are compared with propeller force and probe data. Also, analytical predictions are compared with some initial laser velocimeter measurements of the flow field velocities of an eight-bladed 45° swept propeller. Experimental aeroelastic results indicate that cascade effects and blade sweep strongly affect propeller aeroelastic characteristics. Comparisons of propeller near-field noise data with linear acoustic theory indicate that the theory adequately predicts near-field noise for subsonic tip speeds but overpredicts the noise for supersonic tip speeds.
Summary of recent NASA propeller research
NASA Technical Reports Server (NTRS)
Mikkelson, D. C.; Mitchell, G. A.; Bober, L. J.
1985-01-01
Advanced high-speed propellers offer large performance improvements for aircraft that cruise in the Mach 0.7 to 0.8 speed regime. At these speeds, studies indicate a 15 to near 40 percent block fuel savings and associated operating cost benefits for advanced turboprops compared to equivalent-technology turbofan-powered aircraft. Recent wind tunnel results for five advanced eight- to ten-blade models are compared with analytical predictions. Test results show that blade sweep was important in achieving net efficiencies near 80 percent at Mach 0.8 and in reducing near-field cruise noise by about 6 dB. Lifting line and lifting surface aerodynamic analysis codes are under development, and some results are compared with propeller force and probe data. Also, analytical predictions are compared with some initial laser velocimeter measurements of the flow field velocities of an eight-bladed 45° swept propeller. Experimental aeroelastic results indicate that cascade effects and blade sweep strongly affect propeller aeroelastic characteristics. Comparisons of propeller near-field noise data with linear acoustic theory indicate that the theory adequately predicts near-field noise for subsonic tip speeds, but overpredicts the noise for supersonic tip speeds.
Slushy weightings for the optimal pilot model. [considering visual tracking task
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results, and in general good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
Optimizing piezoelectric receivers for acoustic power transfer applications
NASA Astrophysics Data System (ADS)
Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.
2018-07-01
In this paper, we aim to optimize piezoelectric plate receivers for acoustic power transfer applications by analyzing the influence of the losses and of the acoustic boundary conditions. We derive the analytic expressions of the efficiency of the receiver with the optimal electric loads attached, and analyze the maximum efficiency value and its frequency with different loss and acoustic boundary conditions. To validate the analytical expressions that we have derived, we perform experiments in water with composite transducers of different filling fractions, and see that a lower acoustic impedance mismatch can compensate the influence of large dielectric and acoustic losses to achieve a good performance. Finally, we briefly compare the advantages and drawbacks of composite transducers and pure PZT (lead zirconate titanate) plates as acoustic power receivers, and conclude that 1–3 composites can achieve similar efficiency values in low power applications due to their adjustable acoustic impedance.
Li, Hang; He, Junting; Liu, Qin; Huo, Zhaohui; Liang, Si; Liang, Yong
2011-03-01
A tandem solid-phase extraction method (SPE) of connecting two different cartridges (C(18) and MCX) in series was developed as the extraction procedure in this article, which provided better extraction yields (>86%) for all analytes and more appropriate sample purification from endogenous interference materials compared with a single cartridge. Analyte separation was achieved on a C(18) reversed-phase column at the wavelength of 265 nm by high-performance liquid chromatography (HPLC). The method was validated in terms of extraction yield, precision and accuracy. These assays gave mean accuracy values higher than 89% with RSD values that were always less than 3.8%. The method has been successfully applied to plasma samples from rats after oral administration of target compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Internal quality control: best practice.
Kinns, Helen; Pitkin, Sarah; Housley, David; Freedman, Danielle B
2013-12-01
There is a wide variation in laboratory practice with regard to implementation and review of internal quality control (IQC). A poor approach can lead to a spectrum of scenarios from validation of incorrect patient results to over investigation of falsely rejected analytical runs. This article will provide a practical approach for the routine clinical biochemistry laboratory to introduce an efficient quality control system that will optimise error detection and reduce the rate of false rejection. Each stage of the IQC system is considered, from selection of IQC material to selection of IQC rules, and finally the appropriate action to follow when a rejection signal has been obtained. The main objective of IQC is to ensure day-to-day consistency of an analytical process and thus help to determine whether patient results are reliable enough to be released. The required quality and assay performance varies between analytes as does the definition of a clinically significant error. Unfortunately many laboratories currently decide what is clinically significant at the troubleshooting stage. Assay-specific IQC systems will reduce the number of inappropriate sample-run rejections compared with the blanket use of one IQC rule. In practice, only three or four different IQC rules are required for the whole of the routine biochemistry repertoire as assays are assigned into groups based on performance. The tools to categorise performance and assign IQC rules based on that performance are presented. Although significant investment of time and education is required prior to implementation, laboratories have shown that such systems achieve considerable reductions in cost and labour.
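A minimal sketch of the kind of multirule IQC check the article advocates, in Python. The rule names follow common multirule ("Westgard-style") usage; the z-score inputs are illustrative, and a real system would assign rule sets per assay group based on performance, as the article describes.

```python
def check_iqc(z_scores, rules=("1_3s", "2_2s", "R_4s")):
    """Flag an analytical run given QC results expressed as z-scores
    (observed minus target, divided by assay SD).
    Returns the list of violated rules; an empty list accepts the run."""
    violations = []
    # 1_3s: any single control exceeding +/- 3 SD rejects the run.
    if "1_3s" in rules and any(abs(z) > 3 for z in z_scores):
        violations.append("1_3s")
    # 2_2s: two consecutive controls beyond 2 SD on the same side.
    if "2_2s" in rules:
        for a, b in zip(z_scores, z_scores[1:]):
            if (a > 2 and b > 2) or (a < -2 and b < -2):
                violations.append("2_2s")
                break
    # R_4s: within-run range across controls exceeding 4 SD.
    if "R_4s" in rules and len(z_scores) >= 2 \
            and max(z_scores) - min(z_scores) > 4:
        violations.append("R_4s")
    return violations

# Accept a well-behaved run, reject an obviously shifted one.
ok = check_iqc([0.5, -1.2])
shifted = check_iqc([2.4, 2.6])
```

Assay-specific tuning then amounts to choosing which rule tuple applies to each performance group, rather than applying one blanket rule to the whole repertoire.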
2017-01-01
AFRL-SA-WP-SR-2017-0001: Population Spotting Using "Big Data": Validating the Human Performance Concept of Operations Analytic Vision
Fatigue Performance of Advanced High-Strength Steels (AHSS) GMAW Joints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Zhili; Sang, Yan; Jiang, Cindy
2009-01-01
The fatigue performance of gas metal arc welding (GMAW) joints of advanced high-strength steels (AHSS) is compared and analyzed. The steels studied included a number of different grades of AHSS and baseline mild steels: DP600, DP780, DP980, M130, M220, solution-annealed boron steel, fully hardened boron steel, HSLA690 and DR210 (a mild steel). Fatigue testing was conducted under a number of nominal stress ranges to obtain the S/N curves of the weld joints. A two-phase analytical model is developed to predict the fatigue performance of AHSS welds. It was found that there are appreciable differences in the fatigue S/N curves among different AHSS joints made using the same welding practices, suggesting that the local microstructure in the weld toe and root region plays a non-negligible role in the fatigue performance of AHSS welds. Changes in weld parameters can influence the joint characteristics, which in turn influence the fatigue life of the weld joints, particularly those of higher-strength AHSS. The analytical model is capable of reasonably predicting the fatigue performance of welds made with the various steel grades in this study.
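S/N data of the sort obtained from such fatigue tests are commonly summarized by fitting the Basquin relation N = C * S^-m in log-log space. The sketch below is a generic least-squares fit on synthetic, scatter-free data; it is not the paper's two-phase model, and the constants are illustrative.

```python
import math

def fit_basquin(stress_ranges, cycles):
    """Least-squares fit of N = C * S**(-m) in log-log coordinates:
    ln N = ln C - m * ln S. Returns (m, C)."""
    xs = [math.log(s) for s in stress_ranges]
    ys = [math.log(n) for n in cycles]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    m = -slope
    c = math.exp(ybar + m * xbar)
    return m, c

# Synthetic data generated from N = 1e12 * S**-3 (no scatter),
# stress range in MPa, cycles to failure.
stresses = [100.0, 150.0, 200.0, 300.0]
cycles = [1e12 * s ** -3 for s in stresses]
m, c = fit_basquin(stresses, cycles)
```

On real weld data the residual scatter around such a fit, and any slope differences between steel grades, are exactly the S/N differences the study reports.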
Decision-directed detector for overlapping PCM/NRZ signals.
NASA Technical Reports Server (NTRS)
Wang, C. D.; Noack, T. L.
1973-01-01
A decision-directed (DD) technique for the detection of overlapping PCM/NRZ signals in the presence of white Gaussian noise is investigated. The performance of the DD detector is represented by probability of error Pe versus input signal-to-noise ratio (SNR). To examine how much improvement in performance can be achieved with this technique, Pe's with and without DD feedback are evaluated in parallel. Further, analytical results are compared with those found by Monte Carlo simulations. The results are in good agreement.
Extracting and identifying concrete structural defects in GPR images
NASA Astrophysics Data System (ADS)
Ye, Qiling; Jiao, Liangbao; Liu, Chuanxin; Cao, Xuehong; Huston, Dryver; Xia, Tian
2018-03-01
Traditionally, most GPR data interpretation is performed manually. With the advancement of computing technologies, how to automate GPR data interpretation to achieve high efficiency and accuracy has become an active research subject. In this paper, analytical characterizations of major defects in concrete structures, including delamination, air voids and moisture in GPR images, are performed. In the study, the image features of different defects are compared. Algorithms are developed for defect feature extraction and identification. For validation, both simulation results and field test data are utilized.
Propeller flow visualization techniques
NASA Technical Reports Server (NTRS)
Stefko, G. L.; Paulovich, F. J.; Greissing, J. P.; Walker, E. D.
1982-01-01
Propeller flow visualization techniques were tested. The actual operating blade shape as it determines the actual propeller performance and noise was established. The ability to photographically determine the advanced propeller blade tip deflections, local flow field conditions, and gain insight into aeroelastic instability is demonstrated. The analytical prediction methods which are being developed can be compared with experimental data. These comparisons contribute to the verification of these improved methods and give improved capability for designing future advanced propellers with enhanced performance and noise characteristics.
NASA Technical Reports Server (NTRS)
Delleur, Ann M.; Kerslake, Thomas W.
2002-01-01
With the first United States (U.S.) photovoltaic array (PVA) activated on the International Space Station (ISS) in December 2000, on-orbit data can now be compared to analytical predictions. Due to ISS operational constraints, it is not always possible to point the front side of the arrays at the Sun. Thus, in many cases, sunlight directly illuminates the backside of the PVA, along with albedo illumination on either the front or the back. During this time, appreciable power is produced since the solar cells are mounted on a thin, solar-transparent substrate. It is important to present accurate predictions for both front- and backside power generation for mission planning, certification of flight readiness for a given mission, and on-orbit mission support. To provide a more detailed assessment of the ISS power production capability, the authors developed a PVA electrical performance model applicable to generalized bifacial illumination conditions. On-orbit PVA performance data were also collected and analyzed. This paper describes the ISS PVA performance model and the methods used to reduce orbital performance data. Analyses were performed using SPACE, a NASA-GRC-developed computer code for the ISS program office. Results showed excellent agreement between on-orbit performance data and analytical results.
Maly, Friedrich E; Fried, Roman; Spannagl, Michael
2014-01-01
INSTAND e.V. has provided Molecular Genetics Multi-Analyte EQA schemes since 2006. EQA participation and performance were assessed from 2006 - 2012. From 2006 to 2012, the number of analytes in the Multi-Analyte EQA schemes rose from 17 to 53. Total number of results returned rose from 168 in January 2006 to 824 in August 2012. The overall error rate was 1.40 +/- 0.84% (mean +/- SD, N = 24 EQA dates). From 2006 to 2012, no analyte was reported 100% correctly. Individual participant performance was analysed for one common analyte, Lactase (LCT) T-13910C. From 2006 to 2012, 114 laboratories participated in this EQA. Of these, 10 laboratories (8.8%) reported at least one wrong result during the whole observation period. All laboratories reported correct results after their failure incident. In spite of the low overall error rate, EQA will continue to be important for Molecular Genetics.
Tchamna, Rodrigue; Lee, Moonyong
2018-01-01
This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Comparison of Three Methods for Wind Turbine Capacity Factor Estimation
Ditkovich, Y.; Kuperman, A.
2014-01-01
Three approaches to calculating capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first “quasiexact” approach utilizes discrete wind raw data (in the histogram form) and manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second “analytic” approach employs a continuous probability distribution function, fitted to the wind data as well as continuous turbine power curve, resulting from double polynomial fitting of manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically thus providing a valuable insight into aspects, affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third “approximate” approach, valid in case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, only requiring rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, enforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
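The "quasiexact" approach, the first of the three compared above, reduces to a probability-weighted average of the discrete power curve over the wind-speed histogram, divided by rated power. A minimal sketch with hypothetical histogram bins and power-curve values:

```python
def capacity_factor(wind_hist, power_curve, rated_power):
    """'Quasiexact' capacity factor from a wind-speed histogram and a
    discrete manufacturer power curve.

    wind_hist:   dict speed_bin -> probability (should sum to ~1)
    power_curve: dict speed_bin -> power output in the same bins (kW)
    """
    mean_power = sum(p * power_curve.get(v, 0.0)
                     for v, p in wind_hist.items())
    return mean_power / rated_power

# Hypothetical 1 MW turbine: histogram bins (m/s) and power (kW).
hist = {4: 0.2, 6: 0.3, 8: 0.25, 10: 0.15, 12: 0.1}
curve = {4: 50, 6: 200, 8: 450, 10: 800, 12: 1000}
cf = capacity_factor(hist, curve, rated_power=1000.0)
```

The "analytic" approach replaces the histogram with a fitted continuous distribution (e.g. Weibull) and the power curve with a polynomial fit, turning the same sum into an integral that can be solved in closed form.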
Cacho, J I; Campillo, N; Viñas, P; Hernández-Córdoba, M
2012-07-20
This paper describes a method for the simultaneous determination of bisphenol A (BPA), bisphenol F (BPF), bisphenol Z (BPZ) and biphenol (BP), using stir bar sorptive extraction (SBSE) in combination with thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS). Several parameters affecting both extraction and thermal desorption of the SBSE stages were carefully optimized by multivariate designs. SBSE was performed with two derivatization procedures, in situ acetylation and in tube silylation, and the results were compared with those obtained when the analytes were not derivatized. The proposed method, determining the analytes as acyl derivatives, was applied to analyze commercially canned beverages, as well as the filling liquids of canned vegetables, providing detection limits of between 4.7 and 12.5 ng L⁻¹, depending on the compound. The intraday and interday precisions were lower than 6% in terms of relative standard deviation. Recovery studies at two concentration levels, 0.1 and 1 μg L⁻¹, were performed providing recoveries in the 86-122% range. The samples analyzed contained higher concentrations of BPA than of the other analytes. Copyright © 2012 Elsevier B.V. All rights reserved.
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the biological data amount is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. At last we discuss the open issues in big biological data analytics.
Tuning the gas sensing performance of single PEDOT nanowire devices.
Hangarter, Carlos M; Hernandez, Sandra C; He, Xueing; Chartuprayoon, Nicha; Choa, Yong Ho; Myung, Nosang V
2011-06-07
This paper reports the synthesis and dopant dependent electrical and sensing properties of single poly(ethylenedioxythiophene) (PEDOT) nanowire sensors. Dopant type (i.e. polystyrenesulfonate (PSS(-)) and perchlorate (ClO(4)(-))) and solvent (i.e. acetonitrile and 1 : 1 water-acetonitrile mixture) were adjusted to change the conjugation length and hydrophilicity of nanowires which resulted in change of the electrical properties and sensing performance. Temperature dependent coefficient of resistance (TCR) indicated that the electrical properties are greatly dependent on dopants and electrolyte where greater disorder was found in PSS(-) doped PEDOT nanowires compared to ClO(4)(-) doped nanowires. Upon exposure to different analytes including water vapor and volatile organic compounds, these nanowire devices displayed substantially different sensing characteristics. ClO(4)(-) doped PEDOT nanowires from an acetonitrile bath show superior sensing responses toward less electronegative analytes and followed a power law dependence on the analyte concentration at high partial pressures. These tunable sensing properties were attributed to variation in the conjugation lengths, dopant type and concentration of the wires which may be attributed to two distinct sensing mechanisms: swelling within the bulk of the nanowire and work function modulation of Schottky barrier junction between nanowire and electrodes.
Investigation of the Persistence of Nerve Agent Degradation ...
Journal Article. The persistence of chemical warfare nerve agent degradation analytes on surfaces is important for reasons ranging from indicating the presence of nerve agent on that surface to environmental restoration of a site after nerve agent release. This study investigates the persistence of several chemical warfare nerve agent degradation analytes on a number of indoor surfaces and presents an approach for wipe sampling of surfaces, followed by wipe extraction and liquid chromatography-tandem mass spectrometry detection. Multiple commercially available wipe materials were investigated to determine optimal wipe recoveries. Tested surfaces, including several porous/permeable and largely nonporous/impermeable surfaces, were investigated to determine recoveries from these indoor surface materials. Wipe extracts were analyzed by ultra-high-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) and compared with high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) results. UPLC provides a sensitive separation of targeted degradation analytes in addition to being nearly four times faster than HPLC, allowing for greater throughput during a widespread release involving large-scale contamination and subsequent remediation events. Percent recoveries from nonporous/impermeable surfaces were 60-103% for isopropyl methylphosphonate (IMPA), 61-91% for ethyl methylphosphonate (EMPA), and 60-98% for pinacolyl methylphosphonate.
Assessment and prediction of drying shrinkage cracking in bonded mortar overlays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beushausen, Hans, E-mail: hans.beushausen@uct.ac.za; Chilwesa, Masuzyo
2013-11-15
Restrained drying shrinkage cracking was investigated on composite beams consisting of substrate concrete and bonded mortar overlays, and compared to the performance of the same mortars when subjected to the ring test. Stress development and cracking in the composite specimens were analytically modeled and predicted based on the measurement of relevant time-dependent material properties such as drying shrinkage, elastic modulus, tensile relaxation and tensile strength. Overlay cracking in the composite beams could be very well predicted with the analytical model. The ring test provided a useful qualitative comparison of the cracking performance of the mortars. The duration of curing was found to have only a minor influence on crack development. This was ascribed to the fact that prolonged curing has a beneficial effect on tensile strength at the onset of stress development but at the same time is not beneficial to the values of tensile relaxation and elastic modulus. -- Highlights: •Parameter study on material characteristics influencing overlay cracking. •Analytical model gives good quantitative indication of overlay cracking. •Ring test presents good qualitative indication of overlay cracking. •Curing duration has little effect on overlay cracking.
Schønning, Kristian; Johansen, Kim; Nielsen, Lone Gilmor; Weis, Nina; Westh, Henrik
2018-07-01
Quantification of HBV DNA is used for initiating and monitoring antiviral treatment. Analytical test performance consequently impacts treatment decisions. To compare the analytical performance of the Aptima HBV Quant Assay (Aptima) and the COBAS Ampliprep/COBAS TaqMan HBV Test v2.0 (CAPCTMv2) for the quantification of HBV DNA in plasma samples. The performance of the two tests was compared on 129 prospective plasma samples, and on 63 archived plasma samples of which 53 were genotyped. Linearity of the two assays was assessed on dilutions series of three clinical samples (Genotype B, C, and D). Bland-Altman analysis of 120 clinical samples, which quantified in both tests, showed an average quantification bias (Aptima - CAPCTMv2) of -0.19 Log IU/mL (SD: 0.33 Log IU/mL). A single sample quantified more than three standard deviations higher in Aptima than in CAPCTMv2. Only minor differences were observed between genotype A (N = 4; average difference -0.01 Log IU/mL), B (N = 8; -0.13 Log IU/mL), C (N = 8; -0.31 Log IU/mL), D (N = 25; -0.22 Log IU/mL), and E (N = 7; -0.03 Log IU/mL). Deming regression showed that the two tests were excellently correlated (slope of the regression line 1.03; 95% CI: 0.998-1.068). Linearity of the tests was evaluated on dilution series and showed an excellent correlation of the two tests. Both tests were precise with %CV less than 3% for HBV DNA ≥3 Log IU/mL. The Aptima and CAPCTMv2 tests are highly correlated, and both tests are useful for monitoring patients chronically infected with HBV. Copyright © 2018 Elsevier B.V. All rights reserved.
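The Bland-Altman analysis used above reduces to the mean of the paired differences (the bias), their standard deviation, and 95% limits of agreement. A minimal sketch with hypothetical paired log IU/mL values (not the study's data):

```python
def bland_altman(x, y):
    """Bland-Altman summary for paired measurements, e.g. two HBV DNA
    assays run on the same samples (log IU/mL). Returns the mean
    difference (bias), SD of differences, and 95% limits of agreement."""
    diffs = [b - a for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired log IU/mL quantifications from two assays.
capctm = [3.1, 4.5, 5.2, 2.8, 6.0]
aptima = [2.9, 4.3, 5.1, 2.6, 5.7]
bias, sd, loa = bland_altman(capctm, aptima)
```

A sample like the study's single outlier would be one whose difference falls more than three SDs from the bias.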
Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph
2015-05-22
When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved. 
Copyright © 2015 Elsevier B.V. All rights reserved.
White, P Lewis; Barnes, Rosemary A; Springer, Jan; Klingspor, Lena; Cuenca-Estrella, Manuel; Morton, C Oliver; Lagrou, Katrien; Bretagne, Stéphane; Melchers, Willem J G; Mengoli, Carlo; Donnelly, J Peter; Heinz, Werner J; Loeffler, Juergen
2015-09-01
Aspergillus PCR testing of serum provides technical simplicity but with potentially reduced sensitivity compared to whole-blood testing. With diseases for which screening to exclude disease represents an optimal strategy, sensitivity is paramount. The associated analytical study confirmed that DNA concentrations were greater in plasma than those in serum. The aim of the current investigation was to confirm analytical findings by comparing the performance of Aspergillus PCR testing of plasma and serum in the clinical setting. Standardized Aspergillus PCR was performed on plasma and serum samples concurrently obtained from hematology patients in a multicenter retrospective anonymous case-control study, with cases diagnosed according to European Organization for Research and Treatment of Cancer/Invasive Fungal Infections Cooperative Group and the National Institute of Allergy and Infectious Diseases Mycoses Study Group (EORTC/MSG) consensus definitions (19 proven/probable cases and 42 controls). Clinical performance and clinical utility (time to positivity) were calculated for both kinds of samples. The sensitivity and specificity for Aspergillus PCR when testing serum were 68.4% and 76.2%, respectively, and for plasma, they were 94.7% and 83.3%, respectively. Eighty-five percent of serum and plasma PCR results were concordant. On average, plasma PCR was positive 16.8 days before diagnosis and was the earliest indicator of infection in 13 cases, combined with other biomarkers in five cases. On average, serum PCR was positive 10.8 days before diagnosis and was the earliest indicator of infection in six cases, combined with other biomarkers in three cases. These results confirm the analytical finding that the sensitivity of Aspergillus PCR using plasma is superior to that using serum. PCR positivity occurs earlier when testing plasma and provides sufficient sensitivity for the screening of invasive aspergillosis while maintaining methodological simplicity. 
Copyright © 2015 White et al.
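The sensitivity and specificity figures above reduce to simple ratios over the case-control counts. A minimal sketch, assuming positive/negative splits consistent with the reported plasma percentages (the 18/19 and 35/42 counts are inferred for illustration, not stated in the abstract):

```python
# Sensitivity/specificity from case-control counts (hypothetical counts
# chosen to reproduce the reported plasma PCR percentages).

def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as percentages."""
    sensitivity = 100.0 * tp / (tp + fn)
    specificity = 100.0 * tn / (tn + fp)
    return sensitivity, specificity

# Plasma PCR: 18/19 cases positive, 35/42 controls negative (assumed).
sens, spec = sens_spec(tp=18, fn=1, tn=35, fp=7)
print(round(sens, 1), round(spec, 1))  # 94.7 83.3
```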
Li, Kangning; Ma, Jing; Tan, Liying; Yu, Siyuan; Zhai, Chao
2016-06-10
The performance of fiber-based free-space optical (FSO) communications over gamma-gamma distributed turbulence is studied for multiple-aperture receiver systems. The equal gain combining (EGC) technique is considered as a practical scheme to mitigate atmospheric turbulence. Bit error rate (BER) performance for binary-phase-shift-keying-modulated coherent detection fiber-based FSO communications is derived and analyzed for EGC diversity reception through an approximation method. To show the net diversity gain of a multiple-aperture receiver system, the BER performance of EGC is compared with that of a single monolithic aperture receiver system with the same total aperture area (same average total incident optical power on the aperture surface). The analytical results are verified by Monte Carlo simulations. System performances are also compared for EGC diversity coherent FSO communications with and without considering fiber-coupling efficiencies.
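The paper verifies its analytical BER expressions against Monte Carlo simulation. A minimal single-aperture Monte Carlo sketch of coherent BPSK over gamma-gamma fading (the turbulence parameters alpha and beta and the SNR are assumed values; EGC combining is omitted for brevity):

```python
# Illustrative Monte Carlo sketch (not the paper's derivation): average BER
# of coherent BPSK over gamma-gamma turbulence. The irradiance is modeled as
# the product of two independent unit-mean gamma variates.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
alpha, beta = 4.0, 2.0              # assumed turbulence parameters
snr = 10 ** (10.0 / 10)             # 10 dB average SNR (assumed)
n = 100_000

# Gamma-gamma irradiance samples (unit mean).
I = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)

# Instantaneous BPSK BER 0.5*erfc(sqrt(snr*I)), averaged over the fading.
ber = np.mean([0.5 * erfc(sqrt(snr * i)) for i in I])
print(f"average BER ~ {ber:.3e}")
```

Without fading the same SNR gives a far lower BER; the fading average is dominated by deep fades, which is why aperture diversity helps.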
Freye, Chris E; Moore, Nicholas R; Synovec, Robert E
2018-02-16
The complementary information provided by tandem ionization time-of-flight mass spectrometry (TI-TOFMS) is investigated for comparative discovery-based analysis, when coupled with comprehensive two-dimensional gas chromatography (GC × GC). The TI conditions implemented were hard ionization (70 eV) collected concurrently with soft ionization (14 eV). Tile-based Fisher ratio (F-ratio) analysis is used to analyze diesel fuel spiked with twelve analytes at a nominal concentration of 50 ppm. F-ratio analysis is a supervised discovery-based technique that compares two different sample classes, in this case spiked and unspiked diesel, to reduce the complex GC × GC-TI-TOFMS data into a hit list of class-distinguishing analyte features. Hit lists of the 70 eV and 14 eV data sets, and the single hit list produced when the two data sets are fused together, are all investigated. For the 70 eV hit list, eleven of the twelve analytes were found in the top thirteen hits. For the 14 eV hit list, nine of the twelve analytes were found in the top nine hits, with the other three analytes either not found or well down the hit list. As expected, the F-ratios per m/z used to calculate each average F-ratio per hit generally emphasized smaller fragment ions for the 70 eV data set and larger fragment ions for the 14 eV data set, supporting the notion that complementary information was provided. The discovery rate was improved when F-ratio analysis was performed on the fused data sets, resulting in eleven of the twelve analytes being at the top of the single hit list. Using PARAFAC, analytes that were "discovered" were deconvoluted in order to obtain their identification via match values (MV). The location of the analytes and the "F-ratio spectra" obtained from F-ratio analysis were used to guide the deconvolution. Eight of the twelve analytes were successfully deconvoluted and identified using the in-house library for the 70 eV data set.
PARAFAC deconvolution of the two separate data sets provided increased confidence in identification of "discovered" analytes. Herein, we explore the limit of analyte discovery and limit of analyte identification, and demonstrate a general workflow for the investigation of key chemical features in complex samples. Copyright © 2018 Elsevier B.V. All rights reserved.
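The Fisher ratio at the heart of the workflow above is a per-feature ratio of between-class to within-class variance. A minimal two-class sketch with invented signal values (real tile-based F-ratio analysis operates on GC × GC tiles per m/z, not single scalars):

```python
# Per-feature Fisher ratio: between-class variance over pooled
# within-class variance, for two sample classes.
import numpy as np

def fisher_ratio(class_a, class_b):
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    grand = np.concatenate([a, b]).mean()
    # Between-class sum of squares (two classes).
    between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    # Pooled within-class variance.
    within = (np.sum((a - a.mean()) ** 2) +
              np.sum((b - b.mean()) ** 2)) / (len(a) + len(b) - 2)
    return between / within

spiked   = [105.0, 98.0, 102.0, 101.0]   # hypothetical spiked-class signals
unspiked = [10.0, 12.0, 9.0, 11.0]       # hypothetical unspiked-class signals
print(round(fisher_ratio(spiked, unspiked), 1))  # 3312.4
```

A class-distinguishing feature (large mean separation, small scatter) yields a large F-ratio and rises to the top of the hit list.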
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-21
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods because of their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients) and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC systematically underestimated the target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, gamma index analysis with the 1%/1 mm criterion failed for most beams for this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC.
Excellent agreement was demonstrated between our fast GPU-based MC code (gPMC) and a previously extensively validated multi-purpose MC code (TOPAS) for a comprehensive set of clinical patient cases. This shows that MC dose calculations in proton therapy can be performed on time scales comparable to analytical algorithms with accuracy comparable to state-of-the-art CPU-based MC codes.
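The gamma-index comparison used above combines a dose-difference criterion with a distance-to-agreement criterion. A simplified 1-D sketch of the metric on an invented Gaussian dose profile with a small spatial shift (the study's implementation is 3-D):

```python
# Simplified 1-D gamma index: for each reference point, the minimum over the
# evaluated distribution of the combined normalized dose/distance metric.
import numpy as np

def gamma_index(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
    """dd: dose-difference criterion (fraction of max dose),
    dta: distance-to-agreement criterion (mm)."""
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        g = np.min(np.sqrt(((x_eval - xr) / dta) ** 2 +
                           ((d_eval - dr) / (dd * d_ref.max())) ** 2))
        gammas.append(g)
    return np.array(gammas)

x = np.linspace(0, 10, 101)            # position, mm
ref = np.exp(-((x - 5) ** 2) / 4)      # reference dose profile
ev = np.exp(-((x - 5.2) ** 2) / 4)     # evaluated profile, shifted 0.2 mm
g = gamma_index(x, ref, x, ev, dd=0.01, dta=1.0)
print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")
```

A 0.2 mm shift is well inside the 1 mm distance criterion, so every point passes; a shift beyond the criterion would drive the passing rate down.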
Goetz, H; Kuschel, M; Wulff, T; Sauber, C; Miller, C; Fisher, S; Woodward, C
2004-09-30
Protein analysis techniques are developing fast due to the growing number of proteins obtained by recombinant DNA techniques. In the present paper we compare selected techniques used for protein sizing, quantitation and molecular weight determination: sodium dodecylsulfate-polyacrylamide gel electrophoresis (SDS-PAGE), lab-on-a-chip or microfluidics technology (LoaC), size exclusion chromatography (SEC) and mass spectrometry (MS). We compare the advantages and limitations of each technique with respect to different application areas, analysis time, and protein sizing and quantitation performance.
Analytical and experimental investigations of the oblique detonation wave engine concept
NASA Technical Reports Server (NTRS)
Menees, Gene P.; Adelman, Henry G.; Cambier, Jean-Luc
1990-01-01
Wave combustors, which include the oblique detonation wave engine (ODWE), are attractive propulsion concepts for hypersonic flight. These engines utilize oblique shock or detonation waves to rapidly mix, ignite, and combust the air-fuel mixture in thin zones in the combustion chamber. Benefits of these combustion systems include shorter and lighter engines which require less cooling and can provide thrust at higher Mach numbers than conventional scramjets. The wave combustor's ability to operate at lower combustor inlet pressures may allow the vehicle to operate at lower dynamic pressures, which could lessen the heating loads on the airframe. The research program at NASA-Ames includes analytical studies of the ODWE combustor using Computational Fluid Dynamics (CFD) codes which fully couple finite-rate chemistry with fluid dynamics. In addition, experimental proof-of-concept studies are being performed in an arc-heated hypersonic wind tunnel. Several fuel injection designs were studied analytically and experimentally. In-stream strut fuel injectors were chosen to provide good mixing with minimal stagnation pressure losses. Measurements of flow field properties behind the oblique wave are compared to analytical predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cresti, Alessandro; Grosso, Giuseppe; Parravicini, Giuseppe Pastori
2006-05-15
We have derived closed analytic expressions for the Green's function of an electron in a two-dimensional electron gas threaded by a uniform perpendicular magnetic field, also in the presence of a uniform electric field and of a parabolic spatial confinement. A workable and powerful numerical procedure for the calculation of the Green's functions for a large infinitely extended quantum wire is considered, exploiting a lattice model for the wire, the tight-binding representation for the corresponding matrix Green's function, and the Peierls phase factor in the Hamiltonian hopping matrix element to account for the magnetic field. The numerical evaluation of the Green's function has been performed by means of the decimation-renormalization method, and compared quite satisfactorily with the analytic results worked out in this paper. As an example of the versatility of the numerical and analytic tools presented here, the peculiar semilocal character of the magnetic Green's function is studied in detail because of its basic importance in determining magneto-transport properties in mesoscopic systems.
Vorberg, Ellen; Fleischer, Heidi; Junginger, Steffen; Liu, Hui; Stoll, Norbert; Thurow, Kerstin
2016-10-01
Life science areas require specific sample pretreatment to increase the concentration of the analytes and/or to convert the analytes into an appropriate form for the detection and separation systems. Various workstations are commercially available, allowing for automated biological sample pretreatment. Nevertheless, due to the required temperature, pressure, and volume conditions in typical element- and structure-specific measurements, these automated platforms are not suitable for such analytical processes. Thus, the purpose of the presented investigation was the design, realization, and evaluation of an automated system ensuring high-precision sample preparation for a variety of analytical measurements. The developed system has to enable system adaption and high performance flexibility. Furthermore, the system has to be capable of dealing with the wide range of required vessels simultaneously, allowing for less costly and time-consuming process steps. The system's functionality was confirmed in various validation sequences. Using element-specific measurements, the automated system was up to 25% more precise than the manual procedure, and as precise as the manual procedure using structure-specific measurements. © 2015 Society for Laboratory Automation and Screening.
Local Navon letter processing affects skilled behavior: a golf-putting experiment.
Lewis, Michael B; Dawkins, Gemma
2015-04-01
Expert or skilled behaviors (for example, face recognition or sporting performance) are typically performed automatically and with little conscious awareness. Previous studies, in various domains of performance, have shown that activities immediately prior to a task demanding a learned skill can affect performance. In sport, describing the to-be-performed action is detrimental, whereas in face recognition, describing a face or reading local Navon letters is detrimental. Two golf-putting experiments are presented that compare the effects that these three tasks have on experienced and novice golfers. Experiment 1 found a Navon effect on golf performance for experienced players. Experiment 2 found, for experienced players only, that performance was impaired following the three tasks described above, when compared with reading or global Navon tasks. It is suggested that the three tasks affect skilled performance by provoking a shift from automatic behavior to a more analytic style. By demonstrating similarities between effects in face recognition and sporting behavior, the aim is to better understand concepts in both fields.
Howanitz, Peter J; Jones, Bruce A
2004-07-01
One of the major attributes of laboratory testing is cost. Although fully automated central laboratory glucose testing and semiautomated bedside glucose testing (BGT) are performed at most institutions, rigorous determinations of interinstitutional comparative costs have not been performed. To compare interinstitutional analytical costs of central laboratory glucose testing and BGT and to provide suggestions for improvement. Participants completed a demographic form about their institutional glucose monitoring practices. They also collected information about the costs of central laboratory glucose testing, BGT at a high-volume testing site, and BGT at a low-volume testing site, including specified cost variables for labor, reagents, and instruments. A total of 445 institutions enrolled in the College of American Pathologists Q-Probes program. Median cost per glucose test at 3 testing sites. The median (10th-90th percentile range) costs per glucose test were 1.18 dollars (5.59 dollars-0.36 dollars), 1.96 dollars (9.51 dollars-0.77 dollars), and 4.66 dollars (27.54 dollars-1.02 dollars) for central laboratory, high-volume BGT sites, and low-volume BGT sites, respectively. The largest percentages of the cost per test were for labor (59.3%, 72.7%, and 85.8%), followed by supplies (27.2%, 27.3%, and 13.4%) and equipment (2.1%, 0.0%, and 0.0%) for the 3 sites, respectively. The median number of patient specimens per month at the high-volume BGT sites was 625 compared to 30 at the low-volume BGT sites. Most participants did not include labor, instrument maintenance, competency assessment, or oversight in their BGT estimated costs until required to do so for the study. Analytical costs per glucose test were lower for central laboratory glucose testing than for BGT, which, in turn, was highly variable and dependent on volume. Data that would be used for financial justification for BGT were widely aberrant and in need of improvement.
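The summary statistics reported above (median and 10th-90th percentile cost per test) are straightforward to compute; a minimal sketch with an invented cost list standing in for the survey data:

```python
# Median and 10th-90th percentile summary of per-test costs,
# the statistics used in the Q-Probes cost comparison (data invented).
import numpy as np

costs = [0.4, 0.7, 0.9, 1.1, 1.2, 1.5, 2.0, 3.5, 5.0, 6.2]  # $ per test
median = np.median(costs)
p10, p90 = np.percentile(costs, [10, 90])
print(f"median ${median:.2f} (10th-90th percentile ${p10:.2f}-${p90:.2f})")
```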
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan time compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suited to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
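The variable flip angle (VFA) T1 mapping used as the test application rests on the spoiled gradient-echo signal model, which linearizes as S/sin(a) = E1 * S/tan(a) + M0*(1 - E1) with E1 = exp(-TR/T1), so a line fit recovers T1. A noise-free sketch with assumed TR, T1, and flip angles (not the paper's reconstruction):

```python
# VFA T1 estimation sketch: simulate SPGR signals, then recover T1
# from the slope of the standard linearization (DESPOT1-style fit).
import numpy as np

TR, T1_true, M0 = 15.0, 1000.0, 100.0       # ms, ms, a.u. (assumed)
E1 = np.exp(-TR / T1_true)
alphas = np.deg2rad([2, 5, 10, 15, 20])     # flip angles (assumed)

# Noise-free SPGR signals for the assumed parameters.
S = M0 * np.sin(alphas) * (1 - E1) / (1 - E1 * np.cos(alphas))

# Linear fit of S/sin(a) against S/tan(a); the slope equals E1.
y, x = S / np.sin(alphas), S / np.tan(alphas)
slope, _ = np.polyfit(x, y, 1)
T1_est = -TR / np.log(slope)
print(round(T1_est))  # ~1000 ms
```

With noisy, undersampled data the fit degrades, which is exactly where the proposed parametric-dimension regularization is meant to help.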
Evaluation of a handheld point-of-care analyser for measurement of creatinine in cats.
Reeve, Jenny; Warman, Sheena; Lewis, Daniel; Watson, Natalie; Papasouliotis, Kostas
2017-02-01
Objectives The aim of the study was to evaluate whether a handheld creatinine analyser (StatSensor Xpress; SSXp), available for human patients, can be used to measure creatinine reliably in cats. Methods Analytical performance was evaluated by determining within- and between-run coefficient of variation (CV, %), total error observed (TEobs, %) and sigma metrics. Fifty client-owned cats presenting for investigation of clinical disease had creatinine measured simultaneously, using SSXp (whole blood and plasma) and a reference instrument (Konelab, serum); 48 paired samples were included in the study. Creatinine correlation between methodologies (SSXp vs Konelab) and sample types (SSXp whole blood vs SSXp plasma) was assessed by Spearman's correlation coefficient, and agreement was determined using Bland-Altman difference plots. Each creatinine value was assigned an IRIS stage (1-4); correlation and agreement between Konelab and SSXp IRIS stages were evaluated. Results Within-run CV (4.23-8.85%), between-run CV (8.95-11.72%), TEobs (22.15-34.92%) and sigma metrics (⩽3) did not meet the desired analytical requirements. Correlation between sample types was high (SSXp whole blood vs SSXp plasma; r = 0.89), and between instruments was high (SSXp whole blood vs Konelab serum; r = 0.85) to very high (SSXp plasma vs Konelab serum; r = 0.91). Konelab and SSXp whole blood IRIS scores exhibited high correlation (r = 0.76). Packed cell volume did not significantly affect SSXp determination of creatinine. Bland-Altman difference plots identified a positive bias for the SSXp (7.13 μmol/l for SSXp whole blood; 20.23 μmol/l for SSXp plasma) compared with the Konelab. Outliers (1/48 whole blood; 2/48 plasma) occurred exclusively at very high creatinine concentrations. The SSXp failed to identify 2/21 azotaemic cats. Conclusions and relevance Analytical performance of the SSXp in feline patients is not considered acceptable.
The SSXp exhibited a high to very high correlation compared with the reference methodology but the two instruments cannot be used interchangeably. Improvements in the SSXp analytical performance are needed before its use can be recommended in feline clinical practice.
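The quality metrics in the abstract (total error observed and sigma metric) follow common clinical-chemistry definitions; a sketch assuming the usual forms TEobs = |bias| + 2*CV and sigma = (TEa - |bias|)/CV, with hypothetical bias, CV, and allowable total error values:

```python
# Observed total error and sigma metric, under common (assumed) definitions.

def total_error(bias_pct, cv_pct, k=2.0):
    """Observed total error: |bias| + k*CV (k = 2 assumed here)."""
    return abs(bias_pct) + k * cv_pct

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: (allowable total error - |bias|) / CV."""
    return (tea_pct - abs(bias_pct)) / cv_pct

bias, cv, tea = 10.0, 8.0, 25.0   # hypothetical %, %, and allowable TE %
print(total_error(bias, cv))                   # 26.0
print(round(sigma_metric(tea, bias, cv), 2))   # 1.88
```

A sigma metric of 3 or less, as found for the SSXp, is conventionally read as inadequate analytical performance.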
Durner, Bernhard; Ehmann, Thomas; Matysik, Frank-Michael
2018-06-05
The adaptation of a parallel-path poly(tetrafluoroethylene) (PTFE) ICP nebulizer to an evaporative light scattering detector (ELSD) was realized by substituting the originally installed concentric glass nebulizer of the ELSD. The performance of both nebulizers was compared with regard to nebulizer temperature, evaporator temperature, nebulizing gas flow rate and mobile phase flow rate for different solvents, using caffeine and poly(dimethylsiloxane) (PDMS) as analytes. Both nebulizers showed similar performance, but the parallel-path PTFE nebulizer performed considerably better at low LC flow rates and its lifetime was substantially longer. In general, for both nebulizers the highest sensitivity was obtained by applying the lowest possible evaporator temperature in combination with the highest possible nebulizer temperature at preferably low gas flow rates. Besides the optimization of detector parameters, response factors for various PDMS oligomers were determined and the dependency of the detector signal on the molar mass of the analytes was studied. The significant improvement in long-term stability made the modified ELSD much more robust and saved time and money by reducing maintenance effort. Thus, especially in polymer HPLC, associated with a complex matrix situation, the PTFE-based parallel-path nebulizer exhibits attractive characteristics for analytical studies of polymers. Copyright © 2018. Published by Elsevier B.V.
Dupuy, Anne Marie; Né, Maxence; Bargnoux, Anne Sophie; Badiou, Stéphanie; Cristol, Jean Paul
2017-03-01
We report the analytical performance of the Lumipulse®G BRAHMS PCT assay (Fujirebio, Courteboeuf, France) and its concordance with BRAHMS PCT Kryptor CompactPlus© results from the central laboratory. The Lumipulse®G BRAHMS PCT immunoassay on the Lumipulse®G600II instrument is a chemiluminescence enzyme immunoassay (CLEIA). The analytical performance evaluation included an imprecision study, linearity, limit of detection, and a comparison study of 138 plasma specimens on the Lumipulse®G600II vs plasma on the Kryptor CompactPlus©. The intra- and inter-assay imprecision of the Lumipulse®G BRAHMS PCT was between 2 and 5%. The LoD in our conditions was 0.0029 ng/mL, in accordance with the LoD provided by the manufacturer (0.0048 ng/mL). The linearity equation was y = 1.001x - 0.052 with r² = 0.99, with a mean recovery (SD) percentage of 1.8% (8%). Correlation studies showed good correlation (r = 0.99) between plasma on the Kryptor and the Lumipulse, with a bias of 0.02 in the range from 0.12 to 1 ng/mL. The new adaptation developed by Fujirebio for quantification of PCT with CLEIA technology using monoclonal antibodies from ThermoFisher appears to be acceptable for clinical use. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
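The method-comparison statistics reported above (regression equation, correlation coefficient, mean bias) can be sketched as follows; the paired PCT values here are invented, not the study's 138 specimens:

```python
# Method comparison sketch: OLS regression of one assay against the other,
# Pearson correlation, and mean bias (as in a Bland-Altman analysis).
import numpy as np

kryptor = np.array([0.12, 0.25, 0.40, 0.60, 0.85, 1.00])    # ng/mL (invented)
lumipulse = np.array([0.14, 0.26, 0.41, 0.63, 0.86, 1.03])  # ng/mL (invented)

slope, intercept = np.polyfit(kryptor, lumipulse, 1)
r = np.corrcoef(kryptor, lumipulse)[0, 1]
bias = np.mean(lumipulse - kryptor)   # mean paired difference
print(f"y = {slope:.3f}x + {intercept:.3f}, r = {r:.3f}, bias = {bias:.3f}")
```

(The study additionally reports r² for linearity; a Passing-Bablok or Deming fit is often preferred over OLS when both methods carry measurement error.)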
Frankowski, Marcin; Zioła-Frankowska, Anetta; Kurzyca, Iwona; Novotný, Karel; Vaculovič, Tomas; Kanický, Viktor; Siepak, Marcin; Siepak, Jerzy
2011-11-01
The paper presents the results of aluminium determinations in ground water samples of the Miocene aquifer from the area of the city of Poznań (Poland). The determined aluminium content ranged from <0.0001 to 752.7 μg L(-1). The aluminium determinations were performed using three analytical techniques: graphite furnace atomic absorption spectrometry (GF-AAS), inductively coupled plasma atomic emission spectrometry (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS). The results of the aluminium determinations in groundwater samples were compared across the analytical techniques. The results were used to identify the ascent of ground water from the Mesozoic aquifer to the Miocene aquifer in the area of the fault graben. Using the Mineql+ program, modelling of the occurrence of aluminium and of its hydroxy, fluoride and sulphate complexes was performed. The paper presents the results of aluminium determinations in ground water using different analytical techniques as well as the chemical modelling in the Mineql+ program, performed for the first time, which enabled the identification of aluminium complexes in the investigated samples. The study confirms the occurrence of aluminium hydroxy complexes and aluminium fluoride complexes in the analysed groundwater samples. Despite the dominance of sulphates and organic matter in the samples, no major participation of complexes with these ligands was indicated by the modelling.
NASA Astrophysics Data System (ADS)
Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young
2017-05-01
This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.
Non-Markovian dynamics of fermionic and bosonic systems coupled to several heat baths
NASA Astrophysics Data System (ADS)
Hovhannisyan, A. A.; Sargsyan, V. V.; Adamian, G. G.; Antonenko, N. V.; Lacroix, D.
2018-03-01
Employing the fermionic and bosonic Hamiltonians for the collective oscillator linearly FC-coupled with several heat baths, analytical expressions for the collective occupation number are derived within the non-Markovian quantum Langevin approach. The master equations for the occupation number of the collective subsystem are derived and discussed. In the case of Ohmic dissipation with Lorentzian cutoffs, the possibility of reducing the system with several heat baths to a system with one heat bath is analytically demonstrated. For the fermionic and bosonic systems, a comparative analysis is performed between the collective subsystem coupled to two heat baths and the reference case of the subsystem coupled to one bath.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Miki, K.; Uzawa, K.
2006-11-30
During the past years the understanding of multiscale interaction problems has increased significantly. However, at present there exists a plethora of different analytical models for investigating multiscale interactions, and hardly any specific comparisons have been performed among these models. In this work, two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results even though the assumption on the spectral difference is used in the WKE approach.
Use of multiple colorimetric indicators for paper-based microfluidic devices.
Dungchai, Wijitar; Chailapakul, Orawon; Henry, Charles S
2010-08-03
We report here the use of multiple indicators for a single analyte for paper-based microfluidic devices (microPAD) in an effort to improve the ability to visually discriminate between analyte concentrations. In existing microPADs, a single dye system is used for the measurement of a single analyte. In our approach, devices are designed to simultaneously quantify analytes using multiple indicators for each analyte improving the accuracy of the assay. The use of multiple indicators for a single analyte allows for different indicator colors to be generated at different analyte concentration ranges as well as increasing the ability to better visually discriminate colors. The principle of our devices is based on the oxidation of indicators by hydrogen peroxide produced by oxidase enzymes specific for each analyte. Each indicator reacts at different peroxide concentrations and therefore analyte concentrations, giving an extended range of operation. To demonstrate the utility of our approach, the mixture of 4-aminoantipyrine and 3,5-dichloro-2-hydroxy-benzenesulfonic acid, o-dianisidine dihydrochloride, potassium iodide, acid black, and acid yellow were chosen as the indicators for simultaneous semi-quantitative measurement of glucose, lactate, and uric acid on a microPAD. Our approach was successfully applied to quantify glucose (0.5-20 mM), lactate (1-25 mM), and uric acid (0.1-7 mM) in clinically relevant ranges. The determination of glucose, lactate, and uric acid in control serum and urine samples was also performed to demonstrate the applicability of this device for biological sample analysis. Finally results for the multi-indicator and single indicator system were compared using untrained readers to demonstrate the improvements in accuracy achieved with the new system. 2010 Elsevier B.V. All rights reserved.
Glodt, Stephen R.; Pirkey, Kimberly D.
1998-01-01
Performance-evaluation studies provide customers of the U.S. Geological Survey National Water Quality Laboratory (NWQL) with data needed to evaluate laboratory performance and to compare and select laboratories for analytical work. The NWQL participates in national and international performance-evaluation (PE) studies that use samples of water, sediment, and aquatic biological materials for the analysis of inorganic constituents, organic compounds, and radionuclides. This Fact Sheet provides a summary of PE study results from January 1993 through April 1997. It should be of particular interest to USGS customers and potential customers of the NWQL, water-quality specialists, cooperators, and agencies of the Federal Government.
Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V
2017-06-09
LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two-stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two-stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that with an appropriate spectral weighting function the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two-stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two-stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two-stage algorithm.
The importance of background knowledge and experience in evaluating the reliability of results during the suspect screening was evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.
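The false positive and false negative detection rates used to compare the algorithms are simple ratios over the verified detection cases; a sketch with hypothetical counts chosen only to be consistent with the two-stage algorithm's reported 0 and 0.12 rates:

```python
# False positive / false negative rates from verified detection counts.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate)."""
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical counts (e.g. 19 misses out of 158 verified cases ~ 0.12).
fpr, fnr = error_rates(tp=139, fp=0, tn=158, fn=19)
print(round(fpr, 2), round(fnr, 2))  # 0.0 0.12
```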
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Analyte estimated precisions have been compared using F-tests, and differences in analyte precisions for laboratory pairs have been reported. (USGS)
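The precision comparison described above rests on sample variances and an F-test on their ratio; a minimal two-laboratory sketch with invented replicate results (the study pools variances across many samples per analyte):

```python
# F-test sketch for comparing two laboratories' precision on one analyte.
import numpy as np

lab_a = [4.9, 5.1, 5.0, 5.2, 4.8]   # replicate results, lab A (invented)
lab_b = [4.5, 5.5, 5.0, 5.6, 4.4]   # replicate results, lab B (invented)

var_a = np.var(lab_a, ddof=1)
var_b = np.var(lab_b, ddof=1)
F = max(var_a, var_b) / min(var_a, var_b)
# Compare against the tabulated critical value F(0.05; 4, 4) ~ 6.39:
# F above the critical value indicates significantly different precision.
print(f"variance ratio F = {F:.2f}")
```

Here F = 12.2 exceeds 6.39, so the two laboratories' precisions would be judged significantly different at the 5% level.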
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy into electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
On the performance of piezoelectric harvesters loaded by finite width impulses
NASA Astrophysics Data System (ADS)
Doria, A.; Medè, C.; Desideri, D.; Maschio, A.; Codecasa, L.; Moro, F.
2018-02-01
The response of cantilevered piezoelectric harvesters loaded by finite width impulses of base acceleration is studied analytically in the frequency domain in order to identify the parameters that influence the generated voltage. Experimental tests are then performed on harvesters loaded by hammer impacts. The latter are used to confirm analytical results and to validate a linear finite element (FE) model of a unimorph harvester. The FE model is, in turn, used to extend analytical results to more general harvesters (tapered, inverse tapered, triangular) and to more general impulses (heel strike in human gait). From the analytical and numerical results, design criteria for improving harvester performance are obtained.
Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu
2016-08-02
The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. 
Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is a high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance combined with significant improvements in both the chemical and instrumental methods have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value-added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
Plasma creatinine and creatine quantification by capillary electrophoresis diode array detector.
Zinellu, Angelo; Caria, Marcello A; Tavera, Claudio; Sotgia, Salvatore; Chessa, Roberto; Deiana, Luca; Carru, Ciriaco
2005-07-15
Traditional clinical assays for nonprotein nitrogen compounds, such as creatine and creatinine, have focused on the use of enzymes or chemical reactions that allow measurement of each analyte separately. Most of these assays are directed mainly at urine quantification, so they are often difficult to apply to plasma samples. This work describes a simple free zone capillary electrophoresis method for the simultaneous measurement of creatinine and creatine in human plasma. The effect of analytical parameters such as concentration and pH of the Tris-phosphate running buffer and cartridge temperature on resolution, migration times, peak areas, and efficiency was investigated. Good separation was achieved using a 60.2-cm x 75-microm uncoated silica capillary, 75 mmol/L Tris-phosphate buffer, pH 2.25, at 15 degrees C, in less than 8 min. We compared the present method to a validated capillary electrophoresis assay by measuring plasma creatinine in 120 normal subjects. The obtained data were compared by Passing-Bablok regression and the Bland-Altman test. Moreover, the performance of the developed method was assessed by measuring creatine and creatinine in 16 volunteers before and after moderate physical exercise.
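The Bland-Altman comparison used above reduces to a mean bias and 95% limits of agreement computed from paired differences. A minimal sketch, assuming normally distributed differences (the paired values in the test are hypothetical):

```python
from statistics import mean, stdev

def bland_altman(method_x, method_y):
    """Mean bias and 95% limits of agreement between two paired methods."""
    diffs = [a - b for a, b in zip(method_x, method_y)]
    bias = mean(diffs)
    sd = stdev(diffs)                      # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If most paired differences fall inside the limits of agreement and the bias is clinically negligible, the two assays can be considered interchangeable for that analyte.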
Coates, James; Jeyaseelan, Asha K; Ybarra, Norma; David, Marc; Faria, Sergio; Souhami, Luis; Cury, Fabio; Duclos, Marie; El Naqa, Issam
2015-04-01
We explore analytical and data-driven approaches to investigate the integration of genetic variations (single nucleotide polymorphisms [SNPs] and copy number variations [CNVs]) with dosimetric and clinical variables in modeling radiation-induced rectal bleeding (RB) and erectile dysfunction (ED) in prostate cancer patients. Sixty-two patients who underwent curative hypofractionated radiotherapy (66 Gy in 22 fractions) between 2002 and 2010 were retrospectively genotyped for CNV and SNP rs5489 in the xrcc1 DNA repair gene. Fifty-four patients had full dosimetric profiles. Two parallel modeling approaches were compared to assess the risk of severe RB (Grade⩾3) and ED (Grade⩾1): maximum-likelihood-estimated generalized Lyman-Kutcher-Burman (LKB) modeling and logistic regression. Statistical resampling based on cross-validation was used to evaluate model predictive power and generalizability to unseen data. Integration of the biological variables xrcc1 CNV and SNP improved the fit of the RB and ED analytical and data-driven models. Cross-validation of the generalized LKB models yielded increases in classification performance of 27.4% for RB and 14.6% for ED when xrcc1 CNV and SNP were included, respectively. Biological variables added to logistic regression modeling improved classification performance over standard dosimetric models by 33.5% for RB and 21.2% for ED models. As a proof-of-concept, we demonstrated that the combination of genetic and dosimetric variables can provide significant improvement in NTCP prediction using analytical and data-driven approaches. The improvement in prediction performance was more pronounced in the data-driven approaches. Moreover, we have shown that CNVs, in addition to SNPs, may be useful structural genetic variants in predicting radiation toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
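The classical LKB NTCP model referenced above maps a dose-volume histogram to a complication probability through the generalized equivalent uniform dose (gEUD) and a probit function. A minimal sketch of the standard formulation (the parameter values in the test are hypothetical, not the fitted values from this study):

```python
import math

def geud(doses, volumes, n):
    """Generalized EUD from a differential DVH: (sum v_i * d_i^(1/n))^n,
    where volumes are fractional and sum to 1."""
    return sum(v * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: Phi(t) with t = (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

Genetic covariates such as the xrcc1 CNV/SNP status enter a "generalized" LKB model by modifying parameters like TD50, which is how biological variables can sharpen the dose-response fit.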
High-performance analysis of filtered semantic graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buluc, Aydin; Fox, Armando; Gilbert, John R.
2012-01-01
High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++/MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance.
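The filter-versus-materialize distinction the abstract describes can be illustrated with a toy traversal: instead of first building a subgraph of matching vertices, the predicate is applied on the fly at each step. This is only a schematic illustration of the idea, not KDT's actual API; the graph, attributes, and predicate are hypothetical.

```python
def bfs_filtered(adj, attrs, start, keep):
    """BFS over an adjacency dict, applying a vertex-attribute predicate on
    the fly rather than materializing the filtered subgraph first.
    The start vertex is included regardless of the predicate."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in seen and keep(attrs[v]):
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return seen
```

Per-edge predicate calls like `keep(...)` are exactly the upcall bottleneck SEJITS removes by translating the user's filter into a lower-level language.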
NASA Astrophysics Data System (ADS)
Tong, Rong
As a primary digital library portal for astrophysics researchers, the SAO/NASA ADS (Astrophysics Data System) 2.0 interface features several visualization tools such as Author Network and Metrics. This research study involved 20 long-term ADS users who participated in a usability and eye-tracking research session. Participants first completed a cognitive test and then performed five tasks in ADS 2.0, exploring its multiple visualization tools. Results show that over half of the participants were Imagers and half were Analytic. Cognitive styles were found to have significant impacts on several efficiency-based measures. Analytic-oriented participants were observed to spend less time on web pages and apps and to make fewer web-page changes than less-Analytic participants when performing common tasks, and AI (Analytic-Imagery) participants also completed their five tasks faster than non-AI participants. Meanwhile, self-identified Imagery participants were found to be more efficient in task completion across multiple measures, including total time on task, number of mouse clicks, and number of query revisions. Imagery scores were negatively associated with the frequency of confusion and the observed counts of being surprised. Compared to those who did not claim to be a visual person, self-identified Imagery participants showed frustration and hesitation significantly less frequently during task performance. Both demographic variables and past user experience were found to correlate with task performance; query revision also correlated with multiple time-based measurements. Considered an indicator of efficiency, query revisions were found to correlate negatively with the rate of completing with ease, and positively with several time-based efficiency measures, the rate of completing with some difficulty, and the frequency of frustration. 
These results provide rich insights into the cognitive styles of ADS' core users; the impact of such styles and demographic attributes on their task performance, their affective and cognitive experiences, and their interaction behaviors while using the visualization component of ADS 2.0; and should subsequently contribute to the design of bibliographic retrieval systems for scientists.
Performance of 75 millimeter-bore arched outer-race ball bearings
NASA Technical Reports Server (NTRS)
Coe, H. H.; Hamrock, B. J.
1976-01-01
An investigation was performed to determine the operating characteristics of 75-mm bore, arched outer-race bearings, and to compare the data with those for a similar, but conventional, deep groove ball bearing. Further, results of an analytical study, made using a computer program developed previously, were compared with the experimental data. Bearings were tested up to 28,000 rpm shaft speed with a load of 2200 N (500 lb). The amount of arching was 0.13, 0.25, and 0.51 mm (.005, .010, and .020 in.). All bearings operated satisfactorily. The outer-race temperatures and the torques, however, were consistently higher for the arched bearings than for the conventional bearing.
Performance evaluation soil samples utilizing encapsulation technology
Dahlgran, J.R.
1999-08-17
Performance evaluation soil samples and method of their preparation uses encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration. 1 fig.
NASA Astrophysics Data System (ADS)
Askari, Davood
The theoretical objectives and accomplishments of this work are the analytical and numerical investigation of material properties and mechanical behavior of carbon nanotubes (CNTs) and nanotube nanocomposites when they are subjected to various loading conditions. First, the finite element method is employed to investigate numerically the effective Young's modulus and Poisson's ratio of a single-walled CNT. Next, the effects of chirality on the effective Young's modulus and Poisson's ratio are investigated, and then variations of the effective coefficients of thermal expansion and effective thermal conductivities are studied for CNTs with different structural configurations. To study the influence of small vacancy defects on the mechanical properties of CNTs, finite element analyses are performed, and the behavior of CNTs with various structural configurations having different types of vacancy defects is studied. It is frequently reported that nano-materials are excellent candidates as reinforcements in nanocomposites to change or enhance the material properties of polymers and their nanocomposites. Second, the inclusion of nano-materials can considerably improve electrical, thermal, and mechanical properties of the bonding agent, i.e., resin. Note that materials at the atomic and molecular level do not usually show isotropic behavior; rather, they have orthotropic properties. Therefore, two-phase and three-phase cylindrically orthotropic composite models consisting of different constituents with orthotropic properties are developed and introduced in this work to analytically predict the effective mechanical properties and mechanical behavior of such structures when they are subjected to various external loading conditions. To verify the analytically obtained exact solutions, finite element analyses of identical cylindrical structures are also performed, and the results are compared with those obtained analytically, with excellent agreement achieved. 
The third part of this dissertation investigates the growth of vertically aligned, long, high-density arrays of CNTs and novel 3-D carbon nanotube nano-forests. A chemical vapor deposition technique is used to grow radially aligned CNTs on various types of fibrous materials, such as silicon carbide, carbon, Kevlar, and glass fibers and cloths, that can be used for the fabrication of multifunctional, high-performing laminated nanocomposite structures. Using the CNT nano-forest cloths, nanocomposite samples are prepared and tested, giving promising results for the improvement of the mechanical properties and performance of composite structures.
Nikolac Gabaj, Nora; Miler, Marijana; Vrtarić, Alen; Hemar, Marina; Filipi, Petra; Kocijančić, Marija; Šupak Smolčić, Vesna; Ćelap, Ivana; Šimundić, Ana-Maria
2018-04-25
The aim of our study was to perform verification of serum indices on three clinical chemistry platforms. This study was done on three analyzers: Abbott Architect c8000, Beckman Coulter AU5800 (BC) and Roche Cobas 6000 c501. The following analytical specifications were verified: precision (two patient samples), accuracy (the sample with the highest concentration of interferent was serially diluted and measured values compared to theoretical values), comparability (120 patient samples) and cross-reactivity (samples with increasing concentrations of interferent were divided in two aliquots and the remaining interferents were added to each aliquot; measurements were done before and after adding interferents). The best results for precision were obtained for the H index (0.72%-2.08%). Accuracy for the H index was acceptable for Cobas and BC, while on Architect, deviations in the high concentration range were observed (y=0.02 [0.01-0.07]+1.07 [1.06-1.08]x). All three analyzers showed acceptable results in evaluating accuracy of the L index and unacceptable results for the I index. The H index was comparable between BC and both Architect (Cohen's κ [95% CI]=0.795 [0.692-0.898]) and Roche (Cohen's κ [95% CI]=0.825 [0.729-0.922]), while Roche and Architect were not comparable. The I index was not comparable between any analyzer combination, while the L index was only comparable between Abbott and BC. Cross-reactivity analysis mostly showed that serum indices measurement is affected when a combination of interferences is present. There is heterogeneity between analyzers in the hemolysis, icterus, lipemia (HIL) quality performance. Verification of serum indices in routine work is necessary to establish analytical specifications.
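Cohen's kappa, used above to grade inter-analyzer agreement on categorical index flags, corrects raw percent agreement for agreement expected by chance. A minimal sketch (the category labels in the test are hypothetical flag calls, not data from this study):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical calls on the same samples:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    cats = set(rater_a) | set(rater_b)
    po = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

Values near 1 indicate agreement well beyond chance (e.g. the H-index κ of about 0.8 reported above), while values near 0 indicate little more than chance agreement.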
Kumar, B. Vinodh; Mohan, Thuthi
2018-01-01
OBJECTIVE: Six Sigma is one of the most popular quality management system tools employed for process improvement. The Six Sigma methods are usually applied when the outcome of the process can be measured. This study was done to assess the performance of individual biochemical parameters on a sigma scale by calculating the sigma metrics for individual parameters and to follow the Westgard guidelines for appropriate Westgard rules and levels of internal quality control (IQC) that need to be processed to improve target analyte performance based on the sigma metrics. MATERIALS AND METHODS: This is a retrospective study, and the data required for the study were extracted between July 2015 and June 2016 from a Secondary Care Government Hospital, Chennai. The data obtained for the study are IQC coefficient of variation percentage and External Quality Assurance Scheme (EQAS) bias% for 16 biochemical parameters. RESULTS: For the level 1 IQC, four analytes (alkaline phosphatase, magnesium, triglyceride, and high-density lipoprotein-cholesterol) showed an ideal performance of ≥6 sigma level and five analytes (urea, total bilirubin, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level; for the level 2 IQC, the same four analytes as in level 1 showed a performance of ≥6 sigma level, and four analytes (urea, albumin, cholesterol, and potassium) showed an average performance of <3 sigma level. For all analytes below the 6 sigma level, the quality goal index (QGI) was <0.8, indicating that the area requiring improvement was imprecision, except for cholesterol, whose QGI >1.2 indicated inaccuracy. CONCLUSION: This study shows that sigma metrics is a good quality tool to assess the analytical performance of a clinical chemistry laboratory. Thus, sigma metric analysis provides a benchmark for the laboratory to design a protocol for IQC, address poor assay performance, and assess the efficiency of existing laboratory processes. PMID:29692587
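The sigma metric and quality goal index (QGI) used in this study follow standard definitions from laboratory quality management: sigma combines the allowable total error (TEa) with observed bias and imprecision, and QGI apportions poor performance between imprecision and inaccuracy. A minimal sketch (the TEa, bias, and CV values in the test are hypothetical):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma metric: (TEa% - |bias%|) / CV%. Six or above is 'world class'."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def quality_goal_index(bias_pct, cv_pct):
    """QGI = |bias| / (1.5 * CV). Conventional reading: <0.8 points to
    imprecision, >1.2 to inaccuracy, in between to both."""
    return abs(bias_pct) / (1.5 * cv_pct)
```

For example, an analyte with TEa 10%, bias 2%, and CV 2% scores 4 sigma, below the ideal ≥6 level, and its QGI of about 0.67 would flag imprecision as the area to improve.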
ESIP Earth Sciences Data Analytics (ESDA) Cluster - Work in Progress
NASA Technical Reports Server (NTRS)
Kempler, Steven
2015-01-01
The purpose of this poster is to promote a common understanding of the usefulness of, and activities that pertain to, data analytics and, more broadly, the data scientist; to facilitate collaborations to better understand the cross usage of heterogeneous datasets and to provide accommodating data analytics expertise, now and as needs evolve into the future; and to identify gaps that, once filled, will further collaborative activities. Objectives: provide a forum for academic discussions that gives ESIP members a better understanding of the various aspects of Earth science data analytics; bring in guest speakers to describe external efforts and further teach us about the broader use of data analytics; perform activities that compile use cases generated from specific community needs to cross-analyze heterogeneous data, compile sources of analytics tools (in particular, to satisfy the needs of the above data users), examine gaps between needs and sources, examine gaps between needs and community expertise, and document the specific data analytics expertise needed to perform Earth science data analytics; and seek graduate data analytics Data Science student internship opportunities.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. 
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján
2016-02-04
Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for the state-of-the-art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of solvent and sample compared to classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review surveys comprehensively, and in two parts, the developments in automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies, techniques, and their operational advantages as well as their potentials are further described and discussed. In this first part, an introduction to LPME, its static and dynamic operation modes, and its automation methodologies is given. The LPME techniques are classified according to the different approaches of protection of the extraction solvent using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent, such as membranes and porous media, are reviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. Copyright © 2015 Elsevier B.V. All rights reserved.
Analytical evaluation of the novel Lumipulse G BRAHMS procalcitonin immunoassay.
Ruzzenente, Orazio; Salvagno, Gian Luca; Gelati, Matteo; Lippi, Giuseppe
2016-12-01
This study was designed to evaluate the analytical performance of the novel Lumipulse G1200 BRAHMS procalcitonin (PCT) immunoassay. This analytical evaluation encompassed the calculation of the limit of blank (LOB), limit of detection (LOD), functional sensitivity, intra- and inter-assay imprecision, confirmation of linearity and a comparison with the Vidas BRAHMS PCT assay. The LOB, LOD and functional sensitivity were 0.0010 ng/mL, 0.0016 ng/mL and 0.008 ng/mL, respectively. The total analytical imprecision was found to be 2.1% and the linearity was excellent (r=1.00) in the range of concentrations between 0.006-75.5 ng/mL. The correlation coefficient with Vidas BRAHMS PCT was 0.995 and the equation of the Passing and Bablok regression analysis was [Lumipulse G BRAHMS PCT]=0.76×[Vidas BRAHMS PCT]+0.04. The mean overall bias of Lumipulse G BRAHMS PCT versus Vidas BRAHMS PCT was -3.03 ng/mL (95% confidence interval [CI]: -4.32 to -1.74 ng/mL), whereas the mean bias in samples with PCT concentration between 0-10 ng/mL was -0.49 ng/mL (95% CI: -0.77 to -0.24 ng/mL). The diagnostic agreement was 100% at 0.5 ng/mL, 97% at 2.0 ng/mL and 95% at 10 ng/mL, respectively. These results attest that Lumipulse G BRAHMS PCT exhibits excellent analytical performance, among the best of the methods currently available on the diagnostic market. However, the significant bias compared to the Vidas BRAHMS PCT suggests that the methods cannot be used interchangeably.
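Passing-Bablok regression, used above for the method comparison, is a robust fit closely related to the median-of-pairwise-slopes (Theil-Sen) estimator. The sketch below uses Theil-Sen as a simplified stand-in; true Passing-Bablok additionally applies an offset to the slope median and handles sign conventions. All paired values in the test are hypothetical.

```python
from statistics import median

def theil_sen(x, y):
    """Median pairwise slope with median-residual intercept: a simplified
    stand-in for Passing-Bablok method-comparison regression."""
    slopes = [
        (y[j] - y[i]) / (x[j] - x[i])
        for i in range(len(x))
        for j in range(i + 1, len(x))
        if x[j] != x[i]
    ]
    slope = median(slopes)
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept
```

A slope well below 1 (such as the 0.76 reported above) signals a proportional difference between methods, which is why the two assays should not be used interchangeably despite their high correlation.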
Shelley, Jacob T.; Wiley, Joshua S.; Hieftje, Gary M.
2011-01-01
The advent of ambient desorption/ionization mass spectrometry has resulted in a strong interest in ionization sources that are capable of direct analyte sampling and ionization. One source that has enjoyed increasing interest is the Flowing Atmospheric-Pressure Afterglow (FAPA). FAPA has been proven capable of directly desorbing/ionizing samples in any phase (solid, liquid, or gas) and with impressive limits of detection (<100 fmol). The FAPA was also shown to be less affected by competitive-ionization matrix effects than other plasma-based sources. However, the original FAPA design exhibited substantial background levels, cluttered background spectra in the negative-ion mode, and significant oxidation of aromatic analytes, which ultimately compromised analyte identification and quantification. In the present study, a change in the FAPA configuration from a pin-to-plate to a pin-to-capillary geometry was found to vastly improve performance. Background signals in positive- and negative-ionization modes were reduced by 89% and 99%, respectively. Additionally, the capillary anode strongly reduced the amount of atomic oxygen that could cause oxidation of analytes. Temperatures of the gas stream that interacts with the sample, which heavily influences desorption capabilities, were compared between the two sources by means of IR thermography. The performance of the new FAPA configuration is evaluated through the determination of a variety of compounds in positive- and negative-ion mode, including agrochemicals and explosives. A detection limit of 4 amol was found for the direct determination of the agrochemical ametryn, and appears to be spectrometer-limited. The ability to quickly screen for analytes in bulk liquid samples with the pin-to-capillary FAPA is also shown. PMID:21627097
A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics
NASA Technical Reports Server (NTRS)
Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan
2013-01-01
In the era of petascale computing, more scientific applications are being deployed on leadership-scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not proven adequate for many mission-critical applications, particularly in the data post-processing stage. For example, some scientific applications generate datasets composed of a vast number of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Including such dimensional knowledge in the data organization can benefit the efficiency of data post-processing, but it is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high-performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before writing them to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR enables efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with the critical climate modeling application GEOS-5, experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.
An empirical model for calculation of the collimator contamination dose in therapeutic proton beams
NASA Astrophysics Data System (ADS)
Vidal, M.; De Marzi, L.; Szymanowski, H.; Guinement, L.; Nauraye, C.; Hierso, E.; Freud, N.; Ferrand, R.; François, P.; Sarrut, D.
2016-02-01
Collimators are used as lateral beam shaping devices in proton therapy with passive scattering beam lines. The dose contamination due to collimator scattering can be as high as 10% of the maximum dose and influences calculation of the output factor or monitor units (MU). To date, commercial treatment planning systems generally use a zero-thickness collimator approximation ignoring edge scattering in the aperture collimator and few analytical models have been proposed to take scattering effects into account, mainly limited to the inner collimator face component. The aim of this study was to characterize and model aperture contamination by means of a fast and accurate analytical model. The entrance face collimator scatter distribution was modeled as a 3D secondary dose source. Predicted dose contaminations were compared to measurements and Monte Carlo simulations. Measurements were performed on two different proton beam lines (a fixed horizontal beam line and a gantry beam line) with divergent apertures and for several field sizes and energies. Discrepancies between analytical algorithm dose prediction and measurements were decreased from 10% to 2% using the proposed model. Gamma-index (2%/1 mm) was respected for more than 90% of pixels. The proposed analytical algorithm increases the accuracy of analytical dose calculations with reasonable computation times.
Electroanalytical sensing of chromium(III) and (VI) utilising gold screen printed macro electrodes.
Metters, Jonathan P; Kadara, Rashid O; Banks, Craig E
2012-02-21
We report the fabrication of gold screen printed macro electrodes, which are electrochemically characterised and contrasted with polycrystalline gold macroelectrodes, and their potential analytical application towards the sensing of chromium(III) and (VI) is critically explored. It is found that, while these gold screen printed macro electrodes have electrode kinetics typically one order of magnitude lower than polycrystalline gold macroelectrodes, as measured via a standard redox probe, in terms of analytical sensing they mimic polycrystalline gold in their performance towards the sensing of chromium(III) and (VI), whilst boasting additional advantages over the macro electrode due to their disposable one-shot nature and the ease of mass production. An additional advantage of these gold screen printed macro electrodes compared to polycrystalline gold is that there is no need to potential cycle the latter to form the required gold oxide, which simplifies the analytical protocol. We demonstrate that gold screen printed macro electrodes allow the low micro-molar sensing of chromium(VI) in aqueous solutions over the range 10 to 1600 μM with a limit of detection (3σ) of 4.4 μM. The feasibility of the analytical protocol is also tested through chromium(VI) detection in environmental samples.
Capillary waveguide optrodes: an approach to optical sensing in medical diagnostics
NASA Astrophysics Data System (ADS)
Lippitsch, Max E.; Draxler, Sonja; Kieslinger, Dietmar; Lehmann, Hartmut; Weigl, Bernhard H.
1996-07-01
Glass capillaries with a chemically sensitive coating on the inner surface are used as optical sensors for medical diagnostics. A capillary simultaneously serves as a sample compartment, a sensor element, and an inhomogeneous optical waveguide. Various detection schemes based on absorption, fluorescence intensity, or fluorescence lifetime are described. In absorption-based capillary waveguide optrodes the absorption in the sensor layer is analyte dependent; hence light transmission along the inhomogeneous waveguiding structure formed by the capillary wall and the sensing layer is a function of the analyte concentration. Similarly, in fluorescence-based capillary optrodes the fluorescence intensity or the fluorescence lifetime of an indicator dye fixed in the sensing layer is analyte dependent; thus the specific property of fluorescent light excited in the sensing layer and thereafter guided along the inhomogeneous waveguiding structure is a function of the analyte concentration. Both schemes are experimentally demonstrated, one with carbon dioxide as the analyte and the other one with oxygen. The device combines optical sensors with the standard glass capillaries usually applied to gather blood drops from fingertips, to yield a versatile diagnostic instrument, integrating the sample compartment, the optical sensor, and the light-collecting optics into a single piece. This ensures enhanced sensor performance as well as improved handling compared with other sensors. Keywords: waveguide, blood gases, medical diagnostics.
Griffin, Brian M.; Larson, Vincent E.
2016-11-25
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
NASA Astrophysics Data System (ADS)
Takeda, M.; Nakajima, H.; Zhang, M.; Hiratsuka, T.
2008-04-01
To obtain reliable diffusion parameters for diffusion testing, multiple experiments should not only be cross-checked but the internal consistency of each experiment should also be verified. In the through- and in-diffusion tests with solution reservoirs, test interpretation of different phases often makes use of simplified analytical solutions. This study explores the feasibility of steady, quasi-steady, equilibrium and transient-state analyses using simplified analytical solutions with respect to (i) valid conditions for each analytical solution, (ii) potential error, and (iii) experimental time. For increased generality, a series of numerical analyses are performed using unified dimensionless parameters and the results are all related to dimensionless reservoir volume (DRV) which includes only the sorptive parameter as an unknown. This means the above factors can be investigated on the basis of the sorption properties of the testing material and/or tracer. The main findings are that steady, quasi-steady and equilibrium-state analyses are applicable when the tracer is not highly sorptive. However, quasi-steady and equilibrium-state analyses become inefficient or impractical compared to steady state analysis when the tracer is non-sorbing and material porosity is significantly low. Systematic and comprehensive reformulation of analytical models enables the comparison of experimental times between different test methods. The applicability and potential error of each test interpretation can also be studied. These can be applied in designing, performing, and interpreting diffusion experiments by deducing DRV from the available information for the target material and tracer, combined with the results of this study.
Mangas-Sanjuan, Victor; Navarro-Fontestad, Carmen; García-Arieta, Alfredo; Trocóniz, Iñaki F; Bermejo, Marival
2018-05-30
A semi-physiological two-compartment pharmacokinetic model with two active metabolites (primary (PM) and secondary (SM)) and with saturable and non-saturable pre-systemic efflux transport and intestinal and hepatic metabolism has been developed. The aim of this work is to explore, in several scenarios, which analyte (parent drug or either of the metabolites) is the most sensitive to changes in drug product performance (i.e. differences in in vivo dissolution) and to make recommendations based on the simulation outcomes. A total of 128 scenarios (2 Biopharmaceutics Classification System (BCS) drug types, 2 levels of K_M,Pgp, in 4 metabolic scenarios at 2 dose levels in 4 quality levels of the drug product) were simulated for BCS class II and IV drugs. Monte Carlo simulations of all bioequivalence studies were performed in NONMEM 7.3. Results showed the parent drug (PD) was the most sensitive analyte for bioequivalence trials in all the studied scenarios. PM and SM revealed less than or the same sensitivity to detect differences in pharmaceutical quality as the PD. Another relevant result is that the mean point estimate of C_max and AUC from Monte Carlo simulations allows the most sensitive analyte to be selected more accurately than the criterion based on the percentage of failed or successful BE studies, even for metabolites, which frequently show greater variability than the PD. Copyright © 2018 Elsevier B.V. All rights reserved.
The Climate Data Analytic Services (CDAS) Framework.
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; Duffy, D.
2016-12-01
Faced with unprecedented growth in climate data volume and demand, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute data processing workflows combining common analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using vetted climate data analysis tools (ESMF, CDAT, NCO, etc.). A dynamic caching architecture enables interactive response times. CDAS utilizes Apache Spark for parallelization and a custom array framework for processing huge datasets within limited memory spaces. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be accessed using either direct web service calls, a python script, a unix-like shell client, or a javascript-based web application. Client packages in python, scala, or javascript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends and variability, and compare multiple reanalysis datasets.
Moskovets, Eugene; Misharin, Alexander; Laiko, Viktor; Doroshenko, Vladimir
2016-07-15
A comparative MS study was conducted on the analytical performance of two matrix-assisted laser desorption/ionization (MALDI) sources that operated at either low pressure (∼1 Torr) or at atmospheric pressure. In both cases, the MALDI sources were attached to a linear ion trap mass spectrometer equipped with a two-stage ion funnel. The obtained results indicate that the limits of detection, in the analysis of identical peptide samples, were much lower with the source that was operated slightly below the 1-Torr pressure. In the low-pressure (LP) MALDI source, ion signals were observed at a laser fluence that was considerably lower than the one determining the appearance of ion signals in the atmospheric pressure (AP) MALDI source. When the near-threshold laser fluences were used to record MALDI MS spectra at 1-Torr and 750-Torr pressures, the level of chemical noise at the 1-Torr pressure was much lower compared to that at AP. The dependencies of the analyte ion signals on the accelerating field, which dragged the ions from the MALDI plate to the MS analyzer, are presented for the LP and AP MALDI sources. The study indicates that the laser fluence, background gas pressure, and field accelerating the ions away from a MALDI plate were the main parameters which determined the ion yield, signal-to-noise (S/N) ratios, the fragmentation of the analyte ions, and adduct formation in the LP and AP MALDI MS methods. The presented results can be helpful for a deeper insight into the mechanisms responsible for the ion formation in MALDI. Copyright © 2016 Elsevier Inc. All rights reserved.
Seyyal, Emre; Malik, Abdul
2017-04-29
Principles of sol-gel chemistry were utilized to create silica- and germania-based dual-ligand surface-bonded sol-gel coatings providing enhanced performance in capillary microextraction (CME) through a combination of ligand superhydrophobicity and π-π interaction. These organic-inorganic hybrid coatings were prepared using sol-gel precursors with bonded perfluorododecyl (PF-C 12 ) and phenethyl (PhE) ligands. Here, the ability of the PF-C 12 ligand to provide enhanced hydrophobic interaction was advantageously combined with the π-π interaction capability of the PhE moiety to attain the desired sorbent performance in CME. The effect of the inorganic sorbent component on microextraction performance was explored by comparing microextraction characteristics of silica- and germania-based sol-gel sorbents. The germania-based dual-ligand sol-gel sorbent demonstrated superior CME performance compared to its silica-based counterpart. Thermogravimetric analysis (TGA) of the created silica- and germania-based dual-ligand sol-gel sorbents suggested higher carbon loading on the germania-based sorbent. This might be indicative of more effective condensation of the organic ligand-bearing sol-gel-active chemical species to the germania-based sol-gel network (than to its silica-based counterpart) evolving in the sol solution. The type and concentration of the organic ligands were varied in the sol-gel sorbents to fine-tune extraction selectivity toward different classes of analytes. Specific extraction (SE) values were used for an objective comparison of the prepared sol-gel CME sorbents. The sorbents with higher content of PF-C 12 showed remarkable affinity for aliphatic hydrocarbons. Compared to their single-ligand sol-gel counterparts, the dual-ligand sol-gel coatings demonstrated significantly superior CME performance in the extraction of alkylbenzenes, providing up to ∼65.0% higher SE values. 
The prepared sol-gel CME coatings provided low ng/L limits of detection (LODs) (4.2-26.3 ng/L) for environmentally important analytes including polycyclic aromatic hydrocarbons, ketones and aliphatic hydrocarbons. In CME-GC experiments (n = 5), the capillary-to-capillary RSD value was ∼2.1%; such a low RSD value is indicative of excellent reproducibility of the sol-gel method used for the preparation of these CME coatings. The dual-ligand sol-gel coating provided stable performance in capillary microextraction of analytes from saline samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Rocchitta, Gaia; Spanu, Angela; Babudieri, Sergio; Latte, Gavinella; Madeddu, Giordano; Galleri, Grazia; Nuvoli, Susanna; Bagella, Paola; Demartis, Maria Ilaria; Fiore, Vito; Manetti, Roberto; Serra, Pier Andrea
2016-01-01
Enzyme-based chemical biosensors are based on biological recognition. In order to operate, the enzymes must be available to catalyze a specific biochemical reaction and be stable under the normal operating conditions of the biosensor. Design of biosensors is based on knowledge about the target analyte, as well as the complexity of the matrix in which the analyte has to be quantified. This article reviews the problems resulting from the interaction of enzyme-based amperometric biosensors with complex biological matrices containing the target analyte(s). One of the most challenging disadvantages of amperometric enzyme-based biosensor detection is signal reduction from fouling agents and interference from chemicals present in the sample matrix. This article, therefore, investigates the principles of functioning of enzymatic biosensors, their analytical performance over time and the strategies used to optimize their performance. Moreover, the composition of biological fluids as a function of their interaction with biosensing will be presented. PMID:27249001
Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-380
NASA Technical Reports Server (NTRS)
Kramer, W. H.; Smith, J. J.
1978-01-01
The performance testing and analytical teardown of CF6-6D engine serial number 451-380, which was recently removed from a DC-10 aircraft, is summarized. The investigative test program was conducted inbound prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Aspects discussed include the analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both specific fuel consumption and exhaust gas temperature.
Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-479
NASA Technical Reports Server (NTRS)
Kramer, W. H.; Smith, J. J.
1978-01-01
The performance testing and analytical teardown of a CF6-6D engine is summarized. This engine had completed its initial installation on a DC-10 aircraft. The investigative test program was conducted inbound prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Included in this report is a detailed analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both SFC (specific fuel consumption) and EGT (exhaust gas temperature).
Bourget, P; Amin, A; Moriceau, A; Cassard, B; Vidal, F; Clement, R
2012-12-01
This study compares the performance of three analytical methods devoted to the Analytical Quality Control (AQC) of therapeutic solutions prepared in the care environment, referred to as Therapeutic Objects (TOs). We explored the pharmacological model of two widely used anthracyclines, i.e. adriamycin and epirubicin, and compared the performance of HPLC versus two vibrational spectroscopic techniques: a tandem UV/Vis-FTIR on the one hand and Raman spectroscopy (RS) on the other. The three methods give good results for the key criteria of repeatability, reproducibility and accuracy. Spearman and Kendall correlation tests confirm the non-inferiority of the vibrational techniques as an alternative to the reference method (HPLC). The selection of bands for characterization and quantification by RS is the result of a gradual adjustment process that takes matrix effects into account. From the perspective of an AQC associated with the release of TOs, RS displays several advantages: (a) it decides quickly (~2 min), simultaneously and without intrusion or sample withdrawal, on both the nature of the packaging and of the solvent, regardless of the compound of interest, which is the founding asset of the method; (b) it can explore any kind of TO both qualitatively and quantitatively; (c) operator safety is guaranteed during production and in the laboratory; (d) the elimination of analytical releases and waste contributes to protecting the environment; (e) no consumables are required; (f) maintenance costs are negligible; and (g) technician training requires only a small budget. These results already show that the RS technology is potentially a strong contributor to the safety of the medication cycle and to the fight against the iatrogenic effects of drugs. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.
2017-09-01
In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method and the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On the one hand, as compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, due to which it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, as compared to the phase field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF model and the PFSF model in terms of accuracy, stability, convergence speed and mass conservation. It is also shown in numerical tests that the FESF model can effectively simulate problems with high density/viscosity ratio, high Reynolds number and severe topological interfacial changes.
Performance evaluation of the Abbott CELL-DYN Ruby and the Sysmex XT-2000i haematology analysers.
Leers, M P G; Goertz, H; Feller, A; Hoffmann, J J M L
2011-02-01
Two mid-range haematology analysers (Abbott CELL-DYN Ruby and Sysmex XT-2000i) were evaluated to determine their analytical performance and workflow efficiency in the haematology laboratory. In total 418 samples were processed for determining equivalence of complete blood count (CBC) measurements, and 100 for reticulocyte comparison. Blood smears served for assessing the agreement of the differential counts. Inter-instrument agreement for most parameters was good although small numbers of discrepancies were observed. Systematic biases were found for mean cell volume, reticulocytes, platelets and mean platelet volume. CELL-DYN Ruby WBC differentials were obtained with all samples while the XT-2000i suppressed differentials partially or completely in 13 samples (3.1%). WBC subpopulation counts were otherwise in good agreement with no major outliers. Following first-pass CBC/differential analysis, 88 (21%) of XT-2000i samples required further analyser processing compared to 18 (4.3%) for the CELL-DYN Ruby. Smear referrals for suspected WBC/nucleated red blood cells and platelet abnormalities were indicated for 106 (25.4%) and 95 (22.7%) of the XT-2000i and CELL-DYN Ruby samples respectively. Flagging efficiencies for both analysers were found to be similar. The Sysmex XT-2000i and Abbott CELL-DYN Ruby analysers have broadly comparable analytical performance, but the CELL-DYN Ruby showed superior first-pass efficiency. © 2010 Blackwell Publishing Ltd.
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large-scale graph processing is a major research area for Big Data exploration. Vertex-centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex-centric algorithms to under-perform due to a poor compute-to-communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph-centric framework co-designed with a distributed persistent graph storage for large-scale graph analytics on commodity clusters. We introduce a sub-graph-centric programming abstraction that combines the scalability of a vertex-centric approach with the flexibility of shared-memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real-world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open-source vertex-centric implementation.
El-Sheikh, Amjad H; Sweileh, Jamal A; Al-Degs, Yahya S; Insisi, Ahmad A; Al-Rabady, Nancy
2008-02-15
In this work, optimization of multi-residue solid phase extraction (SPE) procedures coupled with high-performance liquid chromatography for the determination of Propoxur, Atrazine and Methidathion from environmental waters is reported. Three different sorbents were used in this work: multi-walled carbon nanotubes (MWCNTs), C18 silica and activated carbon (AC). The three optimized SPE procedures were compared in terms of analytical performance, application to environmental waters, cartridge re-use, adsorption capacity and cost of adsorbent. Although the adsorption capacity of MWCNTs was larger than that of AC and C18, the analytical performance of AC could be made close to that of the other sorbents by appropriate optimization of the SPE procedures. A sample of AC was then oxidized with various oxidizing agents to show that ACs of various surface properties have different enrichment efficiencies. Thus researchers are advised to try ACs of various surface properties in SPE of pollutants prior to using expensive sorbents (such as MWCNTs and C18 silica).
Time-dependent inertia analysis of vehicle mechanisms
NASA Astrophysics Data System (ADS)
Salmon, James Lee
Two methods for performing transient inertia analysis of vehicle hardware systems are developed in this dissertation. The analysis techniques can be used to predict the response of vehicle mechanism systems to the accelerations associated with vehicle impacts. General analytical methods for evaluating translational or rotational system dynamics are generated and evaluated for various system characteristics. The utility of the derived techniques is demonstrated by applying the generalized methods to two vehicle systems. Time-dependent accelerations measured during a vehicle-to-vehicle impact are used as input to perform a dynamic analysis of an automobile liftgate latch and outside door handle. Generalized Lagrange equations for a non-conservative system are used to formulate a second-order nonlinear differential equation defining the response of the components to the transient input. The differential equation is solved by employing the fourth-order Runge-Kutta method. The events are then analyzed using commercially available two-dimensional rigid-body dynamic analysis software. The results of the two analytical techniques are compared to experimental data generated by high-speed film analysis of tests of the two components performed on a high-G acceleration sled at Ford Motor Company.
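The solution scheme described in this abstract, recasting a second-order nonlinear equation of motion in first-order form and stepping it with the classical fourth-order Runge-Kutta method, can be sketched in Python. The system below (inertia, damping, stiffness, and forcing term) is an illustrative stand-in chosen for the sketch, not the dissertation's actual latch or door-handle model:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki     for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def make_rhs(I=1.0, c=0.05, k=25.0, accel=lambda t: 0.0):
    """Illustrative second-order system in first-order form:
    theta'' = -(c/I)*theta' - (k/I)*sin(theta) + a(t),
    where a(t) would be the measured base acceleration.  The values of
    I, c, k and the forcing are assumptions made for this sketch."""
    def rhs(t, y):
        theta, omega = y
        return [omega, -(c / I) * omega - (k / I) * math.sin(theta) + accel(t)]
    return rhs

def integrate(rhs, y0, t0, t1, n):
    """March the state from t0 to t1 in n fixed RK4 steps."""
    t, y, h = t0, list(y0), (t1 - t0) / n
    for _ in range(n):
        y = rk4_step(rhs, t, y, h)
        t += h
    return y
```

As a sanity check, in the undamped small-angle limit (sin θ ≈ θ) the model reduces to simple harmonic motion with period 2π/√(k/I), which the integrator reproduces to high accuracy.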
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
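The spreadsheet logic behind this kind of calculation (Excel's NORMINV is the inverse normal CDF) can be reproduced in any language. The Python sketch below is a generic illustration of the underlying idea rather than the authors' exact worksheet: it computes the fraction of a Gaussian population falling outside fixed reference limits when the measuring system adds a normalized bias and inflates the standard deviation, and then traces one iso-quality curve of bias/imprecision combinations for an assumed target fraction (the target and search range are choices made for the demo):

```python
import math

def phi(x):
    """Standard normal CDF, using only the stdlib error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fraction_outside(bias, sd_ratio, z=1.959964):
    """Total fraction of results outside reference limits placed at
    mean +/- z*SD, for a normalized analytical bias (in reference-SD
    units) and an SD inflated by the factor sd_ratio."""
    lower = phi((-z - bias) / sd_ratio)
    upper = 1.0 - phi((z - bias) / sd_ratio)
    return lower + upper

def sd_for_target(bias, target, lo=1.0, hi=3.0, iters=60):
    """Bisect for the sd_ratio that, combined with the given bias,
    keeps the outside-fraction at the chosen target (assumes a
    solution exists in [lo, hi]; fraction_outside grows with sd_ratio)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if fraction_outside(bias, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With no analytical error at all, `fraction_outside(0.0, 1.0)` recovers the classical 2.5% in each tail (5% total); holding the total constant while varying bias and re-solving for the allowable imprecision sweeps out a maximum-combination curve of the kind the paper discusses.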
CREATE-IP and CREATE-V: Data and Services Update
NASA Astrophysics Data System (ADS)
Carriere, L.; Potter, G. L.; Hertz, J.; Peters, J.; Maxwell, T. P.; Strong, S.; Shute, J.; Shen, Y.; Duffy, D.
2017-12-01
The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center and the Earth System Grid Federation (ESGF) are working together to build a uniform environment for the comparative study and use of a group of reanalysis datasets of particular importance to the research community. This effort is called the Collaborative REAnalysis Technical Environment (CREATE) and it contains two components: the CREATE-Intercomparison Project (CREATE-IP) and CREATE-V. This year's efforts included generating and publishing an atmospheric reanalysis ensemble mean and spread and improving the analytics available through CREATE-V. Related activities included adding access to subsets of the reanalysis data through ArcGIS and expanding the visualization tool to GMAO forecast data. This poster will present the access mechanisms to this data and use cases including example Jupyter Notebook code. The reanalysis ensemble was generated using two methods, first using standard Python tools for regridding, extracting levels and creating the ensemble mean and spread on a virtual server in the NCCS environment. The second was using a new analytics software suite, the Earth Data Analytics Services (EDAS), coupled with a high-performance Data Analytics and Storage System (DASS) developed at the NCCS. Results were compared to validate the EDAS methodologies, and the results, including time to process, will be presented. The ensemble includes selected 6 hourly and monthly variables, regridded to 1.25 degrees, with 24 common levels used for the 3D variables. Use cases for the new data and services will be presented, including the use of EDAS for the backend analytics on CREATE-V, the use of the GMAO forecast aerosol and cloud data in CREATE-V, and the ability to connect CREATE-V data to NCCS ArcGIS services.
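The first method described above, computing an ensemble mean and spread with standard Python tools, might look like the minimal NumPy sketch below. Synthetic data stands in for the regridded reanalysis fields, and the actual CREATE-IP definitions and tooling (EDAS) may differ:

```python
import numpy as np

# Hypothetical stack of reanalysis fields already regridded to a common
# 1.25-degree grid: shape (n_reanalyses, n_lat, n_lon).
fields = np.random.default_rng(0).normal(288.0, 5.0, size=(4, 145, 288))

ens_mean = fields.mean(axis=0)            # ensemble mean per grid cell
ens_spread = fields.std(axis=0, ddof=1)   # spread as inter-member std dev
```

Per-cell reductions along the member axis like these are embarrassingly parallel, which is why the same computation maps naturally onto a server-side analytics service such as EDAS.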
NASA Astrophysics Data System (ADS)
Priyadarshini, Lakshmi
Frequently transported packaged goods are prone to damage from impact, jolting, or vibration in transit. Fragile goods, for example glass, ceramics, and porcelain, are susceptible to mechanical stresses. Hence ancillary materials like cushions play an important role when utilized within a package. In this work, an analytical model of a 3D cellular structure is established based on the Kelvin model and a lattice structure. The research provides a comparative study between the 3D-printed Kelvin unit structure and the 3D-printed lattice structure. The comparative investigation is based on parameters defining cushion performance, such as cushion creep, indentation, and cushion-curve analysis. The application of 3D printing is in rapid prototyping, where the study provides information on which model delivers the better form of energy absorption. 3D-printed foam is shown to be a cost-effective approach for prototyping. The research also investigates the selection of material for the 3D printing process. As cushion development demands flexible material, three-dimensional printing with a material having elastomeric properties is required. Further, the concept of the cushion design is based on the Kelvin model structure and the lattice structure. The analytical solution provides the cushion-curve analysis with respect to the results observed when load is applied over the cushion. The results are reported on the basis of attenuation and amplification curves.
Template based rotation: A method for functional connectivity analysis with a priori templates☆
Schultz, Aaron P.; Chhatwal, Jasmeer P.; Huijbers, Willem; Hedden, Trey; van Dijk, Koene R.A.; McLaren, Donald G.; Ward, Andrew M.; Wigman, Sarah; Sperling, Reisa A.
2014-01-01
Functional connectivity magnetic resonance imaging (fcMRI) is a powerful tool for understanding the network level organization of the brain in research settings and is increasingly being used to study large-scale neuronal network degeneration in clinical trial settings. Presently, a variety of techniques, including seed-based correlation analysis and group independent components analysis (with either dual regression or back projection) are commonly employed to compute functional connectivity metrics. In the present report, we introduce template based rotation, a novel analytic approach optimized for use with a priori network parcellations, which may be particularly useful in clinical trial settings. Template based rotation was designed to leverage the stable spatial patterns of intrinsic connectivity derived from out-of-sample datasets by mapping data from novel sessions onto the previously defined a priori templates. We first demonstrate the feasibility of using previously defined a priori templates in connectivity analyses, and then compare the performance of template based rotation to seed based and dual regression methods by applying these analytic approaches to an fMRI dataset of normal young and elderly subjects. We observed that template based rotation and dual regression are approximately equivalent in detecting fcMRI differences between young and old subjects, demonstrating similar effect sizes for group differences and similar reliability metrics across 12 cortical networks. Both template based rotation and dual regression demonstrated larger effect sizes and comparable reliabilities as compared to seed based correlation analysis, though all three methods yielded similar patterns of network differences.
When performing inter-network and sub-network connectivity analyses, we observed that template based rotation offered greater flexibility, larger group differences, and more stable connectivity estimates as compared to dual regression and seed based analyses. This flexibility owes to the reduced spatial and temporal orthogonality constraints of template based rotation as compared to dual regression. These results suggest that template based rotation can provide a useful alternative to existing fcMRI analytic methods, particularly in clinical trial settings where predefined outcome measures and conserved network descriptions across groups are at a premium. PMID:25150630
Hofmann, Hannes G; Keck, Benjamin; Rohkohl, Christopher; Hornegger, Joachim
2011-01-01
Interventional reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. Hardware optimization is not optional but mandatory for interventional image processing and, in particular, for image reconstruction due to the high demands on performance. Several groups have published fast analytical 3-D reconstruction on highly parallel hardware such as GPUs to mitigate this issue. The authors show that the performance of modern CPU-based systems is of the same order as current GPUs for static 3-D reconstruction and outperforms them for a recent motion compensated (3-D+time) image reconstruction algorithm. This work investigates two algorithms: static 3-D reconstruction as well as a recent motion compensated algorithm. The evaluation was performed using a standardized reconstruction benchmark, RabbitCT, to obtain comparable results, plus two additional clinical data sets. The authors demonstrate for a parametric B-spline motion estimation scheme that the derivative computation, which requires many write operations to memory, performs poorly on the GPU and can highly benefit from modern CPU architectures with large caches. Moreover, on a 32-core Intel Xeon server system, the authors achieve linear scaling with the number of cores used and reconstruction times almost in the same range as current GPUs. Algorithmic innovations in the field of motion compensated image reconstruction may lead to a shift back to CPUs in the future. For analytical 3-D reconstruction, the authors show that the gap between GPUs and CPUs has become smaller. It can be performed in less than 20 s (on-the-fly) using a 32-core server.
NASA Technical Reports Server (NTRS)
Shinoda, Patrick M.
1996-01-01
A full-scale helicopter rotor test was conducted in the NASA Ames 80- by 120-Foot Wind Tunnel with a four-bladed S-76 rotor system. Rotor performance and loads data were obtained over a wide range of rotor shaft angles-of-attack and thrust conditions at tunnel speeds ranging from 0 to 100 kt. The primary objectives of this test were (1) to acquire forward flight rotor performance and loads data for comparison with analytical results; (2) to acquire S-76 forward flight rotor performance data in the 80- by 120-Foot Wind Tunnel to compare with existing full-scale 40- by 80-Foot Wind Tunnel test data that were acquired in 1977; (3) to evaluate the acoustic capability of the 80- by 120- Foot Wind Tunnel for acquiring blade vortex interaction (BVI) noise in the low speed range and compare BVI noise with in-flight test data; and (4) to evaluate the capability of the 80- by 120-Foot Wind Tunnel test section as a hover facility. The secondary objectives were (1) to evaluate rotor inflow and wake effects (variations in tunnel speed, shaft angle, and thrust condition) on wind tunnel test section wall and floor pressures; (2) to establish the criteria for the definition of flow breakdown (condition where wall corrections are no longer valid) for this size rotor and wind tunnel cross-sectional area; and (3) to evaluate the wide-field shadowgraph technique for visualizing full-scale rotor wakes. This data base of rotor performance and loads can be used for analytical and experimental comparison studies for full-scale, four-bladed, fully articulated rotor systems. Rotor performance and structural loads data are presented in this report.
Broeckhoven, K.; Cabooter, D.; Desmet, G.
2012-01-01
The reintroduction of superficially porous particles has resulted in a leap forward for the separation performance in liquid chromatography. The underlying reasons for the higher efficiency of columns packed with these particles are discussed. The performance of the newly introduced 5 μm superficially porous particles is evaluated and compared to 2.7 μm superficially porous and 3.5 and 5 μm fully porous columns using typical test compounds (alkylphenones) and a relevant pharmaceutical compound (impurity of amoxicillin). The 5 μm superficially porous particles provide a superior kinetic performance compared to both the 3.5 and 5 μm fully porous particles over the entire relevant range of separation conditions. The performance of the superficially porous particles, however, appears to depend strongly on retention and analyte properties, emphasizing the importance of comparing different columns under realistic conditions (high enough k) and using the compound of interest. PMID:29403833
Enhanced spot preparation for liquid extractive sampling and analysis
Van Berkel, Gary J.; King, Richard C.
2015-09-22
A method for performing surface sampling of an analyte includes the step of placing the analyte on a stage with a material in molar excess to the analyte, such that analyte-analyte interactions are prevented and the analyte can be solubilized for further analysis. The material can be a matrix material that is mixed with the analyte. The material can be provided on a sample support. The analyte can then be contacted with a solvent to extract the analyte for further processing, such as by electrospray mass spectrometry.
NASA Astrophysics Data System (ADS)
Martinez, Oscar
Thermal protection systems (TPS) are the key features incorporated into a spacecraft's design to protect it from severe aerodynamic heating during high-speed travel through planetary atmospheres. The thermal protection system is the key technology that enables a spacecraft to be lightweight, fully reusable, and easily maintainable. Add-on TPS concepts have been used since the beginning of the space race. The Apollo space capsule used ablative TPS, and the Space Shuttle Orbiter TPS technology consisted of ceramic tiles and blankets. Many problems arose from the add-on concept, such as incompatibility, high maintenance costs, non-load-bearing design, and lack of robustness and operability. To make the spacecraft's TPS more reliable, robust, and efficient, we investigated the Integral Thermal Protection System (ITPS) concept, in which the load-bearing structure and the TPS are combined into one single component. The design of an ITPS was a challenging task, because the requirements of a load-bearing structure and a TPS are often conflicting. Finite element (FE) analysis is often the preferred method of choice for a structural analysis problem. However, as the structure becomes complex, the computational time and effort for an FE analysis increase. New structural analytical tools were developed, or available ones were modified, to perform a full structural analysis of the ITPS. With analytical tools, the designer is capable of obtaining quick and accurate results and has a good idea of the response of the structure without having to go to an FE analysis. A MATLAB code was developed to analytically determine performance metrics of the ITPS such as stresses, buckling, deflection, and other failure modes. The analytical models provide fast and accurate results that were within 5% of the FEM results. The optimization procedure usually performs 100 function evaluations for every design variable.
Using the analytical models in the optimization procedure saved time: an optimum design was reached in less than an hour, whereas an FE optimization study would take hours to reach an optimum design. Corrugated-core structures were designed for ITPS applications with loads and boundary conditions similar to those of a Space Shuttle-like vehicle. Temperature, buckling, deflection, and stress constraints were considered for the design and optimization process. An optimized design was achieved with consideration of all the constraints. The ITPS design obtained from the analytical solutions was lighter (4.38 lb/ft2) than the ITPS design obtained from a finite element analysis (4.85 lb/ft2). The ITPS boundary effects added local stresses and compressive loads to the top facesheet that were not captured by the 2D plate solutions. The inability to fully capture the boundary effects led to a lighter ITPS when compared to the FE solution. However, the ITPS can withstand substantially larger mechanical loads than the previous designs. Truss-core structures were found to be unsuitable, as they could not withstand the large thermal gradients frequently encountered in ITPS applications.
FPI: FM Success through Analytics
ERIC Educational Resources Information Center
Hickling, Duane
2013-01-01
The APPA Facilities Performance Indicators (FPI) is perhaps one of the most powerful analytical tools that institutional facilities professionals have at their disposal. It is a diagnostic facilities performance management tool that addresses the essential questions that facilities executives must answer to effectively perform their roles. It…
NASA Astrophysics Data System (ADS)
Kirchner-Bossi, Nicolas; Porté-Agel, Fernando
2017-04-01
Wind turbine wakes can significantly disrupt the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This effect makes the layout design of a wind farm play a crucial role in the overall performance of the project. An accurate description of the wake interactions, combined with a computationally affordable layout optimization strategy, is an efficient way to address the problem. This work presents a novel soft-computing approach to optimize the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine positioning setup is developed and tested over an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian-profile velocity deficit [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Results show, compared to the baseline gridded layout, a wind power output increase between 5.5% and 7.7%. In addition, it is observed that the electric cable length at the facilities is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
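For reference, the Gaussian-profile velocity deficit of the cited wake model [1] can be sketched as below. This is a from-memory rendering of the published formulation, with an illustrative wake growth rate; consult the original paper for the authoritative expressions:

```python
import numpy as np

def gaussian_wake_deficit(x, r, d0, ct, k_star=0.03):
    """Normalized velocity deficit Delta-U / U_inf of the Bastankhah &
    Porte-Agel (2014) Gaussian wake model, at downwind distance x and
    radial distance r from the wake center, for a turbine of rotor
    diameter d0 and thrust coefficient ct.

    k_star (wake growth rate) is a tunable parameter; 0.03 is only an
    illustrative value, not one prescribed by the model."""
    beta = 0.5 * (1.0 + np.sqrt(1.0 - ct)) / np.sqrt(1.0 - ct)
    eps = 0.2 * np.sqrt(beta)                    # initial wake width ratio
    sigma = (k_star * x / d0 + eps) * d0         # wake width at distance x
    c = 1.0 - np.sqrt(1.0 - ct / (8.0 * (sigma / d0) ** 2))
    return c * np.exp(-r ** 2 / (2.0 * sigma ** 2))

# Deficit on the wake centerline, 7 rotor diameters downstream:
deficit = gaussian_wake_deficit(x=7 * 80.0, r=0.0, d0=80.0, ct=0.8)
```

In a layout optimizer, an expression like this is evaluated for every upstream-downstream turbine pair each generation, which is why a cheap closed-form model matters.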
Data quality in drug discovery: the role of analytical performance in ligand binding assays
NASA Astrophysics Data System (ADS)
Wätzig, Hermann; Oltmann-Norden, Imke; Steinicke, Franziska; Alhazmi, Hassan A.; Nachbar, Markus; El-Hady, Deia Abd; Albishri, Hassan M.; Baumann, Knut; Exner, Thomas; Böckler, Frank M.; El Deeb, Sami
2015-09-01
Despite its importance and all the considerable efforts made, progress in drug discovery is limited. One main reason for this is the partly questionable data quality. Models relating biological activity and structures, as well as in silico predictions, rely on precisely and accurately measured binding data. However, these data vary so strongly that only variations by orders of magnitude are considered unreliable. This can certainly be improved considering the high analytical performance in pharmaceutical quality control. Thus the principles, properties and performances of biochemical and cell-based assays are revisited and evaluated. Among the biochemical assays, immunoassays, fluorescence assays, surface plasmon resonance, isothermal calorimetry, nuclear magnetic resonance and affinity capillary electrophoresis are discussed in detail; in addition, radiation-based ligand binding assays, mass spectrometry, atomic force microscopy and microscale thermophoresis are briefly evaluated. General sources of error, such as solvent, dilution, sample pretreatment and the quality of reagents and reference materials, are also discussed. Biochemical assays can be optimized to provide good accuracy and precision (e.g. percent relative standard deviation <10%). Cell-based assays are often considered superior with respect to biological significance; however, they typically still cannot be considered truly quantitative, in particular when results are compared over longer periods of time or between laboratories. A very careful choice of assays is therefore recommended. Strategies to further optimize assays are outlined, considering the evaluation and reduction of the relevant error sources. Analytical performance and data quality are still advancing and will further advance the progress in drug development.
Systems 1 and 2 thinking processes and cognitive reflection testing in medical students
Tay, Shu Wen; Ryan, Paul; Ryan, C Anthony
2016-01-01
Background: Diagnostic decision-making is made through a combination of Systems 1 (intuition or pattern-recognition) and Systems 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of Systems 1 and 2 thinking among medical students in pre-clinical and clinical programs. Methods: The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 & 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree were studied. Results: Ten percent (13/128) of students gave the intuitive answers to all three questions (suggesting they generally relied on System 1 thinking) while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3–13% had incorrect answers (i.e. that were neither the analytical nor the intuitive responses). Non-native English speaking students (n = 11) had a lower mean number of correct answers compared to native English speakers (n = 117; 1.0 vs. 2.12, respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Conclusions: Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While their CRT performance makes no claims as to their future expertise as clinicians, the test may be used in helping students to understand the importance of awareness and regulation of their thinking processes in clinical practice. PMID:28344696
Kockmann, Tobias; Trachsel, Christian; Panse, Christian; Wahlander, Asa; Selevsek, Nathalie; Grossmann, Jonas; Wolski, Witold E; Schlapbach, Ralph
2016-08-01
Quantitative mass spectrometry is a rapidly evolving methodology applied in a large number of omics-type research projects. During the past years, new designs of mass spectrometers have been developed and launched as commercial systems, while in parallel new data acquisition schemes and data analysis paradigms have been introduced. Core facilities provide access to such technologies, but also actively support researchers in finding and applying the best-suited analytical approach. In order to build a solid foundation for this decision-making process, core facilities need to constantly compare and benchmark the various approaches. In this article we compare the quantitative accuracy and precision of the current state-of-the-art targeted proteomics approaches: single reaction monitoring (SRM), parallel reaction monitoring (PRM) and data-independent acquisition (DIA), across multiple liquid chromatography mass spectrometry (LC-MS) platforms, using a readily available commercial standard sample. All workflows are able to reproducibly generate accurate quantitative data. However, SRM and PRM workflows show higher accuracy and precision compared to DIA approaches, especially when analyzing analytes at low concentrations. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Yeow, Y. T.; Morris, D. H.; Brinson, H. F.
1979-01-01
The paper compares the fracture behavior of a composite material by using the analytical models of Waddoups et al. (1971), Whitney and Nuismer (1974, 1975), and Snyder and Cruse (1975) with experimental results from tests performed on center-notched tensile strips. Laminate configurations of (0°)8s, (0°/90°)4s, (±45°)4s, and (0°/±45°/0°)2s from T300/934 graphite/epoxy are tested. These particular configurations are used so that the effect of various degrees of anisotropy can be studied. The procedure adopted uses the results from one test at a given crack-size aspect ratio to predict the results of tests at other aspect ratios. For those methods that use a characteristic dimension, predictions are made by assuming the magnitude of this dimension to be constant. The validity of this assumption for a laminate is assessed by comparing predicted and experimental results. Analytical models using a characteristic dimension are compared to the model developed by Cruse (1973).
Carbon nanomaterials as broadband airborne ultrasound transducer
NASA Astrophysics Data System (ADS)
Daschewski, M.; Harrer, A.; Prager, J.; Kreutzbruck, M.; Guderian, M.; Meyer-Plath, A.
2012-05-01
A method has been developed for the generation of airborne ultrasound using the thermoacoustic principle applied to carbon materials at the micro- and nanoscale. Such materials are shown to be capable of emitting ultrasound. We tested the acoustic performance of electrospun polyacrylonitrile-derived carbon nanofiber tissues and determined the sound pressure for frequencies up to 350 kHz. The experimental results are compared to analytic calculations.
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data.
NGMIX: Gaussian mixture models for 2D images
NASA Astrophysics Data System (ADS)
Sheldon, Erin
2015-08-01
NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.
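The analytic convolution NGMIX exploits follows from a standard identity: the convolution of two Gaussians is again a Gaussian whose means and covariances add. A toy sketch of that identity (the function name and toy parameters are hypothetical, not NGMIX's API):

```python
import numpy as np

def convolve_gaussians(mu1, cov1, mu2, cov2):
    """Analytic convolution of two 2D Gaussians.

    The result is again a Gaussian whose mean and covariance are the
    sums of the inputs -- the property that lets mixture-of-Gaussian
    galaxy models be convolved with a mixture-of-Gaussian PSF without
    any Fourier transform."""
    return (np.asarray(mu1) + np.asarray(mu2),
            np.asarray(cov1) + np.asarray(cov2))

galaxy = (np.zeros(2), np.array([[4.0, 0.5], [0.5, 2.0]]))  # toy component
psf = (np.zeros(2), np.array([[1.0, 0.0], [0.0, 1.0]]))     # toy PSF
mu, cov = convolve_gaussians(*galaxy, *psf)
```

For full mixtures, an N-component model convolved with an M-component PSF yields N x M output components with pairwise-multiplied weights, still entirely in closed form.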
NASA Technical Reports Server (NTRS)
Ulbricht, T. E.; Hemminger, J. A.
1986-01-01
The low flow rate and high head rise requirements of hydrogen/oxygen auxiliary propulsion systems make the application of centrifugal pumps difficult. Positive displacement pumps are well-suited for these flow conditions, but little is known about their performance and life characteristics in liquid hydrogen. An experimental and analytical investigation was conducted to determine the performance and life characteristics of a vane-type, positive displacement pump. In the experimental part of this effort, mass flow rate and shaft torque were determined as functions of shaft speed and pump pressure rise. Since liquid hydrogen offers little lubrication in a rubbing situation, pump life is an issue. During the life test, the pump was operated intermittently for 10 hr at the steady-state point of 0.074 lbm/sec (0.03 kg/sec) flow rate, 3000 psid (2.07 MPa) pressure rise, and 8000 rpm (838 rad/sec) shaft speed. Pump performance was monitored during the life test series and the results indicated no loss in performance. Material loss from the vanes was recorded and wear of the other components was documented. In the analytical part of this effort, a comprehensive pump performance analysis computer code, developed in-house, was used to predict pump performance. The results of the experimental investigation are presented and compared with the results of the analysis. Results of the life test are also presented.
Woodworth, M.T.; Connor, B.F.
2001-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-165 (trace constituents), M-158 (major constituents), N-69 (nutrient constituents), N-70 (nutrient constituents), P-36 (low ionic-strength constituents), and Hg-32 (mercury) -- that were distributed in April 2001 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 73 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
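The report does not spell out its nonparametric procedure, but a common robust choice for a "most probable value" from interlaboratory results is the median, paired with an IQR-based pseudo-sigma for spread. A sketch with hypothetical laboratory results (illustrative only, not the USGS method):

```python
import numpy as np

def most_probable_value(results):
    """Robust central value and spread for interlaboratory results.

    Uses the median and an IQR-based pseudo-sigma (the divisor 1.349
    makes it match the standard deviation for Gaussian data), so a few
    outlying laboratories barely move the estimates."""
    x = np.asarray(results, dtype=float)
    median = float(np.median(x))
    q1, q3 = np.percentile(x, [25, 75])
    f_pseudosigma = (q3 - q1) / 1.349
    return median, f_pseudosigma

labs = [10.1, 9.8, 10.0, 10.3, 9.9, 14.2, 10.2]  # one outlier lab
mpv, spread = most_probable_value(labs)
```

Note how the single 14.2 outlier leaves the median essentially untouched, which is precisely why nonparametric statistics suit round-robin laboratory data.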
Woodworth, M.T.; Conner, B.F.
2002-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-169 (trace constituents), M-162 (major constituents), N-73 (nutrient constituents), N-74 (nutrient constituents), P-38 (low ionic-strength constituents), and Hg-34 (mercury) -- that were distributed in March 2002 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 93 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, Mark T.; Connor, Brooke F.
2003-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-171 (trace constituents), M-164 (major constituents), N-75 (nutrient constituents), N-76 (nutrient constituents), P-39 (low ionic-strength constituents), and Hg-35 (mercury) -- that were distributed in September 2002 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 102 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, Mark T.; Connor, Brooke F.
2002-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-167 (trace constituents), M-160 (major constituents), N-71 (nutrient constituents), N-72 (nutrient constituents), P-37 (low ionic-strength constituents), and Hg-33 (mercury) -- that were distributed in September 2001 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 98 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Farrar, Jerry W.; Copen, Ashley M.
2000-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-161 (trace constituents), M-154 (major constituents), N-65 (nutrient constituents), N-66 (nutrient constituents), P-34 (low ionic strength constituents), and Hg-30 (mercury) -- that were distributed in March 2000 to 144 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 132 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Farrar, T.W.
2000-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-159 (trace constituents), M-152 (major constituents), N-63 (nutrient constituents), N-64 (nutrient constituents), P-33 (low ionic strength constituents), and Hg-29 (mercury) -- that were distributed in October 1999 to 149 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 131 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, Mark T.; Connor, Brooke F.
2003-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-173 (trace constituents), M-166 (major constituents), N-77 (nutrient constituents), N-78 (nutrient constituents), P-40 (low ionic-strength constituents), and Hg-36 (mercury) -- that were distributed in March 2003 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 110 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Connor, B.F.; Currier, J.P.; Woodworth, M.T.
2001-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-163 (trace constituents), M-156 (major constituents), N-67 (nutrient constituents), N-68 (nutrient constituents), P-35 (low ionic strength constituents), and Hg-31 (mercury) -- that were distributed in October 2000 to 126 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 122 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
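The nonparametric determination of the "most probable value" described in these U.S. Geological Survey reports can be sketched as a robust location estimate over the interlaboratory results. The following is a minimal illustration, assuming (as is common practice in such programs, not stated in the reports themselves) the median as the most probable value and the F-pseudosigma as the dispersion estimate; the laboratory results are invented:

```python
import statistics

def most_probable_value(results):
    """Nonparametric most-probable-value estimate: the median of all
    laboratory results for one analyte, robust to outlying laboratories."""
    return statistics.median(results)

def f_pseudosigma(results):
    """Robust dispersion: interquartile range divided by 1.349, which
    matches the standard deviation when the data are Gaussian."""
    q1, _, q3 = statistics.quantiles(results, n=4, method="inclusive")
    return (q3 - q1) / 1.349

# Hypothetical results (mg/L) reported by seven laboratories,
# including one gross outlier
lab_results = [4.8, 5.0, 5.1, 5.1, 5.2, 5.3, 9.9]
mpv = most_probable_value(lab_results)   # 5.1, unaffected by the outlier
spread = f_pseudosigma(lab_results)
```

The median/IQR pair is preferred over the mean/standard deviation here because a single grossly erroneous laboratory would otherwise dominate the consensus value.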
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
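The reference change value combination discussed in this abstract can be sketched numerically. The following is a hedged illustration, not the paper's exact model: the classical Gaussian formula RCV = sqrt(2) * Z * sqrt(CVA^2 + CVI^2), alongside one common way of obtaining asymmetric limits under log-Gaussian assumptions (converting each CV to a log-scale standard deviation via sigma = sqrt(ln(CV^2 + 1))); the example CV values are invented:

```python
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classical two-sided 95% reference change value: the smallest
    relative change between two serial results that exceeds combined
    analytical (cv_a) and within-subject biological (cv_i) variation."""
    return math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV limits under log-Gaussian distributions: each CV
    is converted to a log-scale SD via sigma = sqrt(ln(CV^2 + 1))."""
    sigma = math.sqrt(math.log(cv_a**2 + 1) + math.log(cv_i**2 + 1))
    up = math.exp(z * math.sqrt(2) * sigma) - 1
    down = math.exp(-z * math.sqrt(2) * sigma) - 1
    return down, up

# Hypothetical CVs, e.g. plasma glucose: CV_A = 3%, CV_I = 5%
rcv = rcv_gaussian(0.03, 0.05)       # about 0.162, i.e. a 16% change
lo, hi = rcv_lognormal(0.03, 0.05)   # asymmetric: about -14.9% / +17.5%
```

For small CVs the two formulations give nearly the same magnitude, which is consistent with the abstract's conclusion that the Gaussian and log-Gaussian models produce practically identical specifications.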
A high-performance spatial database based approach for pathology imaging algorithm evaluation
Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A.D.; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J.; Saltz, Joel H.
2013-01-01
Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. 
The validated data were formatted based on the PAIS data model and loaded into a spatial database. To support efficient data loading, we have implemented a parallel data loading tool that takes advantage of multi-core CPUs to accelerate data injection. The spatial database manages both geometric shapes and image features or classifications, and enables spatial sampling, result comparison, and result aggregation through expressive structured query language (SQL) queries with spatial extensions. To provide scalable and efficient query support, we have employed a shared-nothing parallel database architecture, which distributes data homogeneously across multiple database partitions to take advantage of parallel computation power and implements spatial indexing to achieve high I/O throughput. Results: Our work proposes a high-performance, parallel spatial database platform for algorithm validation and comparison. This platform was evaluated by storing, managing, and comparing analysis results from a set of brain tumor whole slide images. The tools we develop are open source and available to download. Conclusions: Pathology image algorithm validation and comparison are essential to iterative algorithm development and refinement. One critical component is the support for queries involving spatial predicates and comparisons. In our work, we develop an efficient data model and parallel database approach to model, normalize, manage and query large volumes of analytical image result data. Our experiments demonstrate that the data partitioning strategy and the grid-based indexing result in good data distribution across database nodes and reduce I/O overhead in spatial join queries through parallel retrieval of relevant data and quick subsetting of datasets. The set of tools in the framework provides a full pipeline to normalize, load, manage and query analytical results for algorithm evaluation. PMID:23599905
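Algorithm validation against human annotations, as in the second scenario above, ultimately rests on spatial-overlap measures evaluated by such queries. A self-contained sketch of the Jaccard index, a standard region-overlap metric, using hypothetical axis-aligned bounding boxes standing in for the polygon geometries that the PAIS database would store:

```python
def rect_area(r):
    """Area of an axis-aligned rectangle (xmin, ymin, xmax, ymax)."""
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def jaccard(a, b):
    """Intersection-over-union of two rectangles: 1.0 means identical
    regions, 0.0 means disjoint. This is the core overlap computation
    behind spatial-join style validation queries."""
    inter = (max(a[0], b[0]), max(a[1], b[1]),
             min(a[2], b[2]), min(a[3], b[3]))
    ia = rect_area(inter)
    union = rect_area(a) + rect_area(b) - ia
    return ia / union if union else 0.0

algo_region  = (0.0, 0.0, 10.0, 10.0)   # hypothetical algorithm-detected region
human_region = (5.0, 0.0, 15.0, 10.0)   # hypothetical pathologist annotation
score = jaccard(algo_region, human_region)  # 50 / 150, about 0.333
```

In the database setting the same computation is pushed down into SQL with spatial extensions so that millions of segmented objects can be compared in parallel rather than in application code.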
Sustainability performance evaluation: Literature review and future directions.
Büyüközkan, Gülçin; Karabulut, Yağmur
2018-07-01
Current global economic activities are increasingly being perceived as unsustainable. Despite the high number of publications, sustainability science remains highly dispersed over diverse approaches and topics. This article aims to provide a structured overview of sustainability performance evaluation related publications and to document the current state of literature, categorize publications, analyze and link trends, as well as highlight gaps and provide research recommendations. 128 articles between 2007 and 2018 are identified. The results suggest that sustainability performance evaluation models shall be more balanced, suitable criteria and their interrelations shall be well defined and subjectivity of qualitative criteria inherent to sustainability indicators shall be considered. To address this subjectivity, group decision-making techniques and other analytical methods that can deal with uncertainty, conflicting indicators, and linguistic evaluations can be used in future works. By presenting research gaps, this review stimulates researchers to establish practically applicable sustainability performance evaluation frameworks to help assess and compare the degree of sustainability, leading to more sustainable business practices. The review is unique in defining corporate sustainability performance evaluation for the first time, exploring the gap between sustainability accounting and sustainability assessment, and coming up with a structured overview of innovative research recommendations about integrating analytical assessment methods into conceptual sustainability frameworks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Analytical performance of a bronchial genomic classifier.
Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean
2016-02-26
The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow-up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015, BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity, defined as input RNA mass; analytical specificity (i.e. potentially interfering substances) as tested on blood and genomic DNA; and assay performance, including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer-positive and cancer-negative samples mixed with either blood (up to 10% input mass) or genomic DNA (up to 10% input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on a >6-unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.
Steuer, Andrea E; Forss, Anna-Maria; Dally, Annika M; Kraemer, Thomas
2014-11-01
In the context of driving under the influence of drugs (DUID), not only common drugs of abuse but also medications with similar mechanisms of action may be relevant. Simultaneous quantification of a variety of drugs and medications relevant in this context allows faster and more effective analyses. Therefore, multi-analyte approaches have gained more and more popularity in recent years. Usually, calibration curves for such procedures contain a mixture of all analytes, which might lead to mutual interferences. In this study, we investigated whether the use of such mixtures leads to reliable results for authentic samples containing only one or two analytes. Five hundred microliters of whole blood were extracted by routine solid-phase extraction (SPE, HCX). Analysis was performed on an AB Sciex 3200 QTrap instrument with ESI+ in scheduled MRM mode. The method was fully validated according to international guidelines, including selectivity, recovery, matrix effects, accuracy and precision, stabilities, and limit of quantification. The selected SPE provided recoveries >60% for all analytes except 6-monoacetylmorphine (MAM), with coefficients of variation (CV) below 15% or 20% for quality controls (QC) LOW and HIGH, respectively. Ion suppression >30% was found for benzoylecgonine, hydrocodone, hydromorphone, MDA, oxycodone, and oxymorphone at QC LOW; however, CVs were always below 10% (n=6 different whole blood samples). Accuracy and precision criteria were fulfilled for all analytes except MAM. Systematic investigation of accuracy determined for QC MED in a multi-analyte mixture compared to samples containing only single analytes revealed no relevant differences for any analyte, indicating that a multi-analyte calibration is suitable for the presented method. Comparison of approximately 60 samples to a former GC-MS method showed good correlation. The newly validated method was successfully applied to more than 1600 routine samples and 3 proficiency tests.
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Rao, Shalinee; Masilamani, Suresh; Sundaram, Sandhya; Duvuru, Prathiba; Swaminathan, Rajendiran
2016-01-01
Quality monitoring in a histopathology unit is categorized into three phases, pre-analytical, analytical and post-analytical, to cover various steps in the entire test cycle. A review of the literature on quality evaluation studies pertaining to histopathology revealed that earlier reports were mainly focused on analytical aspects, with limited studies on assessment of the pre-analytical phase. The pre-analytical phase encompasses several processing steps and handling of the specimen/sample by multiple individuals, thus allowing enough scope for errors. Due to its critical nature and the limited studies in the past to assess quality in the pre-analytical phase, it deserves more attention. This study was undertaken to analyse and assess the quality parameters of the pre-analytical phase in a histopathology laboratory. This was a retrospective study of pre-analytical parameters in the histopathology laboratory of a tertiary care centre, covering 18,626 tissue specimens received in 34 months. Registers and records were checked for efficiency and errors in the pre-analytical quality variables: specimen identification, specimen in appropriate fixatives, lost specimens, daily internal quality control performance on staining, performance in an inter-laboratory quality assessment program (External Quality Assurance Program, EQAS) and evaluation of internal non-conformities (NC) for other errors. The study revealed incorrect specimen labelling in 0.04%, 0.01% and 0.01% of specimens in 2007, 2008 and 2009 respectively. About 0.04%, 0.07% and 0.18% of specimens were not sent in fixatives in 2007, 2008 and 2009 respectively. There was no incidence of a lost specimen. A total of 113 non-conformities were identified, of which 92.9% belonged to the pre-analytical phase. The predominant NC (any deviation from the normal standard which may generate an error and compromise quality standards) identified was wrong labelling of slides. Performance in EQAS for the pre-analytical phase was satisfactory in 6 of 9 cycles.
A low incidence of errors in pre-analytical phase implies that a satisfactory level of quality standards was being practiced with still scope for improvement.
Channel characteristics and coordination in three-echelon dual-channel supply chain
NASA Astrophysics Data System (ADS)
Saha, Subrata
2016-02-01
We explore the impact of channel structure on the manufacturer, the distributor, the retailer and the entire supply chain by considering three different channel structures, each with and without coordination. These structures include a traditional retail channel and two manufacturer direct channels with and without consistent pricing. By comparing the performance of the manufacturer, the distributor, the retailer and the entire supply chain across the three supply chain structures, it is established analytically that, under some conditions, a dual channel can outperform a single retail channel; as a consequence, a coordination mechanism is developed that not only coordinates the dual channel but also outperforms the non-cooperative single retail channel. All the analytical results are further analysed through numerical examples.
NASA Astrophysics Data System (ADS)
Blanchard, J. P.; Tesche, F. M.; McConnell, B. W.
1987-09-01
An experiment to determine the interaction of an intense electromagnetic pulse (EMP), such as that produced by a nuclear detonation above the Earth's atmosphere, with a conducting line was performed in March 1986 at Kirtland Air Force Base near Albuquerque, New Mexico. The results of that experiment have been published without analysis. Following an introduction to the corona phenomenon, the reasons for interest in it, and a review of the experiment, this paper discusses five different analytic corona models that may describe corona formation on a conducting line subjected to EMP. The results predicted by these models are compared with measured data acquired during the experiment to determine the strengths and weaknesses of each model.
NASA Technical Reports Server (NTRS)
Keller, M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Inherent errors in using nonmetric Skylab photography and office-identified photo control made it necessary to perform numerous block adjustment solutions involving different combinations of control and weights. The final block adjustment was executed holding to 14 of the office-identified photo control points. Solution accuracy was evaluated by comparing the analytically computed ground positions of the withheld photo control points with their known ground positions and also by determining the standard errors of these points from variance values. A horizontal position RMS error of 15 meters was attained. The maximum observed error in position at a control point was 25 meters.
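The accuracy evaluation described above, comparing analytically computed ground positions of withheld control points against their known ground positions, amounts to a horizontal RMS position error. A minimal sketch with invented coordinates (the actual Skylab control data are not reproduced here):

```python
import math

def horizontal_rms_error(computed, known):
    """Horizontal position RMS error: square root of the mean squared
    planar distance between computed and known control-point positions."""
    sq = [(cx - kx) ** 2 + (cy - ky) ** 2
          for (cx, cy), (kx, ky) in zip(computed, known)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical withheld control points (metres, local grid)
known    = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
computed = [(9.0, 12.0), (100.0, -15.0), (-12.0, 109.0)]
rms = horizontal_rms_error(computed, known)  # 15.0 m for these values
```

Withholding control points from the block adjustment and scoring them this way gives an independent check, since points used to constrain the solution would otherwise understate the true error.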
saltPAD: A New Analytical Tool for Monitoring Salt Iodization in Low Resource Settings
Myers, Nicholas M.; Strydom, Emmerentia Elza; Sweet, James; Sweet, Christopher; Spohrer, Rebecca; Dhansay, Muhammad Ali; Lieberman, Marya
2016-01-01
We created a paper test card that measures a common iodizing agent, iodate, in salt. To test the analytical metrics, usability, and robustness of the paper test card when it is used in low resource settings, the South African Medical Research Council and GroundWork performed independent validation studies of the device. The accuracy and precision metrics from both studies were comparable. In the SAMRC study, more than 90% of the test results (n=1704) were correctly classified as corresponding to adequately or inadequately iodized salt. The cards are suitable for market and household surveys to determine whether salt is adequately iodized. Further development of the cards will improve their utility for monitoring salt iodization during production. PMID:29942380
Janiszewski, J; Schneider, P; Hoffmaster, K; Swyden, M; Wells, D; Fouda, H
1997-01-01
The development and application of membrane solid-phase extraction (SPE) in 96-well microtiter plate format is described for the automated analysis of drugs in biological fluids. The small bed volume of the membrane allows elution of the analyte in a very small solvent volume, permitting direct HPLC injection and negating the need for the time-consuming solvent evaporation step. A programmable liquid handling station (Quadra 96) was modified to automate all SPE steps. To avoid drying of the SPE bed and to enhance analytical precision, a novel protocol for performing the condition, load and wash steps in rapid succession was utilized. A block of 96 samples can now be extracted in 10 min, about 30 times faster than manual solvent extraction or single-cartridge SPE methods. This processing speed complements the high-throughput speed of contemporary high-performance liquid chromatography/mass spectrometry (HPLC/MS) analysis. The quantitative analysis of a test analyte (ziprasidone) in plasma demonstrates the utility and throughput of membrane SPE in combination with HPLC/MS. The results obtained with the current automated procedure compare favorably with those obtained using solvent and traditional solid phase extraction methods. The method has been used for the analysis of numerous drug prototypes in biological fluids to support drug discovery efforts.
Predicting Student Success using Analytics in Course Learning Management Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Thakur, Gautam; McNair, Wade
Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increasing the efficacy of advising functions, and improving course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward applying predictive analytics to academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA in the term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
Predicting student success using analytics in course learning management systems
NASA Astrophysics Data System (ADS)
Olama, Mohammed M.; Thakur, Gautam; McNair, Allen W.; Sukumar, Sreenivas R.
2014-05-01
Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increasing the efficacy of advising functions, and improving course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward applying predictive analytics to academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA in the term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
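The logistic-regression side of such a model can be sketched in a few lines of NumPy. This is a generic illustration under invented features and labels, not the ORNL Moodle pipeline; the feature layout (quiz average, scaled forum activity, GPA) is an assumption mirroring the feature types the abstract lists:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by batch gradient descent on the log-loss;
    the bias term is folded in as a leading column of ones."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted P(fail) per student
        w -= lr * Xb.T @ (p - y) / len(y)   # gradient step on log-loss
    return w

def p_fail(w, x):
    """Probability that a student with feature vector x fails the course."""
    return 1.0 / (1.0 + np.exp(-(w[0] + w[1:] @ x)))

# Hypothetical features: [quiz average, forum activity (scaled), GPA]
X = np.array([[0.9, 0.8, 3.8], [0.4, 0.1, 2.0],
              [0.8, 0.5, 3.2], [0.3, 0.2, 1.8],
              [0.7, 0.9, 3.5], [0.2, 0.0, 1.5]])
y = np.array([0, 1, 0, 1, 0, 1])           # 1 = failed the course
w = train_logistic(X, y)
risk = p_fail(w, np.array([0.35, 0.1, 1.9]))  # resembles the failing group
```

In practice the payoff is early intervention: the model is scored on partial-semester data, and students whose estimated failure probability crosses a threshold are flagged for advising.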
An analytical study for the design of advanced rotor airfoils
NASA Technical Reports Server (NTRS)
Kemp, L. D.
1973-01-01
A theoretical study has been conducted to design and evaluate two airfoils for helicopter rotors. The best basic shape, designed with a transonic hodograph design method, was modified to meet subsonic criteria. One airfoil had an additional constraint for low pitching-moment at the transonic design point. Airfoil characteristics were predicted. Results of a comparative analysis of helicopter performance indicate that the new airfoils will produce reduced rotor power requirements compared to the NACA 0012. The hodograph design method, written in CDC Algol, is listed and described.
Bassuoni, M M
2014-03-01
The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter results of an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution was developed using the Engineering Equation Solver software. Good agreement was found between the analytical solution and reliable experimental results, with maximum deviations of +6.63% and -5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters of the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.
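The moisture removal rate discussed in this abstract follows from a simple air-side mass balance, MRR = m_air * (w_in - w_out). This sketch shows only that balance, not the paper's full analytical model of the cross-flow dehumidifier; the operating point is invented:

```python
def moisture_removal_rate(m_air, w_in, w_out):
    """Air-side mass balance for a dehumidifier: dry-air mass flow (kg/s)
    times the drop in humidity ratio (kg water per kg dry air) gives the
    rate at which water is absorbed by the desiccant (kg/s)."""
    return m_air * (w_in - w_out)

# Hypothetical operating point: 0.5 kg/s of air dried from a humidity
# ratio of 20 g/kg down to 12 g/kg
mrr = moisture_removal_rate(0.5, 0.020, 0.012)  # 0.004 kg/s of water
```

The balance makes the reported trends plausible: a higher inlet humidity ratio w_in directly raises the removal rate, while hotter air or desiccant raises the desiccant surface vapor pressure and shrinks the achievable drop (w_in - w_out).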
Casella, Innocenzo G; Pierri, Marianna; Contursi, Michela
2006-02-24
The electrochemical behaviour of the polycrystalline platinum electrode towards the oxidation/reduction of short-chain unsaturated aliphatic molecules such as acrylamide and acrylic acid was investigated in acidic solutions. Analytes were separated by reversed-phase liquid chromatography and quantified using pulsed amperometric detection. A new two-step waveform is introduced for the detection of acrylamide and acrylic acid. Detection limits (LOD) of 20 nM (1.4 microg/kg) and 45 nM (3.2 microg/kg) were determined in water solutions containing acrylamide and acrylic acid, respectively. Compared to the classical three-step waveform, the proposed two-step waveform shows favourable analytical performance in terms of LOD, linear range, precision and improved long-term reproducibility. The proposed analytical method, combined with a clean-up procedure accomplished by Carrez clearing reagent and subsequent extraction with strong cation-exchange cartridges (SPE), was successfully used for the quantification of low concentrations of acrylamide in foodstuffs such as coffee and potato fries.
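The paired detection limits quoted above are internally consistent: a molar LOD converts to a mass-based LOD through the molar mass (about 71.08 g/mol for acrylamide and 72.06 g/mol for acrylic acid), with microg/L approximating microg/kg for samples of near-unit density. A quick check:

```python
def lod_mass(lod_molar_nM, molar_mass_g_mol):
    """Convert a molar LOD (nmol/L) to a mass LOD (microg/L), which
    approximates microg/kg when sample density is close to 1 kg/L."""
    return lod_molar_nM * 1e-9 * molar_mass_g_mol * 1e6  # microg/L

acrylamide   = lod_mass(20, 71.08)   # about 1.4 microg/kg
acrylic_acid = lod_mass(45, 72.06)   # about 3.2 microg/kg
```

Both values round to the figures reported in the abstract, confirming the unit conversion behind the stated detection limits.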
Thermodynamics of Newman-Unti-Tamburino charged spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, Robert; Department of Physics, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1; Stelea, Cristian
We discuss and compare at length the results of two methods used recently to describe the thermodynamics of Taub-Newman-Unti-Tamburino (NUT) solutions in a de Sitter background. In the first approach (C approach), one deals with an analytically continued version of the metric, while in the second approach (R approach), the discussion is carried out using the unmodified metric with Lorentzian signature. No analytic continuation is performed on the coordinates and/or the parameters that appear in the metric. We find that the results of both these approaches are completely equivalent modulo analytic continuation, and we provide the exact prescription that relates the results in both methods. The extension of these results to the AdS/flat cases aims to give a physical interpretation of the thermodynamics of NUT-charged spacetimes in the Lorentzian sector. We also briefly discuss the higher-dimensional spaces and note that, analogous with the absence of hyperbolic NUTs in AdS backgrounds, there are no spherical Taub-NUT-dS solutions.
In-flight Evaluation of Aerodynamic Predictions of an Air-launched Space Booster
NASA Technical Reports Server (NTRS)
Curry, Robert E.; Mendenhall, Michael R.; Moulton, Bryan
1992-01-01
Several analytical aerodynamic design tools that were applied to the Pegasus (registered trademark) air-launched space booster were evaluated using flight measurements. The study was limited to existing codes and was conducted with limited computational resources. The flight instrumentation was constrained to have minimal impact on the primary Pegasus missions. Where appropriate, the flight measurements were compared with computational data. Aerodynamic performance and trim data from the first two flights were correlated with predictions. Local measurements in the wing and wing-body interference region were correlated with analytical data. This complex flow region includes the effect of aerothermal heating magnification caused by the presence of a corner vortex and interaction of the wing leading edge shock and fuselage boundary layer. The operation of the first two missions indicates that the aerodynamic design approach for Pegasus was adequate, and data show that acceptable margins were available. Additionally, the correlations provide insight into the capabilities of these analytical tools for more complex vehicles in which the design margins may be more stringent.
In-flight evaluation of aerodynamic predictions of an air-launched space booster
NASA Technical Reports Server (NTRS)
Curry, Robert E.; Mendenhall, Michael R.; Moulton, Bryan
1993-01-01
Several analytical aerodynamic design tools that were applied to the Pegasus air-launched space booster were evaluated using flight measurements. The study was limited to existing codes and was conducted with limited computational resources. The flight instrumentation was constrained to have minimal impact on the primary Pegasus missions. Where appropriate, the flight measurements were compared with computational data. Aerodynamic performance and trim data from the first two flights were correlated with predictions. Local measurements in the wing and wing-body interference region were correlated with analytical data. This complex flow region includes the effect of aerothermal heating magnification caused by the presence of a corner vortex and interaction of the wing leading edge shock and fuselage boundary layer. The operation of the first two missions indicates that the aerodynamic design approach for Pegasus was adequate, and data show that acceptable margins were available. Additionally, the correlations provide insight into the capabilities of these analytical tools for more complex vehicles in which design margins may be more stringent.
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics has become increasingly complicated as the amount of data has increased. One technique used to enable analytics on large datasets is data sampling, in which a portion of the data is selected so as to preserve the data characteristics for use in analytics. In this paper, we introduce a novel data sampling technique rooted in formal concept analysis theory. This technique creates samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied to classifying regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as represented by classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
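The abstract does not spell out the algorithm, but sampling "reliant on the data distribution across a set of binary patterns" can be sketched as stratified sampling over the distinct binary attribute patterns, so that every pattern present in the data survives into the sample. The function and data below are illustrative, not the authors' implementation:

```python
import math
import random
from collections import defaultdict

def pattern_sample(rows, sample_frac, seed=0):
    """Stratified sample over binary patterns: group rows by their exact
    attribute pattern, then draw from each group in proportion to its
    size (at least 1 per group), so rare patterns are not lost."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row)].append(row)
    rng = random.Random(seed)
    sample = []
    for pattern, members in groups.items():
        k = max(1, math.ceil(len(members) * sample_frac))
        sample.extend(rng.sample(members, k))
    return sample

# A frequent, a moderate, and a rare pattern.
data = [(1, 0, 1)] * 50 + [(0, 1, 0)] * 10 + [(1, 1, 1)] * 2
subset = pattern_sample(data, sample_frac=0.1)
# The rare (1, 1, 1) pattern is still represented in the subset.
```

Plain uniform sampling at 10% would frequently miss the two-member pattern entirely; the per-pattern floor of one row is what keeps the sample "illustrative" of the full pattern distribution.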
Thermodynamic aspects of an LNG tank in fire and experimental validation
NASA Astrophysics Data System (ADS)
Hulsbosch-Dam, Corina; Atli-Veltin, Bilim; Kamperveen, Jerry; Velthuis, Han; Reinders, Johan; Spruijt, Mark; Vredeveldt, Lex
The mechanical behaviour of a Liquefied Natural Gas (LNG) tank and the thermodynamic behaviour of its containment under extreme heat load - for instance, when subjected to an external fire source, as might occur during an accident - are extremely important when addressing safety concerns. In a scenario where an external fire is present and a consequent release of LNG from the pressure relief valves (PRV) has occurred, escalation of the fire might occur, making it difficult for fire response teams to approach the tank or to secure the perimeter. If the duration of the tank's exposure to fire is known, the PRV opening time can be estimated from thermodynamic calculations. In this paper, such an accidental scenario is considered, and the relevant thermodynamic equations are derived and presented. Moreover, an experiment was performed with liquid nitrogen and the results were compared to the analytical ones. The analytical results match the experimental observations very well. The resulting analytical models are suitable for application to other cryogenic liquids.
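As an illustrative sketch of the kind of estimate the abstract describes - not the paper's actual model - a lumped energy balance under a constant net heat input gives the time for the liquid inventory to warm from its storage temperature to the saturation temperature at the PRV set pressure. All numbers and property values below are placeholders:

```python
def prv_opening_time_s(mass_kg, cp_j_per_kg_k,
                       t_storage_k, t_sat_prv_k, q_fire_w):
    """Lumped energy balance: t = m * cp * dT / Q, i.e. the time for the
    liquid to warm from its storage temperature to the saturation
    temperature at the PRV set pressure under constant heat input Q."""
    delta_t = t_sat_prv_k - t_storage_k
    energy_j = mass_kg * cp_j_per_kg_k * delta_t
    return energy_j / q_fire_w

# Illustrative numbers only (LNG ~ methane: cp ~ 3500 J/(kg*K),
# storage near 111 K, a hypothetical PRV set point near 120 K).
t_open = prv_opening_time_s(
    mass_kg=50_000, cp_j_per_kg_k=3500,
    t_storage_k=111.0, t_sat_prv_k=120.0, q_fire_w=5.0e6,
)
```

A real analysis of the kind reported would also account for vapour-space pressurisation, stratification and heat losses, which this single-node balance deliberately ignores.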