Sample records for factor analytic methods

  1. Methods for Estimating Uncertainty in Factor Analytic Solutions

    EPA Science Inventory

    The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...

  2. An analytically based numerical method for computing view factors in real urban environments

    NASA Astrophysics Data System (ADS)

    Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun

    2018-01-01

    A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For a realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of them provide only the sky-view factor at the ground level of a specific location or assume a simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology; it is derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The numerical method is validated against the analytical sky-view factor estimation for ideal street canyon geometries, showing errors of less than 0.2 %. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
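
    The defining double-area integral behind such view-factor computations can be checked numerically in a few lines. The sketch below is not the authors' algorithm; it is a minimal midpoint-quadrature evaluation of F_1->2 = (1/A1) * double-integral of cos(theta1)*cos(theta2)/(pi*S^2) dA2 dA1 for two parallel, directly opposed unit squares, with an arbitrary grid resolution.

    ```python
    import numpy as np

    def view_factor_parallel_squares(d=1.0, n=40):
        """Midpoint-quadrature estimate of F_{1->2} between two parallel,
        directly opposed unit squares separated by distance d."""
        h = 1.0 / n                                  # cell size
        c = (np.arange(n) + 0.5) * h                 # cell-centre coordinates
        x1, y1 = np.meshgrid(c, c, indexing="ij")    # emitting surface (z = 0)
        x2, y2 = np.meshgrid(c, c, indexing="ij")    # receiving surface (z = d)
        dx = x1.reshape(-1, 1) - x2.reshape(1, -1)
        dy = y1.reshape(-1, 1) - y2.reshape(1, -1)
        s2 = dx**2 + dy**2 + d**2                    # squared distance between cell pairs
        # both surface normals are parallel to z, so cos(theta1)*cos(theta2) = d^2 / s2
        kernel = d**2 / (np.pi * s2**2)
        dA = h * h
        return float(kernel.sum() * dA * dA)         # A1 = 1, so no extra division

    print(view_factor_parallel_squares())            # roughly 0.2 for unit squares one unit apart
    ```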

  3. Recently published analytical methods for determining alcohol in body materials : alcohol countermeasures literature review

    DOT National Transportation Integrated Search

    1974-10-01

    The author has brought the review of published analytical methods for determining alcohol in body materials up-to- date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...

  4. Determination of Slope Safety Factor with Analytical Solution and Searching Critical Slip Surface with Genetic-Traversal Random Method

    PubMed Central

    2014-01-01

    In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, a common way of searching for the critical slip surface is the Genetic Algorithm (GA), while the slope safety factor itself is calculated with Fellenius' method of slices. However, GA needs to be validated with more numerical tests, and Fellenius' method of slices is only an approximate method, as is the finite element method. This paper proposes a new way to determine the minimum slope safety factor: the safety factor is computed from an analytical solution, and the critical slip surface is searched with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' method of slices, and the Genetic-Traversal Random Method uses random picking to realize mutation. A computer program was developed to automate the Genetic-Traversal Random search. Comparison with other methods, such as the Slope/W software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, roughly half that of the other methods. However, the minimum safety factor obtained with the Genetic-Traversal Random Search Method is very close to the lower-bound solutions of the slope safety factor given by the Ansys software. PMID:24782679
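
    For orientation, the slice sums evaluated by Fellenius' (ordinary) method of slices, which the paper uses as its baseline, fit in a few lines. The sketch below assumes a dry slope (no pore pressure) and entirely hypothetical slice data; it is not the paper's analytical solution or its Genetic-Traversal Random search.

    ```python
    import numpy as np

    def fellenius_fs(weights, alphas_deg, base_lengths, c, phi_deg):
        """Ordinary (Fellenius) method of slices, no pore pressure:
        FS = sum(c*l + W*cos(a)*tan(phi)) / sum(W*sin(a))."""
        a = np.radians(np.asarray(alphas_deg, dtype=float))
        W = np.asarray(weights, dtype=float)
        l = np.asarray(base_lengths, dtype=float)
        resisting = c * l + W * np.cos(a) * np.tan(np.radians(phi_deg))
        driving = W * np.sin(a)
        return resisting.sum() / driving.sum()

    # hypothetical five-slice example: weights (kN/m), base inclinations (deg), base lengths (m)
    print(fellenius_fs(weights=[120, 260, 340, 300, 150],
                       alphas_deg=[-5, 8, 20, 33, 48],
                       base_lengths=[2.1, 2.0, 2.2, 2.5, 3.0],
                       c=15.0, phi_deg=25.0))
    ```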

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Safari, L., E-mail: laleh.safari@ist.ac.at; Department of Physics, University of Oulu, Box 3000, FI-90014 Oulu; Santos, J. P.

    Atomic form factors are widely used for the characterization of targets and specimens, from crystallography to biology. By using recent mathematical results, here we derive an analytical expression for the atomic form factor within the independent particle model constructed from nonrelativistic screened hydrogenic wave functions. The range of validity of this analytical expression is checked by comparing the analytically obtained form factors with the ones obtained within the Hartree-Fock method. As an example, we apply our analytical expression for the atomic form factor to evaluate the differential cross section for Rayleigh scattering off neutral atoms.
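
    As a much-reduced illustration of a screened hydrogenic form factor, the closed-form result for a single 1s electron with effective charge Z_eff is f(q) = [1 + (q*a0/(2*Z_eff))^2]^(-2). The snippet below evaluates only that textbook expression; it is not the full independent-particle-model formula derived in the record above, and Z_eff is a user-chosen screening parameter.

    ```python
    import numpy as np

    A0 = 0.529177210903  # Bohr radius in angstroms

    def f_1s(q, z_eff=1.0):
        """Form factor of one screened hydrogenic 1s electron.
        q is the momentum transfer in inverse angstroms; f(0) = 1."""
        x = q * A0 / (2.0 * z_eff)
        return (1.0 + x**2) ** -2

    q = np.linspace(0.0, 10.0, 6)           # 1/angstrom
    print(np.round(f_1s(q, z_eff=1.0), 4))  # decays from 1 toward 0 as q grows
    ```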

  6. A GRAPHICAL DIAGNOSTIC METHOD FOR ASSESSING THE ROTATION IN FACTOR ANALYTICAL MODELS OF ATMOSPHERIC POLLUTION. (R831078)

    EPA Science Inventory

    Factor analytic tools such as principal component analysis (PCA) and positive matrix factorization (PMF) suffer from rotational ambiguity in the results: different solutions (factors) provide equally good fits to the measured data. The PMF model imposes non-negativity of both...

  7. A Structural and Correlational Analysis of Two Common Measures of Personal Epistemology

    ERIC Educational Resources Information Center

    Laster, Bonnie Bost

    2010-01-01

    Scope and Method of Study: The current inquiry is a factor analytic study which utilizes first and second order factor analytic methods to examine the internal structures of two measurements of personal epistemological beliefs: the Schommer Epistemological Questionnaire (SEQ) and Epistemic Belief Inventory (EBI). The study also examines the…

  8. Overview of mycotoxin methods, present status and future needs.

    PubMed

    Gilbert, J

    1999-01-01

    This article reviews current requirements for the analysis for mycotoxins in foods and identifies legislative as well as other factors that are driving development and validation of new methods. New regulatory limits for mycotoxins and analytical quality assurance requirements for laboratories to only use validated methods are seen as major factors driving developments. Three major classes of methods are identified which serve different purposes and can be categorized as screening, official and research. In each case the present status and future needs are assessed. In addition to an overview of trends in analytical methods, some other areas of analytical quality assurance such as participation in proficiency testing and reference materials are identified.

  9. Resilience: A Meta-Analytic Approach

    ERIC Educational Resources Information Center

    Lee, Ji Hee; Nam, Suk Kyung; Kim, A-Reum; Kim, Boram; Lee, Min Young; Lee, Sang Min

    2013-01-01

    This study investigated the relationship between psychological resilience and its relevant variables by using a meta-analytic method. The results indicated that the largest effect on resilience was found to stem from the protective factors, a medium effect from risk factors, and the smallest effect from demographic factors. (Contains 4 tables.)

  10. Comparisons of Exploratory and Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Daniel, Larry G.

    Historically, most researchers conducting factor analysis have used exploratory methods. However, more recently, confirmatory factor analytic methods have been developed that can directly test theory either during factor rotation using "best fit" rotation methods or during factor extraction, as with the LISREL computer programs developed…

  11. Sensitive analytical method for simultaneous analysis of some vasoconstrictors with highly overlapped analytical signals

    NASA Astrophysics Data System (ADS)

    Nikolić, G. S.; Žerajić, S.; Cakić, M.

    2011-10-01

    Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when the analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenyleprine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. To select the optimal number of factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recoveries for the vasoconstrictors were 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
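
    A partial-least-squares calibration of this kind can be reproduced in outline with scikit-learn. The sketch below substitutes synthetic, heavily overlapped Gaussian bands for the real vasoconstrictor spectra, and the choice of two latent factors is an assumption, not a value taken from the paper.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(250, 290, 81)            # nm, the region cited above

    def band(center, width):                           # Gaussian absorption band
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    pure = np.vstack([band(266, 8), band(272, 8)])     # two heavily overlapped pure spectra
    C_train = rng.uniform(0.2, 1.0, size=(25, 2))      # calibration concentrations (arbitrary units)
    A_train = C_train @ pure + rng.normal(0, 0.002, (25, wavelengths.size))  # mixture spectra

    pls = PLSRegression(n_components=2).fit(A_train, C_train)

    C_test = np.array([[0.5, 0.8]])
    A_test = C_test @ pure + rng.normal(0, 0.002, (1, wavelengths.size))
    print(pls.predict(A_test))                         # should recover roughly [0.5, 0.8]
    ```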

  12. Analytical method development of nifedipine and its degradants binary mixture using high performance liquid chromatography through a quality by design approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.

    2018-03-01

    Nifedipine (NIF) is a photo-labile drug that degrades easily when exposed to sunlight. This research aimed to develop an analytical method using high-performance liquid chromatography, implementing a quality by design approach to obtain an effective, efficient, and validated analytical method for NIF and its degradants. A 2² full factorial design with a curvature (center point) was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors examined for their effects on the system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Alteration of MPC significantly affected retention time. Furthermore, an increase in FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical condition for NIF and its degradants was validated over the range 1 – 16 µg/mL and showed good linearity, precision, and accuracy, and was efficient, with an analysis time within 10 min.
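
    The 2² full factorial design with a centre point can be laid out and fitted by ordinary least squares in a few lines. All coded levels, simulated responses, and resulting effect sizes below are hypothetical; the sketch only shows how the MPC and FR main effects, their interaction, and the curvature check would be estimated.

    ```python
    import numpy as np

    # coded levels for mobile phase composition (x1) and flow rate (x2): 2^2 runs plus a centre point
    design = np.array([[-1, -1], [+1, -1], [-1, +1], [+1, +1], [0, 0]], dtype=float)

    # hypothetical responses, e.g. resolution between NIF and a degradant at each run
    y = np.array([2.1, 2.9, 1.8, 3.4, 2.6])

    x1, x2 = design.T
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])   # intercept, main effects, interaction
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b12 = coef
    print(f"main effect MPC = {2*b1:.2f}, main effect FR = {2*b2:.2f}, interaction = {2*b12:.2f}")

    # curvature check: centre-point response versus the mean of the factorial points
    print(f"curvature = {y[-1] - y[:4].mean():.2f}")
    ```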

  13. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Analytical and numerical methods evaluating the stress-intensity factors for three-dimensional cracks in solids are presented, with reference to fatigue failure in aerospace structures. The exact solutions for embedded elliptical and circular cracks in infinite solids, and the approximate methods, including the finite-element, the boundary-integral equation, the line-spring models, and the mixed methods are discussed. Among the mixed methods, the superposition of analytical and finite element methods, the stress-difference, the discretization-error, the alternating, and the finite element-alternating methods are reviewed. Comparison of the stress-intensity factor solutions for some three-dimensional crack configurations showed good agreement. Thus, the choice of a particular method in evaluating the stress-intensity factor is limited only to the availability of resources and computer programs.
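
    One of the exact solutions referred to above has a particularly compact form: for a penny-shaped (circular) crack of radius a embedded in an infinite solid under remote tension sigma, the mode-I stress-intensity factor is K_I = 2*sigma*sqrt(a/pi). A quick numerical check with arbitrary illustrative values:

    ```python
    import math

    def k1_penny_crack(sigma_mpa, a_m):
        """Mode-I stress-intensity factor of an embedded circular (penny-shaped)
        crack of radius a under remote tension sigma: K_I = 2*sigma*sqrt(a/pi)."""
        return 2.0 * sigma_mpa * math.sqrt(a_m / math.pi)  # units: MPa*sqrt(m)

    print(k1_penny_crack(sigma_mpa=100.0, a_m=0.002))      # about 5.0 MPa*sqrt(m)
    ```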

  14. Analytic strategies to evaluate the association of time-varying exposures to HIV-related outcomes: Alcohol consumption as an example.

    PubMed

    Cook, Robert L; Kelso, Natalie E; Brumback, Babette A; Chen, Xinguang

    2016-01-01

    As persons with HIV are living longer, there is a growing need to investigate factors associated with chronic disease, rate of disease progression and survivorship. Many risk factors for this high-risk population change over time, such as participation in treatment, alcohol consumption and drug abuse. Longitudinal datasets are increasingly available, particularly clinical data that contain multiple observations of health exposures and outcomes over time. Several analytic options are available for assessment of longitudinal data; however, it can be challenging to choose the appropriate analytic method for specific combinations of research questions and types of data. The purpose of this review is to help researchers choose the appropriate methods to analyze longitudinal data, using alcohol consumption as an example of a time-varying exposure variable. When selecting the optimal analytic method, one must consider aspects of exposure (e.g. timing, pattern, and amount) and outcome (fixed or time-varying), while also addressing minimizing bias. In this article, we will describe several analytic approaches for longitudinal data, including developmental trajectory analysis, generalized estimating equations, and mixed effect models. For each analytic strategy, we describe appropriate situations to use the method and provide an example that demonstrates the use of the method. Clinical data related to alcohol consumption and HIV are used to illustrate these methods.
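
    Two of the regression strategies named above, generalized estimating equations and mixed-effects models, are available in statsmodels. The sketch below runs both on simulated longitudinal data with a time-varying alcohol indicator and a continuous outcome; the variable names ("cd4"), effect sizes, and data-generating process are invented for illustration, not taken from the clinical data described in the review.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_subj, n_visits = 200, 4
    df = pd.DataFrame({
        "id": np.repeat(np.arange(n_subj), n_visits),
        "visit": np.tile(np.arange(n_visits), n_subj),
        "alcohol": rng.binomial(1, 0.4, n_subj * n_visits),   # hypothetical time-varying exposure
    })
    subject_effect = rng.normal(0, 30, n_subj)                 # between-subject heterogeneity
    df["cd4"] = (500 - 40 * df["alcohol"] + 5 * df["visit"]
                 + subject_effect[df["id"]] + rng.normal(0, 25, len(df)))

    # GEE: population-averaged effect with an exchangeable working correlation
    gee = sm.GEE.from_formula("cd4 ~ alcohol + visit", groups="id", data=df,
                              cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.params)

    # linear mixed-effects model with a random intercept per subject
    mixed = smf.mixedlm("cd4 ~ alcohol + visit", df, groups=df["id"]).fit()
    print(mixed.params)
    ```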

  15. Psychometric Structure of a Comprehensive Objective Structured Clinical Examination: A Factor Analytic Approach

    ERIC Educational Resources Information Center

    Volkan, Kevin; Simon, Steven R.; Baker, Harley; Todres, I. David

    2004-01-01

    Problem Statement and Background: While the psychometric properties of Objective Structured Clinical Examinations (OSCEs) have been studied, their latent structures have not been well characterized. This study examines a factor analytic model of a comprehensive OSCE and addresses implications for measurement of clinical performance. Methods: An…

  16. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
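
    The "analytical method" referred to here is the standard linear Bayesian (optimal estimation) update, which has a closed-form solution when the forward model is a matrix K. A minimal numpy sketch with small, made-up dimensions:

    ```python
    import numpy as np

    def bayesian_inversion(y, K, x_a, S_a, S_o):
        """Closed-form posterior mean and covariance for y = K x + error:
        x_hat = x_a + S_a K^T (K S_a K^T + S_o)^-1 (y - K x_a)."""
        G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_o)   # gain matrix
        x_hat = x_a + G @ (y - K @ x_a)
        S_hat = S_a - G @ K @ S_a                            # posterior covariance
        return x_hat, S_hat

    rng = np.random.default_rng(2)
    n_obs, n_src = 50, 4                       # e.g. CO columns vs. regional source scalings
    K = rng.uniform(0.0, 1.0, (n_obs, n_src))  # hypothetical Jacobian of the transport model
    x_true = np.array([1.4, 0.8, 1.1, 1.9])    # "true" correction factors
    y = K @ x_true + rng.normal(0, 0.05, n_obs)
    x_a = np.ones(n_src)                       # a priori: no correction
    x_hat, _ = bayesian_inversion(y, K, x_a, 0.5**2 * np.eye(n_src), 0.05**2 * np.eye(n_obs))
    print(np.round(x_hat, 2))                  # should approach x_true
    ```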

  17. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature, that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCR were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. Model-based method is the method of choice if all factors that affect UCR are to be accounted for.
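
    The difference between the two correction strategies can be made concrete with a toy regression. In the sketch below the data, variable names, and effect sizes are simulated: the "ratio-based" estimate divides by urinary creatinine (UCR), while the "model-based" estimate enters log UCR as an independent variable alongside the demographic factor of interest.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 500
    sex = rng.integers(0, 2, n)                          # 0 = female, 1 = male (hypothetical)
    ucr = np.exp(rng.normal(4.5 + 0.3 * sex, 0.4))       # UCR depends on sex, not only on hydration
    analyte = np.exp(rng.normal(1.0, 0.5, n)) * (ucr / 100) ** 0.6   # no true sex effect
    df = pd.DataFrame({"sex": sex, "ucr": ucr, "analyte": analyte})

    # ratio-based correction: compare creatinine-corrected concentrations by sex
    ratio_fit = smf.ols("np.log(analyte / ucr) ~ sex", data=df).fit()

    # model-based correction: UCR enters the model as an independent variable
    model_fit = smf.ols("np.log(analyte) ~ np.log(ucr) + sex", data=df).fit()

    print("ratio-based sex effect:", round(ratio_fit.params["sex"], 3),
          "p =", round(ratio_fit.pvalues["sex"], 4))
    print("model-based sex effect:", round(model_fit.params["sex"], 3),
          "p =", round(model_fit.pvalues["sex"], 4))
    ```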

  18. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, N.B.; Walker, J.F.

    1990-01-01

    The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  19. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)

    EPA Science Inventory

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...
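
    For context, two of the classic approaches that NUMFACT is compared against, the Kaiser eigenvalue-greater-than-one rule and the cumulative explained variance, take only a few lines to compute. The data here are randomly generated and purely illustrative; NUMFACT itself is not implemented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    # synthetic data: 3 latent factors mixed into 12 observed variables, plus noise
    scores = rng.normal(size=(300, 3))
    loadings = rng.normal(size=(3, 12))
    X = scores @ loadings + rng.normal(scale=0.8, size=(300, 12))

    eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    print("eigenvalues:", np.round(eigvals, 2))
    print("Kaiser rule (eigenvalue > 1):", int((eigvals > 1).sum()), "factors")
    print("cumulative variance explained:", np.round(np.cumsum(eigvals) / eigvals.sum(), 2))
    ```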

  20. Beyond Engagement Analytics: Which Online Mixed-Data Factors Predict Student Learning Outcomes?

    ERIC Educational Resources Information Center

    Strang, Kenneth David

    2017-01-01

    This mixed-method study focuses on online learning analytics, a research area of importance. Several important student attributes and their online activities are examined to identify what seems to work best to predict higher grades. The purpose is to explore the relationships between student grade and key learning engagement factors using a large…

  1. Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students.

    PubMed

    Yune, So Jung; Lee, Sang Yeoup; Im, Sun Ju; Kam, Bee Sung; Baek, Sun Yong

    2018-06-05

    Task-specific checklists, holistic rubrics, and analytic rubrics are often used for performance assessments. We examined what factors evaluators consider important in holistic scoring of clinical performance assessment, and compared the usefulness of applying holistic and analytic rubrics, respectively, as well as analytic rubrics in addition to task-specific checklists based on traditional standards. We compared the usefulness of a holistic rubric versus an analytic rubric in effectively measuring the clinical skill performances of 126 third-year medical students who participated in a clinical performance assessment conducted by Pusan National University School of Medicine. We conducted a questionnaire survey of 37 evaluators who used all three evaluation methods (holistic rubric, analytic rubric, and task-specific checklist) for each student. The relationship between the scores on the three evaluation methods was analyzed using Pearson's correlation. Inter-rater agreement was analyzed by Kappa index. The effect of holistic and analytic rubric scores on the task-specific checklist score was analyzed using multiple regression analysis. Evaluators perceived accuracy and proficiency to be major factors in objective structured clinical examinations evaluation, and history taking and physical examination to be major factors in clinical performance examinations evaluation. Holistic rubric scores were highly related to the scores of the task-specific checklist and analytic rubric. Relatively low agreement was found in clinical performance examinations compared to objective structured clinical examinations. Meanwhile, the holistic and analytic rubric scores explained 59.1% of the task-specific checklist score in objective structured clinical examinations and 51.6% in clinical performance examinations. The results show the usefulness of holistic and analytic rubrics in clinical performance assessment, which can be used in conjunction with task-specific checklists for more efficient evaluation.

  2. Quantitative evaluation of the matrix effect in bioanalytical methods based on LC-MS: A comparison of two approaches.

    PubMed

    Rudzki, Piotr J; Gniazdowska, Elżbieta; Buś-Kwaśnik, Katarzyna

    2018-06-05

    Liquid chromatography coupled to mass spectrometry (LC-MS) is a powerful tool for studying pharmacokinetics and toxicokinetics. Reliable bioanalysis requires the characterization of the matrix effect, i.e. influence of the endogenous or exogenous compounds on the analyte signal intensity. We have compared two methods for the quantitation of matrix effect. The CVs(%) of internal standard normalized matrix factors recommended by the European Medicines Agency were evaluated against internal standard normalized relative matrix effects derived from Matuszewski et al. (2003). Both methods use post-extraction spiked samples, but matrix factors require also neat solutions. We have tested both approaches using analytes of diverse chemical structures. The study did not reveal relevant differences in the results obtained with both calculation methods. After normalization with the internal standard, the CV(%) of the matrix factor was on average 0.5% higher than the corresponding relative matrix effect. The method adopted by the European Medicines Agency seems to be slightly more conservative in the analyzed datasets. Nine analytes of different structures enabled a general overview of the problem, still, further studies are encouraged to confirm our observations. Copyright © 2018 Elsevier B.V. All rights reserved.
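
    The EMA-style calculation referred to above reduces to a few array operations once the peak areas are available. The numbers below are invented; the sketch only shows how an internal-standard-normalised matrix factor and its CV(%) across matrix lots would be computed.

    ```python
    import numpy as np

    # hypothetical peak areas in six matrix lots (post-extraction spiked) and in neat solution
    analyte_matrix = np.array([9800, 10150, 9900, 10320, 9750, 10010], dtype=float)
    is_matrix      = np.array([5050,  5120, 4980,  5200, 4930,  5070], dtype=float)
    analyte_neat   = 10500.0
    is_neat        = 5000.0

    mf_analyte = analyte_matrix / analyte_neat        # matrix factor of the analyte
    mf_is      = is_matrix / is_neat                  # matrix factor of the internal standard
    is_normalised_mf = mf_analyte / mf_is             # IS-normalised matrix factor per lot

    cv_percent = 100 * is_normalised_mf.std(ddof=1) / is_normalised_mf.mean()
    print(np.round(is_normalised_mf, 3), f"CV = {cv_percent:.1f}%")
    ```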

  3. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  4. 40 CFR 63.786 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... level of sample dilution must be factored in. (2) Repeatability. First, at the 0.1-5 percent analyte... percent analyte range the results would be suspect if duplicates vary by more than 5 percent relative and...) Reproducibility. First, at the 0.1-5 percent analyte range the results would be suspect if lab to lab variation...

  5. 40 CFR 63.786 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... level of sample dilution must be factored in. (2) Repeatability. First, at the 0.1-5 percent analyte... percent analyte range the results would be suspect if duplicates vary by more than 5 percent relative and...) Reproducibility. First, at the 0.1-5 percent analyte range the results would be suspect if lab to lab variation...

  6. A Method for Derivation of Areas for Assessment in Marital Relationships.

    ERIC Educational Resources Information Center

    Broderick, Joan E.

    1981-01-01

    Expands upon factor-analytic and rational methods and introduces a third method for determining content areas to be assessed in marital relationships. Definitions of a "good marriage" were content analyzed, and a number of areas were added. Demographic subgroup differences were found not to be influential factors. (Author)

  7. COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)

    EPA Science Inventory

    Abstract

    This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...

  8. A multiple hollow fibre liquid-phase microextraction method for the determination of halogenated solvent residues in olive oil.

    PubMed

    Manso, J; García-Barrera, T; Gómez-Ariza, J L; González, A G

    2014-02-01

    The present paper describes a method based on the extraction of analytes by multiple hollow fibre liquid-phase microextraction and detection by ion-trap mass spectrometry and electron capture detectors after gas chromatographic separation. The limits of detection are in the range of 0.13-0.67 μg kg⁻¹, five orders of magnitude lower than those reached with the European Commission Official method of analysis, with three orders of magnitude of linear range (from the quantification limits to 400 μg kg⁻¹ for all the analytes) and recoveries in fortified olive oils in the range of 78-104 %. The main advantages of the analytical method are the absence of sample carryover (due to the disposable nature of the membranes), high enrichment factors in the range of 79-488, high throughput and low cost. The repeatability of the analytical method ranged from 8 to 15 % for all the analytes, showing a good performance.

  9. [Quality Management and Quality Specifications of Laboratory Tests in Clinical Studies--Challenges in Pre-Analytical Processes in Clinical Laboratories].

    PubMed

    Ishibashi, Midori

    2015-01-01

    The cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Based on this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and drug efficacy. To assure the quality of laboratory tests, providing high-quality laboratory tests is mandatory. For providing adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference interval are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.

  10. Methods for Estimating Uncertainty in PMF Solutions: Examples with Ambient Air and Water Quality Data and Guidance on Reporting PMF Results

    EPA Science Inventory

    The new version of EPA’s positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP)...
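
    The classical bootstrap (BS) idea can be sketched generically: resample the rows of the data matrix, refit a non-negative factorization, map each bootstrap factor to the base-run factor it correlates with best, and summarise the spread of the mapped profiles. The sketch below uses scikit-learn's NMF on random data as a stand-in for the ME-2/PMF solver, so it illustrates only the resampling logic, not EPA PMF itself.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(5)
    n_samples, n_species, k = 200, 10, 3
    profiles_true = rng.dirichlet(np.ones(n_species), k)       # "true" factor profiles
    contributions = rng.gamma(2.0, 1.0, (n_samples, k))
    data = contributions @ profiles_true + rng.normal(0, 0.01, (n_samples, n_species)).clip(min=0)

    base = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0).fit(data)
    base_profiles = base.components_

    boot_profiles = []
    for b in range(50):                                        # 50 bootstrap replicates
        rows = rng.integers(0, n_samples, n_samples)           # resample samples with replacement
        fit = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0).fit(data[rows])
        # map each base-run factor to the best-correlated bootstrap factor
        corr = np.corrcoef(np.vstack([base_profiles, fit.components_]))[:k, k:]
        boot_profiles.append(fit.components_[corr.argmax(axis=1)])

    spread = np.std(boot_profiles, axis=0)                     # per-species variability of each profile
    print(np.round(spread, 3))
    ```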

  11. In-house validation of a liquid chromatography-tandem mass spectrometry method for the determination of selective androgen receptor modulators (SARMS) in bovine urine.

    PubMed

    Schmidt, Kathrin S; Mankertz, Joachim

    2018-06-01

    A sensitive and robust LC-MS/MS method allowing the rapid screening and confirmation of selective androgen receptor modulators in bovine urine was developed and successfully validated according to Commission Decision 2002/657/EC, chapter 3.1.3 'alternative validation', by applying a matrix-comprehensive in-house validation concept. The confirmation of the analytes in the validation samples was achieved both on the basis of the MRM ion ratios as laid down in Commission Decision 2002/657/EC and by comparison of their enhanced product ion (EPI) spectra with a reference mass spectral library by making use of the QTRAP technology. Here, in addition to the MRM survey scan, EPI spectra were generated in a data-dependent way according to an information-dependent acquisition criterion. Moreover, stability studies of the analytes in solution and in matrix according to an isochronous approach proved the stability of the analytes in solution and in matrix for at least the duration of the validation study. To identify factors that have a significant influence on the test method in routine analysis, a factorial effect analysis was performed. To this end, factors considered to be relevant for the method in routine analysis (e.g. operator, storage duration of the extracts before measurement, different cartridge lots and different hydrolysis conditions) were systematically varied on two levels. The examination of the extent to which these factors influence the measurement results of the individual analytes showed that none of the validation factors exerts a significant influence on the measurement results.

  12. 37Cl/35Cl isotope ratio analysis in perchlorate by ion chromatography/multi collector-ICPMS: Analytical performance and implication for biodegradation studies.

    PubMed

    Zakon, Yevgeni; Ronen, Zeev; Halicz, Ludwik; Gelman, Faina

    2017-10-01

    In the present study we propose a new analytical method for 37Cl/35Cl analysis in perchlorate by Ion Chromatography (IC) coupled to Multicollector Inductively Coupled Plasma Mass Spectrometry (MC-ICPMS). The accuracy of the analytical method was validated by analysis of international perchlorate standard materials USGS-37 and USGS-38; analytical precision better than ±0.4‰ was achieved. 37Cl/35Cl isotope ratio analysis in perchlorate during laboratory biodegradation experiment with microbial cultures enriched from the contaminated soil in Israel resulted in isotope enrichment factor ε37Cl = -13.3 ± 1‰, which falls in the range reported previously for perchlorate biodegradation by pure microbial cultures. The proposed analytical method may significantly simplify the procedure for isotope analysis of perchlorate which is currently applied in environmental studies. Copyright © 2017. Published by Elsevier Ltd.
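
    Enrichment factors of this kind are conventionally obtained from a Rayleigh plot, i.e. the slope of ln(R_t/R_0) against ln(C_t/C_0). The time course below is invented purely to show the arithmetic; it is not the USGS reference-material or Israeli enrichment-culture data.

    ```python
    import numpy as np

    # hypothetical time course: remaining perchlorate fraction and delta-37Cl (per mil)
    f = np.array([1.0, 0.75, 0.50, 0.30, 0.15])          # C_t / C_0
    delta = np.array([0.0, 3.9, 9.3, 16.2, 25.5])        # relative to the initial composition

    ln_f = np.log(f)
    ln_R = np.log(delta / 1000.0 + 1.0)                  # ln(R_t / R_0), since delta_0 = 0

    slope, intercept = np.polyfit(ln_f, ln_R, 1)
    print(f"epsilon_37Cl = {1000 * slope:.1f} per mil")  # Rayleigh enrichment factor
    ```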

  13. Centrifugal ultrafiltration of human serum for improving immunoglobulin A quantification using attenuated total reflectance infrared spectroscopy.

    PubMed

    Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Bryanton, Janet; Bigsby, Kathryn; Shaw, R Anthony

    2018-02-20

    Attenuated total reflectance infrared (ATR-IR) spectroscopy is a simple, rapid and cost-effective method for the analysis of serum. However, the complex nature of serum remains a limiting factor to the reliability of this method. We investigated the benefits of coupling centrifugal ultrafiltration with ATR-IR spectroscopy for quantification of human serum IgA concentration. Human serum samples (n = 196) were analyzed for IgA using an immunoturbidimetric assay. ATR-IR spectra were acquired for whole serum samples and for the retentate (residue) reconstituted with saline following 300 kDa centrifugal ultrafiltration. IR-based analytical methods were developed for each of the two spectroscopic datasets, and the accuracy of the two methods was compared. Analytical methods were based upon partial least squares regression (PLSR) calibration models - one with 5 PLS factors (for whole serum) and the second with 9 PLS factors (for the reconstituted retentate). Comparison of the two sets of IR-based analytical results to reference IgA values revealed improvements in the Pearson correlation coefficient (from 0.66 to 0.76), and the root mean squared error of prediction in IR-based IgA concentrations (from 102 to 79 mg/dL) for the ultrafiltration retentate-based method as compared to the method built upon whole serum spectra. Depleting human serum low molecular weight proteins using a 300 kDa centrifugal filter thus enhances the accuracy of IgA quantification by ATR-IR spectroscopy. Further evaluation and optimization of this general approach may ultimately lead to routine analysis of a range of high molecular-weight analytical targets that are otherwise unsuitable for IR-based analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Two-factor theory – at the intersection of health care management and patient satisfaction

    PubMed Central

    Bohm, Josef

    2012-01-01

    Using data obtained from the 2004 Joint Canadian/United States Survey of Health, an analytic model using principles derived from Herzberg’s motivational hygiene theory was developed for evaluating patient satisfaction with health care. The analysis sought to determine whether survey variables associated with consumer satisfaction act as Herzberg factors and contribute to survey participants’ self-reported levels of health care satisfaction. To validate the technique, data from the survey were analyzed using logistic regression methods and then compared with results obtained from the two-factor model. The findings indicate a high degree of correlation between the two methods. The two-factor analytical methodology offers advantages due to its ability to identify whether a factor assumes a motivational or hygienic role and assesses the influence of a factor within select populations. Its ease of use makes this methodology well suited for assessment of multidimensional variables. PMID:23055755

  16. A Dimensional Analysis of College Student Satisfaction.

    ERIC Educational Resources Information Center

    Betz, Ellen L.; And Others

    Further research on the College Student Satisfaction Questionnaire (CSSQ) is reported herein (see TM 000 049). Item responses of two groups of university students were separately analyzed by three different factor analytic methods. Three factors consistently appeared across groups and methods: Compensation, Social Life, and Working Conditions. Two…

  17. Development of magnetic dispersive solid phase extraction using toner powder as an efficient and economic sorbent in combination with dispersive liquid-liquid microextraction for extraction of some widely used pesticides in fruit juices.

    PubMed

    Farajzadeh, Mir Ali; Mohebbi, Ali

    2018-01-12

    In this study, for the first time, a magnetic dispersive solid phase extraction method using an easily accessible, cheap, and efficient magnetic sorbent (toner powder) combined with dispersive liquid-liquid microextraction has been developed for the extraction and preconcentration of some widely used pesticides (diazinon, ametryn, chlorpyrifos, penconazole, oxadiazon, diniconazole, and fenazaquin) from fruit juices prior to their determination by gas chromatography-flame ionization detection. In this method, the magnetic sorbent is mixed with an appropriate dispersive solvent (methanol-water, 80:20, v/v) and then injected into an aqueous sample containing the analytes. By this action the analytes are rapidly adsorbed on the sorbent by binding to its carbon. The sorbent particles are isolated from the aqueous solution in the presence of an external magnetic field. Then an appropriate organic solvent (acetone) is used to desorb the analytes from the sorbent. Finally, the obtained supernatant is mixed with an extraction solvent and injected into deionized water in order to achieve high enrichment factors and sensitivity. Several significant factors affecting the performance of the introduced method were investigated and optimized. Under the optimum experimental conditions, the extraction recoveries of the proposed method for the selected analytes ranged from 49-75%. The relative standard deviations were ≤7% for intra- (n = 6) and inter-day (n = 4) precisions at a concentration of 10 μg L -1 of each analyte. The limits of detection were in the range of 0.15-0.36 μg L -1 . Finally, the applicability of the proposed method was evaluated by analysis of the selected analytes in some fruit juices. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. An analytical model for pressure of volume fractured tight oil reservoir with horizontal well

    NASA Astrophysics Data System (ADS)

    Feng, Qihong; Dou, Kaiwen; Zhang, Xianmin; Xing, Xiangdong; Xia, Tian

    2017-05-01

    The properties of tight oil reservoirs are poorer than those of conventional reservoirs: porosity and permeability are low and diffusion is complex, so ordinary depletion methods are of little use. Volume fracturing goes beyond the conventional EOR mechanism by enlarging the contact area between fractures and the reservoir, thereby improving the production of each individual well. In order to forecast production effectively, we use the traditional dual-porosity model, build an analytical model for the production of a volume-fractured tight oil reservoir with a horizontal well, and obtain the analytical solution in the Laplace domain. We then construct the log-log plot of dimensionless pressure versus time using Stehfest numerical inversion. After that, we discuss the factors influencing pressure; several factors, such as cross flow, skin factor, and threshold pressure gradient, are analyzed. This model provides a useful method for tight oil production forecasting and has guiding significance for production capacity prediction and dynamic analysis.
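
    Laplace-domain pressure solutions such as this one are usually brought back to the time domain with the Gaver-Stehfest algorithm. The implementation below is a generic textbook version of that algorithm (the number of terms N and the test transform are arbitrary choices), not the authors' reservoir model.

    ```python
    import math

    def stehfest_coefficients(n):
        """Gaver-Stehfest weights V_k for an even number of terms n."""
        half = n // 2
        v = []
        for k in range(1, n + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, half) + 1):
                s += (j**half * math.factorial(2 * j)) / (
                    math.factorial(half - j) * math.factorial(j) * math.factorial(j - 1)
                    * math.factorial(k - j) * math.factorial(2 * j - k))
            v.append((-1) ** (k + half) * s)
        return v

    def stehfest_invert(F, t, n=12):
        """Approximate f(t) from its Laplace transform F(s)."""
        ln2_t = math.log(2.0) / t
        return ln2_t * sum(v * F(k * ln2_t) for k, v in enumerate(stehfest_coefficients(n), start=1))

    # sanity check with a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
    for t in (0.5, 1.0, 2.0):
        print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
    ```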

  19. Methods for analysis of cracks in three-dimensional solids

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Newman, J. C., Jr.

    1984-01-01

    Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.

  20. Comparison of three methods for wind turbine capacity factor estimation.

    PubMed

    Ditkovich, Y; Kuperman, A

    2014-01-01

    Three approaches to calculating capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first "quasiexact" approach utilizes discrete wind raw data (in the histogram form) and manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second "analytic" approach employs a continuous probability distribution function, fitted to the wind data as well as continuous turbine power curve, resulting from double polynomial fitting of manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically thus providing a valuable insight into aspects, affecting the capacity factor. Moreover, several other merits of wind turbine performance may be derived based on the analytical approach. The third "approximate" approach, valid in case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, only requiring rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, enforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
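
    The "analytic" approach amounts to integrating the turbine power curve against the wind-speed probability density. The sketch below does this numerically for Rayleigh winds and a simple cubic power-curve model; the cut-in, rated, and cut-out speeds are illustrative values, not taken from the paper's case study.

    ```python
    import numpy as np

    def capacity_factor(v_mean, v_ci=3.0, v_r=12.0, v_co=25.0):
        """CF = E[P(v)] / P_rated for Rayleigh winds with mean speed v_mean (m/s)
        and a cubic power curve between cut-in and rated speed."""
        v = np.linspace(0.0, 40.0, 4001)
        pdf = (np.pi * v / (2.0 * v_mean**2)) * np.exp(-np.pi * v**2 / (4.0 * v_mean**2))
        p = np.where((v >= v_ci) & (v < v_r), (v**3 - v_ci**3) / (v_r**3 - v_ci**3), 0.0)
        p = np.where((v >= v_r) & (v <= v_co), 1.0, p)       # normalised so P_rated = 1
        return float(np.sum(p * pdf) * (v[1] - v[0]))        # rectangle-rule integration

    for v_mean in (5.0, 7.0, 9.0):
        print(v_mean, round(capacity_factor(v_mean), 3))
    ```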

  1. On Analytical Solutions of f(R) Modified Gravity Theories in FLRW Cosmologies

    NASA Astrophysics Data System (ADS)

    Domazet, Silvije; Radovanović, Voja; Simonović, Marko; Štefančić, Hrvoje

    2013-02-01

    A novel analytical method for f(R) modified theories without matter in Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetimes is introduced. The equation of motion for the scale factor in terms of cosmic time is reduced to the equation for the evolution of the Ricci scalar R with the Hubble parameter H. The solution of equation of motion for actions of the form of power law in Ricci scalar R is presented with a detailed elaboration of the action quadratic in R. The reverse use of the introduced method is exemplified in finding functional forms f(R), which leads to specified scale factor functions. The analytical solutions are corroborated by numerical calculations with excellent agreement. Possible further applications to the phases of inflationary expansion and late-time acceleration as well as f(R) theories with radiation are outlined.

  2. Correlation of the Capacity Factor in Vesicular Electrokinetic Chromatography with the Octanol:Water Partition Coefficient for Charged and Neutral Analytes

    PubMed Central

    Razak, J. L.; Cutak, B. J.; Larive, C. K.; Lunte, C. E.

    2008-01-01

    Purpose The aim of this study was to develop a method based upon electrokinetic chromatography (EKC) using oppositely charged surfactant vesicles as a buffer modifier to estimate hydrophobicity (log P) for a range of neutral and charged compounds. Methods Vesicles were formed from cetyltrimethylammonium bromide (CTAB) and sodium n-octyl sulfate (SOS). The size and polydispersity of the vesicles were characterized by electron microscopy, dynamic light scattering, and pulsed-field gradient NMR (PFG-NMR). PFG-NMR was also used to determine if ion-pairing between cationic analytes and free SOS monomer occurred. The CTAB/SOS vesicles were used as a buffer modifier in capillary electrophoresis (CE). The capacity factor (log k′) was calculated by determining the mobility of the analytes both in the presence and absence of vesicles. Log k′ was determined for 29 neutral and charged analytes. Results There was a linear relationship between the log of capacity factor (log k′) and octanol/water partition coefficient (log P) for both neutral and basic species at pH 6.0, 7.3, and 10.2. This indicated that interaction between the cation and vesicle was dominated by hydrophobic forces. At pH 4.3, the log k′ values for the least hydrophobic basic analytes were higher than expected, indicating that electrostatic attraction as well as hydrophobic forces contributed to the overall interaction between the cation and vesicle. Anionic compounds could not be evaluated using this system. Conclusion Vesicular electrokinetic chromatography (VEKC) using surfactant vesicles as buffer modifiers is a promising method for the estimation of hydrophobicity. PMID:11336344

  3. Sample Size and Power Estimates for a Confirmatory Factor Analytic Model in Exercise and Sport: A Monte Carlo Approach

    ERIC Educational Resources Information Center

    Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying

    2011-01-01

    Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…

  4. Landslide susceptibility mapping by combining the three methods Fuzzy Logic, Frequency Ratio and Analytical Hierarchy Process in Dozain basin

    NASA Astrophysics Data System (ADS)

    Tazik, E.; Jahantab, Z.; Bakhtiari, M.; Rezaei, A.; Kazem Alavipanah, S.

    2014-10-01

    Landslides are among the most important natural hazards that lead to modification of the environment, so studying this phenomenon is important in many areas. Given the climatic, geologic, and geomorphologic characteristics of the region, the purpose of this study was landslide hazard assessment using the Fuzzy Logic, frequency ratio, and Analytical Hierarchy Process (AHP) methods in the Dozein basin, Iran. First, landslides that had occurred in the Dozein basin were identified using aerial photos and field studies. The landslide-influencing parameters used in this study, namely slope, aspect, elevation, lithology, precipitation, land cover, distance from fault, distance from road, and distance from river, were obtained from different sources and maps. Using these factors and the identified landslides, fuzzy membership values were calculated by frequency ratio. Then, to account for the importance of each factor in landslide susceptibility, the weight of each factor was determined based on a questionnaire and the AHP method. Finally, the fuzzy map of each factor was multiplied by its weight obtained with the AHP method. To compute prediction accuracy, the produced map was verified by comparison with existing landslide locations. The results indicate that combining the three methods Fuzzy Logic, Frequency Ratio, and Analytical Hierarchy Process gives a relatively good estimate of landslide susceptibility in the study area. According to the landslide susceptibility map, about 51% of the observed landslides fall into the high and very high susceptibility zones, while approximately 26% of them are located in the low and very low susceptibility zones.
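
    The two quantitative ingredients of this procedure, frequency ratios rescaled to fuzzy memberships and AHP weights from a pairwise comparison matrix, are easy to illustrate in isolation. The comparison matrix, class counts, and factor names below are fabricated; only the arithmetic mirrors the description above.

    ```python
    import numpy as np

    # AHP: weights = normalised principal eigenvector of a pairwise comparison matrix
    # hypothetical 3-factor comparison (slope, lithology, distance from road)
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    weights = principal / principal.sum()

    # consistency index and ratio (random index 0.58 for a 3x3 matrix)
    ci = (np.max(np.real(eigvals)) - len(A)) / (len(A) - 1)
    print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))

    # frequency ratio for one factor's classes, rescaled to [0, 1] fuzzy memberships
    landslide_pixels = np.array([12, 45, 80, 33])      # landslides per slope class (hypothetical)
    class_pixels = np.array([4000, 6000, 5000, 2000])  # total pixels per class
    fr = (landslide_pixels / landslide_pixels.sum()) / (class_pixels / class_pixels.sum())
    membership = (fr - fr.min()) / (fr.max() - fr.min())
    print("frequency ratios:", np.round(fr, 2), "fuzzy memberships:", np.round(membership, 2))
    ```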

  5. Improving the efficiency of quantitative (1)H NMR: an innovative external standard-internal reference approach.

    PubMed

    Huang, Yande; Su, Bao-Ning; Ye, Qingmei; Palaniswamy, Venkatapuram A; Bolgar, Mark S; Raglione, Thomas V

    2014-01-01

    The classical internal standard quantitative NMR (qNMR) method determines the purity of an analyte by measuring a solution containing the analyte and a standard. Therefore, the standard must meet the requirements of chemical compatibility and lack of resonance interference with the analyte, as well as having a known purity. The identification of such a standard can be time consuming and must be repeated for each analyte. In contrast, the external standard qNMR method utilizes a standard with a known purity to calibrate the NMR instrument. The external standard and the analyte are measured separately, thereby eliminating the matter of chemical compatibility and resonance interference between the standard and the analyte. However, the instrumental factors, including the quality of the NMR tubes, must be kept the same; any deviations will compromise the accuracy of the results. An innovative qNMR method reported herein utilizes an internal reference substance along with an external standard to assume the role of the standard used in the traditional internal standard qNMR method. In this new method, the internal reference substance must only be chemically compatible and free of resonance interference with the analyte or external standard, whereas the external standard must only be of a known purity. The exact purity or concentration of the internal reference substance is not required as long as the same quantity is added to the external standard and the analyte. The new method significantly reduces the burden of searching for an appropriate standard for each analyte. Therefore the efficiency of the qNMR purity assay increases while the precision of the internal standard method is retained. Copyright © 2013 Elsevier B.V. All rights reserved.
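
    The purity arithmetic is the same in all of these configurations; what changes is which signal ratio is calibrated. The sketch below shows the classical internal-standard relation and, under the stated assumption that the same amount of reference substance is added to both tubes, the external-standard/internal-reference variant in which the reference's unknown purity cancels. All numerical values are hypothetical.

    ```python
    def qnmr_purity_internal_standard(I_a, N_a, M_a, m_a, I_s, N_s, M_s, m_s, P_s):
        """Classical internal-standard qNMR:
        P_a = P_s * (I_a/N_a)/(I_s/N_s) * (M_a/M_s) * (m_s/m_a)."""
        return P_s * (I_a / N_a) / (I_s / N_s) * (M_a / M_s) * (m_s / m_a)

    def qnmr_purity_ext_std_int_ref(r_a, r_s, M_a, m_a, M_s, m_s, P_s):
        """External standard with a common internal reference: r_a and r_s are the
        proton-normalised analyte-to-reference and standard-to-reference integral
        ratios measured in two separate tubes containing the same amount of reference."""
        return P_s * (r_a / r_s) * (M_a / M_s) * (m_s / m_a)

    # hypothetical run: integrals I, proton counts N, molar masses M (g/mol), weighed masses m (mg)
    print(qnmr_purity_internal_standard(I_a=0.98, N_a=2, M_a=151.2, m_a=10.0,
                                        I_s=1.00, N_s=2, M_s=122.1, m_s=8.0,
                                        P_s=0.9999))        # about 0.97, i.e. 97% purity
    ```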

  6. A Cluster Analytic Study of Osteoprotective Behavior in Undergraduates

    ERIC Educational Resources Information Center

    Sharp, Katherine; Thombs, Dennis L.

    2003-01-01

    Objective: To derive an empirical taxonomy of osteoprotective stages using the Precaution Adoption Process Model (PAPM) and to identify the predisposing factors associated with each stage. Methods: An anonymous survey was completed by 504 undergraduates at a Midwestern public university. Results: Cluster analytic findings indicate that only 2…

  7. Application of factorial designs to study factors involved in the determination of aldehydes present in beer by on-fiber derivatization in combination with gas chromatography and mass spectrometry.

    PubMed

    Carrillo, Génesis; Bravo, Adriana; Zufall, Carsten

    2011-05-11

    With the aim of studying the factors involved in the on-fiber derivatization of Strecker aldehydes, furfural, and (E)-2-nonenal with O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine in beer, factorial designs were applied. The effect of temperature, time, and NaCl addition on the analytes' derivatization/extraction efficiency was studied through a 2³ factorial randomized-block design; all of the factors and their interactions were significant at the 95% confidence level for most of the analytes. The effect of temperature and its interactions separated the analytes into two groups. However, a single sampling condition was selected that optimized the response for most aldehydes. The resulting method, combining on-fiber derivatization with gas chromatography-mass spectrometry, was validated. Limits of detection were between 0.015 and 1.60 μg/L, and relative standard deviations were between 1.1 and 12.2%. The efficacy of the internal standardization method was confirmed by the recovery percentages (73-117%). The method was applied to the determination of aldehydes in fresh beer and after storage at 28 °C.

  8. Application of a dispersive solid-phase extraction method using an amino-based silica-coated nanomagnetic sorbent for the trace quantification of chlorophenoxyacetic acids in water samples.

    PubMed

    Ghambarian, Mahnaz; Behbahani, Mohammad; Esrafili, Ali; Sobhi, Hamid Reza

    2017-09-01

    Herein, an amino-based silica-coated nanomagnetic sorbent was applied for the effective extraction of two chlorophenoxyacetic acids (2-methyl-4-chlorophenoxyacetic acid and 2,4-dichlorophenoxyacetic acid) from various water samples. The sorbent was successfully synthesized and subsequently characterized by scanning electron microscopy, X-ray diffraction, and Fourier-transform infrared spectroscopy. The analytes were extracted by the sorbent mainly through ionic interactions. Once the extraction of analytes was completed, they were desorbed from the sorbent and detected by high-performance liquid chromatography with ultraviolet detection. A number of factors affecting the extraction and desorption of the analytes were investigated in detail and the optimum conditions were established. Under the optimum conditions, the calibration curves were linear over the concentration range of 1-250, and based on a signal-to-noise ratio of 3, the method detection limits were determined to be 0.5 μg/L for both analytes. Additionally, a preconcentration factor of 314 was achieved for the analytes. The average relative recoveries obtained from the fortified water samples varied in the range of 91-108% with relative standard deviations of 2.9-8.3%. Finally, the method was determined to be robust and effective for environmental water analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Construction of RFIF using VVSFs with application

    NASA Astrophysics Data System (ADS)

    Katiyar, Kuldip; Prasad, Bhagwati

    2017-10-01

    A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods, using an analytic approach, is presented for finding the variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.

  10. MS-Based Analytical Techniques: Advances in Spray-Based Methods and EI-LC-MS Applications

    PubMed Central

    Medina, Isabel; Cappiello, Achille; Careri, Maria

    2018-01-01

    Mass spectrometry is the most powerful technique for the detection and identification of organic compounds. It can provide molecular weight information and a wealth of structural details that give a unique fingerprint for each analyte. Due to these characteristics, mass spectrometry-based analytical methods are attracting increasing interest in the scientific community, especially in food safety, environmental, and forensic investigation areas where the simultaneous detection of targeted and nontargeted compounds represents a key factor. In addition, safety risks can be identified at the early stage through online and real-time analytical methodologies. In this context, several efforts have been made to develop analytical instrumentation able to perform real-time analysis in the native environment of samples and to generate highly informative spectra. This review article provides a survey of some instrumental innovations and their applications with particular attention to spray-based MS methods and food analysis issues. The survey will attempt to cover the state of the art from 2012 up to 2017. PMID:29850370

  11. Quantitative phase imaging method based on an analytical nonparaxial partially coherent phase optical transfer function.

    PubMed

    Bao, Yijun; Gaylord, Thomas K

    2016-11-01

    Multifilter phase imaging with partially coherent light (MFPI-PC) is a promising new quantitative phase imaging method. However, the existing MFPI-PC method is based on the paraxial approximation. In the present work, an analytical nonparaxial partially coherent phase optical transfer function is derived. This enables the MFPI-PC to be extended to the realistic nonparaxial case. Simulations over a wide range of test phase objects as well as experimental measurements on a microlens array verify higher levels of imaging accuracy compared to the paraxial method. Unlike the paraxial version, the nonparaxial MFPI-PC with obliquity factor correction exhibits no systematic error. In addition, due to its analytical expression, the increase in computation time compared to the paraxial version is negligible.

  12. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    PubMed

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.

  13. Compliance measurements of chevron notched four point bend specimen

    NASA Technical Reports Server (NTRS)

    Calomino, Anthony; Bubsey, Raymond; Ghosn, Louis J.

    1994-01-01

    The experimental stress intensity factors for various chevron notched four point bend specimens are presented. The experimental compliance is verified using the analytical solution for a straight through crack four point bend specimen and the boundary integral equation method for one chevron geometry. Excellent agreement is obtained between the experimental and analytical results. In this report, stress intensity factors, loading displacements and crack mouth opening displacements are reported for different crack lengths and different chevron geometries, under four point bend loading conditions.
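
    The abstract reports compliance measured as a function of crack length; a standard way to reduce such data to stress intensity factors (not necessarily the exact reduction used in the report) is through the energy release rate, G = (P²/2B)·dC/da, with K = sqrt(E′G). A minimal sketch with hypothetical numbers:
    ```python
    import numpy as np

    # Illustrative compliance-vs-crack-length data for a bend specimen (hypothetical values).
    a = np.array([0.010, 0.012, 0.014, 0.016, 0.018])       # crack length, m
    C = np.array([2.1e-8, 2.6e-8, 3.3e-8, 4.2e-8, 5.4e-8])  # compliance, m/N

    P = 1500.0        # applied load, N
    B = 0.005         # specimen thickness, m
    E_prime = 200e9   # effective modulus E' (plane stress: E; plane strain: E/(1-nu^2)), Pa

    dCda = np.gradient(C, a)                 # numerical derivative dC/da
    G = P**2 / (2.0 * B) * dCda              # energy release rate, J/m^2
    K = np.sqrt(E_prime * G)                 # stress intensity factor, Pa*sqrt(m)

    for ai, Ki in zip(a, K):
        print(f"a = {ai*1e3:5.1f} mm  K = {Ki/1e6:6.2f} MPa*sqrt(m)")
    ```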

  14. Disasters and Youth: A Meta-Analytic Examination of Posttraumatic Stress

    ERIC Educational Resources Information Center

    Furr, Jami M.; Comer, Jonathan S.; Edmunds, Julie M.; Kendall, Philip C.

    2010-01-01

    Objective: Meta-analyze the literature on posttraumatic stress (PTS) symptoms in youths post-disaster. Method: Meta-analytic synthesis of the literature (k = 96 studies; N[subscript total] = 74,154) summarizing the magnitude of associations between disasters and youth PTS, and key factors associated with variations in the magnitude of these…

  15. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of total uncertainty of analytical methods for the measurements of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 times larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CV_T). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
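
    Assuming the analytical and pre-analytical components are independent and combine in quadrature (the usual assumption behind a total CV), a minimal sketch of how the total uncertainty and the ±2CV_T interval mentioned in the abstract could be computed; the numerical values are illustrative only.
    ```python
    import math

    def total_cv(cv_analytical, cv_preanalytical):
        """Combine independent variation components in quadrature (assumption)."""
        return math.sqrt(cv_analytical**2 + cv_preanalytical**2)

    # Illustrative values within the ranges reported in the abstract.
    cv_a = 0.10   # analytical CV (10%)
    cv_p = 0.40   # pre-analytical sampling CV (40%)

    cv_t = total_cv(cv_a, cv_p)
    conc = 1.2    # hypothetical measured hair concentration, ng/mg

    lower = conc * (1 - 2 * cv_t)
    upper = conc * (1 + 2 * cv_t)
    print(f"CV_T = {cv_t:.0%}; approximate 95% interval: {lower:.2f}-{upper:.2f} ng/mg")
    ```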

  16. Universal analytical scattering form factor for shell-, core-shell, or homogeneous particles with continuously variable density profile shape.

    PubMed

    Foster, Tobias

    2011-09-01

    A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. The density profile is composed of the sum of two Fermi-Dirac distribution functions, and its shape can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
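
    A hedged numerical sketch of the idea: build a radial profile from two Fermi-Dirac terms (one plausible shell-like parameterization, not necessarily the paper's exact profile) and evaluate the spherically symmetric form factor by direct numerical integration. The analytical approximation discussed in the abstract is not reproduced here.
    ```python
    import numpy as np

    def density_profile(r, r_in, r_out, d):
        """Shell-like radial profile from two Fermi-Dirac terms (assumed parameterization)."""
        return 1.0 / (1.0 + np.exp((r - r_out) / d)) - 1.0 / (1.0 + np.exp((r - r_in) / d))

    def form_factor(q, r_in, r_out, d, n=4000):
        r = np.linspace(1e-6, r_out + 20.0 * d, n)
        dr = r[1] - r[0]
        rho = density_profile(r, r_in, r_out, d)
        # Spherically symmetric amplitude A(q) = 4*pi * integral of rho(r) r^2 sin(qr)/(qr) dr
        kernel = np.sinc(np.outer(q, r) / np.pi)          # sin(qr)/(qr)
        amp = 4.0 * np.pi * (kernel * rho * r**2).sum(axis=1) * dr
        return (amp / amp[0]) ** 2                        # normalized so P(q -> 0) ~ 1

    q = np.linspace(1e-3, 2.0, 200)
    # Small d gives a step-like shell; larger d smears the interfaces toward Gaussian-like shapes.
    pq = form_factor(q, r_in=6.0, r_out=10.0, d=0.3)
    print(f"P(q) at q = {q[50]:.3f}: {pq[50]:.3e}")
    ```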

  17. Comparison of analytical methods for calculation of wind loads

    NASA Technical Reports Server (NTRS)

    Minderman, Donald J.; Schultz, Larry L.

    1989-01-01

    The following analysis is a comparison of analytical methods for calculation of wind load pressures. The analytical methods specified in ASCE Paper No. 3269, ANSI A58.1-1982, the Standard Building Code, and the Uniform Building Code were analyzed using various hurricane speeds to determine the differences in the calculated results. The winds used for the analysis ranged from 100 mph to 125 mph and were applied inland from the shoreline of a large open body of water (i.e., an enormous lake or the ocean) to a distance of 1500 feet or ten times the height of the building or structure considered. For a building or structure less than or equal to 250 feet in height acted upon by a wind greater than or equal to 115 mph, it was determined that the method specified in ANSI A58.1-1982 calculates a larger wind load pressure than the other methods. For a building or structure between 250 feet and 500 feet tall acted upon by a wind ranging from 100 mph to 110 mph, there is no clear choice of which method to use; for these cases, factors that must be considered are the steady-state or peak wind velocity, the geographic location, the distance from a large open body of water, and the expected design life and its risk factor.

  18. Evaluating the performance of free-formed surface parts using an analytic network process

    NASA Astrophysics Data System (ADS)

    Qian, Xueming; Ma, Yanqiao; Liang, Dezhi

    2018-03-01

    To successfully design parts with a free-formed surface, the critical issue of how to evaluate and select a favourable evaluation strategy before design is raised. The evaluation of free-formed surface parts is a multiple criteria decision-making (MCDM) problem that requires the consideration of a large number of interdependent factors. The analytic network process (ANP) is a relatively new MCDM method that can systematically deal with all kinds of dependences. In this paper, factors drawn from the product life-cycle that influence the design of free-formed surface parts are identified. After analysing the interdependence among these factors, a Hybrid ANP (HANP) structure for evaluating the part’s curved surface is constructed. Then, a HANP evaluation of an impeller is presented to illustrate the application of the proposed method.

  19. TEMPERATURE COEFFICIENTS OF HETEROGENEOUS U-238-U-235 FUELED REACTORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Astley, E.R.; Mansius, C.A.

    1958-05-14

    An analytical method of determining the effective reactivity coefficient from fundamental cross sections using the four factor formula is presented. Values of the coefficient obtained by this method compare well with experiment. (A.C.)
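
    The four factor formula referred to here is the classical relation k_inf = η·ε·p·f. A minimal sketch with illustrative parameter values (not taken from the report), including a crude first-order estimate of a temperature coefficient through one temperature-dependent factor:
    ```python
    # Four factor formula for the infinite-medium multiplication factor:
    #   k_inf = eta * epsilon * p * f
    # Illustrative values for a natural-uranium-fueled lattice (not taken from the report).
    eta = 1.30      # neutrons produced per neutron absorbed in fuel
    epsilon = 1.03  # fast fission factor
    p = 0.90        # resonance escape probability
    f = 0.88        # thermal utilization

    k_inf = eta * epsilon * p * f
    print(f"k_inf = {k_inf:.3f}")

    # Crude temperature coefficient estimate: first-order sensitivity of k_inf to a
    # temperature-dependent parameter, here the resonance escape probability p(T).
    dp_dT = -2.0e-5                     # hypothetical sensitivity, 1/K
    dk_dT = eta * epsilon * f * dp_dT   # dk/dT = (dk/dp) * (dp/dT)
    print(f"dk/dT (via p only) = {dk_dT:.2e} per K")
    ```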

  20. Design Evaluation of Wind Turbine Spline Couplings Using an Analytical Model: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Keller, J.; Wallen, R.

    2015-02-01

    Articulated splines are commonly used in the planetary stage of wind turbine gearboxes for transmitting the driving torque and improving load sharing. Direct measurement of spline loads and performance is extremely challenging because of limited accessibility. This paper presents an analytical model for the analysis of articulated spline coupling designs. For a given torque and shaft misalignment, this analytical model quickly yields insights into relationships between the spline design parameters and resulting loads; bending, contact, and shear stresses; and safety factors considering various heat treatment methods. Comparisons of this analytical model against previously published computational approaches are also presented.

  1. Effect of vibration on retention characteristics of screen acquisition systems. [for surface tension propellant acquisition

    NASA Technical Reports Server (NTRS)

    Tegart, J. R.; Aydelott, J. C.

    1978-01-01

    The design of surface tension propellant acquisition systems using fine-mesh screen must take into account all factors that influence the liquid pressure differentials within the system. One of those factors is spacecraft vibration. Analytical models to predict the effects of vibration have been developed. A test program to verify the analytical models and to allow a comparative evaluation of the parameters influencing the response to vibration was performed. Screen specimens were tested under conditions simulating the operation of an acquisition system, considering the effects of such parameters as screen orientation and configuration, screen support method, screen mesh, liquid flow and liquid properties. An analytical model, based on empirical coefficients, was most successful in predicting the effects of vibration.

  2. Stability of Q-Factors across Two Data Collection Methods.

    ERIC Educational Resources Information Center

    Daniel, Larry G.

    The purpose of the present study was to determine how two different data collection techniques would affect the Q-factors derived from several factor analytic procedures. Faculty members (N=146) from seven middle schools responded to 61 items taken from an instrument designed to measure aspects of an idealized middle school culture; the instrument…

  3. The Effect of Brain Based Learning on Academic Achievement: A Meta-Analytical Study

    ERIC Educational Resources Information Center

    Gozuyesil, Eda; Dikici, Ayhan

    2014-01-01

    This study's aim is to measure the effect sizes of the quantitative studies that examined the effectiveness of brain-based learning on students' academic achievement and to examine with the meta-analytical method if there is a significant difference in effect in terms of the factors of education level, subject matter, sampling size, and the…

  4. Evaluation of injection methods for fast, high peak capacity separations with low thermal mass gas chromatography.

    PubMed

    Fitz, Brian D; Mannion, Brandyn C; To, Khang; Hoac, Trinh; Synovec, Robert E

    2015-05-01

    Low thermal mass gas chromatography (LTM-GC) was evaluated for rapid, high peak capacity separations with three injection methods: liquid, headspace solid phase micro-extraction (HS-SPME), and direct vapor. An Agilent LTM equipped with a short microbore capillary column was operated at a column heating rate of 250 °C/min to produce a 60s separation. Two sets of experiments were conducted in parallel to characterize the instrumental platform. First, the three injection methods were performed in conjunction with in-house built high-speed cryo-focusing injection (HSCFI) to cryogenically trap and re-inject the analytes onto the LTM-GC column in a narrower band. Next, the three injection methods were performed natively with LTM-GC. Using HSCFI, the peak capacity of a separation of 50 nl of a 73 component liquid test mixture was 270, which was 23% higher than without HSCFI. Similar peak capacity gains were obtained when using the HSCFI with HS-SPME (25%), and even greater with vapor injection (56%). For the 100 μl vapor sample injected without HSCFI, the preconcentration factor, defined as the ratio of the maximum concentration of the detected analyte peak relative to the analyte concentration injected with the syringe, was determined to be 11 for the earliest eluting peak (most volatile analyte). In contrast, the preconcentration factor for the earliest eluting peak using HSCFI was 103. Therefore, LTM-GC is demonstrated to natively provide in situ analyte trapping, although not to as great an extent as with HSCFI. We also report the use of LTM-GC applied with time-of-flight mass spectrometry (TOFMS) detection for rapid, high peak capacity separations from SPME sampled banana peel headspace. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Accurate expressions for solar cell fill factors including series and shunt resistances

    NASA Astrophysics Data System (ADS)

    Green, Martin A.

    2016-02-01

    Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
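
    For the ideal, resistance-free case mentioned in the abstract, the maximum power point of a single-diode cell satisfies exp(v)(1 + v) = exp(v_oc) in normalized voltage, which can be solved with the Lambert W function; Green's earlier empirical approximation FF ≈ (v_oc − ln(v_oc + 0.72))/(v_oc + 1) is shown alongside for comparison. This sketch does not reproduce the paper's new expressions for finite series and shunt resistance.
    ```python
    import numpy as np
    from scipy.special import lambertw

    def ff_ideal_numeric(v_oc):
        """Ideal-diode fill factor from normalized open-circuit voltage v_oc = q*Voc/(n*k*T).
        The maximum power point satisfies exp(v)*(1 + v) = exp(v_oc); solved with Lambert W."""
        v_mp = np.real(lambertw(np.exp(v_oc + 1.0))) - 1.0
        i_mp = 1.0 - (np.exp(v_mp) - 1.0) / (np.exp(v_oc) - 1.0)   # normalized current at MPP
        return v_mp * i_mp / v_oc

    def ff_empirical(v_oc):
        """Green's well-known empirical approximation (accurate for large v_oc)."""
        return (v_oc - np.log(v_oc + 0.72)) / (v_oc + 1.0)

    v_oc = 20.0   # e.g. Voc ~ 0.52 V at room temperature with ideality factor n = 1
    print(f"numeric FF   = {ff_ideal_numeric(v_oc):.4f}")
    print(f"empirical FF = {ff_empirical(v_oc):.4f}")
    ```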

  6. Procrustes Matching by Congruence Coefficients

    ERIC Educational Resources Information Center

    Korth, Bruce; Tucker, L. R.

    1976-01-01

    Matching by Procrustes methods involves the transformation of one matrix to match with another. A special least squares criterion, the congruence coefficient, has advantages as a criterion for some factor analytic interpretations. A Procrustes method maximizing the congruence coefficient is given. (Author/JKS)
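
    The congruence coefficient referred to here is Tucker's coefficient, φ = Σxy / √(Σx²·Σy²), computed between corresponding factor loading vectors. The sketch below only evaluates the coefficient for matched columns of two hypothetical loading matrices; it does not implement the Procrustes rotation that maximizes it.
    ```python
    import numpy as np

    def congruence(x, y):
        """Tucker's congruence coefficient between two loading vectors."""
        return float(x @ y / np.sqrt((x @ x) * (y @ y)))

    # Hypothetical factor loading matrices (variables x factors) from two solutions.
    A = np.array([[0.71, 0.10], [0.65, 0.05], [0.12, 0.80], [0.08, 0.74]])
    B = np.array([[0.68, 0.15], [0.70, 0.02], [0.10, 0.77], [0.15, 0.79]])

    for j in range(A.shape[1]):
        print(f"factor {j + 1}: phi = {congruence(A[:, j], B[:, j]):.3f}")
    ```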

  7. Development of a validated liquid chromatographic method for quantification of sorafenib tosylate in the presence of stress-induced degradation products and in biological matrix employing analytical quality by design approach.

    PubMed

    Sharma, Teenu; Khurana, Rajneet Kaur; Jain, Atul; Katare, O P; Singh, Bhupinder

    2018-05-01

    The current research work envisages an analytical quality by design-enabled development of a simple, rapid, sensitive, specific, robust and cost-effective stability-indicating reversed-phase high-performance liquid chromatographic method for determining stress-induced forced-degradation products of sorafenib tosylate (SFN). An Ishikawa fishbone diagram was constructed to embark upon analytical target profile and critical analytical attributes, i.e. peak area, theoretical plates, retention time and peak tailing. Factor screening using Taguchi orthogonal arrays and quality risk assessment studies carried out using failure mode effect analysis aided the selection of critical method parameters, i.e. mobile phase ratio and flow rate potentially affecting the chosen critical analytical attributes. Systematic optimization using response surface methodology of the chosen critical method parameters was carried out employing a two-factor-three-level-13-run, face-centered cubic design. A method operable design region was earmarked providing optimum method performance using numerical and graphical optimization. The optimum method employed a mobile phase composition consisting of acetonitrile and water (containing orthophosphoric acid, pH 4.1) at 65:35 v/v at a flow rate of 0.8 mL/min with UV detection at 265 nm using a C 18 column. Response surface methodology validation studies confirmed good efficiency and sensitivity of the developed method for analysis of SFN in mobile phase as well as in human plasma matrix. The forced degradation studies were conducted under different recommended stress conditions as per ICH Q1A (R2). Mass spectroscopy studies showed that SFN degrades in strongly acidic, alkaline and oxidative hydrolytic conditions at elevated temperature, while the drug was per se found to be photostable. Oxidative hydrolysis using 30% H 2 O 2 showed maximum degradation with products at retention times of 3.35, 3.65, 4.20 and 5.67 min. The absence of any significant change in the retention time of SFN and degradation products, formed under different stress conditions, ratified selectivity and specificity of the systematically developed method. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Development of a certified reference material (NMIJ CRM 7512-a) for the determination of trace elements in milk powder.

    PubMed

    Zhu, Yanbei; Narukawa, Tomohiro; Miyashita, Shin-ichi; Kuroiwa, Takayoshi; Inagaki, Kazumi; Chiba, Koichi; Hioki, Akiharu

    2013-01-01

    A certified reference material (CRM), NMIJ CRM 7512-a, was developed for the determination of trace elements in milk powder. At least three independent analytical methods were applied to characterize the certified value of each element; all of these analytical methods were based on microwave acid digestions and carried out using different analytical instruments. The certified value was given on a dry-mass basis, where the dry-mass correction factor was obtained by drying the sample at 65°C for 15 to 25 h. The certified values in the units of mass fractions for 13 elements were as follows: Ca, 8.65 (0.38) g kg(-1); Fe, 0.104 (0.007) g kg(-1); K, 8.41 (0.33) g kg(-1); Mg, 0.819 (0.024) g kg(-1); Na, 1.87 (0.09) g kg(-1); P, 5.62 (0.23) g kg(-1); Ba, 0.449 (0.013) mg kg(-1); Cu, 4.66 (0.23) mg kg(-1); Mn, 0.931 (0.032) mg kg(-1); Mo, 0.223 (0.012) mg kg(-1); Rb, 8.93 (0.31) mg kg(-1); Sr, 5.88 (0.20) mg kg(-1); and Zn, 41.3 (1.4) mg kg(-1), where the numbers in the parentheses are the expanded uncertainties with a coverage factor of 2. The expanded uncertainties were estimated considering the contribution of the analytical methods, the method-to-method variance, the sample homogeneity, the dry-mass correction factor, and the concentrations of the standard solutions for calibration. The concentrations of As (2.1 μg kg(-1)), Cd (0.2 μg kg(-1)), Cr (1.3 μg kg(-1)), Pb (0.3 μg kg(-1)), and Y (64 μg kg(-1)) were given as information values for the present CRM.

  9. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models

    PubMed Central

    2014-01-01

    Background Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Methods Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. Results The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Conclusions Day*Degree and photoperiod were identified as environmental variables responsible for the strong GxE interaction for body weight at harvest in rainbow trout across four environments. Both the reaction norm and the factor analytic models can help identify the environmental variables responsible for GxE interaction. A factor analytic model is preferred over a reaction norm model when limited information on differences in environmental variables between farms is available. PMID:24571451

  10. Evaluation of selected methods for determining streamflow during periods of ice effect

    USGS Publications Warehouse

    Melcher, Norwood B.; Walker, J.F.

    1992-01-01

    Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.

  11. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results and the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
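
    The paper's analytical formulation is not reproduced here; as a generic illustration of the kind of correction involved, the sketch below evaluates the textbook attenuation-averaged self-shielding factor for a uniformly active slab viewed along its normal, f = (1 − e^(−μt))/(μt). The attenuation coefficients are rough assumed values and should be checked before any real use.
    ```python
    import numpy as np

    def slab_self_shielding(mu, thickness):
        """Mean self-shielding factor for a uniformly active slab viewed along its normal:
        f = (1 - exp(-mu*t)) / (mu*t). Generic textbook relation, not the paper's model."""
        x = mu * thickness
        return (1.0 - np.exp(-x)) / x

    # Approximate linear attenuation coefficients (1/cm) -- assumed values, verify before use.
    cases = {
        "Co-60 (~1.25 MeV) in stainless steel": 0.42,
        "Cs-137 (0.662 MeV) in concrete":       0.18,
    }
    for label, mu in cases.items():
        for t in (1.0, 5.0, 10.0):   # slab thickness, cm
            print(f"{label:40s} t = {t:4.1f} cm  f = {slab_self_shielding(mu, t):.3f}")
    ```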

  12. Determination of Aluminum in Dialysis Concentrates by Atomic Absorption Spectrometry after Coprecipitation with Lanthanum Phosphate.

    PubMed

    Selvi, Emine Kılıçkaya; Şahin, Uğur; Şahan, Serkan

    2017-01-01

    This method was developed for the determination of trace amounts of aluminum(III) in dialysis concentrates using atomic absorption spectrometry after coprecipitation with lanthanum phosphate. The analytical parameters that influence the quantitative coprecipitation of the analyte, including the amount of lanthanum, the amount of phosphate, pH, and duration, were optimized. Recoveries of the analyte ion were in the range of 95-105%, with a limit of detection (3s) of 0.5 µg L⁻¹. The preconcentration factor was 1000, and the relative standard deviation (RSD) obtained from model solutions was 2.5% at 0.02 mg L⁻¹. The accuracy of the method was evaluated with a standard reference material (CWW-TMD Waste Water). The method was also applied to the most concentrated acidic and basic dialysis concentrates with satisfactory results.

  13. Development of a carbon-nanoparticle-coated stirrer for stir bar sorptive extraction by a simple carbon deposition in flame.

    PubMed

    Feng, Juanjuan; Sun, Min; Bu, Yanan; Luo, Chuannan

    2016-03-01

    Stir bar sorptive extraction is an environmentally friendly microextraction technique based on a stir bar with various sorbents. A commercial stirrer is a good support, but it has not been used in stir bar sorptive extraction due to difficult modification. A stirrer was modified with carbon nanoparticles by a simple carbon deposition process in flame and characterized by scanning electron microscopy and energy-dispersive X-ray spectrometry. A three-dimensional porous coating was formed with carbon nanoparticles. In combination with high-performance liquid chromatography, the stir bar was evaluated using five polycyclic aromatic hydrocarbons as model analytes. Conditions including extraction time and temperature, ionic strength, and desorption solvent were investigated by a factor-by-factor optimization method. The established method exhibited good linearity (0.01-10 μg/L) and low limits of quantification (0.01 μg/L). It was applied to detect model analytes in environmental water samples. No analyte was detected in river water, and five analytes were quantified in rain water. The recoveries of the five analytes in the two samples spiked at 2 μg/L were in the ranges of 92.2-106% and 93.4-108%, respectively. The results indicated that the carbon nanoparticle-coated stirrer was an efficient stir bar for extraction analysis of some polycyclic aromatic hydrocarbons. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Development of variable LRFD φ factors for deep foundation design due to site variability.

    DOT National Transportation Integrated Search

    2012-04-01

    The current design guidelines for Load and Resistance Factor Design (LRFD) specify constant values for deep foundation design, based on the analytical method selected and the degree of redundancy of the pier. However, investigation of multiple sites in ...

  15. Consistent approach to describing aircraft HIRF protection

    NASA Technical Reports Server (NTRS)

    Rimbey, P. R.; Walen, D. B.

    1995-01-01

    The high intensity radiated fields (HIRF) certification process as currently implemented is comprised of an inconsistent combination of factors that tend to emphasize worst case scenarios in assessing commercial airplane certification requirements. By examining these factors which include the process definition, the external HIRF environment, the aircraft coupling and corresponding internal fields, and methods of measuring equipment susceptibilities, activities leading to an approach to appraising airplane vulnerability to HIRF are proposed. This approach utilizes technically based criteria to evaluate the nature of the threat, including the probability of encountering the external HIRF environment. No single test or analytic method comprehensively addresses the full HIRF threat frequency spectrum. Additional tools such as statistical methods must be adopted to arrive at more realistic requirements to reflect commercial aircraft vulnerability to the HIRF threat. Test and analytic data are provided to support the conclusions of this report. This work was performed under NASA contract NAS1-19360, Task 52.

  16. Environmental and human monitoring of Americium-241 utilizing extraction chromatography and alpha-spectrometry.

    PubMed

    Goldstein, S J; Hensley, C A; Armenta, C E; Peters, R J

    1997-03-01

    Recent developments in extraction chromatography have simplified the separation of americium from complex matrices in preparation for alpha-spectroscopy relative to traditional methods. Here we present results of procedures developed/adapted for water, air, and bioassay samples with less than 1 g of inorganic residue. Prior analytical methods required the use of a complex, multistage procedure for separation of americium from these matrices. The newer, simplified procedure requires only a single 2 mL extraction chromatographic separation for isolation of Am and lanthanides from other components of the sample. This method has been implemented on an extensive variety of "real" environmental and bioassay samples from the Los Alamos area, and consistently reliable and accurate results with appropriate detection limits have been obtained. The new method increases analytical throughput by a factor of approximately 2 and decreases environmental hazards from acid and mixed-waste generation relative to the prior technique. Analytical accuracy, reproducibility, and reliability are also significantly improved over the more complex and laborious method used previously.

  17. Stripping Voltammetry

    NASA Astrophysics Data System (ADS)

    Lovrić, Milivoj

    Electrochemical stripping means the oxidative or reductive removal of atoms, ions, or compounds from an electrode surface (or from the electrode body, as in the case of liquid mercury electrodes with dissolved metals) [1-5]. In general, these atoms, ions, or compounds have been preliminarily immobilized on the surface of an inert electrode (or within it) as the result of a preconcentration step, while the products of the electrochemical stripping will dissolve in the electrolytic solution. Often the product of the electrochemical stripping is identical to the analyte before the preconcentration. However, there are exemptions to these rules. Electroanalytical stripping methods comprise two steps: first, the accumulation of a dissolved analyte onto, or in, the working electrode, and, second, the subsequent stripping of the accumulated substance by a voltammetric [3, 5], potentiometric [6, 7], or coulometric [8] technique. In stripping voltammetry, the condition is that there are two independent linear relationships: the first one between the activity of accumulated substance and the concentration of analyte in the sample, and the second between the maximum stripping current and the accumulated substance activity. Hence, a cumulative linear relationship between the maximum response and the analyte concentration exists. However, the electrode capacity for the analyte accumulation is limited and the condition of linearity is satisfied only well below the electrode saturation. For this reason, stripping voltammetry is used mainly in trace analysis. The limit of detection depends on the factor of proportionality between the activity of the accumulated substance and the bulk concentration of the analyte. This factor is a constant in the case of a chemical accumulation, but for electrochemical accumulation it depends on the electrode potential. The factor of proportionality between the maximum stripping current and the analyte concentration is rarely known exactly. In fact, it is frequently ignored. For the analysis it suffices to establish the linear relationship empirically. The slope of this relationship may vary from one sample to another because of different influences of the matrix. In this case the concentration of the analyte is determined by the method of standard additions [1]. After measuring the response of the sample, the concentration of the analyte is deliberately increased by adding a certain volume of its standard solution. The response is measured again, and this procedure is repeated three or four times. The unknown concentration is determined by extrapolation of the regression line to the concentration axis [9]. However, in many analytical methods, the final measurement is performed in a standard matrix that allows the construction of a calibration plot. Still, the slope of this plot depends on the active area of the working electrode surface. Each solid electrode needs a separate calibration plot, and that plot must be checked from time to time because of possible deterioration of the electrode surface [2].
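
    A minimal sketch of the standard-additions evaluation described above: fit a straight line to signal versus added standard and extrapolate the regression line to the concentration axis. The data are hypothetical and no dilution correction is applied.
    ```python
    import numpy as np

    # Added standard concentration in the measured solution (hypothetical, e.g. ug/L)
    # and the corresponding stripping peak currents (hypothetical, nA).
    added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
    signal = np.array([15.2, 27.8, 40.9, 53.6, 66.8])

    slope, intercept = np.polyfit(added, signal, 1)

    # Extrapolating the regression line to zero signal gives the negative of the analyte
    # concentration in the measured solution, so c = intercept / slope.
    c_sample = intercept / slope
    print(f"slope = {slope:.2f} nA per ug/L, intercept = {intercept:.2f} nA")
    print(f"estimated analyte concentration = {c_sample:.2f} ug/L (before any dilution correction)")
    ```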

  18. Replica Analysis for Portfolio Optimization with Single-Factor Model

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.

  19. Plate and butt-weld stresses beyond elastic limit, material and structural modeling

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1991-01-01

    Ultimate safety factors of high performance structures depend on stress behavior beyond the elastic limit, a region not too well understood. An analytical modeling approach was developed to gain fundamental insights into inelastic responses of simple structural elements. Nonlinear material properties were expressed in engineering stress and strain variables and combined with strength-of-materials stress and strain equations, similar to a numerical piece-wise linear method. Integrations are continuous, which allows for more detailed solutions. Included with interesting results are the classical combined axial tension and bending load model and the strain gauge conversion to stress beyond the elastic limit. Material discontinuity stress factors in butt-welds were derived. This is a working-type document with analytical methods and results applicable to all industries of high-reliability structures.

  20. Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark

    2009-01-01

    High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
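
    A hedged sketch of the decay-rate-to-loss-factor conversion that IRDM relies on, using the standard relation η = DR/(27.3·f) (equivalently η = 2.2/(f·T60)) on a synthetic single-band decaying response. The automated band filtering and curve-fitting procedure described in the paper is not reproduced, and all signal parameters are made up for illustration.
    ```python
    import numpy as np

    fs = 8192.0                      # sample rate, Hz
    f_band = 500.0                   # band center frequency, Hz
    eta_true = 0.02                  # loss factor used to synthesize the test signal

    t = np.arange(0, 1.0, 1.0 / fs)
    # Synthetic band-limited impulse response: decaying tone with envelope exp(-pi*f*eta*t).
    h = np.exp(-np.pi * f_band * eta_true * t) * np.sin(2 * np.pi * f_band * t)

    # Schroeder backward integration gives a smooth energy decay curve in dB.
    energy = np.cumsum(h[::-1] ** 2)[::-1]
    decay_db = 10.0 * np.log10(energy / energy[0])

    # Fit the decay slope over an early portion of the curve (0 to -20 dB).
    mask = decay_db > -20.0
    slope_db_per_s, _ = np.polyfit(t[mask], decay_db[mask], 1)

    eta_est = -slope_db_per_s / (27.3 * f_band)   # eta = DR / (27.3 * f)
    print(f"true eta = {eta_true:.4f}, estimated eta = {eta_est:.4f}")
    ```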

  1. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods in two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issue of adaptive finite element methods for validating the new methodology by computing demonstration problems and comparing the stress intensity factors to analytical results.

  2. Factors Affecting the Location of Road Emergency Bases in Iran Using Analytical Hierarchy Process (AHP).

    PubMed

    Bahadori, Mohammadkarim; Hajebrahimi, Ahmad; Alimohammadzadeh, Khalil; Ravangard, Ramin; Hosseini, Seyed Mojtaba

    2017-10-01

    To identify and prioritize factors affecting the location of road emergency bases in Iran using Analytical Hierarchy Process (AHP). This was a mixed method (quantitative-qualitative) study conducted in 2016. The participants in this study included the professionals and experts in the field of pre-hospital and road emergency services issues working in the Health Deputy of Iran Ministry of Health and Medical Education, which were selected using purposive sampling method. In this study at first, the factors affecting the location of road emergency bases in Iran were identified using literature review and conducting interviews with the experts. Then, the identified factors were scored and prioritized using the studied professionals and experts' viewpoints through using the analytic hierarchy process (AHP) technique and its related pair-wise questionnaire. The collected data were analyzed using MAXQDA 10.0 software to analyze the answers given to the open question and Expert Choice 10.0 software to determine the weights and priorities of the identified factors. The results showed that eight factors were effective in locating the road emergency bases in Iran from the viewpoints of the studied professionals and experts in the field of pre-hospital and road emergency services issues, including respectively distance from the next base, region population, topography and geographical situation of the region, the volume of road traffic, the existence of amenities such as water, electricity, gas, etc. and proximity to the village, accident-prone sites, University ownership of the base site, and proximity to toll-house. Among the eight factors which were effective in locating the road emergency bases from the studied professionals and experts' perspectives, "distance from the next base" and "region population" were respectively the most important ones which had great differences with other factors.
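
    A minimal sketch of the AHP mechanics used in such studies: derive priority weights from a pairwise comparison matrix via its principal eigenvector and check Saaty's consistency ratio. The 3x3 matrix below is hypothetical and unrelated to the study's eight factors.
    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for three criteria (Saaty 1-9 scale).
    A = np.array([
        [1.0,   3.0, 5.0],
        [1/3.0, 1.0, 2.0],
        [1/5.0, 1/2.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # priority weights (principal eigenvector)

    n = A.shape[0]
    lam_max = eigvals.real[k]
    ci = (lam_max - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's tabulated random index
    cr = ci / ri                             # consistency ratio; < 0.10 is usually acceptable

    print("weights:", np.round(w, 3), " CR =", round(cr, 3))
    ```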

  3. Development, optimization, validation and application of faster gas chromatography - flame ionization detector method for the analysis of total petroleum hydrocarbons in contaminated soils.

    PubMed

    Zubair, Abdulrazaq; Pappoe, Michael; James, Lesley A; Hawboldt, Kelly

    2015-12-18

    This paper presents an important new approach to improving the timeliness of Total Petroleum Hydrocarbon (TPH) analysis in the soil by Gas Chromatography - Flame Ionization Detector (GC-FID) using the CCME Canada-Wide Standard reference method. The Canada-Wide Standard (CWS) method is used for the analysis of petroleum hydrocarbon compounds across Canada. However, inter-laboratory application of this method for the analysis of TPH in the soil has often shown considerable variability in the results. This could be due, in part, to differences in gas chromatography (GC) conditions and other steps involved in the method, as well as to the soil properties. In addition, there are differences in the interpretation of the GC results, which impacts the determination of the effectiveness of remediation at hydrocarbon-contaminated sites. In this work, a multivariate experimental design approach was used to develop and validate the analytical method for a faster quantitative analysis of TPH in (contaminated) soil. A fractional factorial design (fFD) was used to screen six factors to identify the most significant factors impacting the analysis. These factors included: injection volume (μL), injection temperature (°C), oven program (°C/min), detector temperature (°C), carrier gas flow rate (mL/min) and solvent ratio (v/v hexane/dichloromethane). The most important factors (carrier gas flow rate and oven program) were then optimized using a central composite response surface design. Robustness testing and validation of the model compare favourably with the experimental results, with a percentage difference of 2.78% for the analysis time. This research successfully reduced the method's standard analytical time from 20 to 8 min with all the carbon fractions eluting. The method was successfully applied for fast TPH analysis of Bunker C oil contaminated soil. A reduced analytical time would offer many benefits, including improved laboratory reporting times and overall improved clean-up efficiency. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  4. Enantioselective determination of representative profens in wastewater by a single-step sample treatment and chiral liquid chromatography-tandem mass spectrometry.

    PubMed

    Caballo, C; Sicilia, M D; Rubio, S

    2015-03-01

    This manuscript describes, for the first time, the simultaneous enantioselective determination of ibuprofen, naproxen and ketoprofen in wastewater based on liquid chromatography tandem mass spectrometry (LC-MS/MS). The method uses a single-step sample treatment based on microextraction with a supramolecular solvent made up of hexagonal inverted aggregates of decanoic acid, formed in situ in the wastewater sample through a spontaneous self-assembly process. Microextraction of profens was optimized and the analytical method validated. Isotopically labeled internal standards were used to compensate for both matrix interferences and recoveries. Apparent recoveries for the six enantiomers in influent and effluent wastewater samples were in the interval 97-103%. Low method detection limits (MDLs) were obtained (0.5-1.2 ng L(-1)) as a result of the high concentration factors achieved in the microextraction process (i.e. actual concentration factors 469-736). No analyte derivatization or evaporation of extracts, as it is required with GC-MS, was necessary. Relative standard deviations for enantiomers in wastewater were always below 8%. The method was applied to the determination of the concentrations and enantiomeric fractions of the targeted analytes in influents and effluents from three wastewater treatment plants. All the values found for profen enantiomers were consistent with those previously reported and confirmed again the suitability of using the enantiomeric fraction of ibuprofen as an indicator of the discharge of untreated or poorly treated wastewaters. Both the analytical and operational features of this method make it applicable to the assessment of the enantiomeric fate of profens in the environment. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Development and optimization of SPE-HPLC-UV/ELSD for simultaneous determination of nine bioactive components in Shenqi Fuzheng Injection based on Quality by Design principles.

    PubMed

    Wang, Lu; Qu, Haibin

    2016-03-01

    A method combining solid phase extraction, high performance liquid chromatography, and ultraviolet/evaporative light scattering detection (SPE-HPLC-UV/ELSD) was developed according to Quality by Design (QbD) principles and used to assay nine bioactive compounds within a botanical drug, Shenqi Fuzheng Injection. Risk assessment and a Plackett-Burman design were utilized to evaluate the impact of 11 factors on the resolutions and signal-to-noise of chromatographic peaks. Multiple regression and Pareto ranking analysis indicated that the sorbent mass, sample volume, flow rate, column temperature, evaporator temperature, and gas flow rate were statistically significant (p < 0.05) in this procedure. Furthermore, a Box-Behnken design combined with response surface analysis was employed to study the relationships between the quality of SPE-HPLC-UV/ELSD analysis and four significant factors, i.e., flow rate, column temperature, evaporator temperature, and gas flow rate. An analytical design space of SPE-HPLC-UV/ELSD was then constructed by calculated Monte Carlo probability. In the presented approach, the operating parameters of sample preparation, chromatographic separation, and compound detection were investigated simultaneously. Eight terms of method validation, i.e., system-suitability tests, method robustness/ruggedness, sensitivity, precision, repeatability, linearity, accuracy, and stability, were accomplished at a selected working point. These results revealed that the QbD principles were suitable in the development of analytical procedures for samples in complex matrices. Meanwhile, the analytical quality and method robustness were validated by the analytical design space. The presented strategy provides a tutorial on the development of a robust QbD-compliant quantitative method for samples in complex matrices.
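
    The Plackett-Burman screening step can be illustrated with the classic 12-run design for up to 11 two-level factors, built from cyclic shifts of a commonly tabulated generating row plus a final row of minus signs. This is a generic construction, not taken from the paper, and the columns are not mapped to the 11 factors evaluated in the abstract; verify against a published design table before real use.
    ```python
    import numpy as np

    def plackett_burman_12():
        """12-run Plackett-Burman design for up to 11 two-level factors,
        built by cyclic shifts of a commonly tabulated generating row."""
        first = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
        rows = [np.roll(first, i) for i in range(11)]
        rows.append(-np.ones(11, dtype=int))          # final row of all minus signs
        return np.array(rows, dtype=int)

    design = plackett_burman_12()
    print(design.shape)            # (12, 11)
    # Orthogonality check: every pair of columns should be balanced (dot product 0).
    print(np.abs(design.T @ design - 12 * np.eye(11)).max())
    ```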

  6. Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2002-01-01

    A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into ones of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft is introduced from the High Speed Research (HSR) program to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see if the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an active-controlled wind-tunnel model to demonstrate the utility of the scaling process.

  7. Task-based image quality evaluation of iterative reconstruction methods for low dose CT using computer simulations

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Fuld, Matthew K.; Fung, George S. K.; Tsui, Benjamin M. W.

    2015-04-01

    Iterative reconstruction (IR) methods for x-ray CT are a promising approach to improve image quality or reduce radiation dose to patients. The goal of this work was to use task-based image quality measures and the channelized Hotelling observer (CHO) to evaluate both analytic and IR methods for clinical x-ray CT applications. We performed realistic computer simulations at five radiation dose levels, from a clinical reference low dose D0 to 25% D0. A fixed size and contrast lesion was inserted at different locations into the liver of the XCAT phantom to simulate a weak signal. The simulated data were reconstructed on a commercial CT scanner (SOMATOM Definition Flash; Siemens, Forchheim, Germany) using the vendor-provided analytic (WFBP) and IR (SAFIRE) methods. The reconstructed images were analyzed by CHOs with both rotationally symmetric (RS) and rotationally oriented (RO) channels, and with different numbers of lesion locations (5, 10, and 20) in a signal known exactly (SKE), background known exactly but variable (BKEV) detection task. The area under the receiver operating characteristic curve (AUC) was used as a summary measure to compare the IR and analytic methods; the AUC was also used as the equal performance criterion to derive the potential dose reduction factor of IR. In general, there was good agreement in the relative AUC values of different reconstruction methods using CHOs with RS and RO channels, although the CHO with RO channels achieved higher AUCs than RS channels. The improvement of IR over analytic methods depends on the dose level. The reference dose level D0 was based on a clinical low dose protocol, lower than the standard dose due to the use of IR methods. At 75% D0, the performance improvement was statistically significant (p < 0.05). The potential dose reduction factor also depended on the detection task. For the SKE/BKEV task involving 10 lesion locations, a dose reduction of at least 25% from D0 was achieved.

  8. Allocation of Transaction Cost to Market Participants Using an Analytical Method in Deregulated Market

    NASA Astrophysics Data System (ADS)

    Jeyasankari, S.; Jeslin Drusila Nesamalar, J.; Charles Raja, S.; Venkatesh, P.

    2014-04-01

    Transmission cost allocation is one of the major challenges in transmission open access faced by the electric power sector. The purpose of this work is to provide an analytical method for allocating transmission transaction costs in a deregulated market. This research work provides a usage-based transaction cost allocation method based on the line-flow impact factor (LIF), which relates the power flow in each line to the transacted power for a given transaction. This method provides the impact of line flows without running an iterative power flow solution and is well suited for real-time applications. The proposed method is compared with the Newton-Raphson (NR) method of cost allocation on a sample six-bus system and a practical Indian utility 69-bus system, considering multilateral transactions.

  9. Novel and sensitive reversed-phase high-pressure liquid chromatography method with electrochemical detection for the simultaneous and fast determination of eight biogenic amines and metabolites in human brain tissue.

    PubMed

    Van Dam, Debby; Vermeiren, Yannick; Aerts, Tony; De Deyn, Peter Paul

    2014-08-01

    A fast and simple RP-HPLC method with electrochemical detection (ECD) and ion pair chromatography was developed, optimized and validated in order to simultaneously determine eight different biogenic amines and metabolites in post-mortem human brain tissue in a single-run analytical approach. The compounds of interest are the indolamine serotonin (5-hydroxytryptamine, 5-HT), the catecholamines dopamine (DA) and (nor)epinephrine ((N)E), as well as their respective metabolites, i.e. 3,4-dihydroxyphenylacetic acid (DOPAC) and homovanillic acid (HVA), 5-hydroxy-3-indoleacetic acid (5-HIAA) and 3-methoxy-4-hydroxyphenylglycol (MHPG). A two-level fractional factorial experimental design was applied to study the effect of five experimental factors (i.e. the ion-pair counter concentration, the level of organic modifier, the pH of the mobile phase, the temperature of the column, and the voltage setting of the detector) on the chromatographic behaviour. The cross effect between the five quantitative factors and the capacity and separation factors of the analytes were then analysed using a Standard Least Squares model. The optimized method was fully validated according to the requirements of SFSTP (Société Française des Sciences et Techniques Pharmaceutiques). Our human brain tissue sample preparation procedure is straightforward and relatively short, which allows samples to be loaded onto the HPLC system within approximately 4h. Additionally, a high sample throughput was achieved after optimization due to a total runtime of maximally 40min per sample. The conditions and settings of the HPLC system were found to be accurate with high intra and inter-assay repeatability, recovery and accuracy rates. The robust analytical method results in very low detection limits and good separation for all of the eight biogenic amines and metabolites in this complex mixture of biological analytes. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Electrodialytic in-line preconcentration for ionic solute analysis.

    PubMed

    Ohira, Shin-Ichi; Yamasaki, Takayuki; Koda, Takumi; Kodama, Yuko; Toda, Kei

    2018-04-01

    Preconcentration is an effective way to improve analytical sensitivity. Many types of methods are used for enrichment of ionic solute analytes. However, current methods are batchwise and include procedures such as trapping and elution. In this manuscript, we propose in-line electrodialytic enrichment of ionic solutes. The method can enrich ionic solutes within seconds by quantitative transfer of analytes from the sample solution to the acceptor solution under an electric field. Because of quantitative ion transfer, the enrichment factor (the ratio of the analyte concentration in the collected acceptor solution to that in the original sample) depends only on the flow rate ratio of the sample solution to the acceptor solution. The ratios of the concentrations and flow rates are equal for ratios up to 70, 20, and 70 for the tested ionic solutes of inorganic cations, inorganic anions, and heavy metal ions, respectively. The sensitivity of ionic solute determinations is also improved in accordance with the enrichment factor. The method can also simultaneously achieve matrix isolation and enrichment. The method was successfully applied to determine the concentrations of trace amounts of chloroacetic acids in tap water. The regulated concentration levels cannot be determined by conventional high-performance liquid chromatography with ultraviolet detection (HPLC-UV) without enrichment. However, enrichment with the present method is effective for determination of tap water quality by improving the limits of detection of HPLC-UV. The standard addition test with real tap water samples shows good recoveries (94.9-109.6%). Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A screening tool for delineating subregions of steady recharge within groundwater models

    USGS Publications Warehouse

    Dickinson, Jesse; Ferré, T.P.A.; Bakker, Mark; Crompton, Becky

    2014-01-01

    We have developed a screening method for simplifying groundwater models by delineating areas within the domain that can be represented using steady-state groundwater recharge. The screening method is based on an analytical solution for the damping of sinusoidal infiltration variations in homogeneous soils in the vadose zone. The damping depth is defined as the depth at which the flux variation damps to 5% of the variation at the land surface. Groundwater recharge may be considered steady where the damping depth is above the depth of the water table. The analytical solution approximates the vadose zone diffusivity as constant, and we evaluated when this approximation is reasonable. We evaluated the analytical solution through comparison of the damping depth computed by the analytic solution with the damping depth simulated by a numerical model that allows variable diffusivity. This comparison showed that the screening method conservatively identifies areas of steady recharge and is more accurate when water content and diffusivity are nearly constant. Nomograms of the damping factor (the ratio of the flux amplitude at any depth to the amplitude at the land surface) and the damping depth were constructed for clay and sand for periodic variations between 1 and 365 d and flux means and amplitudes from nearly 0 to 1 × 10−3 m d−1. We applied the screening tool to Central Valley, California, to identify areas of steady recharge. A MATLAB script was developed to compute the damping factor for any soil and any sinusoidal flux variation.
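
    To make the damping relation concrete, the sketch below evaluates the constant-diffusivity (linear-diffusion) damping form, in which a sinusoidal flux of period T attenuates with depth as exp(-z*sqrt(omega/(2*D))), and solves for the depth at which the amplitude falls to 5%. The diffusivity value and periods are illustrative assumptions, not the soils or nomograms of the paper.

        import numpy as np

        def damping_factor(depth_m, period_d, diffusivity_m2_per_d):
            """Amplitude ratio flux(z)/flux(0) for a sinusoidal flux variation,
            assuming a constant vadose-zone diffusivity (linear-diffusion form)."""
            omega = 2.0 * np.pi / period_d                          # angular frequency [1/d]
            decay = np.sqrt(omega / (2.0 * diffusivity_m2_per_d))   # decay constant [1/m]
            return np.exp(-decay * depth_m)

        def damping_depth(period_d, diffusivity_m2_per_d, threshold=0.05):
            """Depth at which the flux variation damps to `threshold` of its surface value."""
            omega = 2.0 * np.pi / period_d
            return -np.log(threshold) * np.sqrt(2.0 * diffusivity_m2_per_d / omega)

        # Illustrative diffusivity and periods only (not the paper's soils):
        for period in (1.0, 30.0, 365.0):                           # days
            print(period, round(damping_depth(period, diffusivity_m2_per_d=1e-2), 2))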

  12. Integration of gas chromatography mass spectrometry methods for differentiating ricin preparation methods.

    PubMed

    Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L

    2012-05-07

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.
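
    As a hedged illustration of the kind of multivariate factor analysis used to integrate the three data sets, the sketch below fits a two-factor model to a combined feature table (carbohydrates, ricinoleic acid, acetone) with scikit-learn; the data, feature list, and number of factors are invented placeholders rather than the study's measurements.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        # Hypothetical integrated feature table: rows = toxin preparations,
        # columns = [mannose, arabinose, fucose, ricinoleic acid, acetone].
        X = rng.lognormal(mean=0.0, sigma=0.5, size=(24, 5))

        X_scaled = StandardScaler().fit_transform(X)   # put features on a common scale
        fa = FactorAnalysis(n_components=2, random_state=0)
        scores = fa.fit_transform(X_scaled)            # sample scores used to group preparations
        print(fa.components_)                          # loadings: which features drive each factor
        print(scores[:3])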

  13. Comparison of Three Methods for Wind Turbine Capacity Factor Estimation

    PubMed Central

    Ditkovich, Y.; Kuperman, A.

    2014-01-01

    Three approaches to calculating the capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first “quasiexact” approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. On the other hand, the second “analytic” approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other figures of merit of wind turbine performance may be derived based on the analytical approach. The third “approximate” approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by employing the three approaches are very close, supporting the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation. PMID:24587755
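
    The sketch below illustrates the "quasiexact" numerical route described above: integrate a turbine power curve against a wind-speed density and divide by the rated power. Both the simplified cubic power curve and the Rayleigh wind statistics are assumptions for illustration, not a manufacturer curve or the case-study data.

        import numpy as np

        def power_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0e6):
            """Simplified fixed-speed power curve [W]: cubic rise between cut-in
            and rated speed, constant up to cut-out (illustrative only)."""
            v = np.asarray(v, dtype=float)
            p = np.where((v >= v_cut_in) & (v < v_rated),
                         p_rated * (v**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3), 0.0)
            return np.where((v >= v_rated) & (v <= v_cut_out), p_rated, p)

        def capacity_factor_rayleigh(v_mean, p_rated=2.0e6):
            """Numerical capacity factor for a Rayleigh wind-speed density with mean v_mean."""
            v = np.linspace(0.0, 40.0, 4001)
            dv = v[1] - v[0]
            sigma = v_mean * np.sqrt(2.0 / np.pi)        # Rayleigh scale from the mean speed
            pdf = (v / sigma**2) * np.exp(-v**2 / (2.0 * sigma**2))
            return np.sum(power_curve(v, p_rated=p_rated) * pdf) * dv / p_rated

        print(f"CF at 7 m/s mean wind: {capacity_factor_rayleigh(7.0):.3f}")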

  14. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul

    2015-12-01

    In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is only governed by the converged second harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.

  15. Approximate method of variational Bayesian matrix factorization/completion with sparse prior

    NASA Astrophysics Data System (ADS)

    Kawasumi, Ryota; Takeda, Koujin

    2018-05-01

    We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume a Laplace prior for the sparse matrix to take matrix sparsity into consideration. Then we use several approximations for the derivation of a matrix factorization/completion solution. Using our solution, we also numerically evaluate the performance of sparse matrix reconstruction in matrix factorization, and of missing-element recovery in matrix completion.
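
    For orientation, the sketch below shows a plain alternating-least-squares completion of the observed entries of a low-rank matrix. It is a conventional baseline under a Gaussian-noise, L2-regularized assumption, not the paper's variational Bayes solution with a Laplace (sparse) prior.

        import numpy as np

        def als_complete(M, mask, rank=3, lam=0.1, n_iter=50, seed=0):
            """Alternating least squares on the observed entries (mask == True)."""
            rng = np.random.default_rng(seed)
            n, m = M.shape
            U = rng.normal(size=(n, rank))
            V = rng.normal(size=(m, rank))
            reg = lam * np.eye(rank)
            for _ in range(n_iter):
                for i in range(n):                       # update row factors
                    idx = mask[i]
                    if idx.any():
                        Vi = V[idx]
                        U[i] = np.linalg.solve(Vi.T @ Vi + reg, Vi.T @ M[i, idx])
                for j in range(m):                       # update column factors
                    idx = mask[:, j]
                    if idx.any():
                        Uj = U[idx]
                        V[j] = np.linalg.solve(Uj.T @ Uj + reg, Uj.T @ M[idx, j])
            return U @ V.T

        # Small synthetic test: rank-3 matrix with roughly 40% of entries missing.
        rng = np.random.default_rng(1)
        truth = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 20))
        mask = rng.random(truth.shape) > 0.4
        est = als_complete(np.where(mask, truth, 0.0), mask)
        print(np.sqrt(np.mean((est[~mask] - truth[~mask])**2)))  # RMSE on missing entries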

  16. Progressive damage, fracture predictions and post mortem correlations for fiber composites

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Lewis Research Center is involved in the development of computational mechanics methods for predicting the structural behavior and response of composite structures. In conjunction with the analytical methods development, experimental programs including post failure examination are conducted to study various factors affecting composite fracture such as laminate thickness effects, ply configuration, and notch sensitivity. Results indicate that the analytical capabilities incorporated in the CODSTRAN computer code are effective in predicting the progressive damage and fracture of composite structures. In addition, the results being generated are establishing a data base which will aid in the characterization of composite fracture.

  17. Constraints on the [Formula: see text] form factor from analyticity and unitarity.

    PubMed

    Ananthanarayan, B; Caprini, I; Kubis, B

    Motivated by the discrepancies noted recently between the theoretical calculations of the electromagnetic [Formula: see text] form factor and certain experimental data, we investigate this form factor using analyticity and unitarity in a framework known as the method of unitarity bounds. We use a QCD correlator computed on the spacelike axis by operator product expansion and perturbative QCD as input, and exploit unitarity and the positivity of its spectral function, including the two-pion contribution that can be reliably calculated using high-precision data on the pion form factor. From this information, we derive upper and lower bounds on the modulus of the [Formula: see text] form factor in the elastic region. The results provide a significant check on those obtained with standard dispersion relations, confirming the existence of a disagreement with experimental data in the region around [Formula: see text].

  18. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  19. Measuring Parenting Practices among Parents of Elementary School-Age Youth

    ERIC Educational Resources Information Center

    Randolph, Karen A.; Radey, Melissa

    2011-01-01

    Objectives: The objective of this study is to establish the factor structure of the Alabama Parenting Questionnaire (APQ), an instrument designed to measure parenting practices among parents of elementary school children. Methods: Exploratory (EFA) and confirmatory factor analytic (CFA) procedures are used to validate the APQ with 790 parents of…

  20. The Effect of Sample Size on Parametric and Nonparametric Factor Analytical Methods

    ERIC Educational Resources Information Center

    Kalkan, Ömür Kaya; Kelecioglu, Hülya

    2016-01-01

    Linear factor analysis models used to examine constructs underlying the responses are not very suitable for dichotomous or polytomous response formats. The associated problems cannot be eliminated by polychoric or tetrachoric correlations in place of the Pearson correlation. Therefore, we considered parameters obtained from the NOHARM and FACTOR…

  1. Properties of water as a novel stationary phase in capillary gas chromatography.

    PubMed

    Gallant, Jonathan A; Thurbide, Kevin B

    2014-09-12

    A novel method of separation that uses water as a stationary phase in capillary gas chromatography (GC) is presented. By applying a water phase to the interior walls of a stainless-steel capillary, good separations were obtained for a large variety of analytes in this format. It was found that carrier gas humidification and backpressure were key factors in promoting stable operation over time at various temperatures. For example, with these measures in place, the retention time of an acetone test analyte was found to decrease by only 44 s after 100 min of operation at a column temperature of 100 °C. In terms of efficiency, under optimum conditions the method produced about 20,000 plates for an acetone test analyte on a 250 μm i.d. × 30 m column. Overall, retention on the stationary phase generally increased with analyte water solubility and polarity, but correlated only weakly with analyte volatility. Conversely, non-polar analytes were essentially unretained in the system. These features were applied to the direct analysis of different polar analytes in both aqueous and organic samples. Results suggest that this approach could provide an interesting alternative tool in capillary GC separations. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Folding-paper-based preconcentrator for low dispersion of preconcentration plug

    NASA Astrophysics Data System (ADS)

    Lee, Kyungjae; Yoo, Yong Kyoung; Han, Sung Il; Lee, Junwoo; Lee, Dohwan; Kim, Cheonjung; Lee, Jeong Hoon

    2017-12-01

    Ion concentration polarization (ICP) has been widely studied for collecting target analytes as it is a powerful preconcentrator method employed for charged molecules. Although the method is quite robust, simple, cheap, and yields a high preconcentration factor, a major hurdle to be addressed is extracting the preconcentrated samples without dispersing the plug. This study investigates a 3D folding-paper-based ICP preconcentrator for preconcentrated plug extraction without the dispersion effect. The ICP preconcentrator is printed on a cellulose paper with pre-patterned hydrophobic wax. To extract and isolate the preconcentration plug with minimal dispersion, a 3D pop-up structure is fabricated via water drain, and a preconcentration factor of 300-fold for 10 min is achieved. By optimizing factors such as the electric field, water drain, and sample volume, the technique was enhanced by facilitating sample preconcentration and isolation, thereby providing the possibility for extensive applications in analytical devices such as lateral flow assays and FTAR cards.

  3. Optimal homotopy asymptotic method for flow and heat transfer of a viscoelastic fluid in an axisymmetric channel with a porous wall.

    PubMed

    Mabood, Fazle; Khan, Waqar A; Ismail, Ahmad Izani Md

    2013-01-01

    In this article, an approximate analytical solution of flow and heat transfer for a viscoelastic fluid in an axisymmetric channel with a porous wall is presented. The solution is obtained through the use of a powerful method known as the Optimal Homotopy Asymptotic Method (OHAM). We obtained the approximate analytical solution for dimensionless velocity and temperature for various parameters. The influence of different parameters on dimensionless velocity, temperature, friction factor, and rate of heat transfer is presented graphically. We also compared our solution with those obtained by other methods, and it is found that the OHAM solution is better than those of the other methods considered. This shows that OHAM is reliable for solving strongly nonlinear problems in heat transfer phenomena.

  4. Optimal Homotopy Asymptotic Method for Flow and Heat Transfer of a Viscoelastic Fluid in an Axisymmetric Channel with a Porous Wall

    PubMed Central

    Mabood, Fazle; Khan, Waqar A.; Ismail, Ahmad Izani

    2013-01-01

    In this article, an approximate analytical solution of flow and heat transfer for a viscoelastic fluid in an axisymmetric channel with a porous wall is presented. The solution is obtained through the use of a powerful method known as the Optimal Homotopy Asymptotic Method (OHAM). We obtained the approximate analytical solution for dimensionless velocity and temperature for various parameters. The influence of different parameters on dimensionless velocity, temperature, friction factor, and rate of heat transfer is presented graphically. We also compared our solution with those obtained by other methods, and it is found that the OHAM solution is better than those of the other methods considered. This shows that OHAM is reliable for solving strongly nonlinear problems in heat transfer phenomena. PMID:24376722

  5. Construct validity of the Beck Hopelessness Scale (BHS) among university students: A multitrait-multimethod approach.

    PubMed

    Boduszek, Daniel; Dhingra, Katie

    2016-10-01

    There is considerable debate about the underlying factor structure of the Beck Hopelessness Scale (BHS) in the literature. An established view is that it reflects a unitary or bidimensional construct in nonclinical samples. There are, however, reasons to reconsider this conceptualization. Based on previous factor analytic findings from both clinical and nonclinical studies, the aim of the present study was to compare 16 competing models of the BHS in a large university student sample (N = 1,733). Sixteen distinct factor models were specified and tested using conventional confirmatory factor analytic techniques, along with confirmatory bifactor modeling. A 3-factor solution with 2 method effects (i.e., a multitrait-multimethod model) provided the best fit to the data. The reliability of this conceptualization was supported by McDonald's coefficient omega and the differential relationships exhibited between the 3 hopelessness factors ("feelings about the future," "loss of motivation," and "future expectations") and measures of goal disengagement, brooding rumination, suicide ideation, and suicide attempt history. The results provide statistical support for a 3-trait and 2-method factor model, and hence the 3 dimensions of hopelessness theorized by Beck. The theoretical and methodological implications of these findings are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Sweeping as a multistep enrichment process in micellar electrokinetic chromatography: the retention factor gradient effect.

    PubMed

    El-Awady, Mohamed; Pyell, Ute

    2013-07-05

    The application of a new method developed for the assessment of sweeping efficiency in MEKC under homogeneous and inhomogeneous electric field conditions is extended to the general case, in which the distribution coefficient and the electric conductivity of the analyte in the sample zone and in the separation compartment are varied. As test analytes, p-hydroxybenzoates (parabens), benzamide and some aromatic amines are studied under MEKC conditions with SDS as the anionic surfactant. We show that in the general case - in contrast to the classical description - the obtainable enrichment factor is not only dependent on the retention factor of the analyte in the sample zone but also dependent on the retention factor in the background electrolyte (BGE). It is shown that in the general case sweeping is inherently a multistep focusing process. We describe an additional focusing/defocusing step (the retention factor gradient effect, RFGE) quantitatively by extending the classical equation employed for the description of the sweeping process with an additional focusing/defocusing factor. The validity of this equation is demonstrated experimentally (and theoretically) under variation of the organic solvent content (in the sample and/or the BGE), the type of organic solvent (in the sample and/or the BGE), the electric conductivity (in the sample), the pH (in the sample), and the concentration of surfactant (in the BGE). It is shown that very high enrichment factors can be obtained if the pH in the sample zone makes it possible to convert the analyte into a charged species that has a high distribution coefficient with respect to an oppositely charged micellar phase, while the pH in the BGE enables separation of the neutral species under moderate retention factor conditions. Copyright © 2013 Elsevier B.V. All rights reserved.
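
    For reference, the classical sweeping description mentioned above predicts that an injected analyte zone of length L is compressed to roughly L/(1 + k), i.e. a concentration enrichment of about (1 + k), where k is the retention factor in the sample zone; the additional focusing/defocusing factor introduced by the RFGE is not reproduced in this minimal sketch.

        def classical_sweep(l_injected_mm, k_sample):
            """Classical sweeping estimate: swept zone length and enrichment factor
            for an analyte with retention factor k_sample in the sample zone."""
            l_swept = l_injected_mm / (1.0 + k_sample)
            enrichment = 1.0 + k_sample
            return l_swept, enrichment

        for k in (5.0, 20.0, 100.0):
            print(k, classical_sweep(10.0, k))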

  7. Evaluation of the risk factors associated with rectal neuroendocrine tumors: a big data analytic study from a health screening center.

    PubMed

    Pyo, Jeung Hui; Hong, Sung Noh; Min, Byung-Hoon; Lee, Jun Haeng; Chang, Dong Kyung; Rhee, Poong-Lyul; Kim, Jae Jun; Choi, Sun Kyu; Jung, Sin-Ho; Son, Hee Jung; Kim, Young-Ho

    2016-12-01

    Rectal neuroendocrine tumor (NET) is the most common NET in Asia. The risk factors associated with rectal NETs are unclear because of the overall low incidence rate of these tumors and the associated difficulty in conducting large epidemiological studies on rare cases. The aim of this study was to exploit the benefits of big data analytics to assess the risk factors associated with rectal NET. A retrospective case-control study was conducted, including 102 patients with histologically confirmed rectal NETs and 52,583 healthy controls who underwent screening colonoscopy at the Center for Health Promotion of the Samsung Medical Center in Korea between January 2002 and December 2012. Information on different risk factors was collected and logistic regression analysis applied to identify predictive factors. Four factors were significantly associated with rectal NET: higher levels of cholesterol [odds ratio (OR) = 1.007, 95 % confidence interval (CI), 1.001-1.013, p = 0.016] and ferritin (OR = 1.502, 95 % CI, 1.167-1.935, p = 0.002), presence of metabolic syndrome (OR = 1.768, 95 % CI, 1.071-2.918, p = 0.026), and family history of cancer among first-degree relatives (OR = 1.664, 95 % CI, 1.019-2.718, p = 0.042). The findings of our study demonstrate the benefits of using big data analytics for research and clinical risk factor studies. Specifically, in this study, this analytical method was applied to identify higher levels of serum cholesterol and ferritin, metabolic syndrome, and family history of cancer as factors that may explain the increasing incidence and prevalence of rectal NET.
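
    A minimal sketch of this kind of logistic-regression risk-factor screen is given below, using statsmodels to turn fitted coefficients into odds ratios with 95% confidence intervals. The variable names and the simulated cohort are placeholders, not the actual screening-center data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 5000
        df = pd.DataFrame({
            "cholesterol": rng.normal(190, 35, n),
            "ferritin_log": rng.normal(4.5, 0.6, n),
            "metabolic_syndrome": rng.integers(0, 2, n),
            "family_history": rng.integers(0, 2, n),
        })
        # Simulate a rare binary outcome loosely driven by two of the covariates.
        logit = -6 + 0.007 * df["cholesterol"] + 0.4 * df["ferritin_log"]
        df["rectal_net"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

        X = sm.add_constant(df[["cholesterol", "ferritin_log",
                                "metabolic_syndrome", "family_history"]])
        fit = sm.Logit(df["rectal_net"], X).fit(disp=0)
        odds_ratios = np.exp(fit.params)            # OR per unit increase in each factor
        or_ci = np.exp(fit.conf_int())              # 95% confidence intervals
        print(pd.concat([odds_ratios.rename("OR"), or_ci], axis=1))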

  8. Pre-analytical Factors Influence Accuracy of Urine Spot Iodine Assessment in Epidemiological Surveys.

    PubMed

    Doggui, Radhouene; El Ati-Hellal, Myriam; Traissac, Pierre; El Ati, Jalila

    2018-03-26

    Urinary iodine concentration (UIC) is commonly used to assess iodine status of subjects in epidemiological surveys. As pre-analytical factors are an important source of measurement error and studies about this phase are scarce, our objective was to assess the influence of urine sampling conditions on UIC, i.e., whether the child ate breakfast or not, urine void rank of the day, and time span between last meal and urine collection. A nationwide, two-stage, stratified, cross-sectional study including 1560 children (6-12 years) was performed in 2012. UIC was determined by the Sandell-Kolthoff method. Pre-analytical factors were assessed from children's mothers by using a questionnaire. Associations between iodine status and pre-analytical factors were adjusted for one another and socio-economic characteristics by multivariate linear and multinomial regression models (RPR: relative prevalence ratios). Skipping breakfast prior to morning urine sampling decreased UIC by 40 to 50 μg/L, and the proportion of UIC < 100 μg/L was higher among children who had skipped breakfast (RPR = 3.2 [1.0-10.4]). In unadjusted analyses, UIC was lower among children sampled more than 5 h after their last meal. UIC decreased with rank of urine void (e.g., first vs. second, P < 0.001); also, the proportion of UIC < 100 μg/L was greater among 4th rank samples (vs. second: RPR = 2.1 [1.1-4.0]). Subjects' breakfast status and urine void rank should be accounted for when assessing iodine status. Providing recommendations to standardize pre-analytical factors is a key step toward improving accuracy and comparability of survey results for assessing iodine status from spot urine samples. These recommendations have to be evaluated by future research.

  9. Comprehensive characterizations of nanoparticle biodistribution following systemic injection in mice

    NASA Astrophysics Data System (ADS)

    Liao, Wei-Yin; Li, Hui-Jing; Chang, Ming-Yao; Tang, Alan C. L.; Hoffman, Allan S.; Hsieh, Patrick C. H.

    2013-10-01

    Various nanoparticle (NP) properties such as shape and surface charge have been studied in an attempt to enhance the efficacy of NPs in biomedical applications. When trying to determine the precise biodistribution of NPs within the target organs, the analytical method becomes the determining factor in measuring the precise quantity of distributed NPs. High performance liquid chromatography (HPLC) represents a more powerful tool in quantifying NP biodistribution compared to conventional analytical methods such as an in vivo imaging system (IVIS). This, in part, is due to better curve linearity offered by HPLC than IVIS. Furthermore, HPLC enables us to fully analyze each gram of NPs present in the organs without compromising the signals and the depth-related sensitivity as is the case in IVIS measurements. In addition, we found that changing physiological conditions improved large NP (200-500 nm) distribution in brain tissue. These results reveal the importance of selecting analytic tools and physiological environment when characterizing NP biodistribution for future nanoscale toxicology, therapeutics and diagnostics. Electronic supplementary information (ESI) available. See DOI: 10.1039/c3nr03954d

  10. Enhancement in the sensitivity of microfluidic enzyme-linked immunosorbent assays through analyte preconcentration.

    PubMed

    Yanagisawa, Naoki; Dutta, Debashis

    2012-08-21

    In this Article, we describe a microfluidic enzyme-linked immunosorbent assay (ELISA) method whose sensitivity can be substantially enhanced through preconcentration of the target analyte around a semipermeable membrane. The reported preconcentration has been accomplished in our current work via electrokinetic means allowing a significant increase in the amount of captured analyte relative to nonspecific binding in the trapping/detection zone. Upon introduction of an enzyme substrate into this region, the rate of generation of the ELISA reaction product (resorufin) was observed to increase by over a factor of 200 for the sample and 2 for the corresponding blank compared to similar assays without analyte trapping. Interestingly, in spite of nonuniformities in the amount of captured analyte along the surface of our analysis channel, the measured fluorescence signal in the preconcentration zone increased linearly with time over an enzyme reaction period of 30 min and at a rate that was proportional to the analyte concentration in the bulk sample. In our current study, the reported technique has been shown to reduce the smallest detectable concentration of the tumor marker CA 19-9 and Blue Tongue Viral antibody by over 2 orders of magnitude compared to immunoassays without analyte preconcentration. When compared to microwell based ELISAs, the reported microfluidic approach not only yielded a similar improvement in the smallest detectable analyte concentration but also reduced the sample consumption in the assay by a factor of 20 (5 μL versus 100 μL).

  11. Adapting Surface Ground Motion Relations to Underground conditions: A case study for the Sudbury Neutrino Observatory in Sudbury, Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Babaie Mahani, A.; Eaton, D. W.

    2013-12-01

    Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs to enable estimation of amplitudes in the subsurface. We use a semi-analytic approach along with finite-difference simulations of ground-motion amplitudes for surface and underground motions. Spectral ratios of underground to surface motions are used to calculate the correction factors. Two predictive methods are used. The first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data. The finite-difference method, however, has the capability to simulate ground motion ratios more accurately than the semi-analytic approach.
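
    A bare-bones version of the spectral-ratio step is sketched below: take the ratio of lightly smoothed amplitude spectra of co-recorded underground and surface waveforms. The synthetic traces are placeholders for the borehole and surface seismograms, and no windowing or instrument correction is included.

        import numpy as np

        def smoothed_spectral_ratio(underground, surface, dt, nsmooth=11):
            """Amplitude-spectrum ratio |U(f)| / |S(f)| of two co-recorded traces,
            each spectrum smoothed with a simple moving average."""
            freqs = np.fft.rfftfreq(len(surface), dt)
            spec_u = np.abs(np.fft.rfft(underground))
            spec_s = np.abs(np.fft.rfft(surface))
            kernel = np.ones(nsmooth) / nsmooth
            ratio = (np.convolve(spec_u, kernel, mode="same")
                     / np.convolve(spec_s, kernel, mode="same"))
            return freqs, ratio

        dt = 0.01                                    # 100 samples per second
        t = np.arange(0.0, 60.0, dt)
        rng = np.random.default_rng(3)
        surface = rng.normal(size=t.size)            # toy surface record
        underground = 0.5 * surface + 0.1 * rng.normal(size=t.size)   # toy borehole record
        freqs, ratio = smoothed_spectral_ratio(underground, surface, dt)
        print(freqs[:5], ratio[:5])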

  12. Environmental and human monitoring of Americium-241 utilizing extraction chromatography and α-spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, S.J.; Hensley, C.A.; Armenta, C.E.

    1997-03-01

    Recent developments in extraction chromatography have simplified the separation of americium from complex matrices in preparation for α-spectroscopy relative to traditional methods. Here we present results of procedures developed/adapted for water, air, and bioassay samples with less than 1 g of inorganic residue. Prior analytical methods required the use of a complex, multistage procedure for separation of americium from these matrices. The newer, simplified procedure requires only a single 2 mL extraction chromatographic separation for isolation of Am and lanthanides from other components of the sample. This method has been implemented on an extensive variety of 'real' environmental and bioassay samples from the Los Alamos area, and consistently reliable and accurate results with appropriate detection limits have been obtained. The new method increases analytical throughput by a factor of ~2 and decreases environmental hazards from acid and mixed-waste generation relative to the prior technique. Analytical accuracy, reproducibility, and reliability are also significantly improved over the more complex and laborious method used previously. 24 refs., 2 figs., 2 tabs.

  13. Green Chemistry Metrics with Special Reference to Green Analytical Chemistry.

    PubMed

    Tobiszewski, Marek; Marć, Mariusz; Gałuszka, Agnieszka; Namieśnik, Jacek

    2015-06-12

    The concept of green chemistry is widely recognized in chemical laboratories. To properly measure an environmental impact of chemical processes, dedicated assessment tools are required. This paper summarizes the current state of knowledge in the field of development of green chemistry and green analytical chemistry metrics. The diverse methods used for evaluation of the greenness of organic synthesis, such as eco-footprint, E-Factor, EATOS, and Eco-Scale are described. Both the well-established and recently developed green analytical chemistry metrics, including NEMI labeling and analytical Eco-scale, are presented. Additionally, this paper focuses on the possibility of the use of multivariate statistics in evaluation of environmental impact of analytical procedures. All the above metrics are compared and discussed in terms of their advantages and disadvantages. The current needs and future perspectives in green chemistry metrics are also discussed.

  14. Measurement of company effectiveness using analytic network process method

    NASA Astrophysics Data System (ADS)

    Goran, Janjić; Zorana, Tanasić; Borut, Kosec

    2017-07-01

    The sustainable development of an organisation is monitored through the organisation's performance, which beforehand incorporates all stakeholders' requirements in its strategy. The strategic management concept enables organisations to monitor and evaluate their effectiveness along with efficiency by monitoring the implementation of set strategic goals. In the process of monitoring and measuring effectiveness, an organisation can draw on multiple-criteria decision-making methods. This study uses the analytic network process (ANP) method to define the weight factors of the mutual influences of all the important elements of an organisation's strategy. The calculation of an organisation's effectiveness is based on the weight factors and the degree of fulfilment of the goal values of the strategic map measures. New business conditions change the relative importance of certain elements of an organisation's business for competitive advantage on the market, and increasing emphasis is placed on non-material resources when selecting the organisation's most important measures.

  15. Shape design of an optimal comfortable pillow based on the analytic hierarchy process method

    PubMed Central

    Liu, Shuo-Fang; Lee, Yann-Long; Liang, Jung-Chin

    2011-01-01

    Objective Few studies have analyzed the shapes of pillows. The purpose of this study was to investigate the relationship between pillow shape design and subjective comfort level for asymptomatic subjects. Methods Four basic pillow design factors were selected on the basis of a literature review and recombined into 8 configurations for ranking degrees of comfort. The data were analyzed by the analytic hierarchy process method to determine the most comfortable pillow. Results Pillow number 4 was the most comfortable pillow in terms of head, neck, shoulder, height, and overall comfort. The design of pillow number 4 combined features of standard, cervical, and shoulder pillows. A prototype of this pillow was developed on the basis of the study results for designing future pillow shapes. Conclusions This study investigated the comfort level of particular users and the redesign features of a pillow. A deconstruction analysis would simplify the process of determining the most comfortable pillow design and aid designers in designing pillows for groups. PMID:22654680

  16. Method development and qualification of capillary zone electrophoresis for investigation of therapeutic monoclonal antibody quality.

    PubMed

    Suba, Dávid; Urbányi, Zoltán; Salgó, András

    2016-10-01

    Capillary electrophoresis techniques are widely used in analytical biotechnology. Different electrophoretic techniques are very adequate tools to monitor size- and charge heterogeneities of protein drugs. Method descriptions and development studies of capillary zone electrophoresis (CZE) have been described in the literature. Most of them are performed based on the classical one-factor-at-a-time (OFAT) approach. In this study a very simple method development approach is described for capillary zone electrophoresis: a "two-phase-four-step" approach is introduced which allows a rapid, iterative method development process and can be a good platform for CZE method development. In every step the current analytical target profile and an appropriate control strategy were established to monitor the current stage of development. A very good platform was established to investigate intact and digested protein samples. A commercially available monoclonal antibody was chosen as the model protein for the method development study. The CZE method was qualified after the development process and the results are presented. The analytical system stability was represented by the calculated RSD% values of the area percentage and migration time of the selected peaks (<0.8% and <5%) during the intermediate precision investigation. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Deducing the form factors for shear used in the calculus of the displacements based on strain energy methods. Mathematical approach for currently used shapes

    NASA Astrophysics Data System (ADS)

    Constantinescu, E.; Oanta, E.; Panait, C.

    2017-08-01

    The paper presents an initial study concerning the form factors for shear for a rectangular and for a circular cross section, using an analytical method and a numerical study. The numerical study divides the cross section into small areas and uses the definitions directly in order to compute the corresponding integrals. Accurate values of the form factors increase the accuracy of the displacements computed by the use of strain energy methods. The knowledge resulting from this study will be used for several directions of development: calculus of the form factors for a ring-type cross section with a variable ratio of the inner and outer diameters, calculus of the geometrical characteristics of an inclined circular segment and, using a Boolean algebra that operates with geometrical shapes, of an inclined circular ring segment. These shapes may be used to analytically define the geometrical model of a complex composite section, i.e. a ship hull cross section. The corresponding calculus relations are also useful for the development of customized design commands in commercial CAD applications. The paper is a result of the authors' long-run development of original computer-based instruments in engineering.
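
    To illustrate the strip-wise numerical evaluation described above, the sketch below computes the shear form factor k = (A / I^2) * integral of S(y)^2 / b(y) dy for a rectangular section by dividing it into thin horizontal strips, recovering the classical value 6/5; a circular section can be handled the same way with b(y) = 2*sqrt(R^2 - y^2), which yields 10/9. The dimensions used are arbitrary.

        import numpy as np

        def shear_form_factor_rectangle(b=0.1, h=0.2, n=20000):
            """Strip-wise evaluation of the shear form factor for a b x h rectangle,
            where S(y) is the first moment of the area above level y."""
            edges = np.linspace(-h / 2.0, h / 2.0, n + 1)
            y = 0.5 * (edges[:-1] + edges[1:])          # strip midpoints
            dy = h / n
            A = b * h
            I = b * h**3 / 12.0
            S = b * (h**2 / 4.0 - y**2) / 2.0           # first moment of area above y
            integral = np.sum(S**2 / b) * dy            # width b(y) = b is constant here
            return A / I**2 * integral

        print(shear_form_factor_rectangle())            # ~1.2000, i.e. 6/5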

  18. Application of person-centered analytic methodology in longitudinal research: exemplars from the Women's Health Initiative Clinical Trial data.

    PubMed

    Zaslavsky, Oleg; Cochrane, Barbara B; Herting, Jerald R; Thompson, Hilaire J; Woods, Nancy F; Lacroix, Andrea

    2014-02-01

    Despite the variety of available analytic methods, longitudinal research in nursing has been dominated by use of a variable-centered analytic approach. The purpose of this article is to present the utility of person-centered methodology using a large cohort of American women 65 and older enrolled in the Women's Health Initiative Clinical Trial (N = 19,891). Four distinct trajectories of energy/fatigue scores were identified. Levels of fatigue were closely linked to age, socio-demographic factors, comorbidities, health behaviors, and poor sleep quality. These findings were consistent regardless of the methodological framework. Finally, we demonstrated that energy/fatigue levels predicted future hospitalization in non-disabled elderly. Person-centered methods provide unique opportunities to explore and statistically model the effects of longitudinal heterogeneity within a population. © 2013 Wiley Periodicals, Inc.

  19. Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods

    ERIC Educational Resources Information Center

    Merkle, Edgar C.; Zeileis, Achim

    2013-01-01

    The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…

  20. Development, Validation, and Interlaboratory Evaluation of a Quantitative Multiplexing Method To Assess Levels of Ten Endogenous Allergens in Soybean Seed and Its Application to Field Trials Spanning Three Growing Seasons.

    PubMed

    Hill, Ryan C; Oman, Trent J; Wang, Xiujuan; Shan, Guomin; Schafer, Barry; Herman, Rod A; Tobias, Rowel; Shippar, Jeff; Malayappan, Bhaskar; Sheng, Li; Xu, Austin; Bradshaw, Jason

    2017-07-12

    As part of the regulatory approval process in Europe, comparison of endogenous soybean allergen levels between genetically engineered (GE) and non-GE plants has been requested. A quantitative multiplex analytical method using tandem mass spectrometry was developed and validated to measure 10 potential soybean allergens from soybean seed. The analytical method was implemented at six laboratories to demonstrate the robustness of the method and further applied to three soybean field studies across multiple growing seasons (including 21 non-GE soybean varieties) to assess the natural variation of allergen levels. The results show that environmental factors contribute more than genetic factors to the large variation in allergen abundance (2- to 50-fold between environmental replicates), as well as a large contribution of Gly m 5 and Gly m 6 to the total allergen profile, calling into question the scientific rationale for measurement of endogenous allergen levels between GE and non-GE varieties in the safety assessment.

  1. Use of experimental design in the investigation of stir bar sorptive extraction followed by ultra-high-performance liquid chromatography-tandem mass spectrometry for the analysis of explosives in water samples.

    PubMed

    Schramm, Sébastien; Vailhen, Dominique; Bridoux, Maxime Cyril

    2016-02-12

    A method for the sensitive quantification of trace amounts of organic explosives in water samples was developed by using stir bar sorptive extraction (SBSE) followed by liquid desorption and ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). The proposed method was developed and optimized using a statistical design of experiments approach. Use of experimental designs allowed a complete study of 10 factors and 8 analytes including nitro-aromatics, amino-nitro-aromatics and nitric esters. The liquid desorption study was performed using a full factorial experimental design followed by a kinetic study. Four different variables were tested here: the liquid desorption mode (stirring or sonication), the chemical nature of the stir bar (PDMS or PDMS-PEG), the composition of the liquid desorption phase and, finally, the volume of solvent used for the liquid desorption. On the other hand, the SBSE extraction study was performed using a Doehlert design. SBSE extraction conditions such as extraction time profiles, sample volume, modifier addition, and acetic acid addition were examined. After optimization of the experimental parameters, sensitivity was improved by a factor of 5-30, depending on the compound studied, due to the enrichment factors reached using the SBSE method. Limits of detection were at the ng/L level for all analytes studied. Reproducibility of the extraction with different stir bars was close to the reproducibility of the analytical method (RSD between 4 and 16%). Extractions in various water sample matrices (spring, mineral and underground water) showed enrichment similar to that in ultrapure water, revealing very low matrix effects. Copyright © 2016 Elsevier B.V. All rights reserved.
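
    The full factorial design used for the liquid-desorption study (four variables at two levels, 2^4 = 16 runs) can be enumerated as sketched below; the level labels are placeholders standing in for the actual modes, stir-bar phases, solvent compositions and volumes tested.

        from itertools import product

        # Hypothetical coding of the four liquid-desorption variables (placeholders).
        factors = {
            "desorption_mode": ("stirring", "sonication"),
            "stir_bar_phase": ("PDMS", "PDMS-PEG"),
            "desorption_solvent": ("solvent_A", "solvent_B"),
            "solvent_volume": ("low", "high"),
        }

        runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
        print(len(runs))        # 2**4 = 16 runs in the full factorial design
        for run in runs[:3]:
            print(run)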

  2. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-01

    Parallel factor analysis is a widely used method to extract qualitative and quantitative information on the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering will influence the results of parallel factor analysis. Many methods of eliminating scattering have been proposed, each with its advantages and disadvantages. The combination of symmetrical subtraction and interpolated values is discussed here, where combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results yields better concentration predictions for all the components.
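
    A generic version of the interpolation step is sketched below: the first-order Rayleigh scatter band (emission wavelengths close to the excitation wavelength) of an EEM is masked and refilled by linear interpolation along the emission axis. This covers only the interpolation half, under an assumed band width; the symmetrical-subtraction part and the paper's combination scheme are not reproduced.

        import numpy as np

        def interpolate_scatter(eem, ex_wavelengths, em_wavelengths, width_nm=15.0):
            """Replace the first-order Rayleigh scatter band (em ~ ex) of an
            excitation-emission matrix (rows: excitation, columns: emission)
            with values linearly interpolated along the emission axis."""
            out = eem.astype(float).copy()
            for i, ex in enumerate(ex_wavelengths):
                band = np.abs(em_wavelengths - ex) <= width_nm    # scatter region
                keep = ~band
                out[i, band] = np.interp(em_wavelengths[band],
                                         em_wavelengths[keep], out[i, keep])
            return out

        ex = np.arange(250.0, 401.0, 5.0)
        em = np.arange(260.0, 551.0, 2.0)
        rng = np.random.default_rng(4)
        eem = rng.random((ex.size, em.size))             # placeholder EEM data
        print(interpolate_scatter(eem, ex, em).shape)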

  3. Application of analytical methods in authentication and adulteration of honey.

    PubMed

    Siddiqui, Amna Jabbar; Musharraf, Syed Ghulam; Choudhary, M Iqbal; Rahman, Atta-Ur-

    2017-02-15

    Honey is synthesized from flower nectar and it is famous for its tremendous therapeutic potential since ancient times. Many factors influence the basic properties of honey including the nectar-providing plant species, bee species, geographic area, and harvesting conditions. Quality and composition of honey is also affected by many other factors, such as overfeeding of bees with sucrose, harvesting prior to maturity, and adulteration with sugar syrups. Due to the complex nature of honey, it is often challenging to authenticate the purity and quality by using common methods such as physicochemical parameters and more specialized procedures need to be developed. This article reviews the literature (between 2000 and 2016) on the use of analytical techniques, mainly NMR spectroscopy, for authentication of honey, its botanical and geographical origin, and adulteration by sugar syrups. NMR is a powerful technique and can be used as a fingerprinting technique to compare various samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. PNEUMATIC MICROVALVE FOR ELECTROKINETIC SAMPLE PRECONCENTRATION AND CAPILLARY ELECTROPHORESIS INJECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cong, Yongzheng; Rausch, Sarah J.; Geng, Tao

    2014-10-27

    Here we show that a closed pneumatic microvalve on a PDMS chip can serve as a semipermeable membrane under an applied potential, enabling current to pass through while blocking the passage of charged analytes. Enrichment of both anionic and cationic species has been demonstrated, and concentration factors of ~70 have been achieved in just 8 s. Once analytes are concentrated, the valve is briefly opened and the sample is hydrodynamically injected onto an integrated microchip or capillary electrophoresis (CE) column. In contrast to existing preconcentration approaches, the membrane-based method described here enables both rapid analyte concentration and high-resolution separations.

  5. Single-Cell Detection of Secreted Aβ and sAPPα from Human IPSC-Derived Neurons and Astrocytes.

    PubMed

    Liao, Mei-Chen; Muratore, Christina R; Gierahn, Todd M; Sullivan, Sarah E; Srikanth, Priya; De Jager, Philip L; Love, J Christopher; Young-Pearse, Tracy L

    2016-02-03

    Secreted factors play a central role in normal and pathological processes in every tissue in the body. The brain is composed of a highly complex milieu of different cell types and few methods exist that can identify which individual cells in a complex mixture are secreting specific analytes. By identifying which cells are responsible, we can better understand neural physiology and pathophysiology, more readily identify the underlying pathways responsible for analyte production, and ultimately use this information to guide the development of novel therapeutic strategies that target the cell types of relevance. We present here a method for detecting analytes secreted from single human induced pluripotent stem cell (iPSC)-derived neural cells and have applied the method to measure amyloid β (Aβ) and soluble amyloid precursor protein-alpha (sAPPα), analytes central to Alzheimer's disease pathogenesis. Through these studies, we have uncovered the dynamic range of secretion profiles of these analytes from single iPSC-derived neuronal and glial cells and have molecularly characterized subpopulations of these cells through immunostaining and gene expression analyses. In examining Aβ and sAPPα secretion from single cells, we were able to identify previously unappreciated complexities in the biology of APP cleavage that could not otherwise have been found by studying averaged responses over pools of cells. This technique can be readily adapted to the detection of other analytes secreted by neural cells, which would have the potential to open new perspectives into human CNS development and dysfunction. We have established a technology that, for the first time, detects secreted analytes from single human neurons and astrocytes. We examine secretion of the Alzheimer's disease-relevant factors amyloid β (Aβ) and soluble amyloid precursor protein-alpha (sAPPα) and present novel findings that could not have been observed without a single-cell analytical platform. First, we identify a previously unappreciated subpopulation that secretes high levels of Aβ in the absence of detectable sAPPα. Further, we show that multiple cell types secrete high levels of Aβ and sAPPα, but cells expressing GABAergic neuronal markers are overrepresented. Finally, we show that astrocytes are competent to secrete high levels of Aβ and therefore may be a significant contributor to Aβ accumulation in the brain. Copyright © 2016 the authors 0270-6474/16/361730-17$15.00/0.

  6. Signal Enhancement in HPLC/Micro-Coil NMR Using Automated Column Trapping

    PubMed Central

    Djukovic, Danijel; Liu, Shuhui; Henry, Ian; Tobias, Brian; Raftery, Daniel

    2008-01-01

    A new HPLC-NMR system is described that performs analytical separation, pre-concentration, and NMR spectroscopy in rapid succession. The central component of our method is the online pre-concentration sequence that improves the match between the post-column analyte peak volume and the micro-coil NMR detection volume. Separated samples are collected on a C18 guard column with a mobile phase composed of 90% D2O/10% acetonitrile-D3, and back-flushed to the NMR micro-coil probe with 90% acetonitrile-D3/10% D2O. In order to assess the performance of our unit, we separated a standard mixture of 1 mM ibuprofen, naproxen, and phenylbutazone using a commercially available C18 analytical column. The S/N measurements from the NMR acquisitions indicated that we achieved signal enhancement factors of up to 10.4 (±1.2)-fold. Furthermore, we observed that pre-concentration factors increased as the injected amount of analyte decreased. The highest concentration enrichment of 14.7 (±2.2)-fold was attained when injecting a 100 μL solution of 0.2 mM (~4 μg) ibuprofen. PMID:17037915

  7. Prioritizing the causes and correctors of smoking towards the solution of tobacco free future using enhanced analytic hierarchy process

    NASA Astrophysics Data System (ADS)

    Halim, Tisya Farida Abdul; Sapiri, Hasimah; Abidin, Norhaslinda Zainal

    2017-11-01

    This paper presents a method for prioritizing the causes and correctors of smoking habits in Malaysia. In order to identify the driving forces that cause smoking habits (initiation factors) and their correctors (anti-smoking strategies), a method called the Enhanced Analytic Hierarchy Process (EAHP) is employed. The EAHP has an advantage over the normal Analytic Hierarchy Process (AHP) in its capability to eliminate inconsistency (consistency ratio > 0.1) in evaluating experts' judgments. Based on the Theory of Triadic Influence, the identified initiation factors were personal beliefs and values, personal psychological factors, family influence, psychosocial influence, culture and legislation. There are five anti-smoking strategies that have been implemented in Malaysia, namely packaging and labelling, pricing and taxation, advertising, smoke-free legislation, and education and support. Findings from the study show that psychosocial influence was considered the main initiation factor of smoking among Malaysian adults, and that mass media campaigns were the most effective anti-smoking strategy for reducing smoking prevalence. The implementation of effective anti-smoking strategies should be considered towards the endgame of tobacco by the year 2040 as outlined by the government. The findings in turn can provide insights and guidelines for researchers as well as policy makers to assess the effectiveness of anti-smoking strategies towards better policy planning decisions in the future.
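
    The core AHP computation that the EAHP builds on, deriving priority weights from a reciprocal pairwise-comparison matrix and checking Saaty's consistency ratio against the 0.1 threshold, can be sketched as below. The judgment matrix shown is illustrative, not the experts' actual data, and the EAHP-specific inconsistency-elimination step is not reproduced.

        import numpy as np

        def ahp_priorities(pairwise):
            """Principal-eigenvector priorities and Saaty consistency ratio (CR)
            for a reciprocal pairwise-comparison matrix; CR > 0.1 is conventionally
            treated as inconsistent."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()                                  # priority weights
            ci = (eigvals[k].real - n) / (n - 1)          # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]  # Saaty's random index
            return w, ci / ri

        # Illustrative 3x3 judgment matrix (not the study's expert data).
        A = [[1.0, 3.0, 5.0],
             [1.0 / 3.0, 1.0, 2.0],
             [1.0 / 5.0, 1.0 / 2.0, 1.0]]
        weights, cr = ahp_priorities(A)
        print(weights, cr)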

  8. Surrogate analyte approach for quantitation of endogenous NAD(+) in human acidified blood samples using liquid chromatography coupled with electrospray ionization tandem mass spectrometry.

    PubMed

    Liu, Liling; Cui, Zhiyi; Deng, Yuzhong; Dean, Brian; Hop, Cornelis E C A; Liang, Xiaorong

    2016-02-01

    A high-performance liquid chromatography tandem mass spectrometry (LC-MS/MS) assay for the quantitative determination of NAD(+) in human whole blood using a surrogate analyte approach was developed and validated. Human whole blood was acidified using 0.5N perchloric acid at a ratio of 1:3 (v:v, blood:perchloric acid) during sample collection. A 25 μL aliquot of acidified blood was extracted using a protein precipitation method, and the resulting extracts were analyzed using reverse-phase chromatography and positive electrospray ionization mass spectrometry. (13)C5-NAD(+) was used as the surrogate analyte for the authentic analyte, NAD(+). The standard curve, ranging from 0.250 to 25.0 μg/mL in acidified human blood for (13)C5-NAD(+), was fitted to a 1/x² weighted linear regression model. The LC-MS/MS response between the surrogate analyte and the authentic analyte at the same concentration was obtained before and after the batch run. This response factor was not applied when determining the NAD(+) concentration from the (13)C5-NAD(+) standard curve since the percent difference was less than 5%. The precision and accuracy of the LC-MS/MS assay based on the five analytical QC levels were well within the acceptance criteria of both the FDA and EMA guidance for bioanalytical method validation. The average extraction recovery of (13)C5-NAD(+) was 94.6% across the curve range. The matrix factor was 0.99 for both high and low QC, indicating minimal ion suppression or enhancement. The validated assay was used to measure the baseline level of NAD(+) in 29 male and 21 female human subjects. This assay was also used to study the circadian variation of the endogenous level of NAD(+) in 10 human subjects. Copyright © 2015 Elsevier B.V. All rights reserved.
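
    A 1/x^2 weighted linear fit of the kind used for the surrogate-analyte calibration curve can be written in a few lines, as sketched below; the calibration points are invented for illustration and do not correspond to the validated assay's data.

        import numpy as np

        def weighted_linear_fit(x, y, weights):
            """Weighted least-squares line y = a*x + b minimizing sum(w * (y - a*x - b)^2)."""
            X = np.column_stack([x, np.ones_like(x)])
            W = np.sqrt(weights)
            coeffs, *_ = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)
            return coeffs                                 # (slope, intercept)

        # Hypothetical calibration points spanning 0.25-25 ug/mL (invented responses).
        x = np.array([0.25, 0.5, 1.0, 2.5, 5.0, 10.0, 25.0])
        y = np.array([0.012, 0.026, 0.049, 0.13, 0.24, 0.51, 1.22])
        slope, intercept = weighted_linear_fit(x, y, weights=1.0 / x**2)
        print(slope, intercept)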

  9. Silicon carbide nanomaterial as a coating for solid-phase microextraction.

    PubMed

    Tian, Yu; Feng, Juanjuan; Wang, Xiuqin; Sun, Min; Luo, Chuannan

    2018-01-26

    Silicon carbide has excellent properties, such as corrosion resistance, high strength, oxidation resistance, and high-temperature stability. Based on these properties, silicon carbide was coated onto a stainless-steel wire and used as a solid-phase microextraction coating, and polycyclic aromatic hydrocarbons were employed as model analytes. Using gas chromatography, some important factors that affect the extraction efficiency were optimized one by one, and an analytical method was established. The analytical method showed wide linear ranges (0.1-30, 0.03-30, and 0.01-30 μg/L) with satisfactory correlation coefficients (0.9922-0.9966) and low detection limits (0.003-0.03 μg/L). To investigate the practical application of the method, rainwater and cigarette ash aqueous solution were collected as real samples for extraction and detection. The results indicate that silicon carbide has excellent potential for application in the field of solid-phase microextraction. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Visual Analytic Judgments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    2017-05-08

    Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists’ subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: i) relationships among scientists’ familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  11. Solvent microextraction-flame atomic absorption spectrometry (SME-FAAS) for determination of ultratrace amounts of cadmium in meat and fish samples.

    PubMed

    Goudarzi, Nasser

    2009-02-11

    A simple, low-cost and highly sensitive method based on solvent microextraction (SME) for separation/preconcentration combined with flame atomic absorption spectrometry (FAAS) was proposed for the determination of ultratrace amounts of cadmium in meat and fish samples. The analytical procedure involved the formation of a hydrophobic complex by mixing the analyte solution with an ammonium pyrrolidinedithiocarbamate (APDC) solution. Under suitable conditions, the cadmium-APDC complex partitioned into the micro organic phase, and thus separation of the analyte from the matrix was achieved. Under optimal chemical and instrumental conditions, a detection limit (3 sigma) of 0.8 ng L(-1) and an enrichment factor of 93 were achieved. The relative standard deviation of the method was found to be 2.2% for Cd. The interference effects of some anions and cations were also investigated. The developed method was applied to the determination of trace Cd in meat and fish samples.

  12. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    NASA Astrophysics Data System (ADS)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE approach is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for the tooth crack stiffness calculation and for gear tooth crack fault diagnosis.

  13. Determination of uranium isotopes in environmental samples by anion exchange in sulfuric and hydrochloric acid media.

    PubMed

    Popov, L

    2016-09-01

    A method for the determination of uranium isotopes in various environmental samples is presented. The major advantages of the method are the low cost of the analysis, high radiochemical yields and good decontamination factors from the matrix elements and from natural and man-made radionuclides. The separation and purification of uranium are attained by adsorption with a strong base anion exchange resin in sulfuric and hydrochloric acid media. Uranium is electrodeposited on a stainless steel disk and measured by alpha spectrometry. The analytical method has been applied to the determination of concentrations of uranium isotopes in mineral, spring and tap waters from Bulgaria. The analytical quality was checked by analyzing reference materials. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Photonic crystal-based optical biosensor: a brief investigation

    NASA Astrophysics Data System (ADS)

    Divya, J.; Selvendran, S.; Sivanantha Raja, A.

    2018-06-01

    In this paper, a two-dimensional photonic crystal biosensor for medical applications based on two waveguides and a nanocavity was explored with different shoulder-coupled nanocavity structures. The most important biosensor parameters, such as the sensitivity and quality factor, can be significantly improved. By injecting an analyte into a sensing hole, the refractive index of the hole is changed; this refractive-index biosensor senses the change and shifts its operating wavelength accordingly. The transmission characteristics of light in the biosensor under different refractive indices, corresponding to changes in the analyte concentration, are analyzed by the finite-difference time-domain method. The band gap of each structure is designed and observed using the plane wave expansion method. The proposed structures are designed for an analyte refractive index range of about 1-1.5 in an optical wavelength range of 1.250-1.640 µm. Accordingly, an improved sensitivity of 136.6 nm/RIU and a quality factor as high as 3915 are achieved. An important feature of this structure is its very small dimensions. Such a combination of attributes makes the designed structure a promising element for label-free biosensing applications.
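
    The two figures of merit quoted above follow from the standard definitions for refractive-index sensors: sensitivity is the wavelength shift per refractive-index unit, and the quality factor is the resonance wavelength divided by its linewidth. A small sketch with illustrative numbers (chosen only to reproduce the reported order of magnitude, not taken from the paper):

    ```python
    # Standard figures of merit for a refractive-index photonic-crystal sensor;
    # the wavelength shift and linewidth below are illustrative placeholders.
    def sensitivity_nm_per_riu(delta_lambda_nm, delta_n):
        return delta_lambda_nm / delta_n          # S = d(lambda) / d(n)

    def quality_factor(lambda_res_nm, fwhm_nm):
        return lambda_res_nm / fwhm_nm            # Q = lambda_res / FWHM

    print(sensitivity_nm_per_riu(delta_lambda_nm=68.3, delta_n=0.5))   # 136.6 nm/RIU
    print(quality_factor(lambda_res_nm=1550.0, fwhm_nm=0.396))         # ~3.9e3
    ```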

  15. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.

  16. Analytical Expressions for the Mixed-Order Kinetics Parameters of TL Glow Peaks Based on the two Heating Rates Method.

    PubMed

    Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad

    2018-03-24

    The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) of glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) compared with the first-order case. Hence, the original two-heating-rates expression for E can be used with excellent accuracy in the case of MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated using an analytical expression instead of the graphical representation between the geometrical factor and the MO parameter given by the existing peak shape methods. The applicability of the derived expressions to real samples was demonstrated for the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
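
    Since the MO correction is only a few meV, the classical first-order two-heating-rates expression, E = k*Tm1*Tm2/(Tm1 - Tm2) * ln[(beta1/beta2)*(Tm2/Tm1)^2], can be used directly, as the abstract notes. A brief sketch with hypothetical peak temperatures (not the Li2B4O7:Mn data):

    ```python
    import math

    K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

    def activation_energy_two_heating_rates(beta1, beta2, tm1, tm2):
        """Classical first-order two-heating-rates estimate of E (eV).

        beta1, beta2: heating rates (K/s), with beta1 > beta2;
        tm1, tm2: corresponding glow-peak maximum temperatures (K).
        """
        return (K_B_EV * tm1 * tm2 / (tm1 - tm2)) * math.log((beta1 / beta2) * (tm2 / tm1) ** 2)

    # Hypothetical peak temperatures measured at two heating rates
    print(activation_energy_two_heating_rates(beta1=2.0, beta2=0.5, tm1=485.0, tm2=470.0))
    ```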

  17. Comparison of analytical and predictive methods for water, protein, fat, sugar, and gross energy in marine mammal milk.

    PubMed

    Oftedal, O T; Eisert, R; Barrell, G K

    2014-01-01

    Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. Some other alternative methods (low-temperature drying for water determination; Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Semi-Analytic Reconstruction of Flux in Finite Volume Formulations

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2006-01-01

    Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm, but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid-converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.

  19. Analytical Method Validation of High-Performance Liquid Chromatography and Stability-Indicating Study of Medroxyprogesterone Acetate Intravaginal Sponges

    PubMed Central

    Batrawi, Nidal; Wahdan, Shorouq; Abualhasan, Murad

    2017-01-01

    Medroxyprogesterone acetate is widely used in veterinary medicine as an intravaginal dosage form for the synchronization of the breeding cycle in ewes and goats. The main goal of this study was to develop a reverse-phase high-performance liquid chromatography method for the quantification of medroxyprogesterone acetate in veterinary vaginal sponges. A single high-performance liquid chromatography/UV isocratic run was used for the analytical assay of the active ingredient medroxyprogesterone. The chromatographic system consisted of a reverse-phase C18 column as the stationary phase and a mixture of 60% acetonitrile and 40% potassium dihydrogen phosphate buffer as the mobile phase; the pH was adjusted to 5.6. The method was validated according to the International Council for Harmonisation (ICH) guidelines. Forced degradation studies were also performed to evaluate the stability-indicating properties and specificity of the method. Medroxyprogesterone was eluted at 5.9 minutes. The linearity of the method was confirmed in the range of 0.0576 to 0.1134 mg/mL (R2 > 0.999). The limit of quantification was shown to be 3.9 µg/mL. Precision and accuracy were found to be %RSD < 0.2 and 98% to 102%, respectively. A medroxyprogesterone capacity factor of 2.1, a tailing factor of 1.03, and a resolution of 3.9 were obtained, in accordance with ICH guidelines. Based on the obtained results, a rapid, precise, accurate, sensitive, and cost-effective analysis procedure was proposed for the quantitative determination of medroxyprogesterone in vaginal sponges. This analytical method is the only available method to analyse medroxyprogesterone in a veterinary intravaginal dosage form. PMID:28469407

  20. Identification and quantification of carbamate pesticides in dried lime tree flowers by means of excitation-emission molecular fluorescence and parallel factor analysis when quenching effect exists.

    PubMed

    Rubio, L; Ortiz, M C; Sarabia, L A

    2014-04-11

    A non-separative, fast and inexpensive spectrofluorimetric method based on the second-order calibration of excitation-emission fluorescence matrices (EEMs) was proposed for the determination of carbaryl, carbendazim and 1-naphthol in dried lime tree flowers. The trilinearity property of three-way data was used to handle the intrinsic fluorescence of lime flowers and the difference in the fluorescence intensity of each analyte. It also made it possible to identify each analyte unequivocally. Trilinearity of the data tensor guarantees the uniqueness of the solution obtained through parallel factor analysis (PARAFAC), so the factors of the decomposition match up with the analytes. In addition, an experimental procedure was proposed to identify, with three-way data, the quenching effect produced by the fluorophores of the lime flowers. This procedure also enabled the selection of an adequate dilution of the lime flowers extract to minimize the quenching effect so that the three analytes could be quantified. Finally, the analytes were determined using the standard addition method for a calibration whose standards were chosen with a D-optimal design. The three analytes were unequivocally identified by the correlation between the pure spectra and the PARAFAC excitation and emission spectral loadings. The trueness was established by the accuracy line "calculated concentration versus added concentration" in all cases. Better decision limit values (CCα), at x0 = 0 with the probability of a false positive fixed at 0.05, were obtained for the calibration performed in pure solvent: 2.97 μg L(-1) for 1-naphthol, 3.74 μg L(-1) for carbaryl and 23.25 μg L(-1) for carbendazim. The CCα values for the second calibration carried out in matrix were 1.61, 4.34 and 51.75 μg L(-1), respectively; while the values obtained considering only the pure samples as the calibration set were 2.65, 8.61 and 28.7 μg L(-1), respectively. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    PubMed

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at the University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting classifier which averaged the results of multinomial logistic regression and voting feature intervals classifiers. Of 42 final model risk factors, discharge disposition, discretized age, and indicators of anemia were the most significant. This model achieved a c-statistic of 86.8%. The proposed three-step analytical approach enhanced predictive model performance for CHF readmissions. It could potentially be leveraged to improve predictive model performance in other areas of clinical medicine.

  2. Influence of a strong sample solvent on analyte dispersion in chromatographic columns.

    PubMed

    Mishra, Manoranjan; Rana, Chinar; De Wit, A; Martin, Michel

    2013-07-05

    In chromatographic columns, when the eluting strength of the sample solvent is larger than that of the carrier liquid, a deformation of the analyte zone occurs because its frontal part moves at a relatively high velocity, due to a low retention factor in the sample solvent, while the rear part of the analyte zone is more retained in the carrier liquid and hence moves at a lower velocity. The influence of this solvent strength effect on the separation of analytes is studied here theoretically using a mass balance model describing the spatio-temporal evolution of the eluent, the sample solvent and the analyte. The viscosities of the sample solvent and the carrier fluid are assumed to be the same (i.e. no viscous fingering effects are taken into account). A linear adsorption isotherm with a retention factor that depends on the local composition of the liquid phase is considered. The governing equations are solved numerically using a Fourier spectral method, and parametric studies are performed to analyze the effect of the various governing parameters on the dispersion and skewness of the analyte zone. The distortion of this zone is found to depend strongly on the difference in eluting strength between the mobile phase and the sample solvent as well as on the sample volume. Copyright © 2013 Elsevier B.V. All rights reserved.
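
    For readers unfamiliar with the numerical approach, the sketch below shows a Fourier spectral treatment of a 1-D periodic advection-diffusion equation. It is a deliberately simplified stand-in (constant retention factor, no coupling to the sample-solvent concentration, explicit Euler time stepping) rather than the full mass balance model solved in the paper.

    ```python
    import numpy as np

    # Minimal 1-D periodic advection-diffusion solver with Fourier spectral derivatives.
    L, n = 10.0, 256
    x = np.linspace(0.0, L, n, endpoint=False)
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)    # spectral wavenumbers

    u, D, k_ret = 1.0, 1e-3, 2.0                     # carrier velocity, diffusivity, retention factor
    v = u / (1.0 + k_ret)                            # migration velocity of the retained zone
    c = np.exp(-((x - 2.0) ** 2) / 0.1)              # initial analyte zone (Gaussian)

    dt, nsteps = 1e-3, 2000
    for _ in range(nsteps):
        c_hat = np.fft.fft(c)
        dcdx = np.real(np.fft.ifft(1j * kx * c_hat))
        d2cdx2 = np.real(np.fft.ifft(-(kx ** 2) * c_hat))
        c = c + dt * (-v * dcdx + D * d2cdx2)        # explicit Euler time step

    print(x[np.argmax(c)])                           # zone centre after migration
    ```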

  3. The Impact of Institutional Factors on the Relationship Between High School Mathematics Curricula and College Mathematics Course-Taking and Achievement

    ERIC Educational Resources Information Center

    Harwell, Michael

    2013-01-01

    Meta-analytic methods were used to examine the moderating effect of institutional factors on the relationship between high school mathematics curricula and college mathematics course-taking and achievement from a sample of 32 colleges. The findings suggest that the impact of curriculum on college mathematics outcomes is not generally moderated by…

  4. Theory of ground state factorization in quantum cooperative systems.

    PubMed

    Giampaolo, Salvatore M; Adesso, Gerardo; Illuminati, Fabrizio

    2008-05-16

    We introduce a general analytic approach to the study of factorization points and factorized ground states in quantum cooperative systems. The method allows us to determine rigorously the existence, location, and exact form of separable ground states in a large variety of, generally nonexactly solvable, spin models belonging to different universality classes. The theory applies to translationally invariant systems, irrespective of spatial dimensionality, and for spin-spin interactions of arbitrary range.

  5. Antecedents of obesity - analysis, interpretation, and use of longitudinal data.

    PubMed

    Gillman, Matthew W; Kleinman, Ken

    2007-07-01

    The obesity epidemic causes misery and death. Most epidemiologists accept the hypothesis that characteristics of the early stages of human development have lifelong influences on obesity-related health outcomes. Unfortunately, there is a dearth of data of sufficient scope and individual history to help unravel the associations of prenatal, postnatal, and childhood factors with adult obesity and health outcomes. Here the authors discuss analytic methods, the interpretation of models, and the use to which such rare and valuable data may be put in developing interventions to combat the epidemic. For example, analytic methods such as quantile and multinomial logistic regression can describe the effects on body mass index range rather than just its mean; structural equation models may allow comparison of the contributions of different factors at different periods in the life course. Interpretation of the data and model construction is complex, and it requires careful consideration of the biologic plausibility and statistical interpretation of putative causal factors. The goals of discovering modifiable determinants of obesity during the prenatal, postnatal, and childhood periods must be kept in sight, and analyses should be built to facilitate them. Ultimately, interventions in these factors may help prevent obesity-related adverse health outcomes for future generations.

  6. Analysis of Thickness and Quality factor of a Double Paddle Oscillator at Room Temperature.

    PubMed

    Shakeel, Hamza; Metcalf, Thomas H; Pomeroy, J M

    2016-01-01

    In this paper, we evaluate the quality (Q) factor and the resonance frequency of a double paddle oscillator (DPO) with different thicknesses using analytical, computational and experimental methods. The study is carried out for the 2nd anti-symmetric resonance mode, which provides extremely high experimental Q factors on the order of 10(5). The results show that both the Q factor and the resonance frequency of a DPO increase with thickness at room temperature.

  7. Handling Missing Data in Educational Research Using SPSS

    ERIC Educational Resources Information Center

    Cheema, Jehanzeb

    2012-01-01

    This study looked at the effect of a number of factors such as the choice of analytical method, the handling method for missing data, sample size, and proportion of missing data, in order to evaluate the effect of missing data treatment on accuracy of estimation. In order to accomplish this a methodological approach involving simulated data was…

  8. The analytical solution for drug delivery system with nonhomogeneous moving boundary condition

    NASA Astrophysics Data System (ADS)

    Saudi, Muhamad Hakimi; Mahali, Shalela Mohd; Harun, Fatimah Noor

    2017-08-01

    This paper discusses the development and the analytical solution of a mathematical model of drug release from a swelling delivery device. The mathematical model is represented by a one-dimensional advection-diffusion equation with a nonhomogeneous moving boundary condition. The solution procedure consists of three major steps. Firstly, a steady-state solution method is applied to transform the nonhomogeneous moving boundary condition into a homogeneous boundary condition. Secondly, the Landau transformation technique is applied, which removes the advection term from the system of equations and transforms the moving boundary condition into a fixed boundary condition. Thirdly, the separation of variables method is used to find the analytical solution of the resulting initial boundary value problem. The results show that the swelling rate of the delivery device and the drug release rate are influenced by the value of the growth factor r.

  9. Factor structure of the Halstead-Reitan Neuropsychological Battery for children: a brief report supplement.

    PubMed

    Ross, Sylvia An; Allen, Daniel N; Goldstein, Gerald

    2014-01-01

    The Halstead-Reitan Neuropsychological Battery (HRNB) is the first factor-analyzed neuropsychological battery and consists of three batteries for young children, older children, and adults. Halstead's original factor analysis extracted four factors from the adult version of the battery, which were the basis for his theory of biological intelligence. These factors were called Central Integrative Field, Abstraction, Power, and Directional. Following this original analysis, Reitan's additions to the battery, and the development of the child versions of the test, factor-analytic research has continued. An introduction and the adult literature are reviewed in Ross, Allen, and Goldstein (in press). In this supplemental article, factor-analytic studies of the HRNB with children are reviewed. It is concluded that factor analysis of the HRNB or Reitan-Indiana Neuropsychological Battery with children does not replicate the extensiveness of the adult literature, although there is some evidence that when the traditional battery for older children is used, the factor structure is similar to what is found in adult studies. Reitan's changes to the battery appear to have added factors including language and sensory-perceptual factors. When other tests and scoring methods are used in addition to the core battery, differing solutions are produced.

  10. Uncertainty Estimation for the Determination of Ni, Pb and Al in Natural Water Samples by SPE-ICP-OES

    NASA Astrophysics Data System (ADS)

    Ghorbani, A.; Farahani, M. Mahmoodi; Rabbani, M.; Aflaki, F.; Waqifhosain, Syed

    2008-01-01

    In this paper we propose an uncertainty estimation for the analytical results obtained from the determination of Ni, Pb and Al by solid-phase extraction and inductively coupled plasma optical emission spectrometry (SPE-ICP-OES). The procedure is based on the retention of the analytes in the form of 8-hydroxyquinoline (8-HQ) complexes on a mini column of XAD-4 resin and subsequent elution with nitric acid. The influence of various analytical parameters, including the amount of solid phase, pH, elution factors (concentration and volume of the eluting solution), volume of sample solution, and amount of ligand, on the extraction efficiency of the analytes was investigated. To estimate the uncertainty of the analytical results obtained, we propose assessing trueness by employing spiked samples. Two types of bias are calculated in the assessment of trueness: a proportional bias and a constant bias. We applied a nested design to calculate the proportional bias and the Youden method to calculate the constant bias. The proportional bias is calculated from spiked samples: the concentration found is plotted against the concentration added, and the slope of the standard addition curve is an estimate of the method recovery. The estimated average method recovery in Karaj river water is (1.004±0.0085) for Ni, (0.999±0.010) for Pb and (0.987±0.008) for Al.
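
    The recovery estimate described above is simply the slope of the found-versus-added line from spiked samples. A minimal sketch with hypothetical spiking data (note that the constant bias in the paper is estimated separately via the Youden method, not from this intercept):

    ```python
    import numpy as np

    # Slope of the "found vs. added" line as the recovery estimate; data are hypothetical.
    added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # spike levels, ug/L
    found = np.array([1.1, 6.0, 11.2, 21.0, 40.9])   # measured concentrations, ug/L

    slope, intercept = np.polyfit(added, found, 1)
    print(f"estimated recovery (proportional bias factor): {slope:.3f}")
    print(f"intercept: {intercept:.3f} ug/L")
    ```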

  11. Human factors/ergonomics implications of big data analytics: Chartered Institute of Ergonomics and Human Factors annual lecture.

    PubMed

    Drury, Colin G

    2015-01-01

    In recent years, advances in sensor technology, connectedness and computational power have come together to produce huge datasets. The treatment and analysis of these datasets is known as big data analytics (BDA) or, somewhat relatedly, data mining. Fields allied to human factors/ergonomics (HFE), e.g. statistics, have developed computational methods to derive meaningful, actionable conclusions from these databases. This paper examines BDA, often characterised by volume, velocity and variety, giving examples of successful BDA use. This examination provides context by considering examples of using BDA on human data, using BDA in HFE studies, and studies of how people perform BDA. Significant issues for HFE are the reliance of BDA on correlation rather than hypotheses and theory, the ethics of BDA and the use of HFE in data visualisation.

  12. Standardization and optimization of fluorescence in situ hybridization (FISH) for HER-2 assessment in breast cancer: A single center experience.

    PubMed

    Bogdanovska-Todorovska, Magdalena; Petrushevska, Gordana; Janevska, Vesna; Spasevska, Liljana; Kostadinova-Kunovska, Slavica

    2018-05-20

    Accurate assessment of human epidermal growth factor receptor 2 (HER-2) is crucial in selecting patients for targeted therapy. Commonly used methods for HER-2 testing are immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH). Here we presented the implementation, optimization and standardization of two FISH protocols using breast cancer samples and assessed the impact of pre-analytical and analytical factors on HER-2 testing. Formalin fixed paraffin embedded (FFPE) tissue samples from 70 breast cancer patients were tested for HER-2 using PathVysion™ HER-2 DNA Probe Kit and two different paraffin pretreatment kits, Vysis/Abbott Paraffin Pretreatment Reagent Kit (40 samples) and DAKO Histology FISH Accessory Kit (30 samples). The concordance between FISH and IHC results was determined. Pre-analytical and analytical factors (i.e., fixation, baking, digestion, and post-hybridization washing) affected the efficiency and quality of hybridization. The overall hybridization success in our study was 98.6% (69/70); the failure rate was 1.4%. The DAKO pretreatment kit was more time-efficient and resulted in more uniform signals that were easier to interpret, compared to the Vysis/Abbott kit. The overall concordance between IHC and FISH was 84.06%, kappa coefficient 0.5976 (p < 0.0001). The greatest discordance (82%) between IHC and FISH was observed in IHC 2+ group. A standardized FISH protocol for HER-2 assessment, with high hybridization efficiency, is necessary due to variability in tissue processing and individual tissue characteristics. Differences in the pre-analytical and analytical steps can affect the hybridization quality and efficiency. The use of DAKO pretreatment kit is time-saving and cost-effective.

  13. Evaluation of generalized degrees of freedom for sparse estimation by replica method

    NASA Astrophysics Data System (ADS)

    Sakata, A.

    2016-12-01

    We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.

  14. A symmetrical subtraction combined with interpolated values for eliminating scattering from fluorescence EEM data.

    PubMed

    Xu, Jing; Liu, Xiaofei; Wang, Yutian

    2016-08-05

    Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from a fluorescence emission-excitation matrix containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis. Many methods for eliminating scattering have been proposed, each with its own advantages and disadvantages. Here, the combination of symmetrical subtraction and interpolated values is discussed, where "combination" refers both to combining the results and to combining the methods. Nine methods were used for comparison. The results show that combining the results yields better concentration predictions for all the components. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Comparison of two microextraction methods based on solidification of floating organic droplet for the determination of multiclass analytes in river water samples by liquid chromatography tandem mass spectrometry using Central Composite Design.

    PubMed

    Asati, Ankita; Satyanarayana, G N V; Patel, Devendra K

    2017-09-01

    Two liquid-liquid microextraction methods based on low-density organic solvents, namely vortex-assisted liquid-liquid microextraction based on solidification of a floating organic droplet (VALLME-SFO) and dispersive liquid-liquid microextraction based on solidification of a floating organic droplet (DLLME-SFO), were compared for the determination of multiclass analytes (pesticides, plasticizers, pharmaceuticals and personal care products) in river water samples by liquid chromatography tandem mass spectrometry (LC-MS/MS). The effects of various experimental parameters on the efficiency of the two methods and their optimum values were studied with the aid of Central Composite Design (CCD) and Response Surface Methodology (RSM). Under optimal conditions, VALLME-SFO was validated in terms of limit of detection, limit of quantification, dynamic linearity range, coefficient of determination, enrichment factor and extraction recovery, for which the respective values were 0.011-0.219 ng mL(-1), 0.035-0.723 ng mL(-1), 0.050-0.500 ng mL(-1), R(2) = 0.992-0.999, 40-56, and 80-106%. When the DLLME-SFO method was validated under optimal conditions, the corresponding values were 0.025-0.377 ng mL(-1), 0.083-1.256 ng mL(-1), 0.100-1.000 ng mL(-1), R(2) = 0.990-0.999, 35-49, and 69-98%, respectively. Interday and intraday precisions were calculated as percent relative standard deviation (%RSD), and the values were ≤15% for both the VALLME-SFO and DLLME-SFO methods. Both methods were successfully applied to the determination of multiclass analytes in river water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
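
    The enrichment factor and extraction recovery reported above follow from the usual microextraction definitions. A small sketch with hypothetical volumes and concentrations:

    ```python
    # Standard microextraction figures of merit; the numbers are hypothetical.
    def enrichment_factor(c_extract, c_initial):
        """EF: analyte concentration in the collected organic drop over the initial aqueous concentration."""
        return c_extract / c_initial

    def extraction_recovery_percent(ef, v_organic_mL, v_sample_mL):
        """ER% = EF * (V_organic / V_sample) * 100."""
        return ef * (v_organic_mL / v_sample_mL) * 100.0

    ef = enrichment_factor(c_extract=45.0, c_initial=1.0)
    print(ef, extraction_recovery_percent(ef, v_organic_mL=0.2, v_sample_mL=10.0))  # 45.0, 90.0%
    ```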

  16. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries.

    PubMed

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-15

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.

  17. A Newton–Krylov method with an approximate analytical Jacobian for implicit solution of Navier–Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    PubMed Central

    Asgharzadeh, Hafez; Borazjani, Iman

    2016-01-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future. PMID:28042172

  18. A Newton-Krylov method with an approximate analytical Jacobian for implicit solution of Navier-Stokes equations on staggered overset-curvilinear grids with immersed boundaries

    NASA Astrophysics Data System (ADS)

    Asgharzadeh, Hafez; Borazjani, Iman

    2017-02-01

    The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate, but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs), to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for the matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against the Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than the NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one to another depends on the flow problem. Furthermore, the implemented methods are fully parallelized with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
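
    As a toy illustration of the Newton-Krylov idea (not the staggered overset-curvilinear, immersed-boundary solver developed in these papers), SciPy's Jacobian-free newton_krylov can solve a small nonlinear boundary-value problem in a few lines:

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # Jacobian-free Newton-Krylov on a 1-D nonlinear (Bratu-type) boundary-value
    # problem, u'' + exp(u) = 0 with u = 0 at both ends, discretized by finite
    # differences; a minimal stand-in for the Navier-Stokes application above.
    n = 64
    h = 1.0 / (n + 1)

    def residual(u):
        """F(u) = u'' + exp(u) on the interior points (zero Dirichlet boundaries)."""
        d2u = np.empty_like(u)
        d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        d2u[0] = (u[1] - 2.0 * u[0]) / h**2      # ghost value 0 at the left boundary
        d2u[-1] = (u[-2] - 2.0 * u[-1]) / h**2   # ghost value 0 at the right boundary
        return d2u + np.exp(u)

    u0 = np.zeros(n)
    sol = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10)
    print(float(sol.max()))                       # peak of the converged solution
    ```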

  19. Exploring the Different Trajectories of Analytical Thinking Ability Factors: An Application of the Second-Order Growth Curve Factor Model

    ERIC Educational Resources Information Center

    Saengprom, Narumon; Erawan, Waraporn; Damrongpanit, Suntonrapot; Sakulku, Jaruwan

    2015-01-01

    The purposes of this study were 1) to compare analytical thinking ability by testing the same sets of students 5 times, and 2) to develop and verify whether students' analytical thinking ability corresponds to a second-order growth curve factor model. The sample comprised 1,093 eighth-grade students. The results revealed that 1) analytical thinking ability scores…

  20. Reflecting Solutions of High Order Elliptic Differential Equations in Two Independent Variables Across Analytic Arcs. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Carleton, O.

    1972-01-01

    Consideration is given specifically to sixth-order elliptic partial differential equations in two independent real variables x, y such that the coefficients of the highest order terms are real constants. It is assumed that the differential operator has distinct characteristics and that it can be factored as a product of second-order operators. By analytically continuing into the complex domain and using the complex characteristic coordinates of the differential equation, it is shown that its solutions, u, may be reflected across analytic arcs on which u satisfies certain analytic boundary conditions. Moreover, a method is given whereby one can determine a region into which the solution is extensible. It is seen that this region of reflection is dependent on the original domain of definition of the solution, the arc and the coefficients of the highest order terms of the equation, and not on any sufficiently small quantities; i.e., the reflection is global in nature. The method employed may be applied to similar differential equations of order 2n.

  1. Thawing as a critical pre-analytical step in the lipidomic profiling of plasma samples: New standardized protocol.

    PubMed

    Pizarro, Consuelo; Arenzana-Rámila, Irene; Pérez-del-Notario, Nuria; Pérez-Matute, Patricia; González-Sáiz, José María

    2016-03-17

    Lipid profiling is a promising tool for the discovery and subsequent identification of biomarkers associated with various diseases. However, data quality is quite dependent on the pre-analytical methods employed. To date, potential confounding factors that may affect lipid metabolite levels after the thawing of plasma for biomarker exploration studies have not been thoroughly evaluated. In this study, by means of experimental design methodology, we performed the first in-depth examination of the ways in which thawing conditions affect lipid metabolite levels. After the optimization stage, we concluded that temperature, sample volume and the thawing method were the determining factors that had to be exhaustively controlled in the thawing process to ensure the quality of biomarker discovery. The best thawing conditions were found to be 4 °C, a 0.25 mL human plasma volume and ultrasound (US) thawing. The newly proposed US thawing method was quicker than the other methods we studied, allowed more features to be identified and increased the signal of the lipids. In view of its speed, efficiency and detectability, the US thawing method appears to be a simple, economical method for the thawing of plasma samples, which could easily be applied in clinical laboratories before lipid profiling studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Profiling Cholinesterase Adduction: A High-Throughput Prioritization Method for Organophosphate Exposure Samples

    PubMed Central

    Carter, Melissa D.; Crow, Brian S.; Pantazides, Brooke G.; Watson, Caroline M.; deCastro, B. Rey; Thomas, Jerry D.; Blake, Thomas A.; Johnson, Rudolph C.

    2017-01-01

    A high-throughput prioritization method was developed for use with a validated confirmatory method detecting organophosphorus nerve agent exposure by immunomagnetic separation-HPLC-MS/MS. A ballistic gradient was incorporated into this analytical method in order to profile unadducted butyrylcholinesterase (BChE) in clinical samples. With a Zhang et al. (1999) Z′-factor of 0.88 ± 0.01 (SD) for control analytes and a Z-factor of 0.25 ± 0.06 (SD) for serum samples, the assay is rated an “excellent assay” for the synthetic peptide controls used and a “doable assay” when used to prioritize clinical samples. Hits, defined as samples containing BChE Ser-198 adducts or no BChE present, were analyzed in a confirmatory method for identification and quantitation of the BChE adduct, if present. The ability to prioritize samples by highest exposure for confirmatory analysis is of particular importance in an exposure to cholinesterase inhibitors such as organophosphorus nerve agents, where a large number of clinical samples may be collected. In an initial blind screen, 67 out of 70 samples were accurately identified, giving an assay accuracy of 96% with no false negatives. The method is the first to provide a high-throughput prioritization assay for profiling adduction of Ser-198 BChE in clinical samples. PMID:23954929
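
    The screening-window coefficient cited above is the Zhang et al. (1999) Z-factor, Z = 1 - 3(sd_p + sd_n)/|mean_p - mean_n|. A minimal sketch with hypothetical control replicates (not the assay's data):

    ```python
    import statistics

    # Zhang et al. (1999) screening-window coefficient; replicate values are hypothetical.
    def z_factor(positives, negatives):
        """Z = 1 - 3*(sd_p + sd_n) / |mean_p - mean_n|."""
        sp, sn = statistics.stdev(positives), statistics.stdev(negatives)
        mp, mn = statistics.mean(positives), statistics.mean(negatives)
        return 1.0 - 3.0 * (sp + sn) / abs(mp - mn)

    controls_pos = [98.1, 101.5, 99.7, 100.4]   # e.g. signal from a positive peptide control
    controls_neg = [2.1, 1.8, 2.5, 2.2]         # e.g. blank/negative control signal
    print(round(z_factor(controls_pos, controls_neg), 2))   # Z >= 0.5 indicates an excellent assay
    ```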

  3. Environmental Risk Assessment System for Phosphogypsum Tailing Dams

    PubMed Central

    Sun, Xin; Tang, Xiaolong; Yi, Honghong; Li, Kai; Zhou, Lianbi; Xu, Xianmang

    2013-01-01

    This paper may be of particular interest to readers as it provides a new environmental risk assessment system for phosphogypsum tailing dams. We studied phosphogypsum tailing dams, considering the characteristics of the pollution source, environmental risk characteristics and evaluation requirements, in order to identify applicable environmental risk assessment methods. Two analytical methods, the analytic hierarchy process (AHP) and fuzzy logic, were used to handle the complexity of the environmental and nonquantitative data. Using our assessment method, different risk factors can be ranked according to their contributions to the environmental risk, thereby allowing the calculation of their relative priorities during decision making. Thus, environmental decision-makers can use this approach to develop alternative management strategies for proposed, ongoing, and completed PG tailing dams. PMID:24382947

  4. Environmental risk assessment system for phosphogypsum tailing dams.

    PubMed

    Sun, Xin; Ning, Ping; Tang, Xiaolong; Yi, Honghong; Li, Kai; Zhou, Lianbi; Xu, Xianmang

    2013-01-01

    This paper may be of particular interest to readers as it provides a new environmental risk assessment system for phosphogypsum tailing dams. We studied phosphogypsum tailing dams, considering the characteristics of the pollution source, environmental risk characteristics and evaluation requirements, in order to identify applicable environmental risk assessment methods. Two analytical methods, the analytic hierarchy process (AHP) and fuzzy logic, were used to handle the complexity of the environmental and nonquantitative data. Using our assessment method, different risk factors can be ranked according to their contributions to the environmental risk, thereby allowing the calculation of their relative priorities during decision making. Thus, environmental decision-makers can use this approach to develop alternative management strategies for proposed, ongoing, and completed PG tailing dams.

  5. Defining dignity in terminally ill cancer patients: a factor-analytic approach.

    PubMed

    Hack, Thomas F; Chochinov, Harvey Max; Hassard, Thomas; Kristjanson, Linda J; McClement, Susan; Harlos, Mike

    2004-10-01

    The construct of 'dignity' is frequently raised in discussions about quality end of life care for terminal cancer patients, and is invoked by parties on both sides of the euthanasia debate. Lacking in this general debate has been an empirical explication of 'dignity' from the viewpoint of cancer patients themselves. The purpose of the present study was to use factor-analytic and regression methods to analyze dignity data gathered from 213 cancer patients having less than 6 months to live. Patients rated their sense of dignity, and completed measures of symptom distress and psychological well-being. The results showed that although the majority of patients had an intact sense of dignity, there were 99 (46%) patients who reported at least some, or occasional loss of dignity, and 16 (7.5%) patients who indicated that loss of dignity was a significant problem. The exploratory factor analysis yielded six primary factors: (1) Pain; (2) Intimate Dependency; (3) Hopelessness/Depression; (4) Informal Support Network; (5) Formal Support Network; and (6) Quality of Life. Subsequent regression analyses of modifiable factors produced a final two-factor (Hopelessness/Depression and Intimate Dependency) model of statistical significance. These results provide empirical support for the dignity model, and suggest that the provision of end of life care should include methods for treating depression, fostering hope, and facilitating functional independence. Copyright 2004 John Wiley & Sons, Ltd.
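
    For readers less familiar with the technique, an exploratory factor analysis of item-level ratings can be sketched in a few lines. The synthetic data and the use of scikit-learn's FactorAnalysis with varimax rotation below are illustrative stand-ins, not the study's instrument, sample, or analysis software.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Generic exploratory-factor-analysis sketch on synthetic item scores.
    rng = np.random.default_rng(0)
    n_patients, n_items, n_factors = 200, 12, 3
    latent = rng.normal(size=(n_patients, n_factors))         # unobserved factor scores
    loadings = rng.normal(size=(n_factors, n_items))           # true item loadings
    scores = latent @ loadings + 0.5 * rng.normal(size=(n_patients, n_items))

    fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
    fa.fit(scores)
    print(fa.components_.round(2))                             # estimated item loadings per factor
    ```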

  6. Global Processing Speed in Children with Low Reading Ability and in Children and Adults with Typical Reading Ability: Exploratory Factor Analytic Models

    ERIC Educational Resources Information Center

    Peter, Beate; Matsushita, Mark; Raskind, Wendy H.

    2011-01-01

    Purpose: To investigate processing speed as a latent dimension in children with dyslexia and children and adults with typical reading skills. Method: Exploratory factor analysis (FA) was based on a sample of multigenerational families, each ascertained through a child with dyslexia. Eleven measures--6 of them timed--represented verbal and…

  7. Glycidyl fatty acid esters in food by LC-MS/MS: method development.

    PubMed

    Becalski, A; Feng, S Y; Lau, B P-Y; Zhao, T

    2012-07-01

    An improved method based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) for the analysis of glycidyl fatty acid esters in oils was developed. The method incorporates stable isotope dilution analysis (SIDA) for quantifying the five target analytes: glycidyl esters of palmitic (C16:0), stearic (C18:0), oleic (C18:1), linoleic (C18:2) and linolenic acid (C18:3). For the analysis, a 10 mg sample of edible oil or fat is dissolved in acetone, spiked with deuterium-labelled analogs of the glycidyl esters and purified by two-step chromatography on C18 and normal silica solid phase extraction (SPE) cartridges using methanol and 5% ethyl acetate in hexane, respectively. If the concentration of analytes is expected to be below 0.5 mg/kg, a 0.5 g sample of oil is pre-concentrated first using a silica column. The dried final extract is re-dissolved in 250 μL of a mixture of methanol/isopropanol (1:1, v/v), 15 μL is injected onto the analytical C18 LC column, and the analytes are eluted with 100% methanol. Detection of the target glycidyl fatty acid esters is accomplished by LC-MS/MS using positive ion atmospheric pressure chemical ionization operating in Multiple Reaction Monitoring mode, monitoring 2 ion transitions for each analyte. The method was tested on replicates of a virgin olive oil which was free of glycidyl esters. The method detection limit was calculated to be in the range of 70-150 μg/kg for each analyte using a 10 mg sample and 1-3 μg/kg using a 0.5 g sample of oil. Average recoveries of the 5 glycidyl esters spiked at 10, 1 and 0.1 mg/kg were in the range 84% to 108%. The major advantages of our method are the use of SIDA for all analytes with commercially available internal standards, and detection limits that are lower by a factor of 5-10 than those of published methods when a 0.5 g sample of oil is used. Additionally, MS/MS mass chromatograms offer greater specificity than liquid chromatography-mass spectrometry operated in selected ion monitoring mode. The method will be applied to the survey of glycidyl fatty acid esters in food products on the Canadian market.
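
    A minimal sketch of the stable isotope dilution arithmetic behind the quantification step, assuming a 1:1 analyte-to-internal-standard response factor; the peak areas, spike amount and function name are invented for illustration.

    ```python
    def sida_concentration(area_analyte, area_istd, istd_amount_ng,
                           sample_mass_g, response_factor=1.0):
        """Stable isotope dilution quantification (illustrative sketch).

        Returns the analyte concentration in ug/kg:
        (analyte area / ISTD area) * ISTD amount (ng) / (response factor * mass (g)),
        since 1 ng/g equals 1 ug/kg.
        """
        analyte_ng = (area_analyte / area_istd) * istd_amount_ng / response_factor
        return analyte_ng / sample_mass_g

    # Invented example: glycidyl oleate peak vs its deuterated analogue
    # in a 10 mg oil sample spiked with 5 ng of internal standard.
    print(round(sida_concentration(4.2e5, 3.9e5, 5.0, 0.010), 1), "ug/kg")
    ```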

  8. Diversity, Neoliberalism and Teacher Education

    ERIC Educational Resources Information Center

    Rodriguez, Arturo; Magill, Kevin Russell

    2016-01-01

    In this essay, we conduct a brief analytical review of teacher preparation programs, which claim to prepare lifelong culturally responsive teachers. Initial evaluation revealed factors limiting program success; they include: deeply embedded dominant ideological assumptions, use of traditional methods to train teachers, inability to understand or…

  9. Validation of the Classroom Behavior Inventory

    ERIC Educational Resources Information Center

    Blunden, Dale; And Others

    1974-01-01

    Factor-analytic methods were used to assess construct validity of the Classroom Behavior Inventory, a scale for rating behaviors associated with hyperactivity. The Classroom Behavior Inventory measures three dimensions of behavior: Hyperactivity, Hostility, and Sociability. Significant concurrent validity was obtained for only one Classroom Behavior…

  10. Analytical modeling of demagnetizing effect in magnetoelectric ferrite/PZT/ferrite trilayers taking into account a mechanical coupling

    NASA Astrophysics Data System (ADS)

    Loyau, V.; Aubert, A.; LoBue, M.; Mazaleyrat, F.

    2017-03-01

    In this paper, we investigate the demagnetizing effect in ferrite/PZT/ferrite magnetoelectric (ME) trilayer composites consisting of commercial PZT discs bonded by epoxy layers to Ni-Co-Zn ferrite discs made by a reactive Spark Plasma Sintering (SPS) technique. ME voltage coefficients (transversal mode) were measured on ferrite/PZT/ferrite trilayer ME samples with different thicknesses or phase volume ratios in order to highlight the influence of the magnetic field penetration governed by these geometrical parameters. Experimental ME coefficients and voltages were compared to analytical calculations using a quasi-static model. Theoretical demagnetizing factors of two magnetic discs that interact together in parallel magnetic structures were derived from an analytical calculation based on a superposition method. These factors were introduced into the ME voltage calculations, which take the demagnetizing effect into account. To fit the experimental results, a mechanical coupling factor was also introduced into the theoretical formula. This reflects the differential strain that exists in the ferrite and PZT layers due to shear effects near the edge of the ME samples and within the bonding epoxy layers. From this study, an optimization of the magnitude of the ME voltage is obtained. Lastly, an analytical calculation of the demagnetizing effect was conducted for layered ME composites containing higher numbers of alternated layers (n ≥ 5). The advantage of such a structure is then discussed.

  11. Effects of pre-analytical variables on flow cytometric diagnosis of canine lymphoma: A retrospective study (2009-2015).

    PubMed

    Comazzi, S; Cozzi, M; Bernardi, S; Zanella, D R; Aresu, L; Stefanello, D; Marconato, L; Martini, V

    2018-02-01

    Flow cytometry (FC) is increasingly being used for immunophenotyping and staging of canine lymphoma. The aim of this retrospective study was to assess pre-analytical variables that might influence the diagnostic utility of FC of lymph node (LN) fine needle aspirate (FNA) specimens from dogs with lymphoproliferative diseases. The study included 987 cases with LN FNA specimens sent for immunophenotyping that were submitted to a diagnostic laboratory in Italy from 2009 to 2015. Cases were grouped into 'diagnostic' and 'non-diagnostic'. Pre-analytical factors analysed by univariate and multivariate analyses were animal-related factors (breed, age, sex, size), operator-related factors (year, season, shipping method, submitting veterinarian) and sample-related factors (type of sample material, cellular concentration, cytological smears, artefacts). The submitting veterinarian, sample material, sample cellularity and artefacts affected the likelihood of having a diagnostic sample. The availability of specimens from different sites and of cytological smears increased the odds of obtaining a diagnostic result. Major artefacts affecting diagnostic utility included poor cellularity and the presence of dead cells. Flow cytometry on LN FNA samples yielded conclusive results in more than 90% of cases with adequate sample quality and sampling conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Scattering from phase-separated vesicles. I. An analytical form factor for multiple static domains

    DOE PAGES

    Heberle, Frederick A.; Anghel, Vinicius N. P.; Katsaras, John

    2015-08-18

    This is the first in a series of studies considering elastic scattering from laterally heterogeneous lipid vesicles containing multiple domains. Unique among biophysical tools, small-angle neutron scattering can in principle give detailed information about the size, shape and spatial arrangement of domains. A general theory for scattering from laterally heterogeneous vesicles is presented, and the analytical form factor for static domains with arbitrary spatial configuration is derived, including a simplification for uniformly sized round domains. The validity of the model, including series truncation effects, is assessed by comparison with simulated data obtained from a Monte Carlo method. Several aspects of the analytical solution for scattering intensity are discussed in the context of small-angle neutron scattering data, including the effect of varying domain size and number, as well as solvent contrast. Finally, the analysis indicates that effects of domain formation are most pronounced when the vesicle's average scattering length density matches that of the surrounding solvent.

  13. A combined analytical formulation and genetic algorithm to analyze the nonlinear damage responses of continuous fiber toughened composites

    NASA Astrophysics Data System (ADS)

    Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.

    2017-09-01

    Continuous fiber-reinforced composites are important materials that have the highest commercialized potential in the upcoming future among existing advanced materials. Despite their wide use and value, their theoretical mechanisms have not been fully established due to the complexity of the compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage modeling of a fibrous composite by combining analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered to be the most influential factors on the toughness and failure behaviors of composites, and a constitutive equation considering these factors was explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of various model parameters in the analytical model were found using modified evolutionary computation that considers human-induced error. The effectiveness of the proposed formulation was validated by comparing a series of numerical simulations with experimental data from available studies.
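
    The paper couples an analytical micromechanics model with a modified evolutionary algorithm to identify model parameters. As a rough stand-in (not the authors' algorithm or constitutive equation), the sketch below fits a made-up nonlinear damage law to synthetic stress-strain data using SciPy's differential evolution.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical damage law: sigma = E*eps*(1 - d*(1 - exp(-eps/e0))) -- illustrative only.
    def model(eps, E, d, e0):
        return E * eps * (1.0 - d * (1.0 - np.exp(-eps / e0)))

    eps_exp = np.linspace(0.0, 0.02, 30)
    sigma_exp = model(eps_exp, 120e3, 0.4, 0.008)                 # synthetic "experiment" (MPa)
    sigma_exp += np.random.default_rng(1).normal(0.0, 50.0, eps_exp.size)

    def objective(p):
        return float(np.sum((model(eps_exp, *p) - sigma_exp) ** 2))

    bounds = [(50e3, 200e3), (0.0, 1.0), (1e-3, 5e-2)]            # E (MPa), d, e0
    result = differential_evolution(objective, bounds, seed=1)
    print("fitted parameters:", result.x)
    ```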

  14. Vortex-assisted magnetic β-cyclodextrin/attapulgite-linked ionic liquid dispersive liquid-liquid microextraction coupled with high-performance liquid chromatography for the fast determination of four fungicides in water samples.

    PubMed

    Yang, Miyi; Xi, Xuefei; Wu, Xiaoling; Lu, Runhua; Zhou, Wenfeng; Zhang, Sanbing; Gao, Haixiang

    2015-02-13

    A novel microextraction technique combining magnetic solid-phase microextraction (MSPME) with ionic liquid dispersive liquid-liquid microextraction (IL-DLLME) to determine four fungicides is presented in this work for the first time. The main factors affecting the extraction efficiency were optimized by the one-factor-at-a-time approach, and the impacts of these factors were studied by an orthogonal design. Without a tedious clean-up procedure, analytes were extracted from the sample to the adsorbent and organic solvent and then desorbed in acetonitrile prior to chromatographic analysis. Under the optimum conditions, good linearity and high enrichment factors were obtained for all analytes, with correlation coefficients ranging from 0.9998 to 1.0000 and enrichment factors ranging from 135- to 159-fold. The recoveries for the proposed approach were between 98% and 115%, the limits of detection were between 0.02 and 0.04 μg L(-1), and the RSDs ranged from 2.96 to 4.16%. The method was successfully applied in the analysis of four fungicides (azoxystrobin, chlorothalonil, cyprodinil and trifloxystrobin) in environmental water samples. The recoveries for the real water samples ranged between 81% and 109%. The procedure proved to be a time-saving, environmentally friendly, and efficient analytical technique. Copyright © 2015 Elsevier B.V. All rights reserved.
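
    For reference, the enrichment factor and extraction recovery reported above are normally related as follows in DLLME work; these are the standard textbook definitions, not expressions quoted from the paper.

    ```latex
    % Standard DLLME definitions (illustrative):
    \[
    EF = \frac{C_{\mathrm{sed}}}{C_{0}}, \qquad
    ER\,(\%) = \frac{C_{\mathrm{sed}}\,V_{\mathrm{sed}}}{C_{0}\,V_{\mathrm{aq}}}\times 100
             = EF\,\frac{V_{\mathrm{sed}}}{V_{\mathrm{aq}}}\times 100 ,
    \]
    % where C_sed and V_sed are the analyte concentration and volume of the sedimented
    % (acceptor) phase and C_0 and V_aq those of the original aqueous sample.
    ```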

  15. Improving the analyte ion signal in matrix-assisted laser desorption/ionization imaging mass spectrometry via electrospray deposition by enhancing incorporation of the analyte in the matrix.

    PubMed

    Malys, Brian J; Owens, Kevin G

    2017-05-15

    Matrix-assisted laser desorption/ionization (MALDI) is widely used as the ionization method in high-resolution chemical imaging studies that seek to visualize the distribution of analytes within sectioned biological tissues. This work extends the use of electrospray deposition (ESD) to apply matrix with an additional solvent spray to incorporate and homogenize analyte within the matrix overlayer. Analytes and matrix are sequentially and independently applied by ESD to create a sample from which spectra are collected, mimicking a MALDI imaging mass spectrometry (IMS) experiment. Subsequently, an incorporation spray consisting of methanol is applied by ESD to the sample and another set of spectra are collected. The spectra prior to and after the incorporation spray are compared to evaluate the improvement in the analyte signal. Prior to the incorporation spray, samples prepared using α-cyano-4-hydroxycinnamic acid (CHCA) and 2,5-dihydroxybenzoic acid (DHB) as the matrix showed low signal while the sample using sinapinic acid (SA) initially exhibited good signal. Following the incorporation spray, the sample using SA did not show an increase in signal; the sample using DHB showed moderate gain factors of 2-5 (full ablation spectra) and 12-336 (raster spectra), while CHCA samples saw large increases in signal, with gain factors of 14-172 (full ablation spectra) and 148-1139 (raster spectra). The use of an incorporation spray to apply solvent by ESD to a matrix layer already deposited by ESD provides an increase in signal by both promoting incorporation of the analyte within and homogenizing the distribution of the incorporated analyte throughout the matrix layer. Copyright © 2017 John Wiley & Sons, Ltd.

  16. The cooperative effect of reduced graphene oxide and Triton X-114 on the electromembrane microextraction efficiency of Pramipexole as a model analyte in urine samples.

    PubMed

    Fashi, Armin; Khanban, Fatemeh; Yaftian, Mohammad Reza; Zamani, Abbasali

    2017-01-01

    A new design of electromembrane microextraction coupled with high-performance liquid chromatography was developed for the determination of Pramipexole as a model analyte in urine samples. The presence of reduced graphene oxide in the membrane and Triton X-114 in the donor phase augments the extraction efficiency of Pramipexole by the proposed method. Dispersed reduced graphene oxide in the organic solvent was held in the pores of the fiber wall by capillary forces and sonication. It is possible that the immobilized reduced graphene oxide acts as a sorbent, affording an additional pathway for analyte transportation. Besides, the presence of Triton X-114 in the donor phase promotes effective migration of ionic analytes across the membrane. The parameters influencing the extraction procedure, such as type and concentration of surfactant, type of organic solvent, amount of reduced graphene oxide, sonication time, applied voltage, extraction time, ionic strength, pH of the donor and acceptor solutions, and stirring rate were optimized. The linear working ranges of the method for preconcentration-determination of Pramipexole in water and urine samples were found to be 0.13-1000 and 0.47-1000 ng mL(-1), with corresponding detection limits of 0.04 and 0.14 ng mL(-1), respectively. The proposed method allows achieving enrichment factors of 301 and 265 for preconcentration of the analyte in water and urine samples, respectively. The method was successfully applied for the determination of Pramipexole in the urine samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Analytic thinking reduces belief in conspiracy theories.

    PubMed

    Swami, Viren; Voracek, Martin; Stieger, Stefan; Tran, Ulrich S; Furnham, Adrian

    2014-12-01

    Belief in conspiracy theories has been associated with a range of negative health, civic, and social outcomes, requiring reliable methods of reducing such belief. Thinking dispositions have been highlighted as one possible factor associated with belief in conspiracy theories, but actual relationships have only been infrequently studied. In Study 1, we examined associations between belief in conspiracy theories and a range of measures of thinking dispositions in a British sample (N=990). Results indicated that a stronger belief in conspiracy theories was significantly associated with lower analytic thinking and open-mindedness and greater intuitive thinking. In Studies 2-4, we examined the causational role played by analytic thinking in relation to conspiracist ideation. In Study 2 (N=112), we showed that a verbal fluency task that elicited analytic thinking reduced belief in conspiracy theories. In Study 3 (N=189), we found that an alternative method of eliciting analytic thinking, which related to cognitive disfluency, was effective at reducing conspiracist ideation in a student sample. In Study 4, we replicated the results of Study 3 among a general population sample (N=140) in relation to generic conspiracist ideation and belief in conspiracy theories about the July 7, 2005, bombings in London. Our results highlight the potential utility of supporting attempts to promote analytic thinking as a means of countering the widespread acceptance of conspiracy theories. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Approaching near real-time biosensing: microfluidic microsphere based biosensor for real-time analyte detection.

    PubMed

    Cohen, Noa; Sabhachandani, Pooja; Golberg, Alexander; Konry, Tania

    2015-04-15

    In this study we describe a simple lab-on-a-chip (LOC) biosensor approach utilizing a well-mixed microfluidic device and a microsphere-based assay capable of performing near real-time diagnostics of clinically relevant analytes such as cytokines and antibodies. We were able to overcome the adsorption kinetics reaction rate-limiting mechanism, which is diffusion-controlled in standard immunoassays, by introducing the microsphere-based assay into a well-mixed yet simple microfluidic device with turbulent flow profiles in the reaction regions. The integrated microsphere-based LOC device performs dynamic detection of the analyte in a minimal amount of biological specimen by continuously sampling micro-liter volumes of sample per minute to detect dynamic changes in target analyte concentration. Furthermore, we developed a mathematical model for the well-mixed reaction to describe the near real-time detection mechanism observed in the developed LOC method. To demonstrate the specificity and sensitivity of the developed real-time monitoring LOC approach, we applied the device to clinically relevant analytes: Tumor Necrosis Factor (TNF)-α cytokine and its clinically used inhibitor, anti-TNF-α antibody. Based on the results reported herein, the developed LOC device provides a continuous, sensitive and specific near real-time monitoring method for analytes such as cytokines and antibodies, reduces reagent volumes by nearly three orders of magnitude, and eliminates the washing steps required by standard immunoassays. Copyright © 2014 Elsevier B.V. All rights reserved.
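
    The abstract mentions a mathematical model for the well-mixed reaction; a generic reversible binding model of that kind is sketched below with invented rate constants and concentrations (this is not the authors' model), assuming the well-mixed condition removes any diffusion limitation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k_on, k_off = 1e5, 1e-3        # illustrative rate constants, 1/(M*s) and 1/s
    A0, B0 = 1e-9, 5e-9            # analyte and microsphere-probe concentrations (M)

    def binding(t, y):
        ab = y[0]
        a, b = A0 - ab, B0 - ab    # well-mixed: bulk concentrations, no diffusion term
        return [k_on * a * b - k_off * ab]

    sol = solve_ivp(binding, (0.0, 600.0), [0.0], dense_output=True)
    for ti in np.linspace(0.0, 600.0, 7):
        print(f"t = {ti:5.0f} s   bound complex = {sol.sol(ti)[0]:.2e} M")
    ```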

  19. Computation of Anisotropic Bi-Material Interfacial Fracture Parameters and Delamination Criteria

    NASA Technical Reports Server (NTRS)

    Chow, W-T.; Wang, L.; Atluri, S. N.

    1998-01-01

    This report documents the recent developments in methodologies for the evaluation of the integrity and durability of composite structures, including i) the establishment of a stress-intensity-factor based fracture criterion for bimaterial interfacial cracks in anisotropic materials (see Sec. 2); ii) the development of a virtual crack closure integral method for the evaluation of the mixed-mode stress intensity factors for a bimaterial interfacial crack (see Sec. 3). Analytical and numerical results show that the proposed fracture criterion is a better fracture criterion than the total energy release rate criterion in the characterization of the bimaterial interfacial cracks. The proposed virtual crack closure integral method is an efficient and accurate numerical method for the evaluation of mixed-mode stress intensity factors.
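
    For orientation, the virtual crack closure idea in its common two-dimensional nodal form (unit thickness) is summarized below; the notation is generic rather than the report's, and the simple K-G conversion noted in the comments holds for homogeneous materials, not bimaterial interfaces.

    ```latex
    % Common 2-D nodal form of the virtual crack closure technique (unit thickness):
    \[
    G_{I} \approx \frac{F_{y}\,\Delta v}{2\,\Delta a}, \qquad
    G_{II} \approx \frac{F_{x}\,\Delta u}{2\,\Delta a},
    \]
    % F_x, F_y: nodal forces at the crack tip; \Delta u, \Delta v: relative sliding and
    % opening displacements of the node pair just behind the tip; \Delta a: element length.
    % For a homogeneous material, K_I = sqrt(E' G_I) and K_II = sqrt(E' G_II); for
    % anisotropic bimaterial interfaces the mode partition is more involved, which is
    % precisely what the report addresses.
    ```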

  20. The spatial distribution patterns of condensed phase post-blast explosive residues formed during detonation.

    PubMed

    Abdul-Karim, Nadia; Blackman, Christopher S; Gill, Philip P; Karu, Kersti

    2016-10-05

    The continued usage of explosive devices, as well as the ever-growing threat of 'dirty' bombs, necessitates a comprehensive understanding of particle dispersal during detonation events in order to develop effectual methods for targeting explosive and/or additive remediation efforts. Herein, the distribution of explosive analytes from controlled detonations of aluminised ammonium nitrate and an RDX-based explosive composition was established by systematically sampling sites positioned around each firing. This is the first experimental study to produce evidence that the post-blast residue mass can distribute according to an approximate inverse-square law model, while also demonstrating for the first time that distribution trends can vary depending on individual analytes. Furthermore, by incorporating blast-wave overpressure measurements, high-speed imaging for fireball volume recordings, and monitoring of environmental conditions, it was determined that the principal factor affecting all analyte dispersals was the wind direction, with other factors affecting specific analytes to varying degrees. The dispersal mechanism for explosive residue is primarily the smoke cloud, a finding which in itself has wider impacts on the environment and fundamental detonation theory. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
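
    A sketch of how the reported inverse-square trend can be checked by least-squares fitting; the distance-mass pairs below are invented, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Invented sampling data: distance from the charge (m) vs recovered residue mass (ug).
    r = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
    m = np.array([210.0, 55.0, 22.0, 13.0, 8.5, 6.0])

    def inverse_square(r, k):
        return k / r**2

    (k_fit,), _ = curve_fit(inverse_square, r, m)
    rms = np.sqrt(np.mean((m - inverse_square(r, k_fit)) ** 2))
    print(f"k = {k_fit:.1f} ug*m^2, residual RMS = {rms:.2f} ug")
    ```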

  1. Green approach using monolithic column for simultaneous determination of coformulated drugs.

    PubMed

    Yehia, Ali M; Mohamed, Heba M

    2016-06-01

    Green chemistry and sustainability are now embraced across the majority of pharmaceutical companies and research labs. Researchers' attention is drawn toward implementing green analytical chemistry principles for more eco-friendly analytical methodologies. Solvents play a dominant role in determining the greenness of the analytical procedure. By using safer solvents, the greenness profile of the methodology can be improved remarkably. In this context, a green chromatographic method has been developed and validated for the simultaneous determination of phenylephrine, paracetamol, and guaifenesin in their ternary pharmaceutical mixture. The chromatographic separation was carried out using a monolithic column and green solvents as the mobile phase. The use of a monolithic column allows efficient separation protocols at higher flow rates, which results in a short time of analysis. A two-factor, three-level experimental design was used to optimize the chromatographic conditions. The greenness profile of the proposed methodology was assessed using the eco-scale as a green metric and was found to be an excellent green method with regard to the usage and production of hazardous chemicals and solvents, energy consumption, and amount of produced waste. The proposed method improved the environmental impact without compromising the analytical performance criteria and could be used as a safer alternative for the routine analysis of the studied drugs. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Thermodynamic properties and static structure factor for a Yukawa fluid in the mean spherical approximation.

    PubMed

    Montes-Perez, J; Cruz-Vera, A; Herrera, J N

    2011-12-01

    This work presents full analytic expressions for the thermodynamic properties and the static structure factor of a hard-sphere plus one-Yukawa fluid within the mean spherical approximation. To obtain these properties of the Yukawa-type fluid analytically, it was necessary to solve a fourth-order equation for the scaling parameter. The physical root of this equation was determined by imposing physical conditions. The results of this work build on the seminal papers of Blum and Høye. We show that it is not necessary to use a series expansion to solve the equation for the scaling parameter. We applied our theoretical results to find the thermodynamic properties and the static structure factor of krypton. Our results are in good agreement with those obtained experimentally or by Monte Carlo simulation.
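
    The step of solving a quartic for the scaling parameter and keeping only the physical root can be illustrated numerically; the coefficients and the selection criterion below are invented placeholders, not the Blum-Høye expressions.

    ```python
    import numpy as np

    # Illustrative quartic a4*G^4 + a3*G^3 + a2*G^2 + a1*G + a0 = 0 for the
    # scaling parameter G (coefficients are placeholders, not the MSA expressions).
    coeffs = [1.0, 4.2, -3.1, -0.8, 0.05]

    roots = np.roots(coeffs)
    real_roots = roots[np.abs(roots.imag) < 1e-10].real

    # Illustrative "physical" criterion: smallest positive real root.  The actual
    # physical conditions imposed in the paper determine the proper choice.
    physical = min(r for r in real_roots if r > 0)
    print("real roots:", np.round(real_roots, 4), "-> selected:", round(float(physical), 4))
    ```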

  3. Critical Factors in Data Governance for Learning Analytics

    ERIC Educational Resources Information Center

    Elouazizi, Noureddine

    2014-01-01

    This paper identifies some of the main challenges of data governance modelling in the context of learning analytics for higher education institutions, and discusses the critical factors for designing data governance models for learning analytics. It identifies three fundamental common challenges that cut across any learning analytics data…

  4. Determination of the distribution constants of aromatic compounds and steroids in biphasic micellar phosphonium ionic liquid/aqueous buffer systems by capillary electrokinetic chromatography.

    PubMed

    Lokajová, Jana; Railila, Annika; King, Alistair W T; Wiedmer, Susanne K

    2013-09-20

    The distribution constants of some analytes, closely connected to the petrochemical industry, between an aqueous phase and a phosphonium ionic liquid phase, were determined by ionic liquid micellar electrokinetic chromatography (MEKC). The phosphonium ionic liquids studied were the water-soluble tributyl(tetradecyl)phosphonium with chloride or acetate as the counter ion. The retention factors were calculated and used for determination of the distribution constants. Calculating the retention factors required the electrophoretic mobilities of the ionic liquids; we therefore adopted an iterative process based on a homologous series of alkyl benzoates. Calculation of the distribution constants required information on the phase ratio of the systems, for which the critical micelle concentrations (CMCs) of the ionic liquids were needed. The CMCs were calculated using a method based on PeakMaster simulations, using the electrophoretic mobilities of system peaks. The resulting distribution constants for the neutral analytes between the ionic liquid and the aqueous (buffer) phase were compared with octanol-water partitioning coefficients. The results indicate that factors other than simple hydrophobic interactions affect the distribution of analytes between the phases. Copyright © 2013 Elsevier B.V. All rights reserved.
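
    The retention-factor and distribution-constant relations assumed in this kind of MEKC treatment are, in their standard (Terabe-type) form, the following; the symbols are generic and are not copied from the paper.

    ```latex
    % Standard MEKC relations (generic notation):
    \[
    k = \frac{t_R - t_0}{t_0\left(1 - t_R/t_{mc}\right)}, \qquad
    K_D = \frac{k}{\phi}, \qquad \phi = \frac{V_{IL}}{V_{aq}},
    \]
    % t_0: migration time of an unretained neutral marker, t_R: analyte migration time,
    % t_mc: migration time of the ionic-liquid pseudostationary phase, and \phi the
    % phase ratio obtained from the IL concentration in excess of its CMC.
    ```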

  5. Optical activity of chirally distorted nanocrystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepliakov, Nikita V.; Baimuratov, Anvar S.; Baranov, Alexander V.

    2016-05-21

    We develop a general theory of optical activity of semiconductor nanocrystals whose chirality is induced by a small perturbation of their otherwise achiral electronic subsystems. The optical activity is described using the quantum-mechanical expressions for the rotatory strengths and dissymmetry factors introduced by Rosenfeld. We show that the rotatory strengths of optically active transitions are decomposed into electric dipole and magnetic dipole contributions, which correspond to the electric dipole and magnetic dipole transitions between the unperturbed quantum states. Remarkably, while the two kinds of rotatory strengths are of the same order of magnitude, the corresponding dissymmetry factors can differ by a factor of 10⁵. By maximizing the dissymmetry of magnetic dipole absorption one can significantly enhance the enantioselectivity in the interaction of semiconductor nanocrystals with circularly polarized light. This feature may advance chiral and analytical methods, which will benefit biophysics, chemistry, and pharmaceutical science. The developed theory is illustrated by an example of intraband transitions inside a semiconductor nanocuboid, whose rotatory strengths and dissymmetry factors are calculated analytically.

  6. Optical activity of chirally distorted nanocrystals

    NASA Astrophysics Data System (ADS)

    Tepliakov, Nikita V.; Baimuratov, Anvar S.; Baranov, Alexander V.; Fedorov, Anatoly V.; Rukhlenko, Ivan D.

    2016-05-01

    We develop a general theory of optical activity of semiconductor nanocrystals whose chirality is induced by a small perturbation of their otherwise achiral electronic subsystems. The optical activity is described using the quantum-mechanical expressions for the rotatory strengths and dissymmetry factors introduced by Rosenfeld. We show that the rotatory strengths of optically active transitions are decomposed into electric dipole and magnetic dipole contributions, which correspond to the electric dipole and magnetic dipole transitions between the unperturbed quantum states. Remarkably, while the two kinds of rotatory strengths are of the same order of magnitude, the corresponding dissymmetry factors can differ by a factor of 10⁵. By maximizing the dissymmetry of magnetic dipole absorption one can significantly enhance the enantioselectivity in the interaction of semiconductor nanocrystals with circularly polarized light. This feature may advance chiral and analytical methods, which will benefit biophysics, chemistry, and pharmaceutical science. The developed theory is illustrated by an example of intraband transitions inside a semiconductor nanocuboid, whose rotatory strengths and dissymmetry factors are calculated analytically.
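
    For context, the Rosenfeld quantities referred to in both versions of this record are usually written as follows; these are the standard definitions rather than expressions quoted from the paper.

    ```latex
    % Rosenfeld rotatory strength and absorption dissymmetry factor (standard forms):
    \[
    R_{0n} = \operatorname{Im}\!\left(\boldsymbol{\mu}_{0n}\cdot\mathbf{m}_{n0}\right),
    \qquad
    g = \frac{4\,R_{0n}}{\left|\boldsymbol{\mu}_{0n}\right|^{2}
                        + \left|\mathbf{m}_{n0}\right|^{2}},
    \]
    % with |g| bounded by 2.  Transitions dominated by their magnetic dipole moment
    % typically show much larger |g| than strongly electric-dipole-allowed ones, which
    % is the origin of the ~10^5 difference in dissymmetry factors noted in the abstract.
    ```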

  7. Method Effects on an Adaptation of the Rosenberg Self-Esteem Scale in Greek and the Role of Personality Traits.

    PubMed

    Michaelides, Michalis P; Koutsogiorgi, Chrystalla; Panayiotou, Georgia

    2016-01-01

    Rosenberg's Self-Esteem Scale is a balanced, 10-item scale designed to be unidimensional; however, research has repeatedly shown that its factorial structure is contaminated by method effects due to item wording. Beyond the substantive self-esteem factor, 2 additional factors linked to the positive and negative wording of items have been theoretically specified and empirically supported. Initial evidence has revealed systematic relations of the 2 method factors with variables expressing approach and avoidance motivation. This study assessed the fit of competing confirmatory factor analytic models for the Rosenberg Self-Esteem Scale using data from 2 samples of adult participants in Cyprus. Models that accounted for both positive and negative wording effects via 2 latent method factors had better fit compared to alternative models. Measures of experiential avoidance, social anxiety, and private self-consciousness were associated with the method factors in structural equation models. The findings highlight the need to specify models with wording effects for a more accurate representation of the scale's structure and support the hypothesis of method factors as response styles, which are associated with individual characteristics related to avoidance motivation, behavioral inhibition, and anxiety.

  8. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Analytical Judgment Using Visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik

    Scientists working in a particular domain often adhere to conventional data analysis and presentation methods and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists where we explored the following factors: i) relationships between scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  9. [Impact factor, its variants and its influence in academic promotion].

    PubMed

    Puche, Rodolfo C

    2011-01-01

    Bibliometrics is a set of methods used to study or measure texts and information. While bibliometric methods are most often used in the field of library and information science, bibliometric variables have wide applications in other areas. One popular bibliometric variable is Garfield's Impact Factor (IF). IF is used to explore the impact of a given field, the impact of a set of researchers, or the impact of a particular paper. This variable is used to assess academic output, and it is believed to adversely affect the traditional approach to the assessment of scientific research. In our country, the members of the evaluation committees of intensive research institutions, e.g. the National Scientific and Technical Research Council (CONICET), use IF to assess the quality of research. This article reviews the exponential growth of bibliometrics and attempts to expose the overall dissatisfaction with the analytical quality of IF. Such dissatisfaction is expressed in the number of investigations attempting to obtain a variable of improved analytical quality.
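
    As a point of reference for the discussion above, Garfield's two-year impact factor is just a ratio; the sketch below spells out the arithmetic with invented numbers.

    ```python
    def two_year_impact_factor(citations_in_year_y, citable_items_prev_two_years):
        """Garfield's two-year journal impact factor: citations received in year Y
        to items published in years Y-1 and Y-2, divided by the number of citable
        items published in those two years."""
        return citations_in_year_y / citable_items_prev_two_years

    # Invented example: 1200 citations in 2010 to papers from 2008-2009,
    # which together comprised 400 citable items.
    print(two_year_impact_factor(1200, 400))   # -> 3.0
    ```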

  10. Applications of organo-silica nanocomposites for SPNE of Hg(II)

    NASA Astrophysics Data System (ADS)

    Kaur, Anupreet

    2016-02-01

    An analytical method using modified SiO2 nanoparticles as solid-phase extractant has been developed for the preconcentration of trace amounts of Hg(II) in different water samples. Conditions of the analysis such as preconcentration factor, effect of pH, sample volume, shaking time, elution conditions and effects of interfering ions for the recovery of analyte were investigated. The adsorption capacity of nanometer SiO2-APTMS was found to be 181.42 µmol g-1 at optimum pH and the detection limit (3σ) was 0.45 µg L-1. The extractant showed rapid kinetic sorption. The adsorption equilibrium of Hg(II) on nanometer SiO2-APTMS was achieved just in 15 min. Adsorbed Hg(II) was easily eluted with 4 mL of 2.0 M hydrochloric acid. The maximum preconcentration factor was 75. The method was applied for the determination of trace amounts of Hg(II) in various synthetic samples and water samples.

  11. Critical evaluation of connectivity-based point of care testing systems of glucose in a hospital environment.

    PubMed

    Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R

    2008-01-01

    In recent years, a number of point-of-care testing (POCT) glucometers have been introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers far exceeded the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases the variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.
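
    A sketch of the biological-variability-based quality specifications mentioned in the abstract, in the commonly used Fraser/Westgard form. The within- and between-subject CVs for glucose and the glucometer performance figures are illustrative assumptions, not values from the study.

    ```python
    import math

    # Approximate biological variation data for glucose (assumed, for illustration):
    CVi, CVg = 5.6, 7.5            # within-subject and between-subject CV (%)

    # Desirable analytical specifications derived from biological variation:
    cv_desirable = 0.5 * CVi
    bias_desirable = 0.25 * math.sqrt(CVi**2 + CVg**2)
    te_desirable = bias_desirable + 1.65 * cv_desirable   # total error at ~95% level

    # Hypothetical handheld glucometer performance from internal QC data:
    cv_obs, bias_obs = 4.0, 3.0    # %
    te_obs = abs(bias_obs) + 1.65 * cv_obs

    print(f"desirable total error <= {te_desirable:.1f}%  vs  observed {te_obs:.1f}%")
    ```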

  12. A new method for the determination of short-chain fatty acids from the aliphatic series in wines by headspace solid-phase microextraction-gas chromatography-ion trap mass spectrometry.

    PubMed

    Olivero, Sergio J Pérez; Trujillo, Juan P Pérez

    2011-06-24

    A new analytical method for the determination of nine short-chain fatty acids (acetic, propionic, isobutyric, butyric, isovaleric, 2-methylbutyric, hexanoic, octanoic and decanoic acids) in wines using the automated HS/SPME-GC-ITMS technique was developed and optimised. Five different SPME fibers were tested, and the influence of different factors such as extraction temperature and time, desorption temperature and time, pH, ionic strength, tannins, anthocyanins, SO(2), sugar and ethanol content was studied and optimised using model solutions. Some analytes showed a matrix effect, so a study of recoveries was performed. The proposed HS/SPME-GC-ITMS method, which covers the concentration range of the different analytes in wines, showed wide linear ranges, repeatability and reproducibility values lower than 4.0% RSD, and detection limits between 3 and 257 μg L(-1), lower than the olfactory thresholds. The optimised method is a suitable technique for the quantitative analysis of short-chain fatty acids from the aliphatic series in real samples of white, rosé and red wines. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Ringer tablet-based ionic liquid phase microextraction: Application in extraction and preconcentration of neonicotinoid insecticides from fruit juice and vegetable samples.

    PubMed

    Farajzadeh, Mir Ali; Bamorowat, Mahdi; Mogaddam, Mohammad Reza Afshar

    2016-11-01

    An efficient, reliable, sensitive, rapid, and green analytical method for the extraction and determination of neonicotinoid insecticides in aqueous samples has been developed using ionic liquid phase microextraction coupled with high performance liquid chromatography-diode array detection. In this method, a few microliters of 1-hexyl-3-methylimidazolium hexafluorophosphate (as an extractant) is added onto a Ringer tablet, which is then transferred into a conical test tube containing the aqueous phase of the analytes. By manual shaking, the Ringer tablet is dissolved and the extractant is released into the aqueous phase as very tiny droplets to provide a cloudy solution. After centrifugation, the analytes extracted into the ionic liquid are collected at the bottom of the conical test tube. Under the optimum extraction conditions, the method showed low limits of detection and quantification, between 0.12-0.33 and 0.41-1.11 ng mL(-1), respectively. Extraction recoveries and enrichment factors were 66-84% and 655-843%, respectively. Finally, different aqueous samples were successfully analyzed using the proposed method. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Notch Sensitivity of Woven Ceramic Matrix Composites Under Tensile Loading: An Experimental, Analytical, and Finite Element Study

    NASA Technical Reports Server (NTRS)

    Haque, A.; Ahmed, L.; Ware, T.; Jeelani, S.; Verrilli, Michael J. (Technical Monitor)

    2001-01-01

    The stress concentrations associated with circular notches under uniform tensile loading in woven ceramic matrix composites (CMCs) have been investigated for high-efficiency turbine engine applications. The CMCs were composed of Nicalon silicon carbide woven fabric in a SiNC matrix manufactured through the polymer impregnation process (PIP). Several combinations of hole diameter/plate width ratios and ply orientations were considered in this study. In the first part, the stress concentrations were calculated by measuring strain distributions surrounding the hole using strain gages at different locations on the specimens during the initial portion of the stress-strain curve, before any microdamage developed. The stress concentration was also calculated analytically using Lekhnitskii's solution for orthotropic plates. A finite-width correction factor for anisotropic and orthotropic composite plates was considered. The stress distributions surrounding the circular hole of a CMC plate were further studied using finite element analysis. Both solid and shell elements were considered. The experimental results were compared with both the analytical and finite element solutions. Extensive optical and scanning electron microscopic examinations were carried out to identify the fracture behavior and failure mechanisms of both the notched and unnotched specimens. The stress concentration factors (SCF) determined by the analytical method overpredicted the experimental results, whereas the numerical solution underpredicted the experimental SCF. Stress concentration factors were shown to increase with hole size, and the effects of ply orientation on stress concentration factors were observed to be negligible. In all cases, the crack initiated at the notch edge and propagated along the width towards the edge of the specimens.
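
    To show how a Lekhnitskii-type calculation and a finite-width correction fit together, here is a small sketch using commonly cited closed-form expressions; the elastic constants and hole-to-width ratio are invented, and the correction used is the widely quoted approximate (isotropic) one rather than the specific anisotropic factor considered in the paper.

    ```python
    import math

    def scf_infinite_orthotropic(E1, E2, G12, nu12):
        """SCF at the hole edge of an infinite orthotropic plate under uniaxial
        tension along the 1-direction (commonly cited Lekhnitskii-type result)."""
        return 1.0 + math.sqrt(2.0 * (math.sqrt(E1 / E2) - nu12) + E1 / G12)

    def finite_width_factor(d_over_W):
        """Approximate finite-width correction: Kt(finite) = Kt(infinite) / f,
        with f = 3(1 - d/W) / (2 + (1 - d/W)^3)."""
        x = 1.0 - d_over_W
        return 3.0 * x / (2.0 + x**3)

    # Invented in-plane properties for a woven CMC-like laminate (GPa) -- illustrative only.
    E1, E2, G12, nu12 = 120.0, 120.0, 50.0, 0.15
    kt_inf = scf_infinite_orthotropic(E1, E2, G12, nu12)
    kt_fin = kt_inf / finite_width_factor(d_over_W=0.25)
    print(f"Kt(infinite) = {kt_inf:.2f}, Kt(finite, d/W = 0.25) = {kt_fin:.2f}")
    ```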

  15. Development of a dynamic headspace gas chromatography-mass spectrometry method for on-site analysis of sulfur mustard degradation products in sediments.

    PubMed

    Magnusson, R; Nordlander, T; Östin, A

    2016-01-15

    Sampling teams performing work at sea in areas where chemical munitions may have been dumped require rapid and reliable analytical methods for verifying sulfur mustard leakage from suspected objects. Here we present such an on-site analysis method based on dynamic headspace GC-MS for analysis of five cyclic sulfur mustard degradation products that have previously been detected in sediments from chemical weapon dumping sites: 1,4-oxathiane, 1,3-dithiolane, 1,4-dithiane, 1,4,5-oxadithiephane, and 1,2,5-trithiephane. An experimental design involving authentic Baltic Sea sediments spiked with the target analytes was used to develop an optimized protocol for sample preparation, headspace extraction and analysis that afforded recoveries of up to 60-90%. The optimized method needs no organic solvents, uses only two grams of sediment on a dry weight basis and involves a unique sample presentation whereby sediment is spread uniformly as a thin layer inside the walls of a glass headspace vial. The method showed good linearity for analyte concentrations of 5-200 ng/g dw, good repeatability, and acceptable carry-over. The method's limits of detection for spiked sediment samples ranged from 2.5 to 11 μg/kg dw, with matrix interference being the main limiting factor. The instrumental detection limits were one to two orders of magnitude lower. Full-scan GC-MS analysis enabled the use of automated mass spectral deconvolution for rapid identification of target analytes. Using this approach, analytes could be identified in spiked sediment samples at concentrations down to 13-65 μg/kg dw. On-site validation experiments conducted aboard the research vessel R/V Oceania demonstrated the method's practical applicability, enabling the successful identification of four cyclic sulfur mustard degradation products at concentrations of 15-308μg/kg in sediments immediately after being collected near a wreck at the Bornholm Deep dumpsite in the Baltic Sea. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.

    PubMed

    Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun

    2017-07-08

    Gastrodia elata Blume (G. elata), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma, and anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME) and factors influencing its pharmacokinetics. Based on the reported pharmacokinetic property data of G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, discovering new therapeutic uses and understanding the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.

  17. Systematic Development and Validation of a Thin-Layer Densitometric Bioanalytical Method for Estimation of Mangiferin Employing Analytical Quality by Design (AQbD) Approach

    PubMed Central

    Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O.P.; Singh, Bhupinder

    2016-01-01

    The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett–Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50–800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. PMID:26912808

  18. Analysis of Vertiport Studies Funded by the Airport Improvement Program (AIP)

    DTIC Science & Technology

    1994-05-01

    the general population and travel behavior factors from surveys and other sources. FEASIBILITY: The vertiport studies recognize the need to address the ... behavior factors obtained from surveys and other sources. All of the methods were dependent upon various secondary data and/or information sources that ... economic responses and of travel behavior. The five types, in order of increasing analytical sophistication, are briefly identified as follows.

  19. Invited Commentary: Antecedents of Obesity—Analysis, Interpretation, and Use of Longitudinal Data

    PubMed Central

    Gillman, Matthew W.; Kleinman, Ken

    2007-01-01

    The obesity epidemic causes misery and death. Most epidemiologists accept the hypothesis that characteristics of the early stages of human development have lifelong influences on obesity-related health outcomes. Unfortunately, there is a dearth of data of sufficient scope and individual history to help unravel the associations of prenatal, postnatal, and childhood factors with adult obesity and health outcomes. Here the authors discuss analytic methods, the interpretation of models, and the use to which such rare and valuable data may be put in developing interventions to combat the epidemic. For example, analytic methods such as quantile and multinomial logistic regression can describe the effects on body mass index range rather than just its mean; structural equation models may allow comparison of the contributions of different factors at different periods in the life course. Interpretation of the data and model construction is complex, and it requires careful consideration of the biologic plausibility and statistical interpretation of putative causal factors. The goals of discovering modifiable determinants of obesity during the prenatal, postnatal, and childhood periods must be kept in sight, and analyses should be built to facilitate them. Ultimately, interventions in these factors may help prevent obesity-related adverse health outcomes for future generations. PMID:17490988

  20. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples measured in a close geometry by gamma spectroscopy. The procedure included the MCNP-CP code in order to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and the two codes showed good agreement. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The radioactivity measurements made with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
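
    To indicate where the coincidence summing correction factor enters, a generic close-geometry activity calculation is shown below; this is a common textbook form (with the CSF applied multiplicatively), not necessarily the exact expression implemented in the paper.

    ```latex
    % Generic gamma-spectrometric activity calculation with coincidence summing correction:
    \[
    A = \frac{N_{\mathrm{net}}}
             {\varepsilon(E)\, p_{\gamma}\, t_{\mathrm{live}}\, m}\;\mathrm{CSF},
    \]
    % N_net: net full-energy-peak area, \varepsilon(E): full-energy-peak efficiency,
    % p_gamma: gamma emission probability, t_live: live time, m: sample mass, and CSF
    % the correction factor obtained here from MCNP-CP and checked against ETNA.
    % Whether the CSF multiplies or divides depends on the convention adopted for it.
    ```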

  1. Advances in Instrumental Analysis of Brominated Flame Retardants: Current Status and Future Perspectives

    PubMed Central

    2014-01-01

    This review aims to highlight the recent advances and methodological improvements in instrumental techniques applied for the analysis of different brominated flame retardants (BFRs). The literature search strategy was based on the recent analytical reviews published on BFRs. The main selection criteria involved the successful development and application of analytical methods for determination of the target compounds in various environmental matrices. Different factors affecting chromatographic separation and mass spectrometric detection of brominated analytes were evaluated and discussed. Techniques using advanced instrumentation to achieve outstanding results in quantification of different BFRs and their metabolites/degradation products were highlighted. Finally, research gaps in the field of BFR analysis were identified and recommendations for future research were proposed. PMID:27433482

  2. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    Particle motion under a thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for a particle size of 1 μm, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model, which computes the collision between a particle and multiple molecules in a single collision event, is considered. The momentum transfer to the particle is computed with a collision weight factor, defined as the number of molecules colliding with the particle in one collision event. This weight factor allows a large time step interval to be adopted: for a 1 μm particle, it is about a million times longer than the conventional DSMC time step interval, so the computation time is reduced by about a factor of a million. We simulate the motion of a graphite particle under a thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, together with the collision weight factor described above. The particle is a 1 μm sphere, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results. Note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.

  3. Quantitative Method of Measuring Metastatic Activity

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R. (Inventor)

    1999-01-01

    The metastatic potential of tumors can be evaluated by the quantitative detection of urokinase and DNA. The cell sample selected for examination is analyzed for the presence of high levels of urokinase and abnormal DNA using analytical flow cytometry and digital image analysis. Other factors such as membrane-associated urokinase, increased DNA synthesis rates and certain receptors can be used in the method for detection of potentially invasive tumors.

  4. Evaluation of soil erosion risk using Analytic Network Process and GIS: a case study from Spanish mountain olive plantations.

    PubMed

    Nekhay, Olexandr; Arriaza, Manuel; Boerboom, Luc

    2009-07-01

    This study presents an approach that combines objective information, such as sampling or experimental data, with subjective information, such as expert opinions. The combined approach is based on the Analytic Network Process (ANP) method. It was applied to evaluate soil erosion risk and overcomes one of the drawbacks of USLE/RUSLE soil erosion models, namely that they do not consider interactions among soil erosion factors. Another advantage of this method is that it can be used if there are insufficient experimental data. The lack of experimental data can be compensated for through the use of expert evaluations. As an example of the proposed approach, the risk of soil erosion was evaluated in olive groves in Southern Spain, showing the potential of the ANP method for modelling a complex physical process like soil erosion.

  5. Theory and practical understanding of the migration behavior of proteins and peptides in CE and related techniques.

    PubMed

    Freitag, Ruth; Hilbrig, Frank

    2007-07-01

    CEC is defined as an analytical method in which the analytes are separated on a chromatographic column in the presence of an applied voltage. The separation of charged analytes in CEC is complex, since chromatographic interaction, electroosmosis and electrophoresis all contribute to the experimentally observed behavior. A contribution from effects such as surface electrodiffusion has also been suggested. A sound theoretical treatment incorporating all effects is currently not available. The question of whether the different effects contribute in an independent or an interdependent manner is still under discussion. In this contribution, the state of the art in the theoretical description of the individual contributions, as well as models for the retention behavior and in particular possible dimensionless 'retention factors', is discussed, together with the experimental database for the separation of charged analytes, in particular proteins and peptides, by CEC and related techniques.

  6. The case for visual analytics of arsenic concentrations in foods.

    PubMed

    Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R

    2010-05-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods making it impractical and impossible to provide a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.

  7. The Case for Visual Analytics of Arsenic Concentrations in Foods

    PubMed Central

    Johnson, Matilda O.; Cohly, Hari H.P.; Isokpehi, Raphael D.; Awofolu, Omotayo R.

    2010-01-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic-contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary among foods, making it impractical to set a regulatory limit for each individual food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species. PMID:20623005

  8. Self-Concept: The Interplay of Theory and Methods.

    DTIC Science & Technology

    1981-04-01

    Matrix," Psychological Bulletin, 1959, 56, 81-105. Coopersmith , S., The Antecedents of Self-esteem, San Francisco: Free- man, 1967. Fernandes, L., "The...Casuality, New York: Wiley, 1979. Kokenes, B. M., "A Factor Analytic Study of the Coopersmith Self Esteem Inventory," Dissertation Abstacts

  9. Child Development in Developing Countries: Introduction and Methods

    ERIC Educational Resources Information Center

    Bornstein, Marc H.; Britto, Pia Rebello; Nonoyama-Tarumi, Yuko; Ota, Yumiko; Petrovic, Oliver; Putnick, Diane L.

    2012-01-01

    The Multiple Indicator Cluster Survey (MICS) is a nationally representative, internationally comparable household survey implemented to examine protective and risk factors of child development in developing countries around the world. This introduction describes the conceptual framework, nature of the MICS3, and general analytic plan of articles…

  10. New robust bilinear least squares method for the analysis of spectral-pH matrix data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C

    2005-07-01

    A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
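    As a rough illustration of the kind of bilinear modelling involved, the sketch below fits a single data matrix (wavelengths x pH levels) with a generic alternating least-squares bilinear decomposition under a nonnegativity constraint. It is only a minimal stand-in, not the authors' BLLS algorithm with multiple calibration standards and the second-order advantage; the function name, dimensions and constraint handling are assumptions.

    ```python
    import numpy as np

    def bilinear_als(D, n_components, n_iter=200, seed=0):
        """Fit D (n_wavelengths x n_pH) ~ S @ C.T by alternating least squares.

        A generic bilinear decomposition sketch (not the authors' BLLS algorithm);
        nonnegativity of spectra and pH profiles is imposed by simple clipping.
        """
        rng = np.random.default_rng(seed)
        n_wl, n_ph = D.shape
        C = rng.random((n_ph, n_components))
        for _ in range(n_iter):
            St, *_ = np.linalg.lstsq(C, D.T, rcond=None)   # least-squares solve of C @ St = D.T
            S = np.clip(St.T, 0.0, None)                    # spectral profiles, n_wl x k
            Ct, *_ = np.linalg.lstsq(S, D, rcond=None)      # least-squares solve of S @ Ct = D
            C = np.clip(Ct.T, 0.0, None)                    # pH profiles, n_ph x k
        return S, C
    ```

    In a calibration context, the component scores of unknown samples would then be regressed against those of the standards to predict analyte concentration.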

  11. Separating method factors and higher order traits of the Big Five: a meta-analytic multitrait-multimethod approach.

    PubMed

    Chang, Luye; Connelly, Brian S; Geeza, Alexis A

    2012-02-01

    Though most personality researchers now recognize that ratings of the Big Five are not orthogonal, the field has been divided about whether these trait intercorrelations are substantive (i.e., driven by higher order factors) or artifactual (i.e., driven by correlated measurement error). We used a meta-analytic multitrait-multirater study to estimate trait correlations after common method variance was controlled. Our results indicated that common method variance substantially inflates trait correlations, and, once controlled, correlations among the Big Five became relatively modest. We then evaluated whether two different theories of higher order factors could account for the pattern of Big Five trait correlations. Our results did not support Rushton and colleagues' (Rushton & Irwing, 2008; Rushton et al., 2009) proposed general factor of personality, but Digman's (1997) α and β metatraits (relabeled by DeYoung, Peterson, and Higgins (2002) as Stability and Plasticity, respectively) produced viable fit. However, our models showed considerable overlap between Stability and Emotional Stability and between Plasticity and Extraversion, raising the question of whether these metatraits are redundant with their dominant Big Five traits. This pattern of findings was robust when we included only studies whose observers were intimately acquainted with targets. Our results underscore the importance of using a multirater approach to studying personality and the need to separate the causes and outcomes of higher order metatraits from those of the Big Five. We discussed the implications of these findings for the array of research fields in which personality is studied.

  12. A method for direct, semi-quantitative analysis of gas phase samples using gas chromatography-inductively coupled plasma-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Kimberly E; Gerdes, Kirk

    2013-07-01

    A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. In order to implement the method, a series of empirical factors were generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant “unknown” gas mixture, experiments were performed for continuous 4- and 7-hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4-hour experiment, particularly during the first hour of the analysis, where the actual Hg concentration was underpredicted by up to 72%. The calculated concentration improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7-hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period. The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration. The MDL for Hg was calculated as 6.8 μg·m-3. This work describes the first complete GC–ICP-MS method to directly analyze gas phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
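    The semi-quantitative step reduces to scaling the analyte/internal-standard intensity ratio by an empirically determined relative response factor. The sketch below shows that generic calculation only; the function name, the single response factor and the numbers are illustrative assumptions, not the authors' exact algorithm.

    ```python
    def semiquant_conc(i_analyte, i_istd, c_istd, rrf):
        """Estimate an analyte concentration from measured ion intensities.

        i_analyte, i_istd : measured ion intensities (e.g., counts/s)
        c_istd            : known concentration of the internal standard in the gas
        rrf               : empirical relative response factor (analyte vs. standard)
        """
        return (i_analyte / i_istd) * c_istd / rrf

    # Illustrative numbers only (not from the paper):
    print(semiquant_conc(i_analyte=5.2e4, i_istd=1.3e5, c_istd=1.0, rrf=0.8))
    ```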

  13. Environmental monitoring of phenolic pollutants in water by cloud point extraction prior to micellar electrokinetic chromatography.

    PubMed

    Stege, Patricia W; Sombra, Lorena L; Messina, Germán A; Martinez, Luis D; Silva, María F

    2009-05-01

    Many aromatic compounds can be found in the environment as a result of anthropogenic activities and some of them are highly toxic. The need to determine low concentrations of pollutants requires analytical methods with high sensitivity, selectivity, and resolution for application to soil, sediment, water, and other environmental samples. Complex sample preparation involving analyte isolation and enrichment is generally necessary before the final analysis. The present paper outlines a novel, simple, low-cost, and environmentally friendly method for the simultaneous determination of p-nitrophenol (PNP), p-aminophenol (PAP), and hydroquinone (HQ) by micellar electrokinetic capillary chromatography after preconcentration by cloud point extraction. Enrichment factors of 180 to 200 were achieved. The limits of detection of the analytes for the preconcentration of a 50-mL sample volume were 0.10 μg L(-1) for PNP, 0.20 μg L(-1) for PAP, and 0.16 μg L(-1) for HQ. The optimized procedure was applied to the determination of phenolic pollutants in natural waters from San Luis, Argentina.

  14. Optimization of a Precolumn OPA Derivatization HPLC Assay for Monitoring of l-Asparagine Depletion in Serum during l-Asparaginase Therapy.

    PubMed

    Zhang, Mei; Zhang, Yong; Ren, Siqi; Zhang, Zunjian; Wang, Yongren; Song, Rui

    2018-06-06

    A method for monitoring l-asparagine (ASN) depletion in patients' serum using reversed-phase high-performance liquid chromatography with precolumn o-phthalaldehyde and ethanethiol (ET) derivatization is described. In order to improve the signal and stability of the analytes, several important factors including the precipitant reagent, derivatization conditions and detection wavelengths were optimized. The recovery of the analytes in the biological matrix was highest when 4% sulfosalicylic acid (1:1, v/v) was used as the precipitant reagent. Optimal fluorescence detection parameters were determined as λex = 340 nm and λem = 444 nm for maximal signal. The signal of the analytes was highest when the reagent ET and a borate buffer of pH 9.9 were used in the derivatization solution, and the corresponding derivative products were stable for up to 19 h. The validated method was successfully applied to monitor ASN depletion and l-aspartic acid, l-glutamine and l-glutamic acid levels in pediatric patients during l-asparaginase therapy.

  15. Transport, biodegradation and isotopic fractionation of chlorinated ethenes: modeling and parameter estimation methods

    NASA Astrophysics Data System (ADS)

    Béranger, Sandra C.; Sleep, Brent E.; Lollar, Barbara Sherwood; Monteagudo, Fernando Perez

    2005-01-01

    An analytical, one-dimensional, multi-species, reactive transport model for simulating the concentrations and isotopic signatures of tetrachloroethylene (PCE) and its daughter products was developed. The simulation model was coupled to a genetic algorithm (GA) combined with a gradient-based (GB) method to estimate the first order decay coefficients and enrichment factors. In testing with synthetic data, the hybrid GA-GB method reduced the computational requirements for parameter estimation by a factor as great as 300. The isotopic signature profiles were observed to be more sensitive than the concentration profiles to estimates of both the first order decay constants and enrichment factors. Including isotopic data for parameter estimation significantly increased the GA convergence rate and slightly improved the accuracy of estimation of first order decay constants.
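    For orientation only, the sketch below generates concentration and carbon-isotope profiles for the parent compound (PCE) under first-order decay and the standard Rayleigh approximation; the coupled daughter-product equations, transport terms and the hybrid GA-GB parameter estimation itself are not reproduced, and all parameter values are illustrative.

    ```python
    import numpy as np

    def pce_profiles(t, c0=1.0, k=0.05, delta0=-28.0, eps=-5.0):
        """Parent-compound concentration and carbon isotope signature.

        First-order decay C(t) = C0*exp(-k*t) plus the Rayleigh approximation
        delta(t) ~ delta0 + eps*ln(C/C0). Daughter products (TCE, cDCE, VC)
        and advection/dispersion from the full model are omitted in this sketch.
        k is in 1/day; eps (enrichment factor) and delta are in permil.
        """
        c = c0 * np.exp(-k * t)
        delta = delta0 + eps * np.log(c / c0)
        return c, delta

    t = np.linspace(0, 100, 11)   # days
    conc, d13c = pce_profiles(t)  # residual pool becomes isotopically enriched over time
    ```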

  16. High-throughput method for the quantitation of metabolites and co-factors from homocysteine-methionine cycle for nutritional status assessment.

    PubMed

    Da Silva, Laeticia; Collino, Sebastiano; Cominetti, Ornella; Martin, Francois-Pierre; Montoliu, Ivan; Moreno, Sergio Oller; Corthesy, John; Kaput, Jim; Kussmann, Martin; Monteiro, Jacqueline Pontes; Guiraud, Seu Ping

    2016-09-01

    There is increasing interest in the profiling and quantitation of methionine pathway metabolites for health management research. Currently, several analytical approaches are required to cover metabolites and co-factors. We report the development and the validation of a method for the simultaneous detection and quantitation of 13 metabolites in red blood cells. The method, validated in a cohort of healthy human volunteers, shows a high level of accuracy and reproducibility. The protocol provides robust coverage of central metabolites and co-factors in a single, high-throughput analysis. In large-scale clinical settings, the use of such an approach will significantly advance the field of nutritional research in health and disease.

  17. Chemometric-assisted QuEChERS extraction method for post-harvest pesticide determination in fruits and vegetables

    NASA Astrophysics Data System (ADS)

    Li, Minmin; Dai, Chao; Wang, Fengzhong; Kong, Zhiqiang; He, Yan; Huang, Ya Tao; Fan, Bei

    2017-02-01

    An effective analysis method was developed based on a chemometric tool for the simultaneous quantification of five different post-harvest pesticides (2,4-dichlorophenoxyacetic acid (2,4-D), carbendazim, thiabendazole, iprodione, and prochloraz) in fruits and vegetables. In the modified QuEChERS (quick, easy, cheap, effective, rugged and safe) method, the factors and responses for optimization of the extraction and cleanup analyses were compared using the Plackett-Burman (P-B) screening design. Furthermore, the significant factors (toluene percentage, hydrochloric acid (HCl) percentage, and graphitized carbon black (GCB) amount) were optimized using a central composite design (CCD) combined with Derringer’s desirability function (DF). The limits of quantification (LOQs) were estimated to be 1.0 μg/kg for 2,4-D, carbendazim, thiabendazole, and prochloraz, and 1.5 μg/kg for iprodione in food matrices. The mean recoveries were in the range of 70.4-113.9% with relative standard deviations (RSDs) of less than 16.9% at three spiking levels. The measurement uncertainty of the analytical method was determined using the bottom-up approach, which yielded an average value of 7.6%. Carbendazim was most frequently found in real samples analyzed using the developed method. Consequently, the analytical method can serve as an advantageous and rapid tool for determination of five preservative pesticides in fruits and vegetables.
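    The optimization step combines individual Derringer desirabilities into an overall desirability through a geometric mean. The sketch below shows that combination for larger-is-better responses only; the limits, weights and response values are illustrative assumptions, not the design settings used in the paper.

    ```python
    import numpy as np

    def d_larger_is_better(y, low, target, weight=1.0):
        """Derringer desirability for a response to maximize: 0 below 'low', 1 above 'target'."""
        d = (y - low) / (target - low)
        return float(np.clip(d, 0.0, 1.0)) ** weight

    def overall_desirability(ds):
        """Overall desirability = geometric mean of the individual desirabilities."""
        ds = np.asarray(ds, dtype=float)
        return ds.prod() ** (1.0 / len(ds))

    # Illustrative recoveries (%) of two pesticides at one design point:
    D = overall_desirability([d_larger_is_better(92, 70, 100),
                              d_larger_is_better(85, 70, 100)])
    print(D)
    ```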

  18. Chemometric-assisted QuEChERS extraction method for post-harvest pesticide determination in fruits and vegetables

    PubMed Central

    Li, Minmin; Dai, Chao; Wang, Fengzhong; Kong, Zhiqiang; He, Yan; Huang, Ya Tao; Fan, Bei

    2017-01-01

    An effective analysis method was developed based on a chemometric tool for the simultaneous quantification of five different post-harvest pesticides (2,4-dichlorophenoxyacetic acid (2,4-D), carbendazim, thiabendazole, iprodione, and prochloraz) in fruits and vegetables. In the modified QuEChERS (quick, easy, cheap, effective, rugged and safe) method, the factors and responses for optimization of the extraction and cleanup analyses were compared using the Plackett–Burman (P–B) screening design. Furthermore, the significant factors (toluene percentage, hydrochloric acid (HCl) percentage, and graphitized carbon black (GCB) amount) were optimized using a central composite design (CCD) combined with Derringer’s desirability function (DF). The limits of quantification (LOQs) were estimated to be 1.0 μg/kg for 2,4-D, carbendazim, thiabendazole, and prochloraz, and 1.5 μg/kg for iprodione in food matrices. The mean recoveries were in the range of 70.4–113.9% with relative standard deviations (RSDs) of less than 16.9% at three spiking levels. The measurement uncertainty of the analytical method was determined using the bottom-up approach, which yielded an average value of 7.6%. Carbendazim was most frequently found in real samples analyzed using the developed method. Consequently, the analytical method can serve as an advantageous and rapid tool for determination of five preservative pesticides in fruits and vegetables. PMID:28225030

  19. Chemometric-assisted QuEChERS extraction method for post-harvest pesticide determination in fruits and vegetables.

    PubMed

    Li, Minmin; Dai, Chao; Wang, Fengzhong; Kong, Zhiqiang; He, Yan; Huang, Ya Tao; Fan, Bei

    2017-02-22

    An effective analysis method was developed based on a chemometric tool for the simultaneous quantification of five different post-harvest pesticides (2,4-dichlorophenoxyacetic acid (2,4-D), carbendazim, thiabendazole, iprodione, and prochloraz) in fruits and vegetables. In the modified QuEChERS (quick, easy, cheap, effective, rugged and safe) method, the factors and responses for optimization of the extraction and cleanup analyses were compared using the Plackett-Burman (P-B) screening design. Furthermore, the significant factors (toluene percentage, hydrochloric acid (HCl) percentage, and graphitized carbon black (GCB) amount) were optimized using a central composite design (CCD) combined with Derringer's desirability function (DF). The limits of quantification (LOQs) were estimated to be 1.0 μg/kg for 2,4-D, carbendazim, thiabendazole, and prochloraz, and 1.5 μg/kg for iprodione in food matrices. The mean recoveries were in the range of 70.4-113.9% with relative standard deviations (RSDs) of less than 16.9% at three spiking levels. The measurement uncertainty of the analytical method was determined using the bottom-up approach, which yielded an average value of 7.6%. Carbendazim was most frequently found in real samples analyzed using the developed method. Consequently, the analytical method can serve as an advantageous and rapid tool for determination of five preservative pesticides in fruits and vegetables.

  20. Graphene oxide assisted electromembrane extraction with gas chromatography for the determination of methamphetamine as a model analyte in hair and urine samples.

    PubMed

    Bagheri, Hasan; Zavareh, Alireza Fakhari; Koruni, Mohammad Hossein

    2016-03-01

    In the present study, graphene oxide-reinforced two-phase electromembrane extraction (EME) coupled with gas chromatography was applied for the determination of methamphetamine, as a model analyte, in biological samples. The presence of graphene oxide in the hollow fiber wall can increase the effective surface area, the interactions with the analyte and the polarity of the supported liquid membrane, which leads to enhanced analyte migration. To investigate the influence of the presence of graphene oxide in the supported liquid membrane on the extraction efficiency, a comparative study was performed between the conventional EME and the graphene oxide/EME methods. Extraction parameters such as the type of organic solvent, pH of the donor phase, stirring speed, time, voltage, salt addition and concentration of graphene oxide were optimized. Under the optimum conditions, the proposed microextraction technique provided a low limit of detection (2.4 ng/mL), a high preconcentration factor (195-198) and high relative recovery (95-98.5%). Finally, the method was successfully employed for the determination of methamphetamine in urine and hair samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Simultaneous determination of eight flavonoids in propolis using chemometrics-assisted high performance liquid chromatography-diode array detection.

    PubMed

    Sun, Yan-Mei; Wu, Hai-Long; Wang, Jian-Yao; Liu, Zhi; Zhai, Min; Yu, Ru-Qin

    2014-07-01

    A fast analytical strategy based on a second-order calibration method using the alternating trilinear decomposition algorithm (ATLD), assisted by high performance liquid chromatography coupled with a diode array detector (HPLC-DAD), was established for the simultaneous determination of eight flavonoids (rutin, quercetin, luteolin, kaempferol, isorhamnetin, apigenin, galangin and chrysin) in propolis capsule samples. The chromatographic separation was implemented on a Wondasil™ C18 column (250 mm × 4.6 mm, 5 μm) within 13 min with a binary mobile phase composed of water with 1% formic acid and methanol at a flow rate of 1.0 mL min(-1), after the flavonoids were extracted with methanol alone by ultrasound extraction for 15 min. The baseline problem was overcome by treating background drift as additional components or factors alongside the target analytes, and ATLD was employed to handle the overlapping peaks from analytes of interest or from analytes and co-eluting matrix compounds. The linearity was good, with correlation coefficients no less than 0.9947; the limits of detection (LODs), within the range of 3.39-33.05 ng mL(-1), were sufficiently low; and the accuracy was confirmed by recoveries ranging from 91.9% to 110.2% and root-mean-square errors of prediction (RMSEPs) of less than 1.1 μg/mL. The results indicated that the chromatographic method aided by ATLD is efficient, sensitive and cost-effective and can achieve the resolution and accurate quantification of flavonoids even in the presence of interferences, thus providing an alternative method for the accurate quantification of analytes, especially when complete separation is not easily accomplished. The method was successfully applied to propolis capsule samples, and satisfactory results were obtained. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    PubMed

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

    In the research presented, we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles, a graphical decision-making tool, were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will fall within the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g(-1) in the sample) for both methyl and isopropyl p-toluenesulfonate. As a proof of concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
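    For a single future result, a β-expectation tolerance interval coincides with a prediction interval; the sketch below shows that simplest one-sample form. The accuracy-profile construction in the paper combines repeatability and intermediate-precision variance components per concentration level, which is not reproduced here, and the recovery values are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def beta_expectation_interval(x, beta=0.95):
        """One-sample beta-expectation tolerance interval (equivalent to a
        prediction interval for one future result): mean +/- t * s * sqrt(1 + 1/n).
        Only the simplest i.i.d. case is sketched here."""
        x = np.asarray(x, dtype=float)
        n, m, s = x.size, x.mean(), x.std(ddof=1)
        t = stats.t.ppf((1 + beta) / 2, df=n - 1)
        half = t * s * np.sqrt(1 + 1 / n)
        return m - half, m + half

    # Illustrative recoveries (%) at one validation level:
    print(beta_expectation_interval([98.7, 101.2, 99.5, 100.8, 99.9, 100.3]))
    ```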

  3. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

    The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of the interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of the PDFs are difficult to specify in advance. If the shapes of the PDFs are known, different optimization schemes can be applied to experimental data in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where the experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10(4)). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in the controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled as a constrained stochastic process. Constrained systems are quite common, and this makes the method useful for various applications.
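    Since the abstract is framed around the SSA/MC machinery being replaced, a minimal direct-method Gillespie simulation of a birth-death process is sketched below as a reference point. It is generic; it does not implement the paper's constrained processes or the proposed analytical replacement, and the rate constants are illustrative.

    ```python
    import numpy as np

    def gillespie_birth_death(k_birth=1.0, k_death=0.1, n0=0, t_end=100.0, seed=0):
        """Direct-method SSA for a birth-death process:
        0 -> X with propensity k_birth, X -> 0 with propensity k_death * n."""
        rng = np.random.default_rng(seed)
        t, n = 0.0, n0
        times, counts = [t], [n]
        while t < t_end:
            a1, a2 = k_birth, k_death * n
            a0 = a1 + a2
            t += rng.exponential(1.0 / a0)             # time to next event
            n += 1 if rng.random() < a1 / a0 else -1   # choose which event fires
            times.append(t)
            counts.append(n)
        return np.array(times), np.array(counts)

    times, counts = gillespie_birth_death()
    ```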

  4. Selecting Evaluation Comparison Groups: A Cluster Analytic Approach.

    ERIC Educational Resources Information Center

    Davis, Todd Mclin; McLean, James E.

    A persistent problem in the evaluation of field-based projects is the lack of no-treatment comparison groups. Frequently, potential comparison groups are confounded by socioeconomic, racial, or other factors. Among the possible methods for dealing with this problem are various matching procedures, but they are cumbersome to use with multiple…

  5. 40 CFR 761.20 - Prohibitions and exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... present an unreasonable risk of injury to health within the United States. This finding is based upon the... any scientifically acceptable analytical method, may be significant, depending on such factors as the... burners in the automotive industry may burn used oil generated from automotive sources in used oil-fired...

  6. 40 CFR 761.20 - Prohibitions and exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... present an unreasonable risk of injury to health within the United States. This finding is based upon the... any scientifically acceptable analytical method, may be significant, depending on such factors as the... burners in the automotive industry may burn used oil generated from automotive sources in used oil-fired...

  7. Communication Network Analysis Methods.

    ERIC Educational Resources Information Center

    Farace, Richard V.; Mabee, Timothy

    This paper reviews a variety of analytic procedures that can be applied to network data, discussing the assumptions and usefulness of each procedure when applied to the complexity of human communication. Special attention is paid to the network properties measured or implied by each procedure. Factor analysis and multidimensional scaling are among…

  8. Finding Optimal Gains In Linear-Quadratic Control Problems

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.; Scheid, Robert E., Jr.

    1990-01-01

    An analytical method based on Volterra factorization leads to new approximations for the optimal control gains in the finite-time linear-quadratic control problem for a system having an infinite number of dimensions. The approach circumvents the need to analyze and solve Riccati equations and provides a more transparent connection between the dynamics of the system and the optimal gain.

  9. The Latent Structure of Child Depression: A Taxometric Analysis

    ERIC Educational Resources Information Center

    Richey, J. Anthony; Schmidt, Norman B.; Lonigan, Christopher J.; Phillips, Beth M.; Catanzaro, Salvatore J.; Laurent, Jeff; Gerhardstein, Rebecca R.; Kotov, Roman

    2009-01-01

    Background: The current study examined the categorical versus continuous nature of child and adolescent depression among three samples of children and adolescents ranging from 5 to 19 years. Methods: Depression was measured using the Children's Depression Inventory (CDI). Indicators derived from the CDI were based on factor analytic research on…

  10. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    PubMed

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD)-based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and pharmaceutical analytical technology (PAT).

  11. Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics

    PubMed Central

    Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD)-based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723

  12. Rapid and sensitive determination of tellurium in soil and plant samples by sector-field inductively coupled plasma mass spectrometry.

    PubMed

    Yang, Guosheng; Zheng, Jian; Tagami, Keiko; Uchida, Shigeo

    2013-11-15

    In this work, we report a rapid and highly sensitive analytical method for the determination of tellurium in soil and plant samples using sector field inductively coupled plasma mass spectrometry (SF-ICP-MS). Soil and plant samples were digested using aqua regia. After appropriate dilution, Te in the soil and plant samples was directly analyzed without any separation or preconcentration. This simple sample preparation approach avoided, to the maximum extent possible, contamination and loss of Te prior to the analysis. The developed analytical method was validated by the analysis of soil/sediment and plant reference materials. Satisfactory detection limits of 0.17 ng g(-1) for soil and 0.02 ng g(-1) for plant samples were achieved, which means that the developed method is applicable to studying the soil-to-plant transfer factor of Te. Our work provides, for the first time, data on the soil-to-plant transfer factor of Te for Japanese samples, which can be used for the estimation of the internal radiation dose from radioactive tellurium due to the Fukushima Daiichi Nuclear Power Plant accident. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Determination of immersion factors for radiance sensors in marine and inland waters: a semi-analytical approach using refractive index approximation

    NASA Astrophysics Data System (ADS)

    Dev, Pravin J.; Shanmugam, P.

    2016-05-01

    Underwater radiometers are generally calibrated in air using a standard source. Immersion factors are required for these radiometers to account for the change in in-water measurements relative to in-air measurements due to the different refractive index of the medium. The immersion factors previously determined for the RAMSES series of commercial radiometers manufactured by TriOS are applicable to clear oceanic waters. In typical inland and turbid productive coastal waters, these experimentally determined immersion factors yield significantly large errors in water-leaving radiances (Lw) and hence remote sensing reflectances (Rrs). To overcome this limitation, a semi-analytical method based on a refractive index approximation is proposed in this study, with the aim of obtaining reliable Lw and Rrs from RAMSES radiometers for turbid and productive waters within coastal and inland water environments. We also briefly show the effect of the pure-water immersion factors (Ifw) and the newly derived If on Lw and Rrs for clear and turbid waters. Remaining problems other than the immersion factor coefficients, such as transmission and the air-water and water-air Fresnel reflectances, are also discussed.

  14. Identifying Factors that Influence State-Specific Hunger Rates in the U.S.: A Simple Analytic Method for Understanding a Persistent Problem

    ERIC Educational Resources Information Center

    Edwards, Mark Evan; Weber, Bruce; Bernell, Stephanie

    2007-01-01

    An existing measure of food insecurity with hunger in the United States may serve as an effective indicator of quality of life. State level differences in that measure can reveal important differences in quality of life across places. In this study, we advocate and demonstrate two simple methods by which analysts can explore state-specific…

  15. Quantitative method of measuring cancer cell urokinase and metastatic potential

    NASA Technical Reports Server (NTRS)

    Morrison, Dennis R. (Inventor)

    1993-01-01

    The metastatic potential of tumors can be evaluated by the quantitative detection of urokinase and DNA. The cell sample selected for examination is analyzed for the presence of high levels of urokinase and abnormal DNA using analytical flow cytometry and digital image analysis. Other factors such as membrane associated urokinase, increased DNA synthesis rates and certain receptors can be used in the method for detection of potentially invasive tumors.

  16. Error analysis regarding the calculation of nonlinear force-free field

    NASA Astrophysics Data System (ADS)

    Liu, S.; Zhang, H. Q.; Su, J. T.

    2012-02-01

    Magnetic field extrapolation is an alternative method for studying chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352:343, 1990) are used to study the errors of nonlinear force-free (NLFF) fields based on the force-free factor α. Three NLFF fields are extrapolated by the approximate vertical integration (AVI; Song et al., Astrophys. J. 649:1084, 2006), boundary integral equation (BIE; Yan and Sakurai, Sol. Phys. 195:89, 2000) and optimization (Opt.; Wiegelmann, Sol. Phys. 219:87, 2004) methods. Compared with the first semi-analytical field, it is found that the mean values of the absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for the AVI, BIE and Opt. fields, respectively, while for the second semi-analytical field they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55. For the analytical field itself, the calculation error of <|RSD|> is about 0.1-0.2. It is also found that RSD does not depend appreciably on the length of the field line. These results provide a basic estimate of the deviation of the extrapolated fields obtained by the proposed methods from a true force-free field.
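    The diagnostic quoted above is simple to evaluate once α has been sampled along traced field lines; the sketch below shows only that final step, with the field-line tracing assumed to have been done elsewhere.

    ```python
    import numpy as np

    def mean_abs_rsd(alpha_along_lines):
        """Mean absolute relative standard deviation of the force-free factor alpha.

        alpha_along_lines : list of 1-D arrays, alpha sampled along each traced field line.
        A perfectly force-free field gives RSD = 0 on every line; larger values indicate
        departure from the force-free condition. Sketch of the diagnostic only."""
        rsd = [np.std(a) / abs(np.mean(a)) for a in alpha_along_lines if np.mean(a) != 0]
        return float(np.mean(rsd))
    ```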

  17. Electron scattering intensities and Patterson functions of Skyrmions

    NASA Astrophysics Data System (ADS)

    Karliner, M.; King, C.; Manton, N. S.

    2016-06-01

    The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.
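    In the Born approximation, the first method amounts to Fourier-transforming the spherically averaged charge density. The sketch below evaluates that textbook integral numerically for a hypothetical Gaussian density; it does not reproduce the Skyrme-model densities or the Patterson-function route, and the grid and parameters are assumptions.

    ```python
    import numpy as np

    def form_factor(q, r, rho):
        """Charge form factor of a spherically averaged density in the Born
        approximation: F(q) = 4*pi * integral of rho(r) * sin(qr)/(qr) * r^2 dr.
        Simple rectangle-rule quadrature on a uniform radial grid."""
        q = np.atleast_1d(np.asarray(q, dtype=float))
        dr = r[1] - r[0]
        qr = np.outer(q, r)
        j0 = np.ones_like(qr)          # sin(x)/x -> 1 as x -> 0
        nz = qr > 0
        j0[nz] = np.sin(qr[nz]) / qr[nz]
        return 4.0 * np.pi * np.sum(rho * r**2 * j0, axis=1) * dr

    # Example: Gaussian density normalized to unit charge, so F(0) ~ 1.
    r = np.linspace(0.0, 10.0, 2000)
    a = 1.5
    rho = np.exp(-(r / a) ** 2) / (np.pi ** 1.5 * a ** 3)
    F = form_factor(np.linspace(0.0, 3.0, 50), r, rho)
    ```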

  18. Analytical method for promoting process capability of shock absorption steel.

    PubMed

    Sung, Wen-Pei; Shih, Ming-Hsiang; Chen, Kuen-Suan

    2003-01-01

    Mechanical properties and low-cycle fatigue are two factors that must be considered in developing new types of steel for shock absorption. Process capability and process control are significant factors in achieving the purpose of research and development programs. Often-used evaluation methods fail to measure process yield and process centering, so this paper uses the Taguchi loss function as a basis to establish an evaluation method, and the steps for applying it, for assessing the quality of the mechanical properties and the process control of an iron and steel manufacturer. The established method can serve the research and development and manufacturing industries and lay a foundation for enhancing process control, allowing manufacturing processes to be selected more reliably than with the other commonly used decision-making methods.
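    The evaluation builds on Taguchi's quadratic loss. A minimal sketch of the expected-loss calculation is given below, with the loss coefficient calibrated from a hypothetical specification limit; the paper's specific capability evaluation built on this loss is not reproduced, and all numbers are illustrative.

    ```python
    def expected_taguchi_loss(mu, sigma, target, spec_half_width, loss_at_spec):
        """Expected quadratic (Taguchi) loss for a nominal-is-best characteristic:
        E[L] = k * ((mu - target)^2 + sigma^2), with k = loss_at_spec / spec_half_width**2."""
        k = loss_at_spec / spec_half_width ** 2
        return k * ((mu - target) ** 2 + sigma ** 2)

    # Illustrative numbers only:
    print(expected_taguchi_loss(mu=10.2, sigma=0.3, target=10.0,
                                spec_half_width=1.0, loss_at_spec=50.0))
    ```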

  19. Advantages of using tetrahydrofuran-water as mobile phases in the quantitation of cyclosporin A in monkey and rat plasma by liquid chromatography-tandem mass spectrometry.

    PubMed

    Li, Austin C; Li, Yinghe; Guirguis, Micheal S; Caldwell, Robert G; Shou, Wilson Z

    2007-01-04

    A new analytical method is described here for the quantitation of the anti-inflammatory drug cyclosporin A (CyA) in monkey and rat plasma. The method used tetrahydrofuran (THF)-water mobile phases to elute the analyte and the internal standard, cyclosporin C (CyC). The gradient mobile phase program successfully eluted CyA as a sharp peak and therefore improved the resolution between the analyte and possible interfering materials compared with previously reported analytical approaches, in which CyA was eluted as a broad peak due to the rapid conversion between different conformers. The sharp peak resulting from this method facilitated the quantitative calculation, as multiple smoothing and large bunching factors were not necessary. The chromatography in the new method was performed at 30 degrees C instead of the 65-70 degrees C reported previously. Other advantages of the method included simple and fast sample extraction by protein precipitation, direct injection of the extraction supernatant onto the column for analysis, and elimination of the evaporation and reconstitution steps that were needed in the solid phase extraction or liquid-liquid extraction methods reported before. This method is amenable to high-throughput analysis with a total chromatographic run time of 3 min. The approach has been verified as sensitive, linear (0.977-4000 ng/mL), accurate and precise for the quantitation of CyA in monkey and rat plasma. However, compared with the use of conventional mobile phases, the only drawback of this approach was a reduced detection response from the mass spectrometer, possibly caused by poor desolvation in the ionization source. This is the first report to demonstrate the advantages of using THF-water mobile phases to elute CyA in liquid chromatography.

  20. Q-controlled amplitude modulation atomic force microscopy in liquids: An analysis

    NASA Astrophysics Data System (ADS)

    Hölscher, H.; Schwarz, U. D.

    2006-08-01

    An analysis of amplitude modulation atomic force microscopy in liquids is presented with respect to the application of the Q-Control technique. The equation of motion is solved by numerical and analytic methods with and without Q-Control in the presence of a simple model interaction force adequate for many liquid environments. In addition, the authors give an explicit analytical formula for the tip-sample indentation showing that higher Q factors reduce the tip-sample force. It is found that Q-Control suppresses unwanted deformations of the sample surface, leading to the enhanced image quality reported in several experimental studies.

  1. High-performance heat pipes for heat recovery applications

    NASA Technical Reports Server (NTRS)

    Saaski, E. W.; Hartl, J. H.

    1980-01-01

    Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.

  2. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
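    For orientation, the sketch below evaluates the classical first-order (Cornell) safety index for independent, normally distributed strength and stress variables, together with the corresponding reliability. The paper's accumulative and propagation uncertainty-error terms are not included, and all numbers are illustrative.

    ```python
    import math

    def safety_index(mu_strength, sigma_strength, mu_stress, sigma_stress):
        """Classical first-order safety index for independent normal strength/stress:
        beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2). The design-uncertainty
        error terms discussed in the paper are omitted in this sketch."""
        return (mu_strength - mu_stress) / math.hypot(sigma_strength, sigma_stress)

    def reliability(beta):
        """Reliability = Phi(beta) for the normal case."""
        return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

    beta = safety_index(mu_strength=60.0, sigma_strength=3.0,
                        mu_stress=45.0, sigma_stress=4.0)
    print(beta, reliability(beta))
    ```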

  3. Uncertainty of relative sensitivity factors in glow discharge mass spectrometry

    NASA Astrophysics Data System (ADS)

    Meija, Juris; Methven, Brad; Sturgeon, Ralph E.

    2017-10-01

    The concept of the relative sensitivity factors required for the correction of the measured ion beam ratios in pin-cell glow discharge mass spectrometry is examined in detail. We propose a data-driven model for predicting the relative response factors, which relies on a non-linear least squares adjustment and analyte/matrix interchangeability phenomena. The model provides a self-consistent set of response factors for any analyte/matrix combination of any element that appears as either an analyte or matrix in at least one known response factor.

  4. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  5. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  6. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  7. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  8. 7 CFR 94.303 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  9. SAM Radiochemical Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.

  10. Development of a selective and sensitive flotation method for determination of trace amounts of cobalt, nickel, copper and iron in environmental samples.

    PubMed

    Karimi, H; Ghaedi, M; Shokrollahi, A; Rajabi, H R; Soylak, M; Karami, B

    2008-02-28

    A simple, selective and rapid flotation method for the separation-preconcentration of trace amounts of cobalt, nickel, iron and copper ions using phenyl 2-pyridyl ketone oxime (PPKO) has been developed prior to their determination by flame atomic absorption spectrometry. The influences of pH, the amount of PPKO as collector, the type and amount of eluting agent, the type and amount of surfactant as floating agent, and the ionic strength on the recoveries of the analytes were evaluated. The influences of concomitant ions on the recoveries of the analyte ions were also examined. The enrichment factor was 93. The detection limits, based on 3 sigma, for Cu, Ni, Co and Fe were 0.7, 0.7, 0.8, and 0.7 ng mL(-1), respectively. The method has been successfully applied to the determination of trace amounts of these ions in various real samples.

  11. A Method for Calculating Viscosity and Thermal Conductivity of a Helium-Xenon Gas Mixture

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2006-01-01

    A method for calculating viscosity and thermal conductivity of a helium-xenon (He-Xe) gas mixture was employed, and results were compared to AiResearch (part of Honeywell) analytical data. The method of choice was that presented by Hirschfelder with Singh's third-order correction factor applied to thermal conductivity. Values for viscosity and thermal conductivity were calculated over a temperature range of 400 to 1200 K for He-Xe gas mixture molecular weights of 20.183, 39.94, and 83.8 kg/kmol. First-order values for both transport properties were in good agreement with AiResearch analytical data. Third-order-corrected thermal conductivity values were all greater than AiResearch data, but were considered to be a better approximation of thermal conductivity because higher-order effects of mass and temperature were taken into consideration. Viscosity, conductivity, and Prandtl number were then compared to experimental data presented by Taylor.
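    The Hirschfelder-based calculation with Singh's third-order correction is not reproduced here; as a rough stand-in, the sketch below applies Wilke's commonly used semi-empirical mixing rule for the viscosity of a binary He-Xe mixture. The pure-component viscosities are illustrative placeholders, not the values used in the report.

    ```python
    import numpy as np

    def wilke_mixture_viscosity(x, mu, M):
        """Wilke's semi-empirical rule for the viscosity of a gas mixture.

        x : mole fractions, mu : pure-component viscosities, M : molar masses.
        A common first-order approximation shown only to illustrate the kind of
        mixture rule involved; not the Hirschfelder/Singh formulation of the paper."""
        x, mu, M = map(np.asarray, (x, mu, M))
        n = len(x)
        mu_mix = 0.0
        for i in range(n):
            denom = 0.0
            for j in range(n):
                phi = (1 + np.sqrt(mu[i] / mu[j]) * (M[j] / M[i]) ** 0.25) ** 2 \
                      / np.sqrt(8 * (1 + M[i] / M[j]))
                denom += x[j] * phi
            mu_mix += x[i] * mu[i] / denom
        return mu_mix

    # He-Xe example with illustrative viscosities (Pa*s) at some temperature:
    print(wilke_mixture_viscosity(x=[0.7, 0.3], mu=[2.0e-5, 2.4e-5], M=[4.0026, 131.29]))
    ```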

  12. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  13. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  14. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  15. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  16. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  17. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  18. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  19. 7 CFR 98.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...

  20. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  1. 7 CFR 93.4 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  2. Trends in tungsten coil atomic spectrometry

    NASA Astrophysics Data System (ADS)

    Donati, George L.

    Renewed interest in electrothermal atomic spectrometric methods based on tungsten coil atomizers is a consequence of a worldwide increasing demand for fast, inexpensive, sensitive, and portable analytical methods for trace analysis. In this work, tungsten coil atomic absorption spectrometry (WCAAS) and tungsten coil atomic emission spectrometry (WCAES) are used to determine several different metals and even a non-metal at low levels in different samples. Improvements in instrumentation and new strategies to reduce matrix effects and background signals are presented. Investigation of the main factors affecting both WCAAS and WCAES analytical signals points to the importance of a reducing, high-temperature gas phase in the processes leading to atomic cloud generation. Some more refractory elements such as V and Ti were determined for the first time by double tungsten coil atomic emission spectrometry (DWCAES). The higher temperatures provided by two atomizers in DWCAES also allowed the detection of Ag, Cu and Sn emission signals for the first time. Simultaneous determination of several elements by WCAES in relatively complex sample matrices was possible after a simple acid extraction. The results show the potential of this method as an alternative to more traditional, expensive methods for fast, more effective analyses and applications in the field. The development of a new metallic atomization cell is also presented. Lower limits of detection in both WCAAS and WCAES determinations were obtained owing to factors such as better control of the background signal and a smaller, more isothermal system that keeps the atomic cloud concentrated in the optical path for a longer period of time. Tungsten coil-based methods are especially well suited to applications requiring low sample volume, low cost, sensitivity and portability. Both WCAAS and WCAES have great commercial potential in fields as diverse as archeology and industrial quality control. They are simple, inexpensive, effective methods for trace metal determinations in several different samples, representing an important asset in today's analytical chemistry.

  3. Integration of Gas Chromatography Mass Spectrometry Methods for Differentiating Ricin Preparation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wunschel, David S.; Melville, Angela M.; Ehrhardt, Christopher J.

    2012-05-17

    The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of the castor plant Ricinus communis. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatographic-mass spectrometric (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid and acetone amounts each provided information capable of differentiating types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundances of mannose, arabinose, fucose, ricinoleic acid and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation.
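    As a rough stand-in for the multivariate factor analysis used to integrate the three data sets, the sketch below computes principal-component scores from a small samples-by-features table. PCA is only a proxy for the paper's exact factor-analysis procedure, and the feature values are made up.

    ```python
    import numpy as np

    def pca_scores(X, n_components=2):
        """Principal-component scores of a samples x features table
        (e.g., carbohydrate, ricinoleic acid and solvent measurements per sample).
        Columns are mean-centered and scaled to unit variance before the SVD."""
        X = np.asarray(X, dtype=float)
        Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        return U[:, :n_components] * s[:n_components]

    # Illustrative 4-sample x 3-feature table (values are invented):
    X = [[1.2, 0.8, 0.0],
         [1.1, 0.9, 0.1],
         [0.3, 2.5, 1.7],
         [0.4, 2.4, 1.6]]
    print(pca_scores(X))
    ```

    Plotting such scores is the usual way to visualize whether preparations cluster by method rather than by seed variety.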

  4. Systematic Development and Validation of a Thin-Layer Densitometric Bioanalytical Method for Estimation of Mangiferin Employing Analytical Quality by Design (AQbD) Approach.

    PubMed

    Khurana, Rajneet Kaur; Rao, Satish; Beg, Sarwar; Katare, O P; Singh, Bhupinder

    2016-01-01

    The present work aims at the systematic development of a simple, rapid and highly sensitive densitometry-based thin-layer chromatographic method for the quantification of mangiferin in bioanalytical samples. Initially, the quality target method profile was defined and critical analytical attributes (CAAs) earmarked, namely, retardation factor (Rf), peak height, capacity factor, theoretical plates and separation number. Face-centered cubic design was selected for optimization of volume loaded and plate dimensions as the critical method parameters selected from screening studies employing D-optimal and Plackett-Burman design studies, followed by evaluating their effect on the CAAs. The mobile phase containing a mixture of ethyl acetate : acetic acid : formic acid : water in a 7 : 1 : 1 : 1 (v/v/v/v) ratio was finally selected as the optimized solvent for apt chromatographic separation of mangiferin at 262 nm with Rf 0.68 ± 0.02 and all other parameters within the acceptance limits. Method validation studies revealed high linearity in the concentration range of 50-800 ng/band for mangiferin. The developed method showed high accuracy, precision, ruggedness, robustness, specificity, sensitivity, selectivity and recovery. In a nutshell, the bioanalytical method for analysis of mangiferin in plasma revealed the presence of well-resolved peaks and high recovery of mangiferin. © The Author 2016. Published by Oxford University Press. All rights reserved.
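
    For orientation, the sketch below lays out a two-factor face-centred central composite design (alpha = 1) in coded units, of the kind the abstract describes for co-optimising loaded volume and plate dimension; the number of centre replicates is an assumption.

      # Two-factor face-centred CCD: corner runs, face-centred star runs,
      # and replicate centre runs, all in coded (-1, 0, +1) units.
      import itertools
      import numpy as np

      factorial = list(itertools.product([-1, 1], repeat=2))   # corner runs
      axial = [(-1, 0), (1, 0), (0, -1), (0, 1)]                # face-centred star runs
      center = [(0, 0)] * 3                                     # centre replicates (assumed)

      design = np.array(factorial + axial + center, dtype=float)
      print(design)   # decode with x = midpoint + coded_value * half_range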

  5. Designing a Double-Pole Nanoscale Relay Based on a Carbon Nanotube: A Theoretical Study

    NASA Astrophysics Data System (ADS)

    Mu, Weihua; Ou-Yang, Zhong-can; Dresselhaus, Mildred S.

    2017-08-01

    We theoretically investigate a novel and powerful double-pole nanoscale relay based on a carbon nanotube, a nanoelectromechanical switch able to operate under strong nuclear radiation, and analyze the physical mechanism of its operating stages, including "pull in," "connection," and "pull back," as well as the key factors influencing device efficiency. We explicitly provide analytical expressions for the two important operation voltages, V_pull-in and V_pull-back, thereby clearly showing their dependence on the material properties and geometry of the present devices, obtained by an analytical method from basic physics that avoids complex numerical calculations. Our method is easy to use in preparing the design guide for fabricating the present device and other nanoelectromechanical devices.

  6. Acrylamide analysis in food by liquid chromatographic and gas chromatographic methods.

    PubMed

    Elbashir, Abdalla A; Omar, Mei M Ali; Ibrahim, Wan Aini Wan; Schmitz, Oliver J; Aboul-Enein, Hassan Y

    2014-01-01

    Acrylamide (AA) is a compound classified as carcinogenic to humans by the International Agency for Research on Cancer. It was first discovered to be present in certain heated processed food by the Swedish National Food Administration (SNFA) and University of Stockholm in early 2002. The major pathway for AA formation in food is the Maillard reaction between reducing sugar and the amino acid asparagine at high temperature. Since the discovery of AA's presence in food, many analytical methods have been developed for determination of AA contents in different food matrices. Also, several studies have been conducted to develop extraction procedures for AA from difficult food matrices. AA is a small, highly polar molecule, which makes its extraction and analysis challenging. Many articles and reviews have been published dealing with AA in food. The aim of the review is to discuss AA formation in food, the factors affecting AA formation and removal, AA exposure assessment, AA extraction and cleanup from food samples, and analytical methods used in AA determination, such as high-performance liquid chromatography (HPLC) and gas chromatography (GC). Special attention is given to sample extraction and cleanup procedures and analytical techniques used for AA determination.

  7. TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.

    PubMed

    Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas

    2017-01-01

    Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
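
    The sketch below strings together the two generic building blocks named above, NMF topic extraction and a 2D embedding, using plain scikit-learn NMF and t-SNE on a toy corpus; it does not reproduce TopicLens's custom efficient topic modeling or its semi-supervised embedding.

      # Toy pipeline: TF-IDF -> NMF document-topic weights -> 2D t-SNE layout.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.decomposition import NMF
      from sklearn.manifold import TSNE

      docs = ["topic modeling of documents", "visual analytics of text",
              "matrix factorization for topics", "interactive lens exploration"] * 5

      X = TfidfVectorizer().fit_transform(docs)
      W = NMF(n_components=3, init="nndsvd", random_state=0).fit_transform(X)
      xy = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(W)
      print(xy.shape)   # one 2D point per document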

  8. Deep eutectic solvent-based ultrasound-assisted dispersive liquid-liquid microextraction coupled with high-performance liquid chromatography for the determination of ultraviolet filters in water samples.

    PubMed

    Wang, Huazi; Hu, Lu; Liu, Xinya; Yin, Shujun; Lu, Runhua; Zhang, Sanbing; Zhou, Wenfeng; Gao, Haixiang

    2017-09-22

    In the present study, a simple and rapid sample preparation method designated ultrasound-assisted dispersive liquid-liquid microextraction based on a deep eutectic solvent (DES) followed by high-performance liquid chromatography with ultraviolet (UV) detection (HPLC-UVD) was developed for the extraction and determination of UV filters from water samples. The model analytes were 2,4-dihydroxybenzophenone (BP-1), benzophenone (BP) and 2-hydroxy-4-methoxybenzophenone (BP-3). The hydrophobic DES was prepared by mixing trioctylmethylammonium chloride (TAC) and decanoic acid (DecA). The effects of various influencing factors (selection of the extractant, amount of DES, ultrasound duration, salt addition, sample volume, sample pH, centrifuge rate and duration) on UV filter recovery were systematically investigated. Under optimal conditions, the proposed method provided good recoveries in the range of 90.2-103.5% and relative standard deviations (inter-day and intra-day precision, n=5) below 5.9%. The enrichment factors for the analytes ranged from 67 to 76. The limits of detection varied from 0.15 to 0.30 ng mL(-1), depending on the analytes. The linearities were between 0.5 and 500 ng mL(-1) for BP-1 and BP and between 1 and 500 ng mL(-1) for BP-3, with coefficients of determination greater than 0.99. Finally, the proposed method was applied to the determination of UV filters in swimming pool and river water samples, and acceptable relative recoveries ranging from 82.1 to 106.5% were obtained. Copyright © 2017. Published by Elsevier B.V.
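
    The enrichment factor and recovery figures quoted above follow from simple ratios; the sketch below shows the usual arithmetic with invented numbers (the extract-phase volume is an assumption, as it is not restated here).

      # Enrichment factor EF = C_extract / C_sample;
      # extraction recovery ER(%) = EF * V_extract / V_sample * 100.
      sample_volume_mL = 20.0     # aqueous sample volume (assumed)
      extract_volume_mL = 0.25    # settled DES phase volume (assumed)
      c_sample = 10.0             # spiked concentration, ng/mL (invented)
      c_extract = 720.0           # concentration found in the extract, ng/mL (invented)

      ef = c_extract / c_sample
      er = 100.0 * ef * extract_volume_mL / sample_volume_mL
      print(f"EF = {ef:.0f}, recovery = {er:.1f}%")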

  9. 40 CFR 161.180 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...

  10. An Analytic Hierarchy Process-based Method to Rank the Critical Success Factors of Implementing a Pharmacy Barcode System.

    PubMed

    Alharthi, Hana; Sultana, Nahid; Al-Amoudi, Amjaad; Basudan, Afrah

    2015-01-01

    Pharmacy barcode scanning is used to reduce errors during the medication dispensing process. However, this technology has rarely been used in hospital pharmacies in Saudi Arabia. This article describes the barriers to successful implementation of a barcode scanning system in Saudi Arabia. A literature review was conducted to identify the relevant critical success factors (CSFs) for a successful dispensing barcode system implementation. Twenty-eight pharmacists from a local hospital in Saudi Arabia were interviewed to obtain their perception of these CSFs. In this study, planning (process flow issues and training requirements), resistance (fear of change, communication issues, and negative perceptions about technology), and technology (software, hardware, and vendor support) were identified as the main barriers. The analytic hierarchy process (AHP), one of the most widely used tools for decision making in the presence of multiple criteria, was used to compare and rank these identified CSFs. The results of this study suggest that resistance barriers have a greater impact than planning and technology barriers. In particular, fear of change is the most critical factor, and training is the least critical factor.
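
    To make the AHP step concrete, the sketch below computes priority weights and a consistency ratio from a hypothetical 3x3 pairwise comparison of the three barrier groups (resistance, planning, technology); the judgments are invented and are not the study's survey data.

      # AHP: the principal eigenvector of the pairwise comparison matrix gives
      # the priority weights; the consistency ratio checks judgment coherence.
      import numpy as np

      A = np.array([[1.0, 3.0, 4.0],     # resistance vs planning, technology (invented)
                    [1/3, 1.0, 2.0],     # planning
                    [1/4, 1/2, 1.0]])    # technology

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                   # priority vector

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
      cr = ci / 0.58                                 # Saaty's random index for n = 3
      print("priorities:", w.round(3), "CR:", round(cr, 3))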

  11. Approaches to acceptable risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whipple, C

    Several alternative approaches to address the question "How safe is safe enough?" are reviewed and an attempt is made to apply the reasoning behind these approaches to the issue of acceptability of radiation exposures received in space. The approaches to the issue of the acceptability of technological risk described here are primarily analytical, and are drawn from examples in the management of environmental health risks. These include risk-based approaches, in which specific quantitative risk targets determine the acceptability of an activity, and cost-benefit and decision analysis, which generally focus on the estimation and evaluation of risks, benefits and costs, in a framework that balances these factors against each other. These analytical methods tend by their quantitative nature to emphasize the magnitude of risks, costs and alternatives, and to downplay other factors, especially those that are not easily expressed in quantitative terms, that affect acceptance or rejection of risk. Such other factors include the issues of risk perceptions and how and by whom risk decisions are made.

  12. Analytical solutions for solute transport in groundwater and riverine flow using Green's Function Method and pertinent coordinate transformation method

    NASA Astrophysics Data System (ADS)

    Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen

    2017-04-01

    In this study, analytical solutions of one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed in groundwater and riverine flow using both Green's Function Method (GFM) and a pertinent coordinate transformation method. Dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence is considered linear, exponentially and asymptotically decelerating and accelerating. Our proposed analytical solutions are derived for three different situations depending on the variations of dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to a steady solute transport situation in steady flow in which dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case indicates transient solute transport in unsteady flow in which both dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by heterogeneity of the medium as well as by other factors like velocity fluctuations, while velocity is influenced by water table slope and recharge rate. These capabilities give the proposed method an advantage over previously existing analytical solutions in the range of hydrological problems it can address. In particular, to the authors' knowledge, no other solution exists for both spatially and temporally varying dispersion coefficient and velocity. In this study, the existing analytical solutions from previous widely known studies are used for comparison as validation tools to verify the proposed analytical solution as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the developed 1D finite difference code (FDM). All such solutions show a perfect match with the respective proposed solutions.
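
    For orientation only, the sketch below evaluates the simplest special case underlying such Green's-function solutions: the constant-coefficient one-dimensional advection-dispersion response to an instantaneous point source. The paper's variable-coefficient solutions reduce to this form when velocity and dispersion are uniform; the parameter values are arbitrary.

      # Constant-coefficient 1D advection-dispersion Green's function for an
      # instantaneous unit-mass point source released at x = 0, t = 0.
      import numpy as np

      def c_point_source(x, t, M=1.0, u=0.5, D=0.1):
          return M / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - u * t) ** 2 / (4.0 * D * t))

      x = np.linspace(0.0, 10.0, 6)
      print(c_point_source(x, t=5.0))   # concentration profile at t = 5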

  13. Applying analytic hierarchy process to assess healthcare-oriented cloud computing service systems.

    PubMed

    Liao, Wen-Hwa; Qiu, Wan-Li

    2016-01-01

    Numerous differences exist between the healthcare industry and other industries. Difficulties in the business operation of the healthcare industry have continually increased because of the volatility and importance of health care, changes to and requirements of health insurance policies, and the statuses of healthcare providers, which are typically considered not-for-profit organizations. Moreover, because of the financial risks associated with constant changes in healthcare payment methods and constantly evolving information technology, healthcare organizations must continually adjust their business operation objectives; therefore, cloud computing presents both a challenge and an opportunity. As a response to aging populations and the prevalence of the Internet in fast-paced contemporary societies, cloud computing can be used to facilitate the task of balancing the quality and costs of health care. To evaluate cloud computing service systems for use in health care, providing decision makers with a comprehensive assessment method for prioritizing decision-making factors is highly beneficial. Hence, this study applied the analytic hierarchy process, compared items related to cloud computing and health care, executed a questionnaire survey, and then classified the critical factors influencing healthcare cloud computing service systems on the basis of statistical analyses of the questionnaire results. The results indicate that the primary factor affecting the design or implementation of optimal cloud computing healthcare service systems is cost effectiveness, with the secondary factors being practical considerations such as software design and system architecture.

  14. Microextraction by packed sorbent: an emerging, selective and high-throughput extraction technique in bioanalysis.

    PubMed

    Pereira, Jorge; Câmara, José S; Colmsjö, Anders; Abdel-Rehim, Mohamed

    2014-06-01

    Sample preparation is an important analytical step regarding the isolation and concentration of desired components from complex matrices and greatly influences their reliable and accurate analysis and data quality. It is the most labor-intensive and error-prone process in analytical methodology and, therefore, may influence the analytical performance of target analyte quantification. Many conventional sample preparation methods are relatively complicated, involving time-consuming procedures and requiring large volumes of organic solvents. Recent trends in sample preparation include miniaturization, automation, high-throughput performance, on-line coupling with analytical instruments and low-cost operation through extremely low volume or no solvent consumption. Micro-extraction techniques, such as micro-extraction by packed sorbent (MEPS), have these advantages over the traditional techniques. This paper gives an overview of the MEPS technique, including the role of sample preparation in bioanalysis, a description of MEPS (its on-line and off-line formats, sorbents, and experimental protocols), factors that affect MEPS performance, and the major advantages and limitations of MEPS compared with other sample preparation techniques. We also summarize recent MEPS applications in bioanalysis. Copyright © 2014 John Wiley & Sons, Ltd.

  15. ESTIMATING UNCERTAINITIES IN FACTOR ANALYTIC MODELS

    EPA Science Inventory

    When interpreting results from factor analytic models as used in receptor modeling, it is important to quantify the uncertainties in those results. For example, if the presence of a species on one of the factors is necessary to interpret the factor as originating from a certain ...

  16. Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.

    PubMed

    Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A

    2017-04-01

    Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require a comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified, but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 Cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for the human plasma, and it entailed a single method for sample preparation, enabling quick processing of the samples followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single and simple method for the sample preparation followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful method for both clinical and research purposes.
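
    The accuracy and within-/between-run imprecision figures reported above are typically computed from replicate QC runs; the sketch below shows a simplified version of that calculation (a full ANOVA-based estimate is more rigorous), using invented replicate values.

      # Accuracy (%) and within-/between-run CV (%) from three runs of five
      # QC replicates at a nominal concentration; numbers are invented.
      import numpy as np

      nominal = 100.0
      runs = np.array([[ 98.0, 102.0, 101.0,  97.0,  99.0],
                       [103.0, 105.0, 100.0, 104.0, 102.0],
                       [ 95.0,  98.0,  97.0,  99.0,  96.0]])

      accuracy = 100.0 * runs.mean() / nominal
      within_cv = 100.0 * runs.std(axis=1, ddof=1).mean() / runs.mean()
      between_cv = 100.0 * runs.mean(axis=1).std(ddof=1) / runs.mean()
      print(f"accuracy {accuracy:.1f}%, within-run CV {within_cv:.1f}%, "
            f"between-run CV {between_cv:.1f}%")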

  17. Addressing unmeasured confounding in comparative observational research.

    PubMed

    Zhang, Xiang; Faries, Douglas E; Li, Hu; Stamey, James D; Imbens, Guido W

    2018-04-01

    Observational pharmacoepidemiological studies can provide valuable information on the effectiveness or safety of interventions in the real world, but one major challenge is the existence of unmeasured confounder(s). While many analytical methods have been developed for dealing with this challenge, they appear under-utilized, perhaps due to the complexity and varied requirements for implementation. Thus, there is an unmet need to improve understanding the appropriate course of action to address unmeasured confounding under a variety of research scenarios. We implemented a stepwise search strategy to find articles discussing the assessment of unmeasured confounding in electronic literature databases. Identified publications were reviewed and characterized by the applicable research settings and information requirements required for implementing each method. We further used this information to develop a best practice recommendation to help guide the selection of appropriate analytical methods for assessing the potential impact of unmeasured confounding. Over 100 papers were reviewed, and 15 methods were identified. We used a flowchart to illustrate the best practice recommendation which was driven by 2 critical components: (1) availability of information on the unmeasured confounders; and (2) goals of the unmeasured confounding assessment. Key factors for implementation of each method were summarized in a checklist to provide further assistance to researchers for implementing these methods. When assessing comparative effectiveness or safety in observational research, the impact of unmeasured confounding should not be ignored. Instead, we suggest quantitatively evaluating the impact of unmeasured confounding and provided a best practice recommendation for selecting appropriate analytical methods. Copyright © 2018 John Wiley & Sons, Ltd.
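
    As one concrete example of quantitatively evaluating unmeasured confounding, the sketch below computes an E-value (VanderWeele and Ding); whether this particular tool is among the 15 methods the review catalogues is not stated here, so treat it purely as an illustration.

      # E-value: the minimum strength of association (risk-ratio scale) an
      # unmeasured confounder would need with both treatment and outcome to
      # explain away an observed risk ratio rr > 1.
      import math

      def e_value(rr):
          return rr + math.sqrt(rr * (rr - 1.0))

      print(e_value(1.8))   # an observed RR of 1.8 needs confounding of RR ~ 3.0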

  18. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...

  19. Analyzing Response Times in Tests with Rank Correlation Approaches

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2013-01-01

    It is common practice to log-transform response times before analyzing them with standard factor analytical methods. However, sometimes the log-transformation is not capable of linearizing the relation between the response times and the latent traits. Therefore, a more general approach to response time analysis is proposed in the current…

  20. Factors That Attenuate the Correlation Coefficient and Its Analogs.

    ERIC Educational Resources Information Center

    Dolenz, Beverly

    The correlation coefficient is an integral part of many other statistical techniques (analysis of variance, t-tests, etc.), since all analytic methods are actually correlational (G. V. Glass and K. D. Hopkins, 1984). The correlation coefficient is a statistical summary that represents the degree and direction of relationship between two variables.…

  1. Use of Latent Profile Analysis in Studies of Gifted Students

    ERIC Educational Resources Information Center

    Mammadov, Sakhavat; Ward, Thomas J.; Cross, Jennifer Riedl; Cross, Tracy L.

    2016-01-01

    To date, in gifted education and related fields various conventional factor analytic and clustering techniques have been used extensively for investigation of the underlying structure of data. Latent profile analysis is a relatively new method in the field. In this article, we provide an introduction to latent profile analysis for gifted education…

  2. Subdimensions of Adolescent Belonging in High School

    ERIC Educational Resources Information Center

    Wallace, Tanner LeBaron; Ye, Feifei; Chhuon, Vichet

    2012-01-01

    Adolescents' sense of belonging in high school may serve a protective function, linking school-based relationships to positive youth outcomes. To advance the study of sense of belonging, we conducted a mixed method, factor analytic study (Phase 1 focus groups, N = 72; Phase 2 cross-sectional survey, N = 890) to explore the multidimensionality of…

  3. Birth Cohort Change in the Vocational Interests of Female and Male College Students

    ERIC Educational Resources Information Center

    Bubany, Shawn T.; Hansen, Jo-Ida C.

    2011-01-01

    The purpose of this research was to investigate the extent to which vocational interests have changed across birth cohorts of college students to better understand how socio-cultural factors may have an impact on career development. Using meta-analytic data collection methods, dissertations and journal articles presenting interests scores…

  4. UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.

    PubMed

    Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun

    2013-12-01

    Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complexity of its formulation and algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.

  5. Gaussian process based modeling and experimental design for sensor calibration in drifting environments

    PubMed Central

    Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang

    2016-01-01

    It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
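
    A minimal sketch of the GP calibration idea, assuming the exposure condition is summarised by (concentration, temperature, humidity) and the sensor gives a single scalar response; the data are simulated and the kernel choice is an assumption.

      # Fit a GP mapping exposure condition -> sensor response, then predict
      # the response (with uncertainty) at a new condition.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(1)
      X = rng.uniform([0, 20, 30], [10, 40, 70], size=(40, 3))   # conc, temp, RH
      y = 2.0 * X[:, 0] + 0.1 * X[:, 1] - 0.05 * X[:, 2] + rng.normal(0, 0.2, 40)

      gp = GaussianProcessRegressor(
          kernel=RBF(length_scale=[5.0, 10.0, 20.0]) + WhiteKernel(),
          normalize_y=True).fit(X, y)
      mean, sd = gp.predict([[5.0, 30.0, 50.0]], return_std=True)
      print(mean, sd)   # predicted response and its standard deviation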

  6. Rational use and interpretation of urine drug testing in chronic opioid therapy.

    PubMed

    Reisfield, Gary M; Salazar, Elaine; Bertholf, Roger L

    2007-01-01

    Urine drug testing (UDT) has become an essential feature of pain management, as physicians seek to verify adherence to prescribed opioid regimens and to detect the use of illicit or unauthorized licit drugs. Results of urine drug tests have important consequences in regard to therapeutic decisions and the trust between physician and patient. However, reliance on UDT to confirm adherence can be problematic if the results are not interpreted correctly, and evidence suggests that many physicians lack an adequate understanding of the complexities of UDT and the factors that can affect test results. These factors include metabolic conversion between drugs, genetic variations in drug metabolism, the sensitivity and specificity of the analytical method for a particular drug or metabolite, and the effects of intentional and unintentional interferants. In this review, we focus on the technical features and limitations of analytical methods used for detecting drugs or their metabolites in urine, the statistical constructs that are pertinent to ordering UDT and interpreting test results, and the application of these concepts to the clinical monitoring of patients maintained on chronic opioid therapy.

  7. Reference intervals for selected serum biochemistry analytes in cheetahs Acinonyx jubatus.

    PubMed

    Hudson-Lamb, Gavin C; Schoeman, Johan P; Hooijberg, Emma H; Heinrich, Sonja K; Tordiffe, Adrian S W

    2016-02-26

    Published haematologic and serum biochemistry reference intervals are very scarce for captive cheetahs and even scarcer for free-ranging cheetahs. The current study was performed to establish reference intervals for selected serum biochemistry analytes in cheetahs. Baseline serum biochemistry analytes were analysed from 66 healthy Namibian cheetahs. Samples were collected from 30 captive cheetahs at the AfriCat Foundation and 36 free-ranging cheetahs from central Namibia. The effects of captivity-status, age, sex and haemolysis score on the tested serum analytes were investigated. The biochemistry analytes that were measured were sodium, potassium, magnesium, chloride, urea and creatinine. The 90% confidence interval of the reference limits was obtained using the non-parametric bootstrap method. Reference intervals were preferentially determined by the non-parametric method and were as follows: sodium (128 mmol/L - 166 mmol/L), potassium (3.9 mmol/L - 5.2 mmol/L), magnesium (0.8 mmol/L - 1.2 mmol/L), chloride (97 mmol/L - 130 mmol/L), urea (8.2 mmol/L - 25.1 mmol/L) and creatinine (88 µmol/L - 288 µmol/L). Reference intervals from the current study were compared with International Species Information System values for cheetahs and found to be narrower. Moreover, age, sex and haemolysis score had no significant effect on the serum analytes in this study. Separate reference intervals for captive and free-ranging cheetahs were also determined. Captive cheetahs had higher urea values, most likely due to dietary factors. This study is the first to establish reference intervals for serum biochemistry analytes in cheetahs according to international guidelines. These results can be used for future health and disease assessments in both captive and free-ranging cheetahs.
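
    The sketch below shows the generic form of the calculation described above: a non-parametric reference interval (2.5th-97.5th percentiles) with bootstrap confidence intervals around the limits. The serum values are simulated, not the cheetah data.

      # Non-parametric reference interval with bootstrap 90% CIs on the limits.
      import numpy as np

      rng = np.random.default_rng(2)
      values = rng.normal(147.0, 8.0, 66)                # simulated analyte values

      lo, hi = np.percentile(values, [2.5, 97.5])
      boots = np.array([np.percentile(rng.choice(values, values.size, replace=True),
                                      [2.5, 97.5]) for _ in range(2000)])
      lo_ci = np.percentile(boots[:, 0], [5, 95])
      hi_ci = np.percentile(boots[:, 1], [5, 95])
      print(f"RI {lo:.1f}-{hi:.1f}; lower-limit CI {lo_ci.round(1)}, "
            f"upper-limit CI {hi_ci.round(1)}")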

  8. Robust analysis of the hydrophobic basic analytes loratadine and desloratadine in pharmaceutical preparations and biological fluids by sweeping-cyclodextrin-modified micellar electrokinetic chromatography.

    PubMed

    El-Awady, Mohamed; Belal, Fathalla; Pyell, Ute

    2013-09-27

    The analysis of hydrophobic basic analytes by micellar electrokinetic chromatography (MEKC) is usually challenging because of the tendency of these analytes to be adsorbed onto the inner capillary wall in addition to the difficulty to separate these compounds as they exhibit extremely high retention factors. A robust and reliable method for the simultaneous determination of loratadine (LOR) and its major metabolite desloratadine (DSL) is developed based on cyclodextrin-modified micellar electrokinetic chromatography (CD-MEKC) with acidic sample matrix and basic background electrolyte (BGE). The influence of the sample matrix on the reachable focusing efficiency is studied. It is shown that the application of a low pH sample solution mitigates problems associated with the low solubility of the hydrophobic basic analytes in aqueous solution while having advantages with regard to on-line focusing. Moreover, the use of a basic BGE reduces the adsorption of these analytes in the separation compartment. The separation of the studied analytes is achieved in less than 7 min using a BGE consisting of 10 mmol L(-1) disodium tetraborate buffer, pH 9.30 containing 40 mmol L(-1) SDS and 20 mmol L(-1) hydroxypropyl-β-CD while the sample solution is composed of 10 mmol L(-1) phosphoric acid, pH 2.15. A full validation study of the developed method based on the pharmacopeial guidelines is performed. The method is successfully applied to the analysis of the studied drugs in tablets without interference of tablet additives as well as the analysis of spiked human urine without any sample pretreatment. Furthermore, DSL can be detected as an impurity in LOR bulk powder at the stated pharmacopeial limit (0.1%, w/w). The selectivity of the developed method allows the analysis of LOR and DSL in combination with the co-formulated drug pseudoephedrine. It is shown that in CD-MEKC with basic BGE, solute-wall interactions are effectively suppressed allowing the development of efficient and precise methods for the determination of hydrophobic basic analytes, whereas the use of a low pH sample solution has a positive impact on the attainable sweeping efficiency without compromising peak shape and resolution. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. A rapid and sensitive analytical method for the determination of 14 pyrethroids in water samples.

    PubMed

    Feo, M L; Eljarrat, E; Barceló, D

    2010-04-09

    A simple, efficient and environmentally friendly analytical methodology is proposed for extracting and preconcentrating pyrethroids from water samples prior to gas chromatography-negative ion chemical ionization mass spectrometry (GC-NCI-MS) analysis. Fourteen pyrethroids were selected for this work: bifenthrin, cyfluthrin, lambda-cyhalothrin, cypermethrin, deltamethrin, esfenvalerate, fenvalerate, fenpropathrin, tau-fluvalinate, permethrin, phenothrin, resmethrin, tetramethrin and tralomethrin. The method is based on ultrasound-assisted emulsification-extraction (UAEE) of a water-immiscible solvent in an aqueous medium. Chloroform was used as extraction solvent in the UAEE technique. Target analytes were quantitatively extracted, achieving an enrichment factor of 200 when a 20 mL aliquot of pure water spiked with pyrethroid standards was extracted. The method was also evaluated with tap water and river water samples. Method detection limits (MDLs) ranged from 0.03 to 35.8 ng L(-1) with RSD values ≤3-25% (n=5). The coefficients of estimation of the calibration curves obtained following the proposed methodology were ≥0.998. Recovery values were in the range of 45-106%, showing satisfactory robustness of the method for analyzing pyrethroids in water samples. The proposed methodology was applied for the analysis of river water samples. Cypermethrin was detected at concentration levels ranging from 4.94 to 30.5 ng L(-1). Copyright 2010 Elsevier B.V. All rights reserved.
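
    Method detection limits of the kind quoted above are commonly derived from replicate low-level spikes via MDL = t(n-1, 99%) * s; the sketch below shows that arithmetic with invented replicate results (the paper's exact MDL procedure is not restated here).

      # EPA-style MDL from seven replicate low-level spikes.
      import numpy as np
      from scipy import stats

      replicates = np.array([4.8, 5.3, 4.6, 5.1, 4.9, 5.4, 4.7])   # ng/L (invented)
      s = replicates.std(ddof=1)
      t99 = stats.t.ppf(0.99, df=replicates.size - 1)
      print(f"MDL = {t99 * s:.2f} ng/L")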

  10. Significance Testing in Confirmatory Factor Analytic Models.

    ERIC Educational Resources Information Center

    Khattab, Ali-Maher; Hocevar, Dennis

    Traditionally, confirmatory factor analytic models are tested against a null model of total independence. Using randomly generated factors in a matrix of 46 aptitude tests, this approach is shown to be unlikely to reject even random factors. An alternative null model, based on a single general factor, is suggested. In addition, an index of model…

  11. A Superior Kirchhoff Method for Aeroacoustic Noise Prediction: The Ffowcs Williams-Hawkings Equation

    NASA Technical Reports Server (NTRS)

    Brentner, Kenneth S.

    1997-01-01

    The prediction of aeroacoustic noise is important; all new aircraft must meet noise certification requirements. Local noise standards can be even more stringent. The NASA noise reduction goal is to reduce perceived noise levels by a factor of two in 10 years. The objective of this viewgraph presentation is to demonstrate the superiority of the FW-H approach over the Kirchoff method for aeroacoustics, both analytically and numerically.

  12. Development of an ultra high performance liquid chromatography method for determining triamcinolone acetonide in hydrogels using the design of experiments/design space strategy in combination with process capability index.

    PubMed

    Oliva, Alexis; Monzón, Cecilia; Santoveña, Ana; Fariña, José B; Llabrés, Matías

    2016-07-01

    An ultra high performance liquid chromatography method was developed and validated for the quantitation of triamcinolone acetonide in an injectable ophthalmic hydrogel to determine the contribution of analytical method error to the content uniformity measurement. During the development phase, the design of experiments/design space strategy was used. For this, the free R program, a fast and efficient tool for data analysis, was used as an alternative to commercial software. The process capability index was used to find the permitted level of variation for each factor and to define the design space. All these aspects were analyzed and discussed under different experimental conditions by the Monte Carlo simulation method. Subsequently, a pre-study validation procedure was performed in accordance with the International Conference on Harmonization guidelines. The validated method was applied for the determination of uniformity of dosage units, and the reasons for variability (inhomogeneity and analytical method error) were analyzed based on the overall uncertainty. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
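
    To illustrate the process capability index mentioned above, the sketch below computes Cpk for a set of assay results against assumed specification limits; both the results and the limits are invented.

      # Cpk = min(USL - mean, mean - LSL) / (3 * s); values >= 1.33 are a
      # common acceptance threshold.
      import numpy as np

      results = np.array([99.1, 100.4, 98.7, 101.2, 99.8, 100.9, 99.5])  # % label claim (invented)
      lsl, usl = 95.0, 105.0                                             # spec limits (assumed)

      mu, s = results.mean(), results.std(ddof=1)
      cpk = min(usl - mu, mu - lsl) / (3.0 * s)
      print(f"Cpk = {cpk:.2f}")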

  13. Investigation of hydrophobic substrates for solution residue analysis utilizing an ambient desorption liquid sampling-atmospheric pressure glow discharge microplasma.

    PubMed

    Paing, Htoo W; Marcus, R Kenneth

    2018-03-12

    A practical method for preparation of solution residue samples for analysis utilizing the ambient desorption liquid sampling-atmospheric pressure glow discharge optical emission spectroscopy (AD-LS-APGD-OES) microplasma is described. Initial efforts involving placement of solution aliquots in wells drilled into copper substrates proved unsuccessful. A design-of-experiment (DOE) approach was carried out to determine influential factors during sample deposition, including solution volume, solute concentration, number of droplets deposited, and the solution matrix. These various aspects are manifested in the mass of analyte deposited as well as the size/shape of the product residue. Statistical analysis demonstrated that only those initial attributes were significant factors for the emission response of the analyte. Various approaches were investigated to better control the location/uniformity of the deposited sample. Three alternative substrates, a glass slide, a poly(tetrafluoro)ethylene (PTFE) sheet, and a polydimethylsiloxane (PDMS)-coated glass slide, were evaluated for their microplasma analytical performance. Co-deposition with simple organic dyes provided an accurate means of determining the location of the analyte with only minor influence on emission responses. The PDMS-coated glass provided the best performance by virtue of providing a uniform spatial distribution of the residue material. This uniformity improved the limits of detection by approximately 22× for 20 μL depositions and 4× for 2 μL depositions relative to the other two substrates. While they operate by fundamentally different processes, this choice of substrate is not restricted to the LS-APGD, but may also be applicable to other AD methods such as DESI, DART, or LIBS. Further developments will be directed towards a field-deployable ambient desorption OES source for quantitative analysis of microvolume solution residues of nuclear forensics importance.

  14. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  15. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  16. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  17. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  18. 7 CFR 94.103 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  19. 7 CFR 94.4 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...

  20. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  1. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  2. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  3. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...

  4. The transfer of analytical procedures.

    PubMed

    Ermer, J; Limberger, M; Lis, K; Wätzig, H

    2013-11-01

    Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving units. In order to facilitate this communication, procedures, flow charts and checklists for responsibilities, success factors, transfer categories, the transfer plan and report, strategies in case of failed transfers, and tables with acceptance limits are provided here, together with a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. Copyright © 2013 Elsevier B.V. All rights reserved.
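
    As a sketch of the recommended equivalence-test approach, the code below runs a two one-sided tests (TOST) comparison of assay means between a sending and a receiving laboratory against an assumed acceptance limit of ±2%; the data, the margin and the pooled-df approximation are illustrative assumptions.

      # TOST: conclude equivalence if both one-sided p-values are below 0.05.
      import numpy as np
      from scipy import stats

      sending = np.array([99.8, 100.2, 99.5, 100.6, 99.9, 100.1])      # % (invented)
      receiving = np.array([100.9, 101.3, 100.5, 101.1, 100.8, 101.4]) # % (invented)
      theta = 2.0                                                       # margin, % (assumed)

      diff = receiving.mean() - sending.mean()
      se = np.sqrt(sending.var(ddof=1) / sending.size +
                   receiving.var(ddof=1) / receiving.size)
      df = sending.size + receiving.size - 2
      p_lower = 1 - stats.t.cdf((diff + theta) / se, df)   # H0: diff <= -theta
      p_upper = 1 - stats.t.cdf((theta - diff) / se, df)   # H0: diff >= +theta
      print(f"diff = {diff:.2f}%, TOST p-values: {p_lower:.4f}, {p_upper:.4f}")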

  5. Product competitiveness analysis for e-commerce platform of special agricultural products

    NASA Astrophysics Data System (ADS)

    Wan, Fucheng; Ma, Ning; Yang, Dongwei; Xiong, Zhangyuan

    2017-09-01

    On the basis of analyzing the influence factors of the product competitiveness of the e-commerce platform of the special agricultural products and the characteristics of the analytical methods for the competitiveness of the special agricultural products, the price, the sales volume, the postage included service, the store reputation, the popularity, etc. were selected in this paper as the dimensions for analyzing the competitiveness of the agricultural products, and principal component factor analysis was taken as the competitiveness analysis method. Specifically, a web crawler was adopted to capture the information of various special agricultural products in the e-commerce platform chi.taobao.com. Then, the original data captured thereby were preprocessed and a MYSQL database was adopted to establish the information library for the special agricultural products. Then, the principal component factor analysis method was adopted to establish the analysis model for the competitiveness of the special agricultural products, and SPSS was adopted in the principal component factor analysis process to obtain the competitiveness evaluation factor system (support degree factor, price factor, service factor and evaluation factor) of the special agricultural products. Then, the linear regression method was adopted to establish the competitiveness index equation of the special agricultural products for estimating the competitiveness of the special agricultural products.
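
    A rough sketch of a PCA-based composite index of the kind described above, assuming a table of listings with indicators such as price, sales volume, free-postage flag, reputation and popularity; weighting factor scores by explained variance is one common convention and not necessarily the paper's exact SPSS procedure.

      # Standardise indicators, extract principal factors, and combine the
      # factor scores into a single competitiveness index.
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(4)
      X = rng.normal(size=(50, 5))                 # 50 listings x 5 indicators (simulated)

      Xs = StandardScaler().fit_transform(X)
      pca = PCA(n_components=3).fit(Xs)
      scores = pca.transform(Xs)
      weights = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()
      index = scores @ weights                     # composite competitiveness score
      print(index[:5].round(2))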

  6. An improved LC-MS/MS method for the quantification of alverine and para hydroxy alverine in human plasma for a bioequivalence study.

    PubMed

    Rathod, Dhiraj M; Patel, Keyur R; Mistri, Hiren N; Jangid, Arvind G; Shrivastav, Pranav S; Sanyal, Mallika

    2017-04-01

    A highly sensitive and selective high performance liquid chromatography-tandem mass spectrometry method was developed and validated for the quantification of alverine (ALV) and its active metabolite, para hydroxy alverine (PHA), in human plasma. For sample preparation, solid phase extraction of the analytes was performed on Phenomenex Strata-X cartridges using alverine-d5 as the internal standard. The analytes were separated on a Symmetry Shield RP 18 (150 mm×3.9 mm, 5 µm) column with a mobile phase consisting of acetonitrile and 10 mM ammonium formate (65:35, v/v). Detection and quantitation were performed by electrospray ionization mass spectrometry in the positive mode using multiple reaction monitoring. The assay method was fully validated over the concentration range of 15.0-15,000 pg/mL for ALV and 30.0-15,000 pg/mL for PHA. The intra-day and inter-day accuracy and precision (% CV) ranged from 94.00% to 96.00% and 0.48% to 4.15%, respectively, for both analytes. The mean recovery obtained for ALV and PHA was 80.59% and 81.26%, respectively. The matrix effect, expressed as the IS-normalized matrix factor, ranged from 0.982 to 1.009 for both analytes. The application of the method was demonstrated for the specific analysis of ALV and PHA in a bioequivalence study in 52 healthy subjects using 120 mg ALV capsules. The assay reproducibility was also verified by reanalysis of 175 incurred subject samples.

  7. Exposure assessment for endocrine disruptors: some considerations in the design of studies.

    PubMed Central

    Rice, Carol; Birnbaum, Linda S; Cogliano, James; Mahaffey, Kathryn; Needham, Larry; Rogan, Walter J; vom Saal, Frederick S

    2003-01-01

    In studies designed to evaluate exposure-response relationships in children's development from conception through puberty, multiple factors that affect the generation of meaningful exposure metrics must be considered. These factors include multiple routes of exposure; the timing, frequency, and duration of exposure; need for qualitative and quantitative data; sample collection and storage protocols; and the selection and documentation of analytic methods. The methods for exposure data collection and analysis must be sufficiently robust to accommodate the a priori hypotheses to be tested, as well as hypotheses generated from the data. A number of issues that must be considered in study design are summarized here. PMID:14527851

  8. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. To date, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma hydroxy butyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects and recovery have to be approached differently. Highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
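
    As one illustration of why endogenous analytes need a different calibration approach, the sketch below applies the method of standard additions, where the endogenous concentration is read off the x-intercept; the article does not prescribe this particular technique, and the data are invented.

      # Standard additions: fit signal vs spiked amount and take the
      # magnitude of the x-intercept as the endogenous concentration.
      import numpy as np

      added = np.array([0.0, 5.0, 10.0, 20.0])     # spiked amount, mg/L (invented)
      signal = np.array([3.1, 5.0, 7.2, 11.3])     # instrument response (invented)

      slope, intercept = np.polyfit(added, signal, 1)
      endogenous = intercept / slope
      print(f"estimated endogenous concentration ~ {endogenous:.1f} mg/L")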

  9. Analysis of recovery efficiency in high-temperature aquifer thermal energy storage: a Rayleigh-based method

    NASA Astrophysics Data System (ADS)

    Schout, Gilian; Drijver, Benno; Gutierrez-Neri, Mariene; Schotting, Ruud

    2014-01-01

    High-temperature aquifer thermal energy storage (HT-ATES) is an important technique for energy conservation. A controlling factor for the economic feasibility of HT-ATES is the recovery efficiency. Due to the effects of density-driven flow (free convection), HT-ATES systems applied in permeable aquifers typically have lower recovery efficiencies than conventional (low-temperature) ATES systems. For a reliable estimation of the recovery efficiency it is, therefore, important to take the effect of density-driven flow into account. A numerical evaluation of the prime factors influencing the recovery efficiency of HT-ATES systems is presented. Sensitivity runs evaluating the effects of aquifer properties, as well as operational variables, were performed to deduce the most important factors that control the recovery efficiency. A correlation was found between the dimensionless Rayleigh number (a measure of the relative strength of free convection) and the calculated recovery efficiencies. Based on a modified Rayleigh number, two simple analytical solutions are proposed to calculate the recovery efficiency, each one covering a different range of aquifer thicknesses. The analytical solutions accurately reproduce all numerically modeled scenarios with an average error of less than 3 %. The proposed method can be of practical use when considering or designing an HT-ATES system.
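
    The paper's modified Rayleigh number is not reproduced here, but the sketch below evaluates a porous-media Rayleigh number of the generic form Ra = Δρ·g·k·H / (φ·μ·D) to show the kind of quantity involved; every property value is an assumption.

      # Generic porous-media Rayleigh number; larger Ra means stronger free
      # convection and, per the study, lower expected recovery efficiency.
      delta_rho = 12.0    # density contrast hot vs ambient water, kg/m^3 (assumed)
      g = 9.81            # gravity, m/s^2
      k = 5e-12           # intrinsic permeability, m^2 (assumed)
      H = 20.0            # aquifer thickness, m (assumed)
      phi = 0.3           # porosity (assumed)
      mu = 5e-4           # dynamic viscosity, Pa*s (assumed)
      D = 1e-6            # thermal diffusivity scale, m^2/s (assumed)

      Ra = delta_rho * g * k * H / (phi * mu * D)
      print(f"Ra = {Ra:.1f}")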

  10. Determination of linear short chain aliphatic aldehyde and ketone vapors in air using a polystyrene-coated quartz crystal nanobalance sensor.

    PubMed

    Mirmohseni, Abdolreza; Olad, Ali

    2010-01-01

    A polystyrene coated quartz crystal nanobalance (QCN) sensor was developed for use in the determination of a number of linear short-chain aliphatic aldehyde and ketone vapors contained in air. The quartz crystal was modified by a thin-layer coating of a commercial grade general purpose polystyrene (GPPS) from Tabriz petrochemical company using a solution casting method. Determination was based on frequency shifts of the modified quartz crystal due to the adsorption of analytes at the surface of modified electrode in exposure to various concentrations of analytes. The frequency shift was found to have a linear relation to the concentration of analytes. Linear calibration curves were obtained for 7-70 mg l(-1) of analytes with correlation coefficients in the range of 0.9935-0.9989 and sensitivity factors in the range of 2.07-6.74 Hz/mg l(-1). A storage period of over three months showed no loss in the sensitivity and performance of the sensor.
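
    Using the linear response described above, a measured frequency shift converts to a concentration simply by dividing by the sensitivity factor; the sketch below uses a sensitivity inside the reported 2.07-6.74 Hz/mg l(-1) range and an invented frequency shift.

      # Invert the linear QCN calibration: concentration = delta_f / sensitivity.
      sensitivity_hz_per_mg_l = 4.0    # within the reported range (assumed value)
      delta_f_hz = 120.0               # measured frequency shift (invented)

      concentration_mg_l = delta_f_hz / sensitivity_hz_per_mg_l
      print(f"estimated concentration ~ {concentration_mg_l:.0f} mg/L")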

  11. Modeling of classical swirl injector dynamics

    NASA Astrophysics Data System (ADS)

    Ismailov, Maksud M.

    The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even the experimental observation is limited up to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function which is strongly dependent on the so called artificial viscosity factor. This causes an uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using the linear small perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components, which are the phenomena of disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies that are coincident with those resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by analytical methods and BEM. It shall be noted however that Bazarov's theory does not predict the resonant peaks. Overall this methodology provides clearer understanding of the injector dynamics compared to Bazarov's. Even though the exact value of response is not possible to obtain at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from where the theoretical description of reflection/refraction, resonance, and their interaction between each other may be refined to higher order to obtain its more precise value.

  12. Multivariate Approaches for Simultaneous Determination of Avanafil and Dapoxetine by UV Chemometrics and HPLC-QbD in Binary Mixtures and Pharmaceutical Product.

    PubMed

    2016-04-07

    Multivariate UV-spectrophotometric methods and Quality by Design (QbD) HPLC are described for the concurrent estimation of avanafil (AV) and dapoxetine (DP) in a binary mixture and in the dosage form. Chemometric methods have been developed, including classical least-squares, principal component regression, partial least-squares, and multiway partial least-squares. Analytical figures of merit, such as sensitivity, selectivity, analytical sensitivity, LOD, and LOQ, were determined. QbD consists of three steps, starting with a screening approach to determine the critical process parameters and response variables, followed by an understanding of factors and levels, and lastly the application of a Box-Behnken design containing four critical factors that affect the method. From an Ishikawa diagram and a risk assessment tool, four main factors were selected for optimization. Design optimization, statistical calculation, and final-condition optimization of all the reactions were carried out. Twenty-five experiments were run, and a quadratic model was used for all response variables. Desirability plots, surface plots, the design space, and three-dimensional plots were calculated. Under the optimized conditions, HPLC separation was achieved on a Phenomenex Gemini C18 column (250 × 4.6 mm, 5 μm) using acetonitrile-buffer (ammonium acetate buffer at pH 3.7 with acetic acid) as the mobile phase at a flow rate of 0.7 mL/min. Quantification was done at 239 nm, and the temperature was set at 20°C. The developed methods were validated and successfully applied for the simultaneous determination of AV and DP in the dosage form.
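    To make the multivariate calibration idea concrete, the sketch below shows a bare-bones classical least-squares (CLS) calibration for a two-component mixture in Python. The pure-component spectra, calibration concentrations, and noise level are synthetic placeholders; the PCR/PLS and QbD steps of the actual work are not reproduced here.

        # Minimal CLS calibration sketch for a binary mixture measured at four
        # wavelengths. All spectra and concentrations are synthetic, not the paper's data.
        import numpy as np

        rng = np.random.default_rng(0)
        K = np.array([[1.0, 0.6, 0.2, 0.05],      # assumed pure spectrum, component 1
                      [0.1, 0.4, 0.8, 0.90]])     # assumed pure spectrum, component 2
        C_cal = rng.uniform(1, 10, size=(15, 2))  # calibration concentrations
        A_cal = C_cal @ K + rng.normal(0, 0.01, size=(15, 4))  # mixture spectra + noise

        # Estimate pure-component spectra from the calibration set: K_hat = (C'C)^-1 C'A
        K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

        # Predict concentrations for a new mixture spectrum by least squares
        A_new = np.array([3.0, 2.0]) @ K
        c_pred, *_ = np.linalg.lstsq(K_hat.T, A_new, rcond=None)
        print("predicted concentrations:", c_pred)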

  13. Altered amygdalar resting-state connectivity in depression is explained by both genes and environment.

    PubMed

    Córdova-Palomera, Aldo; Tornador, Cristian; Falcón, Carles; Bargalló, Nuria; Nenadic, Igor; Deco, Gustavo; Fañanás, Lourdes

    2015-10-01

    Recent findings indicate that alterations of amygdalar resting-state fMRI connectivity play an important role in the etiology of depression. While both depression and resting-state brain activity are shaped by genes and environment, the relative contributions of genetic and environmental factors mediating the relationship between amygdalar resting-state connectivity and depression remain largely unexplored. Likewise, novel neuroimaging research indicates that different mathematical representations of resting-state fMRI activity patterns are able to embed distinct information relevant to brain health and disease. The present study analyzed the influence of genes and environment on amygdalar resting-state fMRI connectivity in relation to depression risk. High-resolution resting-state fMRI scans were analyzed to estimate functional connectivity patterns in a sample of 48 twins (24 monozygotic pairs) informative for depressive psychopathology (6 concordant, 8 discordant and 10 healthy control pairs). A graph-theoretical framework was employed to construct brain networks using two methods: (i) the conventional approach of filtered BOLD fMRI time-series and (ii) analytic components of this fMRI activity. Results using both methods indicate that depression risk is increased by environmental factors altering amygdalar connectivity. When analyzing the analytic components of the BOLD fMRI time-series, genetic factors altering the amygdala's neural activity at rest show an important contribution to depression risk. Overall, these findings show that both genes and environment modify different patterns of amygdalar resting-state connectivity to increase depression risk. The genetic relationship between amygdalar connectivity and depression may be better elicited by examining analytic components of the brain's resting-state BOLD fMRI signals. © 2015 Wiley Periodicals, Inc.

  14. Analytical treatment of the deformation behavior of EUVL masks during electrostatic chucking

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-03-01

    A new analytical approach is presented to predict mask deformation during electrostatic chucking in next-generation extreme-ultraviolet lithography (EUVL). Given an arbitrary profile measurement of the mask and chuck non-flatness, this method has been developed as an alternative to time-consuming finite element simulations for overlay error correction algorithms. We consider the feature transfer of each harmonic component in the profile shapes via linear elasticity theory and demonstrate analytically how high spatial frequencies are filtered. The method is compared to presumably more accurate finite element simulations and has been tested successfully in an overlay error compensation experiment, where the residual error y-component could be reduced by a factor of 2. As a side outcome, the formulation provides a tool to estimate the critical pin-size and -pitch such that the distortion on the mask front-side remains within given tolerances. We find for a numerical example that pin-pitches of less than 5 mm will result in a mask pattern-distortion of less than 1 nm if the chucking pressure is below 30 kPa.
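    The harmonic-filtering idea can be illustrated with a few lines of Python: decompose a measured non-flatness profile into spatial harmonics with an FFT, attenuate each harmonic with a low-pass transfer function, and reconstruct the transferred shape. The exponential factor exp(-k*t) used below is only a placeholder standing in for the paper's elasticity-derived transfer function, and the profile and dimensions are invented.

        # Schematic harmonic filtering of a non-flatness profile. H(k) = exp(-k*t)
        # is an assumed placeholder transfer function, not the paper's result.
        import numpy as np

        L = 0.152                    # mask width in m (assumed)
        t = 0.00635                  # plate thickness in m (assumed)
        x = np.linspace(0, L, 512, endpoint=False)
        profile = 50e-9 * np.sin(2 * np.pi * x / L) + 5e-9 * np.sin(2 * np.pi * 40 * x / L)

        spec = np.fft.rfft(profile)
        k = 2 * np.pi * np.fft.rfftfreq(x.size, d=L / x.size)   # spatial angular frequency
        transferred = np.fft.irfft(spec * np.exp(-k * t), n=x.size)

        print("input  peak-to-valley:", np.ptp(profile))
        print("output peak-to-valley:", np.ptp(transferred))    # high frequencies suppressed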

  15. Analytical treatment of the deformation behavior of extreme-ultraviolet-lithography masks during electrostatic chucking

    NASA Astrophysics Data System (ADS)

    Brandstetter, Gerd; Govindjee, Sanjay

    2012-10-01

    A new analytical approach is presented to predict mask deformation during electrostatic chucking in next-generation extreme-ultraviolet-lithography. Given an arbitrary profile measurement of the mask and chuck nonflatness, this method has been developed as an alternative to time-consuming finite element simulations for overlay error correction algorithms. We consider the feature transfer of each harmonic component in the profile shapes via linear elasticity theory and demonstrate analytically how high spatial frequencies are filtered. The method is compared to presumably more accurate finite element simulations and has been tested successfully in an overlay error compensation experiment, where the residual error y-component could be reduced by a factor of 2. As a side outcome, the formulation provides a tool to estimate the critical pin-size and -pitch such that the distortion on the mask front-side remains within given tolerances. We find for a numerical example that pin-pitches of less than 5 mm will result in a mask pattern distortion of less than 1 nm if the chucking pressure is below 30 kPa.

  16. Determination of free formaldehyde in cosmetics containing formaldehyde-releasing preservatives by reversed-phase dispersive liquid-liquid microextraction and liquid chromatography with post-column derivatization.

    PubMed

    Miralles, Pablo; Chisvert, Alberto; Alonso, M José; Hernandorena, Sandra; Salvador, Amparo

    2018-03-30

    An analytical method for the determination of traces of formaldehyde in cosmetic products containing formaldehyde-releasing preservatives has been developed. The method is based on reversed-phase dispersive liquid-liquid microextraction (RP-DLLME), which allows the extraction of highly polar compounds, followed by liquid chromatography-ultraviolet/visible (LC-UV/vis) determination with post-column derivatization. The variables involved in the RP-DLLME process were studied to provide the best enrichment factors. Under the selected conditions, a mixture of 500 μL of acetonitrile (disperser solvent) and 50 μL of water (extraction solvent) was rapidly injected into 5 mL of toluene sample solution. The extracts were injected into the LC-UV/vis system using a 6 mmol L(-1) phosphate buffer at pH 2 as the mobile phase. After chromatographic separation, the eluate was merged with a flow stream of pentane-2,4-dione in ammonium acetate solution as the derivatizing reagent and passed through a post-column reactor at 85 °C in order to derivatize formaldehyde into 3,5-diacetyl-1,4-dihydrolutidine, according to the Hantzsch reaction, which was finally measured spectrophotometrically at 407 nm. The method was successfully validated, showing good linearity, an enrichment factor of 86 ± 2, limits of detection and quantification of 0.7 and 2.3 ng mL(-1), respectively, and good repeatability (RSD < 9.2%). Finally, the proposed analytical method was applied to the determination of formaldehyde in different commercial cosmetic samples containing formaldehyde-releasing preservatives, such as bronopol, diazolidinyl urea, imidazolidinyl urea, and DMDM hydantoin, with good relative recovery values (91-113%), thus showing that matrix effects were negligible. The good analytical features of the proposed method, besides its simplicity and affordability, make it useful for the quality control of cosmetic products containing formaldehyde-releasing preservatives. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Application of variational principles and adjoint integrating factors for constructing numerical GFD models

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Tsvetova, Elena; Penenko, Alexey

    2015-04-01

    The proposed method is considered on an example of hydrothermodynamics and atmospheric chemistry models [1,2]. In the development of the existing methods for constructing numerical schemes possessing the properties of total approximation for operators of multiscale process models, we have developed a new variational technique, which uses the concept of adjoint integrating factors. The technique is as follows. First, a basic functional of the variational principle (the integral identity that unites the model equations, initial and boundary conditions) is transformed using Lagrange's identity and the second Green's formula. As a result, the action of the operators of the main problem in the space of state functions is transferred to the adjoint operators defined in the space of sufficiently smooth adjoint functions. By the choice of adjoint functions, the order of the derivatives becomes lower by one than in the original equations. We obtain a set of new balance relationships that take into account the sources and boundary conditions. Next, we introduce a decomposition of the model domain into a set of finite volumes. For multi-dimensional non-stationary problems, this technique is applied in the framework of the variational principle and schemes of decomposition and splitting on the set of physical processes, for each coordinate direction successively at each time step. For each direction within the finite volume, the analytical solutions of one-dimensional homogeneous adjoint equations are constructed. In this case, the solutions of the adjoint equations serve as integrating factors. The results are hybrid discrete-analytical schemes. They have the properties of stability, approximation and unconditional monotonicity for convection-diffusion operators. These schemes are discrete in time and analytic in the spatial variables. They are exact in the case of piecewise-constant coefficients within the finite volume and along the coordinate lines of the grid area in each direction on a time step. In each direction, they have a tridiagonal structure and are solved by the sweep method, as sketched below. An important advantage of the discrete-analytical schemes is that the values of the derivatives at the boundaries of the finite volume are calculated together with the values of the unknown functions. This technique is particularly attractive for problems with dominant convection, as it does not require artificial monotonization and limiters. The same idea of integrating factors is applied in the temporal dimension to the stiff systems of equations describing chemical transformation models [2]. The proposed method is applicable to problems involving convection-diffusion-reaction operators. The work has been partially supported by the Presidium of RAS under Program 43, and by the RFBR grants 14-01-00125 and 14-01-31482. References: 1. V.V. Penenko, E.A. Tsvetova, A.V. Penenko. Variational approach and Euler's integrating factors for environmental studies // Computers and Mathematics with Applications, (2014) V. 67, Issue 12, P. 2240-2256. 2. V.V. Penenko, E.A. Tsvetova. Variational methods of constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013, V. 6, Issue 3, pp. 210-220.
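    Since each direction reduces to a tridiagonal system solved by the sweep method, a generic Python sketch of that sweep (the Thomas algorithm) is given below; the coefficients are illustrative constants, not those produced by the adjoint integrating factors.

        # Generic tridiagonal sweep (Thomas algorithm); coefficients are illustrative.
        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                      # forward elimination (the "sweep")
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):             # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Example: 1-D convection-diffusion-like system with constant coefficients
        n = 6
        a = np.full(n, -1.0); a[0] = 0.0               # sub-diagonal (a[0] unused)
        c = np.full(n, -1.0); c[-1] = 0.0              # super-diagonal (c[-1] unused)
        b = np.full(n, 2.5)                            # main diagonal
        d = np.ones(n)
        print(thomas(a, b, c, d))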

  18. Micro-solid phase extraction of benzene, toluene, ethylbenzene and xylenes from aqueous solutions using water-insoluble β-cyclodextrin polymer as sorbent.

    PubMed

    Nojavan, Saeed; Yazdanpanah, Mina

    2017-11-24

    Water-insoluble β-cyclodextrin polymer was synthesized by chemical cross-linking using epichlorohydrin (EPI) as a cross-linking agent. The produced water-insoluble polymer was used as a sorbent for the micro-solid phase extraction (μ-SPE) of benzene, toluene, ethylbenzene and xylenes (BTEX) from water samples. The μ-SPE device consisted of a sealed tea-bag envelope containing 15 mg of sorbent. For the evaluation of the extraction efficiency, parameters such as extraction and desorption time, desorption solvent and salt concentration were investigated. Using an extraction time of 30 min, analytes were extracted from a 10 mL aqueous sample solution. The analytes were then desorbed by ultrasonication in 200 μL of acetonitrile for 20 min. Analysis of the analytes was done with a gas chromatography-flame ionization detector (GC-FID) system. The enrichment factor (EF) was found to be in the range 23.0-45.4 (EF(max) = 50.0). The method provided linearity ranges of between 0.5 and 500.0 ng/mL (depending on the analyte), with good coefficients of determination (r(2)) ranging between 0.997 and 0.999 under optimized conditions. Detection limits for BTEX were in the range of 0.15-0.60 ng/mL, while corresponding recoveries were in the range of 46.0-90.0%. The relative standard deviation of the method for the analytes at the 100.0 ng/mL concentration level ranged from 5.5 to 11.2% (n = 5). The proposed method was concluded to be a cost-effective and environmentally friendly extraction technique with ease of operation and minimal usage of organic solvent. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Rapid Detection of Transition Metals in Welding Fumes Using Paper-Based Analytical Devices

    PubMed Central

    Volckens, John

    2014-01-01

    Metals in particulate matter (PM) are considered a driving factor for many pathologies. Despite the hazards associated with particulate metals, personal exposures for at-risk workers are rarely assessed due to the cost and effort associated with monitoring. As a result, routine exposure assessments are performed for only a small fraction of the exposed workforce. The objective of this research was to evaluate a relatively new technology, microfluidic paper-based analytical devices (µPADs), for measuring the metals content in welding fumes. Fumes from three common welding techniques (shielded metal arc, metal inert gas, and tungsten inert gas welding) were sampled in two welding shops. Concentrations of acid-extractable Fe, Cu, Ni, and Cr were measured and independently verified using inductively coupled plasma-optical emission spectroscopy (ICP-OES). Results from the µPAD sensors agreed well with ICP-OES analysis; the two methods gave statistically similar results in >80% of the samples analyzed. Analytical costs for the µPAD technique were ~50 times lower than market-rate costs with ICP-OES. Further, the µPAD method was capable of providing same-day results (as opposed to several weeks for ICP laboratory analysis). Results of this work suggest that µPAD sensors are a viable, yet inexpensive alternative to traditional analytic methods for transition metals in welding fume PM. These sensors have potential to enable substantially higher levels of hazard surveillance for a given resource cost, especially in resource-limited environments. PMID:24515892

  20. Rapid detection of transition metals in welding fumes using paper-based analytical devices.

    PubMed

    Cate, David M; Nanthasurasak, Pavisara; Riwkulkajorn, Pornpak; L'Orange, Christian; Henry, Charles S; Volckens, John

    2014-05-01

    Metals in particulate matter (PM) are considered a driving factor for many pathologies. Despite the hazards associated with particulate metals, personal exposures for at-risk workers are rarely assessed due to the cost and effort associated with monitoring. As a result, routine exposure assessments are performed for only a small fraction of the exposed workforce. The objective of this research was to evaluate a relatively new technology, microfluidic paper-based analytical devices (µPADs), for measuring the metals content in welding fumes. Fumes from three common welding techniques (shielded metal arc, metal inert gas, and tungsten inert gas welding) were sampled in two welding shops. Concentrations of acid-extractable Fe, Cu, Ni, and Cr were measured and independently verified using inductively coupled plasma-optical emission spectroscopy (ICP-OES). Results from the µPAD sensors agreed well with ICP-OES analysis; the two methods gave statistically similar results in >80% of the samples analyzed. Analytical costs for the µPAD technique were ~50 times lower than market-rate costs with ICP-OES. Further, the µPAD method was capable of providing same-day results (as opposed to several weeks for ICP laboratory analysis). Results of this work suggest that µPAD sensors are a viable, yet inexpensive alternative to traditional analytic methods for transition metals in welding fume PM. These sensors have potential to enable substantially higher levels of hazard surveillance for a given resource cost, especially in resource-limited environments.

  1. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has become one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts, camera calibration and geometric calibration. In the geometric calibration, the liquid crystal display (LCD) screen position calibration is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane mirror flatness, and LCD screen pixel size accuracy. In this paper, based on the deduction of an FRT analytical phase-slope description, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a 5000 mm radius spherical mirror, the proposed calibration method achieves a 2.5 times smaller measurement error than the geometric calibration method. In a wafer surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  2. Contamination in food from packaging material.

    PubMed

    Lau, O W; Wong, S K

    2000-06-16

    Packaging has become an indispensable element in the food manufacturing process, and different types of additives, such as antioxidants, stabilizers, lubricants, anti-static and anti-blocking agents, have also been developed to improve the performance of polymeric packaging materials. Recently, however, the packaging itself has been found to represent a source of contamination through the migration of substances from the packaging into food. Various analytical methods have been developed to analyze the migrants in foodstuffs, and migration evaluation procedures based on theoretical prediction of migration from plastic food-contact materials have also been introduced recently. In this paper, the regulatory control, analytical methodology, factors affecting migration, and migration evaluation are reviewed.

  3. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
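    As a concrete (and deliberately naive) illustration of the two-step FSR idea, the Python sketch below computes Bartlett factor scores from assumed loadings and uniquenesses on simulated data and then regresses an outcome on those scores. The bias-avoiding and bias-correcting variants compared in the article add corrections that are not reproduced here.

        # Naive two-step factor score regression with Bartlett scores; simulated data.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        lam = np.array([0.8, 0.7, 0.6])          # loadings of 3 indicators on 1 factor (assumed)
        psi = 1.0 - lam**2                       # uniquenesses for standardized indicators

        eta = rng.normal(size=n)                                  # latent factor
        X = np.outer(eta, lam) + rng.normal(size=(n, 3)) * np.sqrt(psi)
        y = 0.5 * eta + rng.normal(scale=0.5, size=n)             # outcome

        # Bartlett weights: (L' Psi^-1 L)^-1 L' Psi^-1  (scalar denominator for one factor)
        w = (lam / psi) / np.sum(lam**2 / psi)
        scores = X @ w

        beta = np.polyfit(scores, y, 1)[0]       # step 2: OLS slope of y on the factor scores
        print("estimated structural slope:", round(beta, 3))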

  4. Approximation methods of European option pricing in multiscale stochastic volatility model

    NASA Astrophysics Data System (ADS)

    Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei

    2017-01-01

    In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions have suggested that the constant volatility assumption might not be realistic. General stochastic volatility models, e.g. the Heston, GARCH and SABR volatility models, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior in terms of capturing these empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate, respectively. We consider the European option pricing problem under one type of the multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximating analytical solution using an asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition, we propose a numerical approximating solution using Monte Carlo simulation. For completeness and for comparison, we also implement the semi-analytical solution of Chiarella and Ziveyi [11] using the method of characteristics, Fourier and bivariate Laplace transforms.
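    For illustration, the following Python sketch prices a European call by Monte Carlo when the variance is the sum of two independent CIR-type mean-reverting factors, one fast and one slow. The parameters, the Euler full-truncation discretization, and the payoff are illustrative assumptions, not the calibrated model of the paper.

        # Rough Monte Carlo sketch: variance = sum of a fast and a slow mean-reverting factor.
        import numpy as np

        rng = np.random.default_rng(2)
        S0, K, r, T = 100.0, 100.0, 0.02, 1.0
        kappa = np.array([5.0, 0.3])        # fast and slow mean-reversion rates (assumed)
        theta = np.array([0.02, 0.03])      # long-run variance levels (assumed)
        xi = np.array([0.3, 0.2])           # vol-of-vol (assumed)
        npaths, nsteps = 20_000, 200
        dt = T / nsteps

        S = np.full(npaths, S0)
        V = np.tile(theta, (npaths, 1))     # start the factors at their long-run levels
        for _ in range(nsteps):
            Z = rng.standard_normal((npaths, 3))
            vtot = np.clip(V.sum(axis=1), 0.0, None)
            S *= np.exp((r - 0.5 * vtot) * dt + np.sqrt(vtot * dt) * Z[:, 0])
            Vpos = np.clip(V, 0.0, None)    # Euler full truncation
            V = V + kappa * (theta - Vpos) * dt + xi * np.sqrt(Vpos * dt) * Z[:, 1:]

        price = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
        print("Monte Carlo call price ~", round(price, 3))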

  5. Coherent and partially coherent dark hollow beams with rectangular symmetry and paraxial propagation properties

    NASA Astrophysics Data System (ADS)

    Cai, Yangjian; Zhang, Lei

    2006-07-01

    A theoretical model is proposed to describe coherent dark hollow beams (DHBs) with rectangular symmetry. The electric field of a coherent rectangular DHB is expressed as a superposition of the electric fields of a finite series of fundamental Gaussian beams. Analytical propagation formulas for a coherent rectangular DHB passing through paraxial optical systems are derived in a tensor form. Furthermore, for the more general case, we propose a theoretical model to describe a partially coherent rectangular DHB. Analytical propagation formulas for a partially coherent rectangular DHB passing through paraxial optical systems are derived. The beam propagation factor (M² factor) for both coherent and partially coherent rectangular DHBs is studied. Numerical examples are given using the derived formulas. Our models and method provide an effective way to describe and treat the propagation of coherent and partially coherent rectangular DHBs.

  6. Developing an Emergency Physician Productivity Index Using Descriptive Health Analytics.

    PubMed

    Khalifa, Mohamed

    2015-01-01

    Emergency department (ED) crowding has become a major barrier to receiving timely emergency care. At King Faisal Specialist Hospital and Research Center, Saudi Arabia, we identified variables and factors affecting crowding and performance in order to develop indicators to support evaluation and improvement. To measure the efficiency of work and the activity of throughput processes, it was important to develop an ED physician productivity index. Data on all ED patient encounters over the last six months of 2014 were retrieved, and descriptive health analytics methods were used. Three variables were identified for their influence on productivity and performance: Number of Treated Patients per Physician, Patient Acuity Level and Treatment Time. The study suggested a formula for calculating the productivity index of each physician by dividing the Number of Treated Patients by the square of the Patient Acuity Level and by the Treatment Time, in order to identify physicians with a low productivity index and investigate causes and factors.
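    Read literally, the suggested formula can be coded in a few lines; the Python sketch below does so with hypothetical field names and values, with the caveat that the operational definitions (for example, how acuity is aggregated per physician) come from the original study rather than from this snippet.

        # Productivity index as worded in the abstract:
        # index = treated patients / (acuity level squared * treatment time).
        # Field names and sample values are hypothetical.
        def productivity_index(n_treated, acuity_level, treatment_time_hours):
            return n_treated / (acuity_level ** 2 * treatment_time_hours)

        physicians = [
            {"id": "A", "treated": 120, "acuity": 2.5, "hours": 60.0},
            {"id": "B", "treated": 150, "acuity": 3.0, "hours": 80.0},
        ]
        for p in physicians:
            p["index"] = productivity_index(p["treated"], p["acuity"], p["hours"])
            print(p["id"], round(p["index"], 3))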

  7. Matrix Factorizations at Scale: a Comparison of Scientific Data Analytics in Spark and C+MPI Using Three Case Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gittens, Alex; Devarakonda, Aditya; Racah, Evan

    We explore the trade-offs of performing linear algebra using Apache Spark, compared to traditional C and MPI implementations on HPC platforms. Spark is designed for data analytics on cluster computing platforms with access to local disks and is optimized for data-parallel tasks. We examine three widely-used and important matrix factorizations: NMF (for physical plausibility), PCA (for its ubiquity) and CX (for data interpretability). We apply these methods to 1.6TB particle physics, 2.2TB and 16TB climate modeling and 1.1TB bioimaging data. The data matrices are tall-and-skinny which enable the algorithms to map conveniently into Spark’s data parallel model. We perform scaling experiments on up to 1600 Cray XC40 nodes, describe the sources of slowdowns, and provide tuning guidance to obtain high performance.

  8. Dimensions of Early Speech Sound Disorders: A Factor Analytic Study

    ERIC Educational Resources Information Center

    Lewis, Barbara A.; Freebairn, Lisa A.; Hansen, Amy J.; Stein, Catherine M.; Shriberg, Lawrence D.; Iyengar, Sudha K.; Taylor, H. Gerry

    2006-01-01

    The goal of this study was to classify children with speech sound disorders (SSD) empirically, using factor analytic techniques. Participants were 3-7-year olds enrolled in speech/language therapy (N=185). Factor analysis of an extensive battery of speech and language measures provided support for two distinct factors, representing the skill…

  9. Development Of Antibody-Based Fiber-Optic Sensors

    NASA Astrophysics Data System (ADS)

    Tromberg, Bruce J.; Sepaniak, Michael J.; Vo-Dinh, Tuan

    1988-06-01

    The speed and specificity characteristic of immunochemical complex formation has encouraged the development of numerous antibody-based analytical techniques. The scope and versatility of these established methods can be enhanced by combining the principles of conventional immunoassay with laser-based fiber-optic fluorimetry. This merger of spectroscopy and immunochemistry provides the framework for the construction of highly sensitive and selective fiber-optic devices (fluoroimmuno-sensors) capable of in-situ detection of drugs, toxins, and naturally occurring biochemicals. Fluoroimmuno-sensors (FIS) employ an immobilized reagent phase at the sampling terminus of a single quartz optical fiber. Laser excitation of antibody-bound analyte produces a fluorescence signal which is either directly proportional (as in the case of natural fluorophor and "antibody sandwich" assays) or inversely proportional (as in the case of competitive-binding assays) to analyte concentration. Factors which influence analysis time, precision, linearity, and detection limits include the nature (solid or liquid) and amount of the reagent phase, the method of analyte delivery (passive diffusion, convection, etc.), and whether equilibrium or non-equilibrium assays are performed. Data will be presented for optical fibers whose sensing termini utilize: (1) covalently-bound solid antibody reagent phases, and (2) membrane-entrapped liquid antibody reagents. Assays for large-molecular weight proteins (antigens) and small-molecular weight, carcinogenic, polynuclear aromatics (haptens) will be considered. In this manner, the influence of a system's chemical characteristics and measurement requirements on sensor design, and the consequence of various sensor designs on analytical performance will be illustrated.

  10. Analysis of standard reference materials by absolute INAA

    NASA Astrophysics Data System (ADS)

    Heft, R. E.; Koszykowski, R. F.

    1981-07-01

    Three standard reference materials (fly ash, soil, and AISI 4340 steel) are analyzed by a method of absolute instrumental neutron activation analysis. Two different light-water pool-type reactors were used and produced equivalent analytical results even though the epithermal-to-thermal flux ratio in one reactor was higher than that in the other by a factor of two.

  11. Conflagration Analysis System II: Bibliography.

    DTIC Science & Technology

    1985-04-01

    Therefore, it is important to examine both the reinforcement and the supplemental considerations for the quantitative methods for conflagration...and the meaningful quantitative factors for conflagration analysis are determined, the relevant literature will be brought into the mainstream of the... quantitative methods. Fire Development in Multiple Structures: From a purely analytical view, the research identified in the literature on fire development in

  12. RSE-40: An Alternate Scoring System for the Rosenberg Self-Esteem Scale (RSE).

    ERIC Educational Resources Information Center

    Wallace, Gaylen R.

    The Rosenberg Self-Esteem Inventory (RSE) is a 10-item scale purporting to measure self-esteem using self-acceptance and self-worth statements. This analysis covers concerns about the degree to which the RSE items represent a particular content universe, the RSE's applicability, factor analytic methods used, and the RSE's reliability and validity.…

  13. Adaptive and Challenged Parenting among African American Mothers: Parenting Profiles Relate to Head Start Children's Aggression and Hyperactivity

    ERIC Educational Resources Information Center

    Carpenter, Johanna L.; Mendez, Julia

    2013-01-01

    Research Findings: This study used a within-group research design and person-centered analytic methods to identify multidimensional profiles of parenting styles, parenting practices, and related emotional factors in a sample of 274 African American mothers recruited from Head Start programs in the northeastern and southeastern United States.…

  14. Using Primary Sources to Teach Civil War History: A Case Study in Pedagogical Decision Making

    ERIC Educational Resources Information Center

    Snook, David L.

    2017-01-01

    This exploratory study combined the process of modified analytic induction with a mixed methods approach to analyze various factors that affected or might have affected participating teachers' decisions to use or not use various primary source based teaching strategies to teach historical thinking skills. Four participating eighth and ninth grade…

  15. The Hazardous-Drums Project: A Multiweek Laboratory Exercise for General Chemistry Involving Environmental, Quality Control, and Cost Evaluation

    ERIC Educational Resources Information Center

    Hayes, David; Widanski, Bozena

    2013-01-01

    A laboratory experiment is described that introduces students to "real-world" hazardous waste management issues chemists face. The students are required to define an analytical problem, choose a laboratory analysis method, investigate cost factors, consider quality-control issues, interpret the meaning of results, and provide management…

  16. The Motivational Effects of Success or Failure in Urban Elementary School Teaching

    ERIC Educational Resources Information Center

    Waterman, Bradford H.

    2012-01-01

    This study describes teachers' experiences of success and failure in teaching through interviews. The analytical framework for this study was based on Activity Theory (Leon'tev, 1978), and the research methods were developed by Herzberg et al. (1959). The inclusion of factors identified by Seligman (2006) and Maslach (1982) allowed for…

  17. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2011-04-01 2011-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  18. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2014-04-01 2014-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  19. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  20. 21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...

  1. Liquid Metering Centrifuge Sticks (LMCS): A Centrifugal Approach to Metering Known Sample Volumes for Colorimetric Solid Phase Extraction (C-SPE)

    NASA Technical Reports Server (NTRS)

    Gazda, Daniel B.; Schultz, John R.; Clarke, Mark S.

    2007-01-01

    Phase separation is one of the most significant obstacles encountered during the development of analytical methods for water quality monitoring in spacecraft environments. Removing air bubbles from water samples prior to analysis is a routine task on earth; however, in the absence of gravity, this routine task becomes extremely difficult. This paper details the development and initial ground testing of liquid metering centrifuge sticks (LMCS), devices designed to collect and meter a known volume of bubble-free water in microgravity. The LMCS uses centrifugal force to eliminate entrapped air and reproducibly meter liquid sample volumes for analysis with Colorimetric Solid Phase Extraction (C-SPE). C-SPE is a sorption-spectrophotometric platform that is being developed as a potential spacecraft water quality monitoring system. C-SPE utilizes solid phase extraction membranes impregnated with analyte-specific colorimetric reagents to concentrate and complex target analytes in spacecraft water samples. The mass of analyte extracted from the water sample is determined using diffuse reflectance (DR) data collected from the membrane surface and an analyte-specific calibration curve. The analyte concentration can then be calculated from the mass of extracted analyte and the volume of the sample analyzed. Previous flight experiments conducted in microgravity conditions aboard the NASA KC-135 aircraft demonstrated that the inability to collect and meter a known volume of water using a syringe was a limiting factor in the accuracy of C-SPE measurements. Herein, results obtained from ground based C-SPE experiments using ionic silver as a test analyte and either the LMCS or syringes for sample metering are compared to evaluate the performance of the LMCS. These results indicate very good agreement between the two sample metering methods and clearly illustrate the potential of utilizing centrifugal forces to achieve phase separation and metering of water samples in microgravity.

  2. Source apportionment of PAH in Hamilton Harbour suspended sediments: comparison of two factor analysis methods.

    PubMed

    Sofowote, Uwayemi M; McCarry, Brian E; Marvin, Christopher H

    2008-08-15

    A total of 26 suspended sediment samples collected over a 5-year period in Hamilton Harbour, Ontario, Canada and surrounding creeks were analyzed for a suite of polycyclic aromatic hydrocarbons and sulfur heterocycles. Hamilton Harbour sediments contain relatively high levels of polycyclic aromatic compounds and heavy metals due to emissions from industrial and mobile sources. Two receptor modeling methods using factor analyses were compared to determine the profiles and relative contributions of pollution sources to the harbor; these methods are principal component analyses (PCA) with multiple linear regression analysis (MLR) and positive matrix factorization (PMF). Both methods identified four factors and gave excellent correlation coefficients between predicted and measured levels of 25 aromatic compounds; both methods predicted similar contributions from coal tar/coal combustion sources to the harbor (19 and 26%, respectively). One PCA factor was identified as contributions from vehicular emissions (61%); PMF was able to differentiate vehicular emissions into two factors, one attributed to gasoline emissions sources (28%) and the other to diesel emissions sources (24%). Overall, PMF afforded better source identification than PCA with MLR. This work constitutes one of the few examples of the application of PMF to the source apportionment of sediments; the addition of sulfur heterocycles to the analyte list greatly aided in the source identification process.
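    The PCA-with-MLR half of the comparison can be sketched in a few lines: extract principal components from the standardized species concentrations, then regress the total concentration on the retained component scores so that the coefficients apportion the total among factors. The Python example below uses synthetic data and omits the rotation, scaling, and diagnostic steps of a real apportionment (and of PMF).

        # Simplified PCA + MLR source-apportionment sketch on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        n_samples, n_species, n_factors = 26, 10, 2
        profiles = rng.uniform(0, 1, size=(n_factors, n_species))     # assumed source profiles
        contrib = rng.uniform(0.5, 2.0, size=(n_samples, n_factors))  # assumed source strengths
        X = contrib @ profiles + rng.normal(0, 0.05, size=(n_samples, n_species))

        Z = (X - X.mean(0)) / X.std(0)                 # standardize species
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        scores = U[:, :n_factors] * s[:n_factors]      # retained principal component scores

        total = X.sum(axis=1)                          # total measured concentration
        A = np.column_stack([np.ones(n_samples), scores])
        coef, *_ = np.linalg.lstsq(A, total, rcond=None)
        est = A[:, 1:] * coef[1:]                      # per-factor contribution to the total
        print("apportioned shares:", np.round(est.sum(axis=0) / est.sum(), 2))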

  3. Bayes and empirical Bayes methods for reduced rank regression models in matched case-control studies.

    PubMed

    Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S

    2016-06-01

    Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. © 2015, The International Biometric Society.

  4. SAM Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation

  5. Aerosol hygroscopic growth parameterization based on a solute specific coefficient

    NASA Astrophysics Data System (ADS)

    Metzger, S.; Steil, B.; Xu, L.; Penner, J. E.; Lelieveld, J.

    2011-09-01

    Water is a main component of atmospheric aerosols and its amount depends on the particle chemical composition. We introduce a new parameterization for the aerosol hygroscopic growth factor (HGF), based on an empirical relation between water activity (aw) and solute molality (μs) through a single solute specific coefficient νi. Three main advantages are: (1) wide applicability, (2) simplicity and (3) analytical nature. (1) Our approach considers the Kelvin effect and covers ideal solutions at large relative humidity (RH), including CCN activation, as well as concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). (2) A single νi coefficient suffices to parameterize the HGF for a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. (3) In contrast to previous methods, our analytical aw parameterization depends not only on a linear correction factor for the solute molality, instead νi also appears in the exponent in the form x · a^x. According to our findings, νi can be assumed constant for the entire aw range (0-1). Thus, the νi-based method is computationally efficient. In this work we focus on single solute solutions, where νi is pre-determined with the bisection method from our analytical equations using RHD measurements and the saturation molality μs^sat. The computed aerosol HGF and supersaturation (Köhler-theory) compare well with the results of the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. The equations introduced here provide the basis of our revised gas-liquid-solid partitioning model, i.e. version 4 of the EQuilibrium Simplified Aerosol Model (EQSAM4), described in a companion paper.

  6. Models for randomly distributed nanoscopic domains on spherical vesicles

    NASA Astrophysics Data System (ADS)

    Anghel, Vinicius N. P.; Bolmatov, Dima; Katsaras, John

    2018-06-01

    The existence of lipid domains in the plasma membrane of biological systems has proven controversial, primarily due to their nanoscopic size—a length scale difficult to interrogate with most commonly used experimental techniques. Scattering techniques have recently proven capable of studying nanoscopic lipid domains populating spherical vesicles. However, the development of analytical methods capable of predicting and analyzing domain pair correlations from such experiments has not kept pace. Here, we developed models for the random distribution of monodisperse, circular nanoscopic domains averaged on the surface of a spherical vesicle. Specifically, the models take into account (i) intradomain correlations corresponding to form factors and interdomain correlations corresponding to pair distribution functions, and (ii) the analytical computation of interdomain correlations for cases of two and three domains on a spherical vesicle. In the case of more than three domains, these correlations are treated either by Monte Carlo simulations or by spherical analogs of the Ornstein-Zernike and Percus-Yevick (PY) equations. Importantly, the spherical analog of the PY equation works best in the case of nanoscopic size domains, a length scale that is mostly inaccessible by experimental approaches such as, for example, fluorescent techniques and optical microscopies. The analytical form factors and structure factors of nanoscopic domains populating a spherical vesicle provide a new and important framework for the quantitative analysis of experimental data from commonly studied phase-separated vesicles used in a wide range of biophysical studies.

  7. Development and Interlaboratory Validation of a Simple Screening Method for Genetically Modified Maize Using a ΔΔC(q)-Based Multiplex Real-Time PCR Assay.

    PubMed

    Noguchi, Akio; Nakamura, Kosuke; Sakata, Kozue; Sato-Fukuda, Nozomi; Ishigaki, Takumi; Mano, Junichi; Takabatake, Reona; Kitta, Kazumi; Teshima, Reiko; Kondo, Kazunari; Nishimaki-Mogami, Tomoko

    2016-04-19

    A number of genetically modified (GM) maize events have been developed and approved worldwide for commercial cultivation. A screening method is needed to monitor GM maize approved for commercialization in countries that mandate the labeling of foods containing a specified threshold level of GM crops. In Japan, a screening method has been implemented to monitor approved GM maize since 2001. However, the screening method currently used in Japan is time-consuming and requires generation of a calibration curve and experimental conversion factor (C(f)) value. We developed a simple screening method that avoids the need for a calibration curve and C(f) value. In this method, ΔC(q) values between the target sequences and the endogenous gene are calculated using multiplex real-time PCR, and the ΔΔC(q) value between the analytical and control samples is used as the criterion for determining analytical samples in which the GM organism content is below the threshold level for labeling of GM crops. An interlaboratory study indicated that the method is applicable independently with at least two models of PCR instruments used in this study.
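    The screening criterion itself reduces to simple arithmetic on quantification-cycle values, as the hedged Python sketch below shows; the Cq numbers and the sign-based decision rule are illustrative stand-ins for the cutoffs of the validated protocol.

        # ΔCq = Cq(target) - Cq(endogenous gene); ΔΔCq = ΔCq(sample) - ΔCq(control),
        # where the control is prepared at the labeling threshold level.
        # Cq values and the simple sign rule below are hypothetical.
        def delta_cq(cq_target, cq_endogenous):
            return cq_target - cq_endogenous

        def delta_delta_cq(sample, control):
            """Each argument is a (cq_target, cq_endogenous) pair."""
            return delta_cq(*sample) - delta_cq(*control)

        control = (31.0, 24.0)     # hypothetical Cq values for the threshold-level control
        sample = (29.5, 24.2)      # hypothetical analytical sample
        ddcq = delta_delta_cq(sample, control)
        # Lower Cq means more target, so a negative ΔΔCq indicates GM content above the control level.
        print("ddCq =", round(ddcq, 2), "-> above control level" if ddcq < 0 else "-> below control level")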

  8. Dynamic characteristics of a novel damped outrigger system

    NASA Astrophysics Data System (ADS)

    Tan, Ping; Fang, Chuangjie; Zhou, Fulin

    2014-06-01

    This paper presents exact analytical solutions for a novel damped outrigger system, in which viscous dampers are vertically installed between perimeter columns and the core of a high-rise building. An improved analytical model is developed by modeling the effect of the damped outrigger as a general rotational spring acting on a Bernoulli-Euler beam. The equivalent rotational spring stiffness incorporating the combined effects of dampers and axial stiffness of perimeter columns is derived. The dynamic stiffness method (DSM) is applied to formulate the governing equation of the damped outrigger system. The accuracy and efficiency are verified in comparison with those obtained from compatibility equations and boundary equations. Parametric analysis of three non-dimensional factors is conducted to evaluate the influences of various factors, such as the stiffness ratio of the core to the beam, position of the damped outrigger, and the installed damping coefficient. Results show that the modal damping ratio is significantly influenced by the stiffness ratio of the core to the column, and is more sensitive to damping than the position of the damped outrigger. The proposed analytical model in combination with DSM can be extended to the study of structures with more outriggers.

  9. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
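    The iteration described here, Newton steps with the Hessian handled through a (modified) Cholesky factorization, can be illustrated generically. The Python sketch below applies it to a toy two-variable objective; the simple diagonal shift stands in for a full modified Cholesky routine, and none of this is FOCUS code.

        # Generic modified-Newton sketch: factorize H (shifted if needed) and solve H p = -g.
        import numpy as np

        def grad(x):            # gradient of the toy Rosenbrock objective
            return np.array([2 * (x[0] - 1) + 400 * x[0] * (x[0]**2 - x[1]),
                             200 * (x[1] - x[0]**2)])

        def hess(x):            # analytic Hessian of the toy objective
            return np.array([[2 + 1200 * x[0]**2 - 400 * x[1], -400 * x[0]],
                             [-400 * x[0], 200.0]])

        x = np.array([-1.2, 1.0])
        for it in range(50):
            H, g = hess(x), grad(x)
            tau = 0.0
            while True:                          # shift the diagonal until Cholesky succeeds
                try:
                    Lc = np.linalg.cholesky(H + tau * np.eye(2))
                    break
                except np.linalg.LinAlgError:
                    tau = max(2 * tau, 1e-3)
            step = np.linalg.solve(Lc.T, np.linalg.solve(Lc, -g))   # H p = -g via L L^T
            x = x + step
            if np.linalg.norm(g) < 1e-10:
                break
        print("minimizer ~", x, "after", it + 1, "iterations")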

  10. An Investigation of the Overlap Between the Statistical Discrete Gust and the Power Spectral Density Analysis Methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    The results of a NASA investigation of a claimed overlap between two gust response analysis methods, the Statistical Discrete Gust (SDG) Method and the Power Spectral Density (PSD) Method, are presented. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented for several different airplanes at several different flight conditions indicate that such an overlap does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  11. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  12. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  13. Determination of organophosphorus pesticides and their major degradation product residues in food samples by HPLC-UV.

    PubMed

    Peng, Guilong; He, Qiang; Lu, Ying; Mmereki, Daniel; Zhong, Zhihui

    2016-10-01

    A simple method based on dispersive solid-phase extraction (DSPE) and dispersive liquid-liquid microextraction based on solidification of floating organic droplets (DLLME-SFO) was developed for the extraction of chlorpyrifos (CP), chlorpyrifos-methyl (CPM), and their main degradation product 3,5,6-trichloro-2-pyridinol (TCP) in tomato and cucumber samples. The determination was carried out by high performance liquid chromatography with ultraviolet detection (HPLC-UV). In the DSPE-DLLME-SFO procedure, the analytes were first extracted with acetone. The clean-up of the extract by DSPE was carried out by directly adding activated carbon sorbent into the extract solution, followed by shaking and filtration. Under the optimum conditions, the proposed method was sensitive and showed good linearity within a range of 2-500 ng/g, with correlation coefficients (r) varying from 0.9991 to 0.9996. The enrichment factors ranged from 127 to 138. The limits of detection (LODs) were in the range of 0.12-0.68 ng/g, and the relative standard deviations (RSDs) for 50 ng/g of each analyte in tomato samples were in the range of 3.25-6.26 % (n = 5). The proposed method was successfully applied for the extraction and determination of the mentioned analyte residues in tomato and cucumber samples, and satisfactory results were obtained.

  14. SAM Pathogen Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.

  15. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    PubMed

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurements of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor and is demonstrated for SENSE and GRAPPA reconstructions for accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver, and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, allowing quantitative comparison between arbitrary k-space trajectories, image reconstruction, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
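    A bare-bones version of the pseudo multiple replica idea is sketched below in Python: many synthetic noise-only acquisitions at the measured receiver noise level are pushed through the same linear reconstruction, and the pixel-wise standard deviation over replicas gives the noise map from which SNR and g-factor maps would be formed. The single-channel inverse-FFT "reconstruction" is a placeholder for a real SENSE or GRAPPA pipeline.

        # Pseudo multiple replica noise mapping, stripped to its core; single channel,
        # plain inverse FFT as a stand-in for the actual linear reconstruction.
        import numpy as np

        rng = np.random.default_rng(4)
        N, n_replicas, sigma = 64, 200, 1.0       # image size, replica count, noise level (assumed)

        def recon(kspace):
            return np.fft.ifft2(kspace)           # placeholder linear reconstruction

        replicas = np.empty((n_replicas, N, N), dtype=complex)
        for i in range(n_replicas):
            noise_k = sigma * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
            replicas[i] = recon(noise_k)

        noise_map = replicas.std(axis=0)          # pixel-wise noise standard deviation (real-valued)
        print("mean reconstructed noise level:", float(noise_map.mean().round(6)))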

  16. 40 CFR 136.6 - Method modifications and analytical requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...

  17. GC-MS analyses of the volatiles of Houttuynia cordata Thunb.

    PubMed

    Yang, Zhan-Nan; Luo, Shi-Qiong; Ma, Jing; Wu, Dan; Hong, Liang; Yu, Zheng-Wen

    2016-09-01

    GC-MS is the basis of the analysis of plant volatiles. Several protocols employed for the assay have resulted in inconsistent results in the literature. We developed a GC-MS method, which was applied to analyze 25 volatiles (α-pinene, camphene, β-pinene, 2-methyl-2-pentenal, myrcene, (+)-limonene, eucalyptol, trans-2-hexenal, γ-terpinene, cis-3-hexeneyl-acetate, 1-hexanol, α-pinene oxide, cis-3-hexen-1-ol, trans-2-hexen-1-ol, decanal, linalool, acetyl-borneol, β-caryophyllene, 2-undecanone, 4-terpineol, borneol, decanol, eugenol, isophytol and phytol) of Houttuynia cordata Thunb. Linear behavior for all analytes was observed, with a linear regression relationship (r² > 0.9991) at the concentrations tested. Recoveries of the 25 analytes were 98.56-103.77% with RSDs < 3.0%. Solution extraction (SE), which involved addition of an internal standard, could avoid errors arising from sample preparation by steam distillation (SD) and solid-phase microextraction (SPME). Less sample material (≈0.05 g fresh leaves of H. cordata) could be used to determine the contents of the 25 analytes by our proposed method and, after collection, did not affect the normal physiological activity or growth of H. cordata. This method can be used to monitor the metabolic accumulation of H. cordata volatiles.

  18. Pesticide data for selected Wyoming streams, 1976-78

    USGS Publications Warehouse

    Butler, David L.

    1987-01-01

    In 1976, the U.S. Geological Survey, in cooperation with the Wyoming Department of Agriculture, started a monitoring program to determine pesticide concentrations in Wyoming streams. This program was incorporated into the water-quality data-collection system already in operation. Samples were collected at 20 sites for analysis of various insecticides, herbicides, polychlorinated biphenyls, and polychlorinated naphthalenes. The results through 1978 revealed small concentrations of pesticides. The compounds most commonly found in water and bottom-material samples were DDE (39 percent of the concentrations equal to or greater than the minimum reported concentrations of the analytical methods), DDD (20 percent), dieldrin (21 percent), and polychlorinated biphenyls (29 percent). The herbicides most commonly found in water samples were 2,4-D (29 percent of the concentrations equal to or greater than the minimum reported concentrations of the analytical method) and picloram (23 percent). Most concentrations were significantly less than concentrations thought to be harmful to freshwater aquatic life based on available toxicity data. However, for some pesticides, U.S. Environmental Protection Agency water-quality criteria for freshwater aquatic life are based on bioaccumulation factors that result in criteria concentrations less than the minimum reported concentrations of the analytical methods. It is not known whether certain pesticides were present at concentrations less than the minimum reported concentrations that exceeded these criteria.

  19. SAM Biotoxin Methods Query

    EPA Pesticide Factsheets

    Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.

  20. SAM Chemical Methods Query

    EPA Pesticide Factsheets

Laboratories measuring target chemical, radiochemical, pathogen, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery.

  1. Fast HPLC-DAD quantification of nine polyphenols in honey by using second-order calibration method based on trilinear decomposition algorithm.

    PubMed

    Zhang, Xiao-Hua; Wu, Hai-Long; Wang, Jian-Yao; Tu, De-Zhu; Kang, Chao; Zhao, Juan; Chen, Yao; Miu, Xiao-Xia; Yu, Ru-Qin

    2013-05-01

This paper describes the use of second-order calibration for the development of an HPLC-DAD method to quantify nine polyphenols in five kinds of honey samples. The sample treatment procedure was simplified considerably relative to traditional approaches. Baseline drift was overcome by treating the drift as additional factor(s) alongside the analytes of interest in the mathematical model. The polyphenol contents obtained by the alternating trilinear decomposition (ATLD) method were successfully used to distinguish different types of honey. The method shows good linearity (r > 0.99), rapidity (t < 7.60 min) and accuracy, making it a promising routine strategy for the identification and quantification of polyphenols in complex matrices. Copyright © 2012 Elsevier Ltd. All rights reserved.
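
    The second-order advantage exploited here comes from decomposing the chromatographic elution–wavelength–sample data cube into trilinear factors. The paper uses the ATLD algorithm; the sketch below is a generic PARAFAC-style alternating least squares decomposition in Python (not ATLD itself), with array sizes, component number and all data purely illustrative, to show how sample-mode loadings can be related back to analyte concentrations.

    ```python
    import numpy as np

    def khatri_rao(B, C):
        """Column-wise Kronecker product; rows ordered so index = j*K + k."""
        J, R = B.shape
        K, _ = C.shape
        return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

    def trilinear_als(X, R, n_iter=500):
        """Fit X[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] by alternating least squares."""
        I, J, K = X.shape
        rng = np.random.default_rng(0)
        B = rng.standard_normal((J, R))
        C = rng.standard_normal((K, R))
        X1 = X.reshape(I, J * K)                      # mode-1 unfolding (columns j*K + k)
        X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # mode-2 unfolding (columns i*K + k)
        X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # mode-3 unfolding (columns i*J + j)
        for _ in range(n_iter):
            A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Illustrative data cube: 40 elution times x 30 wavelengths x 8 samples, 2 components.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 40)[:, None]
    elu = np.exp(-((t - np.array([0.3, 0.6])) / 0.08) ** 2)   # elution profiles (40 x 2)
    spec = np.abs(rng.standard_normal((30, 2)))               # spectra (30 x 2)
    conc = rng.uniform(0.5, 2.0, size=(8, 2))                 # "true" concentrations (8 x 2)
    X = np.einsum('ir,jr,kr->ijk', elu, spec, conc)
    X += 0.01 * rng.standard_normal(X.shape)                  # measurement noise

    A, B, C = trilinear_als(X, R=2)
    # Relative concentrations live in the sample-mode loadings C (up to scale and order);
    # calibrating C against standards of known concentration gives the predictions.
    corr = np.corrcoef(C.T, conc.T)[:2, 2:]
    print(np.round(np.abs(corr), 3))   # each estimated component should match one analyte (~1.0)
    ```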

  2. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
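
    For readers unfamiliar with the safety-index formulation referenced above, the following sketch illustrates the classical first-order, two-variable case under the assumption of independent, normally distributed resistance and stress. All numbers and variable names are hypothetical, and the conversion of a target reliability into a multiplicative design factor is one simple illustration, not the paper's exact formulation.

    ```python
    from math import sqrt
    from statistics import NormalDist

    # Hypothetical resistance (R) and stress (S) statistics, both assumed normal.
    mu_R, sigma_R = 1200.0, 60.0   # e.g. material strength, MPa
    mu_S, sigma_S = 900.0, 80.0    # e.g. applied stress, MPa

    # Classical first-order safety index for the limit state g = R - S:
    beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)
    p_fail = NormalDist().cdf(-beta)          # probability that g < 0
    print(f"safety index beta = {beta:.2f}, P(failure) ~ {p_fail:.2e}")

    # One way to convert a specified reliability into a deterministic-style design factor:
    beta_target = NormalDist().inv_cdf(0.999)  # e.g. 99.9 % reliability target
    sigma_g = sqrt(sigma_R**2 + sigma_S**2)
    design_factor = (mu_S + beta_target * sigma_g) / mu_S
    print(f"design factor used in place of the conventional safety factor ~ {design_factor:.2f}")
    ```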

  3. Factors affecting dental service quality.

    PubMed

    Bahadori, Mohammadkarim; Raadabadi, Mehdi; Ravangard, Ramin; Baldacchino, Donia

    2015-01-01

Measuring dental clinic service quality is the first and most important step in improving care, and the quality provided plays an important role in patient satisfaction. The purpose of this paper is to identify factors affecting dental service quality from the patients' viewpoint. This cross-sectional, descriptive-analytical study was conducted in a dental clinic in Tehran between January and June 2014. A sample of 385 patients was selected from two work shifts using stratified sampling proportional to size and simple random sampling. The data were collected with a self-administered questionnaire designed for the purpose of the study and based on Parasuraman and Zeithaml's model of service quality; it consisted of two parts: the patients' demographic characteristics and a 30-item scale measuring the five dimensions of service quality. The collected data were analysed using SPSS 21.0 and Amos 18.0 with descriptive statistics, such as the mean and standard deviation, as well as analytical methods, including confirmatory factor analysis. The correlation coefficients for all dimensions were higher than 0.5. In this model, assurance (regression weight = 0.99) and tangibility (regression weight = 0.86) had, respectively, the highest and lowest effects on dental service quality. Parasuraman and Zeithaml's model is suitable for measuring quality in dental services, and the variables related to dental service quality were constructed according to the model. This is a pioneering study applying Parasuraman and Zeithaml's model and CFA in a dental setting, and it provides useful insights and guidance for dental service quality assurance.
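
    The study fits a confirmatory factor model in Amos. As a rough, hedged illustration of the factor-analytic step only, the sketch below runs an exploratory factor analysis on simulated 30-item, five-dimension questionnaire data with scikit-learn; the item counts, loadings and sample size are invented and the code does not reproduce the paper's CFA.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(42)
    n_respondents, n_factors, items_per_factor = 385, 5, 6

    # Simulate 5 latent service-quality dimensions, each driving 6 questionnaire items.
    latent = rng.standard_normal((n_respondents, n_factors))
    loadings = np.zeros((n_factors, n_factors * items_per_factor))
    for f in range(n_factors):
        loadings[f, f * items_per_factor:(f + 1) * items_per_factor] = rng.uniform(0.6, 0.9, items_per_factor)
    items = latent @ loadings + 0.5 * rng.standard_normal((n_respondents, n_factors * items_per_factor))

    fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
    fa.fit(items)

    # Each row of components_ is one recovered factor; large absolute loadings should
    # cluster on the 6 items written to measure the same dimension.
    print(np.round(fa.components_, 2))
    ```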

  4. Laser Ablation in situ (U-Th-Sm)/He and U-Pb Double-Dating of Apatite and Zircon: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    McInnes, B.; Danišík, M.; Evans, N.; McDonald, B.; Becker, T.; Vermeesch, P.

    2015-12-01

We present a new laser-based technique for rapid, quantitative and automated in situ microanalysis of U, Th, Sm, Pb and He for applications in geochronology, thermochronometry and geochemistry (Evans et al., 2015). This novel capability permits a detailed interrogation of the time-temperature history of rocks containing apatite, zircon and other accessory phases by providing both (U-Th-Sm)/He and U-Pb ages (plus trace element analyses) on single crystals. In situ laser microanalysis offers several advantages over conventional bulk crystal methods in terms of safety, cost, productivity and spatial resolution. We developed and integrated a suite of analytical instruments including a 193 nm ArF excimer laser system (RESOlution M-50A-LR), a quadrupole ICP-MS (Agilent 7700s), an Alphachron helium mass spectrometry system and swappable flow-through and ultra-high vacuum analytical chambers. The analytical protocol includes the following steps: mounting/polishing in PFA Teflon using methods similar to those adopted for fission track etching; laser He extraction and analysis using a 2 s ablation at 5 Hz and 2-3 J/cm2 fluence; He pit volume measurement using atomic force microscopy; and U-Th-Sm-Pb (plus optional trace element) analysis using traditional laser ablation methods. The major analytical challenges for apatite include the low U, Th and He contents relative to zircon and the elevated common Pb content. On the other hand, apatite typically has less extreme and less complex zoning of the parent isotopes (primarily U and Th). A freeware application has been developed for determining (U-Th-Sm)/He ages from the raw analytical data, and Iolite software was used for U-Pb age and trace element determination. In situ double-dating has successfully replicated conventional U-Pb and (U-Th)/He age variations in xenocrystic zircon from the diamondiferous Ellendale lamproite pipe, Western Australia, and increased zircon analytical throughput by a factor of 50 over conventional methods. Reference: Evans NJ, McInnes BIA, McDonald B, Becker T, Vermeesch P, Danisik M, Shelley M, Marillo-Sialer E and Patterson D. An in situ technique for (U-Th-Sm)/He and U-Pb double dating. J. Analytical Atomic Spectrometry, 30, 1636-1645.

  5. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  6. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  7. 7 CFR 91.23 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...

  8. Factors Affecting Higher Order Thinking Skills of Students: A Meta-Analytic Structural Equation Modeling Study

    ERIC Educational Resources Information Center

    Budsankom, Prayoonsri; Sawangboon, Tatsirin; Damrongpanit, Suntorapot; Chuensirimongkol, Jariya

    2015-01-01

    The purpose of the research is to develop and identify the validity of factors affecting higher order thinking skills (HOTS) of students. The thinking skills can be divided into three types: analytical, critical, and creative thinking. This analysis is done by applying the meta-analytic structural equation modeling (MASEM) based on a database of…

  9. Stable oxygen and hydrogen isotopes of brines - comparing isotope ratio mass spectrometry and isotope ratio infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Ahrens, Christian; Koeniger, Paul; van Geldern, Robert; Stadler, Susanne

    2013-04-01

Today's standard analytical methods for high-precision stable isotope analysis of fluids are gas-water equilibration and high-temperature pyrolysis coupled to isotope ratio mass spectrometers (IRMS). In recent years, relatively new laser-based analytical instruments have entered the market that are said to deliver high-precision isotope data on nearly any medium. This optical technique is referred to as isotope ratio infrared spectroscopy (IRIS). The objective of this study is to evaluate the capability of this new instrument type for highly saline solutions and to compare the analytical results with traditional IRMS analysis. It has been shown for the equilibration method that the presence of salts influences the measured isotope values depending on the salt concentration (see Lécuyer et al., 2009; Martineau et al., 2012). This so-called 'isotope salt effect' depends on the salt type and salt concentration. These factors change the activity in the fluid and therefore shift the isotope ratios measured by the equilibration method. Consequently, correction factors have to be applied to these analytical data. Direct conversion techniques like pyrolysis or the new laser instruments measure the water molecule from the sample directly and should therefore not suffer from the salt effect, i.e. no corrections of raw values are necessary. However, high salt concentrations might cause technical problems with the analytical hardware and may require labor-intensive sample preparation (e.g., vacuum distillation). This study evaluates the isotope salt effect for the IRMS equilibration technique (Thermo Gasbench II coupled to Delta Plus XP) and a laser-based IRIS instrument with liquid injection (Picarro L2120-i). Synthetic salt solutions (NaCl, KCl, CaCl2, MgCl2, MgSO4, CaSO4) and natural brines collected from the Stassfurt Salt Anticline (Germany; Stadler et al., 2012) were analysed with both techniques. Salt concentrations ranged from seawater salinity up to full saturation. References Lécuyer, C. et al. (2009). Chem. Geol., 264, 122-126. [doi:10.1016/j.chemgeo.2009.02.017] Martineau, F. et al. (2012). Chem. Geol., 291, 236-240. [doi:10.1016/j.chemgeo.2011.10.017] Stadler, S. et al. (2012). Chem. Geol., 294-295, 226-242. [doi:10.1016/j.chemgeo.2011.12.006]

  10. Net analyte signal standard addition method (NASSAM) as a novel spectrofluorimetric and spectrophotometric technique for simultaneous determination, application to assay of melatonin and pyridoxine

    NASA Astrophysics Data System (ADS)

    Asadpour-Zeynali, Karim; Bastami, Mohammad

    2010-02-01

In this work a new modification of the standard addition method, called the "net analyte signal standard addition method (NASSAM)", is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept and can be applied for the determination of an analyte in the presence of known interferents. In contrast to the H-point standard addition method, the accuracy of the predictions does not depend on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
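
    As context for the net analyte signal variant described above, a plain standard addition determination can be sketched as follows: fit the measured signal against the added standard concentration and extrapolate to the x-intercept. The numbers are hypothetical, and the sketch does not implement the net analyte signal projection step of NASSAM.

    ```python
    import numpy as np

    # Hypothetical standard-addition data: signal measured after spiking the sample
    # with increasing known concentrations of the analyte (same final volume assumed).
    added_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])        # e.g. ug/mL added
    signal = np.array([0.210, 0.342, 0.478, 0.606, 0.741])  # instrument response (a.u.)

    slope, intercept = np.polyfit(added_conc, signal, 1)

    # In standard addition the unknown concentration is the magnitude of the
    # x-intercept of the fitted line: 0 = slope*x + intercept  ->  x = -intercept/slope.
    c_unknown = -intercept / slope
    print(f"slope = {slope:.4f} a.u. per ug/mL")
    print(f"estimated analyte concentration in the sample ~ {abs(c_unknown):.2f} ug/mL")
    ```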

  11. Detecting a wide range of environmental contaminants in human blood samples--combining QuEChERS with LC-MS and GC-MS methods.

    PubMed

    Plassmann, Merle M; Schmidt, Magdalena; Brack, Werner; Krauss, Martin

    2015-09-01

Exposure to environmental pollution and consumer products may result in an uptake of chemicals into human tissues. Several studies have reported the presence of diverse environmental contaminants in human blood samples. However, previously developed multi-target methods for the analysis of human blood cover a fairly limited number of compounds stemming from one or two related compound groups. Thus, the sample preparation method QuEChERS (quick, easy, cheap, effective, rugged and safe) was tested for the extraction of 64 analytes covering a broad compound domain, followed by detection using liquid and gas chromatography coupled to mass spectrometry (LC- and GC-MS). Forty-seven analytes showed absolute recoveries above 70% in the first QuEChERS step, a simple liquid-liquid extraction (LLE) using acetonitrile and salt. The second QuEChERS step, a dispersive solid phase extraction, did not result in an overall improvement of recoveries or removal of background signals. Using solely the LLE step, eight analytes could subsequently be detected in human blood samples from the German Environmental Specimen Bank. With an LC-multiple reaction monitoring (MRM) method on a triple quadrupole instrument, better recoveries were achieved than with an older LC-high-resolution (HR) MS full scan orbitrap instrument, which required a higher concentration factor of the extracts. However, HRMS full scan methods could be used for the retrospective detection of additional compounds.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sahoo, Satiprasad; Dhar, Anirban, E-mail: anirban.dhar@gmail.com; Kar, Amlanjyoti

Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in the Hirakud command area of Odisha, India is carried out based on the Grey Analytic Hierarchy Process method (Grey–AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey–AHP combines the advantages of the classical analytic hierarchy process (AHP) and the grey clustering method for accurate estimation of weight coefficients, and is a new method for environmental vulnerability assessment. The environmental vulnerability index (EVI) uses natural, environmental and human-impact-related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, wind speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. The EVI map has been classified into four environmental vulnerability zones (EVZs), namely 'low', 'moderate', 'high', and 'extreme', encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. The EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view, and it shows a close correlation with elevation. The effectiveness of the zone classification is evaluated using the grey clustering method; the general effectiveness lies between the "better" and "common" classes. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on the Grey Analytic Hierarchy Process (AHP) • Effectiveness evaluation by means of a grey clustering method with support from AHP • Use of the grey approach eliminates excessive dependency on the experience of experts.
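
    For readers unfamiliar with the AHP step that Grey–AHP builds on, the sketch below derives criterion weights from a pairwise comparison matrix as the principal eigenvector and checks consistency with Saaty's consistency ratio. The 4x4 comparison matrix is hypothetical and much smaller than the thirteen-factor set used in the study; the grey-clustering refinement is not reproduced.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for four criteria (Saaty 1-9 scale),
    # reciprocal by construction: entry (i, j) = importance of criterion i over j.
    A = np.array([
        [1.0, 3.0, 5.0, 2.0],
        [1/3, 1.0, 3.0, 1/2],
        [1/5, 1/3, 1.0, 1/4],
        [1/2, 2.0, 4.0, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                    # normalized weight coefficients

    # Saaty consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    n = A.shape[0]
    lambda_max = eigvals[k].real
    CI = (lambda_max - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # tabulated random index
    CR = CI / RI
    print("weights:", np.round(weights, 3))
    print(f"lambda_max = {lambda_max:.3f}, CR = {CR:.3f} (acceptable if < 0.1)")
    ```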

  13. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior.

    PubMed

    Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M

    2014-12-01

    The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures-personality, cognitive style, motivated cognition-predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.

  14. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior

    PubMed Central

    Forlines, Clifton L.; Miller, Sarah M.; Regan, John R.; Irvine, John M.

    2014-01-01

    The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures—personality, cognitive style, motivated cognition—predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated “top-down” cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains. PMID:25983670

  15. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods.

    PubMed

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J

    2012-09-06

Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common methods for telomere length estimation, but has received recent criticism for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and the choice of singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst performing assays. Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.
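
    As background to the efficiency discussion above, relative telomere length in qPCR studies is commonly expressed as an efficiency-corrected T/S ratio (telomere signal over a single-copy reference gene, each relative to a calibrator sample). The sketch below follows the widely used Pfaffl-style formula with invented Cq values and efficiencies; it is illustrative only and is not the authors' analysis pipeline.

    ```python
    # Efficiency-corrected relative quantification (Pfaffl-style T/S ratio).
    # All Cq values and amplification efficiencies below are invented for illustration;
    # efficiencies are expressed as amplification factors per cycle (2.0 = 100 % efficiency).

    def ts_ratio(e_tel, cq_tel_cal, cq_tel_sample,
                 e_scg, cq_scg_cal, cq_scg_sample):
        """Telomere/single-copy-gene ratio of a sample relative to a calibrator."""
        telomere_term = e_tel ** (cq_tel_cal - cq_tel_sample)   # fold change, telomere assay
        scg_term = e_scg ** (cq_scg_cal - cq_scg_sample)        # fold change, reference assay
        return telomere_term / scg_term

    ratio = ts_ratio(e_tel=1.92, cq_tel_cal=15.8, cq_tel_sample=15.2,
                     e_scg=1.97, cq_scg_cal=22.4, cq_scg_sample=22.3)
    print(f"relative telomere length (T/S vs calibrator) = {ratio:.2f}")
    ```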

  16. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    PubMed Central

    2012-01-01

Background Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common methods for telomere length estimation, but has received recent criticism for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Results Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and the choice of singleplex or multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst performing assays. Conclusion Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments. PMID:22954451

  17. Using social autopsy to understand maternal, newborn, and child mortality in low-resource settings: a systematic review of the literature

    PubMed Central

    Moyer, Cheryl A.; Johnson, Cassidy; Kaselitz, Elizabeth; Aborigo, Raymond

    2017-01-01

Background: Social, cultural, and behavioral factors are often potent upstream contributors to maternal, neonatal, and child mortality, especially in low- and middle-income countries (LMICs). Social autopsy is one method of identifying the impact of such factors, yet it is unclear how social autopsy methods are being used in LMICs. Objective: This study aimed to identify the most common social autopsy instruments, describe overarching findings across populations and geography, and identify gaps in the existing social autopsy literature. Methods: A systematic search of the peer-reviewed literature from 2005 to 2016 was conducted. Studies were included if they were conducted in an LMIC, focused on maternal/neonatal/infant/child health, reported on the results of original research, and explicitly mentioned the use of a social autopsy tool. Results: Sixteen articles out of 1950 citations were included, representing research conducted in 11 countries. Five different tools were described, with two primary conceptual frameworks used to guide analysis: the Pathway to Survival and Three Delays models. Studies varied in their methods for identifying deaths, and recall periods for respondents ranged from 6 weeks to 5+ years. Across studies, recognition of danger signs appeared to be high, while subsequent care-seeking was inconsistent. Cost, distance to facility, and transportation issues were frequently cited barriers to care-seeking; additional barriers were reported that varied by location. Gaps in the social autopsy literature include the lack of harmonized tools and analytical methods that allow cross-study comparisons, discussion of the complexity of decision making for care-seeking, qualitative narratives that address inconsistencies in responses, and the explicit inclusion of perspectives from husbands and fathers. Conclusion: Despite the nascence of the field, research across 11 countries has included social autopsy methods, using a variety of tools, sampling methods, and analytical frameworks to determine how social factors impact maternal, neonatal, and child health outcomes. PMID:29261449

  18. Two-dimensional fracture analysis of piezoelectric material based on the scaled boundary node method

    NASA Astrophysics Data System (ADS)

    Shen-Shen, Chen; Juan, Wang; Qing-Hua, Li

    2016-04-01

A scaled boundary node method (SBNM) is developed for two-dimensional fracture analysis of piezoelectric material, which allows the stress and electric displacement intensity factors to be calculated directly and accurately. As a boundary-type meshless method, the SBNM employs the moving Kriging (MK) interpolation technique to approximate the unknown field in the circumferential direction, and therefore only a set of scattered nodes is required to discretize the boundary. As the shape functions satisfy the Kronecker delta property, no special techniques are required to impose the essential boundary conditions. In the radial direction, the SBNM seeks analytical solutions by making use of analytical techniques available for solving ordinary differential equations. Numerical examples are investigated and satisfactory solutions are obtained, which validates the accuracy and simplicity of the proposed approach. Project supported by the National Natural Science Foundation of China (Grant Nos. 11462006 and 21466012), the Foundation of Jiangxi Provincial Educational Committee, China (Grant No. KJLD14041), and the Foundation of East China Jiaotong University, China (Grant No. 09130020).

  19. A History of Collapse Factor Modeling and Empirical Data for Cryogenic Propellant Tanks

    NASA Technical Reports Server (NTRS)

    deQuay, Laurence; Hodge, B. Keith

    2010-01-01

One of the major technical problems associated with cryogenic liquid propellant systems used to supply rocket engines and their subassemblies and components is the phenomenon of propellant tank pressurant and ullage gas collapse. This collapse is mainly caused by heat transfer from the ullage gas to the tank walls and interfacing propellant, both of which are at temperatures well below those of the gas. Mass transfer between the ullage gas and cryogenic propellant can also occur and have minor to significant secondary effects that increase or decrease ullage gas collapse. Pressurant gas is supplied to cryogenic propellant tanks to initially pressurize them and then maintain the required pressures as propellant is expelled. The net effect of pressurant and ullage gas collapse is an increase in the total mass and mass flow rate requirements of the pressurant gases. For flight vehicles this leads to significant and undesirable weight penalties; for rocket engine component and subassembly ground test facilities it results in significantly increased facility hardware, construction, and operational costs. "Collapse factor" is a parameter used to quantify the pressurant and ullage gas collapse. Accurate prediction of collapse factors through analytical methods and modeling tools, and the collection and evaluation of collapse factor data, have evolved since the start of space exploration programs in the 1950s. Through the years, numerous documents have been published to preserve the results of studies of the collapse factor phenomenon. This paper presents a summary and selected details of the prior literature documenting these studies. Literature presenting heat and mass transfer studies that are related to collapse factor, or that provide important insights or analytical methods for its study, is also covered.

  20. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905

  1. X-ray fluorescence analysis of K, Al and trace elements in chloroaluminate melts

    NASA Astrophysics Data System (ADS)

    Shibitko, A. O.; Abramov, A. V.; Denisov, E. I.; Lisienko, D. G.; Rebrin, O. I.; Bunkov, G. M.; Rychkov, V. N.

    2017-09-01

Energy dispersive X-ray fluorescence spectrometry was applied to the quantitative determination of K, Al, Cr, Fe and Ni in chloroaluminate melts. To implement the external standard calibration method, an unconventional sample preparation procedure was proposed. A mixture of metal chlorides was melted in a quartz cell at 350-450 °C under a slightly excessive pressure of purified argon (99.999%). The composition of the prepared calibration samples (CSs) was verified by inductively coupled plasma atomic emission spectrometry (ICP-AES). The optimal conditions for excitation of the analytical lines were determined, and calibration curves for the analytes were obtained. Matrix effects in the synthesized samples had some influence on the analytical signal of certain elements. The CSs must be stored in an inert gas atmosphere. The precision, accuracy, and reproducibility of the quantitative chemical analysis were evaluated.

  2. Diaion HP-2MG modified with 2-(2,6-dichlorobenzylideneamino) benzenethiol as new adsorbent for solid phase extraction and flame atomic absorption spectrometric determination of metal ions.

    PubMed

    Ghaedi, M; Montazerozohori, M; Haghdoust, S; Zaare, F; Soylak, M

    2013-04-01

A solid phase extraction method for the enrichment-separation and determination of cobalt (Co(2+)), copper (Cu(2+)), nickel (Ni(2+)), zinc (Zn(2+)) and lead (Pb(2+)) ions in real samples has been proposed. The influences of analytical parameters such as pH, flow rate, eluent type and interference from matrix ions on the recoveries of the analytes were optimized. The limits of detection were found to be in the range of 1.6-3.9 µg L(-1), while the preconcentration factor for all of the metal ions under study was found to be 166, with a loading half time (t1/2) of less than 10 min. The procedure was applied for the enrichment-separation of the analyte ions in environmental samples with recoveries higher than 94.8% and relative SD <4.9% (N = 5).
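
    Two of the figures of merit reported above, the detection limit and the preconcentration factor, are commonly estimated as 3·s(blank)/slope and as the ratio of sample volume to final eluent volume, respectively. The sketch below uses invented calibration data, blank readings and volumes; it does not reproduce the authors' measurements.

    ```python
    import numpy as np

    # Hypothetical FAAS calibration for one metal ion: absorbance vs concentration (ug/L).
    conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])
    absorbance = np.array([0.002, 0.021, 0.051, 0.101, 0.199])
    slope, intercept = np.polyfit(conc, absorbance, 1)

    # Detection limit from blank replicates: LOD = 3 * s_blank / slope.
    blank_absorbance = np.array([0.0018, 0.0022, 0.0025, 0.0019, 0.0021,
                                 0.0023, 0.0020, 0.0024, 0.0017, 0.0022])
    lod = 3 * blank_absorbance.std(ddof=1) / slope

    # Preconcentration factor as the ratio of sample volume to final eluent volume.
    sample_volume_ml, eluent_volume_ml = 500.0, 3.0
    preconcentration_factor = sample_volume_ml / eluent_volume_ml

    print(f"calibration slope = {slope:.5f} absorbance per ug/L")
    print(f"LOD ~ {lod:.2f} ug/L")
    print(f"preconcentration factor ~ {preconcentration_factor:.0f}")
    ```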

  3. Achieving optimal SERS through enhanced experimental design.

    PubMed

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J; Goodacre, Royston

    2016-01-01

    One of the current limitations surrounding surface-enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal-based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd.

  4. An implicit LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring]

    NASA Technical Reports Server (NTRS)

    Buratynski, E. K.; Caughey, D. A.

    1984-01-01

    An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.

  5. Fuzzy comprehensive evaluation of multiple environmental factors for swine building assessment and control.

    PubMed

    Xie, Qiuju; Ni, Ji-Qin; Su, Zhongbin

    2017-10-15

In confined swine buildings, temperature, humidity, and air quality are all important for animal health and productivity. However, current swine building environmental control is based only on temperature, and evaluation and control methods based on multiple environmental factors are needed. In this paper, fuzzy comprehensive evaluation (FCE) theory was adopted for multi-factor assessment of environmental quality in two commercial swine buildings using real measurement data. An assessment index system and membership functions were established, and predetermined weights were assigned using the analytic hierarchy process (AHP) combined with expert knowledge. The results show that multiple factors such as temperature, humidity, and the concentrations of ammonia (NH3), carbon dioxide (CO2), and hydrogen sulfide (H2S) can be successfully integrated in FCE for swine building environment assessment. The FCE method has a high correlation coefficient of 0.737 with the single-factor evaluation (SFE) method, and it significantly increases sensitivity and provides an effective, integrative assessment. It can be used as part of environmental control and warning systems for swine building environment management to improve swine production and welfare. Copyright © 2017 Elsevier B.V. All rights reserved.
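
    To make the FCE step concrete, the sketch below combines an AHP-style weight vector with a fuzzy membership (evaluation) matrix using the weighted-average operator and picks the grade by maximum membership. The weights, membership values and grade labels are hypothetical and are not the paper's measured data.

    ```python
    import numpy as np

    # Hypothetical environmental factors and evaluation grades for a swine building.
    factors = ["temperature", "humidity", "NH3", "CO2", "H2S"]
    grades = ["good", "fair", "poor"]

    # AHP-derived weight vector W (hypothetical, sums to 1).
    W = np.array([0.30, 0.20, 0.25, 0.15, 0.10])

    # Fuzzy membership matrix R: row i gives the degree to which factor i
    # belongs to each grade (each row sums to 1 here; values are invented).
    R = np.array([
        [0.6, 0.3, 0.1],   # temperature
        [0.5, 0.4, 0.1],   # humidity
        [0.2, 0.5, 0.3],   # NH3
        [0.4, 0.4, 0.2],   # CO2
        [0.3, 0.4, 0.3],   # H2S
    ])

    # Weighted-average fuzzy operator: B = W . R, then normalize.
    B = W @ R
    B /= B.sum()
    verdict = grades[int(np.argmax(B))]      # maximum-membership principle
    print("composite membership:", np.round(B, 3))
    print("overall environmental grade:", verdict)
    ```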

  6. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  7. Simplex optimization of headspace factors for headspace gas chromatography determination of residual solvents in pharmaceutical products.

    PubMed

    Grodowska, Katarzyna; Parczewski, Andrzej

    2013-01-01

The purpose of the present work was to find the optimum conditions for headspace gas chromatography (HS-GC) determination of residual solvents that commonly appear in pharmaceutical products. Two groups of solvents were examined: group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were considered in the optimization: sample volume, equilibration time, equilibration temperature and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion and was considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of the optimization was traced and visualized in several ways simultaneously. The optimum HS conditions differed between the two groups of solvents, which shows that the influence of the experimental conditions (factors) depends on the analyte properties. The optimization resulted in a significant signal increase (from seven to fifteen times).
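
    The variable-size simplex strategy used here is closely related to the Nelder–Mead algorithm. As a hedged sketch (not the authors' software), the example below maximizes a made-up peak-area response over the four headspace factors with SciPy's Nelder–Mead implementation; the response surface and its optimum are entirely synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def peak_area(x):
        """Synthetic relative peak-area response for
        x = [sample volume (mL), equilibration time (min), temperature (C), NaCl (% w/v)]."""
        vol, time, temp, nacl = x
        # A smooth, single-peak surface with a made-up optimum at (2 mL, 20 min, 80 C, 20 %).
        return np.exp(-((vol - 2.0) / 1.0) ** 2
                      - ((time - 20.0) / 10.0) ** 2
                      - ((temp - 80.0) / 15.0) ** 2
                      - ((nacl - 20.0) / 10.0) ** 2)

    # Nelder-Mead minimizes, so optimize the negative response.
    result = minimize(lambda x: -peak_area(x),
                      x0=np.array([1.0, 10.0, 60.0, 5.0]),
                      method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 2000})

    print("optimum headspace conditions (synthetic):", np.round(result.x, 2))
    print("relative response at optimum:", round(float(peak_area(result.x)), 3))
    ```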

  8. Personalized dynamic prediction of death according to tumour progression and high-dimensional genetic factors: Meta-analysis with a joint model.

    PubMed

    Emura, Takeshi; Nakatochi, Masahiro; Matsui, Shigeyuki; Michimae, Hirofumi; Rondeau, Virginie

    2017-01-01

Developing a personalized risk prediction model for death is fundamental for improving patient care and touches on the realm of personalized medicine. The increasing availability of genomic information and large-scale meta-analytic data sets for clinicians has motivated extensions of traditional survival prediction based on the Cox proportional hazards model. The aim of our paper is to develop a personalized risk prediction formula for death according to genetic factors and dynamic tumour progression status based on meta-analytic data. To this end, we extend the existing joint frailty-copula model to allow for high-dimensional genetic factors. In addition, we propose a dynamic prediction formula for death given tumour progression events possibly occurring after treatment or surgery. For clinical use, we implement software for computing the prediction formula in the joint.Cox R package. We also develop a tool to validate the performance of the prediction formula by assessing the prediction error. We illustrate the method with a meta-analysis of individual patient data on ovarian cancer patients.

  9. Analytical analysis and implementation of a low-speed high-torque permanent magnet vernier in-wheel motor for electric vehicle

    NASA Astrophysics Data System (ADS)

    Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili

    2012-04-01

In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified using the time-stepping finite element method: the performance of the PMV machine predicted analytically is compared quantitatively with the finite element results, and the two agree well. Finally, experimental results are given to further demonstrate the validity of the analysis.

  10. Developing Metrics for Effective Teaching in Extension Education: A Multi-State Factor-Analytic and Psychometric Analysis of Effective Teaching

    ERIC Educational Resources Information Center

    McKim, Billy R.; Lawver, Rebecca G.; Enns, Kellie; Smith, Amy R.; Aschenbrener, Mollie S.

    2013-01-01

    To successfully educate the public about agriculture, food, and natural resources, we must have effective educators in both formal and nonformal settings. Specifically, this study, which is a valuable part of a larger sequential mixed-method study addressing effective teaching in formal and nonformal agricultural education, provides direction for…

  11. Evaluation of Time Management Behaviors and Its Related Factors in the Senior Nurse Managers, Kermanshah-Iran

    PubMed Central

    Ziapour, Arash; Khatony, Alireza; Jafari, Faranak; Kianipour, Neda

    2015-01-01

Background and Objective: Time management is an extensive concept that is associated with promoting the performance of managers. The present study was carried out to investigate time management behaviors and their related factors among senior nurse managers. Materials and Methods: In this descriptive-analytical study, 180 senior nurse managers were selected using the census method. The instrument for data collection was a standard time behavior questionnaire. Data were analyzed by descriptive and analytical statistics. Results: The findings showed that among the dimensions of time management behaviors, setting objectives and prioritization obtained the highest frequency and the mechanics of time management the lowest. Comparison of the mean scores of time management behaviors indicated significant differences by gender (p<0.05), age (p<0.001), education (p=0.015), job experience (p<0.001), managerial experience (p<0.001) and management rank (p<0.029). Conclusion: On the whole, senior nurse managers enjoyed favorable time management skills. Given the importance of time management behaviors, teaching these behaviors more systematically through regular educational programs can effectively promote the performance of senior nurse managers. PMID:25716413

  12. Simultaneous screening of four epidermal growth factor receptor antagonists from Curcuma longa via cell membrane chromatography online coupled with HPLC-MS.

    PubMed

    Sun, Meng; Ma, Wei-na; Guo, Ying; Hu, Zhi-gang; He, Lang-chong

    2013-07-01

The epidermal growth factor receptors (EGFRs) are significant targets for screening active compounds. In this work, an analytical method was established for rapid screening, separation, and identification of EGFR antagonists from Curcuma longa. Human embryonic kidney 293 cells with stable, high expression of EGFRs were used to prepare the cell membrane stationary phase in a cell membrane chromatography model for screening active compounds. Separation and identification of the retained chromatographic peaks was achieved by HPLC-MS. The active sites, docking extents and inhibitory effects of the active compounds were also demonstrated. The screening found that ar-turmerone, curcumin, demethoxycurcumin, and bisdemethoxycurcumin from Curcuma longa could be active components acting in a manner similar to gefitinib. Biological trials showed that all four compounds can inhibit EGFR protein secretion and cell growth in a dose-dependent manner and downregulate the phosphorylation of EGFRs. This analytical method proved fast and effective for the screening, separation and identification of active compounds from a complex system and should be useful for drug discovery with natural medicinal herbs. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Some Comments on Mapping from Disease-Specific to Generic Health-Related Quality-of-Life Scales

    PubMed Central

    Palta, Mari

    2013-01-01

    An article by Lu et al. in this issue of Value in Health addresses the mapping of treatment or group differences in disease-specific measures (DSMs) of health-related quality of life onto differences in generic health-related quality-of-life scores, with special emphasis on how the mapping is affected by the reliability of the DSM. In the proposed mapping, a factor analytic model defines a conversion factor between the scores as the ratio of factor loadings. Hence, the mapping applies to convert true underlying scales and has desirable properties facilitating the alignment of instruments and understanding their relationship in a coherent manner. It is important to note, however, that when DSM means or differences in mean DSMs are estimated, their mapping is still of a measurement error–prone predictor, and the correct conversion coefficient is the true mapping multiplied by the reliability of the DSM in the relevant sample. In addition, the proposed strategy for estimating the factor analytic mapping in practice requires assumptions that may not hold. We discuss these assumptions and how they may be the reason we obtain disparate estimates of the mapping factor in an application of the proposed methods to groups of patients. PMID:23337233
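
    The attenuation point made above can be written compactly. In a hedged notation (the symbols below are ours, not the article's): if λ_G and λ_D are the generic-measure and DSM factor loadings on the common latent construct, the true conversion factor is their ratio, and mapping an observed DSM group difference multiplies that ratio by the DSM's reliability in the sample at hand.

    ```latex
    % Hedged notation (ours, not the article's): \lambda_G, \lambda_D are factor loadings,
    % \rho_D is the reliability of the DSM in the relevant sample.
    \[
      c_{\text{true}} = \frac{\lambda_G}{\lambda_D},
      \qquad
      \mathbb{E}\!\left[\Delta \bar{G}\right] \approx
      \frac{\lambda_G}{\lambda_D}\,\rho_D\,\Delta \bar{D}
    \]
    ```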

  14. [Urodynamics foundations: contractile potency and urethral doppler].

    PubMed

    Benítez Navío, Julio; Caballero Gómez, Pilar; Delgado Elipe, Ildefonso

    2002-12-01

To calculate the bladder softening factor, elastic constant and contractile potency, bladder behavior was modeled as that of a spring (see articles 1 and 2 published in this issue). Using flowmetry, Doppler ultrasound and abdominal pressure (transrectal pressure-recording catheter), an analytical solution that permits calculation of the factors defining bladder behavior was sought. Doppler ultrasound provides the urine velocity through the prostatic urethra and therefore allows calculation of bladder contractile potency. The equations are solved to reach an analytical solution yielding the factors that define bladder behavior: bladder contractile potency, the detrusor elastic constant (treating the detrusor as a spring), and the muscle resistance to movement, all made possible by the urine velocity obtained from Doppler ultrasound. The bladder voiding phase is defined by these factors; storage phase behavior can be inferred indirectly. Only uroflowmetry curves, Doppler ultrasound and the abdominal pressure value are used, so the approach complies with so-called non-invasive urodynamics, although for us it is simply another phase in the biomechanical study of the detrusor muscle. The main conclusion is the addition of Doppler ultrasound to the urodynamicist's armamentarium as an essential instrument for understanding bladder dynamics and calculating the factors that define bladder behavior. It is not a change of focus but of methods, gaining knowledge while diminishing invasiveness.

  15. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.

    PubMed

    Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J

    2015-05-01

In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of the analyte into the matrix crystals, the sample solutions were prepared without matrix and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers and lipids show that analyte ions that were completely suppressed with the conventional dried-droplet method could be effectively recovered using our method. Our findings suggest that the incorporation of analytes into the matrix crystals contributes substantially to ASE. By reducing ASE, our method should be useful for direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Demonstration/Validation of the Snap Sampler Passive Ground Water Sampling Device for Sampling Inorganic Analytes at the Former Pease Air Force Base

    DTIC Science & Technology

    2009-07-01

…sampler is also an economic alternative for sampling for inorganic analytes. …headspace and then covered with two layers of tightly fitting aluminum foil. To dissolve the analytes, the solutions were stirred for approximately…

  17. Development of a Certified Reference Material (NMIJ CRM 7203-a) for Elemental Analysis of Tap Water.

    PubMed

    Zhu, Yanbei; Narukawa, Tomohiro; Inagaki, Kazumi; Miyashita, Shin-Ichi; Kuroiwa, Takayoshi; Ariga, Tomoko; Kudo, Izumi; Koguchi, Masae; Heo, Sung Woo; Suh, Jung Ki; Lee, Kyoung-Seok; Yim, Yong-Hyeon; Lim, Youngran

    2017-01-01

A certified reference material (CRM), NMIJ CRM 7203-a, was developed for the elemental analysis of tap water. At least two independent analytical methods were applied to characterize the certified value of each element. The elements certified in the present CRM were as follows: Al, As, B, Ca, Cd, Cr, Cu, Fe, K, Mg, Mn, Mo, Na, Ni, Pb, Rb, Sb, Se, Sr, and Zn. The certified value for each element was given as the (property value ± expanded uncertainty), with a coverage factor of 2 for the expanded uncertainty. The expanded uncertainties were estimated while considering the contribution of the analytical methods, the method-to-method variance, the sample homogeneity, the long-term stability, and the concentrations of the standard solutions for calibration. The concentration of Hg (0.39 μg kg-1) was given as an information value, since loss of Hg was observed when the sample was stored at room temperature and exposed to light. The certified values of selected elements were confirmed by a co-analysis carried out independently by the NMIJ (Japan) and the KRISS (Korea).
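
    The certified-value format above (property value ± expanded uncertainty, coverage factor 2) follows the usual practice of combining standard uncertainty components in quadrature and multiplying by k. The sketch below uses invented component values for a single element; it is illustrative only and is not the NMIJ uncertainty budget.

    ```python
    from math import sqrt

    # Hypothetical relative uncertainty components for one element (in %):
    components = {
        "between-method variance": 0.40,
        "characterization (analytical methods)": 0.35,
        "homogeneity": 0.20,
        "long-term stability": 0.25,
        "calibration standard concentration": 0.15,
    }

    u_c = sqrt(sum(u**2 for u in components.values()))  # combined standard uncertainty
    k = 2                                               # coverage factor (~95 % coverage)
    U = k * u_c                                         # expanded uncertainty

    value = 25.0  # hypothetical certified mass fraction, e.g. in micrograms per kg
    print(f"combined standard uncertainty u_c = {u_c:.2f} %")
    print(f"certified value: {value} ± {value * U / 100:.2f} (k = {k})")
    ```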

  18. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    NASA Astrophysics Data System (ADS)

    Kolokolova, L.; Petrov, D.

    2017-12-01

Cometary dust is dominated by particles of complex shape and structure, which are often considered to be fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very demanding of computer time and memory. We present a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it was found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all the advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements can themselves be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique for simulating light scattering by particles of complex shape and surface structure. In this paper, we represent cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of random complex shape. We survey the angular and spectral dependences of intensity and polarization resulting from light scattering by such particles, studying how they depend on particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to cometary observations.

  19. Static penetration resistance of soils

    NASA Technical Reports Server (NTRS)

    Durgunoglu, H. T.; Mitchell, J. K.

    1973-01-01

    Model test results were used to define the failure mechanism associated with the static penetration resistance of cohesionless and low-cohesion soils. Knowledge of this mechanism has permitted the development of a new analytical method for calculating the ultimate penetration resistance which explicitly accounts for penetrometer base apex angle and roughness, soil friction angle, and the ratio of penetration depth to base width. Curves relating the bearing capacity factors to the soil friction angle are presented for failure in general shear. Strength parameters and penetrometer interaction properties of a fine sand were determined and used as the basis for prediction of the penetration resistance encountered by wedge, cone, and flat-ended penetrometers of different surface roughness using the proposed analytical method. Because of the close agreement between predicted values and values measured in laboratory tests, it appears possible to deduce in-situ soil strength parameters and their variation with depth from the results of static penetration tests.

  20. Stabilization of glucose-oxidase in the graphene paste for screen-printed glucose biosensor

    NASA Astrophysics Data System (ADS)

    Pepłowski, Andrzej; Janczak, Daniel; Jakubowska, Małgorzata

    2015-09-01

Various methods and materials for enzyme stabilization within a screen-printed graphene sensor were analyzed. The main goal was to develop a technology allowing immediate printing of the biosensors in a single printing process. The factors considered were the toxicity of the materials used, the ability of the material to be screen-printed (squeezed through the printing mesh) and the temperatures required in the fabrication process. The performance of the examined sensors was measured by amperometry, and the measurements were then analysed. The analysis results were compared with the medical requirements. The parameters calculated were the correlation coefficient between the analyte concentration and the measured electrical current (0.986) and the coefficient of variation at the particular analyte concentrations used as calibration points. Variation of the measured values was significant only in ranges close to 0, decreasing for the concentrations of clinical importance. These outcomes justify further development of graphene-based biosensors fabricated by printing techniques.
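
    The figures of merit quoted above (the correlation coefficient of the calibration and the coefficient of variation at each calibration point) can be computed from replicate amperometric readings as sketched below; the glucose concentrations and currents are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical calibration data: replicate current readings (uA) at each glucose level (mM).
    conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    replicates = np.array([
        [0.21, 0.24, 0.19],
        [0.42, 0.45, 0.40],
        [0.83, 0.86, 0.80],
        [1.61, 1.66, 1.58],
        [3.20, 3.25, 3.15],
    ])

    mean_current = replicates.mean(axis=1)

    # Correlation between concentration and mean measured current.
    r = np.corrcoef(conc, mean_current)[0, 1]

    # Coefficient of variation (%) at each calibration point.
    cv = 100 * replicates.std(axis=1, ddof=1) / mean_current

    print(f"correlation coefficient r = {r:.3f}")
    for c, v in zip(conc, cv):
        print(f"  {c:5.1f} mM: CV = {v:4.1f} %")
    ```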

  1. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that this powerful analytical tool promises ever-growing applications in the biomedical field as its development continues. PMID:21320052

  2. Preliminary measurement of gas concentrations of perfluoropropane using an analytical weighing balance.

    PubMed

    Clarkson, Douglas McG; Manna, Avinish; Hero, Mark

    2014-02-01

    We describe the use of an analytical weighing balance with a measurement accuracy of 0.00001 g for determination of concentrations of perfluoropropane (C3F8) gas used in ophthalmic surgical vitrectomy procedures. A range of test eyes corresponding to an eye volume of 6.1 ml were constructed using 27 gauge needle exit ducts and, separately, 20 gauge (straight) and 23 gauge (angled) entrance ports. This method allowed determination of concentration levels in the sample preparation syringe and also levels in the test eyes. It was determined that key factors influencing gas concentration accuracy were the method of gas fill and the dead space of the gas preparation/delivery system, with a significant contribution arising from the use of the particle filter. The weighing balance technique was identified as an appropriate technique for estimation of gas concentrations. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
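
    A hedged sketch of the underlying arithmetic: if the same volume is weighed filled with air and then with the prepared mixture, the C3F8 fraction follows from ideal-gas densities. Nominal molar masses and ambient conditions are assumed, and the paper's dead-space and filter corrections are not modeled.

        M_AIR = 28.97e-3    # kg/mol, nominal dry air
        M_C3F8 = 188.02e-3  # kg/mol
        R, T, P = 8.314, 293.15, 101325.0  # SI units, ambient conditions assumed

        def c3f8_fraction(volume_m3, extra_mass_kg):
            """Mole fraction of C3F8 in an air/C3F8 mixture, from the extra mass of the
            filled volume relative to the same volume of air (ideal-gas densities)."""
            rho_air = P * M_AIR / (R * T)
            rho_c3f8 = P * M_C3F8 / (R * T)
            return (extra_mass_kg / volume_m3) / (rho_c3f8 - rho_air)

        # Example: a 50 ml syringe of mixture weighs 66 mg more than the same volume of air.
        print(round(100 * c3f8_fraction(50e-6, 66e-6), 1), "% C3F8")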

  3. Stakeholder prioritization of zoonoses in Japan with analytic hierarchy process method.

    PubMed

    Kadohira, M; Hill, G; Yoshizaki, R; Ota, S; Yoshikawa, Y

    2015-05-01

    There exists an urgent need to develop iterative risk assessment strategies for zoonotic diseases. The aim of this study is to develop a method of prioritizing 98 zoonoses derived from animal pathogens in Japan and to involve four major groups of stakeholders: researchers, physicians, public health officials, and citizens. We used a combination of risk profiling and the analytic hierarchy process (AHP). Risk profiling was accomplished with semi-quantitative analysis of existing public health data. AHP data collection was performed by administering questionnaires to the four stakeholder groups. Results showed that researchers and public health officials regarded case fatality as the most important factor, while physicians and citizens placed more weight on diagnosis and prevention, respectively. Most of the six top-ranked diseases were similar among all stakeholders. Transmissible spongiform encephalopathy, severe acute respiratory syndrome, and Ebola fever were ranked first, second, and third, respectively.
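
    A minimal sketch of the AHP weighting step used with such questionnaires: criterion weights are taken as the principal eigenvector of a pairwise comparison matrix, followed by a consistency check. The matrix and criterion labels below are hypothetical, not the study's data.

        import numpy as np

        # Hypothetical pairwise comparisons for four criteria
        # (e.g., case fatality, diagnosis, prevention, incidence), Saaty 1-9 scale.
        A = np.array([
            [1.0, 3.0, 5.0, 4.0],
            [1/3, 1.0, 2.0, 3.0],
            [1/5, 1/2, 1.0, 2.0],
            [1/4, 1/3, 1/2, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
        cr = ci / 0.90                            # random index for n = 4 (Saaty)
        print("weights:", np.round(weights, 3), " CR =", round(cr, 3))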

  4. An algorithm for analytical solution of basic problems featuring elastostatic bodies with cavities and surface flaws

    NASA Astrophysics Data System (ADS)

    Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.

    2018-03-01

    Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.

  5. Best-Matched Internal Standard Normalization in Liquid Chromatography-Mass Spectrometry Metabolomics Applied to Environmental Samples.

    PubMed

    Boysen, Angela K; Heal, Katherine R; Carlson, Laura T; Ingalls, Anitra E

    2018-01-16

    The goal of metabolomics is to measure the entire range of small organic molecules in biological samples. In liquid chromatography-mass spectrometry-based metabolomics, formidable analytical challenges remain in removing the nonbiological factors that affect chromatographic peak areas. These factors include sample matrix-induced ion suppression, chromatographic quality, and analytical drift. The combination of these factors is referred to as obscuring variation. Some metabolomics samples can exhibit intense obscuring variation due to matrix-induced ion suppression, rendering large amounts of data unreliable and difficult to interpret. Existing normalization techniques have limited applicability to these sample types. Here we present a data normalization method to minimize the effects of obscuring variation. We normalize peak areas using a batch-specific normalization process, which matches measured metabolites with isotope-labeled internal standards that behave similarly during the analysis. This method, called best-matched internal standard (B-MIS) normalization, can be applied to targeted or untargeted metabolomics data sets and yields relative concentrations. We evaluate and demonstrate the utility of B-MIS normalization using marine environmental samples and laboratory grown cultures of phytoplankton. In untargeted analyses, B-MIS normalization allowed for inclusion of mass features in downstream analyses that would have been considered unreliable without normalization due to obscuring variation. B-MIS normalization for targeted or untargeted metabolomics is freely available at https://github.com/IngallsLabUW/B-MIS-normalization .
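
    A hedged, simplified sketch of the matching idea (not the published B-MIS code): for each metabolite, normalize its peak areas in replicate pooled samples by each internal standard in turn and keep the standard that minimizes the relative standard deviation of the normalized areas.

        import numpy as np

        def best_matched_is(metabolite_areas, is_areas):
            """metabolite_areas: (n_samples,) peak areas of one metabolite in replicate pooled runs.
            is_areas: (n_samples, n_standards) peak areas of the internal standards in the same runs.
            Returns the index of the best-matched standard and the normalized areas."""
            rel_is = is_areas / is_areas.mean(axis=0)            # run-to-run behaviour of each IS
            normalized = metabolite_areas[:, None] / rel_is      # candidate normalizations
            rsd = normalized.std(axis=0, ddof=1) / normalized.mean(axis=0)
            best = int(np.argmin(rsd))
            return best, normalized[:, best]

        rng = np.random.default_rng(0)
        drift = np.linspace(1.0, 0.7, 6)                         # simulated analytical drift
        is_areas = np.column_stack([drift * 1e5 * (1 + 0.02 * rng.standard_normal(6)),
                                    1e5 * (1 + 0.05 * rng.standard_normal(6))])
        met = drift * 5e4 * (1 + 0.03 * rng.standard_normal(6))
        best, norm = best_matched_is(met, is_areas)
        print("best-matched IS index:", best)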

  6. Conformational and mechanical changes of DNA upon transcription factor binding detected by a QCM and transmission line model.

    PubMed

    de-Carvalho, Jorge; Rodrigues, Rogério M M; Tomé, Brigitte; Henriques, Sílvia F; Mira, Nuno P; Sá-Correia, Isabel; Ferreira, Guilherme N M

    2014-04-21

    A novel quartz crystal microbalance (QCM) analytical method is developed based on the transmission line model (TLM) algorithm to analyze the binding of transcription factors (TFs) to immobilized DNA oligoduplexes. The method is used to characterize the mechanical properties of biological films through the estimation of the film dynamic shear moduli, G' and G'', and the film thickness. Using the Saccharomyces cerevisiae transcription factor Haa1 (Haa1DBD) as a biological model, two sensors were prepared by immobilizing DNA oligoduplexes, one containing the Haa1 recognition element (HRE(wt)) and another with a random sequence (HRE(neg)) used as a negative control. The immobilization of DNA oligoduplexes was followed in real time, and we show that DNA strands initially adsorb with little or no tilting, lying flat close to the surface, and then lift off the surface, leading to final film tilting angles of 62.9° and 46.7° for HRE(wt) and HRE(neg), respectively. Furthermore, we show that the binding of Haa1DBD to HRE(wt) leads to a more ordered and compact film, and forces a 31.7° bending of the immobilized HRE(wt) oligoduplex. This work demonstrates the suitability of the QCM to monitor the specific binding of TFs to immobilized DNA sequences and provides an analytical methodology to study protein-DNA biophysics and kinetics.

  7. Fabrication of plasmonic cavity arrays for SERS analysis

    NASA Astrophysics Data System (ADS)

    Li, Ning; Feng, Lei; Teng, Fei; Lu, Nan

    2017-05-01

    Plasmonic cavity arrays are ideal substrates for surface-enhanced Raman scattering analysis because they can provide hot spots with large volume for analyte molecules. The large area increases the probability of placing more analyte molecules on hot spots and leads to high reproducibility. Therefore, developing a simple method for creating cavity arrays is important. Herein, we demonstrate how to fabricate V- and W-shaped cavity arrays by a simple method based on self-assembly. Briefly, the V- and W-shaped cavity arrays are fabricated by KOH etching of silicon (Si) slides patterned with a nanohole array and a nanoring array, respectively. The nanohole array is generated by reactive ion etching of a Si slide assembled with a monolayer of polystyrene (PS) spheres. The nanoring array is generated by reactive ion etching of a Si slide covered with a monolayer of octadecyltrichlorosilane before self-assembling the PS spheres. Both plasmonic V and W cavity arrays provide a large hot area, which increases the probability for analyte molecules to deposit on the hot spots. Taking 4-mercaptopyridine as the analyte probe, the enhancement factor reaches 2.99 × 10⁵ and 9.97 × 10⁵ for the plasmonic V cavity and W cavity array, respectively. The relative standard deviations of the plasmonic V and W cavity arrays are 6.5% and 10.2%, respectively, according to spectra collected on 20 random spots.
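
    A minimal sketch of the usual substrate enhancement factor arithmetic, EF = (I_SERS / N_SERS) / (I_ref / N_ref); the intensities and molecule counts below are placeholders, not values from the paper.

        def sers_enhancement_factor(i_sers, n_sers, i_ref, n_ref):
            """Substrate enhancement factor from SERS and normal-Raman intensities
            and the number of probed molecules in each measurement."""
            return (i_sers / n_sers) / (i_ref / n_ref)

        # Placeholder numbers for a probe molecule on a cavity array vs. a bulk reference.
        ef = sers_enhancement_factor(i_sers=1.2e4, n_sers=4.0e6, i_ref=5.0e2, n_ref=5.0e10)
        print(f"EF = {ef:.2e}")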

  8. Fabrication of plasmonic cavity arrays for SERS analysis.

    PubMed

    Li, Ning; Feng, Lei; Teng, Fei; Lu, Nan

    2017-05-05

    Plasmonic cavity arrays are ideal substrates for surface-enhanced Raman scattering analysis because they can provide hot spots with large volume for analyte molecules. The large area increases the probability of placing more analyte molecules on hot spots and leads to high reproducibility. Therefore, developing a simple method for creating cavity arrays is important. Herein, we demonstrate how to fabricate V- and W-shaped cavity arrays by a simple method based on self-assembly. Briefly, the V- and W-shaped cavity arrays are fabricated by KOH etching of silicon (Si) slides patterned with a nanohole array and a nanoring array, respectively. The nanohole array is generated by reactive ion etching of a Si slide assembled with a monolayer of polystyrene (PS) spheres. The nanoring array is generated by reactive ion etching of a Si slide covered with a monolayer of octadecyltrichlorosilane before self-assembling the PS spheres. Both plasmonic V and W cavity arrays provide a large hot area, which increases the probability for analyte molecules to deposit on the hot spots. Taking 4-mercaptopyridine as the analyte probe, the enhancement factor reaches 2.99 × 10⁵ and 9.97 × 10⁵ for the plasmonic V cavity and W cavity array, respectively. The relative standard deviations of the plasmonic V and W cavity arrays are 6.5% and 10.2%, respectively, according to spectra collected on 20 random spots.

  9. 3-MCPD in food other than soy sauce or hydrolysed vegetable protein (HVP).

    PubMed

    Baer, Ines; de la Calle, Beatriz; Taylor, Philip

    2010-01-01

    This review gives an overview of current knowledge about 3-monochloropropane-1,2-diol (3-MCPD) formation and detection. Although 3-MCPD is often mentioned with regard to soy sauce and acid-hydrolysed vegetable protein (HVP), and much research has been done in that area, the emphasis here is placed on other foods. This contaminant can be found in a great variety of foodstuffs and is difficult to avoid in our daily nutrition. Despite its low concentration in most foods, its carcinogenic properties are of general concern. Its formation is a multivariate problem influenced by factors such as heat, moisture and sugar/lipid content, depending on the type of food and respective processing employed. Understanding the formation of this contaminant in food is fundamental to not only preventing or reducing it, but also developing efficient analytical methods of detecting it. Considering the differences between 3-MCPD-containing foods, and the need to test for the contaminant at different levels of food processing, one would expect a variety of analytical approaches. In this review, an attempt is made to provide an up-to-date list of available analytical methods and to highlight the differences among these techniques. Finally, the emergence of 3-MCPD esters and analytical techniques for them are also discussed here, although they are not the main focus of this review.

  10. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    NASA Technical Reports Server (NTRS)

    Dobrinskaya, Tatiana

    2015-01-01

    This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. With maneuver optimization, large maneuvers that were previously performed on thrusters can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as the solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the optimized case obtained, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which relies on a computational solution. It was shown that the attitude profiles and the torque reduction match well for the two optimization methods, and simulations using the ISS flight software showed similar propellant consumption for both. The analytical solution proposed in this paper has major benefits over the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, making maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, allows a maneuver to be performed without communication with the ground; it also reduces the probability of command errors. The suggested analytical solution provides a method of maneuver optimization that is less complicated, automatic, and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS but also for other orbiting space vehicles.

  11. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method that uses a Bayesian analytical method to classify time series data from the international emissions trading market generated by agent-based simulation, and compares it with a discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods express the time series data as distances in the mapped space, which makes understanding and inference easier than working with the raw time series; (2) the methods can analyze uncertain time series data, including stationary and non-stationary processes, using the distances obtained via agent-based simulation; and (3) the Bayesian analytical method can resolve a 1% difference in the emission reduction targets of agents.

  12. Prediction of light aircraft interior noise

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.; Morales, D. A.

    1976-01-01

    At the present time, predictions of aircraft interior noise depend heavily on empirical correction factors derived from previous flight measurements. However, to design for acceptable interior noise levels and to optimize acoustic treatments, analytical techniques which do not depend on empirical data are needed. This paper describes a computerized interior noise prediction method for light aircraft. An existing analytical program (developed for commercial jets by Cockburn and Jolly in 1968) forms the basis of some modal analysis work which is described. The accuracy of this modal analysis technique for predicting low-frequency coupled acoustic-structural natural frequencies is discussed along with trends indicating the effects of varying parameters such as fuselage length and diameter, structural stiffness, and interior acoustic absorption.

  13. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  14. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  15. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  16. 77 FR 41336 - Analytical Methods Used in Periodic Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-13

    ... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...

  17. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  18. 40 CFR 141.704 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  19. Measuring salivary analytes from free-ranging monkeys

    PubMed Central

    Higham, James P.; Vitale, Alison; Rivera, Adaris Mas; Ayala, James E.; Maestripieri, Dario

    2014-01-01

    Studies of large free-ranging mammals have been revolutionized by non-invasive methods for assessing physiology, which usually involve the measurement of fecal or urinary biomarkers. However, such techniques are limited by numerous factors. To expand the range of physiological variables measurable non-invasively from free-ranging primates, we developed techniques for sampling monkey saliva by offering monkeys ropes with oral swabs sewn on the ends. We evaluated different attractants for encouraging individuals to offer samples, and proportions of individuals in different age/sex categories willing to give samples. We tested the saliva samples we obtained in three commercially available assays: cortisol, Salivary Alpha Amylase, and Secretory Immunoglobulin A. We show that habituated free-ranging rhesus macaques will give saliva samples voluntarily without training, with 100% of infants, and over 50% of adults willing to chew on collection devices. Our field methods are robust even for analytes that show poor recovery from cotton, and/or that have concentrations dependent on salivary flow rate. We validated the cortisol and SAA assays for use in rhesus macaques by showing aspects of analytical validation, such as that samples dilute linearly and in parallel to assay standards. We also found that values measured correlated with biologically meaningful characteristics of sampled individuals (age and dominance rank). The SIgA assay tested did not react to samples. Given the wide range of analytes measurable in saliva but not in feces or urine, our methods considerably improve our ability to study physiological aspects of the behavior and ecology of free-ranging primates, and are also potentially adaptable to other mammalian taxa. PMID:20837036

  20. Missed detection of significant positive and negative shifts in gentamicin assay: implications for routine laboratory quality practices.

    PubMed

    Koerbin, Gus; Liu, Jiakai; Eigenstetter, Alex; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping

    2018-02-15

    A product recall was issued for the Roche/Hitachi Cobas Gentamicin II assays on 25th May 2016 in Australia, after a 15-20% positive analytical shift was discovered. Laboratories were advised to employ the Thermo Fisher Gentamicin assay as an alternative. Following the reintroduction of the revised assay on 12th September 2016, a second reagent recall was made on 20th March 2017 after the discovery of a 20% negative analytical shift due to an erroneous instrument adjustment factor. The practices of an index laboratory were examined to determine how the analytical shifts evaded detection by routine internal quality control (IQC) and external quality assurance (EQA) systems. The ability of patient-result-based approaches, including the moving average (MovAvg) and moving sum of outliers (MovSO) approaches, to detect these shifts was also examined. Internal quality control data of the index laboratory were acceptable prior to the product recall. The practice of adjusting the IQC target following a change in assay method resulted in the missed negative shift when the revised Roche assay was reintroduced. While the EQA data of the Roche subgroup showed a clear negative bias relative to other laboratory methods, the results were considered a possible 'matrix effect'. The MovAvg method detected the positive shift before the product recall. The MovSO method did not detect the negative shift in the index laboratory, but did so in another laboratory 5 days before the second product recall. There are gaps in current laboratory quality practices that leave room for analytical errors to evade detection.
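
    A hedged sketch of a patient-result-based moving average check of the kind referred to above: track the trailing mean of patient results and flag when it drifts beyond a limit derived from a baseline period. The window, limits and simulated shift are illustrative, not the MovAvg/MovSO settings used by the laboratories.

        import numpy as np

        def moving_average_flags(results, window=50, baseline=200, z=3.0):
            """Flag positions where the trailing moving average of patient results
            drifts from the baseline mean by more than z standard errors of a window mean."""
            results = np.asarray(results, dtype=float)
            base = results[:baseline]
            limit = z * base.std(ddof=1) / np.sqrt(window)
            ma = np.convolve(results, np.ones(window) / window, mode="valid")
            drift = np.abs(ma - base.mean())
            return np.flatnonzero(drift > limit) + window - 1   # indices into `results`

        rng = np.random.default_rng(42)
        pre = rng.normal(2.0, 0.6, 400)            # gentamicin-like results, arbitrary units
        post = rng.normal(2.0 * 1.18, 0.6, 200)    # an 18% positive shift
        flags = moving_average_flags(np.concatenate([pre, post]))
        print("first flagged result index:", flags[0] if flags.size else None)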

  1. Analytic and numeric Green's functions for a two-dimensional electron gas in an orthogonal magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cresti, Alessandro; Grosso, Giuseppe; Parravicini, Giuseppe Pastori

    2006-05-15

    We have derived closed analytic expressions for the Green's function of an electron in a two-dimensional electron gas threaded by a uniform perpendicular magnetic field, also in the presence of a uniform electric field and of a parabolic spatial confinement. A workable and powerful numerical procedure for the calculation of the Green's functions of a large, infinitely extended quantum wire is considered, exploiting a lattice model for the wire, the tight-binding representation for the corresponding matrix Green's function, and the Peierls phase factor in the Hamiltonian hopping matrix element to account for the magnetic field. The numerical evaluation of the Green's function has been performed by means of the decimation-renormalization method and compares quite satisfactorily with the analytic results worked out in this paper. As an example of the versatility of the numerical and analytic tools presented here, the peculiar semilocal character of the magnetic Green's function is studied in detail because of its basic importance in determining magneto-transport properties in mesoscopic systems.
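
    A minimal sketch (not the authors' code) of the tight-binding ingredient described above: square-lattice hopping matrix elements acquire a Peierls phase in the Landau gauge, with the flux per plaquette expressed in units of the flux quantum.

        import numpy as np

        def hofstadter_hamiltonian(nx, ny, phi, t=1.0):
            """Square-lattice tight-binding Hamiltonian with a perpendicular magnetic
            field included through Peierls phases (Landau gauge A = (0, B x, 0)).
            phi is the magnetic flux per plaquette in units of h/e."""
            n = nx * ny
            H = np.zeros((n, n), dtype=complex)
            idx = lambda x, y: x * ny + y
            for x in range(nx):
                for y in range(ny):
                    i = idx(x, y)
                    if x + 1 < nx:                       # hopping along x: no phase
                        j = idx(x + 1, y)
                        H[i, j] = H[j, i] = -t
                    if y + 1 < ny:                       # hopping along y: Peierls phase
                        j = idx(x, y + 1)
                        H[i, j] = -t * np.exp(2j * np.pi * phi * x)
                        H[j, i] = np.conj(H[i, j])
            return H

        H = hofstadter_hamiltonian(12, 12, phi=1.0 / 6.0)
        print(np.sort(np.linalg.eigvalsh(H))[:5])       # lowest Landau-like levels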

  2. Using Fuzzy Analytic Hierarchy Process multicriteria and Geographical information system for coastal vulnerability analysis in Morocco: The case of Mohammedia

    NASA Astrophysics Data System (ADS)

    Tahri, Meryem; Maanan, Mohamed; Hakdaoui, Mustapha

    2016-04-01

    This paper presents a method to assess vulnerability to coastal risks such as coastal erosion or marine submersion by applying the Fuzzy Analytic Hierarchy Process (FAHP) and spatial analysis techniques with a Geographic Information System (GIS). The coast of Mohammedia, located in Morocco, was chosen as the study site to implement and validate the proposed framework by applying a GIS-FAHP-based methodology. The coastal risk vulnerability mapping uses multi-parametric causative factors such as sea level rise, significant wave height, tidal range, coastal erosion, elevation, geomorphology and distance to urban areas. The Fuzzy Analytic Hierarchy Process methodology enables the calculation of the corresponding criteria weights. The results show that the coastline of Mohammedia is characterized by moderate, high and very high levels of vulnerability to coastal risk. The high-vulnerability areas are situated in the east at the Monika and Sablette beaches. This approach relies on the efficiency of the Geographic Information System tool combined with the Fuzzy Analytic Hierarchy Process to help decision makers find optimal strategies to minimize coastal risks.

  3. Using design of experiments to optimize derivatization with methyl chloroformate for quantitative analysis of the aqueous phase from hydrothermal liquefaction of biomass.

    PubMed

    Madsen, René Bjerregaard; Jensen, Mads Mørk; Mørup, Anders Juul; Houlberg, Kasper; Christensen, Per Sigaard; Klemmer, Maika; Becker, Jacob; Iversen, Bo Brummerstedt; Glasius, Marianne

    2016-03-01

    Hydrothermal liquefaction is a promising technique for the production of bio-oil. The process produces an oil phase, a gas phase, a solid residue, and an aqueous phase. Gas chromatography coupled with mass spectrometry is used to analyze the complex aqueous phase. Small organic acids and nitrogen-containing compounds are of particular interest. The efficient derivatization reagent methyl chloroformate was used to make analysis of the complex aqueous phase from hydrothermal liquefaction of dried distillers grains with solubles possible. A circumscribed central composite design was used to optimize the responses of both derivatized and nonderivatized analytes, which included small organic acids, pyrazines, phenol, and cyclic ketones. Response surface methodology was used to visualize significant factors and identify optimized derivatization conditions (volumes of methyl chloroformate, NaOH solution, methanol, and pyridine). Twenty-nine analytes comprising small organic acids, pyrazines, phenol, and cyclic ketones were quantified. An additional three analytes were pseudoquantified with use of standards with similar mass spectra. Calibration curves with high correlation coefficients were obtained, in most cases R² > 0.991. Method validation was evaluated with repeatability, and spike recoveries of all 29 analytes were obtained. The 32 analytes were quantified in samples from the commissioning of a continuous flow reactor and in samples from recirculation experiments involving the aqueous phase. The results indicated when the flow reactor reached steady-state conditions and revealed the effects of recirculation. The validated method will be especially useful for investigations of the effect of small organic acids on the hydrothermal liquefaction process.

  4. Application of multiattribute decision-making methods for the determination of relative significance factor of impact categories.

    PubMed

    Noh, Jaesung; Lee, Kun Mo

    2003-05-01

    A relative significance factor (f(i)) of an impact category is the external weight of the impact category. The objective of this study is to propose a systematic and easy-to-use method for the determination of f(i). Multiattribute decision-making (MADM) methods including the analytical hierarchy process (AHP), the rank-order centroid method, and the fuzzy method were evaluated for this purpose. The results and practical aspects of using the three methods are compared. Each method shows the same trend, with minor differences in the value of f(i). Thus, all three methods can be applied to the determination of f(i). The rank-order centroid method reduces the number of pairwise comparisons by placing the alternatives in order, although it is inherently weaker than the fuzzy method in expressing the degree of vagueness associated with assigning weights to criteria and alternatives. The rank-order centroid method is considered a practical method for the determination of f(i) because it is easier and simpler to use compared to the AHP and the fuzzy method.
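
    For reference, the rank-order centroid weights mentioned above follow the standard formula w_i = (1/n) * sum_{k=i..n} 1/k; a short sketch:

        def rank_order_centroid(n):
            """Rank-order centroid weights for n criteria ranked 1 (most important) to n."""
            return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

        print([round(w, 3) for w in rank_order_centroid(4)])   # [0.521, 0.271, 0.146, 0.062]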

  5. Factor Analytic Approach to Transitive Text Mining using Medline Descriptors

    NASA Astrophysics Data System (ADS)

    Stegmann, J.; Grohmann, G.

    Matrix decomposition methods were applied to examples of noninteractive literature sets sharing implicit relations. Document-by-term matrices were created from downloaded PubMed literature sets, the terms being the Medical Subject Headings (MeSH descriptors) assigned to the documents. The loadings of the factors derived from singular value or eigenvalue matrix decomposition were sorted according to absolute values and subsequently inspected for positions of terms relevant to the discovery of hidden connections. It was found that only a small number of factors had to be screened to find key terms in close neighbourhood, being separated by a small number of terms only.
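
    A minimal sketch of the factor-analytic step described above: build a document-by-term matrix, take a truncated singular value decomposition, and inspect the largest-magnitude term loadings of each factor. The tiny corpus below is hypothetical, standing in for MeSH descriptor assignments.

        import numpy as np

        # Toy document-by-term matrix: rows are documents, columns are descriptors.
        terms = ["Raynaud", "blood viscosity", "platelet aggregation", "fish oils", "EPA"]
        X = np.array([
            [1, 1, 1, 0, 0],
            [1, 1, 0, 0, 0],
            [0, 1, 1, 1, 1],
            [0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1],
        ], dtype=float)

        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        for f in range(2):                                  # inspect the first two factors
            loadings = Vt[f]
            order = np.argsort(-np.abs(loadings))
            print(f"factor {f + 1}:",
                  [(terms[i], round(loadings[i], 2)) for i in order[:3]])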

  6. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  7. 40 CFR 141.25 - Analytical methods for radioactivity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods for radioactivity... § 141.25 Analytical methods for radioactivity. (a) Analysis for the following contaminants shall be conducted to determine compliance with § 141.66 (radioactivity) in accordance with the methods in the...

  8. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  9. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  10. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  11. 7 CFR 93.13 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...

  12. Generation of gas-phase ions from charged clusters: an important ionization step causing suppression of matrix and analyte ions in matrix-assisted laser desorption/ionization mass spectrometry.

    PubMed

    Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W

    2016-12-30

    Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot be satisfactorily explained by charge transfer reactions. Further investigation of the causes of these effects can improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with the matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu)4]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts, respectively, in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating them in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened, resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions and is an important ionization step in MALDI-MS. Copyright © 2016 John Wiley & Sons, Ltd.

  13. Marker-based reconstruction of the kinematics of a chain of segments: a new method that incorporates joint kinematic constraints.

    PubMed

    Klous, Miriam; Klous, Sander

    2010-07-01

    The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from noisy measured motion of skin markers. Existing kinematic models for reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) with regard to the effects of continuous noise simulating skin movement artifacts and of systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that the accuracy of the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence in this method is substantially higher than in the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0 +/- 0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor of 5 lower than the number of iterations for the Lu-method. However, the Lu-method performs slightly better than the chain-method: the RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method, compared with 59% for the chain-method.
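
    A hedged sketch of the single-segment least-squares step that methods of the first category build on, in the spirit of (but not identical to) the Veldpaus algorithm: estimate the rotation and translation mapping model marker positions onto measured ones with an SVD-based orthogonal Procrustes fit.

        import numpy as np

        def rigid_fit(model, measured):
            """Least-squares rotation R and translation t such that R @ model_i + t ~= measured_i.
            model, measured: (n_markers, 3) arrays of corresponding marker positions."""
            cm, cd = model.mean(axis=0), measured.mean(axis=0)
            H = (model - cm).T @ (measured - cd)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
            R = Vt.T @ D @ U.T
            t = cd - R @ cm
            return R, t

        rng = np.random.default_rng(3)
        model = rng.random((6, 3))
        R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        if np.linalg.det(R_true) < 0:
            R_true[:, 0] *= -1                                  # ensure a proper rotation
        measured = model @ R_true.T + np.array([0.1, -0.2, 0.05]) \
                   + 0.001 * rng.standard_normal((6, 3))        # noisy marker measurements
        R, t = rigid_fit(model, measured)
        print(np.allclose(R, R_true, atol=0.01))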

  14. Stress Intensity Factors of Semi-Circular Bend Specimens with Straight-Through and Chevron Notches

    NASA Astrophysics Data System (ADS)

    Ayatollahi, M. R.; Mahdavi, E.; Alborzi, M. J.; Obara, Y.

    2016-04-01

    The semi-circular bend (SCB) specimen is a useful test specimen for determining the fracture toughness of rock and geo-materials. Generally, in rock test specimens, initial cracks are produced in two shapes: straight-edge cracks and chevron notches. In this study, the minimum dimensionless stress intensity factors of semi-circular bend specimens with straight-through and chevron notches are calculated. First, using finite element analysis, a suitable relation for the dimensionless stress intensity factor of the SCB specimen with a straight-through crack is presented based on the normalized crack length and the half-distance between supports. To evaluate the validity and accuracy of this relation, the results are compared with numerical and experimental results reported in the literature. Subsequently, by performing experiments and finite element analysis of the SCB specimen with a chevron notch, the minimum dimensionless stress intensity factor of this specimen is obtained. Using the new equation for the dimensionless stress intensity factor of the SCB specimen with a straight-through crack and an analytical method, i.e., Bluhm's slice synthesis method, the minimum (critical) dimensionless stress intensity factor of chevron-notched semi-circular bend specimens is calculated. Good agreement is observed between the results of the two methods.

  15. [Determination of five naphthaquinones in Arnebia euchroma by quantitative analysis multi-components with single-marker].

    PubMed

    Zhao, Wen-Wen; Wu, Zhi-Min; Wu, Xia; Zhao, Hai-Yu; Chen, Xiao-Qing

    2016-10-01

    This study determines five naphthaquinones (acetylshikonin, β-acetoxyisovalerylalkannin, isobutylshikonin, β,β'-dimethylacrylalkannin, α-methyl-n-butylshikonin) by quantitative analysis of multi-components with a single marker (QAMS). β,β'-Dimethylacrylalkannin was selected as the internal reference substance, and the relative correction factors (RCFs) of acetylshikonin, β-acetoxyisovalerylalkannin, isobutylshikonin and α-methyl-n-butylshikonin were calculated. The ruggedness of the relative correction factors was then tested on different instruments and columns. Meanwhile, 16 batches of Arnebia euchroma were analyzed by the external standard method (ESM) and QAMS, respectively. The peaks were identified by LC-MS. The ruggedness of the relative correction factors was good, and the analytical results calculated by ESM and QAMS showed no difference. The established quantitative method is feasible and suitable for the quality evaluation of A. euchroma. Copyright© by the Chinese Pharmaceutical Association.
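
    A hedged sketch of the QAMS arithmetic with illustrative numbers (not the paper's data): a relative correction factor links each analyte's response to that of the internal reference, so that routine samples need an external calibration only for the reference compound.

        def relative_correction_factor(a_ref, c_ref, a_k, c_k):
            """RCF of analyte k relative to the internal reference, from a mixed standard."""
            return (a_ref / c_ref) / (a_k / c_k)

        def qams_concentration(a_k, rcf_k, a_ref, c_ref):
            """Concentration of analyte k in a sample, using only the reference's calibration."""
            return rcf_k * c_ref * a_k / a_ref

        # Mixed standard (hypothetical peak areas and concentrations, e.g. mAU*s and ug/ml):
        rcf = relative_correction_factor(a_ref=1500.0, c_ref=20.0, a_k=900.0, c_k=15.0)
        # Sample run: reference concentration known from its external standard curve.
        print(round(qams_concentration(a_k=600.0, rcf_k=rcf, a_ref=1400.0, c_ref=18.5), 2), "ug/ml")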

  16. Quantitative methods for analysing cumulative effects on fish migration success: a review.

    PubMed

    Johnson, J E; Patterson, D A; Martins, E G; Cooke, S J; Hinch, S G

    2012-07-01

    It is often recognized, but seldom addressed, that a quantitative assessment of the cumulative effects, both additive and non-additive, of multiple stressors on fish survival would provide a more realistic representation of the factors that influence fish migration. This review presents a compilation of analytical methods applied to a well-studied fish migration, a more general review of quantitative multivariable methods, and a synthesis on how to apply new analytical techniques in fish migration studies. A compilation of adult migration papers from Fraser River sockeye salmon Oncorhynchus nerka revealed a limited number of multivariable methods being applied and the sub-optimal reliance on univariable methods for multivariable problems. The literature review of fisheries science, general biology and medicine identified a large number of alternative methods for dealing with cumulative effects, with a limited number of techniques being used in fish migration studies. An evaluation of the different methods revealed that certain classes of multivariable analyses will probably prove useful in future assessments of cumulative effects on fish migration. This overview and evaluation of quantitative methods gathered from the disparate fields should serve as a primer for anyone seeking to quantify cumulative effects on fish migration survival. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  17. An analytical model for solute transport through a GCL-based two-layered liner considering biodegradation.

    PubMed

    Guan, C; Xie, H J; Wang, Y Z; Chen, Y M; Jiang, Y S; Tang, X W

    2014-01-01

    An analytical model for solute advection and dispersion in a two-layered liner consisting of a geosynthetic clay liner (GCL) and a soil liner (SL), considering the effect of biodegradation, was proposed. The analytical solution was derived by Laplace transformation and was validated over a range of parameters using the finite-layer-method-based software Pollute v7.0. Results show that if the half-life of the solute in the GCL is larger than 1 year, the degradation in the GCL can be neglected for solute transport in the GCL/SL. When the half-life in the GCL is less than 1 year, neglecting the effect of degradation in the GCL on solute migration will result in a large difference in the relative base concentration of the GCL/SL (e.g., 32% for the case with a half-life of 0.01 year). The 100-year solute base concentration can be reduced by a factor of 2.2 when the hydraulic conductivity of the SL is reduced by an order of magnitude, and by a factor of 155 when the half-life of the contaminant in the SL is reduced by an order of magnitude. The effect of degradation is therefore more important than the hydraulic conductivity in meeting the groundwater protection level. The analytical solution can be used for experimental data fitting, verification of complicated numerical models and preliminary design of landfill liner systems. © 2013.

  18. Coping with matrix effects in headspace solid phase microextraction gas chromatography using multivariate calibration strategies.

    PubMed

    Ferreira, Vicente; Herrero, Paula; Zapata, Julián; Escudero, Ana

    2015-08-14

    SPME is extremely sensitive to experimental parameters affecting liquid-gas and gas-solid distribution coefficients. Our aims were to measure the weights of these factors and to design a multivariate strategy, based on the addition of a pool of internal standards, to minimize matrix effects. Synthetic but real-like wines containing selected analytes and variable amounts of ethanol, non-volatile constituents and major volatile compounds were prepared following a factorial design. The ANOVA study revealed that, even with a strong matrix dilution, matrix effects are important and additive with non-significant interaction effects, and that the presence of major volatile constituents is the dominant factor. A single internal standard provided a robust calibration for 15 out of 47 analytes. Then, two different multivariate calibration strategies based on partial least squares regression were run in order to build calibration functions based on 13 different internal standards able to cope with matrix effects. The first is based on the calculation of Multivariate Internal Standards (MIS), linear combinations of the normalized signals of the 13 internal standards, which provide the expected area of a given unit of analyte present in each sample. The second strategy is a direct calibration relating concentration to the 13 relative areas measured in each sample for each analyte. Overall, 47 different compounds can be reliably quantified in a single fully automated method with overall uncertainties better than 15%. Copyright © 2015 Elsevier B.V. All rights reserved.
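
    A hedged sketch of the second, direct calibration strategy: regress analyte concentration on the vector of relative areas against the internal standards using partial least squares. The simulated data and the availability of scikit-learn are assumptions; shapes and parameters are illustrative.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(7)
        n_samples, n_is = 40, 13
        conc = rng.uniform(0.1, 10.0, n_samples)            # analyte concentration (arbitrary units)
        matrix_factor = rng.uniform(0.6, 1.4, n_samples)    # per-sample matrix effect on recovery
        alphas = rng.uniform(0.0, 1.0, n_is)                # each IS is affected to a different degree
        sens = rng.uniform(0.5, 2.0, n_is)                  # analyte/IS response ratios

        # Relative areas (analyte area / IS area) for each sample and internal standard.
        rel_areas = conc[:, None] * matrix_factor[:, None] ** alphas[None, :] * sens[None, :]
        rel_areas *= 1 + 0.03 * rng.standard_normal(rel_areas.shape)

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, rel_areas, conc, cv=5).ravel()
        rmse = np.sqrt(np.mean((pred - conc) ** 2))
        print(f"cross-validated RMSE: {rmse:.2f}")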

  19. Detection methods and performance criteria for genetically modified organisms.

    PubMed

    Bertheau, Yves; Diolez, Annick; Kobilinsky, André; Magin, Kimberly

    2002-01-01

    Detection methods for genetically modified organisms (GMOs) are necessary for many applications, from seed purity assessment to compliance of food labeling in several countries. Numerous analytical methods are currently used or under development to support these needs. The currently used methods are bioassays and protein- and DNA-based detection protocols. To avoid discrepancy of results between such largely different methods and, for instance, the potential resulting legal actions, compatibility of the methods is urgently needed. Performance criteria of methods allow evaluation against a common standard. The more-common performance criteria for detection methods are precision, accuracy, sensitivity, and specificity, which together specifically address other terms used to describe the performance of a method, such as applicability, selectivity, calibration, trueness, precision, recovery, operating range, limit of quantitation, limit of detection, and ruggedness. Performance criteria should provide objective tools to accept or reject specific methods, to validate them, to ensure compatibility between validated methods, and be used on a routine basis to reject data outside an acceptable range of variability. When selecting a method of detection, it is also important to consider its applicability, its field of applications, and its limitations, by including factors such as its ability to detect the target analyte in a given matrix, the duration of the analyses, its cost effectiveness, and the necessary sample sizes for testing. Thus, the current GMO detection methods should be evaluated against a common set of performance criteria.

  20. Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow

    NASA Technical Reports Server (NTRS)

    Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.

    1981-01-01

    Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.

  1. Effects of Vibrations on Metal Forming Process: Analytical Approach and Finite Element Simulations

    NASA Astrophysics Data System (ADS)

    Armaghan, Khan; Christophe, Giraud-Audine; Gabriel, Abba; Régis, Bigot

    2011-01-01

    Vibration-assisted forming is one of the most recent and beneficial techniques used to improve forming processes. The effects of vibration on metal forming processes can be attributed to two causes. First, the volume effect links the lowering of the yield stress to the influence of vibration on dislocation movement. Second, the surface effect explains the lowering of the effective coefficient of friction by periodic reduction of the contact area. This work is related to vibration-assisted forming in the viscoplastic domain. The impact of a change in vibration waveform has been analyzed. For this purpose, two analytical models have been developed for two different vibration waveforms (sinusoidal and triangular). These models were developed on the basis of the slice method, which is used to determine the required forming force for the process. The final relationships show that applying a triangular waveform in the forming process is more beneficial than sinusoidal vibration in terms of reduced forming force. Finite element method (FEM) based simulations were performed using Forge2008® and confirmed the results of the analytical models. The ratio of vibration speed to upper die speed is a critical factor in the reduction of the forming force.

  2. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image-space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suited to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  3. Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home

    EPA Pesticide Factsheets

    The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.

  4. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  5. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  6. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  7. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  8. 75 FR 49930 - Stakeholder Meeting Regarding Re-Evaluation of Currently Approved Total Coliform Analytical Methods

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-16

    ... Currently Approved Total Coliform Analytical Methods AGENCY: Environmental Protection Agency (EPA). ACTION... of currently approved Total Coliform Rule (TCR) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as...

  9. 40 CFR 141.89 - Analytical methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  10. On geometric factors for neutral particle analyzers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stagner, L.; Heidbrink, W. W.

    2014-11-15

    Neutral particle analyzers (NPAs) detect neutralized energetic particles that escape from plasmas. Geometric factors relate the counting rate of the detectors to the intensity of the particle source. Accurate geometric factors enable quick simulation of geometric effects without the need to resort to slower Monte Carlo methods. Previously derived expressions [G. R. Thomas and D. M. Willis, “Analytical derivation of the geometric factor of a particle detector having circular or rectangular geometry,” J. Phys. E: Sci. Instrum. 5(3), 260 (1972); J. D. Sullivan, “Geometric factor and directional response of single and multi-element particle telescopes,” Nucl. Instrum. Methods 95(1), 5–11 (1971)] for the geometric factor implicitly assume that the particle source is very far away from the detector (far-field); this excludes applications close to the detector (near-field). The far-field assumption does not hold in most fusion applications of NPA detectors. We derive, from probability theory, a generalized framework for deriving geometric factors that are valid for both near- and far-field applications as well as for non-isotropic sources and nonlinear particle trajectories.
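
    For context, the quantity being generalized can also be estimated by the slower Monte Carlo route mentioned above. A hedged sketch for a coaxial two-circular-aperture telescope under an isotropic far-field flux: sample points on the front aperture and cosine-weighted directions, count trajectories that also pass the rear aperture, and scale by pi times the front-aperture area.

        import numpy as np

        def geometric_factor_mc(r1, r2, sep, n=200_000, rng=None):
            """Monte Carlo geometric factor (area x steradian) of a two-element telescope:
            coaxial circular apertures of radii r1 (front) and r2 (rear), separated by sep.
            Assumes an isotropic far-field flux incident on the front aperture."""
            rng = np.random.default_rng(rng)
            # Uniform points on the front aperture.
            rho = r1 * np.sqrt(rng.random(n))
            ang = 2 * np.pi * rng.random(n)
            x0, y0 = rho * np.cos(ang), rho * np.sin(ang)
            # Cosine-weighted directions over the forward hemisphere.
            mu = np.sqrt(rng.random(n))                  # cos(theta)
            phi = 2 * np.pi * rng.random(n)
            tan_t = np.sqrt(1 - mu**2) / mu
            x1 = x0 + sep * tan_t * np.cos(phi)
            y1 = y0 + sep * tan_t * np.sin(phi)
            accepted = (x1**2 + y1**2) <= r2**2
            return np.pi * np.pi * r1**2 * accepted.mean()

        # Expect roughly pi^2 * r1^2 * r2^2 / sep^2 when the apertures are small relative to sep.
        print(round(geometric_factor_mc(1.0, 1.0, 10.0, rng=0), 4))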

  11. Foundations of measurement and instrumentation

    NASA Technical Reports Server (NTRS)

    Warshawsky, Isidore

    1990-01-01

    The user of instrumentation is provided with an understanding of the factors that influence instrument performance, selection, and application, and of the methods of interpreting and presenting the results of measurements. Such understanding is prerequisite to the successful attainment of the best compromise among reliability, accuracy, speed, cost, and importance of the measurement operation in achieving the ultimate goal of a project. Some subjects covered are dimensions; units; sources of measurement error; methods of describing and estimating accuracy; deduction and presentation of results through empirical equations, including the method of least squares; experimental and analytical methods of determining the static and dynamic behavior of instrumentation systems, including the use of analogs.

  12. An investigation of the 'Overlap' between the Statistical-Discrete-Gust and the Power-Spectral-Density analysis methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    This paper presents the results of a NASA investigation of a claimed 'Overlap' between two gust response analysis methods: the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented in this paper for several different airplanes at several different flight conditions indicate that such an 'Overlap' does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  13. Scattering of Airy elastic sheets by a cylindrical cavity in a solid.

    PubMed

    Mitri, F G

    2017-11-01

    The prediction of the elastic scattering by voids (and cracks) in materials is an important process in structural health monitoring, phononic crystals, metamaterials and non-destructive evaluation/imaging to name a few examples. Earlier analytical theories and numerical computations considered the elastic scattering by voids in plane waves of infinite extent. However, current research suggesting the use of (limited-diffracting, accelerating and self-healing) Airy acoustical-sheet beams for non-destructive evaluation or imaging applications in elastic solids requires the development of an improved analytical formalism to predict the scattering efficiency used as a priori information in quantitative material characterization. Based on the definition of the time-averaged scattered power flow density, an analytical expression for the scattering efficiency of a cylindrical empty cavity (i.e., void) encased in an elastic medium is derived for compressional and normally-polarized shear-wave Airy beams. The multipole expansion method using cylindrical wave functions is utilized. Numerical computations for the scattering energy efficiency factors for compressional and shear waves illustrate the analysis with particular emphasis on the Airy beam parameters and the non-dimensional frequency, for various elastic materials surrounding the cavity. The ratio of the compressional to the shear wave speed stimulates the generation of elastic resonances, which are manifested as a series of peaks in the scattering efficiency plots. The present analysis provides an improved method for the computations of the scattering energy efficiency factors using compressional and shear-wave Airy beams in elastic materials as opposed to plane waves of infinite extent. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics such as accuracy and precision are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection for the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
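
    The sketch below illustrates the kind of total-error ("fit for purpose") acceptance decision described above: accept the method only if a high-confidence lower bound on the proportion of future results falling within ±λ of the true value exceeds a target proportion. It is a simplified, pivotal-style construction under assumed normality, not the authors' exact generalized pivotal quantity procedure; the acceptance limit, target proportion, and validation data are placeholders.

```python
# Minimal sketch (assumptions, not the authors' exact GPQ procedure): accept an analytical
# method if a high-confidence lower bound on the proportion of future results falling
# within +/- lam of the true value exceeds pi0. Bias/precision estimates and limits are
# illustrative; the draws below mimic a simple generalized-pivotal construction.
import numpy as np
from scipy import stats

def coverage(delta, sigma, lam):
    """Proportion of results within +/- lam given bias delta and SD sigma."""
    return stats.norm.cdf((lam - delta) / sigma) - stats.norm.cdf((-lam - delta) / sigma)

def accept_method(dbar, s, n, lam=0.15, pi0=0.9, conf=0.95, draws=20000, seed=1):
    rng = np.random.default_rng(seed)
    # Pivotal-style draws for the unknown bias and SD, given mean bias dbar and SD s from n runs.
    sigma_draw = s * np.sqrt((n - 1) / rng.chisquare(n - 1, draws))
    delta_draw = dbar - rng.standard_normal(draws) * sigma_draw / np.sqrt(n)
    prop = coverage(delta_draw, sigma_draw, lam)
    lower_bound = np.quantile(prop, 1.0 - conf)   # conf-level lower bound on the proportion
    return lower_bound, lower_bound >= pi0

# Example: mean bias 0.02, SD 0.05 from n = 24 validation results, acceptance limits +/- 0.15.
print(accept_method(dbar=0.02, s=0.05, n=24))
```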

  15. Evaluation of the availability of bound analyte for passive sampling in the presence of mobile binding matrix.

    PubMed

    Xu, Jianqiao; Huang, Shuyao; Jiang, Ruifen; Cui, Shufen; Luan, Tiangang; Chen, Guosheng; Qiu, Junlang; Cao, Chenyang; Zhu, Fang; Ouyang, Gangfeng

    2016-04-21

    Elucidating the availability of the bound analytes for the mass transfer through the diffusion boundary layers (DBLs) adjacent to passive samplers is important for understanding the passive sampling kinetics in complex samples, in which the lability factor of the bound analyte in the DBL is an important parameter. In this study, the mathematical expression of lability factor was deduced by assuming a pseudo-steady state during passive sampling, and the equation indicated that the lability factor was equal to the ratio of normalized concentration gradients between the bound and free analytes. Through the introduction of the mathematical expression of lability factor, the modified effective average diffusion coefficient was proven to be more suitable for describing the passive sampling kinetics in the presence of mobile binding matrixes. Thereafter, the lability factors of the bound polycyclic aromatic hydrocarbons (PAHs) with sodium dodecylsulphate (SDS) micelles as the binding matrixes were figured out according to the improved theory. The lability factors were observed to decrease with larger binding ratios and smaller micelle sizes, and were successfully used to predict the mass transfer efficiencies of PAHs through DBLs. This study would promote the understanding of the availability of bound analytes for passive sampling based on the theoretical improvements and experimental assessments. Copyright © 2016 Elsevier B.V. All rights reserved.
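
    A hedged restatement of the stated relation in assumed notation (the symbols are not taken from the paper); the form of the effective average diffusion coefficient shown is one commonly used weighting and is given here only as an assumption.

```latex
% xi: lability factor of the bound analyte in the diffusion boundary layer (DBL);
% C_b, C_f: bound and free analyte concentrations; D_b, D_f: their diffusion coefficients.
% The first relation renders "the lability factor equals the ratio of the normalized
% concentration gradients of the bound and free analytes" (normalization by the respective
% concentrations is assumed); the second is one commonly used weighting for the effective
% average diffusion coefficient and is stated here as an assumption.
\[
  \xi \;=\; \left.\frac{1}{C_b}\frac{\partial C_b}{\partial x}\right/
            \frac{1}{C_f}\frac{\partial C_f}{\partial x},
  \qquad
  D_{\mathrm{eff}} \;=\; \frac{D_f\,C_f \;+\; \xi\,D_b\,C_b}{C_f + C_b}.
\]
```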

  16. Analytical challenges and regulatory requirements for nasal drug products in Europe and the U.S.

    PubMed

    Trows, Sabrina; Wuchner, Klaus; Spycher, Rene; Steckel, Hartwig

    2014-04-11

    Nasal drug delivery can be assessed by a variety of means, and regulatory agencies, e.g., the Food and Drug Administration (FDA) and the European Medicines Agency (EMA), have published a set of guidelines and regulations proposing in vitro test methods for the characterization of nasal drug products. This article gives a summary of the FDA and EMA requirements regarding the determination of droplet size distribution (DSD), plume geometry, spray pattern and shot weights of solution nasal sprays and discusses the analytical challenges that can occur when performing these measurements. In order to support findings from the literature, studies were performed using a standard nasal spray pump and aqueous model formulations. The aim was to identify possible method-, device- and formulation-dependent influencing factors. The literature review, as well as the results from the studies, shows that DSD, plume geometry and spray pattern are influenced by, e.g., the viscosity of the solution, the design of the device and the actuation parameters, particularly the stroke length, actuation velocity and actuation force. The dominant factor influencing shot weights, however, is the adjustment of the actuation parameters, especially stroke length and actuation velocity. Consequently, for routine measurements assuring, e.g., the quality of a solution nasal spray, or for in vitro bioequivalence studies, the critical parameters have to be identified and considered in method development in order to obtain reproducible and reliable results.

  17. Methodological evaluation and comparison of five urinary albumin measurements.

    PubMed

    Liu, Rui; Li, Gang; Cui, Xiao-Fan; Zhang, Dong-Ling; Yang, Qing-Hong; Mu, Xiao-Yan; Pan, Wen-Jie

    2011-01-01

    Microalbuminuria is an indicator of kidney damage and a risk factor for the progression of kidney disease, cardiovascular disease, and other conditions. Therefore, accurate and precise measurement of urinary albumin is critical. However, there are no reference measurement procedures and reference materials for urinary albumin. Nephelometry, turbidimetry, colloidal gold method, radioimmunoassay, and chemiluminescence immunoassay were performed for methodological evaluation, based on imprecision test, recovery rate, linearity, haemoglobin interference rate, and verified reference interval. Then we tested 40 urine samples from diabetic patients by each method, and compared the results between assays. The results indicate that nephelometry is the method with the best analytical performance among the five methods, with an average intraassay coefficient of variation (CV) of 2.6%, an average interassay CV of 1.7%, a mean recovery of 99.6%, a linearity of R=1.00 from 2 to 250 mg/l, and an interference rate of <10% at haemoglobin concentrations of <1.82 g/l. The correlation (r) between assays was from 0.701 to 0.982, and the Bland-Altman plots indicated that each assay provided significantly different results from the others. Nephelometry is the clinical urinary albumin method with the best analytical performance in our study. © 2011 Wiley-Liss, Inc.

  18. Identifying environmental variables explaining genotype-by-environment interaction for body weight of rainbow trout (Oncorhynchus mykiss): reaction norm and factor analytic models.

    PubMed

    Sae-Lim, Panya; Komen, Hans; Kause, Antti; Mulder, Han A

    2014-02-26

    Identifying the relevant environmental variables that cause GxE interaction is often difficult when they cannot be experimentally manipulated. Two statistical approaches can be applied to address this question. When data on candidate environmental variables are available, GxE interaction can be quantified as a function of specific environmental variables using a reaction norm model. Alternatively, a factor analytic model can be used to identify the latent common factor that explains GxE interaction. This factor can be correlated with known environmental variables to identify those that are relevant. Previously, we reported a significant GxE interaction for body weight at harvest in rainbow trout reared on three continents. Here we explore their possible causes. Reaction norm and factor analytic models were used to identify which environmental variables (age at harvest, water temperature, oxygen, and photoperiod) may have caused the observed GxE interaction. Data on body weight at harvest was recorded on 8976 offspring reared in various locations: (1) a breeding environment in the USA (nucleus), (2) a recirculating aquaculture system in the Freshwater Institute in West Virginia, USA, (3) a high-altitude farm in Peru, and (4) a low-water temperature farm in Germany. Akaike and Bayesian information criteria were used to compare models. The combination of days to harvest multiplied with daily temperature (Day*Degree) and photoperiod were identified by the reaction norm model as the environmental variables responsible for the GxE interaction. The latent common factor that was identified by the factor analytic model showed the highest correlation with Day*Degree. Day*Degree and photoperiod were the environmental variables that differed most between Peru and other environments. Akaike and Bayesian information criteria indicated that the factor analytical model was more parsimonious than the reaction norm model. Day*Degree and photoperiod were identified as environmental variables responsible for the strong GxE interaction for body weight at harvest in rainbow trout across four environments. Both the reaction norm and the factor analytic models can help identify the environmental variables responsible for GxE interaction. A factor analytic model is preferred over a reaction norm model when limited information on differences in environmental variables between farms is available.
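
    For orientation, the two model classes compared above are sketched below in standard textbook form (notation assumed, not taken from the paper).

```latex
% Reaction norm model: the additive genetic effect of animal i is a linear function of an
% environmental covariate x_j (e.g., Day*Degree or photoperiod), so GxE appears as genetic
% variance in the slopes b_i.
\[
  y_{ij} \;=\; \mu_j + a_i + b_i\, x_j + e_{ij},
  \qquad
  \operatorname{var}\!\begin{pmatrix} a_i \\ b_i \end{pmatrix}
    = \begin{pmatrix} \sigma^2_a & \sigma_{ab} \\ \sigma_{ab} & \sigma^2_b \end{pmatrix}.
\]
% Factor analytic model: the genetic covariance matrix across environments is modelled
% through one (or more) latent common factors plus environment-specific variances,
\[
  \mathbf{G} \;=\; \boldsymbol{\Lambda}\boldsymbol{\Lambda}^{\top} + \boldsymbol{\Psi},
\]
% and the estimated factor loadings can then be correlated with candidate environmental
% variables to identify those driving the GxE interaction.
```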

  19. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  20. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  1. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 25 2013-07-01 2013-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  2. 40 CFR 158.355 - Enforcement analytical method.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 24 2011-07-01 2011-07-01 false Enforcement analytical method. 158.355 Section 158.355 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An...

  3. Application of multiplex arrays for cytokine and chemokine profiling of bile.

    PubMed

    Kemp, Troy J; Castro, Felipe A; Gao, Yu-Tang; Hildesheim, Allan; Nogueira, Leticia; Wang, Bing-Sheng; Sun, Lu; Shelton, Gloriana; Pfeiffer, Ruth M; Hsing, Ann W; Pinto, Ligia A; Koshiol, Jill

    2015-05-01

    Gallbladder disease is highly related to inflammation, but the inflammatory processes are not well understood. Bile provides a direct substrate in assessing the local inflammatory response that develops in the gallbladder. To assess the reproducibility of measuring inflammatory markers in bile, we designed a methods study of 69 multiplexed immune-related markers measured in bile obtained from gallstone patients. To evaluate assay performance, a total of 18 bile samples were tested twice within the same plate for each analyte, and the 18 bile samples were tested on two different days for each analyte. We used the following performance parameters: detectability, coefficient of variation (CV), intraclass correlation coefficient (ICC), and percent agreement (concordance among replicate measures above and below detection limit). Furthermore, we examined the association of analyte levels with gallstone characteristics such as type, numbers, and size. All but 3 analytes (Stem Cell Factor, SCF; Thrombopoietin, TPO; sIL-1RI) were detectable in bile. 52 of 69 (75.4%) analytes had detectable levels for at least 50% of the subjects tested. The within-plate CVs were ⩽25% for 53 of 66 (80.3%) detectable analytes, and across-plate CVs were ⩽25% for 32 of 66 (48.5%) detectable analytes. Moreover, 64 of 66 (97.0%) analytes had ICC values of at least 0.8. Lastly, the percent agreement was high between replicates for all of the analytes (median; within plate, 97.2%; across plate, 97.2%). In exploratory analyses, we assessed analyte levels by gallstone characteristics and found that levels for several analytes decreased with increasing size of the largest gallstone per patient. Our data suggest that multiplex assays can be used to reliably measure cytokines and chemokines in bile. In addition, gallstone size was inversely related to the levels of select analytes, which may aid in identifying critical pathways and mechanisms associated with the pathogenesis of gallbladder diseases. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohimer, J.P.

    The use of laser-based analytical methods in nuclear-fuel processing plants is considered. The species and locations for accountability, process control, and effluent control measurements in the Coprocessing, Thorex, and reference Purex fuel processing operations are identified and the conventional analytical methods used for these measurements are summarized. The laser analytical methods based upon Raman, absorption, fluorescence, and nonlinear spectroscopy are reviewed and evaluated for their use in fuel processing plants. After a comparison of the capabilities of the laser-based and conventional analytical methods, the promising areas of application of the laser-based methods in fuel processing plants are identified.

  5. A Meta-Analytic Review of Work-Family Conflict and Its Antecedents

    ERIC Educational Resources Information Center

    Byron, Kristin

    2005-01-01

    This meta-analytic review combines the results of more than 60 studies to help determine the relative effects of work, nonwork, and demographic and individual factors on work interference with family (WIF) and family interference with work (FIW). As expected, work factors related more strongly to WIF, and some nonwork factors were more strongly…

  6. Taxometric and Factor Analytic Models of Anxiety Sensitivity among Youth: Exploring the Latent Structure of Anxiety Psychopathology Vulnerability

    ERIC Educational Resources Information Center

    Bernstein, Amit; Zvolensky, Michael J.; Stewart, Sherry; Comeau, Nancy

    2007-01-01

    This study represents an effort to better understand the latent structure of anxiety sensitivity (AS), a well-established affect-sensitivity individual difference factor, among youth by employing taxometric and factor analytic approaches in an integrative manner. Taxometric analyses indicated that AS, as indexed by the Child Anxiety Sensitivity…

  7. Orthogonal Higher Order Structure of the WISC-IV Spanish Using Hierarchical Exploratory Factor Analytic Procedures

    ERIC Educational Resources Information Center

    McGill, Ryan J.; Canivez, Gary L.

    2016-01-01

    As recommended by Carroll, the present study examined the factor structure of the Wechsler Intelligence Scale for Children-Fourth Edition Spanish (WISC-IV Spanish) normative sample using higher order exploratory factor analytic techniques not included in the WISC-IV Spanish Technical Manual. Results indicated that the WISC-IV Spanish subtests were…

  8. A Crowdsensing Based Analytical Framework for Perceptional Degradation of OTT Web Browsing.

    PubMed

    Li, Ke; Wang, Hai; Xu, Xiaolong; Du, Yu; Liu, Yuansheng; Ahmad, M Omair

    2018-05-15

    Service perception analysis is crucial for understanding both user experiences and network quality as well as for maintaining and optimizing mobile networks. Given the rapid development of mobile Internet and over-the-top (OTT) services, the conventional network-centric mode of network operation and maintenance is no longer effective. Therefore, developing an approach to evaluate and optimize users' service perceptions has become increasingly important. Meanwhile, the development of a new sensing paradigm, mobile crowdsensing (MCS), makes it possible to evaluate and analyze the user's OTT service perception from the end-user's point of view rather than from the network side. In this paper, the key factors that impact users' end-to-end OTT web browsing service perception are analyzed by monitoring crowdsourced user perceptions. The intrinsic relationships among the key factors and the interactions between key quality indicators (KQI) are evaluated from several perspectives. Moreover, an analytical framework of perceptional degradation and a detailed algorithm are proposed whose goal is to identify the major factors that impact the perceptional degradation of web browsing service as well as the significance of their contribution. Finally, a case study is presented to show the effectiveness of the proposed method using a dataset crowdsensed from a large number of smartphone users in a real mobile network. The proposed analytical framework forms a valuable solution for mobile network maintenance and optimization and can help improve web browsing service perception and network quality.

  9. Study of different HILIC, mixed-mode, and other aqueous normal-phase approaches for the liquid chromatography/mass spectrometry-based determination of challenging polar pesticides.

    PubMed

    Vass, Andrea; Robles-Molina, José; Pérez-Ortega, Patricia; Gilbert-López, Bienvenida; Dernovics, Mihaly; Molina-Díaz, Antonio; García-Reyes, Juan F

    2016-07-01

    The aim of the study was to evaluate the performance of different chromatographic approaches for the liquid chromatography/mass spectrometry (LC-MS(/MS)) determination of 24 highly polar pesticides. The studied compounds, which are in most cases unsuitable for conventional LC-MS(/MS) multiresidue methods, were tested with nine different chromatographic conditions, including two different hydrophilic interaction liquid chromatography (HILIC) columns, two zwitterionic-type mixed-mode columns, three normal-phase columns operated in HILIC-mode (bare silica and two silica-based chemically bonded columns (cyano and amino)), and two standard reversed-phase C18 columns. Different sets of chromatographic parameters in positive (for 17 analytes) and negative ionization modes (for nine analytes) were examined. In order to compare the different approaches, a semi-quantitative classification was proposed, calculated as the percentage of an empirical performance value, which consisted of three main features: (i) capacity factor (k) to characterize analyte separation from the void, (ii) relative response factor, and (iii) peak shape based on analytes' peak width. While no single method was able to provide appropriate detection of all the 24 studied species in a single run, the best suited approach for the compounds ionized in positive mode was based on a UHPLC HILIC column with 1.8 μm particle size, providing appropriate results for 22 out of the 24 species tested. In contrast, the detection of glyphosate and aminomethylphosphonic acid could only be achieved with a zwitterionic-type mixed-mode column, which proved to be suitable only for the pesticides detected in negative ion mode. Finally, the selected approach (UHPLC HILIC) was found to be useful for the determination of multiple pesticides in oranges using HILIC-ESI-MS/MS, with limits of quantitation in the low microgram-per-kilogram range in most cases. Graphical abstract: HILIC improves separation of multiclass polar pesticides.
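
    The sketch below illustrates the kind of semi-quantitative, three-feature column scoring described above (capacity factor, relative response factor, and peak shape). The equal weighting, normalisations, and numbers are assumptions for illustration, not the authors' exact scheme.

```python
# Minimal sketch of a three-feature column performance score of the kind described in the
# abstract. Equal weighting and the normalisations are assumptions, not the authors' scheme.
import numpy as np

def capacity_factor(t_r, t_0):
    """k = (t_R - t_0) / t_0 from retention time and void (dead) time."""
    return (t_r - t_0) / t_0

def performance_score(t_r, t_0, response, ref_response, peak_width, max_width=0.5):
    k = capacity_factor(t_r, t_0)
    sep_from_void = min(k / 2.0, 1.0)                    # saturates once k >= 2 (assumed cut-off)
    rel_response  = min(response / ref_response, 1.0)    # response relative to a reference condition
    peak_shape    = max(1.0 - peak_width / max_width, 0.0)  # narrower peaks score higher
    return 100.0 * np.mean([sep_from_void, rel_response, peak_shape])  # percent of maximum

# Illustrative values only: retention 3.1 min, void 0.6 min, 80% of the reference response,
# peak width 0.12 min.
print(round(performance_score(3.1, 0.6, 0.8, 1.0, 0.12), 1))
```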

  10. Determination of residual acetone and acetone related impurities in drug product intermediates prepared as Spray Dried Dispersions (SDD) using gas chromatography with headspace autosampling (GCHS).

    PubMed

    Quirk, Emma; Doggett, Adrian; Bretnall, Alison

    2014-08-05

    Spray Dried Dispersions (SDD) are uniform mixtures of a specific ratio of amorphous active pharmaceutical ingredient (API) and polymer prepared via a spray drying process. Volatile solvents are employed during spray drying to facilitate the formation of the SDD material. Following manufacture, analytical methodology is required to determine residual levels of the spray drying solvent and its associated impurities. Due to the high level of polymer in the SDD samples, direct liquid injection with Gas Chromatography (GC) is not a viable option for analysis. This work describes the development and validation of an analytical approach to determine residual levels of acetone and acetone related impurities, mesityl oxide (MO) and diacetone alcohol (DAA), in drug product intermediates prepared as SDDs using GC with headspace (HS) autosampling. The method development for these analytes presented a number of analytical challenges which had to be overcome before the levels of the volatiles of interest could be accurately quantified. GCHS could be used after two critical factors were implemented: (1) calculation and application of conversion factors to 'correct' for the reactions occurring between acetone, MO and DAA during generation of the headspace volume for analysis, and (2) the addition of an equivalent amount of polymer into all reference solutions used for quantitation to ensure comparability between the headspace volumes generated for both samples and external standards. This work describes the method development and optimisation of the standard preparation, the headspace autosampler operating parameters and the chromatographic conditions, together with a summary of the validation of the methodology. The approach has been demonstrated to be robust and suitable to accurately determine levels of acetone, MO and DAA in SDD materials over the linear concentration range 0.008-0.4 μL/mL, with minimum quantitation limits of 20 ppm for acetone and MO, and 80 ppm for DAA. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Extraction and preconcentration of residual solvents in pharmaceuticals using dynamic headspace-liquid phase microextraction and their determination by gas chromatography-flame ionization detection.

    PubMed

    Farajzadeh, Mir Ali; Dehghani, Hamideh; Yadeghari, Adeleh; Khoshmaram, Leila

    2017-02-01

    The present study describes a microextraction and determination method for analyzing residual solvents in pharmaceutical products using a dynamic headspace-liquid phase microextraction technique followed by gas chromatography-flame ionization detection. In this method, dimethyl sulfoxide (μL level) placed into a GC liner-shaped extraction vessel is used as a collection/extraction solvent. Then the liner is exposed to the headspace of a vial containing the sample solution. The effects of different parameters influencing the microextraction procedure, including collection/extraction solvent type and its volume, ionic strength, extraction time, extraction temperature and the concentration of the NaOH solution used in dissolving the studied pharmaceuticals, are investigated and optimized. Under the optimum extraction conditions, the method showed wide linear ranges between 0.5 and 5000 mg L⁻¹. The other analytical parameters were obtained in the following ranges: enrichment factors 240-327, extraction recoveries 72-98% and limits of detection 0.1-0.8 mg L⁻¹ in solution and 0.6-3.2 μg g⁻¹ in solid. Relative standard deviations for the extraction of 100 mg L⁻¹ of each analyte were obtained in the ranges of 4-7 and 5-8% for intra-day (n = 6) and inter-day (n = 4), respectively. Finally, the target analytes were determined in different samples such as erythromycin, azithromycin, cefalexin, amoxicillin and co-amoxiclav by the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
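
    The enrichment factor and extraction recovery values quoted above follow the usual microextraction definitions, restated here for reference (symbols assumed):

```latex
% C_ext and C_0: analyte concentrations in the collected extractant and in the original
% sample; V_ext and V_s: the corresponding volumes.
\[
  EF \;=\; \frac{C_{ext}}{C_0},
  \qquad
  ER\% \;=\; \frac{C_{ext}\,V_{ext}}{C_0\,V_{s}} \times 100
        \;=\; EF \times \frac{V_{ext}}{V_{s}} \times 100 .
\]
```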

  12. Ion/Neutral, Ion/Electron, Ion/Photon, and Ion/Ion Interactions in Tandem Mass Spectrometry: Do we need them all? Are they enough?

    PubMed Central

    McLuckey, Scott A.; Mentinova, Marija

    2011-01-01

    A range of strategies and tools has been developed to facilitate the determination of primary structures of analyte molecules of interest via tandem mass spectrometry (MS/MS). The two main factors that determine the primary structural information present in an MS/MS spectrum are the type of ion generated from the analyte molecule and the dissociation method. The ion-type subjected to dissociation is determined by the ionization method/conditions and ion transformation processes that might take place after initial gas-phase ion formation. Furthermore, the range of analyte-related ion types can be expanded via derivatization reactions prior to mass spectrometry. Dissociation methods include those that simply alter the population of internal states of the mass-selected ion (i.e., activation methods like collision-induced dissociation) as well as processes that rely on transformation of the ion-type prior to dissociation (e.g., electron capture dissociation). A variety of ionic interactions has been studied for the purpose of ion dissociation and ion transformation that include ion/neutral, ion/photon, ion/electron, and ion/ion interactions. A wide range of phenomena has been observed, many of which have been explored/developed as means for structural analysis. The techniques arising from these phenomena are discussed within the context of the elements of structure determination in tandem mass spectrometry, viz., ion-type definition and dissociation. Unique aspects of the various ion interactions are emphasized along with any barriers to widespread implementation. PMID:21472539

  13. Air-assisted liquid-liquid microextraction using floating organic droplet solidification for simultaneous extraction and spectrophotometric determination of some drugs in biological samples through chemometrics methods

    NASA Astrophysics Data System (ADS)

    Farahmand, Farnaz; Ghasemzadeh, Bahar; Naseri, Abdolhossein

    2018-01-01

    An air assisted liquid-liquid microextraction by applying the solidification of a floating organic droplet method (AALLME-SFOD) coupled with a multivariate calibration method, namely partial least squares (PLS), was introduced for the fast and easy determination of Atenolol (ATE), Propanolol (PRO) and Carvedilol (CAR) in biological samples via a spectrophotometric approach. The analytes would be extracted from neutral aqueous solution into 1-dodecanol as an organic solvent, using AALLME. In this approach a low-density solvent with a melting point close to room temperature was applied as the extraction solvent. The emulsion was immediately formed by repeatedly pulling in and pushing out the aqueous sample solution and extraction solvent mixture via a 10-mL glass syringe for ten times. After centrifugation, the extractant droplet could be simply collected from the aqueous samples by solidifying the emulsion at a lower than the melting point temperature. In the next step, analytes were back extracted simultaneously into the acidic aqueous solution. Derringer and Suich multi-response optimization were utilized for simultaneous optimizing the parameters of three analytes. This method incorporates the benefits of AALLME and dispersive liquid-liquid microextraction considering the solidification of floating organic droplets (DLLME-SFOD). Calibration graphs under optimized conditions were linear in the range of 0.30-6.00, 0.32-2.00 and 0.30-1.40 μg mL⁻¹ for ATE, CAR and PRO, respectively. Other analytical parameters were obtained as follows: enrichment factors (EFs) were found to be 11.24, 16.55 and 14.90, and limits of detection (LODs) were determined to be 0.09, 0.10 and 0.08 μg mL⁻¹ for ATE, CAR and PRO, respectively. The proposed method will require neither a highly toxic chlorinated solvent for extraction nor an organic dispersive solvent in the application process; hence, it is more environmentally friendly.

  14. Air-assisted liquid-liquid microextraction using floating organic droplet solidification for simultaneous extraction and spectrophotometric determination of some drugs in biological samples through chemometrics methods.

    PubMed

    Farahmand, Farnaz; Ghasemzadeh, Bahar; Naseri, Abdolhossein

    2018-01-05

    An air assisted liquid-liquid microextraction by applying the solidification of a floating organic droplet method (AALLME-SFOD) coupled with a multivariate calibration method, namely partial least squares (PLS), was introduced for the fast and easy determination of Atenolol (ATE), Propanolol (PRO) and Carvedilol (CAR) in biological samples via a spectrophotometric approach. The analytes would be extracted from neutral aqueous solution into 1-dodecanol as an organic solvent, using AALLME. In this approach a low-density solvent with a melting point close to room temperature was applied as the extraction solvent. The emulsion was immediately formed by repeatedly pulling in and pushing out the aqueous sample solution and extraction solvent mixture via a 10-mL glass syringe for ten times. After centrifugation, the extractant droplet could be simply collected from the aqueous samples by solidifying the emulsion at a lower than the melting point temperature. In the next step, analytes were back extracted simultaneously into the acidic aqueous solution. Derringer and Suich multi-response optimization were utilized for simultaneous optimizing the parameters of three analytes. This method incorporates the benefits of AALLME and dispersive liquid-liquid microextraction considering the solidification of floating organic droplets (DLLME-SFOD). Calibration graphs under optimized conditions were linear in the range of 0.30-6.00, 0.32-2.00 and 0.30-1.40 μg mL⁻¹ for ATE, CAR and PRO, respectively. Other analytical parameters were obtained as follows: enrichment factors (EFs) were found to be 11.24, 16.55 and 14.90, and limits of detection (LODs) were determined to be 0.09, 0.10 and 0.08 μg mL⁻¹ for ATE, CAR and PRO, respectively. The proposed method will require neither a highly toxic chlorinated solvent for extraction nor an organic dispersive solvent in the application process; hence, it is more environmentally friendly. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Use of fractional factorial design for optimization of digestion procedures followed by multi-element determination of essential and non-essential elements in nuts using ICP-OES technique.

    PubMed

    Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A

    2007-01-15

    Two digestion procedures have been tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES). These included wet digestions with HNO₃/H₂SO₄ and HNO₃/H₂SO₄/H₂O₂. The latter is recommended for better analyte recoveries (relative error < 11%). Two calibration procedures (aqueous standard and standard addition) were studied, and standard addition proved preferable for all analytes. Experimental designs for seven factors (HNO₃, H₂SO₄ and H₂O₂ volumes, digestion time, pre-digestion time, temperature of the hot plate and sample weight) were used for optimization of the sample digestion procedures. For this purpose a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The HNO₃ and H₂O₂ volumes and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions) considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, such as limits of detection (LOD < 0.74 μg g⁻¹), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error < 11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.
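
    A minimal sketch of an eight-run, seven-factor two-level screening design of the Plackett-Burman type used above, built here from a Hadamard matrix; the factor names follow the abstract, while the coded levels and the effect-estimation comment are illustrative assumptions rather than the authors' actual settings.

```python
# Minimal sketch: an 8-run, 7-factor two-level screening design built from a Hadamard
# matrix (the N = 8 Plackett-Burman design is of this type). Factor names follow the
# abstract; the coded +/- levels are placeholders, not the authors' actual settings.
import numpy as np
from scipy.linalg import hadamard

factors = ["HNO3 volume", "H2SO4 volume", "H2O2 volume", "digestion time",
           "pre-digestion time", "hot-plate temperature", "sample weight"]

H = hadamard(8)              # 8 x 8 matrix of +1/-1 with an all-ones first column
design = H[:, 1:]            # drop the constant column -> 8 runs x 7 factors

for run, levels in enumerate(design, start=1):
    coded = ", ".join(f"{name}: {'+' if lv > 0 else '-'}" for name, lv in zip(factors, levels))
    print(f"run {run}: {coded}")

# Main-effect estimates would then be contrasts of the measured recoveries, e.g.
# effect_j = mean(response[design[:, j] == 1]) - mean(response[design[:, j] == -1]).
```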

  16. Public Policy on the Status of Women: Agenda and Strategy for the 70s.

    ERIC Educational Resources Information Center

    Murphy, Irene L.

    The book, using analytical methods of political science, provides an initial overall study of the formation of national policy on the status of women. It also focuses on factors most likely to influence the future course of the women's rights movement. Concentration is on existing policy from the end of the Johnson presidency on into the women's…

  17. Development and validation of an UPLC method for determination of content uniformity in low-dose solid drugs products using the design space approach.

    PubMed

    Oliva, Alexis; Fariña, José B; Llabrés, Matías

    2013-10-15

    A simple and reproducible UPLC method was developed and validated for the quantitative analysis of finasteride in low-dose drug products. Method validation demonstrated the reliability and consistency of analytical results. Due to the regulatory requirements of pharmaceutical analysis in particular, evaluation of robustness is vital to predict how small variations in operating conditions affect the responses. Response surface methodology as an optimization technique was used to evaluate the robustness. For this, a central composite design was implemented around the nominal conditions. Statistical treatment of the responses (retention factor and drug concentrations expressed as percentage of label claim) showed that methanol content in the mobile phase and flow rate were the most influential factors. In the optimization process, the compromise decision support problem (cDSP) strategy was used. Construction of the robust domain from response surfaces provided tolerance windows for the factors affecting the effectiveness of the method. The specified limits for the USP uniformity of dosage units assay (98.5-101.5%) and the purely experimental variations based on the repeatability test for center points (nominal conditions repetitions) were used as criteria to establish the tolerance windows, which allowed definition of the design space (DS) of the analytical method. Thus, the acceptance criteria values (AV) proposed by the USP uniformity assay only depend on the sampling error. If the variation in the responses corresponded to approximately twice the repeatability standard deviation, individual values for the percentage label claim (%LC) response may lie outside the specified limits; this implies the data are not centered between the specified limits, and that this term plus the sampling error affects the AV value. To avoid this, the limits specified by the Uniformity of Dosage Form assay (i.e., 98.5-101.5%) must be taken into consideration to fix the tolerance windows for each factor. All these results were verified by Monte Carlo simulation. In conclusion, the level of variability for the different factors must be calculated for each case rather than set in an arbitrary way whenever the observed variation exceeds the repeatability for the center points; secondly, the %LC response must lie inside the specified limits, i.e., 98.5-101.5%. If not, the UPLC method must be re-developed. © 2013 Elsevier B.V. All rights reserved.
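
    A minimal sketch of the Monte Carlo verification step mentioned above: assumed tolerance windows for the two influential factors are propagated through a hypothetical fitted response surface for %label claim and checked against the 98.5-101.5% specification. All coefficients, windows, and noise levels are placeholders, not the validated model.

```python
# Minimal sketch of a Monte Carlo design-space check: propagate assumed tolerance windows
# for methanol content and flow rate through a hypothetical fitted quadratic response
# surface for %label claim and count how often the 98.5-101.5% specification is met.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Coded factor values drawn uniformly inside assumed tolerance windows (coded units).
methanol = rng.uniform(-0.5, 0.5, n)
flow     = rng.uniform(-0.5, 0.5, n)

def percent_label_claim(x1, x2):
    # Hypothetical fitted response surface: nominal value plus linear, interaction and
    # quadratic terms, plus repeatability noise (all coefficients are placeholders).
    return (100.0 + 0.6 * x1 - 0.4 * x2 + 0.2 * x1 * x2
            - 0.3 * x1**2 + rng.normal(0.0, 0.3, np.size(x1)))

lc = percent_label_claim(methanol, flow)
in_spec = np.mean((lc >= 98.5) & (lc <= 101.5))
print(f"fraction of simulated runs inside 98.5-101.5%: {in_spec:.4f}")
```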

  18. Rational Selection, Criticality Assessment, and Tiering of Quality Attributes and Test Methods for Analytical Similarity Evaluation of Biosimilars.

    PubMed

    Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette

    2018-05-10

    Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.

  19. Simultaneous Spectrophotometric Determination of Rifampicin, Isoniazid and Pyrazinamide in a Single Step

    PubMed Central

    Asadpour-Zeynali, Karim; Saeb, Elhameh

    2016-01-01

    Three antituberculosis medications, rifampicin, isoniazid and pyrazinamide, are investigated in this work. The ultraviolet (UV) spectra of these compounds overlap; thus, suitable chemometric methods are helpful for their simultaneous spectrophotometric determination. A generalized version of the net analyte signal standard addition method (GNASSAM) was used for the determination of the three antituberculosis medications as a model system. In the generalized net analyte signal standard addition method, only one standard solution is prepared for all analytes. This standard solution contains a mixture of all analytes of interest, and its addition to the sample causes increases in the net analyte signal of each analyte that are proportional to the concentrations of the analytes in the added standard solution. For determination of the concentration of each analyte in some synthetic mixtures, the UV spectra of the pure analytes and of each sample were recorded in the range of 210-550 nm. The standard addition procedure was performed for each sample, the UV spectrum was recorded after each addition, and finally the results were analyzed by the net analyte signal method. The obtained concentrations show acceptable performance of GNASSAM in these cases. PMID:28243267

  20. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE PAGES

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    2017-01-30

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.
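
    A generic sketch of the underlying analytic-continuation idea only (not the authors' type-II or type-III Padé implementations): bound-state energies computed at real stabilizing-potential strengths are fitted by a rational function of an assumed continuation variable, and the fit is then evaluated at a complex-continued argument. All data and the continuation variable are placeholders.

```python
# Generic sketch of the bound-state analytic-continuation idea (not the authors' method).
# Energies E sampled on the real axis of an assumed continuation variable x (for example
# x = sqrt(lam - lam0) in coupling-constant continuation schemes) are fitted by a rational
# function, which is then evaluated at a complex-continued argument. Numbers are placeholders.
import numpy as np

def fit_rational(x, y, m=3, n=3):
    """Least-squares fit y ~ P_m(x) / Q_n(x) with Q normalised so that Q(0) = 1."""
    A = np.hstack([np.vander(x, m + 1, increasing=True),
                   -y[:, None] * np.vander(x, n + 1, increasing=True)[:, 1:]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    p, q = coef[:m + 1], np.concatenate(([1.0], coef[m + 1:]))
    return p, q

def eval_rational(p, q, z):
    return np.polyval(p[::-1], z) / np.polyval(q[::-1], z)

# Placeholder "data": a smooth bound-state branch sampled at real x values.
x = np.linspace(0.4, 1.6, 13)
E = -0.5 * x**2 + 0.05 * x**3

p, q = fit_rational(x, E)
x_phys = 1j * 0.7                 # assumed complex-continued argument for the physical problem
E_res = eval_rational(p, q, x_phys)
# For a resonance, Im(E) < 0 and the width would be Gamma = -2 Im(E).
print("continued energy:", E_res)
```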

  1. Stabilizing potentials in bound state analytic continuation methods for electronic resonances in polyatomic molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Alec F.; Head-Gordon, Martin; McCurdy, C. William

    The computation of Siegert energies by analytic continuation of bound state energies has recently been applied to shape resonances in polyatomic molecules by several authors. Here, we critically evaluate a recently proposed analytic continuation method based on low order (type III) Padé approximants as well as an analytic continuation method based on high order (type II) Padé approximants. We compare three classes of stabilizing potentials: Coulomb potentials, Gaussian potentials, and attenuated Coulomb potentials. These methods are applied to a model potential where the correct answer is known exactly and to the ²Πg shape resonance of N₂⁻, which has been studied extensively by other methods. Both the choice of stabilizing potential and method of analytic continuation prove to be important to the accuracy of the results. We then conclude that an attenuated Coulomb potential is the most effective of the three for bound state analytic continuation methods. With the proper potential, such methods show promise for algorithmic determination of the positions and widths of molecular shape resonances.

  2. Method for reduction of selected ion intensities in confined ion beams

    DOEpatents

    Eiden, Gregory C.; Barinaga, Charles J.; Koppenaal, David W.

    1998-01-01

    A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer.

  3. Method for reduction of selected ion intensities in confined ion beams

    DOEpatents

    Eiden, G.C.; Barinaga, C.J.; Koppenaal, D.W.

    1998-06-16

    A method for producing an ion beam having an increased proportion of analyte ions compared to carrier gas ions is disclosed. Specifically, the method has the step of addition of a charge transfer gas to the carrier analyte combination that accepts charge from the carrier gas ions yet minimally accepts charge from the analyte ions thereby selectively neutralizing the carrier gas ions. Also disclosed is the method as employed in various analytical instruments including an inductively coupled plasma mass spectrometer. 7 figs.

  4. Application of statistical methods to reveal and remove the causes of welding of coil laps upon annealing of cold-rolled steel strips

    NASA Astrophysics Data System (ADS)

    Garber, E. A.; Diligenskii, E. V.; Antonov, P. V.; Shalaevskii, D. L.; Dyatlov, I. A.

    2017-09-01

    The factors in the production of cold-rolled steel strips that promote or hinder the appearance of a coil lap welding defect upon annealing in bell-type furnaces are analyzed using statistical methods. Previous work on this problem is reviewed to identify the questions to be studied and refined. The technological factors are ranked according to the significance of their influence on the probability of appearance of this defect, the ranking is supported by industrial data, and a regression equation is derived to calculate this probability. The production process is improved to minimize the rejection of strips caused by the welding of coil laps.

  5. Mapping debris flow susceptibility using analytical network process in Kodaikkanal Hills, Tamil Nadu (India)

    NASA Astrophysics Data System (ADS)

    Sujatha, Evangelin Ramani; Sridhar, Venkataramana

    2017-12-01

    Rapid debris flows, mixtures of unconsolidated sediments and water travelling at speeds > 10 m/s, are the most destructive water-related mass movements affecting hill and mountain regions. The predisposing factors setting the stage for the event are the availability of materials, type of materials, stream power, slope gradient, aspect and curvature, lithology, land use and land cover, lineament density, and drainage. Rainfall is the most common triggering factor for debris flow in the Palar subwatershed; seismicity is not considered, as the area is a stable continental region in a moderate seismic zone, and there are no records of major seismic activity in the past. In this study, one of the less explored heuristic methods, the analytical network process (ANP), is used to map the spatial propensity of debris flow. This method is based on a top-down decision model and is a multi-criteria decision-making tool that translates subjective assessments of relative importance into weights or scores; it is implemented in the Palar subwatershed, which is part of the Western Ghats in southern India. The results suggest that the factors influencing debris flow susceptibility in this region are the availability of material on the slope, peak flow, gradient of the slope, land use and land cover, and proximity to streams. Among these, peak discharge is identified as the chief factor causing debris flow. The use of micro-scale watersheds demonstrated in this study to develop the susceptibility map can be very effective for local-level planning and land management.
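
    A minimal sketch of the eigenvector-based priority weighting used in AHP/ANP-type methods such as the one described above; the pairwise comparison values are hypothetical, and the full ANP supermatrix step (feedback between clusters) is not shown.

```python
# Minimal sketch of the priority-weighting step in AHP/ANP-type methods: the principal
# eigenvector of a reciprocal pairwise-comparison matrix gives the factor weights.
# The comparison values are hypothetical placeholders.
import numpy as np

factors = ["material availability", "peak flow", "slope gradient",
           "land use / land cover", "proximity to streams"]

# A[i, j] = judged importance of factor i relative to factor j (Saaty 1-9 scale); A[j, i] = 1/A[i, j].
A = np.array([
    [1,   1/2, 2,   3,   3],
    [2,   1,   3,   4,   5],
    [1/2, 1/3, 1,   2,   2],
    [1/3, 1/4, 1/2, 1,   1],
    [1/3, 1/5, 1/2, 1,   1],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()                                    # normalised priority weights

# Consistency check: consistency index vs. the random index (RI = 1.12 for a 5 x 5 matrix).
ci = (eigvals.real[principal] - len(A)) / (len(A) - 1)
print(dict(zip(factors, np.round(w, 3))), "CR =", round(ci / 1.12, 3))
```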

  6. Targeted analyte deconvolution and identification by four-way parallel factor analysis using three-dimensional gas chromatography with mass spectrometry data.

    PubMed

    Watson, Nathanial E; Prebihalo, Sarah E; Synovec, Robert E

    2017-08-29

    Comprehensive three-dimensional gas chromatography with time-of-flight mass spectrometry (GC³-TOFMS) creates an opportunity to explore a new paradigm in chemometric analysis. Using this newly described instrument and the well understood Parallel Factor Analysis (PARAFAC) model we present one option for utilization of the novel GC³-TOFMS data structure. We present a method which builds upon previous work in both GC³ and targeted analysis using PARAFAC to simplify some of the implementation challenges previously discovered. Conceptualizing the GC³-TOFMS instead as a one-dimensional gas chromatograph with GC × GC-TOFMS detection we allow the instrument to create the PARAFAC target window natively. Each first dimension modulation thus creates a full GC × GC-TOFMS chromatogram fully amenable to PARAFAC. A simple mixture of 115 compounds and a diesel sample are interrogated through this methodology. All test analyte targets are successfully identified in both mixtures. In addition, mass spectral matching of the PARAFAC loadings to library spectra yielded results greater than 900 in 40 of 42 test analyte cases. Twenty-nine of these cases produced match values greater than 950. Copyright © 2017 Elsevier B.V. All rights reserved.
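
    A minimal three-way PARAFAC alternating-least-squares sketch, included to illustrate the trilinear model the targeted deconvolution relies on; the synthetic tensor and rank are placeholders, and the four-way, targeted GC³-TOFMS workflow of the paper is not reproduced.

```python
# Minimal alternating-least-squares PARAFAC sketch for a three-way array (the modes could
# be, e.g., retention time x retention time x m/z). Synthetic data and rank are placeholders.
import numpy as np

def parafac_als(X, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank)); B = rng.random((J, rank)); C = rng.random((K, rank))
    for _ in range(n_iter):
        # Each update solves the linear least-squares problem for one factor matrix.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Synthetic rank-2 tensor with a little noise, then recovery of the loadings.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((20, 2)), rng.random((15, 2)), rng.random((30, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0) + 0.01 * rng.standard_normal((20, 15, 30))
A, B, C = parafac_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```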

  7. Analytical approaches for the detection of emerging therapeutics and non-approved drugs in human doping controls.

    PubMed

    Thevis, Mario; Schänzer, Wilhelm

    2014-12-01

    The number and diversity of potentially performance-enhancing substances is continuously growing, fueled by new pharmaceutical developments but also by the inventiveness and, at the same time, unscrupulousness of black-market (designer) drug producers and providers. In terms of sports drug testing, this situation necessitates reactive as well as proactive research and expansion of the analytical armamentarium to ensure timely, adequate, and comprehensive doping controls. This review summarizes literature published over the past 5 years on new drug entities, discontinued therapeutics, and 'tailored' compounds classified as doping agents according to the regulations of the World Anti-Doping Agency, with particular attention to analytical strategies enabling their detection in human blood or urine. Among these compounds, low- and high-molecular mass substances of peptidic (e.g. modified insulin-like growth factor-1, TB-500, hematide/peginesatide, growth hormone releasing peptides, AOD-9604, etc.) and non-peptidic (selective androgen receptor modulators, hypoxia-inducible factor stabilizers, siRNA, S-107 and ARM036/aladorian, etc.) as well as inorganic (cobalt) nature are considered and discussed in terms of specific requirements originating from physicochemical properties, concentration levels, metabolism, and their amenability for chromatographic-mass spectrometric or alternative detection methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Reevaluation of air surveillance station siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    2016-07-06

    DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data in order to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
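
    A minimal sketch of the general idea described above, assuming each directional sector's weight is simply proportional to the product of its wind-rose frequency and its population; this simplification and all numbers are assumptions, not the exact Waite formulation or SRS data.

```python
# Minimal sketch: allocate monitoring stations to directional sectors in proportion to an
# assumed weight = wind-rose frequency x population. All numbers are placeholders, and the
# weighting is a simplification of the Waite-type method, not its exact formulation.
import numpy as np

sectors    = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
wind_freq  = np.array([0.10, 0.08, 0.12, 0.15, 0.18, 0.14, 0.13, 0.10])  # fraction of time wind blows toward each sector
population = np.array([1200, 800, 4500, 9000, 3000, 600, 1500, 2200])    # people living in each sector

weights = wind_freq * population
weights /= weights.sum()

n_stations = 12                                        # available monitoring stations
allocation = np.floor(weights * n_stations).astype(int)
# Hand out any remaining stations to the sectors with the largest fractional remainders.
remainder = weights * n_stations - allocation
for idx in np.argsort(remainder)[::-1][: n_stations - allocation.sum()]:
    allocation[idx] += 1

print(dict(zip(sectors, allocation)))
```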

  9. Method for Operating a Sensor to Differentiate Between Analytes in a Sample

    DOEpatents

    Kunt, Tekin; Cavicchi, Richard E; Semancik, Stephen; McAvoy, Thomas J

    1998-07-28

    Disclosed is a method for operating a sensor to differentiate between first and second analytes in a sample. The method comprises the steps of determining an input profile for the sensor which will enhance the difference in the output profiles of the sensor as between the first analyte and the second analyte; determining a first analyte output profile as observed when the input profile is applied to the sensor; determining a second analyte output profile as observed when the temperature profile is applied to the sensor; introducing the sensor to the sample while applying the temperature profile to the sensor, thereby obtaining a sample output profile; and evaluating the sample output profile as against the first and second analyte output profiles to thereby determine which of the analytes is present in the sample.
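
    A minimal sketch of the final evaluation step as summarised above: the sample output profile is compared against the stored single-analyte output profiles and assigned to the closer one. The profiles are synthetic placeholders, not real sensor data.

```python
# Minimal sketch: classify a sample output profile by the residual norm against the stored
# single-analyte output profiles. Profiles here are synthetic placeholders.
import numpy as np

t = np.linspace(0.0, 1.0, 200)                      # time over one input (temperature) cycle
profile_a = np.exp(-((t - 0.3) / 0.05) ** 2)        # stored output profile for analyte A
profile_b = np.exp(-((t - 0.6) / 0.08) ** 2)        # stored output profile for analyte B

rng = np.random.default_rng(3)
sample = profile_b + 0.05 * rng.standard_normal(t.size)   # measured sample output profile

residuals = {name: np.linalg.norm(sample - ref)
             for name, ref in {"analyte A": profile_a, "analyte B": profile_b}.items()}
print("classified as:", min(residuals, key=residuals.get), residuals)
```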

  10. Consideration of some factors affecting low-frequency fuselage noise transmission for propeller aircraft

    NASA Technical Reports Server (NTRS)

    Mixson, J. S.; Roussos, L. A.

    1986-01-01

    Possible reasons for disagreement between measured and predicted trends of sidewall noise transmission at low frequency are investigated using simplified analysis methods. An analytical model combining incident plane acoustic waves with an infinite flat panel is used to study the effects of sound incidence angle, plate structural properties, frequency, absorption, and the difference between noise reduction and transmission loss. Analysis shows that these factors have significant effects on noise transmission but they do not account for the differences between measured and predicted trends at low frequencies. An analytical model combining an infinite flat plate with a normally incident acoustic wave having exponentially decaying magnitude along one coordinate is used to study the effect of a localized source distribution such as is associated with propeller noise. Results show that the localization brings the predicted low-frequency trend of noise transmission into better agreement with measured propeller results. This effect is independent of low-frequency stiffness effects that have been previously reported to be associated with boundary conditions.
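
    For reference, the textbook oblique-incidence mass law for an unbounded limp panel, the kind of infinite-plate model discussed above, is shown below (for context only; not necessarily the exact model used in the paper).

```latex
% m'' is the panel mass per unit area, omega the angular frequency, theta the incidence
% angle, and rho_0 c_0 the characteristic impedance of air.
\[
  \mathrm{TL}(\theta) \;=\; 10 \log_{10}\!\left[\,1 +
     \left(\frac{\omega\, m''\cos\theta}{2\,\rho_0 c_0}\right)^{\!2}\right]
  \quad \mathrm{dB}.
\]
```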

  11. Radiated flow of chemically reacting nanoliquid with an induced magnetic field across a permeable vertical plate

    NASA Astrophysics Data System (ADS)

    Mahanthesh, B.; Gireesha, B. J.; Athira, P. R.

    The impact of an induced magnetic field on the flow of an incompressible water-copper nanoliquid over a flat porous plate is examined analytically. The flow is assumed to be laminar, steady and two-dimensional. The plate is subjected to a uniform free-stream velocity as well as a suction velocity. The flow formulation is developed using the Maxwell-Garnett (MG) and Brinkman models of the nanoliquid. The effects of thermal radiation, viscous dissipation, a temperature-dependent heat source/sink and a first-order chemical reaction are also included. The resulting non-linear problems are non-dimensionalized, and analytic solutions are presented via a series expansion method. Graphs are plotted to analyze the influence of the pertinent parameters on the flow, magnetic, heat and mass transfer fields as well as on the friction factor, current density, Nusselt number and Sherwood number. It is found that the friction factor at the plate increases with the magnetic Prandtl number, while the rate of heat transfer decreases with increasing nanoparticle volume fraction and magnetic field strength.
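    The two nanoliquid closure models named above have standard algebraic forms; a brief sketch of how the effective properties of a water-copper nanoliquid might be computed is given below. The property values and the 5% volume fraction are illustrative assumptions, not taken from the paper.

```python
def maxwell_garnett_conductivity(k_f, k_p, phi):
    """Effective thermal conductivity of a dilute suspension of spheres
    (Maxwell-Garnett model)."""
    return k_f * (k_p + 2 * k_f - 2 * phi * (k_f - k_p)) / (
        k_p + 2 * k_f + phi * (k_f - k_p)
    )

def brinkman_viscosity(mu_f, phi):
    """Effective dynamic viscosity of the nanoliquid (Brinkman model)."""
    return mu_f / (1 - phi) ** 2.5

# Illustrative water-copper nanoliquid at 5% particle volume fraction
k_water, k_copper = 0.613, 401.0   # W/(m K)
mu_water = 1.0e-3                  # Pa s
phi = 0.05
print(maxwell_garnett_conductivity(k_water, k_copper, phi))
print(brinkman_viscosity(mu_water, phi))
```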

  12. Polyol-enhanced dispersive liquid-liquid microextraction coupled with gas chromatography and nitrogen phosphorous detection for the determination of organophosphorus pesticides from aqueous samples, fruit juices, and vegetables.

    PubMed

    Farajzadeh, Mir Ali; Afshar Mogaddam, Mohammad Reza; Alizadeh Nabil, Ali Akbar

    2015-12-01

    Polyol-enhanced dispersive liquid-liquid microextraction has been proposed for the extraction and preconcentration of some organophosphorus pesticides from different samples. In the present study, a high volume of an aqueous phase containing a polyol (sorbitol) is prepared, and then a disperser solvent along with an extraction solvent is rapidly injected into it. Sorbitol gave the best results and was more effective at enhancing the extraction recoveries of the analytes than inorganic salts such as sodium chloride, potassium chloride, and sodium sulfate. Under the optimum extraction conditions, the method showed low limits of detection and quantification within the ranges of 12-56 and 44-162 pg/mL, respectively. Enrichment factors and extraction recoveries were in the ranges of 2799-3033 and 84-92%, respectively. The method precision was evaluated at a concentration of 10 ng/mL of each analyte, and relative standard deviations were found to be less than 5.9% for intraday (n = 6) and less than 7.8% for interday (n = 4) measurements. Finally, some aqueous samples were successfully analyzed using the proposed method and four analytes (diazinon, dimethoate, chlorpyrifos, and phosalone) were determined, some of them at the ng/mL level. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
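    For context, the enrichment factor and extraction recovery reported above are conventionally defined as the concentration ratio between the settled extraction phase and the original aqueous sample, and the percentage of analyte transferred into the settled phase. A small sketch with purely illustrative volumes and concentrations (not the paper's values) follows.

```python
def enrichment_factor(c_sedimented, c_initial):
    """EF: analyte concentration in the settled extraction phase
    divided by its initial concentration in the aqueous sample."""
    return c_sedimented / c_initial

def extraction_recovery(c_sedimented, v_sedimented, c_initial, v_aqueous):
    """ER (%): fraction of the total analyte transferred to the
    settled phase, expressed as a percentage."""
    return 100.0 * (c_sedimented * v_sedimented) / (c_initial * v_aqueous)

# Illustrative numbers only: 10 ng/mL spiked sample, 20 mL aqueous phase,
# 6 uL settled organic phase containing 28 ug/mL (28 000 ng/mL) of analyte
print(enrichment_factor(28_000.0, 10.0))                 # ~2800
print(extraction_recovery(28_000.0, 0.006, 10.0, 20.0))  # ~84%
```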

  13. A multi-analyte serum test for the detection of non-small cell lung cancer

    PubMed Central

    Farlow, E C; Vercillo, M S; Coon, J S; Basu, S; Kim, A W; Faber, L P; Warren, W H; Bonomi, P; Liptay, M J; Borgia, J A

    2010-01-01

    Background: In this study, we appraised a wide assortment of biomarkers previously shown to have diagnostic or prognostic value for non-small cell lung cancer (NSCLC) with the intent of establishing a multi-analyte serum test capable of identifying patients with lung cancer. Methods: Circulating levels of 47 biomarkers were evaluated against patient cohorts consisting of 90 NSCLC and 43 non-cancer controls using commercial immunoassays. Multivariate statistical methods were used on all biomarkers achieving statistical relevance to define an optimised panel of diagnostic biomarkers for NSCLC. The resulting biomarkers were fashioned into a classification algorithm and validated against serum from a second patient cohort. Results: A total of 14 analytes achieved statistical relevance upon evaluation. Multivariate statistical methods then identified a panel of six biomarkers (tumour necrosis factor-α, CYFRA 21-1, interleukin-1ra, matrix metalloproteinase-2, monocyte chemotactic protein-1 and sE-selectin) as being the most efficacious for diagnosing early stage NSCLC. When tested against a second patient cohort, the panel successfully classified 75 of 88 patients. Conclusions: Here, we report the development of a serum algorithm with high specificity for classifying patients with NSCLC against cohorts of various 'high-risk' individuals. A high rate of false positives was observed within the cohort in which patients had non-neoplastic lung nodules, possibly as a consequence of the inflammatory nature of these conditions. PMID:20859284
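    The study's workflow, selecting a marker panel on one cohort and validating the resulting classifier on an independent cohort, can be sketched generically. The paper does not specify the classification algorithm, so logistic regression and the simulated data below are placeholders for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical six-marker panel classifier: the paper does not specify the
# algorithm used, so logistic regression stands in purely for illustration.
rng = np.random.default_rng(0)
markers = ["TNF-alpha", "CYFRA 21-1", "IL-1ra", "MMP-2", "MCP-1", "sE-selectin"]

# Simulated training cohort (rows = patients, columns = the six markers)
X_train = rng.normal(size=(133, 6))
y_train = rng.integers(0, 2, size=133)   # 1 = NSCLC, 0 = control

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Independent validation cohort, mirroring the two-cohort study design
X_valid = rng.normal(size=(88, 6))
y_valid = rng.integers(0, 2, size=88)
print("validation accuracy:", model.score(X_valid, y_valid))
```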

  14. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; determination of selected carbamate pesticides in water by high-performance liquid chromatography

    USGS Publications Warehouse

    Werner, S.L.; Johnson, S.M.

    1994-01-01

    As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
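    Quantitation in the record above relies on external-standard calibration curves built from UV-absorbance peak areas. A minimal sketch of that calculation, with hypothetical standard concentrations and peak areas rather than the method's actual calibration data, is shown below.

```python
import numpy as np

def external_standard_quantitation(std_conc, std_area, sample_area):
    """Quantify an analyte from its peak area using an external-standard
    calibration curve (ordinary least-squares line through the standards).

    Illustrative only; the USGS method's actual calibration model and
    acceptance criteria are defined in the report itself.
    """
    slope, intercept = np.polyfit(std_conc, std_area, 1)
    return (sample_area - intercept) / slope

# Hypothetical calibration standards at 0.5-10 ug/L and a sample peak area
conc = [0.5, 1.0, 2.0, 5.0, 10.0]
area = [260, 510, 1000, 2540, 5080]
print(round(external_standard_quantitation(conc, area, 1230), 2), "ug/L")
```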

  15. Pushing quantitation limits in micro UHPLC-MS/MS analysis of steroid hormones by sample dilution using high volume injection.

    PubMed

    Márta, Zoltán; Bobály, Balázs; Fekete, Jenő; Magda, Balázs; Imre, Tímea; Mészáros, Katalin Viola; Szabó, Pál Tamás

    2016-09-10

    Ultratrace analysis of sample components requires excellent analytical performance in terms of limits of quantitation (LoQ). Coupling micro UHPLC with sensitive tandem mass spectrometry provides state-of-the-art solutions for such analytical problems. The decreased column volume in micro LC limits the injectable sample volume. However, if the analyte concentration is extremely low, it might be necessary to inject high sample volumes. This is particularly critical for strong sample solvents and weakly retained analytes, which is often the case when preparing biological samples (protein precipitation, sample extraction, etc.). In that case, high injection volumes may cause band broadening, peak distortion or even elution in the dead volume. In this study, we evaluated the possibilities of high volume injection onto microbore RP-LC columns when the sample solvent is diluted. The presented micro RP-LC-MS/MS method was optimized for the analysis of steroid hormones from human plasma after protein precipitation with organic solvents. A proper sample dilution procedure helps to increase the injection volume without compromising peak shapes. Finally, due to the increased injection volume, the limit of quantitation can be decreased by a factor of 2-5, depending on the analytes and the experimental conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
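    The dilution-and-large-volume-injection idea reduces to simple arithmetic: diluting the sample d-fold weakens the solvent enough to allow a proportionally larger injection, so the analyte amount loaded on column can still increase. The sketch below is an assumption-laden illustration of that trade-off, not a calculation from the cited method.

```python
def on_column_gain(v_inj_original_ul, v_inj_diluted_ul, dilution_factor):
    """Relative gain in the amount of analyte loaded on column when a
    sample is diluted d-fold but injected in a larger volume.

    Illustrative arithmetic only; it ignores detector and matrix effects
    and is not taken from the cited method.
    """
    return (v_inj_diluted_ul / v_inj_original_ul) / dilution_factor

# Diluting 2-fold but injecting 10 uL instead of 1 uL loads 5x more analyte,
# consistent in spirit with the reported 2-5x LoQ improvement.
print(on_column_gain(1.0, 10.0, 2.0))
```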

  16. 3-D discrete analytical ridgelet transform.

    PubMed

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform using the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.
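    A heavily simplified sketch of the Fourier-based Radon step underlying the transform is given below: the 3-D FFT of a volume is sampled along one radial line through the origin and inverse-transformed to obtain the corresponding projection (projection-slice theorem). The discrete analytical lines with arithmetical thickness and the final 1-D wavelet step that completes the ridgelet are omitted; the function name and the nearest-neighbour sampling are assumptions for illustration.

```python
import numpy as np

def radial_line_projection(volume, direction):
    """Very simplified sketch of the Fourier-based discrete Radon step:
    sample the 3-D FFT of a volume along one radial line through the
    origin and return the corresponding 1-D Radon projection.

    The paper's discrete analytical lines, arithmetical-thickness handling
    and the final 1-D wavelet step (which completes the ridgelet) are all
    simplified away here.
    """
    n = volume.shape[0]
    spectrum = np.fft.fftshift(np.fft.fftn(volume))
    centre = n // 2
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Nearest-neighbour sampling of the spectrum along the radial line
    t = np.arange(-centre, centre)
    idx = np.clip(np.rint(centre + np.outer(t, d)).astype(int), 0, n - 1)
    line = spectrum[idx[:, 0], idx[:, 1], idx[:, 2]]
    # 1-D inverse FFT of the central slice gives the Radon projection
    return np.real(np.fft.ifft(np.fft.ifftshift(line)))

volume = np.random.rand(32, 32, 32)
print(radial_line_projection(volume, (1, 1, 0)).shape)   # (32,)
```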

  17. PESTICIDE ANALYTICAL METHODS TO SUPPORT DUPLICATE-DIET HUMAN EXPOSURE MEASUREMENTS

    EPA Science Inventory

    Historically, analytical methods for determination of pesticides in foods have been developed in support of regulatory programs and are specific to food items or food groups. Most of the available methods have been developed, tested and validated for relatively few analytes an...

  18. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  19. Pre-analytical and analytical factors influencing Alzheimer's disease cerebrospinal fluid biomarker variability.

    PubMed

    Fourier, Anthony; Portelius, Erik; Zetterberg, Henrik; Blennow, Kaj; Quadrio, Isabelle; Perret-Liaudet, Armand

    2015-09-20

    A panel of cerebrospinal fluid (CSF) biomarkers including total Tau (t-Tau), Tau phosphorylated at residue 181 (p-Tau) and β-amyloid peptides (Aβ42 and Aβ40) is frequently used as an aid in Alzheimer's disease (AD) diagnosis for young patients with cognitive impairment, for predicting prodromal AD in mild cognitive impairment (MCI) subjects, for AD discrimination in atypical clinical phenotypes, and for inclusion/exclusion and stratification of patients in clinical trials. Due to variability in absolute levels between laboratories, there is no consensus on a medical cut-off value for the CSF AD signature. Thus, for full implementation of this core AD biomarker panel in clinical routine, this issue has to be solved. The variability can be explained by both pre-analytical and analytical factors. For example, the plastic tubes used for CSF collection and storage, the lack of reference material and the variability of the analytical protocols have been identified as important sources of variability. The aim of this review is to highlight these pre-analytical and analytical factors and to describe the efforts made to counteract them in order to establish cut-off values for core CSF AD biomarkers. This review gives the current state of recommendations. Copyright © 2015. Published by Elsevier B.V.

  20. The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study

    NASA Astrophysics Data System (ADS)

    Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.

    2017-01-01

    Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of nitrogen oxides which deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88% or ±60% on average. When there were the same distribution assumptions and a correlation of 0.4 between N input and EF, the 95% confidence limits estimated by the Monte Carlo method were -34% and +94% or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively. As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
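    A hedged sketch of the Monte Carlo procedure described above (the independence case) is given below: emissions are the product of a normally distributed N input and a log-normally distributed emission factor, and the 95% limits are read off the simulated distribution. All distribution parameters are placeholders, not the study's values, and the correlated case (correlation 0.4) is not shown.

```python
import numpy as np

# Hedged sketch of the Monte Carlo approach described above: emissions are
# the product of N input (assumed normal) and EF (assumed log-normal).
# All numbers below are placeholders, not the study's values.
rng = np.random.default_rng(1)
n_draws = 100_000

n_input = rng.normal(loc=1.0, scale=0.05, size=n_draws)              # relative N input
ef = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n_draws)       # emission factor

emissions = n_input * ef
mean = emissions.mean()
lower, upper = np.percentile(emissions, [2.5, 97.5])
print(f"95% limits: {100 * (lower / mean - 1):+.0f}% / {100 * (upper / mean - 1):+.0f}%")
```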
