Contains basic information on the role and origins of the Selected Analytical Methods, including the formation of the Homeland Security Laboratory Capacity Work Group and the Environmental Evaluation Analytical Process Roadmap for Homeland Security Events.
Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T
2017-01-01
Measurement of inulin clearance is considered the gold standard for determining kidney function in children, but this method is time-consuming and expensive. The glomerular filtration rate (GFR) is, on the other hand, easier to calculate using various creatinine- and/or cystatin C (Cys C)-based formulas. However, different and non-interchangeable analytical methods exist for the determination of serum creatinine (Scr) and Cys C. Given that different analytical methods for determining creatinine and Cys C were used to validate the existing GFR formulas, clinicians should be aware of the method type used in their local laboratory. In this study, we compared GFR results calculated with different GFR formulas, using Scr and Cys C values determined either by the analytical method originally employed for validation or by an alternative analytical method, to evaluate any possible effects on performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR with equations in which this analytical method had originally been used for validation. These same values were then also used in GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed against the GFR measured by creatinine clearance (CrCl). Unexpectedly, GFR equations fed Cys C values from a possibly incompatible analytical method did not yield a significantly different classification of patients as having normal or reduced GFR compared with the classification based on CrCl. Sensitivity and specificity were adequate.
Conversely, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared with CrCl. Although clinicians should take care to apply a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.
METHOD DEVELOPMENT FOR THE DETERMINATION OF FORMALDEHYDE IN SAMPLES OF ENVIRONMENTAL ORIGIN
An analytical method was developed for the determination of formaldehyde in samples of environmental origin. After a review of the current literature, five candidate methods involving chemical derivatization were chosen for evaluation. The five derivatization reagents studied wer...
Method of multiplexed analysis using ion mobility spectrometer
Belov, Mikhail E [Richland, WA]; Smith, Richard D [Richland, WA]
2009-06-02
A method for analyzing analytes from a sample introduced into a spectrometer by generating a pseudo-random sequence of modulation bins, organizing each modulation bin as a series of submodulation bins, thereby forming an extended pseudo-random sequence of submodulation bins, releasing the analytes in a series of analyte packets into the spectrometer, thereby generating an unknown original ion signal vector, detecting the analytes at a detector, and characterizing the sample using the resulting plurality of analyte signal subvectors. The method is advantageously applied to an ion mobility spectrometer, and to an ion mobility spectrometer interfaced with a time-of-flight mass spectrometer.
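The multiplexing idea above can be sketched numerically: a pseudo-random gate sequence superimposes circularly shifted copies of the ion signal, and the original signal is recovered by inverting the resulting circulant system. This is only an illustrative toy (the sequence, lengths, and signal values are invented, not taken from the patent):

```python
import numpy as np

# Maximal-length 0/1 gating sequence of length 7 (LFSR taps x^3 + x + 1).
prbs = np.array([1, 1, 1, 0, 1, 0, 0], dtype=float)
n = len(prbs)

# True (unknown) ion arrival signal: one packet amplitude per modulation bin.
x_true = np.array([0.0, 5.0, 1.0, 0.0, 3.0, 0.0, 0.5])

# Each '1' opens the ion gate, so the detector records the superposition of
# circularly shifted copies of the signal (a circulant modulation matrix).
C = np.array([[prbs[(i - j) % n] for j in range(n)] for i in range(n)])
y = C @ x_true  # multiplexed detector trace

# Demultiplexing: an m-sequence circulant is nonsingular, so recovery is exact.
x_rec = np.linalg.solve(C, y)
```

The gain of multiplexing in practice is duty cycle: many gate openings per scan instead of one, with the signal untangled afterwards.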
Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza
2014-04-05
The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only linear combinations of them can. Moreover, when predicting an analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations, so a Gauss-Newton RBL algorithm must be used, which is not as simple as the original protocol. In addition, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set, producing a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.
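The variation-matrix pretreatment can be illustrated with a toy monoprotic acid (all numbers invented). Because the two species concentrations sum to a constant at every pH, subtracting the first spectrum leaves only one independent direction of variation per analyte, which is why the pretreated array behaves like a one-species (full-rank) problem:

```python
import numpy as np

# Monoprotic acid HA <-> A-: closure means c_HA + c_A = c_total at every pH.
pH = np.linspace(2, 8, 13)
Ka = 1e-5
c_total = 1.0
c_A = c_total / (1.0 + 10 ** (-pH) / Ka)   # fraction deprotonated * c_total
c_HA = c_total - c_A

wavelengths = np.linspace(400, 500, 21)
s_HA = np.exp(-((wavelengths - 430) / 15.0) ** 2)   # pure spectrum of HA
s_A = np.exp(-((wavelengths - 470) / 15.0) ** 2)    # pure spectrum of A-

# Rows of D are absorbance spectra recorded at each pH.
D = np.outer(c_HA, s_HA) + np.outer(c_A, s_A)

# Pretreatment: subtract the first spectrum from every other spectrum.
V = D[1:] - D[0]

# D needs two components per analyte; the variation matrix needs only one,
# because the concentration changes of HA and A- are equal and opposite.
print(np.linalg.matrix_rank(D), np.linalg.matrix_rank(V))  # → 2 1
```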
ERIC Educational Resources Information Center
Shimokawa, Kenichi; Lambert, Michael J.; Smart, David W.
2010-01-01
Objective: Outcome research has documented worsening among a minority of the patient population (5% to 10%). In this study, we conducted a meta-analytic and mega-analytic review of a psychotherapy quality assurance system intended to enhance outcomes in patients at risk of treatment failure. Method: Original data from six major studies conducted…
Semi-analytic valuation of stock loans with finite maturity
NASA Astrophysics Data System (ADS)
Lu, Xiaoping; Putri, Endah R. M.
2015-10-01
In this paper we study stock loans of finite maturity with different dividend distributions semi-analytically, using the analytical approximation method in Zhu (2006). Stock loan partial differential equations (PDEs) are established under the Black-Scholes framework, and the Laplace transform method is used to solve them. The optimal exit price and stock loan value are obtained in Laplace space, and values in the original time space are recovered by numerical Laplace inversion. To demonstrate the efficiency and accuracy of our semi-analytic method, several examples are presented and the results are compared with those calculated using existing methods. We also present a calculation of the fair service fee charged by the lender for different loan parameters.
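The paper does not specify which numerical inversion scheme is used; one common choice for recovering time-domain values from a Laplace-space solution is the Gaver-Stehfest algorithm, sketched here and checked against a transform pair with a known original:

```python
import math

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace transform F(s) at time t (Gaver-Stehfest, N even)."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        Vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            Vk += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k)
            )
        Vk *= (-1) ** (N // 2 + k)
        total += Vk * F(k * ln2 / t)
    return ln2 / t * total

# Known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0)
```

Gaver-Stehfest only needs real-valued samples of F(s), which makes it convenient for smooth option-pricing-style solutions, though it is sensitive to round-off for large N.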
Analytical methods for gelatin differentiation from bovine and porcine origins and food products.
Nhari, Raja Mohd Hafidz Raja; Ismail, Amin; Che Man, Yaakob B
2012-01-01
Usage of gelatin in food products has been widely debated for several years, the concerns centering on the source of the gelatin, religion, and health. As a result, various analytical methods have been introduced and developed to determine whether a gelatin is made from porcine or bovine sources. These methods draw on a diverse range of equipment and techniques, including spectroscopy, chemical precipitation, chromatography, and immunochemical assays. Each technique can differentiate gelatins to a certain extent, with its own advantages and limitations. This review gives an overview of the analytical methods available for differentiating bovine and porcine gelatin, both as such and in food products, as a basis for new method development. © 2011 Institute of Food Technologists®
Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions
NASA Technical Reports Server (NTRS)
Balmes, Etienne
1993-01-01
An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.
Barricklow, Jason; Ryder, Tim F; Furlong, Michael T
2009-08-01
During LC-MS/MS quantification of a small molecule in human urine samples from a clinical study, an unexpected peak was observed to nearly co-elute with the analyte of interest in many study samples. Improved chromatographic resolution revealed the presence of at least 3 non-analyte peaks, which were identified as cysteine metabolites and N-acetyl (mercapturic acid) derivatives thereof. These metabolites produced artifact responses in the parent compound MRM channel due to decomposition in the ionization source of the mass spectrometer. Quantitative comparison of the analyte concentrations in study samples using the original chromatographic method and the improved chromatographic separation method demonstrated that the original method substantially over-estimated the analyte concentration in many cases. The substitution of electrospray ionization (ESI) for atmospheric pressure chemical ionization (APCI) nearly eliminated the source instability of these metabolites, which would have mitigated their interference in the quantification of the analyte, even without chromatographic separation. These results 1) demonstrate the potential for thiol metabolite interferences during the quantification of small molecules in pharmacokinetic samples, and 2) underscore the need to carefully evaluate LC-MS/MS methods for molecules that can undergo metabolism to thiol adducts to ensure that they are not susceptible to such interferences during quantification.
Parrinello, Christina M.; Grams, Morgan E.; Couper, David; Ballantyne, Christie M.; Hoogeveen, Ron C.; Eckfeldt, John H.; Selvin, Elizabeth; Coresh, Josef
2016-01-01
Background: Equivalence of laboratory tests over time is important for longitudinal studies. Even a small systematic difference (bias) can result in substantial misclassification. Methods: We selected 200 Atherosclerosis Risk in Communities Study participants attending all 5 study visits over 25 years. Eight analytes were re-measured in 2011–13 from stored blood samples from multiple visits: creatinine, uric acid, glucose, total cholesterol, HDL-cholesterol, LDL-cholesterol, triglycerides, and high-sensitivity C-reactive protein. Original values were recalibrated to re-measured values using Deming regression. Differences >10% were considered to reflect substantial bias, and correction equations were applied to affected analytes in the total study population. We examined trends in chronic kidney disease (CKD) pre- and post-recalibration. Results: Repeat measures were highly correlated with original values (Pearson's r>0.85 after removing outliers [median 4.5% of paired measurements]), but 2 of 8 analytes (creatinine and uric acid) had differences >10%. Original values of creatinine and uric acid were recalibrated to current values using correction equations. CKD prevalence differed substantially after recalibration of creatinine (visits 1, 2, 4 and 5 pre-recalibration: 21.7%, 36.1%, 3.5%, 29.4%; post-recalibration: 1.3%, 2.2%, 6.4%, 29.4%). For HDL-cholesterol, the current direct enzymatic method differed substantially from the magnesium dextran precipitation used during visits 1–4. Conclusions: Analytes re-measured in samples stored for ~25 years were highly correlated with original values, but two of the 8 analytes showed substantial bias at multiple visits. Laboratory recalibration improved reproducibility of test results across visits and resulted in substantial differences in CKD prevalence. We demonstrate the importance of consistent recalibration of laboratory assays in a cohort study. PMID:25952043
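Deming regression differs from ordinary least squares in that it allows measurement error in both the original and the re-measured values. A minimal sketch (the analyte values and the 0.9x + 0.1 relation are invented for the demo; with exact linear data the fit recovers the relation exactly):

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression; lam is the ratio of y- to x-error variances (1.0 here)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).mean()
    syy = ((y - my) ** 2).mean()
    sxy = ((x - mx) * (y - my)).mean()
    d = syy - lam * sxx
    slope = (d + np.sqrt(d ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical creatinine-like values: re-measured = 0.9 * original + 0.1.
original = np.array([0.6, 0.8, 1.0, 1.3, 1.7, 2.2])
remeasured = 0.9 * original + 0.1
slope, intercept = deming(original, remeasured)

# Recalibration: put the original values on the re-measured scale.
recalibrated = intercept + slope * original
```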
On the Decay of Correlations in Non-Analytic SO(n)-Symmetric Models
NASA Astrophysics Data System (ADS)
Naddaf, Ali
We extend the method of complex translations originally employed by McBryan-Spencer [2] to obtain a decay rate for the two-point function in two-dimensional SO(n)-symmetric models with non-analytic Hamiltonians.
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio, φ, maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
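The accuracy argument for analytic over finite-difference derivatives can be shown on a generic function (this is not the CEA/OpenMDAO code, just an illustration of the validation step described above): a central difference carries O(h²) truncation error plus round-off, while the hand-derived derivative is exact to machine precision.

```python
import math

# A smooth test function and its hand-derived (analytic) derivative.
f = lambda x: math.exp(x) * math.sin(x)
df_analytic = lambda x: math.exp(x) * (math.sin(x) + math.cos(x))

x0 = 1.3
exact = df_analytic(x0)

# Central finite difference: truncation error O(h^2), round-off ~ eps/h.
h = 1e-6
df_fd = (f(x0 + h) - f(x0 - h)) / (2 * h)

err_fd = abs(df_fd - exact)
print(err_fd)  # small but nonzero; the analytic derivative has no such error
```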
Artificial Intelligence Methods in Pursuit Evasion Differential Games
1990-07-30
objectives, sometimes with fuzzy ones. Classical optimization, control, or game theoretic methods are insufficient for their resolution. ... the Analytic Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process (AHP) is a general theory of
Semi-analytical solutions of the Schnakenberg model of a reaction-diffusion cell with feedback
NASA Astrophysics Data System (ADS)
Al Noufaey, K. S.
2018-06-01
This paper considers the application of a semi-analytical method to the Schnakenberg model of a reaction-diffusion cell. The semi-analytical method is based on the Galerkin method which approximates the original governing partial differential equations as a system of ordinary differential equations. Steady-state curves, bifurcation diagrams and the region of parameter space in which Hopf bifurcations occur are presented for semi-analytical solutions and the numerical solution. The effect of feedback control, via altering various concentrations in the boundary reservoirs in response to concentrations in the cell centre, is examined. It is shown that increasing the magnitude of feedback leads to destabilization of the system, whereas decreasing this parameter to negative values of large magnitude stabilizes the system. The semi-analytical solutions agree well with numerical solutions of the governing equations.
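The Galerkin reduction described above can be sketched on a scalar reaction-diffusion toy problem (not the two-species Schnakenberg system itself; equation, parameter values, and trial function are chosen for the demo): take u_t = u_xx + λ(u − u³) with u(0,t) = u(π,t) = 0, approximate u ≈ a(t)·sin(x), and project onto sin(x). Using ∫sin⁴dx / ∫sin²dx = 3/4 on [0, π], the PDE collapses to a single ODE.

```python
import math

# Reduced Galerkin ODE:  da/dt = -a + lam*(a - 0.75*a**3)
lam = 2.0
a = 0.1
dt = 1e-3
for _ in range(20000):          # forward Euler to the steady state
    a += dt * (-a + lam * (a - 0.75 * a ** 3))

# Nontrivial steady state of the reduced ODE: a* = sqrt(4*(lam-1)/(3*lam)).
a_star = math.sqrt(4 * (lam - 1) / (3 * lam))
print(a, a_star)
```

Steady states and bifurcations of the reduced ODE system are what produce the steady-state curves and bifurcation diagrams referred to in the abstract; the full study does the analogous reduction for the coupled Schnakenberg equations.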
NASA Astrophysics Data System (ADS)
Comastri, S. A.; Perez, Liliana I.; Pérez, Gervasio D.; Bastida, K.; Martin, G.
2008-04-01
The wavefront aberration of any image-forming system, and in particular of a human eye, is often expanded in Zernike modes, each mode being weighted by a coefficient that depends both on the image-forming components of the system and on the contour, size, and centering of the pupil. In the present article, expanding the wavefront aberration up to 7th order, an analytical method is developed to compute a new set of Zernike coefficients corresponding to a pupil in terms of an original set evaluated via ray tracing for a dilated and transversally arbitrarily displaced pupil. A transformation matrix of dimension 36×36 is obtained by multiplying the scaling-horizontal translation matrix previously derived by appropriate rotation matrices. Multiplying the original coefficients by this transformation matrix yields analytical formulas for each new coefficient; for the wavefront aberration information to remain available, these formulas must be employed in cases in which the new pupil is contained in the original one. The use of these analytical formulas is exemplified by applying them to study the effect of pupil contraction and/or decentering in three situations: calculation of the corneal aberrations of a keratoconic subject for the natural photopic pupil size and various decenterings; coma compensation by means of pupil shift in a fictitious system having only primary aberrations; and evaluation of the amount of astigmatism and coma of a hypothetical system originally having spherical aberration alone.
Maghrabi, Mufeed; Al-Abdullah, Tariq; Khattari, Ziad
2018-03-24
The two heating rates method (originally developed for first-order glow peaks) was used for the first time to evaluate the activation energy (E) of glow peaks obeying mixed-order (MO) kinetics. The derived expression for E has an insignificant additional term (on the scale of a few meV) compared with the first-order case. Hence, the original expression for E from the two heating rates method can be used with excellent accuracy for MO glow peaks. In addition, we derived a simple analytical expression for the MO parameter. The present procedure has the advantage that the MO parameter can now be evaluated from an analytical expression instead of the graphical relation between the geometrical factor and the MO parameter given by the existing peak shape methods. The applicability of the derived expressions to real samples was demonstrated on the glow curve of a Li2B4O7:Mn single crystal. The obtained parameters compare very well with those obtained by glow curve fitting and with the available published data.
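For the first-order case the two heating rates formula can be verified numerically. The peak condition β·E/(kB·Tm²) = s·exp(−E/(kB·Tm)) fixes the peak temperature Tm for each heating rate β; eliminating the frequency factor s between two rates gives E. The sketch below uses invented test values (E = 1 eV, s = 10¹² s⁻¹) and recovers E from the two synthetic peak temperatures:

```python
import math

kB = 8.617e-5  # Boltzmann constant, eV/K

def peak_temperature(E, s, beta, T0=400.0):
    """Solve beta*E/(kB*T^2) = s*exp(-E/(kB*T)) for T by fixed-point iteration."""
    T = T0
    for _ in range(200):
        T = E / (kB * math.log(s * kB * T ** 2 / (beta * E)))
    return T

E_true, s = 1.0, 1e12          # eV, 1/s (invented test values)
beta1, beta2 = 1.0, 4.0        # heating rates, K/s
T1 = peak_temperature(E_true, s, beta1)
T2 = peak_temperature(E_true, s, beta2)

# Two-heating-rates estimate of the activation energy:
E_est = kB * T1 * T2 / (T1 - T2) * math.log(beta1 * T2 ** 2 / (beta2 * T1 ** 2))
print(T1, T2, E_est)
```

The abstract's point is that the same expression remains accurate (to within a few meV) when the peak obeys mixed-order rather than first-order kinetics.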
Original analytic solution of a half-bridge modelled as a statically indeterminate system
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Panait, Cornel; Raicu, Alexandra; Barhalescu, Mihaela
2016-12-01
The paper presents an original computer based analytical model of a half-bridge belonging to a circular settling tank. The primary unknown is computed using the force method, the coefficients of the canonical equation being calculated using either the discretization of the bending moment diagram in trapezoids, or using the relations specific to the polygons. A second algorithm based on the method of initial parameters is also presented. Analyzing the new solution we came to the conclusion that most of the computer code developed for other model may be reused. The results are useful to evaluate the behavior of the structure and to compare with the results of the finite element models.
Authentication of meat and meat products.
Ballin, N Z
2010-11-01
In recent years, interest in meat authenticity has increased. Many consumers are concerned about the meat they eat and accurate labelling is important to inform consumer choice. Authentication methods can be categorised into the areas where fraud is most likely to occur: meat origin, meat substitution, meat processing treatment and non-meat ingredient addition. Within each area the possibilities for fraud can be subcategorised as follows: meat origin-sex, meat cuts, breed, feed intake, slaughter age, wild versus farmed meat, organic versus conventional meat, and geographic origin; meat substitution-meat species, fat, and protein; meat processing treatment-irradiation, fresh versus thawed meat and meat preparation; non-meat ingredient addition-additives and water. Analytical methods used in authentication are as diverse as the authentication problems, and include a diverse range of equipment and techniques. This review is intended to provide an overview of the possible analytical methods available for meat and meat products authentication. In areas where no authentication methods have been published, possible strategies are suggested. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaurov, Alexander A., E-mail: kaurov@uchicago.edu
The methods for studying the epoch of cosmic reionization vary from full radiative transfer simulations to purely analytical models. While numerical approaches are computationally expensive and are not suitable for generating many mock catalogs, analytical methods are based on assumptions and approximations. We explore the interconnection between both methods. First, we ask how the analytical framework of excursion set formalism can be used for statistical analysis of numerical simulations and visual representation of the morphology of ionization fronts. Second, we explore the methods of training the analytical model on a given numerical simulation. We present a new code which emerged from this study. Its main application is to match the analytical model with a numerical simulation. It then allows one to generate mock reionization catalogs with volumes exceeding the original simulation quickly and computationally inexpensively, meanwhile reproducing large-scale statistical properties. These mock catalogs are particularly useful for cosmic microwave background polarization and 21 cm experiments, where large volumes are required to simulate the observed signal.
NASA Technical Reports Server (NTRS)
King, R. B.; Fordyce, J. S.; Antoine, A. C.; Leibecki, H. F.; Neustadter, H. E.; Sidik, S. M.
1976-01-01
Concentrations of 60 chemical elements in the airborne particulate matter were measured at 16 sites in Cleveland, OH over a 1 year period during 1971 and 1972 (45 to 50 sampling days). Analytical methods used included instrumental neutron activation, emission spectroscopy, and combustion techniques. Uncertainties in the concentrations associated with the sampling procedures, the analytical methods, the use of several analytical facilities, and samples with concentrations below the detection limits are evaluated in detail. The data are discussed in relation to other studies and source origins. The trace constituent concentrations as a function of wind direction are used to suggest a practical method for air pollution source identification.
ERIC Educational Resources Information Center
Albright, Jessica C.; Beussman, Douglas J.
2017-01-01
Capillary electrophoresis is an important analytical separation method used to study a wide variety of samples, including those of biological origin. Capillary electrophoresis may be covered in the classroom, especially in advanced analytical courses, and while many students are exposed to gel electrophoresis in biology or biochemistry…
Harries, Bruce; Filiatrault, Lyne; Abu-Laban, Riyad B
2018-05-30
Quality improvement (QI) analytic methodology is rarely encountered in the emergency medicine literature. We sought to comparatively apply QI design and analysis techniques to an existing data set, and discuss these techniques as an alternative to standard research methodology for evaluating a change in a process of care. We used data from a previously published randomized controlled trial on triage-nurse initiated radiography using the Ottawa ankle rules (OAR). QI analytic tools were applied to the data set from this study and evaluated comparatively against the original standard research methodology. The original study concluded that triage nurse-initiated radiographs led to a statistically significant decrease in mean emergency department length of stay. Using QI analytic methodology, we applied control charts and interpreted the results using established methods that preserved the time sequence of the data. This analysis found a compelling signal of a positive treatment effect that would have been identified after the enrolment of 58% of the original study sample, and in the 6th month of this 11-month study. Our comparative analysis demonstrates some of the potential benefits of QI analytic methodology. We found that had this approach been used in the original study, insights regarding the benefits of nurse-initiated radiography using the OAR would have been achieved earlier, and thus potentially at a lower cost. In situations where the overarching aim is to accelerate implementation of practice improvement to benefit future patients, we believe that increased consideration should be given to the use of QI analytic methodology.
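The control-chart analysis described above can be sketched with an individuals (XmR) chart: control limits are set from a baseline period using the average moving range, and later points falling outside the limits signal a process change as soon as it occurs. All numbers below are invented for illustration (they are not the study's length-of-stay data):

```python
# Baseline observations (e.g. a monthly process metric), then a process shift.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.2, 9.9, 10.1, 10.0]
monitored = baseline + [12.8, 13.1, 12.9]   # shift after the change

center = sum(baseline) / len(baseline)
moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
ucl = center + 2.66 * mr_bar   # standard XmR individuals-chart constant
lcl = center - 2.66 * mr_bar

# Simplest signal rule: any point beyond the control limits.
flagged = [i for i, v in enumerate(monitored) if v > ucl or v < lcl]
print(flagged)  # → [10, 11, 12]: the shift is signalled at its very first point
```

Because the chart preserves time order, the signal appears as soon as the shifted points arrive, which is the early-detection advantage the authors report over waiting for a fixed-sample significance test.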
DOE Office of Scientific and Technical Information (OSTI.GOV)
Synovec, R.E.; Johnson, E.L.; Bahowick, T.J.
1990-08-01
This paper describes a new technique for data analysis in chromatography, based on taking the point-by-point ratio of sequential chromatograms that have been baseline corrected. This ratio chromatogram provides a robust means for the identification and quantitation of analytes. In addition, the appearance of an interferent is made highly visible, even when it coelutes with desired analytes. For quantitative analysis, the region of the ratio chromatogram corresponding to the pure elution of an analyte is identified and used to calculate a ratio value equal to the ratio of concentrations of the analyte in sequential injections. For the ratio value calculation, a variance-weighted average is used, which compensates for the varying signal-to-noise ratio. This ratio value, or equivalently the percent change in concentration, is the basis of a chromatographic standard addition method and an algorithm to monitor analyte concentration in a process stream. In the case of overlapped peaks, a spiking procedure is used to calculate both the original concentration of an analyte and its signal contribution to the original chromatogram. Thus, quantitation and curve resolution may be performed simultaneously, without peak modeling or curve fitting. These concepts are demonstrated using data from ion chromatography, but the technique should be applicable to all chromatographic techniques.
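The ratio-chromatogram calculation can be sketched on a synthetic Gaussian peak (retention time, peak width, and concentrations are invented; the weighting here uses squared signal as a simple stand-in for the paper's variance weighting):

```python
import numpy as np

t = np.linspace(0, 10, 501)
peak = np.exp(-((t - 5.0) / 0.4) ** 2)

c1 = 2.0 * peak                 # baseline-corrected chromatogram, injection 1
c2 = 3.0 * peak                 # injection 2: true concentration ratio 1.5

# Point-by-point ratio, restricted to the region of pure analyte elution;
# weight each point by squared signal so low-S/N tails contribute little.
mask = c1 > 0.05 * c1.max()
ratio = c2[mask] / c1[mask]
weights = c1[mask] ** 2
ratio_value = np.sum(weights * ratio) / np.sum(weights)
print(ratio_value)  # → 1.5 exactly, in this noise-free example
```

With noisy data the pointwise ratio fluctuates most where the signal is small, which is exactly what the variance weighting suppresses; an interferent would show up as a region where the ratio departs from the flat plateau.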
Interactive visual exploration and analysis of origin-destination data
NASA Astrophysics Data System (ADS)
Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.
2018-05-01
In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns of massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g. distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experiment results show the origin/destination hotspots and their spatial interaction patterns. They also demonstrate the effectiveness of our proposed approach.
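The clustering step can be sketched with a minimal single-linkage agglomerative procedure over origin-destination feature vectors (the trips and the distance threshold below are invented; the paper additionally lets users choose features such as distance and duration and clusters interactively):

```python
import math

# Invented OD feature vectors: (origin_x, origin_y, dest_x, dest_y).
trips = [
    (0.0, 0.0, 5.0, 5.0), (0.1, 0.2, 5.1, 4.9), (0.2, 0.1, 4.9, 5.1),  # group A
    (9.0, 9.0, 1.0, 1.0), (9.1, 8.9, 1.1, 0.9),                        # group B
]

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

threshold = 1.0                      # stop merging above this distance
clusters = [[t] for t in trips]      # start: every trip is its own cluster
while True:
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            # single linkage: distance between closest members
            d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
            if d < threshold and (best is None or d < best[0]):
                best = (d, i, j)
    if best is None:
        break
    _, i, j = best
    clusters[i].extend(clusters.pop(j))

print(len(clusters))  # → 2
```

Adjusting the threshold interactively, as the paper describes, directly changes how many OD clusters survive and hence what the parallel coordinates plot and map view display.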
Agut, C; Caron, A; Giordano, C; Hoffman, D; Ségalini, A
2011-09-10
In 2001, a multidisciplinary team of analytical scientists and statisticians at Sanofi-aventis published a methodology that has since governed the transfer of release monographs from R&D sites to Manufacturing sites. This article provides an overview of the recent adaptations brought to this original methodology, taking advantage of our experience and the new regulatory framework, in particular the risk management perspective introduced by ICH Q9. Although some alternate strategies have been introduced in our practices, the comparative testing strategy, based on equivalence testing as the statistical approach, remains the standard for assays bearing on very critical quality attributes. It is conducted with the aim of controlling the most important consumer's risks involved, at two levels, in analytical decisions in the frame of transfer studies: the risk, for the receiving laboratory, of making poor release decisions with the analytical method, and the risk, for the sending laboratory, of accrediting such a receiving laboratory despite its insufficient performance with the method. Among the enhancements to the comparative studies, the manuscript presents the process established within our company for better integration of the transfer study into the method life cycle, as well as proposals of generic acceptance criteria and designs for assay and related-substances methods. While maintaining the rigor and selectivity of the original approach, these improvements tend towards increased efficiency in transfer operations. Copyright © 2011 Elsevier B.V. All rights reserved.
An analytical SMASH procedure (ASP) for sensitivity-encoded MRI.
Lee, R F; Westgate, C R; Weiss, R G; Bottomley, P A
2000-05-01
The simultaneous acquisition of spatial harmonics (SMASH) method of imaging with detector arrays can reduce the number of phase-encoding steps, and thus MRI scan time, several-fold. The original approach utilized numerical gradient-descent fitting with the coil sensitivity profiles to create a set of composite spatial harmonics to replace the phase-encoding steps. Here, an analytical approach for generating the harmonics is presented. A transform is derived to project the harmonics onto a set of sensitivity profiles. A sequence of Fourier, Hilbert, and inverse Fourier transforms is then applied to analytically eliminate spatially dependent phase errors from the different coils while fully preserving the spatial encoding. By combining the transform and phase correction, the original numerical image reconstruction method can be replaced by an analytical SMASH procedure (ASP). The approach also allows simulation of SMASH imaging, revealing a criterion for the ratio of the detector sensitivity profile width to the detector spacing that produces optimal harmonic generation. When the detector geometry is suboptimal, a group of quasi-harmonics arises, which can be corrected and restored to pure harmonics. The simulation also reveals high-order harmonic modulation effects, and a demodulation procedure is presented that enables application of ASP to a large number of detectors. The method is demonstrated on a phantom and humans using a standard 4-channel phased-array MRI system. Copyright 2000 Wiley-Liss, Inc.
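The core SMASH idea, combining coil sensitivity profiles into composite spatial harmonics, can be sketched numerically. This is the numerical-fit formulation the abstract says the original method used (not the analytical ASP transform itself), with invented coil geometry: Gaussian sensitivities are fitted by least squares to the target harmonic exp(i·2πmx/L).

```python
import numpy as np

nx, ncoil = 256, 8
x = np.linspace(0.0, 1.0, nx)
centers = (np.arange(ncoil) + 0.5) / ncoil       # evenly spaced coil centers
sigma = 0.08                                     # sensitivity profile width
S = np.exp(-((x[None, :] - centers[:, None]) / sigma) ** 2)  # (ncoil, nx)

def fit_harmonic(m):
    """Least-squares coil weights reproducing the m-th spatial harmonic."""
    target = np.exp(2j * np.pi * m * x)
    A = S.T.astype(complex)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    residual = np.linalg.norm(A @ w - target) / np.linalg.norm(target)
    return w, residual

_, r0 = fit_harmonic(0)   # composite uniform profile
_, r1 = fit_harmonic(1)   # first harmonic, replacing one phase-encoding step
print(r0, r1)             # relative residuals; small for this geometry
```

The width-to-spacing ratio criterion mentioned in the abstract shows up directly here: shrinking sigma relative to the coil spacing makes the fitted harmonics ripple, while overly broad profiles lose the ability to synthesize higher harmonics.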
PPM mixtures of formaldehyde in gas cylinders: Stability and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.C.; Miller, S.B.; Patterson, L.M.
1999-07-01
Scott Specialty Gases has been successful in producing stable calibration gases of formaldehyde at low concentration. Critical to this success has been the development of a treatment process for high-pressure aluminum cylinders. Formaldehyde cylinders having concentrations of 20 ppm and 4 ppm were found to show only a small decline in concentration over a period of approximately 12 months. Since no NIST-traceable formaldehyde standards (or Standard Reference Materials) are available, all Scott's formaldehyde cylinders were originally certified by the traditional impinger method. This method involves an extremely tedious purification procedure for 2,4-dinitrophenylhydrazine (2,4-DNPH). A modified version of the impinger method has been developed that does not require extensive reagent purification for formaldehyde analysis. Extremely low formaldehyde blanks have been obtained with the modified method. The HPLC conditions in the original method were used for chromatographic separations. The modified method results in a lower analytical uncertainty for the formaldehyde standard mixtures. Consequently, it is possible to discern small differences between analytical results that are important for stability studies.
Methods for geochemical analysis
Baedecker, Philip A.
1987-01-01
The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.
Modern analytical methods for the detection of food fraud and adulteration by food category.
Hong, Eunyoung; Lee, Sang Yoo; Jeong, Jae Yun; Park, Jung Min; Kim, Byung Hee; Kwon, Kisung; Chun, Hyang Sook
2017-09-01
This review provides current information on the analytical methods used to identify food adulteration in the six most adulterated food categories: animal origin and seafood, oils and fats, beverages, spices and sweet foods (e.g. honey), grain-based food, and others (organic food and dietary supplements). The analytical techniques (both conventional and emerging) used to identify adulteration in these six food categories involve sensory, physicochemical, DNA-based, chromatographic and spectroscopic methods, and have been combined with chemometrics, making these techniques more convenient and effective for the analysis of a broad variety of food products. Despite recent advances, the need remains for suitably sensitive and widely applicable methodologies that encompass all the various aspects of food adulteration. © 2017 Society of Chemical Industry.
[Spectral scatter correction of coal samples based on quasi-linear local weighted method].
Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng
2014-07-01
The present paper puts forth a new spectral correction method based on quasi-linear expressions and local weighted functions. The first stage of the method is to select one of 3 quasi-linear expressions to replace the original linear expression in the MSC method: quadratic, cubic, or growth-curve. Then the local weighted function is constructed by introducing one of 4 kernel functions: the Gaussian, Epanechnikov, Biweight, or Triweight kernel. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established, based respectively on PLS and on a PCA-BP neural network, which can be used for estimating the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for the different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of the method, the experiment analyzed the correction results of 3 spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and can also enhance the information in spectral peaks. This work provides a more efficient way to enhance the correlation between corrected spectra and coal qualities significantly, and to improve the accuracy and stability of the analytical model substantially.
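For context, the quasi-linear expressions above generalize classic multiplicative scatter correction (MSC), in which each measured spectrum is regressed linearly against a reference (ideal or mean) spectrum and the fitted offset and slope are divided out. A minimal sketch of that linear baseline, using hypothetical spectra (the paper's method replaces the linear fit with quadratic, cubic, or growth-curve forms and adds kernel weights):

```python
def msc_correct(spectrum, reference):
    """Fit spectrum = a + b * reference by ordinary least squares,
    then return the corrected spectrum (spectrum - a) / b."""
    n = len(reference)
    mean_r = sum(reference) / n
    mean_s = sum(spectrum) / n
    cov = sum((r - mean_r) * (s - mean_s) for r, s in zip(reference, spectrum))
    var = sum((r - mean_r) ** 2 for r in reference)
    b = cov / var                  # multiplicative scatter (slope)
    a = mean_s - b * mean_r        # additive scatter (offset)
    return [(s - a) / b for s in spectrum]

reference = [0.10, 0.25, 0.60, 0.35, 0.15]        # hypothetical ideal spectrum
measured = [0.2 + 1.5 * r for r in reference]     # additive + multiplicative scatter
corrected = msc_correct(measured, reference)
print([round(c, 3) for c in corrected])           # [0.1, 0.25, 0.6, 0.35, 0.15]
```

For purely linear scatter, the correction recovers the reference spectrum exactly; the quasi-linear variants target cases where this linear assumption breaks down.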
Mu, Zhaobin; Feng, Xiaoxiao; Zhang, Yun; Zhang, Hongyan
2016-02-01
A multi-residue method based on modified QuEChERS (quick, easy, cheap, effective, rugged, and safe) sample preparation, followed by liquid chromatography tandem mass spectrometry (LC-MS/MS), was developed and validated for the determination of three selected fungicides (propiconazole, pyraclostrobin, and isopyrazam) in seven animal origin foods. The overall recoveries at the three spiking levels of 0.005, 0.05, and 0.5 mg kg⁻¹ spanned between 72.3 and 101.4%, with relative standard deviation (RSD) values between 0.7 and 14.9%. The method shows good linearity at concentrations between 0.001 and 1 mg L⁻¹, with a coefficient of determination (R²) >0.99 for each target analyte. The limits of detection (LODs) for the target analytes were between 0.04 and 1.26 μg kg⁻¹, and the limits of quantification (LOQs) were between 0.13 and 4.20 μg kg⁻¹. The matrix effect for each individual compound was evaluated through the study of the ratios of the areas obtained in solvent and matrix standards. The optimized method provided a negligible matrix effect for propiconazole, within 20%, whereas for pyraclostrobin and isopyrazam the matrix effect was relatively significant, with a maximum value of 49.8%. The developed method has been successfully applied to the analysis of 210 animal origin samples obtained from 16 provinces of China. The results suggested that the developed method was satisfactory for trace analysis of the three fungicides in animal origin foods.
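The matrix-effect evaluation described above reduces to comparing peak areas of matrix-matched and solvent standards; a common convention reports the percent deviation of their ratio from unity. A minimal sketch with hypothetical peak areas (chosen only to illustrate an effect near the reported 49.8% maximum):

```python
def matrix_effect_percent(area_matrix, area_solvent):
    """Percent signal enhancement (+) or suppression (-) of a
    matrix-matched standard relative to a solvent standard."""
    return (area_matrix / area_solvent - 1.0) * 100.0

# Hypothetical peak areas for one analyte at one concentration level
print(round(matrix_effect_percent(1498.0, 1000.0), 1))  # 49.8 -> strong enhancement
print(round(matrix_effect_percent(900.0, 1000.0), 1))   # -10.0 -> mild suppression
```

Values within roughly ±20% are often treated as negligible, which matches how the abstract characterizes propiconazole.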
Geochemical and analytical implications of extensive sulfur retention in ash from Indonesian peats
Kane, Jean S.; Neuzil, Sandra G.
1993-01-01
Sulfur is an analyte of considerable importance to the complete major element analysis of ash from low-sulfur, low-ash Indonesian peats. Most analytical schemes for major element peat- and coal-ash analyses, including the inductively coupled plasma atomic emission spectrometry method used in this work, do not permit measurement of sulfur in the ash. As a result, oxide totals cannot be used as a check on accuracy of analysis. Alternative quality control checks verify the accuracy of the cation analyses. Cation and sulfur correlations with percent ash yield suggest that silicon and titanium, and to a lesser extent, aluminum, generally originate as minerals, whereas magnesium and sulfur generally originate from organic matter. Cation correlations with oxide totals indicate that, for these Indonesian peats, magnesium dominates sulfur fixation during ashing because it is considerably more abundant in the ash than calcium, the next most important cation in sulfur fixation.
Vincent, Ursula; Serano, Federica; von Holst, Christoph
2017-08-01
Carotenoids are used in animal nutrition mainly as sensory additives that favourably affect the colour of fish, birds and food of animal origin. Various analytical methods exist for their quantification in compound feed, reflecting the different physico-chemical characteristics of the carotenoid and the corresponding feed additives. They may be natural products or specific formulations containing the target carotenoids produced by chemical synthesis. In this study a multi-analyte method was developed that can be applied to the determination of all 10 carotenoids currently authorised within the European Union for compound feedingstuffs. The method functions regardless of whether the carotenoids have been added to the compound feed via natural products or specific formulations. It comprises three steps: (1) digestion of the feed sample with an enzyme; (2) pressurised liquid extraction; and (3) quantification of the analytes by reversed-phase HPLC coupled to a photodiode array detector in the visible range. The method was single-laboratory validated for poultry and fish feed covering a mass fraction range of the target analyte from 2.5 to 300 mg kg⁻¹. The following method performance characteristics were obtained: the recovery rate varied from 82% to 129%, and precision, expressed as the relative standard deviation of intermediate precision, varied from 1.6% to 15%. Based on the acceptable performance obtained in the validation study, the multi-analyte method is considered fit for the intended purpose.
Statistical Approaches to Assess Biosimilarity from Analytical Data.
Burdick, Richard; Coffey, Todd; Gutka, Hiten; Gratzl, Gyöngyi; Conlon, Hugh D; Huang, Chi-Ting; Boyne, Michael; Kuehne, Henriette
2017-01-01
Protein therapeutics have unique critical quality attributes (CQAs) that define their purity, potency, and safety. The analytical methods used to assess CQAs must be able to distinguish clinically meaningful differences in comparator products, and the most important CQAs should be evaluated with the most statistical rigor. High-risk CQA measurements assess the most important attributes that directly impact the clinical mechanism of action or have known implications for safety, while the moderate- to low-risk characteristics may have a lower direct impact and thereby may have a broader range to establish similarity. Statistical equivalence testing is applied for high-risk CQA measurements to establish the degree of similarity (e.g., highly similar fingerprint, highly similar, or similar) of selected attributes. Notably, some high-risk CQAs (e.g., primary sequence or disulfide bonding) are qualitative (e.g., the same as the originator or not the same) and therefore not amenable to equivalence testing. For biosimilars, an important step is the acquisition of a sufficient number of unique originator drug product lots to measure the variability in the originator drug manufacturing process and provide sufficient statistical power for the analytical data comparisons. Together, these analytical evaluations, along with PK/PD and safety data (immunogenicity), provide the data necessary to determine if the totality of the evidence warrants a designation of biosimilarity and subsequent licensure for marketing in the USA. In this paper, a case study approach is used to provide examples of analytical similarity exercises and the appropriateness of statistical approaches for the example data.
Koskinen, M T; Holopainen, J; Pyörälä, S; Bredbacka, P; Pitkälä, A; Barkema, H W; Bexiga, R; Roberson, J; Sølverød, L; Piccinini, R; Kelton, D; Lehmusto, H; Niskala, S; Salmikivi, L
2009-03-01
Intramammary infection (IMI), also known as mastitis, is the most frequently occurring and economically the most important infectious disease in dairy cattle. This study provides a validation of the analytical specificity and sensitivity of a real-time PCR-based assay that identifies 11 major pathogen species or species groups responsible for IMI, and a gene coding for staphylococcal beta-lactamase production (penicillin resistance). Altogether, 643 culture isolates originating from clinical bovine mastitis, human, and companion animal samples were analyzed using the assay. The isolates represented 83 different species, groups, or families, and originated from 6 countries in Europe and North America. The analytical specificity and sensitivity of the assay were 100% in bacterial and beta-lactamase identification across all isolates originating from bovine mastitis (n = 454). When considering the entire culture collection (including also the isolates originating from human and companion animal samples), 4 Streptococcus pyogenes, 1 Streptococcus salivarius, and 1 Streptococcus sanguis strain of human origin were identified as Streptococcus uberis, and 3 Shigella spp. strains were identified as Escherichia coli, decreasing specificity to 99% for Strep. uberis and to 99.5% for E. coli. These false-positive results were confirmed by sequencing of the 16S rRNA gene. Specificity and sensitivity remained at 100% for all other bacterial targets across the entire culture collection. In conclusion, the real-time PCR assay shows excellent analytical accuracy and holds much promise for use in routine bovine IMI testing programs. This study provides the basis for evaluating the assay's diagnostic performance against the conventional bacterial culture method in clinical field trials using mastitis milk samples.
NASA Technical Reports Server (NTRS)
Mitchell, William S.; Throckmorton, David (Technical Monitor)
2002-01-01
The purpose of this research was to further the understanding of a crack initiation problem in a highly strained pressure containment housing. Finite Element Analysis methods were used to model the behavior of shot peened materials undergoing plastic deformation. Analytical results are in agreement with laboratory tensile tests that simulated the actual housing load conditions. These results further validate the original investigation finding that the shot peened residual stress had reversed, changing from compressive to tensile, and demonstrate that analytical finite element methods can be used to predict this behavior.
Compounds in airborne particulates - Salts and hydrocarbons. [at Cleveland, OH
NASA Technical Reports Server (NTRS)
King, R. B.; Antoine, A. C.; Fordyce, J. S.; Neustadter, H. E.; Leibecki, H. F.
1977-01-01
Concentrations of 10 polycyclic aromatic hydrocarbons (PAH), the aliphatics as a group, sulfate, nitrate, fluoride, acidity, and carbon in airborne particulate matter were measured at 16 sites in Cleveland, OH over a 1-year period during 1971 and 1972. Analytical methods used included gas chromatography, colorimetry, and combustion techniques. Uncertainties in the concentrations associated with the sampling procedures and the analytical methods are evaluated. The data are discussed relative to other studies and source origins. High concentrations of 3,4-benzopyrene downwind of coke ovens are discussed. Hydrocarbon correlation studies indicated no significant relations among the compounds studied.
Masada, Sayaka
2016-07-01
Various herbal medicines have been developed and used in various parts of the world for thousands of years. Although locally grown indigenous plants were originally used for traditional herbal preparations, Western herbal products are now becoming popular in Japan with the increasing interest in health. At the same time, there are growing concerns about the substitution of ingredients and adulteration of herbal products, highlighting the need for the authentication of the origin of plants used in herbal products. This review describes studies on Cimicifuga and Vitex products developed in Europe and Japan, focusing on establishing analytical methods to evaluate the origins of material plants and finished products. These methods include a polymerase chain reaction-restriction fragment length polymorphism method and a multiplex amplification refractory mutation system method. A genome-based authentication method and liquid chromatography-mass spectrometry-based authentication for black cohosh products, and the identification of two characteristic diterpenes of agnus castus fruit and a shrub chaste tree fruit-specific triterpene derivative are also described.
The role of analytical chemistry in Niger Delta petroleum exploration: a review.
Akinlua, Akinsehinwa
2012-06-12
Petroleum and the organic matter from which it is derived are composed of organic compounds together with some trace elements. These compounds give an insight into the origin, thermal maturity and paleoenvironmental history of petroleum, which are essential elements in petroleum exploration. Analytical techniques are the main tools for acquiring these geochemical data. Due to progress in the development of new analytical techniques, many hitherto intractable petroleum exploration problems have been resolved. Analytical chemistry has played a significant role in the development of the petroleum resources of the Niger Delta. Various analytical techniques that have aided the success of petroleum exploration in the Niger Delta are discussed. The analytical techniques that have helped to understand the petroleum system of the basin are also described. Recent and emerging analytical methodologies, including green analytical methods, as applicable to petroleum exploration and particularly the Niger Delta petroleum province, are discussed in this paper. Analytical chemistry is an invaluable tool in finding the Niger Delta oils. Copyright © 2011 Elsevier B.V. All rights reserved.
Reanalysis of a 15-year Archive of IMPROVE Samples
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.; Trzepla, K.
2013-12-01
The IMPROVE (Interagency Monitoring of PROtected Visual Environments) network monitors aerosol concentrations at 170 remote sites throughout the United States. Twenty-four-hour filter samples of particulate matter are collected every third day and analyzed for chemical composition. About 30 of the sites have operated continuously since 1988, and the sustained data record (http://views.cira.colostate.edu/web/) offers a unique window on regional aerosol trends. All elemental analyses have been performed by Crocker Nuclear Laboratory at the University of California in Davis, and sample filters collected since 1995 are archived on campus. The suite of reported elements has remained constant, but the analytical methods employed for their determination have evolved. For example, the elements Na - Mn were determined by PIXE until November 2001, then by XRF analysis in a He-flushed atmosphere through 2004, and by XRF analysis in vacuum since January 2005. In addition to these fundamental changes, incompletely-documented operational factors such as detector performance and calibration details have introduced variations in the measurements. Because the past analytical methods were non-destructive, the archived filters can be re-analyzed with the current analytical systems and protocols. The 15-year sample archives from Great Smoky Mountains, Mount Rainier, and Point Reyes National Parks were selected for reanalysis. The agreement between the new analyses and original determinations varies with element and analytical era (Figure 1). Temporal trends for some elements are affected by these changes in measurement technique while others are not (Figure 2).
Figure 1. Repeatability of analyses for sulfur and vanadium at Great Smoky Mountains National Park. Each point shows the ratio of mass loadings determined by the original analysis and the recent reanalysis. Major method distinctions are indicated at the top.
Figure 2. Trends, based on Theil-Sen regression, in lead concentrations from the original and reanalysis data.
Oliveira, Carolina Dizioli Rodrigues; Okai, Guilherme Gonçalves; da Costa, José Luiz; de Almeida, Rafael Menck; Oliveira-Silva, Diogo; Yonamine, Mauricio
2012-07-01
Ayahuasca is a psychoactive plant beverage originally used by indigenous people throughout the Amazon Basin, long before its modern use by syncretic religious groups established in Brazil, the USA and European countries. The objective of this study was to develop a method for the quantification of dimethyltryptamine and β-carbolines in human plasma samples. The analytes were extracted by means of C18 cartridges and injected into an LC-MS/MS system operated in positive ion mode with multiple reaction monitoring. The LOQs obtained for all analytes were below 0.5 ng/ml. By using weighted least squares linear regression, the accuracy of the analytical method was improved at the lower end of the calibration curve (from 0.5 to 100 ng/ml; r² > 0.98). The method proved to be simple, rapid and useful to estimate administered doses for further pharmacological and toxicological investigations of ayahuasca exposure.
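Weighted least squares calibration of the kind described above typically applies 1/x or 1/x² weights so that low-concentration calibrators, where accuracy matters most, dominate the fit instead of the highest standards. A minimal closed-form sketch with hypothetical calibration data (the weighting scheme and values are illustrative, not taken from the study):

```python
def wls_fit(x, y, weights):
    """Closed-form weighted least squares for y = m*x + c."""
    sw = sum(weights)
    mx = sum(w * xi for w, xi in zip(weights, x)) / sw   # weighted mean of x
    my = sum(w * yi for w, yi in zip(weights, y)) / sw   # weighted mean of y
    m = (sum(w * (xi - mx) * (yi - my) for w, xi, yi in zip(weights, x, y))
         / sum(w * (xi - mx) ** 2 for w, xi in zip(weights, x)))
    c = my - m * mx
    return m, c

conc = [0.5, 1, 5, 10, 50, 100]         # calibration levels, ng/ml
resp = [0.52, 1.0, 5.1, 9.8, 51, 99]    # hypothetical instrument response
w = [1.0 / (xi ** 2) for xi in conc]    # 1/x^2 weighting emphasizes the low end
m, c = wls_fit(conc, resp, w)
print(round(m, 3), round(c, 3))
```

With uniform weights the 50 and 100 ng/ml points would dominate the residuals; the 1/x² weights pull the fitted line toward the sub-ng/ml calibrators instead.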
Calculated and measured fields in superferric wiggler magnets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blum, E.B.; Solomon, L.
1995-02-01
Although Klaus Halbach is widely known and appreciated as the originator of the computer program POISSON for electromagnetic field calculation, Klaus has always believed that analytical methods can give much more insight into the performance of a magnet than numerical simulation. Analytical approximations readily show how the different aspects of a magnet's design, such as pole dimensions, current, and coil configuration, contribute to the performance. These methods yield accuracies of better than 10%. Analytical methods should therefore be used when conceptualizing a magnet design. Computer analysis can then be used for refinement. A simple model is presented for the peak on-axis field of an electromagnetic wiggler with iron poles and superconducting coils. The model is applied to the radiator section of the superconducting wiggler for the BNL Harmonic Generation Free Electron Laser. The predictions of the model are compared to the measured field and the results from POISSON.
Kwon, Yong-Kook; Bong, Yeon-Sik; Lee, Kwang-Sik; Hwang, Geum-Sook
2014-10-15
ICP-MS and ¹H NMR are commonly used to determine the geographical origin of food and crops. In this study, data from multielemental analysis performed by ICP-AES/ICP-MS and metabolomic data obtained from ¹H NMR were integrated to improve the reliability of determining the geographical origin of medicinal herbs. Astragalus membranaceus and Paeonia albiflora with different origins in Korea and China were analysed by ¹H NMR and ICP-AES/ICP-MS, and an integrated multivariate analysis was performed to characterise the differences between their origins. Four classification methods were applied: linear discriminant analysis (LDA), k-nearest neighbour classification (KNN), support vector machines (SVM), and partial least squares-discriminant analysis (PLS-DA). Results were compared using leave-one-out cross-validation and external validation. The integration of multielemental and metabolomic data was more suitable for determining geographical origin than the use of either individual data set alone. The integration of the two analytical techniques allowed diverse environmental factors, such as climate and geology, to be considered. Our study suggests that an appropriate integration of different types of analytical data is useful for determining the geographical origin of food and crops with a high degree of reliability. Copyright © 2014 Elsevier Ltd. All rights reserved.
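As a simplified illustration of the classification workflow above, the sketch below wraps leave-one-out cross-validation around a 1-nearest-neighbour classifier (the simplest form of KNN); the feature vectors and origin labels are hypothetical toy data, not values from the study:

```python
def nn_classify(train_X, train_y, query):
    """1-nearest-neighbour on squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train_X]
    return train_y[dists.index(min(dists))]

def loocv_accuracy(X, y, classify):
    """Leave-one-out cross-validation: hold out each sample in turn,
    train on the rest, and score the held-out prediction."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += classify(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

# Toy 2-feature vectors (e.g. two element concentrations) for two origins
X = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [3.0, 0.5], [3.1, 0.6], [2.9, 0.4]]
y = ["Korea", "Korea", "Korea", "China", "China", "China"]
print(loocv_accuracy(X, y, nn_classify))  # 1.0 on this cleanly separable toy set
```

In the study itself the feature vectors would concatenate the scaled multielemental and NMR-metabolomic variables, which is what "integration" means operationally.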
Moran, Paul; Bromaghin, Jeffrey F.; Masuda, Michele
2014-01-01
Many applications in ecological genetics involve sampling individuals from a mixture of multiple biological populations and subsequently associating those individuals with the populations from which they arose. Analytical methods that assign individuals to their putative population of origin have utility in both basic and applied research, providing information about population-specific life history and habitat use, ecotoxins, pathogen and parasite loads, and many other non-genetic ecological, or phenotypic traits. Although the question is initially directed at the origin of individuals, in most cases the ultimate desire is to investigate the distribution of some trait among populations. Current practice is to assign individuals to a population of origin and study properties of the trait among individuals within population strata as if they constituted independent samples. It seemed that approach might bias population-specific trait inference. In this study we made trait inferences directly through modeling, bypassing individual assignment. We extended a Bayesian model for population mixture analysis to incorporate parameters for the phenotypic trait and compared its performance to that of individual assignment with a minimum probability threshold for assignment. The Bayesian mixture model outperformed individual assignment under some trait inference conditions. However, by discarding individuals whose origins are most uncertain, the individual assignment method provided a less complex analytical technique whose performance may be adequate for some common trait inference problems. Our results provide specific guidance for method selection under various genetic relationships among populations with different trait distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daley, P F
The overall objective of this project is the continued development, installation, and testing of continuous water sampling and analysis technologies for application to on-site monitoring of groundwater treatment systems and remediation sites. In a previous project, an on-line analytical system (OLAS) for multistream water sampling was installed at the Fort Ord Operable Unit 2 Groundwater Treatment System, with the objective of developing a simplified analytical method for detection of Compounds of Concern at that plant, and continuous sampling of up to twelve locations in the treatment system, from raw influent waters to treated effluent. Earlier implementations of the water sampling and processing system (Analytical Sampling and Analysis Platform, A A+RT, Milpitas, CA) depended on off-line integrators that produced paper plots of chromatograms, and sent summary tables to a host computer for archiving. We developed a basic LabVIEW (National Instruments, Inc., Austin, TX) based gas chromatography control and data acquisition system that was the foundation for further development and integration with the ASAP system. Advantages of this integration include electronic archiving of all raw chromatographic data, and a flexible programming environment to support development of improved ASAP operation and automated reporting. The initial goals of integrating the preexisting LabVIEW chromatography control system with the ASAP, and demonstration of a simplified, site-specific analytical method were successfully achieved. However, although the principal objective of this system was assembly of an analytical system that would allow plant operators an up-to-the-minute view of the plant's performance, several obstacles remained. Data reduction with the base LabVIEW system was limited to peak detection and simple tabular output, patterned after commercial chromatography integrators, with compound retention times and peak areas.
Preparation of calibration curves, method detection limit estimates and trend plotting were performed with spreadsheets and statistics software. Moreover, the analytical method developed was very limited in compound coverage, and unable to closely mirror the standard analytical methods promulgated by the EPA. To address these deficiencies, during this award the original equipment was operated at the OU 2-GTS to further evaluate the use of columns, commercial standard blends and other components to broaden the compound coverage of the chromatography system. A second-generation ASAP was designed and built to replace the original system at the OU 2-GTS, and include provision for introduction of internal standard compounds and surrogates into each sample analyzed. An enhanced, LabVIEW based chromatogram analysis application was written, that manages and archives chemical standards information, and provides a basis for NIST traceability for all analyses. Within this same package, all compound calibration response curves are managed, and different report formats were incorporated, that simplify trend analysis. Test results focus on operation of the original system at the OU 1 Integrated Chemical and Flow Monitoring System, at the OU 1 Fire Drill Area remediation site.
Analytical Fingerprint of Wolframite Ore Concentrates.
Gäbler, Hans-Eike; Schink, Wilhelm; Goldmann, Simon; Bahr, Andreas; Gawronski, Timo
2017-07-01
Ongoing violent conflicts in Central Africa are fueled by illegal mining and trading of tantalum, tin, and tungsten ores. The credibility of document-based traceability systems can be improved by an analytical fingerprint applied as an independent method to confirm or doubt the documented origin of ore minerals. Wolframite, (Fe,Mn)WO₄, is the most important ore mineral for tungsten and is subject to artisanal mining in Central Africa. Element concentrations of wolframite grains analyzed by laser ablation-inductively coupled plasma-mass spectrometry are used to establish the analytical fingerprint. The data from ore concentrate samples are multivariate and neither normally nor log-normally distributed. The samples cannot be regarded as representative aliquots of a population. Based on the Kolmogorov-Smirnov distance, a measure of similarity between a sample in question and reference samples from a database is determined. A decision criterion is deduced to recognize samples which do not originate from the declared mine site. © 2017 American Academy of Forensic Sciences.
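The Kolmogorov-Smirnov distance used above is the maximum absolute difference between two empirical cumulative distribution functions, and is well suited to samples that are neither normally nor log-normally distributed. A minimal univariate sketch with hypothetical concentration values (the real fingerprint compares many element distributions at once):

```python
def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    def ecdf(sample, x):
        # fraction of sample values <= x
        return sum(1 for v in sample if v <= x) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Hypothetical element concentrations in a questioned sample vs. a reference
sample = [1.0, 1.2, 1.9, 2.4, 3.1]
reference = [1.1, 1.3, 2.0, 2.5, 3.0]
print(round(ks_distance(sample, reference), 2))  # 0.2 -> small distance, similar
```

A decision criterion then thresholds this distance against the spread of distances observed among known-origin reference samples.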
RP-HPLC determination of water-soluble vitamins in honey.
Ciulu, Marco; Solinas, Silvia; Floris, Ignazio; Panzanelli, Angelo; Pilo, Maria I; Piu, Paola C; Spano, Nadia; Sanna, Gavino
2011-01-15
The assessment and validation of reliable analytical methods for the determination of vitamins in sugar-based matrices (e.g. honey) are still scarcely explored fields of research. This study proposes and fully validates a simple and fast RP-HPLC method for the simultaneous determination of five water-soluble vitamins (vitamin B₂, riboflavin; vitamin B₃, nicotinic acid; vitamin B₅, pantothenic acid; vitamin B₉, folic acid; and vitamin C, ascorbic acid) in honey. The method provides low detection and quantification limits, very good linearity over a large concentration interval, very good precision, and the absence of any bias. It has been successfully applied to 28 honey samples (mainly from Sardinia, Italy) of 12 different botanical origins. While the overall amount of the analytes in the samples is quite low (always below 40 mg kg⁻¹), we have observed a marked dependence of some of their concentrations (i.e. vitamins B₃ and B₅) on the botanical origin of the honey. This insight might lead to important characterization features for this food item. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-09-01
The Halbach-type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
A semi-analytical method for simulating transient contaminant transport originating from the dissolution of multicomponent nonaqueous phase liquid (NAPL) pools in three-dimensional, saturated, homogeneous porous media is presented. Each dissolved component may undergo first-order...
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods in quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)
Akbulut, Songul; Grieken, Renevan; Kılıc, Mehmet A; Cevik, Ugur; Rotondo, Giuliana G
2013-03-01
Soils are complex mixtures of organic and inorganic materials and of metal compounds from anthropogenic sources. In order to identify the pollution sources and their magnitude and development, several X-ray analytical methods were applied in this study. The concentrations of 16 elements were determined in all the soil samples using energy-dispersive X-ray fluorescence spectrometry. Soils of unknown origin were examined by scanning electron microscopy equipped with a Si(Li) X-ray detector, using a Monte Carlo simulation approach. The mineralogical analyses were carried out using X-ray diffraction spectrometry. Because of the correlations between heavy metals and oxide compounds, the samples were also analyzed by electron probe microanalyzer (EPMA) in order to obtain information about their oxide contents. In addition, soil pH and salinity levels were determined, owing to their influence on heavy metal and soil-surface chemistry. Moreover, the geoaccumulation index (Igeo) enables the assessment of contamination by comparing current and pre-industrial concentrations.
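The geoaccumulation index mentioned above has a simple closed form, Igeo = log2(Cn / (1.5·Bn)), where Cn is the measured concentration and Bn the geochemical background value; the factor 1.5 absorbs natural background fluctuations. A minimal sketch (function names are illustrative; the class boundaries follow Müller's standard 0-6 scheme):

```python
import math

def igeo(c_n, b_n):
    """Geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(c_n / (1.5 * b_n))

def igeo_class(value):
    """Mueller's 7-class scheme: 0 (unpolluted) through 6 (extremely polluted)."""
    for cls, bound in enumerate([0, 1, 2, 3, 4, 5]):
        if value <= bound:
            return cls
    return 6

# Example: measured 30 mg/kg against a 10 mg/kg background
print(igeo(30.0, 10.0))              # 1.0
print(igeo_class(igeo(30.0, 10.0)))  # class 1: unpolluted to moderately polluted
```

A sample at exactly the 1.5× background threshold gives Igeo = 0, i.e. class 0 (unpolluted), which is why comparisons against pre-industrial concentrations are built into the index.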
Farabegoli, Federica; Pirini, Maurizio; Rotolo, Magda; Silvi, Marina; Testi, Silvia; Ghidini, Sergio; Zanardi, Emanuela; Remondini, Daniel; Bonaldo, Alessio; Parma, Luca; Badiani, Anna
2018-06-08
The authenticity of fish products has become an imperative issue for authorities involved in protecting consumers against fraudulent practices and in market stabilization. The present study aimed to provide a method for the authentication of European sea bass (Dicentrarchus labrax) according to the requirements for seafood labels (Regulation 1379/2013/EU). Data on biometric traits, fatty acid profile, elemental composition, and isotopic abundance of wild and reared (intensively, semi-intensively and extensively) specimens from 18 Southern European sources (n = 160) were collected and clustered into 6 sets of parameters, then subjected to multivariate analysis. Correct allocation of specimens according to their production method, origin and stocking density was demonstrated with good success rates (94%, 92% and 92%, respectively) using fatty acid profiles. Less satisfactory results were obtained using isotopic abundance, biometric traits, and elemental composition. The multivariate analysis also revealed that extensively reared specimens cannot be analytically discriminated from wild ones.
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain, in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, from which interesting conclusions are drawn. The tests also targeted the accuracy of the results versus the number of nodes on the curved boundary of the cross section. The study required the development of an original software package consisting of more than 1700 lines of computer code. In comparison with other calculus methods, the discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the large numbers produced by the spline approximation, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
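The solid/hollow polygon idea can be sketched for the simplest geometrical characteristic, the area: each solid polygon contributes its shoelace area and each hollow polygon subtracts its own. A minimal illustration (the function names are my own, not taken from the paper's code):

```python
def polygon_area(pts):
    """Signed shoelace area of a simple polygon given as [(x, y), ...]."""
    total = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        total += x0 * y1 - x1 * y0
    return total / 2.0

def domain_area(solids, hollows):
    """Calculus domain = solid polygons minus hollow (hole) polygons."""
    return (sum(abs(polygon_area(p)) for p in solids)
            - sum(abs(polygon_area(p)) for p in hollows))

square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # 4x4 solid region
hole = [(1, 1), (3, 1), (3, 3), (1, 3)]     # 2x2 hollow region
print(domain_area([square], [hole]))         # 16 - 4 = 12.0
```

Higher-order characteristics (first moments, moments of inertia) follow the same pattern, with each polygon's contribution computed edge by edge and hollow polygons entering with a negative sign.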
Schweitzer, Mary Higby; Schroeter, Elena R; Goshe, Michael B
2014-07-15
Advances in resolution and sensitivity of analytical techniques have provided novel applications, including the analyses of fossil material. However, the recovery of original proteinaceous components from very old fossil samples (defined as >1 million years (1 Ma) from previously named limits in the literature) is far from trivial. Here, we discuss the challenges to recovery of proteinaceous components from fossils, and the need for new sample preparation techniques, analytical methods, and bioinformatics to optimize and fully utilize the great potential of information locked in the fossil record. We present evidence for survival of original components across geological time, and discuss the potential benefits of recovery, analyses, and interpretation of fossil materials older than 1 Ma, both within and outside of the fields of evolutionary biology.
Chemical data as markers of the geographical origins of sugarcane spirits.
Serafim, F A T; Pereira-Filho, Edenir R; Franco, D W
2016-04-01
In an attempt to classify sugarcane spirits according to their geographic region of origin, chemical data for 24 analytes were evaluated in 50 cachaças produced using a similar procedure in selected regions of Brazil: São Paulo - SP (15), Minas Gerais - MG (11), Rio de Janeiro - RJ (11), Paraíba - PB (9), and Ceará - CE (4). Multivariate analysis was applied to the analytical results, and the predictive abilities of different classification methods were evaluated. Principal component analysis identified five groups, and chemical similarities were observed between MG and SP samples and between RJ and PB samples. CE samples presented a distinct chemical profile. Among the methods, partial least squares discriminant analysis (PLS-DA) classified 50.2% of the samples correctly, K-nearest neighbors (KNN) 86%, and soft independent modeling of class analogy (SIMCA) 56.2%. Therefore, in this proof-of-concept demonstration, the proposed approach based on chemical data satisfactorily predicted the cachaças' geographic origins. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analytical methods used for the authentication of food of animal origin.
Abbas, Ouissam; Zadravec, Manuela; Baeten, Vincent; Mikuš, Tomislav; Lešić, Tina; Vulić, Ana; Prpić, Jelena; Jemeršić, Lorena; Pleadin, Jelka
2018-04-25
Adulteration can have serious consequences for human health and affects market growth by destroying consumer confidence. Therefore, authentication of food is important for food processors, retailers and consumers, but also for regulatory authorities. However, the complex nature of food and an increase in the types of adulterants make their detection difficult, so food authentication often poses a challenge. This review focuses on analytical approaches to the authentication of food of animal origin, with an emphasis on the determination of specific ingredients, geographical origin, and adulteration by substitution. It provides a current overview of the application of targeted approaches, for cases where the compound of interest is known, and non-targeted approaches for screening purposes. Papers cited herein mainly concern milk, cheese, meat and honey. Moreover, the advantages, disadvantages and challenges regarding the use of both approaches in official food control as well as in the food industry are investigated. Copyright © 2017 Elsevier Ltd. All rights reserved.
Durante, Caterina; Baschieri, Carlo; Bertacchini, Lucia; Bertelli, Davide; Cocchi, Marina; Marchetti, Andrea; Manzini, Daniela; Papotti, Giulia; Sighinolfi, Simona
2015-04-15
Geographical origin and authenticity of food are topics of interest for both consumers and producers. Among the different indicators used for traceability studies, (87)Sr/(86)Sr isotopic ratio has provided excellent results. In this study, two analytical approaches for wine sample pre-treatment, microwave and low temperature mineralisation, were investigated to develop accurate and precise analytical method for (87)Sr/(86)Sr determination. The two procedures led to comparable results (paired t-test, with t
Bichon, E; Guiffard, I; Vénisseau, A; Lesquin, E; Vaccher, V; Brosseaud, A; Marchand, P; Le Bizec, B
2016-08-12
A gas chromatography tandem mass spectrometry method using atmospheric pressure chemical ionisation was developed for the monitoring of 16 brominated flame retardants (7 routinely monitored polybromodiphenyl ethers (PBDEs), BDE #209, and 8 additional emerging and novel BFRs) in food and feed of animal origin. The developed analytical method decreased the run time by a factor of three compared to conventional strategies, using a 2.5 m column (5% phenyl stationary phase, 0.1 mm i.d., 0.1 μm film thickness) and a pulsed split injection (1:5) with a helium carrier gas flow rate of 0.48 mL min(-1), in a single 20-min run. For most BFRs, analytical data were compared with the current analytical strategy relying on GC/EI/HRMS (double sector, R=10000 at 10% valley). Performance in terms of sensitivity was found to meet the Commission recommendation (118/2014/EC) for nBFRs. GC/APCI/MS/MS represents a promising alternative for multi-BFR analysis in complex matrices, in that it allows the monitoring of a wider list of contaminants in a single injection and a shorter run time. Copyright © 2016 Elsevier B.V. All rights reserved.
Matuszak, Małgorzata; Minorczyk, Maria; Góralczyk, Katarzyna; Hernik, Agnieszka; Struciński, Paweł; Liszewska, Monika; Czaja, Katarzyna; Korcz, Wojciech; Łyczewska, Monika; Ludwicki, Jan K
2016-01-01
Polybrominated diphenyl ethers (PBDEs), like other persistent organic pollutants such as polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs), pose a significant hazard to human health, mainly due to interference with the endocrine system and carcinogenic effects. Humans are exposed to these substances mainly through food of animal origin. These pollutants are detected globally in human matrices, which calls for a reliable and simple analytical method that would enable further studies to assess the exposure of specific human populations to these compounds. The purpose of this study was to modify and validate the analytical procedure for the simultaneous determination of selected PBDEs, PCBs and OCPs in human blood serum samples. The analytical measurement was performed by GC-µECD following preparation of the serum samples (denaturation, multiple extraction, lipid removal). The identity of the compounds was confirmed by GC-MS. The method was characterised by appropriate linearity and good repeatability (CV below 20%). The recoveries ranged from 52.9 to 125.0%, depending on the compound and the level of fortification. The limit of quantification was set at 0.03 ng mL(-1) of serum. The modified analytical method proved to be suitable for the simultaneous determination of selected PBDEs, PCBs and OCPs in human blood serum by GC-µECD with good precision.
Grundy, H H; Reece, P; Buckley, M; Solazzo, C M; Dowle, A A; Ashford, D; Charlton, A J; Wadsley, M K; Collins, M J
2016-01-01
Gelatine is a component of a wide range of foods. It is manufactured as a by-product of the meat industry from bone and hide, mainly from bovine and porcine sources. Accurate food labelling enables consumers to make informed decisions about the food they buy. Since labelling currently relies heavily on due diligence involving a paper trail, there could be benefits in developing a reliable test method for the consumer industries in terms of the species origin of gelatine. We present a method to determine the species origin of gelatines by peptide mass spectrometry methods. An evaluative comparison is also made with ELISA and PCR technologies. Commercial gelatines were found to contain undeclared species. Furthermore, undeclared bovine peptides were observed in commercial injection matrices. This analytical method could therefore support the food industry in terms of determining the species authenticity of gelatine in foods. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Current Protocols in Pharmacology
2016-01-01
Determination of drug or drug metabolite concentrations in biological samples, particularly in serum or plasma, is fundamental to describing the relationships among the administered dose, route of administration, time after dose, the drug concentrations achieved, and the observed effects of the drug. A well-characterized, accurate analytical method is needed, but it must also be established that the analyte concentration in the sample at the time of analysis is the same as the concentration at sample acquisition. Drugs and metabolites may be susceptible to degradation in samples due to metabolism or to physical and chemical processes, resulting in a lower measured concentration than was present in the original sample. Careful examination of analyte stability during processing and storage, and adjustment of procedures and conditions to maximize that stability, are a critical part of method validation and can ensure the accuracy of the measured concentrations. PMID:27960029
Simultaneous determination of three herbicides by differential pulse voltammetry and chemometrics.
Ni, Yongnian; Wang, Lin; Kokot, Serge
2011-01-01
A novel differential pulse voltammetry (DPV) method was developed for the simultaneous determination of pendimethalin, dinoseb and sodium 5-nitroguaiacolate (5NG) with the aid of chemometrics. The voltammograms of these three compounds overlapped significantly, and to facilitate the simultaneous determination of the three analytes, chemometrics methods were applied. These included classical least squares (CLS), principal component regression (PCR), partial least squares (PLS) and radial basis function artificial neural networks (RBF-ANN). A separately prepared verification data set was used to confirm the calibrations, which were built from the original and first-derivative data matrices of the voltammograms. On the basis of the relative prediction errors and recoveries of the analytes, the RBF-ANN and DPLS (D - first derivative spectra) models performed best and are particularly recommended for application. The DPLS calibration model was applied satisfactorily for the prediction of the three analytes in market vegetables and lake water samples.
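Of the chemometric calibrations listed, classical least squares (CLS) is the simplest to sketch: with known unit-concentration responses as the columns of a matrix K, a mixture voltammogram d ≈ K·c is inverted for the concentration vector c by linear least squares. A toy illustration with synthetic, deliberately overlapping Gaussian peaks (all peak positions and concentrations are made up, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
potentials = np.linspace(-1.0, 0.0, 200)

def peak(center, width):
    """Gaussian stand-in for one analyte's unit-concentration voltammogram."""
    return np.exp(-((potentials - center) ** 2) / (2 * width ** 2))

# Three heavily overlapping responses, one column per analyte
K = np.column_stack([peak(-0.6, 0.08), peak(-0.5, 0.08), peak(-0.4, 0.08)])
c_true = np.array([2.0, 1.0, 3.0])
d = K @ c_true + 0.001 * rng.standard_normal(potentials.size)  # noisy mixture

# CLS: recover the concentrations by linear least squares
c_hat, *_ = np.linalg.lstsq(K, d, rcond=None)
print(np.round(c_hat, 2))  # ≈ [2. 1. 3.]
```

CLS requires all contributing species to be modeled in K; PCR and PLS relax that requirement, which is one reason the derivative-PLS model won out in the study.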
ERIC Educational Resources Information Center
Rosu, Cornelia; Cueto, Rafael; Veillion, Lucas; David, Connie; Laine, Roger A.; Russo, Paul S.
2017-01-01
Volatile compounds from polymeric materials such as weatherstripping were identified by solid-phase microextraction (SPME), a solvent-free analytical method, coupled to gas chromatography-mass spectrometry (GC-MS). These compounds, originating from additives and fillers used in weatherstripping processing, were mostly polycyclic aromatic…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, P.B.; Yatabe, M.
1987-01-01
In this report the Nuclear Criticality Safety Analytical Methods Resource Center describes a new interactive version of CESAR, a critical experiments storage and retrieval program available on the Nuclear Criticality Information System (NCIS) database at Lawrence Livermore National Laboratory. The original version of CESAR did not include interactive search capabilities. The CESAR database was developed to provide a convenient, readily accessible means of storing and retrieving code input data for the SCALE Criticality Safety Analytical Sequences and the codes comprising those sequences. The database includes data for both cross section preparation and criticality safety calculations. 3 refs., 1 tab.
Sonic horizon formation for oscillating Bose-Einstein condensates in isotropic harmonic potential
Wang, Ying; Zhou, Yu; Zhou, Shuyu
2016-01-01
We study sonic horizon phenomena in oscillating Bose-Einstein condensates in an isotropic harmonic potential. Based on the Gross-Pitaevskii equation model and the variational method, we derive original analytical formulae for the criteria and lifetime of sonic horizon formation, demonstrating pictorially the dependence of the occurrence of the sonic horizon on the interaction parameter and the damping effect of the system distribution width. Our analytical results corroborate quantitatively the particular features of the sonic horizon reported in a previous numerical study. PMID:27922129
Nuclear and analytical methods for investigation of high quality wines
NASA Astrophysics Data System (ADS)
Tonev, D.; Geleva, E.; Grigorov, T.; Goutev, N.; Protohristov, H.; Stoyanov, Ch; Bashev, V.; Tringovska, I.; Kostova, D.
2018-05-01
Nuclear and analytical methods can help to determine the year of production (vintage) and the geographical provenance of high quality wines. A complex analytical investigation of Melnik fine wines from the "Artarkata" vineyards, Vinogradi village near Melnik in Southwest Bulgaria, was performed using different methods and equipment. Nuclear methods, based on the measured gamma-ray activity of 137Cs and the specific activity of 3H, can be used to determine the year of wine production. The specific activity of 137Cs was measured in wines from different vintages using low-background high-resolution gamma spectrometry. Tritium measurements in wine samples were carried out using low-level liquid scintillation counting in a Packard Tri-Carb 2770 TR/SL liquid scintillation analyzer. The identification of the origin of wines using their chemical fingerprints is of great interest to wine consumers and producers. The concentrations of 16 chemical elements were determined in samples of soil, vine stems, vine leaves and fine wine of the Shiroka Melnishka variety, grown in a typical Melnik vineyard, using Inductively Coupled Plasma-Optical Emission Spectrometry (ICP-OES).
Selection and authentication of botanical materials for the development of analytical methods.
Applequist, Wendy L; Miller, James S
2013-05-01
Herbal products, for example botanical dietary supplements, are widely used. Analytical methods are needed to ensure that botanical ingredients used in commercial products are correctly identified and that research materials are of adequate quality and are sufficiently characterized to enable research to be interpreted and replicated. Adulteration of botanical material in commerce is common for some species. The development of analytical methods for specific botanicals, and accurate reporting of research results, depend critically on correct identification of test materials. Conscious efforts must therefore be made to ensure that the botanical identity of test materials is rigorously confirmed and documented through preservation of vouchers, and that their geographic origin and handling are appropriate. Use of material with an associated herbarium voucher that can be botanically identified is always ideal. Indirect methods of authenticating bulk material in commerce, for example use of organoleptic, anatomical, chemical, or molecular characteristics, are not always acceptable for the chemist's purposes. Familiarity with botanical and pharmacognostic literature is necessary to determine what potential adulterants exist and how they may be distinguished.
Bending of an Infinite beam on a base with two parameters in the absence of a part of the base
NASA Astrophysics Data System (ADS)
Aleksandrovskiy, Maxim; Zaharova, Lidiya
2018-03-01
Currently, in connection with the rapid development of high-rise construction and the improvement of models of the joint operation of high-rise structures and their bases, questions connected with the use of various calculation methods have become topical. The rigor of analytical methods allows a more detailed and accurate characterization of structural behavior, which affects the reliability of structures and can lead to a reduction in their cost. In this article, a two-parameter model is used as the computational model of the base; it can effectively take into account the distributive properties of the base by varying the coefficient reflecting the shear parameter. The paper constructs an effective analytical solution of the problem of a beam of infinite length interacting with a two-parameter base from which a section is absent. Using Fourier integral transforms, the original differential equation is reduced to a Fredholm integral equation of the second kind with a degenerate kernel, and all the integrals are evaluated analytically and explicitly, which increases the accuracy of the computations in comparison with approximate methods. The paper considers the solution of the problem of a beam loaded with a concentrated force applied at the origin, with a fixed value of the length of the gap section. The paper analyses the results obtained for various values of the coefficient taking into account the cohesion of the ground.
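For context, the governing equation of an infinite beam on a two-parameter (Pasternak-type) base is commonly written in the following standard form (the paper's exact notation may differ):

```latex
EI\,\frac{d^{4} w}{dx^{4}} \;-\; k_{2}\,\frac{d^{2} w}{dx^{2}} \;+\; k_{1}\,w \;=\; q(x)
```

Here w(x) is the deflection, EI the bending stiffness, k_1 the Winkler (subgrade) coefficient, k_2 the second, shear-type parameter that gives the base its distributive properties, and q(x) the applied load. Over the section where the base is absent, the two base terms drop out, which is what couples the two regions and leads to the integral-equation formulation described in the abstract.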
An improved method to measure nitrate/nitrite with an NO-selective electrochemical sensor
Boo, Yong Chool; Tressel, Sarah L.; Jo, Hanjoong
2007-01-01
Nitric oxide produced by nitric oxide synthase(s) is an important cell signaling molecule in physiology and pathophysiology. In the present study, we describe a very sensitive and convenient analytical method to measure NOx (nitrite plus nitrate) in culture media, employing an ultra-sensitive nitric oxide-selective electrochemical sensor that has recently become commercially available. An aliquot of conditioned culture media was first treated with nitrate reductase/NADPH/glucose-6-phosphate dehydrogenase/glucose-6-phosphate to convert nitrate to nitrite quantitatively. The nitrite (that originally present plus the reduced nitrate) was then reduced to equimolar NO in an acidic iodide bath while the NO was being detected by the sensor. This analytical method appears to be very useful for assessing basal and stimulated NO release from cultured cells. PMID:17056288
Evaluating Trends in Historical PM2.5 Element Concentrations by Reanalyzing a 15-Year Sample Archive
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.; Trzepla, K.
2014-12-01
The IMPROVE (Interagency Monitoring of PROtected Visual Environments) network monitors aerosol concentrations at 170 remote sites throughout the United States. Twenty-four-hour filter samples of particulate matter are collected every third day and analyzed for chemical composition. About 30 of the sites have operated continuously since 1988, and the sustained data record (http://views.cira.colostate.edu/web/) offers a unique window on regional aerosol trends. All elemental analyses have been performed by Crocker Nuclear Laboratory at the University of California in Davis, and sample filters collected since 1995 are archived on campus. The suite of reported elements has remained constant, but the analytical methods employed for their determination have evolved. For example, the elements Na - Mn were determined by PIXE until November 2001, then by XRF analysis in a He-flushed atmosphere through 2004, and by XRF analysis in vacuum since January 2005. In addition to these fundamental changes, incompletely-documented operational factors such as detector performance and calibration details have introduced variations in the measurements. Because the past analytical methods were non-destructive, the archived filters can be re-analyzed with the current analytical systems and protocols. The 15-year sample archives from Great Smoky Mountains (GRSM), Mount Rainier (MORA), and Point Reyes National Parks (PORE) were selected for reanalysis. The agreement between the new analyses and original determinations varies with element and analytical era. The graph below compares the trend estimates for all the elements measured by IMPROVE based on the original and repeat analyses; the elements identified in color are measured above the detection limit more than 90% of the time. The trend estimates are sensitive to the treatment of non-detect data. The original and reanalysis trends are indistinguishable (have overlapping confidence intervals) for most of the well-detected elements.
NASA Astrophysics Data System (ADS)
Carraro, F.; Valiani, A.; Caleffi, V.
2018-03-01
Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
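The cost the A-DOT implementation removes is the repeated numerical eigendecomposition of the flux matrix at cell edges. As a much-simplified illustration (the fixed-bed 1-D shallow-water limit, a 2×2 system rather than the full 3×3 dSVE eigenstructure with its cubic characteristic polynomial), the numerical and analytical eigenvalues can be compared directly:

```python
import numpy as np

# Fixed-bed 1-D shallow-water flux Jacobian in conserved variables (h, hu):
#   A = [[0, 1], [g*h - u**2, 2*u]],  with eigenvalues u +/- sqrt(g*h)
g, h, u = 9.81, 2.0, 1.0
A = np.array([[0.0, 1.0],
              [g * h - u ** 2, 2.0 * u]])

numeric = np.sort(np.linalg.eigvals(A).real)          # generic LAPACK call
analytic = np.sort([u - np.sqrt(g * h), u + np.sqrt(g * h)])  # closed form
print(numeric, analytic)
```

In a finite-volume sweep the generic call is made several times per edge per time step; replacing it with the closed form, as A-DOT does for the full morphodynamic system, is where the reported CPU-time gain comes from.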
Ha, Jing; Song, Ge; Ai, Lian-Feng; Li, Jian-Chen
2016-04-01
A new method using solid phase extraction (SPE) combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS) has been developed for the determination of residues of six polyether antibiotics, including lasalocid, salinomycin, monensin, narasin, maduramicin and nigericin, in foods of animal origin. The samples were extracted with acetonitrile and purified on ENVI-Carb SPE columns after comparing the impurity effect and ease of handling of several SPE cartridges. Subsequently, the analytes were separated on a Hypersil Gold column (2.1×150 mm, 5 μm) and analyzed by MS/MS detection. The limit of quantitation (LOQ) was 0.4 μg/kg for milk and chicken, and 1 μg/kg for chicken livers and eggs. The linearity was satisfactory, with a correlation coefficient of >0.9995 at concentrations ranging from 2 to 100 μg/L. The average recoveries of the analytes fortified at three levels ranged from 68.2 to 114.3%, and the relative standard deviations ranged from 4.5 to 12.1%. The method is suitable for quantitative analysis and confirmation of polyether antibiotic residues in foods of animal origin. Copyright © 2016 Elsevier B.V. All rights reserved.
Kaplan-Sandquist, Kimberly; LeBeau, Marc A; Miller, Mark L
2014-02-01
Chemical analysis of latent fingermarks, "touch chemistry," has the potential to provide intelligence or forensically relevant information. Matrix-assisted laser desorption ionization/time-of-flight mass spectrometry (MALDI/TOF MS) was used as an analytical platform for obtaining mass spectra and chemical images of target drugs and explosives in fingermark residues following conventional fingerprint development methods and MALDI matrix processing. There were two main purposes of this research: (1) to develop effective laboratory methods for detecting drugs and explosives in fingermark residues, and (2) to determine the feasibility of detecting drugs and explosives after casual contact with pills, powders, and residues. Further, synthetic latent print reference pads were evaluated as mimics of natural fingermark residue to determine if the pads could be used for method development and quality control. The results suggest that artificial amino acid and sebaceous oil residue pads cannot adequately simulate natural fingermark chemistry for MALDI/TOF MS analysis. However, the pads were useful for designing experiments and setting instrumental parameters. Based on the natural fingermark residue experiments, handling whole or broken pills did not transfer sufficient quantities of drugs to allow for definitive detection. Transferring drugs or explosives in the form of powders and residues was successful for preparing analytes for detection after contact with fingers and deposition of fingermark residue. One drawback of handling powders was that the analyte particles were easily spread beyond the original fingermark during development. Analyte particles were confined within the original fingermark when using transfer residues. The MALDI/TOF MS was able to detect procaine, pseudoephedrine, TNT, and RDX from contact residue under laboratory conditions with the integration of conventional fingerprint development methods and MALDI matrix. MALDI/TOF MS is a nondestructive technique which provides chemical information in both the mass spectra and the chemical images. Published by Elsevier Ireland Ltd.
Truzzi, Cristina; Annibaldi, Anna; Illuminati, Silvia; Finale, Carolina; Scarponi, Giuseppe
2014-05-01
The study compares official spectrophotometric methods for the determination of proline content in honey - those of the International Honey Commission (IHC) and the Association of Official Analytical Chemists (AOAC) - with the original Ough method. Results show that the extra time-consuming treatment stages added by the IHC method with respect to the Ough method are unnecessary. We demonstrate that the AOAC method proves to be the best in terms of accuracy and time saving. The optimized waiting time for the absorbance reading is set at 35 min from the removal of the reaction tubes from the boiling bath used in the sample treatment. The optimized method was validated in the matrix: linearity up to 1800 mg L(-1), limit of detection 20 mg L(-1), limit of quantification 61 mg L(-1). The method was applied to 43 unifloral honey samples from the Marche region, Italy. Copyright © 2013 Elsevier Ltd. All rights reserved.
Jakóbik-Kolon, Agata; Milewski, Andrzej; Dydo, Piotr; Witczak, Magdalena; Bok-Badura, Joanna
2018-02-23
A fast and simple method for total chlorine determination in polyglycerols using low-resolution inductively coupled plasma mass spectrometry (ICP-MS), without the need for additional equipment and time-consuming sample decomposition, was evaluated. A linear calibration curve for the 35Cl isotope in the concentration range 20-800 µg/L was observed. The limits of detection and quantification were 15 µg/L and 44 µg/L, respectively. This corresponds to the ability to detect 3 µg/g and quantify 9 µg/g of chlorine in polyglycerol under the studied conditions (0.5% matrix: polyglycerol samples diluted or dissolved in water to an overall concentration of 0.5%). Matrix effects as well as the effect of chlorine origin have been evaluated. The presence of 0.5% (m/m) of matrix species similar to polyglycerol (polyethylene glycol, PEG) did not influence the chlorine determination for PEGs with average molecular weights (MW) up to 2000 Da. Good precision and accuracy of the chlorine content determination were achieved regardless of its origin (inorganic/organic). A high analyte recovery level and low relative standard deviation values were observed for real polyglycerol samples spiked with chloride. Additionally, a Combustion Ion Chromatography System was used as a reference method. The results confirmed the high accuracy and precision of the tested method.
NASA Technical Reports Server (NTRS)
Carleton, O.
1972-01-01
Consideration is given specifically to sixth order elliptic partial differential equations in two independent real variables x, y such that the coefficients of the highest order terms are real constants. It is assumed that the differential operator has distinct characteristics and that it can be factored as a product of second order operators. By analytically continuing into the complex domain and using the complex characteristic coordinates of the differential equation, it is shown that its solutions, u, may be reflected across analytic arcs on which u satisfies certain analytic boundary conditions. Moreover, a method is given whereby one can determine a region into which the solution is extensible. It is seen that this region of reflection is dependent on the original domain of definition of the solution, the arc and the coefficients of the highest order terms of the equation and not on any sufficiently small quantities; i.e., the reflection is global in nature. The method employed may be applied to similar differential equations of order 2n.
What's Going on in This Picture? Visual Thinking Strategies and Adult Learning
ERIC Educational Resources Information Center
Landorf, Hilary
2006-01-01
The Visual Thinking Strategies (VTS) curriculum and teaching method uses art to help students think critically, listen attentively, communicate, and collaborate. VTS has been proven to enhance reading, writing, comprehension, and creative and analytical skills among students of all ages. The origins and procedures of the VTS curriculum are…
Distributed Parameter Analysis of Pressure and Flow Disturbances in Rocket Propellant Feed Systems
NASA Technical Reports Server (NTRS)
Dorsch, Robert G.; Wood, Don J.; Lightner, Charlene
1966-01-01
A digital distributed parameter model for computing the dynamic response of propellant feed systems is formulated. The analytical approach used is an application of the wave-plan method of analyzing unsteady flow. Nonlinear effects are included. The model takes into account locally high compliances at the pump inlet and at the injector dome region. Examples of the calculated transient and steady-state periodic responses of a simple hypothetical propellant feed system to several types of disturbances are presented. Included are flow disturbances originating from longitudinal structural motion, gimbaling, throttling, and combustion-chamber coupling. The analytical method can be employed for analyzing developmental hardware and offers a flexible tool for the calculation of unsteady flow in these systems.
Portal scatter to primary dose ratio of 4 to 18 MV photon spectra incident on heterogeneous phantoms
NASA Astrophysics Data System (ADS)
Ozard, Siobhan R.
Electronic portal imagers designed and used to verify the positioning of a cancer patient undergoing radiation treatment can also be employed to measure the in vivo dose received by the patient. This thesis investigates the ratio of the dose from patient-scattered particles to the dose from primary (unscattered) photons at the imaging plane, called the scatter to primary dose ratio (SPR). The composition of the SPR according to the origin of scatter is analyzed more thoroughly than in previous studies. A new analytical method for calculating the SPR is developed and experimentally verified for heterogeneous phantoms. A novel technique that applies the analytical SPR method for in vivo dosimetry with a portal imager is evaluated. Monte Carlo simulation was used to determine the imager dose from patient-generated electrons and photons that scatter one or more times within the object. The database of SPRs reported from this investigation is new, since the contribution from patient-generated electrons was neglected by previous Monte Carlo studies. The SPR from patient-generated electrons was found here to be as large as 0.03. The analytical SPR method relies on the established result that the scatter dose is uniform for an air gap between the patient and the imager that is greater than 50 cm. This method also applies the hypothesis that first-order Compton scatter alone is sufficient for scatter estimation. A comparison of analytical and measured SPRs for neck, thorax, and pelvis phantoms showed that the maximum difference was within +/-0.03, and the mean difference was less than +/-0.01 for most cases. This accuracy was comparable to similar analytical approaches that are limited to homogeneous phantoms. The analytical SPR method could replace lookup tables of measured scatter doses that can require significant time to measure. In vivo doses were calculated by combining our analytical SPR method and the convolution/superposition algorithm.
Our calculated in vivo doses agreed within +/-3% with the doses measured in the phantom. The present in vivo method was faster compared to other techniques that use convolution/superposition. Our method is a feasible and satisfactory approach that contributes to on-line patient dose monitoring.
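As a rough illustration of how an SPR estimate enters a portal-dose prediction, the imager reading can be modeled as the attenuated primary dose scaled by (1 + SPR). All numbers below are invented for the sketch; the thesis's phantom geometries and coefficients are not reproduced here:

```python
import math

mu = 0.005       # effective linear attenuation coefficient, 1/mm (assumed value)
t = 200.0        # radiological thickness along the ray, mm (assumed value)
d_open = 1.00    # imager dose for the open field, arbitrary units

primary = d_open * math.exp(-mu * t)   # attenuated primary component
spr = 0.25                             # scatter-to-primary ratio from the model (assumed)
total = primary * (1.0 + spr)          # predicted portal-imager reading
print(f"primary = {primary:.3f}, predicted imager dose = {total:.3f}")
```

Comparing such a prediction against the measured imager dose is the basic idea behind using the SPR for in vivo dose verification.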
Beyond single-stream with the Schrödinger method
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael
2016-10-01
We investigate large scale structure formation of collisionless dark matter in the phase space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow and Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
Rahman, Md Musfiqur; Abd El-Aty, A M; Na, Tae-Woong; Park, Joon-Seong; Kabir, Md Humayun; Chung, Hyung Suk; Lee, Han Sol; Shin, Ho-Chul; Shim, Jae-Han
2017-08-15
A simultaneous analytical method was developed for the determination of methiocarb and its metabolites, methiocarb sulfoxide and methiocarb sulfone, in five livestock products (chicken, pork, beef, table egg, and milk) using liquid chromatography-tandem mass spectrometry. Due to the rapid degradation of methiocarb and its metabolites, a quick sample preparation method was developed using acetonitrile and salts followed by purification via dispersive solid-phase extraction (d-SPE). Seven-point calibration curves were constructed separately in each matrix, and good linearity was observed in each matrix-matched calibration curve with a coefficient of determination (R2) ≥ 0.991. The limits of detection and quantification were 0.0016 and 0.005 mg/kg, respectively, for all tested analytes in the various matrices. The method was validated in triplicate at three fortification levels (equivalent to 1, 2, and 10 times the limit of quantification) with a recovery rate ranging from 76.4% to 118.0% and a relative standard deviation ≤ 10.0%. The developed method was successfully applied to market samples, and no residues of methiocarb and/or its metabolites were observed in the tested samples. In sum, this method can be applied for the routine analysis of methiocarb and its metabolites in foods of animal origin. Copyright © 2017 Elsevier B.V. All rights reserved.
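Recovery and RSD figures like those reported are computed from replicate analyses of spiked blanks; a minimal sketch with hypothetical triplicate values (not the study's data):

```python
import statistics

def recovery_and_rsd(measured, spiked_level):
    """Mean recovery (%) and relative standard deviation (%) for spiked replicates."""
    recoveries = [100.0 * m / spiked_level for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Hypothetical triplicate results (mg/kg) at a 0.005 mg/kg fortification (1x LOQ).
rec, rsd = recovery_and_rsd([0.0046, 0.0049, 0.0044], 0.005)
print(f"recovery = {rec:.1f}%, RSD = {rsd:.1f}%")
```

A result in the 76.4-118.0% recovery window with RSD ≤ 10% would meet the acceptance criteria described in the abstract.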
A modeling approach to compare ΣPCB concentrations between congener-specific analyses
Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.
2017-01-01
Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time.
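The conversion idea can be sketched with ordinary least squares: regress ΣPCB from one congener set on ΣPCB from the other for samples measured by both methods, then apply the fit. The numbers below are invented for illustration and are not the paper's data or coefficients:

```python
import statistics

# Paired hypothetical measurements (ng/g) of the same samples by two methods.
sigma119 = [12.0, 30.5, 55.0, 80.2, 120.0]   # low-res method (119 congeners)
sigma209 = [13.1, 32.8, 59.0, 86.0, 129.5]   # high-res method (209 congeners)

mx, my = statistics.mean(sigma119), statistics.mean(sigma209)
slope = sum((x - mx) * (y - my) for x, y in zip(sigma119, sigma209)) / \
        sum((x - mx) ** 2 for x in sigma119)
intercept = my - slope * mx

def convert(s119):
    """Estimate a Sigma-209 PCB concentration from a Sigma-119 PCB measurement."""
    return slope * s119 + intercept

# Rough proportional contribution of the 119-congener set to total PCB:
fraction = mx / my
print(f"slope = {slope:.3f}, 119-congener fraction of total ≈ {fraction:.2%}")
```

With real paired data this is exactly application (b) from the abstract: the mean ratio (here ~93% in the paper's example) estimates how much of Σ209PCB the smaller congener set captures.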
Random walk in degree space and the time-dependent Watts-Strogatz model
NASA Astrophysics Data System (ADS)
Casa Grande, H. L.; Cotacallapa, M.; Hase, M. O.
2017-01-01
In this work, we propose a scheme that provides an analytical estimate for the time-dependent degree distribution of some networks. This scheme maps the problem into a random walk in degree space, and then we choose the paths that are responsible for the dominant contributions. The method is illustrated on the dynamical versions of the Erdős-Rényi and Watts-Strogatz graphs, which were introduced as static models in the original formulation. We have succeeded in obtaining an analytical form for the dynamical Watts-Strogatz model, which is asymptotically exact in some regimes.
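A toy simulation of the underlying picture, assuming a ring lattice whose edges are rewired one endpoint at a time (my simplification, not the authors' exact dynamics), shows the degree distribution spreading away from its initial delta at K as the random walk in degree space proceeds:

```python
import random
from collections import Counter

random.seed(1)
N, K = 200, 4                      # nodes; each initially linked to K nearest neighbors
edges = {(i, (i + j) % N) for i in range(N) for j in range(1, K // 2 + 1)}

def degree_counts(edges):
    """Histogram of the degree distribution: degree -> number of nodes."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return Counter(deg.values())

for t in range(2000):              # each rewiring is one step of the dynamics
    a, b = random.choice(sorted(edges))
    c = random.randrange(N)
    # Rewire (a, b) -> (a, c): b takes a -1 step and c a +1 step in degree space.
    if c not in (a, b) and (a, c) not in edges and (c, a) not in edges:
        edges.remove((a, b))
        edges.add((a, c))

hist = degree_counts(edges)
print(hist)                        # degrees now spread around the original value K
```

The histogram broadening over time is the empirical counterpart of the time-dependent degree distribution the paper estimates analytically.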
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
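The two styles of sensitivity calculation can be contrasted on a deliberately trivial 1-DOF static problem (a stand-in for illustration, not the paper's transient finite-element formulation):

```python
# Toy comparison of the two sensitivity-calculation styles discussed above,
# on u(k) = F/k. All values are illustrative.
F = 100.0            # load
k = 2.0e3            # design variable (stiffness)

def response(k):
    return F / k     # the "analysis" of the structure

# Overall finite difference: repeat the full analysis for a perturbed design.
dk = 1e-3 * k
sens_fd = (response(k + dk) - response(k)) / dk

# Analytical differentiation of the governing equation k*u = F:
#   u + k * du/dk = 0  =>  du/dk = -u/k = -F/k**2
sens_exact = -F / k**2

print(sens_fd, sens_exact)
```

Even here the forward difference carries a small truncation error relative to the exact derivative; in the reduced-basis transient setting of the paper, the additional error from reusing the original design's modes is what made the semi-analytical techniques necessary.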
Flow chemistry vs. flow analysis.
Trojanowicz, Marek
2016-01-01
The flow mode of conducting chemical syntheses facilitates chemical processes through the use of on-line analytical monitoring of occurring reactions, the application of solid-supported reagents to minimize downstream processing, and computerized control systems to perform multi-step sequences. These are exactly the attributes of flow analysis, which has held a solid place in modern analytical chemistry for several decades. The following review paper, based on 131 references to original papers as well as pre-selected reviews, presents basic aspects, selected instrumental achievements and developmental directions of the rapidly growing field of continuous flow chemical synthesis. Interestingly, many of them might potentially be employed in the development of new methods in flow analysis too. In this paper, examples of the application of flow analytical measurements for on-line monitoring of flow syntheses are indicated and perspectives for a wider application of real-time analytical measurements are discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Assessment of tannin variation in Tamarisk foliage across a latitudinal gradient
Hussey, A.M.; Kimball, B.A.; Friedman, J.M.
2011-01-01
Certain phenotypic traits of plants vary with latitude of origin. To understand if tannin concentration varies among populations of tamarisk (Tamarix spp.) according to a latitudinal gradient, an analytical method was adapted from an enological tannin assay. The tannin content (wet basis) of tamarisk foliage collected from 160 plants grown in a common garden ranged from 8.26 to 62.36 mg/g and was not correlated with the latitude of the original North American plant collection site. Tannins do not contribute to observed differences in herbivory observed among these tamarisk populations.
Costa, Fabiane Pinho; Caldas, Sergiane Souza; Primel, Ednei Gilberto
2014-12-15
Original, citrate and acetate QuEChERS methods were studied in order to evaluate the extraction efficiency and the matrix effect in the extraction of pesticides from canned peach samples. Determinations were performed by gas chromatography coupled to mass spectrometry (GC-MS). The proposed method with extraction using the original QuEChERS method and determination by GC-MS was validated. LOQs ranged between 1 and 10 μg kg(-1) and all analytical curves showed r values higher than 0.99. Recovery values varied from 69% to 125% with RSDs less than 20%. The matrix effect was evaluated and most compounds showed signal enrichment. Robustness was demonstrated using fresh peaches, which provided recovery values within acceptable limits. The applicability of the method was verified and residues of tebuconazole and dimethoate were found in the samples. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Bermudez, Miguel Angel Lopez; Garcia, Rafael Ferro; Calvillo, Manuel
2010-01-01
Traditional methods of diagnosis are of little therapeutic use when diagnostic criteria are based upon topographical rather than functional aspects of behavior. In contrast, several authors have put forward experiential avoidance disorder as an alternative which…
Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav
2015-03-06
The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in a standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
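For reference, the single-selector Wren and Rowe expression that the overall parameters are stated to obey has the familiar form (standard notation supplied here for context, not taken from the abstract: μ_A is the free-analyte mobility, μ_AC the complex mobility, K the complexation constant, [C] the selector concentration):

```latex
\mu_{\mathrm{eff}} \;=\; \frac{\mu_{A} + \mu_{AC}\,K\,[C]}{1 + K\,[C]}
```

In the multivalent, multi-selector model the same form is retained with K, μ_A and μ_AC replaced by the overall (pH- and mixture-averaged) parameters.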
Analytical method of waste allocation in waste management systems: Concept, method and case study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergeron, Francis C., E-mail: francis.b.c@videotron.ca
Waste is no longer merely a rejected item to dispose of but increasingly a secondary resource to exploit, influencing waste allocation among treatment operations in a waste management (WM) system. The aim of this methodological paper is to present a new method for the assessment of the WM system, the “analytical method of the waste allocation process” (AMWAP), based on the concept of the “waste allocation process” defined as the aggregation of all processes of apportioning waste among alternative waste treatment operations inside or outside the spatial borders of a WM system. AMWAP contains a conceptual framework and an analytical approach. The conceptual framework includes, firstly, a descriptive model that focuses on the description and classification of the WM system. It includes, secondly, an explanatory model that serves to explain and to predict the operation of the WM system. The analytical approach consists of a step-by-step analysis for the empirical implementation of the conceptual framework. With its multiple purposes, AMWAP provides an innovative and objective modular method to analyse a WM system which may be integrated in the framework of impact assessment methods and environmental systems analysis tools. Its originality comes from the interdisciplinary analysis of the WAP used to develop the conceptual framework. AMWAP is applied in the framework of an illustrative case study on the household WM system of Geneva (Switzerland). It demonstrates that this method provides an in-depth and contextual knowledge of WM. - Highlights: • The study presents a new analytical method based on the waste allocation process. • The method provides an in-depth and contextual knowledge of the waste management system. • The paper provides a reproducible procedure for professionals, experts and academics. • It may be integrated into impact assessment or environmental system analysis tools.
• An illustrative case study is provided based on household waste management in Geneva.
Yang, Lixin; Li, Heli; Miao, Hong; Zeng, Fangang; Li, Ruifeng; Chen, Huijing; Zhao, Yunfeng; Wu, Yongning
2011-10-01
A method was established for the quantitative determination of 54 organophosphorus pesticide residues and their metabolites in foods of animal origin by dual gas chromatography-dual pulse flame photometric detection. Homogenized samples were extracted with acetone and methylene chloride, and cleaned up by gel permeation chromatography (GPC). The response of each analyte showed good linearity with a correlation coefficient of not less than 0.99. The recovery experiments were performed with blank samples spiked at low, medium and high fortification levels. The recoveries for beef, mutton, pork and chicken were in the range of 50.5%-128.1% with relative standard deviations (n = 6) of 1.1%-25.5%, which demonstrated the good precision and accuracy of the present method. The limits of detection for the analytes were in the range of 0.001-0.170 mg/kg, and the limits of quantification were in the range of 0.002-0.455 mg/kg. Animal food samples collected from markets, such as meat, liver and kidney, were analyzed, and residues of dichlorvos and disulfoton sulfoxide were found in some samples. The established method is sensitive and selective enough to detect organophosphorus pesticide residues in animal foods.
NASA Astrophysics Data System (ADS)
Pomata, Donatella; Di Filippo, Patrizia; Riccardi, Carmela; Buiarelli, Francesca; Gallo, Valentina
2014-02-01
The organic component of airborne particulate matter originates from both natural and anthropogenic sources, whose contributions can be identified through the analysis of chemical markers. The validation of analytical methods for the analysis of compounds used as chemical markers is of great importance, especially if they must be determined in rather complex matrices. Currently, standard reference materials (SRM) with certified values for all those analytes are not available. In this paper, we report a method for the simultaneous determination of levoglucosan and xylitol as tracers for biomass burning emissions, and arabitol, mannitol and ergosterol as biomarkers for airborne fungi, in SRM 1649a by GC/MS. Their quantitative analysis in SRM 1649a was carried out using both internal standard calibration curves and the standard addition method. A matrix effect was observed for all analytes, minor for levoglucosan and major for the polyols and ergosterol. The results for levoglucosan, around 160 μg g-1, agreed with those reported by other authors, while no comparison was possible for xylitol (120 μg g-1), arabitol (15 μg g-1), mannitol (18 μg g-1), and ergosterol (0.5 μg g-1). The analytical method used for SRM 1649a was also applied to PM10 samples collected in Rome during four seasonal sampling campaigns. The ratios between annual analyte concentrations in the PM10 samples and in SRM 1649a were of the same order of magnitude, although the particulate matter samples analyzed were collected at two different sites and in different periods.
The case for visual analytics of arsenic concentrations in foods.
Johnson, Matilda O; Cohly, Hari H P; Isokpehi, Raphael D; Awofolu, Omotayo R
2010-05-01
Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic-contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods, making it impractical and impossible to provide a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) introduction to visual analytics; and (v) benefits of visual analytics for comparative assessment of arsenic concentration in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.
Screening of 23 β-lactams in foodstuffs by LC-MS/MS using an alkaline QuEChERS-like extraction.
Bessaire, Thomas; Mujahid, Claudia; Beck, Andrea; Tarres, Adrienne; Savoy, Marie-Claude; Woo, Pei-Mun; Mottier, Pascal; Desmarchelier, Aurélien
2018-04-01
A fast and robust high-performance LC-MS/MS screening method was developed for the analysis of β-lactam antibiotics in foods of animal origin: eggs, raw milk, processed dairy ingredients, infant formula, and meat- and fish-based products including baby foods. QuEChERS extraction with some adaptations enabled 23 drugs to be simultaneously monitored. Screening target concentrations were set at levels adequate to ensure compliance with current European, Chinese, US and Canadian regulations. The method was fully validated according to the European Community Reference Laboratories Residues Guidelines using 93 food samples of different composition. False-negative and false-positive rates were below 5% for all analytes. The method is adequate for use in high-throughput routine laboratories. A 1-year study was additionally conducted to assess the stability of the 23 analytes in the working standard solution.
Methods for the analysis of azo dyes employed in food industry--A review.
Yamjala, Karthik; Nainar, Meyyanathan Subramania; Ramisetti, Nageswara Rao
2016-02-01
A wide variety of azo dyes are generally added to color food products, not only to make them visually aesthetic but also to reinstate the original appearance lost during the production process. However, many countries in the world have banned the use of most azo dyes in food, and their usage is highly regulated in domestic and export food supplies. Regulatory authorities and food analysts adopt highly sensitive and selective analytical methods for monitoring as well as assuring the quality and safety of food products. The present manuscript provides a comprehensive review of the various analytical techniques used in the analysis of azo dyes employed in food industries in different parts of the world. A brief description of the use of different extraction methods, such as liquid-liquid, solid phase and membrane extraction, has also been presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells
NASA Astrophysics Data System (ADS)
Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.
2017-03-01
Two approaches to studying the free and forced axisymmetric vibrations of cylindrical shells are proposed. They are based on the three-dimensional theory of elasticity and the division of the original cylindrical shell, with concentric cross-sectional circles, into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide-and-conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocs. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate the purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review. In fact, ISO 21572, the "protein standard", has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards, an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
Bajoub, Aadil; Bendini, Alessandra; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría
2018-03-24
Over the last decades, olive oil quality and authenticity control has become an issue of great importance to consumers, suppliers, retailers, and regulators in both traditional and emerging olive oil producing countries, mainly due to the increasing worldwide popularity and the trade globalization of this product. Thus, in order to ensure olive oil authentication, various national and international laws and regulations have been adopted, although some of them are actually causing an enormous debate about the risk that they can represent for the harmonization of international olive oil trade standards. Within this context, this review was designed to provide a critical overview and comparative analysis of selected regulatory frameworks for olive oil authentication, with special emphasis on the quality and purity criteria considered by these regulation systems, their thresholds and the analytical methods employed for monitoring them. To complete the general overview, recent analytical advances to overcome drawbacks and limitations of the official methods to evaluate olive oil quality and to determine possible adulterations were reviewed. Furthermore, the latest trends on analytical approaches to assess the olive oil geographical and varietal origin traceability were also examined.
Simulating ground water-lake interactions: Approaches and insights
Hunt, R.J.; Haitjema, H.M.; Krohelski, J.T.; Feinstein, D.T.
2003-01-01
Approaches for modeling lake-ground water interactions have evolved significantly from early simulations that used fixed lake stages specified as constant head to sophisticated LAK packages for MODFLOW. Although model input can be complex, the LAK package capabilities and output are superior to methods that rely on a fixed lake stage and compare well to other simple methods where lake stage can be calculated. Regardless of the approach, guidelines presented here for model grid size, location of three-dimensional flow, and extent of vertical capture can facilitate the construction of appropriately detailed models that simulate important lake-ground water interactions without adding unnecessary complexity. In addition to MODFLOW approaches, lake simulation has been formulated in terms of analytic elements. The analytic element lake package had acceptable agreement with a published LAK1 problem, even though there were differences in the total lake conductance and number of layers used in the two models. The grid size used in the original LAK1 problem, however, violated a grid size guideline presented in this paper. Grid sensitivity analyses demonstrated that an appreciable discrepancy in the distribution of stream and lake flux was related to the large grid size used in the original LAK1 problem. This artifact is expected regardless of the MODFLOW LAK package used. When the grid size was reduced, the finite-difference formulation approached the analytic element results. These insights and guidelines can help ensure that the proper lake simulation tool is selected and applied.
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
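For the 1D case with linear interpolation, the core idea can be sketched in a few lines: along each linear segment between consecutive samples, the interpolated signal takes every value between the two endpoint values uniformly, so the PDF estimate is a mixture of uniform densities, one per segment. This is only an illustrative sketch of the 1D principle, not the authors' full formulation (which covers 2D/3D signals and other interpolants); the function name and signal values are invented.

```python
# Sketch: NP-Windows-style 1D PDF estimate under linear interpolation.
# Each segment between samples a and b contributes a uniform density
# on [min(a, b), max(a, b)]; the estimate averages these uniforms.

def np_windows_pdf_1d(samples, x):
    """PDF estimate at x for a linearly interpolated 1D signal."""
    segments = list(zip(samples[:-1], samples[1:]))
    total = 0.0
    for a, b in segments:
        lo, hi = min(a, b), max(a, b)
        if hi > lo and lo <= x <= hi:
            total += 1.0 / (hi - lo)  # uniform density of this segment
    return total / len(segments)

signal = [0.0, 1.0, 0.5, 1.5]
# Numerically integrate the estimate to check it is a valid density:
xs = [i * 0.001 for i in range(1501)]
area = sum(np_windows_pdf_1d(signal, x) for x in xs) * 0.001
print(round(area, 2))  # → 1.0
```

The mixture form makes the data-driven character of the method visible: no bandwidth or bin width is chosen, only the interpolation model.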
Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry
Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui
2014-01-01
Matrix-assisted laser desorption/ionization has proven to be an effective tool for fast and accurate determination of many molecules. However, detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To address this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample at a concentration close to the optimum. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. We also showed that TAD could achieve LOD improvements of, on average, 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
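The spiking step at the heart of TAD is closely related to the classical standard-addition principle: spike increasing known amounts of the analyte, fit the linear response, and extrapolate back to estimate the endogenous concentration. The sketch below is a generic illustration of that principle, not the authors' MALDI workflow; the function name and all numbers are invented.

```python
# Sketch of standard addition: signal = k * (C_endogenous + C_spiked).
# A least-squares line through (spiked concentration, signal) has its
# x-intercept at -C_endogenous, so intercept/slope recovers C_endogenous.

def standard_addition_estimate(spiked, signals):
    """Estimate the endogenous concentration from spike levels and signals."""
    n = len(spiked)
    mx = sum(spiked) / n
    my = sum(signals) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(spiked, signals))
             / sum((x - mx) ** 2 for x in spiked))
    intercept = my - slope * mx
    return intercept / slope

# Synthetic data: k = 2.0, endogenous concentration = 5.0 (arbitrary units)
k, c_endo = 2.0, 5.0
spiked = [0.0, 2.0, 4.0, 8.0]
signals = [k * (c_endo + c) for c in spiked]
print(round(standard_addition_estimate(spiked, signals), 6))  # → 5.0
```

In TAD the spike additionally serves to lift the peak above the noise floor; the extrapolation idea, however, is the same linear-response assumption.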
Toghi Eshghi, Shadi; Li, Xingde; Zhang, Hui
2012-09-18
Matrix-assisted laser desorption/ionization (MALDI) has proven to be an effective tool for fast and accurate determination of many molecules. However, detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To address this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample at a concentration close to the optimum. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. We also showed that TAD could achieve LOD improvements of, on average, 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobranskis, R. R.; Zharkova, V. V., E-mail: valentina.zharkova@northumbria.ac.uk
2014-06-10
The original continuity equation (CE) used for the interpretation of the power-law energy spectra of beam electrons in flares was written and solved for the electron beam flux while ignoring an additional free term containing the electron density. In order to remedy this omission, the original CE for electron flux, accounting for the beam's energy losses in Coulomb collisions, was first differentiated with respect to the two independent variables, depth and energy, leading to a partial differential equation for the electron beam density (instead of the flux) with the additional free term. The analytical solution of this partial differential continuity equation (PDCE) is obtained by using the method of characteristics. This solution is further used to derive analytical expressions for mean electron spectra for Coulomb collisions and to carry out numerical calculations of hard X-ray (HXR) photon spectra for beams with different parameters. The solutions revealed a significant departure of electron densities at lower energies from the original results derived from the CE for the flux obtained for Coulomb collisions. This departure is caused by the additional exponential term that appears in the updated solutions for the electron differential density, leading to its faster decrease at lower energies (below 100 keV) at every precipitation depth, similar to the results obtained with numerical Fokker-Planck solutions. The effects of these updated solutions for electron densities on mean electron spectra and HXR photon spectra are also discussed.
Human biological monitoring of suspected endocrine-disrupting compounds
Faniband, Moosa; Lindh, Christian H; Jönsson, Bo AG
2014-01-01
Endocrine-disrupting compounds are exogenous agents that interfere with the natural hormones of the body. Human biological monitoring is a powerful method for monitoring exposure to endocrine-disrupting compounds. In this review, we describe human biological monitoring systems for different groups of endocrine-disrupting compounds: polychlorinated biphenyls, brominated flame retardants, phthalates, alkylphenols, pesticides, metals, perfluorinated compounds, parabens, ultraviolet filters, and organic solvents. The aspects discussed are origins of exposure, metabolism, matrices to analyse, analytical determination methods, determinants, and time trends. PMID:24369128
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
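The face-based volume computation mentioned above can be illustrated with a generic divergence-theorem formula: for a closed polyhedron with outward-oriented faces, V = (1/6) Σ v0 · (v1 × v2) over a fan triangulation of each face. This is a sketch of the general principle only, not VOFTools' quadrilateral-decomposition routine; all names and the test geometry are invented.

```python
# Sketch: polyhedron volume via signed tetrahedra against the origin.
# Faces must be given as outward-oriented vertex loops.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def polyhedron_volume(faces):
    """Volume from outward-oriented polygonal faces (lists of 3D points)."""
    vol = 0.0
    for face in faces:
        for i in range(1, len(face) - 1):  # fan triangulation of the face
            vol += dot(face[0], cross(face[i], face[i + 1]))
    return vol / 6.0

unit_cube = [
    [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],  # z = 0
    [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],  # z = 1
    [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],  # x = 0
    [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],  # x = 1
    [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],  # y = 0
    [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],  # y = 1
]
print(polyhedron_volume(unit_cube))  # → 1.0
```

The signed-tetrahedron form is exact for planar faces and is a common starting point for the truncation operations VOF methods need.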
Species-specific detection of processed animal proteins in feed by Raman spectroscopy.
Mandrile, Luisa; Amato, Giuseppina; Marchis, Daniela; Martra, Gianmario; Rossi, Andrea Mario
2017-08-15
The existing European Regulation (EC n° 51/2013) prohibits the use of animal meals in feedstuffs in order to prevent Bovine Spongiform Encephalopathy infection and diffusion; however, the legislation is rapidly moving towards a partial lifting of the "feed ban", and the competent control bodies are urged to develop suitable analytical methods able to avoid food safety incidents related to products of animal origin. The limitations of the official methods (i.e. light microscopy and Polymerase Chain Reaction) suggest exploring new analytical approaches to obtain reliable results in a short time. The combination of spectroscopic techniques with optical microscopy allows the development of an individual-particle method able to meet both selectivity and sensitivity requirements (0.1% w/w). A spectroscopic method based on Fourier transform micro-Raman spectroscopy coupled with discriminant analysis is presented here. This approach could be very useful for in-situ applications, such as customs inspections, since it drastically reduces the time and costs of analysis. Copyright © 2017. Published by Elsevier Ltd.
Capela, Daniela; Homem, Vera; Alves, Arminda; Santos, Lúcia
2018-03-01
Recently, Pierre Germain from CES - Silicon Europe published a comment on the paper "Volatile methylsiloxanes in personal care products - Using QuEChERS as a "green" analytical approach", raising concerns that the artefacts arising in the analysis of cyclic volatile methylsiloxanes (cVMS) were not adequately controlled, while using this example as an opportunity to emphasize the difficulties associated with siloxane analyses in complex matrices such as personal care products (PCPs). We now address these concerns and provide some clarifications regarding the experiments performed to adequately validate the analytical method. Those details were not included in the original publication because its objective was the quantification of VMS in several PCPs. Copyright © 2017 Elsevier B.V. All rights reserved.
Luo, Yuan; Szolovits, Peter; Dighe, Anand S; Baron, Jason M
2018-06-01
A key challenge in clinical data mining is that most clinical datasets contain missing data. Since many commonly used machine learning algorithms require complete datasets (no missing data), clinical analytic approaches often entail an imputation procedure to "fill in" missing data. However, although most clinical datasets contain a temporal component, most commonly used imputation methods do not adequately accommodate longitudinal time-based data. We sought to develop a new imputation algorithm, 3-dimensional multiple imputation with chained equations (3D-MICE), that can perform accurate imputation of missing clinical time series data. We extracted clinical laboratory test results for 13 commonly measured analytes (clinical laboratory tests). We imputed missing test results for the 13 analytes using 3 imputation methods: multiple imputation with chained equations (MICE), Gaussian process (GP), and 3D-MICE. 3D-MICE utilizes both MICE and GP imputation to integrate cross-sectional and longitudinal information. To evaluate imputation method performance, we randomly masked selected test results and imputed these masked results alongside results missing from our original data. We compared predicted results to measured results for masked data points. 3D-MICE performed significantly better than MICE and GP-based imputation in a composite of all 13 analytes, predicting missing results with a normalized root-mean-square error of 0.342, compared to 0.373 for MICE alone and 0.358 for GP alone. 3D-MICE offers a novel and practical approach to imputing clinical laboratory time series data. 3D-MICE may provide an additional tool for use as a foundation in clinical predictive analytics and intelligent clinical decision support.
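As a toy illustration of the paper's central idea, blending cross-sectional and longitudinal information when imputing a missing time-series value, the sketch below averages a cross-sectional estimate (mean over other patients at the same time point) with a longitudinal estimate (interpolation of the same patient's neighbouring time points). This is not the authors' 3D-MICE algorithm, which combines full MICE and Gaussian-process models; all data and names are invented.

```python
# Toy blend of cross-sectional and longitudinal imputation for one
# masked laboratory result in a patients x time matrix.

def blended_impute(data, patient, t):
    """Average a cross-sectional and a longitudinal estimate for data[patient][t]."""
    # Cross-sectional: mean of the other patients at the same time point
    cross = [row[t] for i, row in enumerate(data) if i != patient]
    cross_est = sum(cross) / len(cross)
    # Longitudinal: linear interpolation of the neighbouring time points
    long_est = (data[patient][t - 1] + data[patient][t + 1]) / 2.0
    return 0.5 * (cross_est + long_est)

# Synthetic lab series; pretend data[1][2] (true value 4.2) is missing.
data = [
    [4.0, 4.2, 4.4, 4.6],
    [3.8, 4.0, 4.2, 4.4],
    [4.2, 4.4, 4.6, 4.8],
]
estimate = blended_impute(data, 1, 2)
print(round(estimate, 2))  # → 4.35
```

The masked-value evaluation in the abstract follows the same pattern: hide known results, impute them, and score the predictions (e.g. by normalized root-mean-square error) against the hidden truth.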
Srinivas, Nuggehally R
2006-05-01
The development of sound bioanalytical method(s) is of paramount importance during the process of drug discovery and development culminating in a marketing approval. Although the bioanalytical procedure(s) originally developed during the discovery stage may not necessarily be fit to support the drug development scenario, they may be suitably modified and validated, as deemed necessary. Several reviews have appeared over the years describing analytical approaches including various techniques, detection systems, automation tools that are available for an effective separation, enhanced selectivity and sensitivity for quantitation of many analytes. The intention of this review is to cover various key areas where analytical method development becomes necessary during different stages of drug discovery research and development process. The key areas covered in this article with relevant case studies include: (a) simultaneous assay for parent compound and metabolites that are purported to display pharmacological activity; (b) bioanalytical procedures for determination of multiple drugs in combating a disease; (c) analytical measurement of chirality aspects in the pharmacokinetics, metabolism and biotransformation investigations; (d) drug monitoring for therapeutic benefits and/or occupational hazard; (e) analysis of drugs from complex and/or less frequently used matrices; (f) analytical determination during in vitro experiments (metabolism and permeability related) and in situ intestinal perfusion experiments; (g) determination of a major metabolite as a surrogate for the parent molecule; (h) analytical approaches for universal determination of CYP450 probe substrates and metabolites; (i) analytical applicability to prodrug evaluations-simultaneous determination of prodrug, parent and metabolites; (j) quantitative determination of parent compound and/or phase II metabolite(s) via direct or indirect approaches; (k) applicability in analysis of multiple compounds in select 
disease areas and/or in clinically important drug-drug interaction studies. A tabular representation of select examples of analysis is provided covering areas of separation conditions, validation aspects and applicable conclusion. A limited discussion is provided on relevant aspects of the need for developing bioanalytical procedures for speedy drug discovery and development. Additionally, some key elements such as internal standard selection, likely issues of mass detection, matrix effect, chiral aspects etc. are provided for consideration during method development.
Han, Bangxing; Peng, Huasheng; Yan, Hui
2016-01-01
Mugua is a common Chinese herbal medicine. There are three main medicinal origin places in China (Xuancheng City, Anhui Province; Qijiang District, Chongqing City; and Yichang City, Hubei Province) as well as an origin place suitable for food use, Linyi City, Shandong Province. The aim of this study was to construct a qualitative analytical method to identify the origin of medicinal Mugua by near infrared spectroscopy (NIRS). A partial least squares discriminant analysis (PLSDA) model was established after the original spectra of Mugua derived from five different origins were preprocessed; hierarchical cluster analysis was also performed. According to the relationship between the origin-related importance scores and the wavenumber, together with K-means cluster analysis, the Muguas derived from different origins were effectively identified. NIRS technology can quickly and accurately identify the origin of Mugua, providing a new method and technology for the identification of Chinese medicinal materials. After preprocessing by D1 + autoscale, more peaks appeared in the near infrared spectrum of Mugua; five latent variable scores could reflect the information related to the origin place of Mugua; and the origins of Mugua were well distinguished according to K-means cluster analysis. Abbreviations used: TCM: Traditional Chinese Medicine; NIRS: near infrared spectroscopy; SG: Savitzky-Golay smoothing; D1: first derivative; D2: second derivative; SNV: standard normal variate transformation; MSC: multiplicative scatter correction; PLSDA: partial least squares discriminant analysis; LV: latent variable; VIP scores: variable importance in projection scores.
ERIC Educational Resources Information Center
Schaal, David W.
2012-01-01
This article presents an introduction to "The Behavior-Analytic Origins of Constraint-Induced Movement Therapy: An Example of Behavioral Neurorehabilitation," by Edward Taub and his colleagues (Taub, 2012). Based on extensive experimentation with animal models of peripheral nerve injury, Taub and colleagues have created an approach to overcoming…
NASA Astrophysics Data System (ADS)
Iskhakova, K.; Murzakhanov, F.; Mamin, G.; Putlyaev, V.; Klimashina, E.; Fadeeva, I.; Fomin, A.; Barinov, S.; Maltsev, A.; Bakhteev, S.; Yusupov, R.; Gafurov, M.; Orlinskii, S.
2018-05-01
Calcium phosphates (CaP) are exploited in many fields of science, including geology, chemistry, biology and medicine, due to their abundance in nature and presence in living organisms. Various analytical and biochemical methods are used for controlling their chemical content, structure, morphology, etc. Unfortunately, magnetic resonance techniques are usually not even considered as necessary tools for CaP inspection. Some aspects of the application of commercially available electron paramagnetic resonance (EPR) approaches for the characterization of CaP powders and ceramics (including nanosized materials) such as hydroxyapatite and tricalcium phosphates of biogenic and synthetic origin containing intrinsic impurities or intentional dopants are demonstrated. The key features and advantages of the EPR techniques for the characterization of CaP-based materials, which could complement the data obtained with the recognized analytical methods, are pointed out.
An analysis of hypercritical states in elastic and inelastic systems
NASA Astrophysics Data System (ADS)
Kowalczk, Maciej
The author raises a wide range of problems whose common characteristic is an analysis of hypercritical states in elastic and inelastic systems. The article consists of two basic parts: the first primarily discusses problems of modelling hypercritical states, while the second analyzes numerical methods (so-called continuation methods) used to solve non-linear problems. The original approaches to modelling hypercritical states found in this article include the combination of plasticity theory and an energy condition for cracking, accounting for the variability and cyclical nature of the forms of fracture of a brittle material under a die, and the combination of plasticity theory and a simplified description of the phenomenon of localization along a discontinuity line. The author presents analytical solutions of three non-linear problems for systems made of elastic/brittle/plastic and elastic/ideally plastic materials. He then discusses the analytical basics of continuation methods, analyzes the significance of the parameterization of non-linear problems, provides a method for selecting control parameters based on an analysis of the rank of a rectangular matrix of a uniform system of increment equations, and also provides a new method for selecting an equilibrium path originating from a bifurcation point. A general outline of continuation methods based on an analysis of the rank of a matrix of a corrective system of equations is given. The theoretical solutions are supplemented with numerical solutions of non-linear problems for rod systems and for the plastic disintegration of a notched rectangular plastic plate.
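The predictor-corrector structure underlying continuation methods can be sketched on a scalar toy problem: step the control parameter, predict with the previous solution, and correct with Newton iterations. This is a generic natural-parameter continuation, not the author's rank-based control-parameter selection; the test function and all values are invented.

```python
# Sketch of natural-parameter continuation with a Newton corrector,
# tracing the solution branch of f(x, lam) = x**2 - lam from lam = 1 to 4.

def continue_branch(x0, lams):
    """Follow a solution branch x(lam) of f(x, lam) = x**2 - lam = 0."""
    xs, x = [], x0
    for lam in lams:
        for _ in range(20):             # Newton corrector at fixed lam
            f, df = x * x - lam, 2.0 * x
            x -= f / df
        xs.append(x)                    # converged point seeds the next step
    return xs

lams = [1.0 + 0.1 * k for k in range(31)]   # lam from 1.0 to 4.0
branch = continue_branch(1.0, lams)
print(round(branch[-1], 6))  # → 2.0
```

Natural-parameter stepping fails at limit (fold) points where the branch turns back, which is exactly why the article's parameterization analysis and control-parameter selection matter in practice.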
Bandoniene, Donata; Zettl, Daniela; Meisel, Thomas; Maneiko, Marija
2013-02-15
An analytical method was developed and validated for the classification of the geographical origin of pumpkin seeds and oil from Austria, China and Russia. The distribution of trace elements in pumpkin seeds and pumpkin seed oils in relation to the geographical origin of the soils of several agricultural farms in Austria was studied in detail. Samples from several geographic origins were taken from parts of the pumpkin (pumpkin flesh, seeds, the oil extracted from the seeds and the oil-extraction cake) as well as the topsoil on which the plants were grown. Plants from different geographical origins show variations in their elemental patterns that are significantly large, reproducible over the years and the ripeness period, and show no significant influence of the oil production procedure, allowing discrimination of geographical origin. A successful differentiation of oils from different regions in Austria, China and Russia classified with multivariate data analysis is demonstrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
Spalla, S; Baffi, C; Barbante, C; Turetta, C; Turretta, C; Cozzi, G; Beone, G M; Bettinelli, M
2009-10-30
In recent years, identification of the geographical origin of food has grown more important as consumers have become interested in knowing the provenance of the food that they purchase and eat. Certification schemes and labels have thus been developed to protect consumers and genuine producers from the improper use of popular brand names or renowned geographical origins. As the tomato is one of the major components of what is considered to be the healthy Mediterranean diet, it is important to be able to determine the geographical origin of tomatoes and tomato-based products such as tomato sauce. The aim of this work is to develop an analytical method to determine rare earth elements (REE) for the control of the geographical origin of tomatoes. The content of REE in tomato plant samples collected from an agricultural area in Piacenza, Italy, was determined using four different digestion procedures with and without HF. Microwave dissolution with HNO3 + H2O2 proved to be the most suitable digestion procedure. Inductively coupled plasma quadrupole mass spectrometry (ICPQMS) and inductively coupled plasma sector field mass spectrometry (ICPSFMS) instruments, both coupled with a desolvation system, were used to determine the REE in tomato plants in two different laboratories. A matched calibration curve method was used for the quantification of the analytes. The detection limits (MDLs) of the method ranged from 0.03 ng g(-1) for Ho, Tm, and Lu to 2 ng g(-1) for La and Ce. The precision, in terms of relative standard deviation on six replicates, was good, with values ranging, on average, from 6.0% for LREE (light rare earth elements) to 16.5% for HREE (heavy rare earth elements). These detection limits allowed the determination of the very low concentrations of REE present in tomato berries. For the concentrations of REE in tomato plants, the following trend was observed: roots > leaves > stems > berries. Copyright 2009 John Wiley & Sons, Ltd.
Zhang, Rui; Taddei, Phillip J; Fitzek, Markus M; Newhauser, Wayne D
2010-05-07
Heavy charged particle beam radiotherapy for cancer is of increasing interest because it delivers a highly conformal radiation dose to the target volume. Accurate knowledge of the range of a heavy charged particle beam after it penetrates a patient's body or other materials in the beam line is very important and is usually stated in terms of the water equivalent thickness (WET). However, methods of calculating WET for heavy charged particle beams are lacking. Our objective was to test several simple analytical formulas previously developed for proton beams for their ability to calculate WET values for materials exposed to beams of protons, helium, carbon and iron ions. Experimentally measured heavy charged particle beam ranges and WET values from an iterative numerical method were compared with the WET values calculated by the analytical formulas. In most cases, the deviations were within 1 mm. We conclude that the analytical formulas originally developed for proton beams can also be used to calculate WET values for helium, carbon and iron ion beams with good accuracy.
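A common simple analytical form for WET scales the physical slab thickness by the density ratio and the mean mass stopping power of the material relative to water, WET ≈ t · (ρ_m/ρ_w) · (S̄_m/S̄_w). The sketch below is a hedged illustration of this general scaling idea, not necessarily one of the exact formulas tested in the paper; the relative stopping-power value used is an assumed, illustrative number.

```python
# Sketch of a simple water-equivalent-thickness (WET) formula for a slab.

def water_equivalent_thickness(t_cm, rho_material, rel_mass_stopping_power,
                               rho_water=1.0):
    """WET (cm) = thickness * density ratio * relative mass stopping power."""
    return t_cm * (rho_material / rho_water) * rel_mass_stopping_power

# A 1 cm slab of density 2.70 g/cm^3 (e.g. aluminium) with an assumed mean
# mass stopping power relative to water of 0.81 for therapeutic protons:
wet = water_equivalent_thickness(1.0, 2.70, 0.81)
print(round(wet, 3))  # → 2.187
```

Because the mass stopping-power ratio varies only slowly with energy for many materials, a single-number formula of this kind can stay within roughly a millimetre of measured ranges, consistent with the deviations reported in the abstract.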
Zhang, Rui; Taddei, Phillip J; Fitzek, Markus M; Newhauser, Wayne D
2010-01-01
Heavy charged particle beam radiotherapy for cancer is of increasing interest because it delivers a highly conformal radiation dose to the target volume. Accurate knowledge of the range of a heavy charged particle beam after it penetrates a patient’s body or other materials in the beam line is very important and is usually stated in terms of the water equivalent thickness (WET). However, methods of calculating WET for heavy charged particle beams are lacking. Our objective was to test several simple analytical formulas previously developed for proton beams for their ability to calculate WET values for materials exposed to beams of protons, helium, carbon and iron ions. Experimentally measured heavy charged particle beam ranges and WET values from an iterative numerical method were compared with the WET values calculated by the analytical formulas. In most cases, the deviations were within 1 mm. We conclude that the analytical formulas originally developed for proton beams can also be used to calculate WET values for helium, carbon and iron ion beams with good accuracy. PMID:20371908
Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation
Gao, Zhanheng; Yu, Zeyun; Holst, Michael
2012-01-01
Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522
Bichon, Emmanuelle; Guiffard, Ingrid; Vénisseau, Anaïs; Lesquin, Elodie; Vaccher, Vincent; Marchand, Philippe; Le Bizec, Bruno
2018-08-01
Brominated Flame Retardants (BFRs) are still widely used for industrial purposes. These contaminants may enter the food chain, where they mainly occur in food of animal origin. The aim of our work was to provide a unique method able to quantify the widest range of BFRs in feed and food items. After freeze-drying and grinding, a pressurized liquid extraction was carried out. The extract was purified on acidified silica, Florisil® and carbon columns, and the four separated fractions were analyzed by gas and liquid chromatography coupled to high resolution and tandem mass spectrometry. Isotopic dilution was preferentially used when commercial labelled compounds were available. Analytical sensitivity was in accordance with the expectations of Recommendation 2014/118/EU for PBDEs, HBCDDs, TBBPA, TBBPA-bME, EHTBB and BEHTEBP. Additional BFRs were included in this analytical method with the same level of performance (LOQs below 0.01 ng g⁻¹ ww); these are PBBs, pTBX, TBCT, PBBz, PBT, PBEB, HBBz, BTBPE, OBIND and T23BPIC. However, some of the BFRs listed in Recommendation 2014/118/EU are not yet covered by our analytical method, i.e. TBBPA-bOHEE, TBBPA-bAE, TBBPA-bGE, TBBPA-bDiBPrE, TBBPS, TBBPS-bME, TDBPP, EBTEBPI, HBCYD and DBNPG. The measurement uncertainty was fully calculated for 21 of the 31 analytes monitored in the method. Reproducibility uncertainty was below 23% in isotopic dilution. Certified reference materials are now required to better characterize the trueness of this method, which was applied in the French National Control Plans. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.
We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
Davis, Scott J.; Edwards, Shatiel B.; Teper, Gerald E.; ...
2016-02-01
We report that recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25–35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army’s acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. Ultimately, more than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
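CPAT itself is a multiphase mixed-integer linear program; as a much-reduced illustration of the underlying portfolio idea (choose programs to maximize capability value under a budget), the sketch below solves a 0/1 knapsack by dynamic programming. The costs and values are invented, and this is in no way the CPAT formulation.

```python
# Toy budget-allocation portfolio: pick a subset of programs maximizing
# total value subject to an integer budget (0/1 knapsack via DP).

def best_portfolio(costs, values, budget):
    """Return (best total value, sorted chosen indices)."""
    n = len(costs)
    table = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            table[i][b] = table[i - 1][b]
            if costs[i - 1] <= b:
                cand = table[i - 1][b - costs[i - 1]] + values[i - 1]
                table[i][b] = max(table[i][b], cand)
    # Backtrack to recover which programs were funded
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if table[i][b] != table[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return table[n][budget], sorted(chosen)

costs = [4, 3, 5, 2]      # notional program budgets
values = [10, 7, 12, 4]   # notional capability scores
print(best_portfolio(costs, values, 9))  # → (22, [0, 2])
```

Real portfolio tools add what this toy omits: multiple time phases, phasing and dependency constraints, and continuous variables, which is what pushes the formulation into mixed-integer linear programming.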
Du, Lihong; White, Robert L
2009-02-01
A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
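The simplified competition model lends itself to a quick numerical sketch. The function below is a hypothetical illustration of the general idea (analyte and background electrolytes competing for a share of the excess charge), not the paper's fitted model or coefficients:

```python
def esi_response(k_a, c_a, competitors):
    """Toy charge-competition model: the analyte claims a share of the
    excess charge proportional to its affinity-weighted concentration
    relative to all competing species (hypothetical coefficients)."""
    signal = k_a * c_a
    return signal / (signal + sum(k * c for k, c in competitors))

# Suppression of a fixed analyte as a background electrolyte grows:
low_salt = esi_response(5.0, 1e-6, [(1.0, 1e-6)])
high_salt = esi_response(5.0, 1e-6, [(1.0, 1e-3)])
```

Raising the competing electrolyte concentration suppresses the analyte share, mirroring the ion-suppression behavior the abstract attributes to competition for excess charge.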
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
In this study, the classical Welander oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes. A theoretical stability map was also derived in Welander's original stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate. The high-order numerical methods give much smaller numerical errors than the low-order methods. For the stability analysis, the high-order numerical methods could accurately predict the stability map, while the low-order numerical methods failed to do so: for all theoretically unstable cases, the low-order methods predicted them to be stable. The results obtained in this paper provide strong evidence of the benefits of using high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest in many future nuclear reactor designs.
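The abstract's central claim, that high-order methods give far smaller errors than low-order ones at the same step size, can be illustrated on a simple oscillatory ODE. This is a generic sketch comparing explicit Euler with classical RK4 on y'' = -y, not Welander's loop equations:

```python
import math

def euler_step(f, y, t, h):                     # first-order method
    return [yi + h*fi for yi, fi in zip(y, f(t, y))]

def rk4_step(f, y, t, h):                       # fourth-order method
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def oscillator(t, y):                           # y'' = -y as a system
    return [y[1], -y[0]]

def final_error(step, n=1000, T=2*math.pi):
    """Integrate one full period and return the error at t = T,
    where the exact solution returns to y = 1."""
    y, t, h = [1.0, 0.0], 0.0, T / n
    for _ in range(n):
        y, t = step(oscillator, y, t, h), t + h
    return abs(y[0] - 1.0)

err_low, err_high = final_error(euler_step), final_error(rk4_step)
```

With the same 1000 steps, the low-order scheme's error is orders of magnitude larger, the same qualitative gap the paper reports between its low- and high-order solvers.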
Portable microwave assisted extraction: An original concept for green analytical chemistry.
Perino, Sandrine; Petitcolas, Emmanuel; de la Guardia, Miguel; Chemat, Farid
2013-11-08
This paper describes a portable microwave-assisted extraction (PMAE) apparatus for the extraction of bioactive compounds, especially essential oils and aromas, directly in a crop or a forest. The developed procedure, based on the concept of green analytical chemistry, is appropriate for obtaining direct in-field information on the level of essential oils in natural samples and for illustrating green chemistry teaching and research. The approach was validated by extracting rosemary essential oil directly in a crop; it yielded quantitative information on essential-oil content similar to that obtained by conventional methods in the laboratory. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cleary, Justin W.; Peale, Robert E.; Saxena, Himanshu; Buchwald, Walter R.
2011-05-01
The observation of THz-regime transmission resonances in an InGaAs/InP high electron mobility transistor (HEMT) can be attributed to excitation of plasmons in its two-dimensional electron gas (2DEG). Properties of grating-based, gate-voltage tunable resonances are shown to be adequately modeled using commercial finite element method (FEM) software when the HEMT layer structure, gate geometry and sheet charge concentration are taken into account. The FEM simulations produce results consistent with standard analytical theories in the 10-100 cm⁻¹ wavenumber range. An original analytic formula presented here describes how the plasmonic resonance may change in the presence of a virtual gate, i.e., a region of relatively high free charge carrier density that lies in the HEMT between the physical grating gate and the 2DEG. The virtual gate and the corresponding analytic formulation are able to account for the red-shifting experimentally observed in plasmonic resonances. The calculation methods demonstrated here have the potential to greatly aid the design of future detection devices that require specifically tuned plasmonic modes in the 2DEG of a HEMT, as well as giving new insights to aid the development of more complete analytic theories.
A Chaotic Ordered Hierarchies Consistency Analysis Performance Evaluation Model
NASA Astrophysics Data System (ADS)
Yeh, Wei-Chang
2013-02-01
Hierarchies Consistency Analysis (HCA) was proposed by Guh, together with a case study on a resort, to reinforce a weakness of the Analytic Hierarchy Process (AHP). Although the results obtained helped the decision maker reach more reasonable and rational verdicts, the HCA itself is flawed. In this paper, our objective is to point out the problems of HCA and then propose a revised method, called chaotic ordered HCA (COH for short), which avoids them. Since COH is based on Guh's method, the decision maker establishes decisions in a way similar to that of the original method.
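For context, the standard AHP consistency check that HCA-style methods build on can be sketched as follows; the random-index values are Saaty's published ones for n = 3 to 5, and the matrix is a made-up example:

```python
import numpy as np

def consistency_ratio(A):
    """Saaty's consistency check for a pairwise-comparison matrix:
    CI = (lambda_max - n) / (n - 1), CR = CI / RI(n)."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random indices
    return (lam_max - n) / (n - 1) / ri

# A perfectly consistent 3x3 matrix (a_ij = w_i / w_j) has CR = 0:
A = np.array([[1.0, 2.0, 4.0],
              [0.5, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
cr = consistency_ratio(A)
```

A CR above roughly 0.1 is conventionally taken to mean the judgments should be revisited; HCA and COH address weaknesses beyond this single ratio.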
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheele, R.D.
In 1995, available subsegment samples of wastes taken from the Hanford Site underground radioactive waste storage tanks 241-C-112 (C-112) and 241-C-109 (C-109) were reanalyzed to determine the nickel concentrations in the samples and to determine whether the use of a nickel crucible in the analytical sample preparation biased the nickel concentrations reported by Simpson and coworkers and in the original report that this report supplements. The reanalysis strategy to determine nickel was to use a sodium peroxide flux in a zirconium crucible instead of the previously used potassium hydroxide flux in a nickel crucible. This supplemental report provides the results of the reanalyses and updates tables from the original report to reflect the new nickel analyses. Nickel is important with respect to management of the potentially reactive ferrocyanide wastes, as it is one of the key defining characteristics of the solids that resulted from scavenging radiocesium using ferrocyanides. In Hanford Site wastes, few processes other than radiocobalt scavenging, which was often coupled with the ferrocyanide-scavenging process, introduced nickel into the wastes. Thus the presence of nickel in a waste provides strong evidence that the original waste was, or at one time contained, ferrocyanide waste. Given the potential importance of nickel as a defining characteristic and marker for ferrocyanide wastes, the Pacific Northwest Laboratory's (PNL) Analytical Chemistry Laboratory (ACL) reanalyzed available samples from tanks C-112 and C-109 using inductively coupled argon plasma/atomic emission spectrometry (ICP/AES) and an alternative sample preparation method which precluded contamination of the analytical samples with nickel.
Generalized constitutive equations for piezo-actuated compliant mechanism
NASA Astrophysics Data System (ADS)
Cao, Junyi; Ling, Mingxiang; Inman, Daniel J.; Lin, Jin
2016-09-01
This paper formulates analytical models to describe the static displacement and force interactions between generic serial-parallel compliant mechanisms and their loads by employing the matrix method. In keeping with the familiar piezoelectric constitutive equations, the generalized constitutive equations of compliant mechanism represent the input-output displacement and force relations in the form of a generalized Hooke’s law and as analytical functions of physical parameters. Also significantly, a new model of output displacement for compliant mechanism interacting with piezo-stacks and elastic loads is deduced based on the generalized constitutive equations. Some original findings differing from the well-known constitutive performance of piezo-stacks are also given. The feasibility of the proposed models is confirmed by finite element analysis and by experiments under various elastic loads. The analytical models can be an insightful tool for predicting and optimizing the performance of a wide class of compliant mechanisms that simultaneously consider the influence of loads and piezo-stacks.
Analytic study of a rolling sphere on a rough surface
NASA Astrophysics Data System (ADS)
Florea, Olivia A.; Rosca, Ileana C.
2016-11-01
This paper presents an analytic study of a sphere rolling on a rough horizontal plane under the action of its own gravity. Integrating the system of dynamical equations of motion requires a reference system in which the equations of motion take simpler forms and which, under certain significant hypotheses, permits the application of some original methods of analytical integration. In technical applications, bodies may roll freely or move under geometrical constraints in assemblies of parts and machine components. The study draws on investigations in tribology and applied dynamics, accompanied by experiments. Multiple recordings of sphere trajectories, together with image processing and statistical treatment of the experimental data, revealed very good agreement between the theoretical findings and the experimental results.
Green analytical chemistry--theory and practice.
Tobiszewski, Marek; Mechlińska, Agata; Namieśnik, Jacek
2010-08-01
This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.
Measurement analysis of two radials with a common-origin point and its application.
Liu, Zhenyao; Yang, Jidong; Zhu, Weiwei; Zhou, Shang; Tan, Xuanping
2017-08-01
In spectral analysis, a chemical component is usually identified by its characteristic spectra, especially the peaks. If two components have overlapping spectral peaks, they are generally considered to be indiscriminate in current analytical chemistry textbooks and related literature. However, if the intensities of the overlapping major spectral peaks are additive, and have different rates of change with respect to variations in the concentration of the individual components, a simple method, named the 'common-origin ray', for the simultaneous determination of two components can be established. Several case studies highlighting its applications are presented. Copyright © 2017 John Wiley & Sons, Ltd.
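The core requirement, overlapping signals that are additive but change at different rates with each component's concentration, reduces to a small linear system. Below is a minimal sketch with hypothetical sensitivities, not the paper's 'common-origin ray' construction itself:

```python
import numpy as np

# Slopes of peak intensity vs. concentration (hypothetical sensitivities)
# for components A and B at two measurement channels:
S = np.array([[3.0, 1.5],     # channel 1: slopes for A, B
              [1.0, 2.5]])    # channel 2: slopes for A, B
true_c = np.array([0.20, 0.40])      # unknown concentrations
I = S @ true_c                       # additive overlapping signals
recovered = np.linalg.solve(S, I)    # simultaneous determination of A and B
```

Because the two components respond with different slopes, the system is well-conditioned and both concentrations are recovered from the combined signals.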
Hondrogiannis, Ellen; Rotta, Kathryn; Zapf, Charles M
2013-03-01
Sixteen elements found in 37 vanilla samples from Madagascar, Uganda, India, Indonesia (all Vanilla planifolia species), and Papua New Guinea (Vanilla tahitensis species) were measured by wavelength dispersive X-ray fluorescence (WDXRF) spectroscopy for the purpose of determining the elemental concentrations needed to discriminate among the origins. Pellets were prepared from the samples, and elemental concentrations were calculated from calibration curves created using four National Institute of Standards and Technology (NIST) standards. Discriminant analysis was used to successfully classify the vanilla samples by species and geographical region. Our method allows rapid screening of vanilla samples at higher throughput and in less time than currently available analytical methods. Wavelength dispersive X-ray fluorescence spectroscopy and discriminant function analysis were used to classify vanilla from different origins, resulting in a model that could potentially serve to rapidly validate these samples before purchase from a producer. © 2013 Institute of Food Technologists®
Topography measurements and applications in ballistics and tool mark identifications*
Vorburger, T V; Song, J; Petraco, N
2016-01-01
The application of surface topography measurement methods to the field of firearm and toolmark analysis is fairly new. The field has been boosted by the development of a number of competing optical methods, which has improved the speed and accuracy of surface topography acquisitions. We describe here some of these measurement methods as well as several analytical methods for assessing similarities and differences among pairs of surfaces. We also provide a few examples of research results to identify cartridge cases originating from the same firearm or tool marks produced by the same tool. Physical standards and issues of traceability are also discussed. PMID:27182440
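One widely used similarity score in this field is the maximum of the normalized cross-correlation between two surface profiles. Here is a minimal sketch on synthetic profiles; the setup is assumed, not the authors' specific pipeline:

```python
import numpy as np

def max_ccf(a, b):
    """Maximum normalized cross-correlation over all lateral shifts,
    a common similarity score for striation profiles."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (np.correlate(a, b, mode="full") / len(a)).max()

rng = np.random.default_rng(0)
profile = rng.normal(size=500)     # synthetic striation profile
shifted = np.roll(profile, 7)      # "same tool", lateral offset only
unrelated = rng.normal(size=500)   # "different tool"

s_same = max_ccf(profile, shifted)
s_diff = max_ccf(profile, unrelated)
```

Profiles from the same source score near 1 despite the lateral offset, while unrelated profiles score near 0, which is the basic discrimination the identification methods formalize.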
Non-axisymmetric local magnetostatic equilibrium
Candy, Jefferey M.; Belli, Emily A.
2015-03-24
In this study, we outline an approach to the problem of local equilibrium in non-axisymmetric configurations that adheres closely to Miller's original method for axisymmetric plasmas. Importantly, this method is novel in that it allows not only specification of 3D shape, but also explicit specification of the shear in the 3D shape. A spectrally-accurate method for solution of the resulting nonlinear partial differential equations is also developed. We verify the correctness of the spectral method, in the axisymmetric limit, through comparisons with an independent numerical solution. Some analytic results for the two-dimensional case are given, and the connection to Boozer coordinates is clarified.
Subtracting infrared renormalons from Wilson coefficients: Uniqueness and power dependences on ΛQCD
NASA Astrophysics Data System (ADS)
Mishima, Go; Sumino, Yukinari; Takaura, Hiromasa
2017-06-01
In the context of the operator product expansion (OPE) and using the large-β0 approximation, we propose a method to define Wilson coefficients free from uncertainties due to IR renormalons. We first introduce a general observable X(Q²) with an explicit IR cutoff, and then we extract a genuine UV contribution X_UV as a cutoff-independent part. X_UV includes power corrections ~(Λ_QCD²/Q²)^n which are independent of renormalons. Using the integration-by-regions method, we observe that X_UV coincides with the leading Wilson coefficient in the OPE and also clarify that the power corrections originate from the UV region. We examine the scheme dependence of X_UV and single out a specific scheme favorable in terms of analytical properties. Our method would be optimal with respect to systematicity, analyticity and stability. We test our formulation with the examples of the Adler function, the QCD force between a static quark-antiquark pair (QQ̄), and the R-ratio in e⁺e⁻ collisions.
The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.
NASA Astrophysics Data System (ADS)
Loridan, V.; Ripoll, J. F.; De Vuyst, F.
2017-12-01
Much work has been done during the past 40 years on analytical solutions of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron loss with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the number of eigenmodes N required for an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also to Jupiter and Saturn are discussed.
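A single-interval toy version of the eigenfunction-expansion idea can be checked numerically: with a uniform lifetime tau and unit diffusion on [0, 1], the first mode decays at the closed-form rate pi^2 + 1/tau. The equation and parameters below are illustrative only, not the paper's magnetospheric model with D_LL ∝ L^6 and M matched subintervals:

```python
import numpy as np

# df/dt = d2f/dL2 - f/tau on [0, 1], f = 0 at both ends.
# The first eigenmode sin(pi L) decays at the rate pi^2 + 1/tau.
tau, T = 0.5, 0.05
L = np.linspace(0.0, 1.0, 201)
dx, dt = L[1] - L[0], 1e-5        # dt/dx^2 = 0.4, explicit-stable

f = np.sin(np.pi * L)             # initial condition: first eigenmode
for _ in range(int(round(T / dt))):
    f[1:-1] += dt * ((f[2:] - 2*f[1:-1] + f[:-2]) / dx**2 - f[1:-1] / tau)

f_exact = np.exp(-(np.pi**2 + 1/tau) * T) * np.sin(np.pi * L)
err = np.abs(f - f_exact).max()   # finite differences vs. closed form
```

With a piecewise-constant lifetime, as in the paper, each subinterval carries its own such decay and the modes must be matched at the M interfaces.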
Wang, Pei; Yu, Zhiguo
2015-10-01
Near infrared (NIR) spectroscopy as a rapid and nondestructive analytical technique, integrated with chemometrics, is a powerful process analytical tool for the pharmaceutical industry and is becoming an attractive complementary technique for herbal medicine analysis. This review mainly focuses on the recent applications of NIR spectroscopy in species authentication of herbal medicines and their geographical origin discrimination.
Mitsumori, K
1993-01-01
The maximum residue level (MRL) for veterinary drugs in food of animal origin has been proposed by FAO/WHO as a new evaluation procedure, taking into account the presence of metabolites, for the regulation of veterinary drug residues. The MRL is the maximum concentration of residue, resulting from the use of a veterinary drug, that is recommended to be legally permitted as acceptable in a food. It is established from the Acceptable Daily Intake (ADI) obtained from toxicological studies, the residue concentration of the drug when used according to good practice in the use of veterinary drugs, and the lowest level consistent with the practical analytical methods available for routine residue analysis. Among veterinary drugs, some chemicals leave a large amount of bound residues that are not extractable from tissues by the analytical method used for the parent chemicals. In particular, the bioavailable bound residues, which are probably absorbed when the food is ingested, are of great toxicological concern. In this case, the FAO/WHO recommends that the MRL can be established after calculating the daily intake of residues of toxicological concern as the sum of the extractable and bioavailable bound residues.
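The ADI-based bookkeeping behind an MRL can be sketched as a simple intake check: the total residue intake, including bioavailable bound residues, must stay below the ADI times body weight. All numbers below are hypothetical:

```python
def within_adi(adi_mg_per_kg_bw, body_weight_kg, intakes):
    """Compare total residue intake (extractable plus bioavailable bound
    residues, summed over foods as mg residue per kg food * kg food per
    day) against the ADI-derived daily allowance."""
    allowance = adi_mg_per_kg_bw * body_weight_kg
    total = sum(residue * food for residue, food in intakes)
    return total, allowance, total <= allowance

# Hypothetical: ADI 0.01 mg/kg bw, 60 kg consumer, muscle and milk residues
total, allowance, ok = within_adi(0.01, 60.0, [(0.1, 0.3), (0.05, 1.5)])
```

An MRL is acceptable only if the resulting theoretical daily intake stays under this allowance while remaining measurable by routine analytical methods.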
Wang, Deli; Xu, Wei; Zhao, Xiangrong
2016-03-01
This paper addresses the stationary responses of a Rayleigh viscoelastic system with zero-barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding equivalent stiffness and damping. By means of a non-smooth transformation of the state variables, this system is then replaced by a new system without an impact term. The stationary probability density functions of the system are obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. Notably, some system parameters can induce the occurrence of stochastic P-bifurcation.
Swelling-induced and controlled curving in layered gel beams
Lucantonio, A.; Nardinocchi, P.; Pezzulla, M.
2014-01-01
We describe swelling-driven curving in originally straight, non-homogeneous beams. We present and verify a structural model of swollen beams that adopts a new point of view on swelling-induced deformation processes in bilayered gel beams: the swelling-induced deformation of the beam at equilibrium is split into two components, both depending on the elastic properties of the gel. The method allows us to: (i) determine beam stretching and curving once the characteristics of the solvent bath and of the non-homogeneous beam are assigned, and (ii) estimate the characteristics of non-homogeneous flat gel beams so as to obtain, under free-swelling conditions, three-dimensional shapes. The study was pursued by means of analytical, semi-analytical and numerical tools; excellent agreement of the outcomes of the different techniques was found, thus confirming the strength of the method. PMID:25383031
NASA Astrophysics Data System (ADS)
Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.
2018-03-01
Herein we propose a methodology for constructing fully parametric analytical solutions to problems of elastostatic media, based on state-of-the-art computing facilities that support computer algebra. The methodology includes: direct and reverse application of the P-Theorem; methods of accounting for the physical properties of media; and accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool for this task is the method of boundary states, originally designed for computer algebra and based on the isomorphism of the Hilbert spaces of internal states and boundary states of bodies. We obtained fully parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unbounded medium with two spherical cavities.
Application of analytical methods in authentication and adulteration of honey.
Siddiqui, Amna Jabbar; Musharraf, Syed Ghulam; Choudhary, M Iqbal; Rahman, Atta-Ur-
2017-02-15
Honey is synthesized from flower nectar and has been famed for its tremendous therapeutic potential since ancient times. Many factors influence the basic properties of honey, including the nectar-providing plant species, the bee species, the geographic area, and harvesting conditions. The quality and composition of honey are also affected by many other factors, such as overfeeding of bees with sucrose, harvesting prior to maturity, and adulteration with sugar syrups. Due to the complex nature of honey, it is often challenging to authenticate its purity and quality using common methods such as physicochemical parameters, and more specialized procedures need to be developed. This article reviews the literature (between 2000 and 2016) on the use of analytical techniques, mainly NMR spectroscopy, for authentication of honey, its botanical and geographical origin, and adulteration by sugar syrups. NMR is a powerful technique and can be used as a fingerprinting technique to compare various samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
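Fingerprint-style comparison of spectra typically starts with a projection such as PCA. The sketch below uses synthetic "spectra" with a hypothetical adulterant peak; it illustrates the generic chemometric step, not any specific NMR workflow from the review:

```python
import numpy as np

def pca_scores(X, k=2):
    """Project rows of X (one spectrum per row) onto the first k
    principal components via SVD of the mean-centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, 3, 100))                    # shared "spectrum"
bump = np.exp(-0.5 * ((np.arange(100) - 70) / 3.0)**2)   # hypothetical syrup peak
authentic = base + 0.05 * rng.normal(size=(10, 100))
adulterated = base + bump + 0.05 * rng.normal(size=(10, 100))
scores = pca_scores(np.vstack([authentic, adulterated])) # (20, 2) scores
```

On the first principal component the two groups separate cleanly, which is the kind of pattern an NMR fingerprinting study then interprets chemically.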
Houiste, Céline; Auguste, Cécile; Macrez, Céline; Dereux, Stéphanie; Derouet, Angélique; Anger, Pascal
2009-02-01
Low-molecular-weight heparins (LMWHs) are widely used in the management of thrombosis and acute coronary syndromes. They are obtained by the enzymatic or chemical depolymerization of porcine intestinal heparin. Enoxaparin sodium, a widely used LMWH, has a unique and reproducible oligosaccharide profile which is determined by the origin of the starting material and a tightly controlled manufacturing process. Although other enoxaparin-like LMWHs do exist, specific release criteria including the origin of the crude heparin utilized for their production, have not been established. A quantitative polymerase chain reaction method has been developed to ensure the purity of the porcine origin of crude heparin, with a DNA detection limit as low as 1 ppm for bovine, or 10 ppm for ovine contaminants. This method is routinely used as the release acceptance criterion during enoxaparin sodium manufacturing. Furthermore, when the process removes DNA, other analytical techniques can be used to assess any contamination. Disaccharide profiling after exhaustive depolymerization can determine the presence of at least 10% bovine or 20% ovine material; multivariate analysis is useful to perform the data analysis. Consistent with the availability of newer technology, these methods should be required as acceptance criteria for crude heparins used in the manufacture of LMWHs to ensure their safety, quality, and immunologic profile.
NASA Astrophysics Data System (ADS)
Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen
2017-04-01
In this study, analytical solutions of one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using the Green's function method (GFM) and pertinent coordinate transformations. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and the dispersion coefficient varies as the square of the velocity, while the temporal dependence may be linear, exponentially or asymptotically decelerating, or accelerating. Our proposed analytical solutions are derived for three different situations, depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case concerns transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistic flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. These capabilities give the proposed method an advantage over previously existing analytical solutions in the range of hydrological problems it can address.
To the authors' knowledge, no other solution exists for simultaneous spatial and temporal variation of both the dispersion coefficient and the velocity. In this study, existing analytical solutions from widely known previous studies are used for comparison, as validation tools to verify the proposed analytical solutions as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the 1D finite difference code (FDM) developed here. All of these comparisons show an excellent match with the corresponding proposed solutions.
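The constant-coefficient special case allows the same kind of validation the authors perform: a finite-difference run can be checked against the closed-form Gaussian plume for an instantaneous point source. This is a generic sketch, not the 2DFATMIC code or the paper's variable-coefficient solutions:

```python
import numpy as np

# Instantaneous point source of mass M in uniform flow:
# C(x, t) = M / sqrt(4 pi D t) * exp(-(x - v t)^2 / (4 D t))
D, v, M, t0, t1 = 0.5, 1.0, 1.0, 0.2, 2.0
x = np.linspace(-10.0, 20.0, 601)
dx, dt = x[1] - x[0], 1e-3

def plume(t):
    return M / np.sqrt(4*np.pi*D*t) * np.exp(-(x - v*t)**2 / (4*D*t))

C = plume(t0)                                   # start from the analytic profile
for _ in range(int(round((t1 - t0) / dt))):     # explicit FD march to t1
    C[1:-1] += dt * (D * (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2
                     - v * (C[2:] - C[:-2]) / (2*dx))

err = np.abs(C - plume(t1)).max()               # vs. closed-form solution
mass = C.sum() * dx                             # should stay ~= M
```

The plume advects and spreads while the numerical solution tracks the analytic one and conserves mass, which is exactly the agreement a validation exercise looks for.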
van Stee, Leo L P; Brinkman, Udo A Th
2011-10-28
A method is presented to facilitate the non-target analysis of data obtained in temperature-programmed comprehensive two-dimensional (2D) gas chromatography coupled to time-of-flight mass spectrometry (GC×GC-ToF-MS). One main difficulty of GC×GC data analysis is that each peak is usually modulated several times and therefore appears as a series of peaks (or peaklets) in the one-dimensionally recorded data. The proposed method, 2DAid, uses basic chromatographic laws to calculate the theoretical shape of a 2D peak (a cluster of peaklets originating from the same analyte) in order to define the area in which the peaklets of each individual compound can be expected to show up. Based on analyte-identity information obtained by means of mass spectral library searching, the individual peaklets are then combined into a single 2D peak. The method is applied, amongst others, to a complex mixture containing 362 analytes. It is demonstrated that the 2D peak shapes can be accurately predicted and that clustering and further processing can reduce the final peak list to a manageable size. Copyright © 2011 Elsevier B.V. All rights reserved.
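The clustering step, combining peaklets that share a library identification and fall within an expected first-dimension window, can be sketched as follows. This is a simplified stand-in for 2DAid, which predicts the window from chromatographic peak-shape laws rather than a fixed tolerance:

```python
def cluster_peaklets(peaklets, rt1_tol=10.0):
    """Merge modulated peaklets that share a library ID and fall within
    a first-dimension retention-time window into single 2D peaks.
    Each peaklet is (library_id, first-dimension RT in s, area)."""
    peaks = []
    for pid, rt1, area in sorted(peaklets, key=lambda p: (p[0], p[1])):
        if peaks and peaks[-1]["id"] == pid and rt1 - peaks[-1]["rt1"] <= rt1_tol:
            peaks[-1]["area"] += area          # same analyte: merge peaklet
            peaks[-1]["rt1"] = rt1
        else:
            peaks.append({"id": pid, "rt1": rt1, "area": area})
    return peaks

peaklets = [("caffeine", 100.0, 5.0), ("caffeine", 104.0, 9.0),
            ("caffeine", 108.0, 4.0), ("decane", 250.0, 7.0)]
merged = cluster_peaklets(peaklets)    # 4 peaklets -> 2 compounds
```

Collapsing each series of peaklets into one 2D peak is what reduces the final peak list to a manageable size.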
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.
2011-01-01
The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.
Hercegová, Andrea; Dömötörová, Milena; Kruzlicová, Dása; Matisová, Eva
2006-05-01
Four sample preparation techniques were compared for the ultratrace analysis of pesticide residues in baby food: (a) a modified Schenck's method based on ACN extraction with SPE clean-up; (b) the quick, easy, cheap, effective, rugged, and safe (QuEChERS) method based on ACN extraction and dispersive SPE; (c) a modified QuEChERS method which utilizes column-based SPE instead of dispersive SPE; and (d) matrix solid phase dispersion (MSPD). The methods were combined with fast gas chromatographic-mass spectrometric analysis. The effectiveness of clean-up of the final extract was determined by comparison of the chromatograms obtained. Time consumption, laboriousness, demands on glassware and workspace, and consumption of chemicals, especially solvents, increase in the following order: QuEChERS < modified QuEChERS < MSPD < modified Schenck's method. All methods offer satisfactory analytical characteristics at concentration levels of 5, 10, and 100 microg/kg in terms of recoveries and repeatability. Recoveries obtained for the modified QuEChERS method were lower than for the original QuEChERS. In general, the best LOQs were obtained for the modified Schenck's method. The modified QuEChERS method provides 21-72% better LOQs than the original method.
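The two figures of merit compared across the four methods, recovery and repeatability (RSD), can be computed as below; the replicate values are hypothetical:

```python
def recovery_and_rsd(measured, spiked):
    """Recovery (%) and repeatability as relative standard deviation (%)
    for replicate determinations at one spiking level."""
    n = len(measured)
    mean = sum(measured) / n
    sd = (sum((m - mean)**2 for m in measured) / (n - 1)) ** 0.5
    return 100.0 * mean / spiked, 100.0 * sd / mean

# Hypothetical replicates for a 10 microg/kg spike:
rec, rsd = recovery_and_rsd([9.4, 9.8, 10.1, 9.6, 9.9], 10.0)
```

Method-comparison studies such as this one typically require recovery within roughly 70-120% and a low RSD at each spiking level.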
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
Analytical Chemistry and Measurement Science: (What Has DOE Done for Analytical Chemistry?)
DOE R&D Accomplishments Database
Shults, W. D.
1989-04-01
Over the past forty years, analytical scientists within the DOE complex have had a tremendous impact on the field of analytical chemistry. This paper suggests six "high impact" research/development areas that either originated within or were brought to maturity within the DOE laboratories. "High impact" means they led to new subdisciplines or to new ways of doing business.
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1988-01-01
A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Improved internal grid generation using analytic conformal mappings, an internal geometry package, and a simple geometric input based on the Harris wave-drag format originally developed for panel methods are among the new features.
Laguna, George R.; Peter, Frank J.; Butler, Michael A.
1999-01-01
A new chemical probe determines the properties of an analyte using the light absorption of the products of a reagent/analyte reaction. The probe places a small reaction volume in contact with a large analyte volume. Analyte diffuses into the reaction volume. Reagent is selectively supplied to the reaction volume. The light absorption of the reaction in the reaction volume indicates properties of the original analyte. The probe is suitable for repeated use in remote or hostile environments. It does not require physical sampling of the analyte or result in significant reagent contamination of the analyte reservoir.
A simple mass-conserved level set method for simulation of multiphase flows
NASA Astrophysics Data System (ADS)
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for the simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass gain. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation together with the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as that of the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface while conserving mass. The proposed method is then further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
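The core idea, a correction that restores the enclosed mass after each level set update, can be illustrated with a one-dimensional toy sketch. The grid, the first-order upwind advection, and the uniform-shift correction below are illustrative choices, not the authors' derived source term or discretization:

```python
import numpy as np

def smeared_heaviside(phi, eps):
    # Standard smoothed Heaviside used to measure "mass" (here the
    # length of the region where phi < 0) on a discrete grid.
    h = 0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi)
    return np.clip(h, 0.0, 1.0)

def mass(phi, dx, eps):
    return np.sum(1.0 - smeared_heaviside(phi, eps)) * dx

def advect_and_conserve(phi, u, dt, dx, eps, m0):
    # First-order upwind advection (u > 0, periodic), which slowly
    # distorts the mass; then a spatially uniform shift of phi -- the
    # simplest stand-in for a global source/sink term -- is found by
    # bisection so the enclosed mass returns to m0.
    c = u * dt / dx
    phi = phi - c * (phi - np.roll(phi, 1))
    lo, hi = -10.0 * eps, 10.0 * eps       # mass is decreasing in the shift
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mass(phi + mid, dx, eps) > m0:
            lo = mid                       # mass too large: shift inward
        else:
            hi = mid
    return phi + 0.5 * (lo + hi)

n = 400
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
eps = 1.5 * dx
phi = np.abs(x - 0.3) - 0.1                # "fluid" occupies [0.2, 0.4]
m0 = mass(phi, dx, eps)
for _ in range(200):
    phi = advect_and_conserve(phi, 1.0, 0.4 * dx, dx, eps, m0)
print(abs(mass(phi, dx, eps) - m0))        # residual mass drift
```

Even with a diffusive low-order advection scheme, the per-step correction keeps the global mass pinned to its initial value, which is the property the paper's analytically derived source term provides within the level set equation itself.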
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eglinton, T.I.; Goni, M.A.; Boon, J.J.
1995-12-31
Tissue samples from Ginkgo shoots (Ginkgo biloba L.) and rice grass (Oryza sativa) incubated in the presence of ¹³C-labeled substrates such as coniferin (postulated biosynthetic intermediates in lignin biosynthesis) were studied using thermal and chemical dissociation methods in combination with molecular-level isotopic measurements. The aims of the study were (1) to investigate dissociation mechanisms, and (2) to examine and quantify the proportions of labeled material incorporated within each sample. Isotopic analysis of specific dissociation products revealed the presence of the label in its original positions, and only within lignin-derived (phenolic) products. Moreover, the distribution and isotopic composition of the dissociation products strongly suggest an origin from newly formed lignin. These results clearly indicate that there is no "scrambling" of carbon atoms as a result of the dissociation process, thereby lending support to this analytical approach. In addition, the data provide confidence in the selective labeling approach for elucidation of the structure and biosynthesis of lignin.
Hopkins, D.M.
1991-01-01
Trace metals that are commonly associated with mineralization were concentrated and separated from natural water by coprecipitation with ammonium pyrrolidine dithiocarbamate (APDC) and cobalt, and determined by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). The method is useful in hydrogeochemical surveys because it permits preconcentration near the sample sites, and selected metals are preserved shortly after the samples are collected. The procedure is relatively simple: (1) a liter of water is filtered; (2) the pH is adjusted; (3) Co chloride and APDC are added to coprecipitate the trace metals; and (4) later, the precipitate is filtered, dissolved, and diluted to 10 ml for a 100-fold concentration enrichment of the separated metals. Sb(III), As(III), Cd, Cr, Cu, Fe, Pb, Mo, Ni, Ag, V, and Zn can then be determined simultaneously by ICP-AES. In an experiment designed to measure the coprecipitation efficiency, Sb(III), Cd and Ag were recovered at 70 to 75% of their original concentrations, whereas the remaining metals were recovered at 85 to 100%. The range of the lower limits of determination for the metals after preconcentration is 0.1 to 3.0 µg/L. The precision of the method was evaluated by replicate analyses of a Colorado creek water and two simulated water samples. The accuracy of the method was estimated using a water reference standard (SRM 1643a) certified by the U.S. National Bureau of Standards. In addition, the method was evaluated by analyzing groundwater samples collected near a porphyry copper deposit in Arizona and by analyzing meltwater from glacier-covered areas favorable for mineralization in south-central Alaska. The results of the ICP-AES analyses compared favorably with those obtained using the sequential GFAAS technique on the acidified but unconcentrated water samples.
ICP-AES analysis of trace-metal preconcentrates for hydrogeochemical surveys is more efficient than GFAAS because a large suite of metals is simultaneously determined with acceptable analytical accuracy and precision. The proposed analytical technique can provide direct evidence of mineralization and is useful in the exploration for unknown ore deposits. © 1991.
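The arithmetic behind the enrichment step is worth making explicit: reducing a 1 L sample to a 10 mL final volume gives the 100-fold preconcentration, and a known coprecipitation recovery lets the original water concentration be back-calculated. A minimal sketch; the function name and the recovery-correction step are illustrative, not part of the published procedure:

```python
def original_concentration(measured_ug_per_l, sample_ml=1000.0,
                           final_ml=10.0, recovery=1.0):
    """Back-calculate the trace-metal concentration in the original
    water sample from the reading on the preconcentrated extract."""
    preconc_factor = sample_ml / final_ml   # 1000 mL -> 10 mL = 100x
    return measured_ug_per_l / (preconc_factor * recovery)

# A metal recovered at 70% that reads 70 ug/L in the 10 mL concentrate
# corresponds to 1 ug/L in the original liter of water.
print(original_concentration(70.0, recovery=0.70))
```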
Explicit solutions for exit-only radioactive decay chains
NASA Astrophysics Data System (ADS)
Yuan, Ding; Kernan, Warnick
2007-05-01
In this study, we extended Bateman's [Proc. Cambridge Philos. Soc. 15, 423 (1910)] original work for solving radioactive decay chains and explicitly derived analytic solutions for generic exit-only radioactive decay problems under given initial conditions. Instead of using the conventional Laplace transform for solving Bateman's equations, we used a much simpler algebraic approach. Finally, we discuss methods of breaking down certain classes of large decay chains into collections of simpler chains for easy handling.
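For a linear chain with distinct decay constants, Bateman's classical closed-form result is compact enough to state in a few lines of code. This is a textbook sketch of the 1910 formula, not the authors' algebraic derivation for exit-only chains:

```python
import math

def bateman_last_member(n1_0, lambdas, t):
    """Number of atoms of the n-th member of a linear decay chain at
    time t, given n1_0 atoms of the first member at t = 0 and distinct
    decay constants lambdas[0..n-1] (classical Bateman solution)."""
    n = len(lambdas)
    prefactor = n1_0
    for lam in lambdas[:-1]:
        prefactor *= lam                      # product of feeding rates
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= lambdas[j] - lambdas[i]
        total += math.exp(-lambdas[i] * t) / denom
    return prefactor * total

# Two-member chain, for which the closed form reduces to
# N2(t) = lam1 * N0 * (exp(-lam1*t) - exp(-lam2*t)) / (lam2 - lam1).
print(bateman_last_member(1.0, [1.0, 2.0], 1.0))
```

For a single nuclide the formula collapses to simple exponential decay, which makes a convenient sanity check.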
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khamzin, A. A., E-mail: airat.khamzin@rambler.ru; Sitdikov, A. S.; Nikitin, A. S.
An original method for calculating the moment of inertia of the collective rotation of a nucleus on the basis of the cranking model with the harmonic-oscillator Hamiltonian at arbitrary frequencies of rotation and finite temperature is proposed. In the adiabatic limit, an oscillating chemical-potential dependence of the moment of inertia is obtained by means of analytic calculations. The oscillations of the moment of inertia become more pronounced as deformations approach the spherical limit and decrease exponentially with increasing temperature.
Emshwiller, Eve; Theim, Terra; Grau, Alfredo; Nina, Victor; Terrazas, Franz
2009-10-01
Many crops are polyploids, and it can be challenging to untangle the often complicated history of their origins of domestication and origins of polyploidy. To complement other studies of the origins of polyploidy of the octoploid tuber crop oca (Oxalis tuberosa) that used DNA sequence data and phylogenetic methods, we here compared AFLP data for oca with four wild, tuber-bearing Oxalis taxa found in different regions of the central Andes. Results confirmed the divergence of two use-categories of cultivated oca that indigenous farmers use for different purposes, suggesting the possibility that they might have had separate origins of domestication. Despite previous results with nuclear-encoded, chloroplast-expressed glutamine synthetase suggesting that O. picchensis might be a progenitor of oca, AFLP data of this species, as well as different populations of wild, tuber-bearing Oxalis found in Lima Department, Peru, were relatively divergent from O. tuberosa. Results from all analytical methods suggested that the unnamed wild, tuber-bearing Oxalis found in Bolivia and O. chicligastensis in NW Argentina are the best candidates as the genome donors for polyploid O. tuberosa, but the results were somewhat equivocal about which of these two taxa is the more strongly supported as oca's progenitor.
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
2016-11-16
In this study, Welander's classical oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly unstable, and strongly unstable modes, and a theoretical stability map was derived in the original stability analysis. Numerical results obtained in this paper show very good agreement with Welander's theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the analytically derived non-dimensional flow rate, although the high-order methods give much smaller numerical errors. For stability analysis, the high-order numerical methods predict the stability map correctly, while the low-order numerical methods fail to do so: for all theoretically unstable cases, the low-order methods predicted stable behavior. These results are strong evidence of the benefits of high-order numerical methods over low-order ones when simulating natural circulation phenomena, which have gained increasing interest for many future nuclear reactor designs.
HUMAN EYE OPTICS: Determination of positions of optical elements of the human eye
NASA Astrophysics Data System (ADS)
Galetskii, S. O.; Cherezova, T. Yu
2009-02-01
An original method for noninvasively determining the positions of elements of intraocular optics is proposed. The analytic dependence of the measurement error on the optical-scheme parameters and the restriction in distance from the element being measured are determined within the framework of the proposed method. It is shown that the method can be efficiently used for determining the position of elements in the classical Gullstrand eye model and in personalised eye models. The positions of six optical surfaces of the Gullstrand eye model and four optical surfaces of the personalised eye model can be determined with an error of less than 0.25 mm.
Symmetry-plane model of 3D Euler flows: Mapping to regular systems and numerical solutions of blowup
NASA Astrophysics Data System (ADS)
Mulungye, Rachel M.; Lucas, Dan; Bustamante, Miguel D.
2014-11-01
We introduce a family of 2D models describing the dynamics on the so-called symmetry plane of the full 3D Euler fluid equations. These models depend on a free real parameter and can be solved analytically. For selected representative values of the free parameter, we apply the method introduced in [M.D. Bustamante, Physica D: Nonlinear Phenom. 240, 1092 (2011)] to map the fluid equations bijectively to globally regular systems. By comparing the analytical solutions with the results of numerical simulations, we establish that the numerical simulations of the mapped regular systems are far more accurate than the numerical simulations of the original systems, at the same spatial resolution and CPU time. In particular, the numerical integrations of the mapped regular systems produce robust estimates for the growth exponent and singularity time of the main blowup quantity (vorticity stretching rate), converging well to the analytically-predicted values even beyond the time at which the flow becomes under-resolved (i.e. the reliability time). In contrast, direct numerical integrations of the original systems develop unstable oscillations near the reliability time. We discuss the reasons for this improvement in accuracy, and explain how to extend the analysis to the full 3D case. Supported under the programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund.
Wang, Jindong; Chen, Peng; Deng, Yufen; Guo, Junjie
2018-01-01
As a three-dimensional measuring instrument, the laser tracker is widely used in industrial measurement. To avoid the influence of angle measurement error on the overall measurement accuracy, multi-station and time-sharing measurement with a laser tracker is introduced in this paper on the basis of the global positioning system (GPS) principle. For the proposed method, accurately determining the coordinates of each measuring point from a large amount of measured data is a critical issue. Taking the detection of the motion error of a numerical control machine tool as an example, the corresponding measurement algorithms are investigated thoroughly. By establishing the mathematical model of detecting the motion error of a machine tool with this method, an analytical algorithm for base station calibration and measuring point determination is deduced that requires no initial iterative value. However, when the motion area of the machine tool lies in a 2D plane, the coefficient matrix of the base station calibration is singular, which distorts the result. To overcome this limitation of the original algorithm, an improved analytical algorithm is also derived. The calibration accuracy of the base station with the improved algorithm is compared with that of the original analytical algorithm and of iterative algorithms such as the Gauss-Newton and Levenberg-Marquardt algorithms. The experiment further verifies the feasibility and effectiveness of the improved algorithm. In addition, the influence of the machine tool's different motion areas on the calibration accuracy of the base station, and the dependence of the calibration result on measurement error through the condition number of the coefficient matrix, are analyzed.
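The point-determination step is the classical multilateration problem: given calibrated base-station positions and measured ranges, solve for the point coordinates. A generic least-squares sketch (iterative Gauss-Newton, not the paper's closed-form analytical algorithm) shows the structure, including why geometry drives the conditioning:

```python
import numpy as np

def locate_point(stations, distances, x0, iters=25):
    """Gauss-Newton multilateration: find the point whose distances to
    the known stations best match the measured distances."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - stations                  # (m, 3) offset vectors
        r = np.linalg.norm(diff, axis=1)     # predicted ranges
        J = diff / r[:, None]                # Jacobian d(range)/dx
        dx, *_ = np.linalg.lstsq(J, distances - r, rcond=None)
        x = x + dx
    return x

# Synthetic check: four non-coplanar stations, exact range data.
stations = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0],
                     [0.0, 3.0, 0.0], [0.0, 0.0, 3.0]])
p_true = np.array([1.0, 1.2, 0.8])
d = np.linalg.norm(stations - p_true, axis=1)
p = locate_point(stations, d, x0=[0.5, 0.5, 0.5])
print(np.abs(p - p_true).max())
```

When the stations and the measured point all lie in one plane, the rows of the Jacobian lose their out-of-plane component and the normal matrix becomes (nearly) singular — the same degeneracy the abstract describes for a 2D motion area, and the situation the improved algorithm is designed to handle.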
Characterization of Compton-scatter imaging with an analytical simulation method
Jones, Kevin C; Redler, Gage; Templeton, Alistair; Bernard, Damian; Turian, Julius V; Chu, James C H
2018-01-01
By collimating the photons scattered when a megavoltage therapy beam interacts with the patient, a Compton-scatter image may be formed without the delivery of an extra dose. To characterize and assess the potential of the technique, an analytical model for simulating scatter images was developed and validated against Monte Carlo (MC). For three phantoms, the scatter images collected during irradiation with a 6 MV flattening-filter-free therapy beam were simulated. Images, profiles, and spectra were compared for different phantoms and different irradiation angles. The proposed analytical method simulates accurate scatter images up to 1000 times faster than MC. Minor differences between MC and analytical simulated images are attributed to limitations in the isotropic superposition/convolution algorithm used to analytically model multiple-order scattering. For a detector placed at 90° relative to the treatment beam, the simulated scattered photon energy spectrum peaks at 140–220 keV, and 40–50% of the photons are the result of multiple scattering. The high energy photons originate at the beam entrance. Increasing the angle between source and detector increases the average energy of the collected photons and decreases the relative contribution of multiple scattered photons. Multiple scattered photons cause blurring in the image. For an ideal 5 mm diameter pinhole collimator placed 18.5 cm from the isocenter, 10 cGy of deposited dose (2 Hz imaging rate for 1200 MU min−1 treatment delivery) is expected to generate an average 1000 photons per mm2 at the detector. For the considered lung tumor CT phantom, the contrast is high enough to clearly identify the lung tumor in the scatter image. Increasing the treatment beam size perpendicular to the detector plane decreases the contrast, although the scatter subject contrast is expected to be greater than the megavoltage transmission image contrast. 
With the analytical method, real-time tumor tracking may be possible through comparison of simulated and acquired patient images. PMID:29243663
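The reported 90° spectrum is consistent with single-scatter Compton kinematics, where the scattered photon energy at a fixed angle saturates at a value set by the electron rest energy. A quick sketch of the standard kinematic formula (not the authors' simulation code):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_scattered_energy(e_kev, theta_rad):
    """Energy of a photon of energy e_kev after a single Compton
    scattering event through angle theta_rad."""
    return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - math.cos(theta_rad)))

# At 90 degrees the scattered energy can never exceed 511 keV, no matter
# how energetic the incident megavoltage photon is.
for e in (200.0, 511.0, 2000.0, 6000.0):
    print(round(compton_scattered_energy(e, math.pi / 2), 1))
```

This single-scatter ceiling, combined with the spectrum of a 6 MV beam, is why the detected energies cluster in the few-hundred-keV range, with multiple scattering pushing the observed peak lower still.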
NASA Astrophysics Data System (ADS)
Le Doussal, Pierre; Petković, Aleksandra; Wiese, Kay Jörg
2012-06-01
We study the motion of an elastic object driven in a disordered environment in the presence of both dissipation and inertia. We consider random forces with the statistics of random walks and reduce the problem to a single degree of freedom. It is the extension of the mean-field Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model in the presence of an inertial mass m. While the ABBM model can be solved exactly, its extension to inertia exhibits complicated history dependence due to oscillations and backward motion. The characteristic scales for avalanche motion are studied from numerics and qualitative arguments. To make analytical progress, we consider two variants which coincide with the original model whenever the particle moves only forward. Using a combination of analytical and numerical methods together with simulations, we characterize the distributions of instantaneous acceleration and velocity, and compare them in these three models. We show that for large driving velocity, all three models share the same large-deviation function for positive velocities, which is obtained analytically for small and large m, as well as for m=6/25. The effect of small additional thermal and quantum fluctuations can be treated within an approximate method.
Aydin, Funda Armagan; Soylak, Mustafa
2010-01-15
A simple and effective method is presented for the separation and preconcentration of Th(IV), Ti(IV), Fe(III), Pb(II) and Cr(III) by solid phase extraction on 2-nitroso-1-naphthol-impregnated MCI GEL CHP20P resin prior to their inductively coupled plasma-mass spectrometric determination. The influence of analytical parameters, including the pH of the aqueous solution, the flow rates of sample and eluent solutions, and the sample volume, on the quantitative recoveries of the analyte ions was investigated. Matrix effects caused by the presence of alkali, alkaline earth and some other metal ions in the analyzed solutions were also investigated. The presented solid phase extraction method was applied to BCR-144R Sewage Sludge (domestic origin), BCR-141R Calcareous Loam Soil, NIST 1568a Rice Flour and NIST 1577b Bovine Liver certified reference materials (CRMs) for the determination of the analyte ions, and the results were in good agreement with the certified values. The separation procedure was also applied to various natural water samples collected from Turkey with satisfactory results.
Override the controversy: Analytic thinking predicts endorsement of evolution.
Gervais, Will M
2015-09-01
Despite overwhelming scientific consensus, popular opinions regarding evolution are starkly divided. In the USA, for example, nearly one in three adults espouse a literal and recent divine creation account of human origins. Plausibly, resistance to scientific conclusions regarding the origins of species, like much resistance to other scientific conclusions (Bloom & Weisberg, 2007), gains support from reliably developing intuitions. Intuitions about essentialism, teleology, agency, and order may combine to make creationism potentially more cognitively attractive than evolutionary concepts. However, dual process approaches to cognition recognize that people can often analytically override their intuitions. Two large studies (total N=1324) found consistent evidence that a tendency to engage in analytic thinking predicted endorsement of evolution, even controlling for relevant demographic, attitudinal, and religious variables. Meanwhile, exposure to religion predicted reduced endorsement of evolution. Cognitive style is one factor among many affecting opinions on the origin of species. Copyright © 2015 Elsevier B.V. All rights reserved.
Gbylik-Sikorska, Malgorzata; Sniegocki, Tomasz; Posyniak, Andrzej
2015-05-15
An original analytical method was developed for the simultaneous determination and confirmation of neonicotinoid insecticides (imidacloprid, clothianidin, acetamiprid, thiamethoxam, thiacloprid, nitenpyram, dinotefuran) and some of their metabolites (imidacloprid guanidine, imidacloprid olefin, imidacloprid urea, desnitro-imidacloprid hydrochloride, thiacloprid-amide and acetamiprid-N-desmethyl) in honey bee and honey. Preparation of honey bee samples involves extraction with a mixture of acetonitrile and ethyl acetate followed by clean-up on Sep-Pak Alumina N Plus Long cartridges. Honey samples were dissolved in a 1% mixture of acetonitrile and ethyl acetate with addition of TEA, and the extracts were cleaned up with Strata X-CW cartridges. The identity of analytes was confirmed using liquid chromatography tandem mass spectrometry. All compounds were separated on a Luna C18 column with gradient elution. The whole procedure was validated according to the requirements of SANCO 12571/2013. The average recoveries of the analytes ranged from 85.3% to 112.0%, repeatabilities were in the range of 2.8-11.2%, within-laboratory reproducibility was in the range of 3.3-14.6%, and the limits of quantitation were in the range of 0.1-0.5 μg kg⁻¹, depending on the analyte and matrix. The validated method was successfully applied to the determination of clothianidin, imidacloprid and imidacloprid urea in real incurred honey bee samples and clothianidin in honey. Copyright © 2015 Elsevier B.V. All rights reserved.
Analytical tools for the analysis of β-carotene and its degradation products
Stutz, H.; Bresgen, N.; Eckl, P. M.
2015-01-01
β-Carotene, the precursor of vitamin A, possesses pronounced radical scavenging properties. This has focused attention on β-carotene dietary supplementation in healthcare as well as in the therapy of degenerative disorders and several cancer types. However, two intervention trials with β-carotene have revealed adverse effects on two proband groups, that is, cigarette smokers and asbestos-exposed workers. Beside other causative reasons, the detrimental effects observed have been related to the oxidation products of β-carotene. Their generation originates in the polyene structure of β-carotene that is beneficial for radical scavenging, but is also prone to oxidation. Depending on the dominant degradation mechanism, bond cleavage might occur either randomly or at defined positions of the conjugated electron system, resulting in a diversity of cleavage products (CPs). Due to their instability and hydrophobicity, the handling of standards and real samples containing β-carotene and related CPs requires preventive measures during specimen preparation, analyte extraction, and final analysis, to avoid artificial degradation and to preserve the initial analyte portfolio. This review critically discusses different preparation strategies of standards and treatment solutions, and also addresses their protection from oxidation. Additionally, in vitro oxidation strategies for the generation of oxidative model compounds are surveyed. Extraction methods are discussed for volatile and non-volatile CPs individually. Gas chromatography (GC), (ultra)high performance liquid chromatography ((U)HPLC), and capillary electrochromatography (CEC) are reviewed as analytical tools for final analyte analysis. For identity confirmation of analytes, mass spectrometry (MS) is indispensable, and the appropriate ionization principles are comprehensively discussed. 
The final sections cover analysis of real samples and aspects of quality assurance, namely matrix effects and method validation. PMID:25867077
Review of analytical methods for the quantification of iodine in complex matrices.
Shelor, C Phillip; Dasgupta, Purnendu K
2011-09-19
Iodine is an essential element of human nutrition. Nearly a third of the global population has insufficient iodine intake and is at risk of developing iodine deficiency disorders (IDD). Most countries have iodine supplementation and monitoring programs. Urinary iodide (UI) is the biomarker used for epidemiological studies; only a few methods are currently used routinely for analysis. These methods either require expensive instrumentation with qualified personnel (inductively coupled plasma-mass spectrometry, instrumental neutron activation analysis) or oxidative sample digestion to remove potential interferences prior to analysis by a kinetic colorimetric method originally introduced by Sandell and Kolthoff ~75 years ago. The Sandell-Kolthoff (S-K) method is based on the catalytic effect of iodide on the reaction between Ce(4+) and As(3+). No available technique fully fits the needs of developing countries; research into inexpensive, reliable methods and instrumentation is needed. There have been multiple reviews of methods used for epidemiological studies and of specific techniques; however, a general review of iodine determination in a wide-ranging set of complex matrices has not been available. While this review is not comprehensive, we cover the principal developments since the original development of the S-K method. Copyright © 2011 Elsevier B.V. All rights reserved.
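The quantitative core of the S-K method is a kinetic calibration: the observed rate of the iodide-catalyzed Ce(4+)/As(3+) reaction grows with iodide concentration, so standards define a calibration line that is inverted for unknowns. A sketch with invented calibration numbers (the linear rate-versus-concentration dependence is the method's premise; the values and function names are illustrative):

```python
import numpy as np

# Hypothetical calibration data: iodide standards (ug/L) versus the
# observed pseudo-first-order rate constant of Ce(4+) decay (1/min).
conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
rate = np.array([0.010, 0.035, 0.060, 0.110, 0.210])

slope, intercept = np.polyfit(conc, rate, 1)   # least-squares line

def iodide_from_rate(k_obs):
    """Invert the calibration line to estimate an unknown iodide level."""
    return (k_obs - intercept) / slope

print(iodide_from_rate(0.085))   # sample with an intermediate rate
```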
SIFT-MS and FA-MS methods for ambient gas phase analysis: developments and applications in the UK.
Smith, David; Španěl, Patrik
2015-04-21
Selected ion flow tube mass spectrometry, SIFT-MS, a relatively new gas/vapour-phase analytical method, is derived from the much earlier selected ion flow tube, SIFT, used for the study of gas-phase ion-molecule reactions. Both the SIFT and SIFT-MS techniques were conceived and developed in the UK, the former at Birmingham University, the latter at Keele University along with the complementary flowing afterglow mass spectrometry, FA-MS, technique. The focus of this short review is largely to describe the origins, developments and, most importantly, the unique features of SIFT-MS as an analytical tool for ambient analysis; to indicate its growing use to analyse humid air, especially exhaled breath; and to highlight its unique place as an on-line, real-time analytical method and its growing use as a non-invasive diagnostic in clinical diagnosis and therapeutic monitoring, principally within several UK universities and hospitals, and briefly in the wider world. A few case studies are outlined that show the potential of SIFT-MS and FA-MS in the detection and quantification of metabolites in exhaled breath as a step towards recognising pathophysiology indicative of disease and the presence of bacterial and fungal infection of the airways and lungs. Particular cases include the detection of Pseudomonas aeruginosa infection of the airways of patients with cystic fibrosis (SIFT-MS) and the measurement of total body water in patients with chronic kidney disease (FA-MS). The growing exploitation of SIFT-MS in other areas of research and commerce is briefly noted to show the wide utility of this unique UK-developed analytical method, and future prospects and developments are alluded to.
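The quantification principle that makes SIFT-MS an absolute, real-time method can be sketched in a few lines: in the low-conversion limit, the analyte number density follows from the ratio of product-ion to precursor-ion count rates, the ion-molecule reaction rate coefficient, and the reaction time. All numbers below are illustrative assumptions, not instrument constants.

```python
# Minimal sketch of the SIFT-MS quantification principle (low-conversion
# limit): [A] = I_product / (I_precursor * k * t). Values are assumed.

k = 2.5e-9           # cm^3 s^-1, typical ion-molecule rate coefficient
t = 5e-4             # s, reaction time in the flow tube (assumed)
I_precursor = 1.0e6  # counts/s for the precursor ion, e.g. H3O+
I_product = 50.0     # counts/s for the protonated analyte

n_analyte = I_product / (I_precursor * k * t)  # molecules per cm^3
print(f"{n_analyte:.2e}")  # 4.00e+07 cm^-3
```

This is why no calibration against standards is needed once k is known, which is the feature the review emphasizes for on-line breath analysis.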
Wang, Chuanxian; Qu, Li; Liu, Xia; Zhao, Chaomin; Zhao, Fengjuan; Huang, Fuzhen; Zhu, Zhenou; Han, Chao
2017-02-01
An analytical method has been developed for the detection of a metabolite of nifursol, 3,5-dinitrosalicylic acid hydrazide, in foodstuffs of animal origin (chicken liver, pork liver, lobster, shrimp, eel, sausage, and honey). The method combines liquid chromatography and tandem mass spectrometry with liquid-liquid extraction. Samples were hydrolyzed with hydrochloric acid and derivatized with 2-nitrobenzaldehyde at 37°C for 16 h. The solutions of derivatives were adjusted to pH 7.0-7.5, and the metabolite was extracted with ethyl acetate. 3,5-Dinitrosalicylic acid hydrazide determination was performed in negative electrospray ionization mode. Both an isotope-labeled internal standard and matrix-matched calibration solutions were used to correct for matrix effects. Limits of quantification were 0.5 μg/kg for all samples. The average recoveries, measured at three concentration levels (0.5, 2.0, and 10 μg/kg), were in the range of 75.8-108.4% with relative standard deviations below 9.8%. The developed method exhibits high sensitivity and selectivity for the routine determination and confirmation of the presence of a metabolite of nifursol in foodstuffs of animal origin. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations previously attributed to the brain response are systematic side effects of the methods used for EEG phase calculation, especially during low-analytic-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest-neighbors and random-forest classifiers on a standard BCI dataset. The average performance was improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
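The ensemble idea can be sketched minimally with FFT-based band-pass filtering and the analytic signal; here, jittering the band edges stands in for the zero-pole perturbations of the original method, and the circular (phasor) average combines the runs. This is a simplified illustration, not the authors' exact procedure.

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via the FFT-based analytic signal (Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(X * h))

def ensemble_phase(x, fs, band=(8.0, 12.0), n_runs=50, jitter=0.5, seed=0):
    """Phase from an ensemble of slightly perturbed band-pass filters."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.fftfreq(len(x), 1.0 / fs)
    acc = np.zeros(len(x), dtype=complex)
    for _ in range(n_runs):
        lo = band[0] + rng.uniform(-jitter, jitter)
        hi = band[1] + rng.uniform(-jitter, jitter)
        mask = (np.abs(freqs) >= lo) & (np.abs(freqs) <= hi)
        xf = np.real(np.fft.ifft(np.fft.fft(x) * mask))
        acc += np.exp(1j * analytic_phase(xf))
    return np.angle(acc / n_runs)  # circular mean over the ensemble

# Sanity check on a noisy 10 Hz tone: the unwrapped ensemble phase
# should advance at about 10 Hz.
fs, f0 = 250.0, 10.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) \
    + 0.1 * np.random.default_rng(1).standard_normal(len(t))
phi = np.unwrap(ensemble_phase(x, fs))
mid = slice(len(t) // 4, 3 * len(t) // 4)  # skip edge effects
f_est = np.mean(np.diff(phi[mid])) * fs / (2 * np.pi)
print(round(f_est, 1))
```

The averaging suppresses phase estimates that flip with tiny filter changes, which is exactly the low-amplitude artifact the abstract describes.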
Lahouidak, Samah; Salghi, Rachid; Zougagh, Mohammed; Ríos, Angel
2018-03-06
A capillary electrophoresis method was developed for the determination of coumarin (COUM), ethyl vanillin (EVA), p-hydroxybenzaldehyde (PHB), p-hydroxybenzoic acid (PHBA), vanillin (VAN), vanillic acid (VANA) and vanillic alcohol (VOH) in vanilla products. The measured concentrations are compared to values obtained by a liquid chromatography (LC) method. Analytical results, method precision, and accuracy data are presented; limits of detection for the method ranged from 2 to 5 μg/mL. The results are used to monitor the composition of vanilla flavorings, as well as to confirm the natural or non-natural origin of vanilla, demonstrated on four selected food samples containing this flavor. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Approximate method of variational Bayesian matrix factorization/completion with sparse prior
NASA Astrophysics Data System (ADS)
Kawasumi, Ryota; Takeda, Koujin
2018-05-01
We derive the analytical expression of a matrix factorization/completion solution by the variational Bayes method, under the assumption that the observed matrix is originally the product of low-rank, dense and sparse matrices with additive noise. We assume the prior of a sparse matrix is a Laplace distribution by taking matrix sparsity into consideration. Then we use several approximations for the derivation of a matrix factorization/completion solution. By our solution, we also numerically evaluate the performance of a sparse matrix reconstruction in matrix factorization, and completion of a missing matrix element in matrix completion.
Kuscu, Murat; Akan, Ozgur B.
2018-01-01
We consider a microfluidic molecular communication (MC) system, where the concentration-encoded molecular messages are transported via fluid flow-induced convection and diffusion, and detected by a surface-based MC receiver with ligand receptors placed at the bottom of the microfluidic channel. The overall system is a convection-diffusion-reaction system that can only be solved by numerical methods, e.g., finite element analysis (FEA). However, analytical models are key for the information and communication technology (ICT), as they enable an optimisation framework to develop advanced communication techniques, such as optimum detection methods and reliable transmission schemes. In this direction, we develop an analytical model to approximate the expected time course of bound receptor concentration, i.e., the received signal used to decode the transmitted messages. The model obviates the need for computationally expensive numerical methods by capturing the nonlinearities caused by laminar flow resulting in parabolic velocity profile, and finite number of ligand receptors leading to receiver saturation. The model also captures the effects of reactive surface depletion layer resulting from the mass transport limitations and moving reaction boundary originated from the passage of finite-duration molecular concentration pulse over the receiver surface. Based on the proposed model, we derive closed form analytical expressions that approximate the received pulse width, pulse delay and pulse amplitude, which can be used to optimize the system from an ICT perspective. We evaluate the accuracy of the proposed model by comparing model-based analytical results to the numerical results obtained by solving the exact system model with COMSOL Multiphysics. PMID:29415019
Current matrix element in HAL QCD's wavefunction-equivalent potential method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-04-01
We give a formula to calculate a matrix element of a conserved current in the effective quantum mechanics defined by the wavefunction-equivalent potentials proposed by the HAL QCD collaboration. As a first step, a non-relativistic field theory with two-channel coupling is considered as the original theory, with which a wavefunction-equivalent HAL QCD potential is obtained in a closed analytic form. The external field method is used to derive the formula by demanding that the result should agree with the original theory. With this formula, the matrix element is obtained by sandwiching the effective current operator between the left and right eigenfunctions of the effective Hamiltonian associated with the HAL QCD potential. In addition to the naive one-body current, the effective current operator contains an additional two-body term emerging from the degrees of freedom which have been integrated out.
Lechowicz, Wojciech
2009-01-01
Toxicological analyses performed on individuals who died in unclear circumstances constitute a key element of research aiming to provide a complete explanation of the cause of death. The entire panel of examinations of the corpse of General Sikorski also included toxicological analyses for drugs and organic poisons of synthetic and natural origin. Attention was focused on fast-acting and potent poisons known and used in the 1940s. The internal organs (stomach, liver, lung, brain) and hair, as well as other materials collected from the body and found in the coffin, were analyzed. Classic methods of sample preparation, i.e., homogenization, deproteinization, headspace and liquid-liquid extraction, were applied. Hyphenated methods, mainly chromatography coupled with mass spectrometry, were used for identification of the analytes. No organic poisons were identified in the material as a result of the research.
Zhu, Zhiqiang; Han, Jing; Zhang, Yan; Zhou, Yafei; Xu, Ning; Zhang, Bo; Gu, Haiwei; Chen, Huanwen
2012-12-15
Desorption electrospray ionization (DESI) is the most popular ambient ionization technique for direct analysis of complex samples without sample pretreatment. However, for many applications, especially for trace analysis, it is of interest to improve the sensitivity of DESI-mass spectrometry (MS). In traditional DESI-MS, a mixture of methanol/water/acetic acid is usually used to generate the primary ions. In this article, dilute protein solutions were electrosprayed in the DESI method to create multiply charged primary ions for the desorption ionization of trace analytes on various surfaces (e.g., filter paper, glass, Al-foil) without any sample pretreatment. The analyte ions were then detected and structurally characterized using a LTQ XL mass spectrometer. Compared with the methanol/water/acetic acid (49:49:2, v/v/v) solution, protein solutions significantly increased the signal levels of non-volatile compounds such as benzoic acid, TNT, o-toluidine, peptide and insulin in either positive or negative ion detection mode. For all the analytes tested, the limits of detection (LODs) were reduced to about half of the original values which were obtained using traditional DESI. The results showed that the signal enhancement is highly correlated with the molecular weight of the proteins and the selected solid surfaces. The proposed DESI method is a universal strategy for rapid and sensitive detection of trace amounts of strongly bound and/or non-volatile analytes, including explosives, peptides, and proteins. The results indicate that the sensitivity of DESI can be further improved by selecting larger proteins and appropriate solid surfaces. Copyright © 2012 John Wiley & Sons, Ltd.
Östlund, Ulrika; Kidd, Lisa; Wengström, Yvonne; Rowa-Dewar, Neneh
2011-03-01
It has been argued that mixed methods research can be useful in nursing and health science because of the complexity of the phenomena studied. However, the integration of qualitative and quantitative approaches continues to be a topic of much debate, and there is a need for a rigorous framework for designing and interpreting mixed methods research. This paper explores the analytical approaches (i.e. parallel, concurrent or sequential) used in mixed methods studies within healthcare and exemplifies the use of triangulation as a methodological metaphor for drawing inferences from qualitative and quantitative findings originating from such analyses. This review of the literature used systematic principles in searching CINAHL, Medline and PsycINFO for healthcare research studies which employed a mixed methods approach and were published in English between January 1999 and September 2009. In total, 168 studies were included in the results. Most studies originated in the United States of America (USA), the United Kingdom (UK) and Canada. The analytic approach most widely used was parallel data analysis. A number of studies used sequential data analysis; far fewer studies employed concurrent data analysis. Very few of these studies clearly articulated the purpose for using a mixed methods design. The use of the methodological metaphor of triangulation on convergent, complementary, and divergent results from mixed methods studies is exemplified, and an example of developing theory from such data is provided. A trend towards conducting parallel data analysis on quantitative and qualitative data in mixed methods healthcare research has been identified in the studies included in this review. Using triangulation as a methodological metaphor can facilitate the integration of qualitative and quantitative findings and help researchers to clarify their theoretical propositions and the basis of their results. 
This can offer a better understanding of the links between theory and empirical findings, challenge theoretical assumptions and develop new theory. Copyright © 2010 Elsevier Ltd. All rights reserved.
Inertia-gravity wave radiation from the elliptical vortex in the f-plane shallow water system
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko
2017-04-01
Inertia-gravity wave (IGW) radiation from the elliptical vortex is investigated in the f-plane shallow water system. The far field of IGWs is analytically derived for the case of an almost circular Kirchhoff vortex with a small aspect ratio. Cyclone-anticyclone asymmetry appears at finite values of the Rossby number (Ro), caused by the source term originating in the Coriolis acceleration. While the intensity of IGWs from the cyclone monotonically decreases as f increases, that from the anticyclone increases as f increases for relatively small f and has a local maximum at intermediate f. A numerical experiment is conducted on a model using a spectral method in an unbounded domain. The numerical results agree quite well with the analytical ones for elliptical vortices with small aspect ratios, implying that the derived analytical forms are useful for the verification of the numerical model. For elliptical vortices with larger aspect ratios, however, significant deviation from the analytical estimates appears. The intensity of IGWs radiated in the numerical simulation is larger than that estimated analytically. The reason is that the source of IGWs is amplified during the time evolution because the shape of the vortex changes from an ideal ellipse to an elongated shape with filaments. Nevertheless, cyclone-anticyclone asymmetry similar to the analytical estimate appears over the whole range of aspect ratios, suggesting that this asymmetry is a robust feature.
How to Compress Sequential Memory Patterns into Periodic Oscillations: General Reduction Rules
Zhang, Kechen
2017-01-01
A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrievals in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented. PMID:24877729
3-D discrete analytical ridgelet transform.
Helbert, David; Carré, Philippe; Andres, Eric
2006-12-01
In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.
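The construction of a 3-D discrete radial line can be sketched directly from the idea above: keep every voxel whose distance to the continuous line through the origin is at most the arithmetical thickness. The brute-force scan below is illustrative only, and the Euclidean-distance criterion is a plausible reading of the construction, not the authors' exact arithmetic definition via orthogonal projections.

```python
import numpy as np

def discrete_radial_line(direction, radius, thickness):
    """Voxels of a 3-D discrete line through the origin.

    A voxel center v is kept when its Euclidean distance to the
    continuous line span{direction} is at most `thickness` (the
    parameter controlling the redundancy of the representation).
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    r = np.arange(-radius, radius + 1)
    voxels = []
    for x in r:
        for y in r:
            for z in r:
                v = np.array([x, y, z], dtype=float)
                # distance from v to the line through the origin
                dist = np.linalg.norm(v - np.dot(v, d) * d)
                if dist <= thickness:
                    voxels.append((int(x), int(y), int(z)))
    return voxels

line = discrete_radial_line((1, 1, 0), radius=4, thickness=0.8)
print((0, 0, 0) in line, (3, 3, 0) in line)  # True True
```

Enlarging the thickness thickens the line, which is how the redundancy factor of the transform becomes tunable.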
Ansorge, Martin; Dubský, Pavel; Ušelová, Kateřina
2018-03-01
Partial-filling affinity capillary electrophoresis (pf-ACE) works with a ligand, present in the background electrolyte, that forms a weak complex with an analyte. In contrast to the more popular mobility-shift affinity capillary electrophoresis, only a short plug of the ligand is introduced into the capillary in pf-ACE. Both methods can serve for determining apparent stability constants of the formed complexes, but this task is hindered in pf-ACE by the fact that the analyte spends only a part of its migration time in contact with the ligand. In 1998, Amini and Westerlund published a linearization strategy that allows an effective mobility of an analyte in the presence of a neutral ligand to be extracted from the pf-ACE data. The main purpose of this paper is to show that the original formula is only approximate. We derive a new formula and demonstrate its applicability by means of computer simulations. We further inspect several strategies of data processing in pf-ACE with regard to the risk of error propagation. This establishes good practice for determining apparent stability constants of analyte-ligand complexes by means of pf-ACE. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
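To see why the plug passage matters, a simplified two-zone migration model (constant field, no electroosmotic flow, stationary plug; all parameter values assumed) can be inverted for the mobility inside the plug. This is illustrative algebra only, not the corrected formula derived in the paper.

```python
# Two-zone migration sketch behind pf-ACE data evaluation (assumed values).
L = 0.40          # m, effective capillary length to the detector
l_plug = 0.05     # m, length of the ligand plug
E = 25e3          # V/m, electric field (assumed)
mu_free = 2.0e-8  # m^2/(V s), analyte mobility outside the plug

def observed_time(mu_plug):
    """Migration time: plug passage at mu_plug, the rest at mu_free."""
    return l_plug / (mu_plug * E) + (L - l_plug) / (mu_free * E)

def mobility_in_plug(t_obs):
    """Invert observed_time for the effective mobility inside the plug."""
    return l_plug / (E * (t_obs - (L - l_plug) / (mu_free * E)))

mu_true = 1.2e-8  # complexed mobility to recover (assumed)
t = observed_time(mu_true)
print(abs(mobility_in_plug(t) - mu_true) < 1e-15)  # True
```

The observed time mixes the two mobilities, so naively converting it to a single mobility biases the stability constant; separating the two zones, as above, is the minimal correction.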
Liguori, Lucia; Bjørsvik, Hans-René
2012-12-01
The development of a multivariate study for the quantitative analysis of six polybrominated diphenyl ethers (PBDEs) in tissue of Atlantic Salmo salar L. is reported. An extraction, isolation, and purification process based on an accelerated solvent extraction system was designed, investigated, and optimized by means of statistical experimental design and multivariate data analysis and regression. An accompanying gas chromatography-mass spectrometry method was developed for the identification and quantification of the analytes BDE 28, BDE 47, BDE 99, BDE 100, BDE 153, and BDE 154. These PBDEs have been used in commercial blends as flame retardants for a variety of materials, including electronic devices, synthetic polymers and textiles. The present study revealed that an extracting solvent mixture composed of hexane and CH₂Cl₂ (10:90) provided excellent recoveries of all six PBDEs studied herein. A somewhat lower polarity of the extracting solvent, hexane and CH₂Cl₂ (40:60), decreased the analyte recoveries, which nevertheless remained acceptable and satisfactory. The study demonstrates the necessity of a thorough investigation of the extraction and purification process in order to achieve quantitative isolation of the analytes from the specific matrix. Copyright © 2012 Elsevier B.V. All rights reserved.
Determining passive cooling limits in CPV using an analytical thermal model
NASA Astrophysics Data System (ADS)
Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard
2013-09-01
We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.
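As a rough illustration of how such a model constrains passive cooling, a one-node thermal-resistance sketch (not the authors' model; all parameter values are assumptions) relates the concentration factor to the steady-state cell temperature.

```python
# One-node thermal-resistance sketch for a passively cooled CPV cell.
# All values are illustrative assumptions, not measured module data.
def cell_temperature(C, T_amb=25.0, q_sun=1000.0, A_cell=1e-4,
                     eta=0.35, R_th=5.0):
    """Steady-state cell temperature for concentration factor C.

    Heat to dissipate: concentrated flux minus extracted electricity;
    R_th (K/W) lumps spreading, conduction, and natural convection.
    """
    Q = C * q_sun * A_cell * (1.0 - eta)  # W of waste heat
    return T_amb + R_th * Q

for C in (100, 500, 1000):
    # temperature rises linearly with C for a fixed thermal resistance
    print(C, round(cell_temperature(C), 1))
```

The linear rise with C shows why a passive-cooling limit exists: beyond some concentration, no realistic R_th keeps the cell below its allowed operating temperature, which is the kind of boundary the analytical model is built to locate.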
Proxy-SU(3) symmetry in heavy deformed nuclei
NASA Astrophysics Data System (ADS)
Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.
2017-06-01
Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.
Review of spectral imaging technology in biomedical engineering: achievements and challenges.
Li, Qingli; He, Xiaofu; Wang, Yiting; Liu, Hongying; Xu, Dongrong; Guo, Fangmin
2013-10-01
Spectral imaging is a technology that integrates conventional imaging and spectroscopy to get both spatial and spectral information from an object. Although this technology was originally developed for remote sensing, it has been extended to the biomedical engineering field as a powerful analytical tool for biological and biomedical research. This review introduces the basics of spectral imaging, imaging methods, current equipment, and recent advances in biomedical applications. The performance and analytical capabilities of spectral imaging systems for biological and biomedical imaging are discussed. In particular, the current achievements and limitations of this technology in biomedical engineering are presented. The benefits and development trends of biomedical spectral imaging are highlighted to provide the reader with an insight into the current technological advances and its potential for biomedical research.
Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem
NASA Astrophysics Data System (ADS)
Minesaki, Yukitaka
2018-04-01
We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
Analytic dyon solution in SU(N) grand unified theories
NASA Astrophysics Data System (ADS)
Lyi, W. S.; Park, Y. J.; Koh, I. G.; Kim, Y. D.
1982-10-01
Analytic solutions which are regular everywhere, including at the origin, are found for certain cases of SU(N) grand unified theories. Attention is restricted to order-1/g behavior of the SU(N) grand unified theory, and aspects of the solutions of the Higgs field of the SU(N) near the origin are considered. Comments regarding the mass, the Pontryagin-like index of the dyon, and magnetic charge are made with respect to the recent report of a monopole discovery.
NASA Astrophysics Data System (ADS)
Macías-Díaz, J. E.
In the present manuscript, we introduce a finite-difference scheme to approximate solutions of the two-dimensional version of Fisher's equation from population dynamics, which is a model for which the existence of traveling-wave fronts bounded within (0,1) is a well-known fact. The method presented here is a nonstandard technique which, in the linear regime, approximates the solutions of the original model with a consistency of second order in space and first order in time. The theory of M-matrices is employed here in order to elucidate conditions under which the method is able to preserve the positivity and the boundedness of solutions. In fact, our main result establishes relatively flexible conditions under which the preservation of the positivity and the boundedness of new approximations is guaranteed. Some simulations of the propagation of a traveling-wave solution confirm the analytical results derived in this work; moreover, the experiments evince a good agreement between the numerical result and the analytical solutions.
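A Mickens-type nonstandard scheme of the kind described can be sketched as follows; the particular update rule below is an illustrative variant chosen so that positivity and boundedness in [0,1] follow by inspection, not necessarily the exact scheme of the manuscript.

```python
import numpy as np

def fisher_nsfd_step(u, r, dt):
    """One step of a nonstandard finite-difference scheme for
    u_t = u_xx + u_yy + u*(1-u) on a 2-D grid (Neumann via edge padding).

    Update: u_new = (u + r*S + dt*u) / (1 + 4r + dt*u), with S the sum
    of the four neighbours and r = dt/h^2. If 0 <= u <= 1 everywhere,
    then 0 <= u_new <= 1 for any r, dt > 0: the numerator is
    nonnegative, and u + r*S <= 1 + 4r whenever u <= 1 and S <= 4.
    """
    p = np.pad(u, 1, mode="edge")
    S = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
    return (u + r * S + dt * u) / (1.0 + 4.0 * r + dt * u)

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, (40, 40))  # initial data bounded in [0, 1]
r, dt = 0.4, 0.1                     # step sizes (assumed values)
for _ in range(200):
    u = fisher_nsfd_step(u, r, dt)
print(float(u.min()) >= 0.0, float(u.max()) <= 1.0)  # True True
```

The denominator treatment of the reaction term is what removes any step-size restriction for the bounds, in the spirit of the M-matrix conditions discussed above.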
Alcaráz, Mirta R; Vera-Candioti, Luciana; Culzoni, María J; Goicoechea, Héctor C
2014-04-01
This paper presents the development of a capillary electrophoresis method with a diode array detector, coupled to multivariate curve resolution-alternating least squares (MCR-ALS), for the resolution and quantitation of a mixture of six quinolones in the presence of several unexpected components. Overlap between analyte time profiles and water-matrix interferences was resolved mathematically by data modeling with the well-known MCR-ALS algorithm. To overcome the drawback caused by two compounds with similar spectra, a special strategy was implemented to model the complete electropherogram instead of dividing the data into regions, as usually done in previous works. The method was first applied to quantitate analytes in standard mixtures randomly prepared in ultrapure water. Then, tap water samples spiked with several interferences were analyzed. Recoveries between 76.7 and 125% and limits of detection between 5 and 18 μg L(-1) were achieved.
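The core of MCR-ALS is a bilinear alternating-least-squares loop that factors the data matrix into concentration profiles and spectra. The sketch below is a bare-bones illustration (nonnegativity by clipping only; a real implementation would add constraints such as unimodality and an EFA-based initial estimate), run on simulated noiseless data.

```python
import numpy as np

def mcr_als(D, S0, n_iter=200):
    """Bare-bones MCR-ALS: factor D ~ C @ S.T with nonnegative C and S."""
    S = S0.copy()
    for _ in range(n_iter):
        # least-squares update of each factor, clipped to nonnegativity
        C = np.clip(D @ np.linalg.pinv(S.T), 0.0, None)
        S = np.clip((np.linalg.pinv(C) @ D).T, 0.0, None)
    return C, S

rng = np.random.default_rng(3)
n_times, n_wl, n_comp = 60, 40, 3
# simulated nonnegative concentration profiles and spectra
C_true = np.clip(rng.normal(1.0, 0.5, (n_times, n_comp)), 0.0, None)
S_true = np.clip(rng.normal(1.0, 0.5, (n_wl, n_comp)), 0.0, None)
D = C_true @ S_true.T
# start from a perturbed spectral estimate, as if from reference spectra
C, S = mcr_als(D, S0=S_true + 0.05 * rng.standard_normal(S_true.shape))
resid = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"relative residual: {resid:.2e}")
```

Modeling the whole electropherogram at once, as the paper does, simply means D spans the full time axis so that compounds with similar spectra are disambiguated by their complete time profiles.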
A new concept of pencil beam dose calculation for 40-200 keV photons using analytical dose kernels.
Bartzsch, Stefan; Oelfke, Uwe
2013-11-01
The advent of widespread kV-cone beam computer tomography in image guided radiation therapy and special therapeutic applications of keV photons, e.g., in microbeam radiation therapy (MRT), require accurate and fast dose calculations for photon beams with energies between 40 and 200 keV. Multiple photon scattering originating from Compton scattering and the strong dependence of the photoelectric cross section on the atomic number of the interacting tissue render these dose calculations far more challenging than the ones established for corresponding MeV beams. This is why the analytical models of kV photon dose calculation developed so far fail to provide the required accuracy, and one has to rely on time-consuming Monte Carlo simulation techniques. In this paper, the authors introduce a novel analytical approach for kV photon dose calculations with an accuracy that is almost comparable to that of Monte Carlo simulations. First, analytical point dose and pencil beam kernels are derived for homogeneous media and compared to Monte Carlo simulations performed with the Geant4 toolkit. The dose contributions are systematically separated into contributions from the relevant orders of multiple photon scattering. Moreover, approximate scaling laws for the extension of the algorithm to inhomogeneous media are derived. The comparison of the analytically derived dose kernels in water showed an excellent agreement with the Monte Carlo method. Calculated values deviate by less than 5% from Monte Carlo derived dose values, for doses above 1% of the maximum dose. The analytical structure of the kernels allows adaptation to arbitrary materials and photon spectra in the given energy range of 40-200 keV. The presented analytical methods can be employed in a fast treatment planning system for MRT. In convolution-based algorithms, dose calculation times can be reduced to a few minutes.
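The superposition step of a kernel-based dose engine can be sketched as an FFT convolution of a fluence map with a pencil-beam kernel. The Gaussian kernel below is a generic stand-in, not the analytically derived multi-scatter kernels of the paper, and all grid values are illustrative.

```python
import numpy as np

def dose_from_pencil_kernel(fluence, kernel):
    """2-D dose as an FFT convolution of fluence with a pencil-beam kernel.

    Illustrates only the superposition step of a kernel-based engine;
    the kernel is assumed centered on the grid and unit-normalized.
    """
    F = np.fft.rfft2(fluence, s=fluence.shape)
    K = np.fft.rfft2(np.fft.ifftshift(kernel), s=fluence.shape)
    return np.fft.irfft2(F * K, s=fluence.shape)

n = 64
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x, indexing="ij")
kernel = np.exp(-(X**2 + Y**2) / (2 * 3.0**2))  # Gaussian stand-in kernel
kernel /= kernel.sum()                          # unit-integral kernel
fluence = np.zeros((n, n))
fluence[24:40, 24:40] = 1.0                     # 16x16 open field
dose = dose_from_pencil_kernel(fluence, kernel)
# Circular convolution with a unit-integral kernel conserves total "energy"
print(abs(dose.sum() - fluence.sum()) < 1e-6)   # True
```

Separating the kernel into per-scatter-order contributions, as the paper does, just means summing several such convolutions with different kernels; the FFT step is why convolution-based engines reach minute-scale run times.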
Fluorescence metrology used for analytics of high-quality optical materials
NASA Astrophysics Data System (ADS)
Engel, Axel; Haspel, Rainer; Rupertus, Volker
2004-09-01
Optical glasses, glass ceramics, and crystals are used for various specialized applications in telecommunication, biomedical, optical, and microlithography technology. In order to qualify and control the material quality during research and production processes, several specialized ultra-trace analysis methods are applied at Schott Glas. One focus of the activities is the determination of impurities in the sub-ppb regime, because such an impurity level is required, for example, for pure materials used in microlithography. Such impurities are determined using analytical methods like LA-ICP-MS or neutron activation analysis. On the other hand, direct and non-destructive optical analysis is attractive because it additionally reflects the requirements of the optical applications. Typical examples are absorption and laser resistivity measurements of optical materials using precision spectral photometers, or in-situ transmission measurements by means of lamps or UV lasers. The chemical analytical methods have the drawback of being time consuming and rather expensive, whereas the sensitivity of the absorption method will not be sufficient to characterize future needs (absorption coefficients well below 10^-3 cm^-1). For non-destructive qualification against current and future quality requirements, a Jobin Yvon FLUOROLOG 3.22 fluorescence spectrometer is employed to enable fast and precise qualification and analysis. The main advantages of this setup are its combination of highest sensitivity (more than one order of magnitude higher than state-of-the-art UV absorption spectroscopy) and fast measurement and evaluation cycles (several minutes, compared to the several hours necessary for chemical analysis). An overview is given of spectral characteristics using specified standards, which are necessary to establish the analytical system.
The elementary fluorescence and absorption of rare-earth-element impurities, as well as crystal-defect-induced luminescence originating from impurities, were investigated. Quantitative numbers are given for the relative quantum yield as well as for the excitation cross section for doped glass and calcium fluoride.
The determination of mercury in mushrooms by CV-AAS and ICP-AES techniques.
Jarzynska, Grazyna; Falandysz, Jerzy
2011-01-01
This research presents an example of an excellent applied study on analytical problems in hazardous mercury determination in environmental materials and on the validity of published results on the content of this element in wild-growing mushrooms. The total mercury content was analyzed in several species of wild-grown mushrooms and some certified reference materials of herbal origin, using two analytical methods. One method, serving as the reference, was the commonly known and well-validated cold-vapour atomic absorption spectroscopy (CV-AAS) after direct sample pyrolysis coupled to a gold wool trap. The second method was a procedure involving a final mercury measurement by inductively coupled plasma atomic emission spectroscopy (ICP-AES) at λ 194.163 nm, which has been used by some authors to report high mercury contents in large sets of wild-grown mushrooms. We found that the method using ICP-AES at λ 194.163 nm gave inaccurate and imprecise results. The results of this study imply that, because of the unsuitability of total mercury determination using ICP-AES at λ 194.163 nm, reports of high concentrations of this metal in large sets of wild-grown mushrooms examined by this method have to be treated with caution, since the data are highly biased.
Friese, K C; Grobecker, K H; Wätjen, U
2001-07-01
A method has been developed for measurement of the homogeneity of analyte distribution in powdered materials by use of electrothermal vaporization with inductively coupled plasma mass spectrometric (ETV-ICP-MS) detection. The method enabled the simultaneous determination of As, Cd, Cu, Fe, Mn, Pb, and Zn in milligram amounts of samples of biological origin. The optimized conditions comprised a high plasma power of 1,500 W, reduced aerosol transport flow, and heating ramps below 300 degrees C s(-1). A temperature ramp to 550 degrees C ensured effective pyrolysis of approximately 70% of the organic compounds without losses of analyte. An additional hold stage at 700 degrees C led to separation of most of the analyte signals from the evaporation of carbonaceous matrix compounds. The effect of time resolution of signal acquisition on the precision of the ETV measurements was investigated. An increase in the number of masses monitored up to 20 is possible with not more than 1% additional relative standard deviation of results caused by limited temporal resolution of the transient signals. Recording of signals from the nebulization of aqueous standards in each sample run enabled correction for drift of the sensitivity of the ETV-ICP-MS instrument. The applicability of the developed method to homogeneity studies was assessed by use of four certified reference materials. According to the best repeatability observed in these sample runs, the maximum contribution of the method to the standard deviation is approximately 5% to 6% for all the elements investigated.
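The quoted 5% to 6% method contribution to the standard deviation suggests the usual quadrature decomposition of observed scatter into method and material-heterogeneity parts. A minimal sketch, assuming independent contributions (the helper name and all numbers are hypothetical, not the authors' exact procedure):

```python
import math

def heterogeneity_rsd(total_rsd, method_rsd):
    """Split an observed relative standard deviation (in %) into method and
    material-heterogeneity parts, assuming independent contributions:
    total^2 = method^2 + heterogeneity^2."""
    if total_rsd <= method_rsd:
        return 0.0  # all observed scatter attributable to the method itself
    return math.sqrt(total_rsd ** 2 - method_rsd ** 2)
```

For example, an observed 10% RSD with a 6% method contribution would leave an 8% RSD attributable to sample heterogeneity.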
Huang, Yang; Zhang, Tingting; Zhao, Yumei; Zhou, Haibo; Tang, Guangyun; Fillet, Marianne; Crommen, Jacques; Jiang, Zhengjin
2017-09-10
Nucleobases, nucleosides and ginsenosides, which have a significant impact on the physiological activity of organisms, are reported to be the active components of ginseng, although they are present at low levels in ginseng extracts. Few analytical methods have been developed so far to simultaneously analyze these three classes of compounds of different polarities present in ginseng extracts. In the present study, a simple and efficient analytical method was successfully developed for the simultaneous separation of 17 nucleobases, nucleosides and ginsenosides in ginseng extracts using supercritical fluid chromatography coupled with single quadrupole mass spectrometry (SFC-MS). The effects of various experimental factors on the separation performance, such as the column type, temperature and backpressure, the type of modifier and additive, and the concentration of make-up solvent, were systematically investigated. Under the selected conditions, the developed method was successfully applied to the quality evaluation of 14 batches of ginseng extracts from different origins. The results obtained for the different batches indicate that this method could be employed for the quality assessment of ginseng extracts. Copyright © 2017 Elsevier B.V. All rights reserved.
McAdoo, Mitchell A.; Kozar, Mark D.
2017-11-14
This report describes a compilation of existing water-quality data associated with groundwater resources originating from abandoned underground coal mines in West Virginia. Data were compiled from multiple sources for the purpose of understanding the suitability of groundwater from abandoned underground coal mines for public supply, industrial, agricultural, and other uses. This compilation includes data collected for multiple individual studies conducted from July 13, 1973 through September 7, 2016. Analytical methods varied by the time period of data collection and requirements of the independent studies. This project identified 770 water-quality samples from 294 sites that could be attributed to abandoned underground coal mine aquifers originating from multiple coal seams in West Virginia.
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1984-01-01
Closed-loop stability is investigated for multivariable linear time-invariant systems controlled by optimal full state feedback linear quadratic (LQ) regulators, with nonlinear gains present in the feedback channels. Estimates are obtained for the region of attraction when the nonlinearities escape the (0.5, infinity) sector in regions away from the origin and for the region of ultimate boundedness when the nonlinearities escape the sector near the origin. The expressions for these regions also provide methods for selecting the performance function parameters in order to obtain LQ designs with better tolerance for nonlinearities. The analytical results are illustrated by applying them to the problem of controlling the rigid-body pitch angle and elastic motion of a large, flexible space antenna.
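The (0.5, infinity) sector condition is the classical gain-margin guarantee for LQ regulators. A scalar sketch, assuming the toy plant xdot = a*x + b*u and a constant sector gain c in the feedback channel (a deliberate reduction of the paper's multivariable setting; all numbers are hypothetical):

```python
import math

def scalar_lqr_gain(a, b, q, r):
    """Closed-form LQ gain for the scalar plant xdot = a x + b u with cost
    integral of (q x^2 + r u^2): solve the scalar Riccati equation
    2 a p - b^2 p^2 / r + q = 0 for its positive root, then k = b p / r."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r

def closed_loop_stable(a, b, k, c):
    """With a constant sector gain c replacing the nominal unit gain,
    the closed loop is xdot = (a - c b k) x; stable iff a - c b k < 0."""
    return a - c * b * k < 0

k = scalar_lqr_gain(1.0, 1.0, 1.0, 1.0)  # unstable plant, k = 1 + sqrt(2)
```

Consistent with the sector result, halving the nominal gain (c = 0.5) still stabilizes this example, while a smaller gain escaping the sector does not.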
Dron, Julien; Abidi, Ehgere; Haddad, Imad El; Marchand, Nicolas; Wortham, Henri
2008-06-23
An analytical method for the quantitative determination of the total nitro functional group (R-NO2) content in atmospheric particulate organic matter is developed. The method is based on the selectivity of NO2(-) (m/z 46) precursor ion scanning (PAR 46) by atmospheric pressure chemical ionization-tandem mass spectrometry (APCI-MS/MS). PAR 46 was tested on 16 nitro compounds of different molecular structures and was compared with a neutral loss of NO (30 amu) technique in terms of sensitivity and efficiency in characterizing nitro functional groups. Covering a wider range of compounds, PAR 46 was preferred and applied to reference mixtures containing all 16 compounds under study. Repeatability tests, carried out using an original statistical approach, and calibration experiments performed on the reference mixtures proved the suitability of the technique for quantitative measurements of nitro functional groups in samples of environmental interest with good accuracy. A linear range was obtained for concentrations between 0.005 and 0.25 mM, with a detection limit of 0.001 mM of nitro functional groups. Finally, the analytical error, based on an original statistical approach applied to numerous reference mixtures, was below 20%. Despite potential artifacts related to nitro-alkanes and organonitrates, this new methodology offers a promising alternative to FT-IR measurements. The relevance of the method and its potential are demonstrated through its application to aerosols collected in the EUPHORE simulation chamber during o-xylene photooxidation experiments and in a suburban area of a French alpine valley during summer.
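A detection limit like the reported 0.001 mM is conventionally estimated from the blank noise and the calibration slope via the common 3-sigma convention; the sketch below uses hypothetical values chosen only to reproduce that order of magnitude:

```python
def lod(blank_sd, slope):
    """Limit of detection as 3 * sigma_blank / calibration slope
    (the widespread 3s convention; inputs here are hypothetical)."""
    return 3.0 * blank_sd / slope

# e.g. a blank standard deviation of 0.01 signal units and a slope of
# 30 signal units per mM would give a 0.001 mM detection limit
limit_mM = lod(0.01, 30.0)
```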
On Statistical Approaches for Demonstrating Analytical Similarity in the Presence of Correlation.
Yang, Harry; Novick, Steven; Burdick, Richard K
Analytical similarity is the foundation for demonstration of biosimilarity between a proposed product and a reference product. For this assessment, the U.S. Food and Drug Administration (FDA) currently recommends a tiered system in which quality attributes are categorized into three tiers commensurate with their risk, and approaches of varying statistical rigor are subsequently used for the three tiers of quality attributes. Key to the analyses of Tier 1 and Tier 2 quality attributes is the establishment of an equivalence acceptance criterion and a quality range. For particular licensure applications, the FDA has provided advice on statistical methods for demonstration of analytical similarity. For example, for Tier 1 assessment, an equivalence test can be used based on an equivalence margin of 1.5σR, where σR is the reference product variability estimated by the sample standard deviation SR from a sample of reference lots. The quality range for demonstrating Tier 2 analytical similarity is of the form X̄R ± K × σR, where the constant K is appropriately justified. To demonstrate Tier 2 analytical similarity, a large percentage (e.g., 90%) of the test product lots must fall in the quality range. In this paper, through both theoretical derivations and simulations, we show that when the reference drug product lots are correlated, the sample standard deviation SR underestimates the true reference product variability σR. As a result, substituting SR for σR in the Tier 1 equivalence acceptance criterion and the Tier 2 quality range inappropriately reduces the statistical power and the ability to declare analytical similarity. Also explored is the impact of correlation among drug product lots on the Type I error rate and power. Three methods based on generalized pivotal quantities are introduced, and their performance is compared against a two one-sided tests (TOST) approach. Finally, strategies to mitigate the risk of correlation among the reference product lots are discussed.
A biosimilar is a follow-on version of an original biological drug product. A key component of biosimilar development is the demonstration of analytical similarity between the biosimilar and the reference product. Such demonstration relies on the application of statistical methods to establish a similarity margin and an appropriate test for equivalence between the two products. This paper discusses statistical issues with the demonstration of analytical similarity and provides alternative approaches to potentially mitigate these problems. © PDA, Inc. 2016.
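The Tier 1 margin and Tier 2 quality range described above can be sketched as follows. The normal-approximation TOST below is a simplification (practical procedures use t-based confidence intervals), and all lot values are hypothetical:

```python
import math
import statistics

def tier1_equivalence(ref_lots, test_lots):
    """TOST-style check: the 90% CI for the mean difference must lie
    within +/- 1.5 * S_R, with S_R standing in for sigma_R as discussed
    above. Normal approximation; a sketch, not the regulatory procedure."""
    s_r = statistics.stdev(ref_lots)
    margin = 1.5 * s_r
    diff = statistics.mean(test_lots) - statistics.mean(ref_lots)
    se = math.sqrt(statistics.variance(ref_lots) / len(ref_lots)
                   + statistics.variance(test_lots) / len(test_lots))
    z = 1.645  # one-sided 5% normal critical value
    return (-margin < diff - z * se) and (diff + z * se < margin)

def tier2_quality_range(ref_lots, test_lots, k=3.0, required=0.9):
    """Tier 2: at least `required` fraction of test lots must fall inside
    X_bar_R +/- K * S_R (K = 3 is an illustrative choice)."""
    m, s = statistics.mean(ref_lots), statistics.stdev(ref_lots)
    lo, hi = m - k * s, m + k * s
    return sum(lo <= x <= hi for x in test_lots) / len(test_lots) >= required

ref_lots = [9.0, 10.0, 11.0, 10.0]    # hypothetical reference lot values
test_lots = [9.5, 10.5, 10.0, 11.0]   # hypothetical test lot values
```

Note that both checks use S_R where the criterion is stated in terms of σR, which is exactly the substitution the paper shows to be problematic under lot correlation.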
Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, K. P.; Mather, B. A.; Pal, B. C.
2017-10-10
For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper provides guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to convey the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.
Higher-n triangular dilatonic black holes
NASA Astrophysics Data System (ADS)
Zadora, Anton; Gal'tsov, Dmitri V.; Chen, Chiang-Mei
2018-04-01
Dilaton gravity with form fields is known to possess dyon solutions with two horizons for the discrete "triangular" values of the dilaton coupling constant a = √(n(n + 1)/2). This sequence, first obtained numerically and then explained analytically as a consequence of the regularity of the dilaton, should have some higher-dimensional and/or group-theoretical origin. Meanwhile, this origin was explained earlier only for n = 1, 2, in which cases the solutions were known analytically. We extend this explanation to n = 3, 5, presenting analytical triangular solutions for the theory with different dilaton couplings a, b in the electric and magnetic sectors, in which case the quantization condition reads ab = n(n + 1)/2. The solutions are derived via the Toda chains for the B2 and G2 Lie algebras. They are found in closed form in general D space-time dimensions. The solutions satisfy the entropy product rules, indicating a microscopic origin of their entropy, and have negative binding energy in the extremal case.
The recalculation of the original pulse produced by a partial discharge
NASA Technical Reports Server (NTRS)
Tanasescu, F.
1978-01-01
The loads on a dielectric or an insulation arrangement cannot be precisely rated without properly assessing the manner in which a pulse produced by a partial discharge is transmitted from the point of the event to the point where it is recorded. A number of analytical and graphic methods are presented, and computer simulations are used for specific cases of a few measurement circuits. It turns out to be possible to determine the effect of each circuit element and thus make some valid corrections.
NASA Astrophysics Data System (ADS)
Gatto, Paolo; Lipparini, Filippo; Stamm, Benjamin
2017-12-01
The domain-decomposition (dd) paradigm, originally introduced for the conductor-like screening model, has recently been extended to the dielectric Polarizable Continuum Model (PCM), resulting in the ddPCM method. We present here a complete derivation of the analytical derivatives of the ddPCM energy with respect to the positions of the solute's atoms and discuss their efficient implementation. As is the case for the energy, we observe a quadratic scaling, which is discussed and demonstrated with numerical tests.
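Validating an analytic gradient against central finite differences, as such papers routinely do, follows a standard pattern; the toy energy function below merely stands in for the ddPCM energy:

```python
def numeric_grad(f, x, h=1e-6):
    """Central finite differences: g_i ~ (f(x + h e_i) - f(x - h e_i)) / 2h,
    the standard cross-check for an analytic gradient implementation."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

# Toy "energy surface" and its hand-derived analytic gradient (illustrative).
f = lambda x: x[0] ** 2 + 3.0 * x[1]
analytic = lambda x: [2.0 * x[0], 3.0]
x0 = [1.5, -2.0]
```

Agreement of the two gradients to within the finite-difference truncation error is the acceptance criterion.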
Unified analytic representation of physical sputtering yield
NASA Astrophysics Data System (ADS)
Janev, R. K.; Ralchenko, Yu. V.; Kenmotsu, T.; Hosaka, K.
2001-03-01
A generalized energy parameter η = η(ε, δ) and a normalized sputtering yield Ỹ(η), where ε = E/ETF and δ = Eth/ETF, are introduced to achieve a unified representation of all available experimental sputtering data at normal ion incidence. The sputtering data in the new Ỹ(η) representation retain their original uncertainties. The Ỹ(η) data can be fitted to a simple three-parameter analytic expression with an rms deviation of 32%, well within the uncertainties of the original data. Both η and Ỹ(η) have the correct physical behavior in the threshold and high-energy regions. The available theoretical data produced by the TRIM.SP code can also be represented by the same single analytic function Ỹ(η) with similar accuracy.
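The 32% figure of merit corresponds to a relative rms deviation of the fit from the data, which can be computed as follows (a minimal sketch with made-up values, since the fitted expression itself is not given in the abstract):

```python
import math

def relative_rms_deviation(fitted, measured):
    """Relative root-mean-square deviation of fitted values from data,
    the figure of merit quoted above (32% for the unified fit)."""
    n = len(measured)
    return math.sqrt(sum(((f - m) / m) ** 2
                         for f, m in zip(fitted, measured)) / n)
```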
Ross, Sylvia An; Allen, Daniel N; Goldstein, Gerald
2014-01-01
The Halstead-Reitan Neuropsychological Battery (HRNB) is the first factor-analyzed neuropsychological battery and consists of three batteries for young children, older children, and adults. Halstead's original factor analysis extracted four factors from the adult version of the battery, which were the basis for his theory of biological intelligence. These factors were called Central Integrative Field, Abstraction, Power, and Directional. Following this original analysis, Reitan's additions to the battery, and the development of the child versions of the test, factor-analytic research continued. An introduction and the adult literature are reviewed in Ross, Allen, and Goldstein (in press). In this supplemental article, factor-analytic studies of the HRNB with children are reviewed. It is concluded that factor analysis of the HRNB or Reitan-Indiana Neuropsychological Battery with children does not replicate the extensiveness of the adult literature, although there is some evidence that when the traditional battery for older children is used, the factor structure is similar to what is found in adult studies. Reitan's changes to the battery appear to have added factors, including language and sensory-perceptual factors. When other tests and scoring methods are used in addition to the core battery, differing solutions are produced.
Nagata, Takeshi; Fedorov, Dmitri G; Li, Hui; Kitaura, Kazuo
2012-05-28
A new energy expression is proposed for the fragment molecular orbital method interfaced with the polarizable continuum model (FMO/PCM). The solvation free energy is shown to be more accurate on a set of representative polypeptides with neutral and charged residues, in comparison to the original formulation at the same level of the many-body expansion of the electrostatic potential determining the apparent surface charges. The analytic first derivative of the energy with respect to nuclear coordinates is formulated at the second-order Møller-Plesset (MP2) perturbation theory level combined with PCM, for which we derived coupled perturbed Hartree-Fock equations. The accuracy of the analytic gradient is demonstrated in test calculations by comparison to the numeric gradient. Geometry optimization of the small Trp-cage protein (PDB: 1L2Y) is performed with FMO/PCM/6-31(+)G(d) at the MP2 and restricted Hartree-Fock with empirical dispersion (RHF/D) levels. The root mean square deviations between the FMO optimized and NMR experimental structure are found to be 0.414 and 0.426 Å for RHF/D and MP2, respectively. The details of the hydrogen bond network in the Trp-cage protein are revealed.
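The quoted structural agreement is an RMSD between matched atom positions. A minimal sketch, omitting the optimal superposition (Kabsch alignment) that a full structure comparison would apply first:

```python
import math

def rmsd(coords_a, coords_b):
    """Plain (unaligned) RMSD between matched coordinate lists, in the same
    units as the inputs; the abstract quotes 0.414/0.426 Angstrom against
    the NMR structure after a proper alignment."""
    n = len(coords_a)
    s = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
            for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(s / n)
```

A rigid shift of every atom by 1 unit yields an RMSD of exactly 1, which is why alignment matters before comparing real structures.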
Data-Driven Astrochemistry: One Step Further within the Origin of Life Puzzle.
Ruf, Alexander; d'Hendecourt, Louis L S; Schmitt-Kopplin, Philippe
2018-06-01
Astrochemistry, meteoritics and chemical analytics represent a manifold scientific field comprising various disciplines. In this review, clarifications on astrochemistry, comet chemistry, laboratory astrophysics and meteoritic research with respect to organic and metalorganic chemistry are given. The seemingly large number of observed astrochemical molecules necessarily requires explanations of molecular complexity and chemical evolution, which are discussed. Special emphasis is placed on data-driven analytical methods, including ultrahigh-resolving instruments, and their interplay with quantum chemical computations. These methods enable remarkable insights into the complex chemical spaces that exist in meteorites and maximize the level of information on the huge astrochemical molecular diversity. In addition, they allow one to study even as-yet-undescribed chemistry, such as that involving organomagnesium compounds in meteorites. Both targeted and non-targeted analytical strategies are explained and may touch upon epistemological problems. In addition, implications of (metal)organic matter for prebiotic chemistry leading to the emergence of life are discussed. The precise description of astrochemical organic and metalorganic matter as seeds for life, and their interactions within various astrophysical environments, appears essential to further study questions regarding the emergence of life on a most fundamental level, that is, within the molecular world and its self-organization properties.
Monitoring of Cr, Cu, Pb, V and Zn in polluted soils by laser induced breakdown spectroscopy (LIBS).
Dell'Aglio, Marcella; Gaudiuso, Rosalba; Senesi, Giorgio S; De Giacomo, Alessandro; Zaccone, Claudio; Miano, Teodoro M; De Pascale, Olga
2011-05-01
Laser Induced Breakdown Spectroscopy (LIBS) is a fast and multi-elemental analytical technique particularly suitable for the qualitative and quantitative analysis of heavy metals in solid samples, including environmental ones. Although LIBS is often recognised in the literature as a well-established analytical technique, results concerning the quantitative analysis of elements in chemically complex matrices such as soils are quite contradictory. In this work, soil samples of various origins were analyzed by LIBS and the data compared to those obtained by Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). The emission intensities of one selected line for each of the five analytes (i.e., Cr, Cu, Pb, V, and Zn) were normalized to the background signal and plotted as a function of the concentration values previously determined by ICP-OES. The data showed good linearity for all calibration lines drawn, and the correlation between ICP-OES and LIBS was confirmed by the satisfactory agreement between the corresponding values. Consequently, the LIBS method can be used at least for metal monitoring in soils. In this respect, a simple method for estimating the degree of soil pollution by heavy metals, based on the determination of an anthropogenic index, was proposed and applied to Cr and Zn.
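The calibration described above amounts to an ordinary least-squares line of background-normalized LIBS intensity against ICP-OES concentration. A sketch with hypothetical data (the concentrations and intensities below are invented for illustration):

```python
import math

def linear_fit(x, y):
    """Ordinary least-squares slope, intercept, and Pearson r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx, sxy / math.sqrt(sxx * syy)

# Hypothetical calibration: ICP-OES concentration (mg/kg) vs
# background-normalized LIBS line intensity for one analyte.
conc = [10.0, 50.0, 100.0, 200.0, 400.0]
norm_int = [0.21, 1.02, 2.05, 4.10, 8.15]
slope, intercept, r = linear_fit(conc, norm_int)
```

A correlation coefficient close to 1 and a near-zero intercept are what "good linearity" means operationally here.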
Bertelli, Davide; Brighenti, Virginia; Marchetti, Lucia; Reik, Anna; Pellati, Federica
2018-06-01
Humulus lupulus L. (hop) represents one of the most widely cultivated crops, being a key ingredient in the brewing process. Many health-related properties have been described for hop extracts, making this plant gain increasing interest in the field of pharmaceutical and nutraceutical research. Among the analytical tools available for the phytochemical characterization of plant extracts, quantitative nuclear magnetic resonance (qNMR) represents a new and powerful technique. In this context, the present study was aimed at the development of a new, simple, and efficient qNMR method for the metabolite fingerprinting of bioactive compounds in hop cones, taking advantage of the novel ERETIC 2 tool. To the best of our knowledge, this is the first attempt to apply this method to complex matrices of natural origin, such as hop extracts. The qNMR method set up in this study was applied to the quantification of both prenylflavonoids and bitter acids in eight hop cultivars. The performance of this analytical method was compared with that of HPLC-UV/DAD, which represents the most frequently used technique in the field of natural product analysis. The quantitative data obtained for the hop samples by means of the two aforementioned techniques highlighted that the amount of bioactive compounds was slightly higher when qNMR was applied, although the order of magnitude of the values was the same. The accuracy of qNMR was comparable to that of the chromatographic method, thus proving to be a reliable tool for the analysis of these secondary metabolites in hop extracts. Graphical abstract: the extraction and analytical methods applied in this work for the analysis of bioactive compounds in Humulus lupulus L. (hop) cones.
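qNMR quantification rests on the proportionality between a peak integral and the number of contributing nuclei; with an external reference of known concentration (the role ERETIC 2 plays), the basic relation can be sketched as follows (function name and numbers are illustrative, not the authors' protocol):

```python
def qnmr_conc(i_analyte, n_analyte, i_ref, n_ref, c_ref):
    """Basic qNMR relation: C_a = C_ref * (I_a / N_a) / (I_ref / N_ref),
    where I is a peak integral and N the number of protons behind it."""
    return c_ref * (i_analyte / n_analyte) / (i_ref / n_ref)
```

For example, an analyte signal integrating to twice the reference but arising from twice as many protons corresponds to the same concentration as the reference.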
An integrated bioanalytical method development and validation approach: case studies.
Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M
2012-10-01
We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, the rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.
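The incurred-sample statistics quoted above (e.g., 99% of reanalyzed samples within 9.2% of the original values) can be computed with a helper like this hypothetical one:

```python
def pct_within(original, reanalyzed, tolerance_pct):
    """Percentage of incurred samples whose reanalysis falls within
    tolerance_pct of the original value (the metric quoted above;
    data passed in are hypothetical)."""
    ok = sum(abs(r - o) <= tolerance_pct / 100.0 * abs(o)
             for o, r in zip(original, reanalyzed))
    return 100.0 * ok / len(original)
```

With four made-up sample pairs where one reanalysis drifts by 50%, a 15% tolerance passes three of four, i.e. 75%.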
NASA Astrophysics Data System (ADS)
Krčmár, Roman; Šamaj, Ladislav
2018-01-01
The partition function of the symmetric (zero electric field) eight-vertex model on a square lattice can be formulated either in the original "electric" vertex format or in an equivalent "magnetic" Ising-spin format. In this paper, both electric and magnetic versions of the model are studied numerically by using the corner transfer matrix renormalization-group method, which provides reliable data. The emphasis is put on the calculation of four specific critical exponents, related by two scaling relations, and of the central charge. The numerical method is first tested in the magnetic format; the obtained dependencies of critical exponents on the model's parameters agree with Baxter's exact solution, and weak universality is confirmed within the accuracy of the method due to the finite size of the system. In particular, the critical exponents η and δ are constant as required by weak universality. On the other hand, in the electric format, analytic formulas based on the scaling relations are derived for the critical exponents ηe and δe which agree with our numerical data. These exponents depend on the model's parameters, which is evidence for the full nonuniversality of the symmetric eight-vertex model in the original electric formulation.
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-08
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study the chemical composition of a food at the molecular level may make it possible to define a 'food fingerprint', valuable for assessing the nutritional value, safety, quality, authenticity and security of foods. This comprehensive strategy, termed foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food production in response to world-wide environmental changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from aiming at an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies to produce progress in our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review is to explore the development of simple and robust methods for a fully applied use of omics data in food science. Copyright © 2015 Elsevier B.V. All rights reserved.
DETERMINATION OF OXALATE ION DOPANT LEVEL IN POLYPYRROLE USING FT-IR
Benally, Kristal J.; GreyEyes, Shawn D.; McKenzie, Jason T.
2014-01-01
A pellet method using standard addition and FT-IR was used to estimate oxalate ion doping levels in electrosynthesized polypyrrole. The method is useful for materials where removal of analyte from an insoluble material is problematic. Here, electrosynthesized oxalate doped polypyrrole is dispersed in potassium bromide. Spikes of sodium oxalate are added and the mixtures pressed into pellets. The oxalate carbonyl absorption peak is then used to quantify the amount of oxalate present in the polypyrrole. The mass fraction of oxalate dopant in polypyrrole was determined to be 0.4 ± 0.1 % and coincides with the original synthesis solution composition. PMID:25598749
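The standard-addition quantification described above can be sketched numerically: spike the sample with known amounts of analyte, fit signal versus spiked amount with a straight line, and read the original analyte level off the magnitude of the x-intercept. A minimal illustration with synthetic, noise-free numbers (not the paper's data):

```python
import numpy as np

def standard_addition_estimate(spike_amounts, signals):
    """Estimate the analyte amount originally present in a sample from
    standard-addition data: fit signal vs. spiked amount with a line and
    extrapolate; the magnitude of the x-intercept is the original amount."""
    slope, intercept = np.polyfit(spike_amounts, signals, 1)
    return intercept / slope  # x-intercept magnitude = original analyte

# Illustrative data: true original amount 2.0 (arbitrary units),
# detector response proportional to total analyte present.
spikes = np.array([0.0, 1.0, 2.0, 3.0])
signals = 0.5 * (spikes + 2.0)  # synthetic, noise-free
print(round(standard_addition_estimate(spikes, signals), 3))  # → 2.0
```

With real FT-IR peak areas the fit would carry noise, and the uncertainty of the extrapolated intercept would propagate into the quoted 0.4 ± 0.1 % mass fraction.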
Cai, Kai; Xiang, Zhangmin; Li, Hongqin; Zhao, Huina; Lin, Yechun; Pan, Wenjie; Lei, Bo
2017-12-01
This work describes a rapid, stable, and accurate method for determining the free amino acids, biogenic amines, and ammonium in tobacco. The target analytes were extracted with microwave-assisted extraction and then derivatized with diethyl ethoxymethylenemalonate, followed by ultra high performance liquid chromatography analysis. The experimental design used to optimize the microwave-assisted extraction conditions showed that the optimal extraction time was 10 min with a temperature of 60°C. The stability of aminoenone derivatives was improved by keeping the pH near 9.0, and there was no obvious degradation during the 80°C heating and room temperature storage. Under optimal conditions, this method showed good linearity (R² > 0.999) and sensitivity (limits of detection 0.010-0.081 μg/mL). The extraction recoveries were between 88.4 and 106.5%, while the repeatability and reproducibility ranged from 0.48 to 5.12% and from 1.56 to 6.52%, respectively. The newly developed method was employed to analyze tobacco from different geographical origins. Principal component analysis showed that the four geographical origins of tobacco could be clearly distinguished and that each had its characteristic components. The proposed method also showed great potential for further investigations on nitrogen metabolism in plants. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
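The kind of origin separation reported here via principal component analysis can be illustrated with a minimal PCA (SVD of the mean-centred data matrix) on synthetic "composition profiles"; the data, group sizes, and offset below are invented for the sketch and are not the paper's measurements:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project the rows of X onto the first principal components,
    computed via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Two hypothetical origins whose profiles differ mainly in one component
rng = np.random.default_rng(0)
origin_a = rng.normal(0.0, 0.1, size=(5, 4))
origin_b = rng.normal(0.0, 0.1, size=(5, 4)) + np.array([1.0, 0.0, 0.0, 0.0])
scores = pca_scores(np.vstack([origin_a, origin_b]))
# The two groups land on opposite sides of the first principal component
print(scores[:5, 0].mean() * scores[5:, 0].mean() < 0)
```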
The MCNP6 Analytic Criticality Benchmark Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Forrest B.
2016-06-16
Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
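To give a flavour of what an analytic criticality benchmark supplies, the one-group infinite-medium multiplication factor has a closed form that any transport code must reproduce. This toy example is not one of the 75 suite problems; the cross sections are illustrative values chosen to make the medium exactly critical:

```python
def k_infinity(nu, sigma_f, sigma_a):
    """One-group infinite-medium multiplication factor,
    k_inf = nu * Sigma_f / Sigma_a. An exact result of this kind is what
    a benchmark suite compares a Monte Carlo code's k-effective against."""
    return nu * sigma_f / sigma_a

# Illustrative macroscopic one-group cross sections (cm^-1)
nu, sigma_f, sigma_a = 2.5, 0.04, 0.10
print(k_infinity(nu, sigma_f, sigma_a))  # → 1.0 (exactly critical)
```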
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background: The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CV_A): the original theory is based on standard deviations or coefficients of variation calculated as if the distributions were Gaussian. Methods: The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for the reference change value, combining Δbias and CV_A based on log-Gaussian distributions of CV_I treated as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results: Analytical performance specifications for the reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion: The traditional and simple-to-apply model used to generate analytical performance specifications for the reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CV_I and CV_A, is generally useful.
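A minimal sketch of the comparison, assuming the widely used reference change value formula RCV = √2·Z·√(CV_A² + CV_I²) and the standard log-normal transform σ = √(ln(CV² + 1)); the paper's combination of Δbias with CV_A is more elaborate than this, so the numbers below only illustrate why the two distributional assumptions give nearly identical specifications at small CVs:

```python
import math

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Classical (Gaussian) reference change value; CVs as fractions."""
    return math.sqrt(2) * z * math.sqrt(cv_a**2 + cv_i**2)

def rcv_lognormal(cv_a, cv_i, z=1.96):
    """Asymmetric RCV assuming log-Gaussian variation: transform the
    combined CV with sigma = sqrt(ln(CV^2 + 1)) and exponentiate.
    Returns (upward limit, downward limit) as fractions."""
    sigma = math.sqrt(math.log(cv_a**2 + cv_i**2 + 1))
    up = math.exp(z * math.sqrt(2) * sigma) - 1
    down = math.exp(-z * math.sqrt(2) * sigma) - 1
    return up, down

cv_a, cv_i = 0.03, 0.06  # 3% analytical, 6% within-subject (illustrative)
print(round(rcv_gaussian(cv_a, cv_i), 3))  # → 0.186
up, down = rcv_lognormal(cv_a, cv_i)
print(round(up, 3), round(down, 3))  # slightly asymmetric, similar size
```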
Power-law scaling of plasma pressure on laser-ablated tin microdroplets
NASA Astrophysics Data System (ADS)
Kurilovich, Dmitry; Basko, Mikhail M.; Kim, Dmitrii A.; Torretti, Francesco; Schupp, Ruben; Visschers, Jim C.; Scheers, Joris; Hoekstra, Ronnie; Ubachs, Wim; Versolato, Oscar O.
2018-01-01
The measurement of the propulsion of metallic microdroplets exposed to nanosecond laser pulses provides an elegant method for probing the ablation pressure in a dense laser-produced plasma. We present measurements of the propulsion velocity over three decades of the driving Nd:YAG laser pulse energy and observe a near-perfect power-law dependence. Simulations performed with the RALEF-2D radiation-hydrodynamic code are shown to be in good agreement with the power law above a specific threshold energy. The simulations highlight the importance of radiative losses, which significantly modify the power of the pressure scaling. Having found good agreement between the experiment and the simulations, we investigate the analytic origins of the obtained power law and conclude that none of the available analytic theories is directly applicable for explaining our power exponent.
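Extracting a power-law exponent from data spanning several decades is usually done by linear least squares in log-log space. A sketch with synthetic, noise-free values (the coefficient and exponent are invented, not the paper's):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = C * x**p by linear least squares on (log x, log y);
    returns (C, p)."""
    p, logC = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(logC), p

# Synthetic "propulsion velocity" obeying v = 0.5 * E**0.6 over three
# decades of pulse energy E
E = np.logspace(0, 3, 20)
v = 0.5 * E**0.6
C, p = fit_power_law(E, v)
print(round(C, 3), round(p, 3))  # → 0.5 0.6
```

With noisy measurements the fitted exponent would carry an uncertainty from the regression, which is what makes "near-perfect" power-law behaviour over three decades notable.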
Bayesian evaluation of effect size after replicating an original study
van Aert, Robbie C. M.; van Assen, Marcel A. L. M.
2017-01-01
The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project Psychology (RPP) and Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. Both the original study and its replication were statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then applied the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the included studies in EE-RP are generally larger than in RPP, but that the sample sizes, especially of the included studies in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
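The core "snapshot" idea of posterior model probabilities over a small grid of effect sizes can be sketched as follows. This is a deliberate simplification: it assumes a normal likelihood and equal prior model odds, and it omits the published method's adjustment that conditions on the original study's significance, so the numbers are illustrative only:

```python
import math

def snapshot_probabilities(est, se, effects=(0.0, 0.1, 0.3, 0.5)):
    """Posterior model probabilities for zero/small/medium/large true
    effects given an estimate and its standard error, assuming a normal
    likelihood and equal prior model probabilities. (The published
    snapshot hybrid additionally corrects for the original study's
    statistical significance.)"""
    lik = [math.exp(-0.5 * ((est - e) / se) ** 2) for e in effects]
    total = sum(lik)
    return [l / total for l in lik]

# A replication estimate of 0.08 with SE 0.05 puts most posterior mass
# on the zero and small effects
probs = snapshot_probabilities(0.08, 0.05)
print([round(p, 2) for p in probs])
```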
Compositional analysis of biomass reference materials: Results from an interlaboratory study
Templeton, David W.; Wolfrum, Edward J.; Yen, James H.; ...
2015-10-29
Biomass compositional methods are used to compare different lignocellulosic feedstocks, to measure component balances around unit operations and to determine process yields and therefore the economic viability of biomass-to-biofuel processes. Four biomass reference materials (RMs NIST 8491–8494) were prepared and characterized via an interlaboratory comparison exercise in the early 1990s to evaluate biomass summative compositional methods, analysts, and laboratories. Having common, uniform, and stable biomass reference materials gives the opportunity to assess compositional data compared to other analysts, to other labs, and to a known compositional value. The expiration date for the original characterization of these RMs was reached, and an effort to assess their stability and recharacterize the reference values for the remaining material using more current methods of analysis was initiated. We sent samples of the four biomass RMs to 11 academic, industrial, and government laboratories, familiar with sulfuric acid compositional methods, for recharacterization of the component reference values. In this work, we have used an expanded suite of analytical methods that are more appropriate for herbaceous feedstocks to recharacterize the RMs' compositions. We report the median values and the expanded uncertainty values for the four RMs on a dry-mass, whole-biomass basis. The original characterization data have been recalculated using median statistics to facilitate comparisons with these data. We found improved total component closures for three out of the four RMs compared to the original characterization, and the total component closures were near 100%, which suggests that most components were accurately measured and little double counting occurred. The major components were not statistically different in the recharacterization, which suggests that the biomass materials are stable during storage and that additional components, not seen in the original characterization, were quantified here.
TRACE-ELEMENT ANALYSIS OF LIMESTONES IN THE SERVICE OF ARCHAEOLOGY (Dosage des éléments en trace des calcaires au service de l'archéologie)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BLANC,A.; HOLMES,L.; HARBOTTLE,G.
1998-05-01
Numerous quarries in the Lutetian limestone formations of the Paris Basin provided stone for the building and the decoration of monuments from antiquity to the present. To determine the origin of stone used for masonry and sculptures in these monuments, a team of geologists and archaeologists has investigated 300 quarries and collected 2,300 samples. Petrographic and paleontologic examination of thin sections allows geologists to distinguish Lutetian limestones from Jurassic and Cretaceous limestones. Geologists also seek to formulate hypotheses regarding the origin of Lutetian limestones used for building and sculpture in the Paris region. In the search for the sources of building and sculptural stone, the analytical methods of geologists are limited because often several quarries produce the same lithofacies. A new tool is now available, however, to attack questions of provenance raised by art historians. Because limestones from different sources have distinctive patterns of trace-element concentrations, compositional analysis by neutron activation makes it possible to compare building or sculptural stone from one monument with stone from quarries or other monuments. This analytical method subjects a powdered limestone sample to standard neutron activation analysis procedures at Brookhaven National Laboratory. With the help of computer programs, the compositional fingerprints of Lutetian limestones can be determined and stored in a database. The limestone database contains data for approximately 2,100 samples from monuments, sculptures and quarries. It is particularly rich in samples from the Paris Basin.
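The matching of a sample's trace-element fingerprint to a source can be sketched as a nearest-centroid assignment. The quarry names, elements, and concentrations below are entirely hypothetical, and provenance studies typically use log-transformed concentrations and Mahalanobis distances rather than the plain Euclidean distance used here for brevity:

```python
import math

def nearest_quarry(sample, quarry_fingerprints):
    """Assign a limestone sample to the quarry whose mean trace-element
    fingerprint is closest in Euclidean distance."""
    return min(quarry_fingerprints,
               key=lambda q: math.dist(sample, quarry_fingerprints[q]))

# Hypothetical mean fingerprints (ppm of, say, Sc, Fe, Co, Th)
quarries = {
    "quarry_A": (2.1, 900.0, 1.2, 0.8),
    "quarry_B": (4.5, 1500.0, 2.0, 1.6),
    "quarry_C": (1.0, 400.0, 0.6, 0.3),
}
print(nearest_quarry((2.0, 950.0, 1.1, 0.9), quarries))  # → quarry_A
```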
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
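The building block of such forward models is the potential of a current dipole in an infinite homogeneous conductor, V = p·(r − r₀) / (4πσ|r − r₀|³); the four-sphere formulas correct this expression for the layered brain/CSF/skull/scalp geometry. A sketch of the homogeneous-medium case (the conductivity value is a commonly quoted brain-tissue figure, used here only as an assumption):

```python
import numpy as np

def dipole_potential(p, r_dipole, r_electrode, sigma=0.33):
    """Potential (V) of a current dipole p (A*m) at r_dipole in an
    infinite homogeneous conductor of conductivity sigma (S/m):
    V = p . (r - r0) / (4*pi*sigma*|r - r0|^3)."""
    d = np.asarray(r_electrode, float) - np.asarray(r_dipole, float)
    dist = np.linalg.norm(d)
    return np.dot(p, d) / (4 * np.pi * sigma * dist**3)

# Radial dipole of 1 nA*m with an "electrode" 5 cm directly above it;
# the potential is positive on the side the dipole points toward
v = dipole_potential(p=[0.0, 0.0, 1e-9], r_dipole=[0.0, 0.0, 0.0],
                     r_electrode=[0.0, 0.0, 0.05])
print(v > 0)
```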
Foster, Tobias
2011-09-01
A novel analytical and continuous density distribution function with a widely variable shape is reported and used to derive an analytical scattering form factor that allows us to universally describe the scattering from particles with the radial density profile of homogeneous spheres, shells, or core-shell particles. Composed of the sum of two Fermi-Dirac distribution functions, the shape of the density profile can be altered continuously from step-like via Gaussian-like or parabolic to asymptotically hyperbolic by varying a single "shape parameter", d. Using this density profile, the scattering form factor can be calculated numerically. An analytical form factor can be derived using an approximate expression for the original Fermi-Dirac distribution function. This approximation is accurate for sufficiently small rescaled shape parameters, d/R (R being the particle radius), up to values of d/R ≈ 0.1, and thus captures step-like, Gaussian-like, and parabolic as well as asymptotically hyperbolic profile shapes. It is expected that this form factor is particularly useful in a model-dependent analysis of small-angle scattering data since the applied continuous and analytical function for the particle density profile can be compared directly with the density profile extracted from the data by model-free approaches like the generalized inverse Fourier transform method. © 2011 American Chemical Society
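The numerical route mentioned above can be sketched directly: integrate F(q) = 4π ∫ ρ(r) r² sin(qr)/(qr) dr for a single Fermi-Dirac profile (a smoothed sphere; the paper's profile sums two such functions) and check that for a small shape parameter d it approaches the textbook homogeneous-sphere amplitude. All numbers here are illustrative:

```python
import numpy as np

def fermi_dirac_profile(r, R=1.0, d=0.02):
    """Smoothed sphere profile rho(r) = 1/(1 + exp((r - R)/d)); as d -> 0
    it approaches a homogeneous sphere of radius R."""
    return 1.0 / (1.0 + np.exp((r - R) / d))

def form_factor_numeric(q, rho, r_max, n=20000):
    """Scattering amplitude of a spherically symmetric density,
    F(q) = 4*pi * integral of rho(r) * r^2 * sin(q r)/(q r) dr,
    evaluated with the trapezoid rule."""
    r = np.linspace(1e-9, r_max, n)
    f = rho(r) * r**2 * np.sin(q * r) / (q * r)
    dr = r[1] - r[0]
    return 4 * np.pi * (f.sum() - 0.5 * (f[0] + f[-1])) * dr

# Sanity check against the analytic homogeneous-sphere amplitude
# V * 3*(sin x - x cos x)/x^3 with x = qR
q, R = 2.0, 1.0
num = form_factor_numeric(q, lambda r: fermi_dirac_profile(r, R, d=0.005), 2.0)
x, V = q * R, 4.0 / 3.0 * np.pi * R**3
analytic = V * 3 * (np.sin(x) - x * np.cos(x)) / x**3
print(abs(num / analytic - 1) < 0.01)
```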
NASA Astrophysics Data System (ADS)
Hreniuc, V.; Hreniuc, A.; Pescaru, A.
2017-08-01
Solving a general strength problem for a ship hull may be done using analytical approaches, which are useful for deducing the distribution of buoyancy forces and of weight forces along the hull, as well as the geometrical characteristics of the sections. These data are used to draw the free-body diagrams and to compute the stresses. General strength problems require a large amount of calculation, so it is of interest how a computer may be used to solve them. Using computer programming, an engineer may conceive software instruments based on analytical approaches. However, before developing the computer code, the research topic must be thoroughly analysed; in this way a meta-level of understanding of the problem is reached. The following stage is to conceive an appropriate development strategy for the original software instruments, useful for the rapid development of computer-aided analytical models. The geometrical characteristics of the sections may be computed using a Boolean algebra that operates with 'simple' geometrical shapes. By 'simple' we mean shapes for which direct calculus relations are available. The set of 'simple' shapes also includes geometrical entities bounded by curves approximated as spline functions or as polygons. To conclude, computer programming offers the necessary support for solving general strength ship-hull problems using analytical methods.
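The idea of composing section properties from 'simple' shapes can be sketched as a sum over (area, centroid) pairs, with holes entering as negative areas. The shapes and dimensions below are an invented, idealized hull section, not taken from the paper:

```python
def composite_section(shapes):
    """Total area and centroid height of a cross-section assembled from
    'simple' shapes given as (area, centroid_z) pairs; holes enter with
    negative area. This mirrors the boolean-algebra-of-shapes idea."""
    area = sum(a for a, _ in shapes)
    zbar = sum(a * z for a, z in shapes) / area
    return area, zbar

# Idealized section: a bottom plate plus two side plates (m^2, m)
shapes = [(0.6, 0.01), (0.2, 0.5), (0.2, 0.5)]
area, zbar = composite_section(shapes)
print(round(area, 3), round(zbar, 3))  # → 1.0 0.206
```

The same accumulation pattern extends to second moments of area, which is what the stress computation from the free-body diagrams ultimately needs.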
Analytical applications of microbial fuel cells. Part I: Biochemical oxygen demand.
Abrevaya, Ximena C; Sacco, Natalia J; Bonetto, Maria C; Hilding-Ohlsson, Astrid; Cortón, Eduardo
2015-01-15
Microbial fuel cells (MFCs) are bio-electrochemical devices where usually the anode (but sometimes the cathode, or both) contains microorganisms able to generate and sustain an electrochemical gradient, which is typically used to generate electrical power. In the most studied set-up, the anode contains heterotrophic bacteria in anaerobic conditions, capable of oxidizing organic molecules and releasing protons and electrons, as well as other by-products. Released protons can reach the cathode (through a membrane or not), whereas electrons travel across an external circuit, giving rise to an easily measurable direct current flow. MFCs have been proposed fundamentally as electric power producing devices or, more recently, as hydrogen producing devices. Here we review the still incipient development of analytical uses of MFCs or related devices or set-ups, in the light of a non-restrictive MFC definition, as promising tools to assess water quality or other measurable parameters. An introduction to biologically based analytical methods, including bioassays and biosensors, as well as MFC design and operating principles, is also included. In addition, the use of MFCs as biochemical oxygen demand sensors (perhaps the main analytical application of MFCs) is discussed. In a companion review (Part 2), other new analytical applications are covered, including toxicity sensors, metabolic sensors, life detectors, and other proposed applications. Copyright © 2014 Elsevier B.V. All rights reserved.
Advanced industrial fluorescence metrology used for qualification of high quality optical materials
NASA Astrophysics Data System (ADS)
Engel, Axel; Becker, Hans-Juergen; Sohr, Oliver; Haspel, Rainer; Rupertus, Volker
2003-11-01
Schott Glas develops and produces optical materials for various specialized applications in telecommunications, biomedical, optical, and microlithography technology. The quality requirements for optical materials are extremely high and still increasing. For example, in microlithography applications the impurities of the material are specified to be in the low-ppb range. Usually, impurities in the low-ppb range are determined using analytical methods such as LA-ICP-MS and neutron activation analysis. On the other hand, the absorption and laser resistivity of optical material are qualified with optical methods such as precision spectral photometers and in-situ transmission measurements using UV lasers. The analytical methods have the drawback that they are time consuming and rather expensive, whereas the sensitivity of the absorption method will not be sufficient to characterize future needs (coefficients well below 10⁻³ cm⁻¹). In order to achieve the current and future quality requirements, a Jobin Yvon FLUOROLOG 3.22 fluorescence spectrometer is employed to enable fast and precise qualification and analysis. The main advantage of this setup is the combination of highest sensitivity (more than one order of magnitude more sensitive than state-of-the-art UV absorption spectroscopy) and fast measurement and evaluation cycles (several minutes, compared with the several hours necessary for chemical analysis). An overview is given of the spectral characteristics obtained using specified standards. Moreover, correlations with material quality are shown. In particular, we have investigated the elementary fluorescence and absorption of rare-earth-element impurities as well as defect-induced luminescence originating from impurities.
Kolpin, D.W.; Goolsby, D.A.; Thurman, E.M.
1995-01-01
In 1992, the U.S. Geological Survey (USGS) determined the distribution of pesticides in near-surface aquifers of the midwestern USA to be much more widespread than originally determined during a 1991 USGS study. The frequency of pesticide detection increased from 28.4% during the 1991 study to 59.0% during the 1992 study. This increase in pesticide detection was primarily the result of a more sensitive analytical method that used reporting limits as much as 20 times lower than previously available and a threefold increase in the number of pesticide metabolites analyzed. No pesticide concentrations exceeded the U.S. Environmental Protection Agency's (USEPA's) maximum contaminant levels or health advisory levels for drinking water. However, five of the six most frequently detected compounds during 1992 were pesticide metabolites that currently do not have established drinking water standards. The frequent presence of pesticide metabolites in this study documents the importance of obtaining information on these compounds to understand the fate and transport of pesticides in the hydrologic system. It appears that the 56 parent compounds analyzed follow pathways through the hydrologic system similar to that of atrazine. When atrazine was detected by routine or sensitive analytical methods, there was an increased likelihood of detecting additional parent compounds. As expected, the frequency of pesticide detection was highly dependent on the analytical reporting limit. The number of atrazine detections more than doubled as the reporting limit decreased from 0.10 to 0.01 µg/L. The 1992 data provided no indication that the frequency of pesticide detection would level off as improved analytical methods provide concentrations below 0.003 µg/L.
A relation was determined between groundwater age and the frequency of pesticide detection: at least one pesticide or metabolite was detected in 15.8% of the samples composed of pre-1953 water but in 70.3% of the samples composed of post-1953 water. Pre-1953 water is less likely to contain pesticides because it tends to predate the use of pesticides to increase crop production in the Midwest. Pre-1953 water was more likely to occur in the near-surface bedrock aquifers (50.0%) than in the near-surface unconsolidated aquifers (9.1%) sampled.
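The strong dependence of detection frequency on the reporting limit can be illustrated with a Monte Carlo sketch: if trace concentrations follow a right-skewed (here log-normal) distribution, lowering the censoring threshold sharply raises the detected fraction. The distribution parameters below are invented for illustration and are not the USGS data:

```python
import numpy as np

def detection_frequency(concentrations, reporting_limit):
    """Fraction of samples at or above the analytical reporting limit."""
    return float(np.mean(concentrations >= reporting_limit))

# Synthetic log-normal concentration distribution (µg/L), illustrative only
rng = np.random.default_rng(1)
conc = rng.lognormal(mean=np.log(0.02), sigma=1.5, size=100_000)
for limit in (0.10, 0.01):
    print(limit, round(detection_frequency(conc, limit), 2))
```

Under these assumed parameters the detected fraction more than doubles when the limit drops from 0.10 to 0.01 µg/L, mirroring the qualitative behaviour reported for atrazine.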
ESTIMATING UNCERTAINTIES IN FACTOR ANALYTIC MODELS
When interpreting results from factor analytic models as used in receptor modeling, it is important to quantify the uncertainties in those results. For example, if the presence of a species on one of the factors is necessary to interpret the factor as originating from a certain ...
CAE "FOCUS" for modelling and simulating electron optics systems: development and application
NASA Astrophysics Data System (ADS)
Trubitsyn, Andrey; Grachev, Evgeny; Gurov, Victor; Bochkov, Ilya; Bochkov, Victor
2017-02-01
Electron optics is a theoretical basis of scientific instrument engineering. Mathematical simulation of the processes involved is the basis of the contemporary design of complicated electron-optics devices. Problems of numerical mathematical simulation are effectively solved by CAE systems. CAE "FOCUS", developed by the authors, includes fast and accurate methods: the boundary element method (BEM) for electric field calculation, the Runge-Kutta-Fehlberg method for charged-particle trajectory computation with control over the accuracy of the calculations, and original methods for finding the conditions for angular and time-of-flight focusing. CAE "FOCUS" is organized as a collection of modules, each of which solves an independent (sub)task. A range of physical and analytical devices, in particular a high-power microfocus X-ray tube, has been developed using this software.
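Trajectory tracing of the kind described rests on Runge-Kutta integration of the equations of motion. The sketch below uses the fixed-step classical RK4 relative of the adaptive Runge-Kutta-Fehlberg scheme, applied to an invented uniform-field example where the analytic answer is known (none of this reproduces CAE "FOCUS" itself):

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step (the fixed-step
    relative of the adaptive Runge-Kutta-Fehlberg scheme)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Charged particle in a uniform axial field: y = [z, vz],
# dz/dt = vz, dvz/dt = a (constant acceleration, arbitrary units)
a = 1.0e3

def rhs(t, y):
    return np.array([y[1], a])

y, t, h = np.array([0.0, 0.0]), 0.0, 1e-3
for _ in range(1000):
    y = rk4_step(rhs, t, y, h)
    t += h
print(round(y[0], 6))  # analytic: 0.5*a*t^2 = 500.0 at t = 1
```

An RKF-style integrator would additionally compare an embedded lower-order estimate at each step to adapt the step size, which matters for the strongly varying fields of real electron-optical systems.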
An accurate boundary element method for the exterior elastic scattering problem in two dimensions
NASA Astrophysics Data System (ADS)
Bao, Gang; Xu, Liwei; Yin, Tao
2017-11-01
This paper is concerned with a Galerkin boundary element method for solving the two-dimensional exterior elastic wave scattering problem. The original problem is first reduced to the so-called Burton-Miller [1] boundary integral formulation, and the essential mathematical features of its variational form are discussed. In the numerical implementation, a newly derived and analytically accurate regularization formula [2] is employed for the numerical evaluation of the hyper-singular boundary integral operator. A new computational approach based on series expansions of Hankel functions is employed for the computation of the weakly singular boundary integral operators during the reduction of the corresponding Galerkin equations to a discrete linear system. The effectiveness of the proposed numerical methods is demonstrated using several numerical examples.
Water hammer prediction and control: the Green's function method
NASA Astrophysics Data System (ADS)
Xuan, Li-Jun; Mao, Feng; Wu, Jie-Zhi
2012-04-01
Using the Green's function method, we show that the water hammer (WH) can be analytically predicted for both laminar and turbulent flows (for the latter, with an eddy viscosity depending solely on the space coordinates), and thus its hazardous effect can be rationally controlled and minimized. To this end, we generalize a laminar water hammer equation of Wang et al. (J. Hydrodynamics, B2, 51, 1995) to include arbitrary initial conditions and variable viscosity, and obtain its solution by the Green's function method. The characteristic WH behaviors predicted by the solutions are in excellent agreement with both direct numerical simulation of the original governing equations and, by adjusting the eddy viscosity coefficient, experimentally measured turbulent flow data. An optimal WH control principle is thereby constructed and demonstrated.
Energy analysis in the elliptic restricted three-body problem
NASA Astrophysics Data System (ADS)
Qi, Yi; de Ruiter, Anton
2018-07-01
The gravity assist or flyby is investigated by analysing the inertial energy of a test particle in the elliptic restricted three-body problem (ERTBP), where two primary bodies are moving in elliptic orbits. First, the expression for the derivative of the energy is obtained and discussed. Then, approximate expressions for the energy change in a circular neighbourhood of the smaller primary are derived. Numerical computation indicates that the obtained expressions can be applied to study the flyby problem for the nine planets and the Moon in the Solar system. Parameters related to the flyby are discussed analytically and numerically. The optimal conditions, including the position and time of the periapsis, for a flyby orbit to achieve maximum energy gain or loss are found. Finally, the mechanical process of a flyby orbit is uncovered by an approximate expression in the ERTBP. Numerical computations confirm that our analytical results closely approximate the mechanical process of flyby orbits obtained by numerical simulation in the ERTBP. Compared with previous research based on the patched-conic method and numerical calculation, our analytical investigation rests on a more elaborate derivation and yields new results.
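The patched-conic baseline that the ERTBP analysis improves upon can be sketched in a few lines: rotate the hyperbolic excess velocity by the turn angle δ = 2·arcsin(1/e), with e = 1 + r_p·v∞²/μ, and compare kinetic energies in the primary-centred frame. All numbers below are dimensionless illustrations, not real planetary data:

```python
import math

def flyby_energy_change(v_in, v_planet, r_p, mu):
    """Patched-conic estimate of the specific-energy change from a flyby:
    rotate the incoming hyperbolic excess velocity by the hyperbolic turn
    angle and compare speeds in the primary-centred (e.g. Sun-centred)
    frame. v_in, v_planet: 2-D velocities; r_p: periapsis radius; mu: GM
    of the flyby body. Any consistent unit system works."""
    vinf = (v_in[0] - v_planet[0], v_in[1] - v_planet[1])
    speed_inf = math.hypot(*vinf)
    e = 1 + r_p * speed_inf**2 / mu              # hyperbolic eccentricity
    delta = 2 * math.asin(1 / e)                 # turn angle
    c, s = math.cos(delta), math.sin(delta)
    vinf_out = (c * vinf[0] - s * vinf[1], s * vinf[0] + c * vinf[1])
    v_out = (v_planet[0] + vinf_out[0], v_planet[1] + vinf_out[1])
    return (0.5 * (v_out[0]**2 + v_out[1]**2)
            - 0.5 * (v_in[0]**2 + v_in[1]**2))

# Illustrative geometry; whether energy is gained or lost depends on
# which side of the body the periapsis lies (the rotation sense here)
dE = flyby_energy_change(v_in=(1.0, 0.2), v_planet=(1.0, 0.0),
                         r_p=1.0, mu=1.0)
print(dE < 0)  # this particular geometry loses heliocentric energy
```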
Not All Biofluids Are Created Equal: Chewing Over Salivary Diagnostics and the Epigenome
Wren, M.E.; Shirtcliff, E.A.; Drury, Stacy S.
2015-01-01
Purpose This article describes progress to date in the characterization of the salivary epigenome and considers the importance of previous work on the salivary microbiome, proteome, endocrine analytes, genome, and transcriptome. Methods PubMed and Web of Science were used to extensively search the existing literature (original research and reviews) related to salivary diagnostics and biomarker development, of which 125 studies were examined. This article was derived from the 73 most relevant sources, highlighting the recent state of the evolving field of salivary epigenomics and contributing significantly to the foundational work in saliva-based research. Findings Validation of any new saliva-based diagnostic or analyte will require comparison to previously accepted standards established in blood. Careful attention to the collection, processing, and analysis of salivary analytes is critical for the development and implementation of newer applications that include genomic, transcriptomic, and epigenomic markers. All these factors must be integrated into initial study design. Implications This commentary highlights the appeal of the salivary epigenome for translational applications and its utility in future studies of development and the interface among environment, disease, and health. PMID:25778408
Energy Analysis in the Elliptic Restricted Three-body Problem
NASA Astrophysics Data System (ADS)
Qi, Yi; de Ruiter, Anton
2018-05-01
The gravity assist or flyby is investigated by analyzing the inertial energy of a test particle in the elliptic restricted three-body problem (ERTBP), where two primary bodies are moving in elliptic orbits. First, the expression for the derivative of the energy is obtained and discussed. Then, approximate expressions for the energy change in a circular neighborhood of the smaller primary are derived. Numerical computation indicates that the obtained expressions can be applied to study the flyby problem of the nine planets and the Moon in the solar system. Parameters related to the flyby are discussed analytically and numerically. The optimal conditions, including the position and time of the periapsis, for a flyby orbit to achieve a maximum energy gain or loss are found. Finally, the mechanical process of a flyby orbit is uncovered by an approximate expression in the ERTBP. Numerical computations testify that our analytical results approximate well the mechanical process of flyby orbits obtained by numerical simulation in the ERTBP. Compared with previous research based on the patched-conic method and numerical calculation, our analytical investigation, based on a more elaborate derivation, yields additional original results.
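The patched-conic picture that the paper improves upon can be sketched in a few lines: the flyby rotates the hyperbolic excess velocity by the turn angle, and the heliocentric specific-energy change is bounded by the planet's orbital speed times the magnitude of that velocity rotation. The numbers below are illustrative Jupiter-like values, not results from the paper.

```python
import math

def turn_angle(v_inf, r_p, mu):
    """Hyperbolic turn angle (patched conic):
    sin(delta/2) = 1 / (1 + r_p * v_inf**2 / mu)."""
    return 2.0 * math.asin(1.0 / (1.0 + r_p * v_inf ** 2 / mu))

def max_energy_change(v_planet, v_inf, r_p, mu):
    """Upper bound on the specific-energy change of a flyby: since the
    planet-frame speed is conserved, dE = V_planet . dv <= |V_planet||dv|."""
    dv = 2.0 * v_inf * math.sin(turn_angle(v_inf, r_p, mu) / 2.0)
    return v_planet * dv

# Jupiter-like flyby: mu ~ 1.267e17 m^3/s^2, heliocentric speed ~13.1 km/s,
# v_inf = 6 km/s, periapsis radius 2.0e8 m (all values illustrative)
mu_jup = 1.267e17
de = max_energy_change(13.1e3, 6.0e3, 2.0e8, mu_jup)
print(round(de / 1e6, 1), "MJ/kg")
```

The bound is attained when the velocity rotation is aligned with the planet's heliocentric velocity, which is the kind of optimal-geometry question the paper answers analytically in the ERTBP.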
Augmented kludge waveforms for detecting extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.
2017-08-01
The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.
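The quoted overlaps of 0.97 are normalized inner products between waveforms. A minimal white-noise version of the overlap statistic (a deliberate simplification of the detector-noise-weighted inner product actually used for LISA analyses) can be sketched as:

```python
import math

def inner(a, b):
    """Discrete white-noise inner product <a|b> of two real time series."""
    return sum(x * y for x, y in zip(a, b))

def overlap(a, b):
    """Normalized overlap in [-1, 1]; 1 means identical up to amplitude."""
    return inner(a, b) / math.sqrt(inner(a, a) * inner(b, b))

# Two slightly detuned sinusoids stand in for kludge vs. fiducial waveforms
t = [i * 0.01 for i in range(1000)]
h1 = [math.sin(2 * math.pi * 1.00 * x) for x in t]
h2 = [math.sin(2 * math.pi * 1.01 * x) for x in t]
print(round(overlap(h1, h1), 3))  # 1.0 by construction
print(round(overlap(h1, h2), 2))  # below 1: frequency mismatch costs overlap
```

In practice the inner product is weighted by the inverse noise power spectral density in the frequency domain, and the overlap is maximized over time and phase shifts; the normalization idea is the same.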
Bigler, Erin D; Farrer, Thomas J; Pertab, Jon L; James, Kelly; Petrie, Jo Ann; Hedges, Dawson W
2013-01-01
In 2009 Pertab, James, and Bigler published a critique of two prior meta-analyses by Binder, Rohling, and Larrabee (1997) and Frencham, Fox, and Maybery (2005) that showed a small effect size difference at least 3 months post-injury in individuals who had sustained a mild traumatic brain injury (mTBI). The Binder et al. and Frencham et al. meta-analyses have been widely cited as showing no lasting effect of mTBI. In their critique, Pertab et al. (2009) point out many limitations of these two prior meta-analyses, demonstrating that, depending on how inclusion/exclusion criteria were defined, different meta-analytic findings occur, some supporting the persistence of neuropsychological impairments beyond 3 months. Rohling et al. (2011) have now critiqued Pertab et al. (2009). Herein we respond to the Rohling et al. (2011) critique, reaffirming the original findings of Pertab et al. (2009) and providing additional details concerning the flaws in prior meta-analytic mTBI studies and the effects on neuropsychological performance.
The cloud radiation impact from optics simulation and airborne observation
NASA Astrophysics Data System (ADS)
Melnikova, Irina; Kuznetsov, Anatoly; Gatebe, Charles
2017-02-01
The analytical approach of inverse asymptotic formulas of radiative transfer theory is used for solving inverse problems of cloud optics. The method has advantages because it does not impose strict constraints but is tied directly to the desired solution. Observations were carried out in extended stratus cloudiness above a homogeneous ocean surface. Data from NASA's Cloud Absorption Radiometer (CAR) during two airborne experiments (SAFARI-2000 and ARCTAS-2008) were analyzed. The analytical method of inverse asymptotic formulas was used to retrieve cloud optical parameters (optical thickness, single scattering albedo, and asymmetry parameter of the phase function) and ground albedo in all 8 spectral channels independently. The method is free from a priori restrictions, is not linked to assumed parameter values, and has been applied to data sets of different origin and observation geometry. Results obtained from different airborne, satellite, and ground-based radiative experiments appeared consistent and showed common features in the values of cloud parameters and their spectral dependence (Vasiluev, Melnikova, 2004; Gatebe et al., 2014). The optical parameters retrieved here are used for the calculation of radiative divergence, reflected and transmitted irradiance, and heating rates in the cloudy atmosphere, which agree with previous observational data.
Curriculum Mapping with Academic Analytics in Medical and Healthcare Education
Komenda, Martin; Víta, Martin; Vaitsis, Christos; Schwarz, Daniel; Pokorná, Andrea; Zary, Nabil; Dušek, Ladislav
2015-01-01
Background No universal solution, based on an approved pedagogical approach, exists to parametrically describe, effectively manage, and clearly visualize a higher education institution's curriculum, including tools for unveiling relationships inside curricular datasets. Objective We aim to solve the issue of medical curriculum mapping in order to improve understanding of the complex structure and content of medical education programs. Our effort is based on the long-term development and implementation of an original web-based platform, which supports an outcomes-based approach to medical and healthcare education and is suitable for repeated updates and adaptation to curriculum innovations. Methods We adopted data exploration and visualization approaches in the context of curriculum innovations at higher education institutions. We have developed a robust platform, covering detailed formal metadata specifications down to the level of learning units, interconnections, and learning outcomes, in accordance with Bloom's taxonomy and with direct links to a particular biomedical nomenclature. Furthermore, we used selected modeling techniques and data mining methods to generate academic analytics reports from medical curriculum mapping datasets. Results We present a solution that allows users to effectively optimize a curriculum structure described with appropriate metadata, such as course attributes, learning units and outcomes, a standardized vocabulary nomenclature, and a tree structure of essential terms. We present a case study implementation that includes effective support for the curriculum reengineering efforts of academics through a comprehensive overview of the General Medicine study program.
Moreover, we introduce a deep content analysis of a dataset captured with the curriculum mapping platform; this may assist in detecting potentially problematic areas and hence help to construct a comprehensive overview for subsequent global in-depth medical curriculum inspection. Conclusions We have proposed, developed, and implemented an original framework for medical and healthcare curriculum innovation and harmonization, including a planning model, a mapping model, and selected academic analytics extracted with the use of data mining. PMID:26624281
Husáková, Lenka; Urbanová, Iva; Šafránková, Michaela; Šídová, Tereza
2017-12-01
In this work a simple, efficient, and environmentally friendly method is proposed for the determination of Be in soil and sediment samples employing slurry sampling and high-resolution continuum source electrothermal atomic absorption spectrometry (HR-CS-ETAAS). The spectral effects originating from SiO species were identified and successfully corrected by means of a mathematical correction algorithm. Fractional factorial design was employed to assess the parameters affecting the analytical results and especially to help in the development of the slurry preparation and the optimization of measuring conditions. The effects of seven analytical variables, including particle size, the concentrations of glycerol and HNO3 (for stabilization and analyte extraction, respectively), the effect of ultrasonic agitation for slurry homogenization, the concentration of chemical modifier, and the pyrolysis and atomization temperatures, were investigated by a 2^(7-3) replicate (n = 3) design. Using the optimized experimental conditions, the proposed method allowed the determination of Be with a detection limit of 0.016 mg kg⁻¹ and a characteristic mass of 1.3 pg. Optimum results were obtained after preparing the slurries by weighing 100 mg of a sample with particle size < 54 µm and adding 25 mL of 20% w/w glycerol. The use of 1 µg Rh and 50 µg citric acid was found satisfactory for analyte stabilization. Accurate data were obtained with the use of matrix-free calibration. The accuracy of the method was confirmed by analysis of two certified reference materials (NIST SRM 2702 Inorganics in Marine Sediment and IGI BIL-1 Baikal Bottom Silt) and by comparison of the results obtained for ten real samples by slurry sampling with those determined after microwave-assisted extraction by inductively coupled plasma time-of-flight mass spectrometry (TOF-ICP-MS). The reported method has a precision better than 7%. Copyright © 2017 Elsevier B.V. All rights reserved.
Identification of species origin of meat and meat products on the DNA basis: a review.
Kumar, Arun; Kumar, Rajiv Ranjan; Sharma, Brahm Deo; Gokulakrishnan, Palanisamy; Mendiratta, Sanjod Kumar; Sharma, Deepak
2015-01-01
The adulteration/substitution of meat has always been a concern for various reasons such as public health, religious factors, wholesomeness, and unhealthy competition in meat market. Consumer should be protected from these malicious practices of meat adulterations by quick, precise, and specific identification of meat animal species. Several analytical methodologies have been employed for meat speciation based on anatomical, histological, microscopic, organoleptic, chemical, electrophoretic, chromatographic, or immunological principles. However, by virtue of their inherent limitations, most of these techniques have been replaced by the recent DNA-based molecular techniques. In the last decades, several methods based on polymerase chain reaction have been proposed as useful means for identifying the species origin in meat and meat products, due to their high specificity and sensitivity, as well as rapid processing time and low cost. This review intends to provide an updated and extensive overview on the DNA-based methods for species identification in meat and meat products.
The use of noise equivalent count rate and the NEMA phantom for PET image quality evaluation.
Yang, Xin; Peng, Hao
2015-03-01
PET image quality is directly associated with two important parameters among others: count-rate performance and image signal-to-noise ratio (SNR). The framework of noise equivalent count rate (NECR) was developed back in the 1990s and has been widely used since then to evaluate count-rate performance for PET systems. The concept of NECR is not entirely straightforward, however, and among the issues requiring clarification are its original definition, its relationship to image quality, and its consistency among different derivation methods. In particular, we try to answer whether a higher NECR measurement using a standard NEMA phantom actually corresponds to better imaging performance. The paper includes the following topics: 1) revisiting the original analytical model for NECR derivation; 2) validating three methods for NECR calculation based on the NEMA phantom/standard; and 3) studying the spatial dependence of NECR and quantitative relationship between NECR and image SNR. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
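The textbook form of the figure of merit discussed here is NEC = T²/(T + S + kR), with trues T, scatters S, randoms R, and a randoms factor k that depends on the randoms-correction method; image SNR is then commonly taken to scale as the square root of NEC. A minimal sketch with illustrative count rates (not values from the paper):

```python
def necr(trues, scatters, randoms, k=2.0):
    """Noise equivalent count rate: NEC = T**2 / (T + S + k*R).
    k = 2 for delayed-window randoms subtraction, k = 1 for a
    noiseless randoms estimate."""
    return trues ** 2 / (trues + scatters + k * randoms)

# Example: 100 kcps trues, 40 kcps scatter, 30 kcps randoms
print(round(necr(100.0, 40.0, 30.0), 1))          # 50.0 (k = 2)
print(round(necr(100.0, 40.0, 30.0, k=1.0), 1))   # 58.8 (k = 1)
```

The spread between the k = 1 and k = 2 values is one concrete reason the paper's question of consistency among derivation methods matters: different NEMA-phantom processing choices change the reported NECR for the same scanner.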
Recommendation for the review of biological reference intervals in medical laboratories.
Henny, Joseph; Vassault, Anne; Boursier, Guilaine; Vukasovic, Ines; Mesko Brguljan, Pika; Lohmander, Maria; Ghita, Irina; Andreu, Francisco A Bernabeu; Kroupis, Christos; Sprongl, Ludek; Thelen, Marc H M; Vanstapel, Florent J L A; Vodnik, Tatjana; Huisman, Willem; Vaubourdolle, Michel
2016-12-01
This document is based on the original recommendation of the Expert Panel on the Theory of Reference Values of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC); updated guidelines were recently published under the auspices of the IFCC and the Clinical and Laboratory Standards Institute (CLSI). This document summarizes proposals for recommendations on: (i) the terminology, which is often confusing, notably concerning the terms reference limits and decision limits; (ii) the method for the determination of reference limits according to the original procedure and the conditions under which it should be used; and (iii) a simple procedure allowing medical laboratories to fulfill the requirements of the regulation and standards. The updated document proposes to verify that published reference limits are applicable to the laboratory involved. Finally, the strengths and limits of the revised recommendations (especially the selection of the reference population, the maintenance of analytical quality, the choice of the statistical method used…) are briefly discussed.
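As an illustration of the kind of determination the recommendation covers, the common nonparametric estimate of a 95% reference interval takes the 2.5th and 97.5th percentiles of results from healthy reference subjects (the CLSI guideline recommends at least 120 subjects for this estimator). A minimal sketch with simulated data:

```python
import random
import statistics

def reference_interval(values):
    """Nonparametric 95% reference interval: the 2.5th and 97.5th
    percentiles of the reference sample."""
    # n=40 quantiles -> 39 cut points; the first is 2.5%, the last 97.5%
    cuts = statistics.quantiles(values, n=40, method='inclusive')
    return cuts[0], cuts[-1]

random.seed(1)
# Simulated analyte results for 200 healthy reference subjects
ref = [random.gauss(5.0, 0.5) for _ in range(200)]
low, high = reference_interval(ref)
print(low < 5.0 < high)
```

Verifying a published interval for local use, as the recommendation proposes, can then be as simple as checking that an adequate fraction of local reference results falls inside the published limits.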
NASA Astrophysics Data System (ADS)
Leite, Clarice C.; de Jesus, Alexandre; Kolling, Leandro; Ferrão, Marco F.; Samios, Dimitrios; Silva, Márcia M.
2018-04-01
This work reports a new method for the extraction of Cu, Fe and Pb from Brazilian automotive gasoline and their determination by high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The method is based on the formation of a water-in-oil emulsion by mixing 2.0 mL of an extraction solution constituted of 12% (w/v) Triton X-100 and 5% (v/v) HNO3 with 10 mL of sample. After heating at 90 °C for 10 min, two well-defined phases were formed. The bottom phase (approximately 3.5 mL), composed of acidified water and part of the ethanol originally present in the gasoline sample and containing the extracted analytes, was analyzed. The surfactant and HNO3 concentrations and the heating temperature employed in the process were optimized by Doehlert design, using a Brazilian gasoline sample spiked with Cu, Fe and Pb (organometallic compounds). The extraction efficiency was investigated and ranged from 80 to 89%. Calibration was accomplished by a matrix-matching method: the standards were obtained by performing the same extraction procedure used for the samples, using emulsions prepared from a gasoline sample free of analytes with the addition of inorganic standards. The limits of detection obtained were 3.0, 5.0 and 14.0 µg L⁻¹ for Cu, Fe and Pb, respectively. These limits were estimated for the original sample taking into account the preconcentration factor obtained. The accuracy of the proposed method was assured by recovery tests, spiking the samples with organometallic standards; the obtained values ranged from 98 to 105%. Ten gasoline samples were analyzed: Fe was found in four samples (0.04-0.35 mg L⁻¹), while Cu (0.28 mg L⁻¹) and Pb (0.60 mg L⁻¹) were each found in just one sample.
Bazakos, Christos; Khanfir, Emna; Aoun, Mariem; Spano, Thodhoraq; Zein, Zeina El; Chalak, Lamis; Riachy, Milad El; Abou-Sleymane, Gretta; Ali, Sihem Ben; Grati Kammoun, Naziha; Kalaitzis, Panagiotis
2016-07-01
Authentication and traceability of extra virgin olive oil is a challenging research task due to the complexity of fraudulent practices. In this context, the monovarietal olive oils of Protected Designation of Origin (PDO) and Protected Geographical Indication (PGI) require new tests and cutting-edge analytical technologies to detect mislabeling and misleading origin. Toward this direction, DNA-based technologies could serve as an assay complementary to the analytical techniques. Single nucleotide polymorphisms are ideal molecular markers since they require short PCR analytical targets, which are a prerequisite for forensic applications in the olive oil sector. In the present study, a small number of polymorphic SNPs were used with an SNP-based PCR-RFLP capillary electrophoresis platform to discriminate six out of 13 monovarietal olive oils of Mediterranean origin from three different countries: Greece, Tunisia, and Lebanon. Moreover, the high sensitivity of capillary electrophoresis in combination with the DNA extraction protocol lowered the limit of detection to 10% in an admixture of Tsounati in a Koroneiki olive oil matrix. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Meta-analysis as Statistical and Analytical Method of Journal’s Content Scientific Evaluation
Masic, Izet; Begic, Edin
2015-01-01
Introduction: A meta-analysis is a statistical and analytical method that combines and synthesizes different independent studies and integrates their results into one common result. Goal: Analysis of the journals "Medical Archives", "Materia Socio Medica" and "Acta Informatica Medica", which are indexed in the most eminent databases of the biomedical milieu. Material and methods: The study has a retrospective and descriptive character and covered the calendar year 2014. The study included six issues of all three journals (a total of 18 issues). Results: In this period a total of 291 articles were published (110 in the "Medical Archives", 97 in "Materia Socio Medica", and 84 in "Acta Informatica Medica"). The largest number of articles were original articles; smaller numbers were published as professional papers, review articles, and case reports. Clinical topics were most common in the first two journals, while articles in "Acta Informatica Medica" belonged to the field of medical informatics, as part of the pre-clinical medical disciplines. Articles usually required a period of fifty to fifty-nine days for review. Articles were received from four continents, mostly from Europe. The authors were most often from the territory of Bosnia and Herzegovina, followed by Iran, Kosovo and Macedonia. Conclusion: The number of articles published each year is increasing, with greater participation of authors from different continents and from abroad. Clinical medical disciplines are the most common, with a broader spectrum of topics and a growing number of original articles. Greater support from the wider scientific community is needed for the further development of all three of the aforementioned journals. PMID:25870484
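The combination step described in the introduction is, in its simplest fixed-effect form, an inverse-variance weighted mean of the study effects. A minimal sketch with hypothetical effect sizes (not data from the three journals):

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted mean
    effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical standardized mean differences from three independent studies
effects = [0.30, 0.10, 0.25]
variances = [0.04, 0.02, 0.08]
pooled, se = fixed_effect_pool(effects, variances)
print(round(pooled, 3), round(se, 3))  # 0.179 0.107
```

Precise studies (small variance) pull the pooled estimate toward themselves, which is why the second study dominates here; random-effects models add a between-study variance component on top of this.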
Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft
NASA Astrophysics Data System (ADS)
Tsuchiya, Takeshi
This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.
Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.
Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J
2016-01-01
Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.
Analytical calculation of vibrations of electromagnetic origin in electrical machines
NASA Astrophysics Data System (ADS)
McCloskey, Alex; Arrasate, Xabier; Hernández, Xabier; Gómez, Iratxo; Almandoz, Gaizka
2018-01-01
Electrical motors are widely used and are often required to satisfy comfort specifications. Thus, vibration response estimations are necessary to reach optimum machine designs. This work presents an improved analytical model for calculating the vibration response of an electrical machine. The stator and windings are modelled as a double circular cylindrical shell. As the stator is a laminated structure, orthotropic properties are applied to it; the values of those material properties are calculated according to the characteristics of the motor and the known material properties taken from previous works. The proposed model takes into account the axial direction, so that length is considered, as well as the contribution of the windings, which differs from one machine to another. These aspects make the model valuable for a wide range of electrical motor types. In order to validate the analytical calculation, natural frequencies are calculated and compared to those obtained by the Finite Element Method (FEM), giving relative errors below 10% for several circumferential and axial mode order combinations. The analytical vibration calculation is also validated against acceleration measurements in a real machine. The comparison shows good agreement for the proposed model, with the most important frequency components being of the same order of magnitude. A simplified two-dimensional model is also applied, and the results obtained are not so satisfactory.
Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric
2014-03-01
Pharmaceutical companies are progressively adopting and introducing the Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near-infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step, during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were evaluated as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. As the variability of the sampling method and the reference method is by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and suppression of a rather difficult sampling step and tedious off-line analyses. © 2013 Published by Elsevier B.V.
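The PLS calibration idea can be illustrated with a deliberately tiny, synthetic example: a single-latent-variable PLS1 fit in the NIPALS style, mapping "spectra" (absorbances at a few wavelengths) to an analyte content. This is a sketch of the chemometric principle only, not the validated multi-component method of the paper:

```python
import math

def pls1_one_component(X, y):
    """Single-latent-variable PLS1. Returns (x_means, y_mean, b) so that
    y_hat = y_mean + (x - x_means) . b."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    # Weight vector w ~ Xc^T y: direction of maximum covariance with y
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = math.sqrt(sum(v * v for v in w))
    w = [v / norm for v in w]
    # Scores t = Xc w, then regress y on the scores
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    q = sum(t[i] * yc[i] for i in range(n)) / tt
    return xm, ym, [q * v for v in w]

def predict(xm, ym, b, x):
    return ym + sum((x[j] - xm[j]) * b[j] for j in range(len(b)))

# Toy "spectra": absorbance at 3 wavelengths; y = analyte content (% w/w)
X = [[0.10, 0.20, 0.05], [0.20, 0.40, 0.10],
     [0.30, 0.60, 0.15], [0.40, 0.80, 0.20]]
y = [9.0, 10.0, 11.0, 12.0]
model = pls1_one_component(X, y)
print(round(predict(*model, [0.35, 0.70, 0.175]), 2))  # 11.5
```

Because the toy spectra are perfectly collinear with a single underlying factor, one latent variable recovers the calibration exactly; real NIR data need several latent variables, cross-validation, and the accuracy-profile validation described in the abstract.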
Ikuta, N; Yamada, Y; Hirokawa, T
2000-01-01
For capillary zone electrophoresis, a new method of transformation from migration time to effective mobility was proposed, in which the mobility increase due to Joule heating and the relaxation effect of the potential gradient were eliminated successfully. The precision of the mobility evaluated by the proposed transformation was discussed in relation to the analysis of rare earth ions. By using the transformation, almost identical pherograms could be obtained even from pherograms originally recorded at different applied voltages.
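For context, the standard (uncorrected) transformation from migration time to mobility uses only the capillary geometry and applied voltage, with the electroosmotic flow (EOF) measured via a neutral marker; the paper's contribution is the additional Joule-heating and relaxation corrections, which this sketch does not reproduce:

```python
def apparent_mobility(t_mig, L_total, L_detect, voltage):
    """Apparent mobility (m^2 V^-1 s^-1) from migration time:
    mu_app = (L_d * L_t) / (V * t)."""
    return (L_detect * L_total) / (voltage * t_mig)

def effective_mobility(t_mig, t_eof, L_total, L_detect, voltage):
    """Effective mobility: apparent mobility minus the EOF term,
    measured with a neutral marker migrating at t_eof."""
    return (apparent_mobility(t_mig, L_total, L_detect, voltage)
            - apparent_mobility(t_eof, L_total, L_detect, voltage))

# Illustrative run: 0.60 m capillary, 0.50 m to detector, 25 kV;
# analyte detected at 300 s, neutral EOF marker at 240 s
mu = effective_mobility(300.0, 240.0, 0.60, 0.50, 25000.0)
print(f"{mu:.2e}")  # negative: analyte migrates against the EOF
```

Plotting signals against effective mobility instead of migration time is what makes pherograms from different voltages comparable, which is the invariance the abstract demonstrates after its corrections.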
Seasonal modulation of the 7Be solar neutrino rate in Borexino
NASA Astrophysics Data System (ADS)
Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Lehnert, B.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Manecki, S.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Montuschi, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.; Borexino Collaboration
2017-06-01
We present the evidence for the seasonal modulation of the 7Be neutrino interaction rate with the Borexino detector at the Laboratori Nazionali del Gran Sasso in Italy. The period, amplitude, and phase of the observed time evolution of the signal are consistent with its solar origin, and the absence of an annual modulation is rejected at 99.99% C.L. The data are analyzed using three methods: the analytical fit to event rate, the Lomb-Scargle and the Empirical Mode Decomposition techniques, which all yield results in excellent agreement.
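One of the three analysis methods, the Lomb-Scargle periodogram, is suited to unevenly sampled rate data and can be sketched in pure Python on a synthetic annually modulated rate (illustrative numbers, not Borexino data):

```python
import math

def lomb_scargle(t, y, omegas):
    """Classic Lomb-Scargle periodogram of a (possibly unevenly sampled)
    series y(t), evaluated at angular frequencies omegas."""
    ym = sum(y) / len(y)
    yc = [v - ym for v in y]
    power = []
    for w in omegas:
        # Phase offset tau makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        yc_c = sum(a * b for a, b in zip(yc, c))
        yc_s = sum(a * b for a, b in zip(yc, s))
        power.append(0.5 * (yc_c ** 2 / sum(v * v for v in c)
                            + yc_s ** 2 / sum(v * v for v in s)))
    return power

# Synthetic event rate with a 365-day modulation, sampled daily for 3 years
t = list(range(1095))
y = [10.0 + 0.3 * math.cos(2 * math.pi * ti / 365.0) for ti in t]
periods = [200.0, 300.0, 365.0, 500.0]
p = lomb_scargle(t, y, [2 * math.pi / P for P in periods])
print(periods[p.index(max(p))])  # 365.0: the periodogram peaks at one year
```

In the real analysis the significance of the annual peak (and the rejection of the no-modulation hypothesis) is assessed against the periodogram's noise distribution, which this sketch omits.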
NASA Technical Reports Server (NTRS)
1948-01-01
The conference on Turbojet-Engine Thrust-Augmentation Research was organized by the NACA to present in summarized form the results of the latest experimental and analytical investigations conducted at the Lewis Flight Propulsion Laboratory on methods of augmenting the thrust of turbojet engines. The technical discussions are reproduced herewith in the same form in which they were presented. The original presentations in this record are considered complementary to, rather than substitutes for, the committee's system of complete and formal reports.
Potential energy surface and vibrational band origins of the triatomic lithium cation
NASA Astrophysics Data System (ADS)
Searles, Debra J.; Dunne, Simon J.; von Nagy-Felsobuki, Ellak I.
The 104-point CISD Li₃⁺ potential energy surface and its analytical representation are reported. The calculations predict the minimum energy geometry to be an equilateral triangle of side R(Li-Li) = 3.0 Å and of energy −22.20506 E_h. A fifth-order Morse-Dunham type analytical force field is used in the Carney-Porter normal coordinate vibrational Hamiltonian, the corresponding eigenvalue problem being solved variationally using a 560-configuration finite-element basis set. The predicted assignment of the vibrational band origins is in accord with that reported for H₃⁺. Moreover, for ⁶Li₃⁺ and ⁷Li₃⁺ the lowest i.r.-accessible band origin is ν̄(0,1,±1), predicted to be at 243.6 and 226.0 cm⁻¹, respectively.
Salvo, Andrea; La Torre, Giovanna Loredana; Di Stefano, Vita; Capocchiano, Valentina; Mangano, Valentina; Saija, Emanuele; Pellizzeri, Vito; Casale, Katia Erminia; Dugo, Giacomo
2017-04-15
A fast reversed-phase UPLC method was developed for squalene determination in Sicilian pistachio samples entered in the European register of products with P.D.O. In the present study the SPE procedure was optimized for squalene extraction prior to the UPLC/PDA analysis. The precision of the full analytical procedure was satisfactory, and the mean recoveries were 92.8 ± 0.3% and 96.6 ± 0.1% for the 25 and 50 mg L⁻¹ levels of addition, respectively. The selected chromatographic conditions allowed a very fast squalene determination; squalene was well separated in ~0.54 min with good resolution. Squalene was detected in all the pistachio samples analyzed, and the levels ranged from 55.45 to 226.34 mg kg⁻¹. Comparing our results with those of other studies, it emerges that the squalene contents of P.D.O. Sicilian pistachio samples were generally higher than those measured for samples of other geographic origins. Copyright © 2016 Elsevier Ltd. All rights reserved.
Wen, Sheng; Feng, Yanli; Wang, Xinming; Sheng, Guoying; Fu, Jiamo; Bi, Xinhui
2004-01-01
A novel method has been developed for compound-specific isotope analysis of acetone via DNPH (2,4-dinitrophenylhydrazine) derivatization together with combined gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS). Acetone reagents were used to assess δ¹³C fractionation during the DNPH derivatization process. Replicate δ¹³C analyses were designed to evaluate the reproducibility of the derivatization, with an average error (1 standard deviation) of 0.17 ± 0.05 per thousand and an average analytical error of 0.28 ± 0.09 per thousand. The derivatization process introduces no isotopic fractionation for acetone (the average difference between the predicted and analytical δ¹³C values was 0.09 ± 0.20 per thousand, within the precision limits of the GC/C/IRMS measurements), which permits computation of the δ¹³C values of the original underivatized acetone through a mass balance equation. Together with further studies of the carbon isotopic effect during the atmospheric acetone-sampling procedure, it will be possible to use DNPH derivatization for carbon isotope analysis of atmospheric acetone. Copyright (c) 2004 John Wiley & Sons, Ltd.
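The mass-balance back-calculation rests on carbon counting: the acetone-DNPH hydrazone carries 9 carbons, 3 from acetone and 6 from DNPH, so the measured δ¹³C of the derivative is the carbon-weighted mean of the two contributions. A minimal arithmetic sketch (the δ values below are hypothetical):

```python
def delta13C_original(delta_derivative, delta_reagent,
                      n_analyte=3, n_reagent=6):
    """Isotopic mass balance for a derivatized analyte:
    (n_a + n_r) * d_deriv = n_a * d_analyte + n_r * d_reagent,
    solved for the original analyte's delta13C (values in per mil).
    Defaults: acetone contributes 3 carbons, DNPH 6, to the hydrazone."""
    n_total = n_analyte + n_reagent
    return (n_total * delta_derivative - n_reagent * delta_reagent) / n_analyte

# Hypothetical measured values (per mil): derivative -28.0, DNPH -26.5
print(delta13C_original(-28.0, -26.5))  # -31.0
```

Note how the 3-of-9 carbon share triples any measurement error on the derivative when propagated to acetone, which is why the sub-0.3 per thousand analytical precision reported above matters.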
Computation of turbulent boundary layers employing the defect wall-function method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brown, Douglas L.
1994-01-01
In order to decrease the overall computational time requirements of a spatially-marching parabolized Navier-Stokes finite-difference code applied to turbulent fluid flow, a wall-function methodology originally proposed by R. Barnwell was implemented. This numerical effort increases computational speed while calculating reasonably accurate wall shear stress spatial distributions and boundary-layer profiles. Since the wall shear stress is determined analytically from the wall-function model, the computational grid near the wall is not required to spatially resolve the laminar-viscous sublayer. Consequently, a substantially increased computational integration step size is achieved, resulting in a considerable decrease in net computational time. This wall-function technique is demonstrated for adiabatic flat plate test cases from Mach 2 to Mach 8. These test cases are verified against: (1) Eckert reference method solutions, (2) the experimental turbulent boundary-layer data of Mabey, and (3) finite-difference computational code solutions with fully resolved laminar-viscous sublayers. Additionally, results have been obtained for two pressure-gradient cases: (1) an adiabatic expansion corner and (2) an adiabatic compression corner.
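Barnwell's defect wall-function formulation itself is not given in the abstract. As a hedged illustration of the general idea, obtaining the wall shear stress from an analytical near-wall profile instead of resolving the viscous sublayer on the grid, the sketch below solves the incompressible log-law for the friction velocity by Newton iteration. The constants kappa = 0.41 and B = 5.0 and the interface are common textbook assumptions, not the thesis's actual model:

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.0, tol=1e-10):
    """Solve the incompressible log-law
        u / u_tau = (1/kappa) * ln(y * u_tau / nu) + B
    for the friction velocity u_tau by Newton iteration, given the
    tangential velocity u at wall distance y and kinematic viscosity nu.
    The wall shear stress then follows as tau_w = rho * u_tau**2."""
    ut = max(1e-6, 0.05 * u)  # crude initial guess
    for _ in range(100):
        f = u / ut - (math.log(y * ut / nu) / kappa + B)
        df = -u / ut**2 - 1.0 / (kappa * ut)
        step = f / df
        ut -= step
        if abs(step) < tol * ut:
            break
    return ut
```

Because tau_w comes from this one-dimensional solve rather than from near-wall gradients, the first grid point can sit in the log layer, which is what permits the larger integration step sizes described above.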
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
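A minimal single-step sketch of the projection idea described above: the linearized identification equation dy ≈ S dθ is projected onto the leading principal components of an ensemble of analytical outputs before solving. The paper wraps such a step in an iterative model-updating loop; the function name, shapes, and choice of k here are illustrative assumptions:

```python
import numpy as np

def identify_perturbation(S, dy, Y_analytical, k=3):
    """One linearized identification step in a PCA subspace.

    S            : (m, p) sensitivity matrix (first-order Taylor term)
    dy           : (m,)   observed minus analytical output
    Y_analytical : (n, m) ensemble of outputs from the known model
    k            : number of principal components to retain
    """
    Yc = Y_analytical - Y_analytical.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    P = Vt[:k]                                   # (k, m) leading directions
    # Solve the projected least-squares problem (P S) dtheta = P dy
    dtheta, *_ = np.linalg.lstsq(P @ S, P @ dy, rcond=None)
    return dtheta
```

Working in the subspace discards output directions dominated by noise, which is what improves the conditioning of the inverse analysis relative to solving with the full sensitivity matrix.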
Thompson, Darcy A.; Polk, Sarah; Cheah, Charissa S.L.; Vandewater, Elizabeth A.; Johnson, Susan L.; Chrismer, Marilyn Camacho; Tschann, Jeanne M.
2014-01-01
Objective To explore maternal beliefs about TV viewing and related parenting practices in low-income Mexican-origin mothers of preschoolers. Methods Semi-structured interviews were conducted with 21 low-income Mexican-origin mothers of preschoolers. Interviews were audio recorded and analyzed using a theoretically based thematic analytic approach. Results Mothers described strong beliefs about the positive and negative impact of television content. Mothers emphasized the educational value of specific programming. Content restrictions were common. Time restrictions were not clearly defined; however, many mothers preferred short versus long episodes of viewing. Mothers spoke positively about family viewing and the role of TV viewing in enabling mothers to accomplish household tasks. Discussion These findings have implications for intervening in this population. Interventionists should consider the value mothers place on the educational role of TV viewing, the direct benefit to mothers of viewing time, the lack of clear time limits, and the common practice of family co-viewing. PMID:25724994
Lüdeke, Catharina H M; Fischer, Markus; LaFon, Patti; Cooper, Kara; Jones, Jessica L
2014-07-01
Vibrio parahaemolyticus is the leading cause of infectious illness associated with seafood consumption in the United States. Molecular fingerprinting of strains has become a valuable research tool for understanding this pathogen. However, there are many subtyping methods available and little information on how they compare to one another. For this study, a collection of 67 oyster and 77 clinical V. parahaemolyticus isolates was analyzed by three subtyping methods--intergenic spacer region (ISR-1) analysis, direct genome restriction analysis (DGREA), and pulsed-field gel electrophoresis (PFGE)--to determine the utility of these methods for discriminatory subtyping. ISR-1 analysis, run as previously described, provided the lowest discrimination of all the methods (discriminatory index [DI]=0.8665). However, using a broader analytical range than previously reported, ISR-1 clustered isolates based on origin (oyster versus clinical) and had a DI=0.9986. DGREA provided a DI=0.9993-0.9995 but did not consistently cluster the isolates by any identifiable characteristic (origin, serotype, or virulence genotype), and ∼15% of isolates were untypeable by this method. PFGE provided a DI=0.9998 when using the combined pattern analysis of both restriction enzymes, SfiI and NotI. This analysis was more discriminatory than using either enzyme pattern alone and primarily grouped isolates by serotype, regardless of strain origin (clinical or oyster) or the presence of currently accepted virulence markers. These results indicate that PFGE and ISR-1 are more reliable methods than DGREA for subtyping V. parahaemolyticus. Additionally, ISR-1 may provide an indication of pathogenic potential; however, more detailed studies are needed. These data highlight the diversity within V. parahaemolyticus and the need for appropriate selection of subtyping methods depending on the study objectives.
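The discriminatory index (DI) quoted throughout is, by the usual convention in subtyping studies, Simpson's index of diversity as adapted by Hunter and Gaston; that interpretation, and the function name below, are assumptions for illustration:

```python
from collections import Counter

def discriminatory_index(type_labels):
    """Hunter-Gaston discriminatory index:
        DI = 1 - [1 / (N(N-1))] * sum_j a_j * (a_j - 1)
    where a_j is the number of isolates assigned to subtype j and N is
    the total number of isolates. DI = 1.0 means every isolate receives
    a unique subtype; DI = 0.0 means all isolates share one subtype."""
    counts = Counter(type_labels).values()
    n = sum(counts)
    return 1.0 - sum(a * (a - 1) for a in counts) / (n * (n - 1))
```

Intuitively, DI is the probability that two isolates drawn at random from the collection fall into different subtypes, which is why values as close together as 0.9986 and 0.9998 can still represent meaningful differences in typing resolution.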
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR VOCS IN REPLICATES
This data set includes analytical results for measurements of VOCs in 204 duplicate (replicate) samples. Measurements were made for up to 23 VOCs in samples of air, water, and blood. Duplicate samples (samples collected along with or next to the original samples) were collected t...
Four-body trajectory optimization
NASA Technical Reports Server (NTRS)
Pu, C. L.; Edelbaum, T. N.
1974-01-01
A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the Earth and a point in the Sun-Earth-Moon system. It presents methods for generating fuel-optimal two-impulse trajectories, which may originate at the Earth or at a point in space, and fuel-optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraints. Davidon's variance method is used both in the accelerated gradient projection method and in the outer loop of the 3-impulse trajectory optimization problem.
Gorecki, Sébastien; Bemrah, Nawel; Roudot, Alain-Claude; Marchioni, Eric; Le Bizec, Bruno; Faivre, Franck; Kadawathagedara, Manik; Botton, Jérémie; Rivière, Gilles
2017-12-01
Bisphenol A (BPA) is used in a wide variety of products and objects for consumer use (digital media such as CDs and DVDs, sports equipment, food and beverage containers, medical equipment). For humans, the main route of exposure to BPA is food. Based on previous estimates, almost 20% of the dietary exposure to BPA in the French population would come from food of animal origin. However, due to the use of composite samples, the source of the contamination had not been identified. Therefore, 322 individual samples of non-canned foods of animal origin were collected with the objectives of, first, updating the estimation of the exposure of the French population and, second, identifying the source of contamination of these foodstuffs using a specific analytical method. Compared to previous estimates in France, a decline in the contamination of the samples was observed, in particular with regard to meat. The estimated mean dietary exposures ranged from 0.048 to 0.050 μg (kg bw)⁻¹ d⁻¹ for children and adolescents aged 3-17 years, from 0.034 to 0.035 μg (kg bw)⁻¹ d⁻¹ for adults, and from 0.047 to 0.049 μg (kg bw)⁻¹ d⁻¹ for pregnant women. The contribution of meat to the total dietary exposure of pregnant women, adults and children was up to three times lower than in the previous estimates. Despite this downward trend in contamination, the toxicological reference values were still exceeded for the population of pregnant women. With the aim of acquiring more knowledge about the potential source(s) of contamination of non-canned foods of animal origin, a specific analytical method was developed to directly identify and quantify conjugated BPA (BPA-monoglucuronide, BPA-diglucuronide and sulphate forms) in 50 samples. No conjugated forms of BPA were detected in the analysed samples, indicating clearly that the BPA content in animal food was not due to metabolism but arose post mortem in food. This contamination may occur during food production.
However, despite extensive sampling performed in several different shops (butcher's shops, supermarkets, etc.) and under different conditions (fresh, prepared, frozen, etc.), the source(s) of the contamination could not be specifically identified. Copyright © 2017 Elsevier Ltd. All rights reserved.
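The exposure estimates above follow the standard dietary-exposure formula: the sum over foods of contamination level times daily intake, normalized by body weight. A minimal sketch, with the interface and the numbers in the test being illustrative rather than the survey's actual data:

```python
def dietary_exposure(levels_ug_per_kg, intakes_kg_per_day, body_weight_kg):
    """Mean dietary exposure in ug per kg body weight per day:
    sum over foods of (contamination level * daily intake), divided by
    body weight. Inputs are parallel sequences, one entry per food."""
    total_ug_per_day = sum(c * q for c, q in
                           zip(levels_ug_per_kg, intakes_kg_per_day))
    return total_ug_per_day / body_weight_kg
```

Per-food contributions (such as the share attributed to meat) follow from the same sum restricted to the foods of interest.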
Coulomb interactions in charged fluids.
Vernizzi, Graziano; Guerrero-García, Guillermo Iván; de la Cruz, Monica Olvera
2011-07-01
The use of Ewald summation schemes for calculating long-range Coulomb interactions, originally applied to ionic crystalline solids, is at present a very common practice in molecular simulations of charged fluids. Such a choice imposes an artificial periodicity which is generally absent in the liquid state. In this paper we propose a simple analytical O(N^2) method, based on Gauss's law, for computing exactly the Coulomb interaction between charged particles in a simulation box when it is averaged over all possible orientations of a surrounding infinite lattice. This method mitigates the periodicity typical of crystalline systems and is suitable for numerical studies of ionic liquids, charged molecular fluids, and colloidal systems with Monte Carlo and molecular dynamics simulations.
Analyses of exobiological and potential resource materials in the Martian soil.
Mancinelli, R L; Marshall, J R; White, M R
1992-01-01
Potential Martian soil components relevant to exobiology include water, organic matter, evaporites, clays, and oxides. These materials are also resources for human expeditions to Mars. When found in particular combinations, some of these materials constitute diagnostic paleobiomarker suites, allowing insight to be gained into the probability of life originating on Mars. Critically important to exobiology is the method of data analysis and data interpretation. To that end we are investigating methods of analysis of potential biomarker and paleobiomarker compounds and resource materials in soils and rocks pertinent to Martian geology. Differential thermal analysis coupled with gas chromatography is shown to be a highly useful analytical technique for detecting this wide and complex variety of materials.
Analyses of exobiological and potential resource materials in the Martian soil
NASA Technical Reports Server (NTRS)
Mancinelli, Rocco L.; Marshall, John R.; White, Melisa R.
1992-01-01
Potential Martian soil components relevant to exobiology include water, organic matter, evaporites, clays, and oxides. These materials are also resources for human expeditions to Mars. When found in particular combinations, some of these materials constitute diagnostic paleobiomarker suites, allowing insight to be gained into the probability of life originating on Mars. Critically important to exobiology is the method of data analysis and data interpretation. To that end, methods of analysis of potential biomarker and paleobiomarker compounds and resource materials in soils and rocks pertinent to Martian geology are investigated. Differential thermal analysis coupled with gas chromatography is shown to be a highly useful analytical technique for detecting this wide and complex variety of materials.
Unambiguous detection of nitrated explosive vapours by fluorescence quenching of dendrimer films
NASA Astrophysics Data System (ADS)
Geng, Yan; Ali, Mohammad A.; Clulow, Andrew J.; Fan, Shengqiang; Burn, Paul L.; Gentle, Ian R.; Meredith, Paul; Shaw, Paul E.
2015-09-01
Unambiguous and selective standoff (non-contact) infield detection of nitro-containing explosives and taggants is an important goal but difficult to achieve with standard analytical techniques. Oxidative fluorescence quenching is emerging as a high sensitivity method for detecting such materials but is prone to false positives--everyday items such as perfumes elicit similar responses. Here we report thin films of light-emitting dendrimers that detect vapours of explosives and taggants selectively--fluorescence quenching is not observed for a range of common interferents. Using a combination of neutron reflectometry, quartz crystal microbalance and photophysical measurements we show that the origin of the selectivity is primarily electronic and not the diffusion kinetics of the analyte or its distribution in the film. The results are a major advance in the development of sensing materials for the standoff detection of nitro-based explosive vapours, and deliver significant insights into the physical processes that govern the sensing efficacy.
Unambiguous detection of nitrated explosive vapours by fluorescence quenching of dendrimer films.
Geng, Yan; Ali, Mohammad A; Clulow, Andrew J; Fan, Shengqiang; Burn, Paul L; Gentle, Ian R; Meredith, Paul; Shaw, Paul E
2015-09-15
Unambiguous and selective standoff (non-contact) infield detection of nitro-containing explosives and taggants is an important goal but difficult to achieve with standard analytical techniques. Oxidative fluorescence quenching is emerging as a high sensitivity method for detecting such materials but is prone to false positives—everyday items such as perfumes elicit similar responses. Here we report thin films of light-emitting dendrimers that detect vapours of explosives and taggants selectively—fluorescence quenching is not observed for a range of common interferents. Using a combination of neutron reflectometry, quartz crystal microbalance and photophysical measurements we show that the origin of the selectivity is primarily electronic and not the diffusion kinetics of the analyte or its distribution in the film. The results are a major advance in the development of sensing materials for the standoff detection of nitro-based explosive vapours, and deliver significant insights into the physical processes that govern the sensing efficacy.
Unambiguous detection of nitrated explosive vapours by fluorescence quenching of dendrimer films
Geng, Yan; Ali, Mohammad A.; Clulow, Andrew J.; Fan, Shengqiang; Burn, Paul L.; Gentle, Ian R.; Meredith, Paul; Shaw, Paul E.
2015-01-01
Unambiguous and selective standoff (non-contact) infield detection of nitro-containing explosives and taggants is an important goal but difficult to achieve with standard analytical techniques. Oxidative fluorescence quenching is emerging as a high sensitivity method for detecting such materials but is prone to false positives—everyday items such as perfumes elicit similar responses. Here we report thin films of light-emitting dendrimers that detect vapours of explosives and taggants selectively—fluorescence quenching is not observed for a range of common interferents. Using a combination of neutron reflectometry, quartz crystal microbalance and photophysical measurements we show that the origin of the selectivity is primarily electronic and not the diffusion kinetics of the analyte or its distribution in the film. The results are a major advance in the development of sensing materials for the standoff detection of nitro-based explosive vapours, and deliver significant insights into the physical processes that govern the sensing efficacy. PMID:26370931
Riemannian geometry of Hamiltonian chaos: hints for a general theory.
Cerruti-Sola, Monica; Ciraolo, Guido; Franzosi, Roberto; Pettini, Marco
2008-10-01
We aim at assessing the validity limits of some simplifying hypotheses that, within a Riemannian geometric framework, have provided an explanation of the origin of Hamiltonian chaos and have made it possible to develop a method for analytically computing the largest Lyapunov exponent of Hamiltonian systems with many degrees of freedom. Therefore, numerical hypothesis testing has been performed for the Fermi-Pasta-Ulam beta model and for a chain of coupled rotators. These models, for which analytic computations of the largest Lyapunov exponents have been carried out in the mentioned Riemannian geometric framework, appear as paradigmatic examples for unveiling why the main hypothesis of quasi-isotropy of the mechanical manifolds sometimes breaks down. The breakdown is expected whenever the topology of the mechanical manifolds is nontrivial. This is an important step forward in view of developing a geometric theory of Hamiltonian chaos of general validity.
Metabolomic Technologies for Improving the Quality of Food: Practice and Promise.
Johanningsmeier, Suzanne D; Harris, G Keith; Klevorn, Claire M
2016-01-01
It is now well documented that the diet has a significant impact on human health and well-being. However, the complete set of small molecule metabolites present in foods that make up the human diet and the role of food production systems in altering this food metabolome are still largely unknown. Metabolomic platforms that rely on nuclear magnetic resonance (NMR) and mass spectrometry (MS) analytical technologies are being employed to study the impact of agricultural practices, processing, and storage on the global chemical composition of food; to identify novel bioactive compounds; and for authentication and region-of-origin classifications. This review provides an overview of the current terminology, analytical methods, and compounds associated with metabolomic studies, and provides insight into the application of metabolomics to generate new knowledge that enables us to produce, preserve, and distribute high-quality foods for health promotion.
Földes-Papp, Zeno; Liao, Shih-Chu Jeff; You, Tiefeng; Barbieri, Beniamino
2009-08-01
We report on the development of new microscope elements that reduce background contributions in fluorescence fluctuation methods: i) an excitation shutter, ii) electronic switches, and iii) early and late time-gating. These elements allow for measuring molecules at low analyte concentrations. We found conditions of early and late time-gating with time-correlated single-photon counting that made the fluorescence signal as bright as possible compared with the fluctuations in the background count rate in a diffraction-limited optical set-up. We measured about a 140-fold increase in the amplitude of autocorrelated fluorescence fluctuations at the lowest analyte concentration of about 15 pM, which gave a signal-to-background advantage of more than two orders of magnitude. The results of this original article pave the way for single-molecule detection in solution and in live cells without immobilization or hydrodynamic/electrokinetic focusing at longer observation times than are currently available.
Conformal Bootstrap in Mellin Space
NASA Astrophysics Data System (ADS)
Gopakumar, Rajesh; Kaviraj, Apratim; Sen, Kallol; Sinha, Aninda
2017-02-01
We propose a new approach towards analytically solving for the dynamical content of conformal field theories (CFTs) using the bootstrap philosophy. This combines the original bootstrap idea of Polyakov with the modern technology of the Mellin representation of CFT amplitudes. We employ exchange Witten diagrams with built-in crossing symmetry as our basic building blocks rather than the conventional conformal blocks in a particular channel. Demanding consistency with the operator product expansion (OPE) implies an infinite set of constraints on operator dimensions and OPE coefficients. We illustrate the power of this method in the ɛ expansion of the Wilson-Fisher fixed point by reproducing anomalous dimensions and, strikingly, obtaining OPE coefficients to higher orders in ɛ than currently available using other analytic techniques (including Feynman diagram calculations). Our results enable us to get a somewhat better agreement between certain observables in the 3D Ising model and the precise numerical values that have been recently obtained.
A Framework for Integrating Environmental Justice in Regulatory Analysis
Nweke, Onyemaechi C.
2011-01-01
With increased interest in integrating environmental justice into the process for developing environmental regulations in the United States, analysts and decision makers are confronted with the question of what methods and data can be used to assess disproportionate environmental health impacts. However, as a first step to identifying data and methods, it is important that analysts understand what information on equity impacts is needed for decision making. Such knowledge originates from clearly stated equity objectives and the reflection of those objectives throughout the analytical activities that characterize Regulatory Impact Analysis (RIA), a process that is traditionally used to inform decision making. The framework proposed in this paper advocates structuring analyses to explicitly provide pre-defined output on equity impacts. Specifically, the proposed framework emphasizes: (a) defining equity objectives for the proposed regulatory action at the onset of the regulatory process, (b) identifying specific and related sub-objectives for key analytical steps in the RIA process, and (c) developing explicit analytical/research questions to assure that stated sub-objectives and objectives are met. In proposing this framework, it is envisioned that information on equity impacts informs decision-making in regulatory development, and that this is achieved through a systematic and consistent approach that assures linkages between stated equity objectives, regulatory analyses, selection of policy options, and the design of compliance and enforcement activities. PMID:21776235
First-order analytic propagation of satellites in the exponential atmosphere of an oblate planet
NASA Astrophysics Data System (ADS)
Martinusi, Vladimir; Dell'Elce, Lamberto; Kerschen, Gaëtan
2017-04-01
The paper offers a fully analytic solution to the motion of a satellite orbiting under the influence of the two major perturbations, due to oblateness and atmospheric drag. The solution is presented in time-explicit form and takes into account an exponential distribution of the atmospheric density, an assumption that is reasonably close to reality. The approach involves two essential steps. The first concerns a new approximate mathematical model that admits a closed-form solution with respect to a set of new variables. The second step is the determination of an infinitesimal contact transformation that allows one to navigate between the new and the original variables. This contact transformation is obtained in exact form, and afterwards a Taylor series approximation is proposed in order to make all the computations explicit. The aforementioned transformation accommodates both perturbations, improving the accuracy of the orbit predictions by one order of magnitude with respect to the case when the atmospheric drag is absent from the transformation. Numerical simulations are performed for a low Earth orbit starting at an altitude of 350 km, and they show that the incorporation of drag terms into the contact transformation reduces the error in the position vector by a factor of 7. The proposed method aims at improving the accuracy of analytic orbit propagation and transforming it into a viable alternative to the computationally intensive numerical methods.
Hunt, R.J.; Anderson, M.P.; Kelson, V.A.
1998-01-01
This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD)-based analytical approach. The concept of QbD applied to analytical method development is now known as analytical quality by design (AQbD). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT).
Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics
Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD)-based analytical approach. The concept of QbD applied to analytical method development is now known as analytical quality by design (AQbD). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists on the implementation of AQbD in the pharmaceutical quality system and also relates it to product quality by design and process analytical technology (PAT). PMID:25722723
Rodushkin, I; Bergman, T; Douglas, G; Engström, E; Sörlin, D; Baxter, D C
2007-02-05
Different analytical approaches for origin differentiation between vendace and whitefish caviars from brackish- and freshwaters were tested using inductively coupled plasma double focusing sector field mass spectrometry (ICP-SFMS) and multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). These approaches involve identifying differences in elemental concentrations or sample-specific isotopic composition (Sr and Os) variations. Concentrations of 72 elements were determined by ICP-SFMS following microwave-assisted digestion in vendace and whitefish caviar samples from Sweden (from both brackish and freshwater), Finland and USA, as well as in unprocessed vendace roe and salt used in caviar production. This data set allows identification of elements whose contents in caviar can be affected by salt addition as well as by contamination during production and packaging. Long-term method reproducibility was assessed for all analytes based on replicate caviar preparations/analyses and variations in element concentrations in caviar from different harvests were evaluated. The greatest utility for differentiation was demonstrated for elements with varying concentrations between brackish and freshwaters (e.g. As, Br, Sr). Elemental ratios, specifically Sr/Ca, Sr/Mg and Sr/Ba, are especially useful for authentication of vendace caviar processed from brackish water roe, due to the significant differences between caviar from different sources, limited between-harvest variations and relatively high concentrations in samples, allowing precise determination by modern analytical instrumentation. Variations in the 87Sr/86Sr ratio for vendace caviar from different harvests (on the order of 0.05-0.1%) is at least 10-fold less than differences between caviar processed from brackish and freshwater roe. Hence, Sr isotope ratio measurements (either by ICP-SFMS or by MC-ICP-MS) have great potential for origin differentiation. 
In contrast, it was impossible to differentiate between Swedish caviar processed from brackish water roe and Finnish freshwater caviar based solely on 187Os/188Os ratios.
NASA Astrophysics Data System (ADS)
Yttri, K. E.; Schnelle-Kreis, J.; Maenhaut, W.; Alves, C.; Bossi, R.; Bjerke, A.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Gülcin, A.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.
2014-07-01
The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wildfire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative of biomass burning particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied High-Performance Anion-Exchange Chromatography (HPAEC), four used High-Performance Liquid Chromatography (HPLC) or Ultra-Performance Liquid Chromatography (UPLC), and six used Gas Chromatography (GC). The analytical methods were of such diversity that they should be considered as thirteen different analytical methods. All thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 23%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%.
For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e., for 33% of the laboratories the mean PE was within ±10%. For galactosan, the mean PE for the participating laboratories ranged from -84 to 593%, and as for mannosan 33% of the laboratories reported a mean PE within ±10%. The variability of the various analytical methods, as defined by their minimum and maximum PE value, was typically better for levoglucosan than for mannosan and galactosan, ranging from 3.2 to 41% for levoglucosan, from 10 to 67% for mannosan, and from 6 to 364% for galactosan. For the levoglucosan to mannosan ratio, which may be used to assess the relative importance of softwood vs. hardwood burning, the variability only ranged from 3.5 to 24%. To our knowledge, this is the first major intercomparison on analytical methods used to quantify monosaccharide anhydrides in ambient aerosol filter samples conducted and reported in the scientific literature. The results show that for levoglucosan the accuracy is only slightly lower than that reported for analysis of SO42- on filter samples, a constituent that has been analyzed by numerous laboratories for several decades, typically by ion chromatography, and which is considered a fairly easy constituent to measure. Hence, the results obtained for levoglucosan with respect to accuracy are encouraging and suggest that levels of levoglucosan, and to a lesser extent mannosan and galactosan, obtained by most of the analytical methods currently used to quantify monosaccharide anhydrides in ambient aerosol filter samples, are comparable. Finally, the various analytical methods used in the current study should be tested for other aerosol matrices and concentrations as well, the most obvious being summertime aerosol samples affected by wild fires and/or agricultural fires.
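As a concrete illustration of the accuracy metric used in this intercomparison, the mean percentage error of a laboratory's reported concentrations against assigned reference values can be computed as below. The concentrations are invented for illustration and are not data from the study.

```python
def mean_percentage_error(reported, reference):
    """Mean percentage error (PE) of one laboratory's reported
    concentrations against the assigned reference values, in percent."""
    if not reported or len(reported) != len(reference):
        raise ValueError("need equal-length, non-empty series")
    errors = [100.0 * (rep - ref) / ref for rep, ref in zip(reported, reference)]
    return sum(errors) / len(errors)

# Hypothetical levoglucosan results (ng/m^3) for one laboratory across
# four filter samples, against assigned reference concentrations.
reference = [120.0, 250.0, 80.0, 400.0]
reported = [110.0, 260.0, 75.0, 410.0]
pe = mean_percentage_error(reported, reference)
print(f"mean PE = {pe:.1f}%")  # -> mean PE = -2.0%
```

A laboratory with this result would fall within the ±10% band that most participants achieved for levoglucosan.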
NASA Astrophysics Data System (ADS)
Yttri, K. E.; Schnelle-Kreis, J.; Maenhaut, W.; Abbaszade, G.; Alves, C.; Bjerke, A.; Bonnier, N.; Bossi, R.; Claeys, M.; Dye, C.; Evtyugina, M.; García-Gacio, D.; Hillamo, R.; Hoffer, A.; Hyder, M.; Iinuma, Y.; Jaffrezo, J.-L.; Kasper-Giebl, A.; Kiss, G.; López-Mahia, P. L.; Pio, C.; Piot, C.; Ramirez-Santa-Cruz, C.; Sciare, J.; Teinilä, K.; Vermeylen, R.; Vicente, A.; Zimmermann, R.
2015-01-01
The monosaccharide anhydrides (MAs) levoglucosan, galactosan and mannosan are products of incomplete combustion and pyrolysis of cellulose and hemicelluloses, and are found to be major constituents of biomass burning (BB) aerosol particles. Hence, ambient aerosol particle concentrations of levoglucosan are commonly used to study the influence of residential wood burning, agricultural waste burning and wildfire emissions on ambient air quality. A European-wide intercomparison on the analysis of the three monosaccharide anhydrides was conducted based on ambient aerosol quartz fiber filter samples collected at a Norwegian urban background site during winter. Thus, the samples' content of MAs is representative for BB particles originating from residential wood burning. The purpose of the intercomparison was to examine the comparability of the great diversity of analytical methods used for analysis of levoglucosan, mannosan and galactosan in ambient aerosol filter samples. Thirteen laboratories participated, of which three applied high-performance anion-exchange chromatography (HPAEC), four used high-performance liquid chromatography (HPLC) or ultra-performance liquid chromatography (UPLC) and six resorted to gas chromatography (GC). The analytical methods used were of such diversity that they should be considered as thirteen different analytical methods. All of the thirteen laboratories reported levels of levoglucosan, whereas nine reported data for mannosan and/or galactosan. Eight of the thirteen laboratories reported levels for all three isomers. The accuracy for levoglucosan, presented as the mean percentage error (PE) for each participating laboratory, varied from -63 to 20%; however, for 62% of the laboratories the mean PE was within ±10%, and for 85% the mean PE was within ±20%. For mannosan, the corresponding range was -60 to 69%, but as for levoglucosan, the range was substantially smaller for a subselection of the laboratories; i.e. 
for 33% of the laboratories the mean PE was within ±10%. For galactosan, the mean PE for the participating laboratories ranged from -84 to 593%, and as for mannosan 33% of the laboratories reported a mean PE within ±10%. The variability of the various analytical methods, as defined by their minimum and maximum PE value, was typically better for levoglucosan than for mannosan and galactosan, ranging from 3.2 to 41% for levoglucosan, from 10 to 67% for mannosan and from 6 to 364% for galactosan. For the levoglucosan to mannosan ratio, which may be used to assess the relative importance of softwood versus hardwood burning, the variability only ranged from 3.5 to 24%. To our knowledge, this is the first major intercomparison on analytical methods used to quantify monosaccharide anhydrides in ambient aerosol filter samples conducted and reported in the scientific literature. The results show that for levoglucosan the accuracy is only slightly lower than that reported for analysis of SO42- (sulfate) on filter samples, a constituent that has been analysed by numerous laboratories for several decades, typically by ion chromatography and which is considered a fairly easy constituent to measure. Hence, the results obtained for levoglucosan with respect to accuracy are encouraging and suggest that levels of levoglucosan, and to a lesser extent mannosan and galactosan, obtained by most of the analytical methods currently used to quantify monosaccharide anhydrides in ambient aerosol filter samples, are comparable. Finally, the various analytical methods used in the current study should be tested for other aerosol matrices and concentrations as well, the most obvious being summertime aerosol samples affected by wildfires and/or agricultural fires.
Kahl, Johannes; Busscher, Nicolaas; Mergardt, Gaby; Mäder, Paul; Torp, Torfinn; Ploeger, Angelika
2015-01-01
There is a need for authentication tools in order to verify the existing certification system. Recently, markers for analytical authentication of organic products were evaluated. Herein, crystallization with additives is described as an interesting fingerprint approach which needs further evidence, based on a standardized method and well-documented sample origin. The fingerprint of wheat cultivars from a controlled field trial is generated from structure analysis variables of crystal patterns. Method performance was tested on factors such as crystallization chamber, day of experiment and region of interest of the patterns. Two different organic treatments and two different treatments of the non-organic regime could be grouped together in each of three consecutive seasons. When the k-nearest-neighbor classification method was applied, approximately 84% of Runal samples and 95% of Titlis samples were classified correctly into organic and non-organic origin using cross-validation. Crystallization with additives offers an interesting complementary fingerprint method for organic wheat samples. When the method is applied to winter wheat from the DOK trial, organically and non-organically treated samples can be differentiated significantly based on pattern recognition. Therefore, crystallization with additives seems to be a promising tool in organic wheat authentication. © 2014 Society of Chemical Industry.
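The classification step described above can be sketched as a k-nearest-neighbor classifier evaluated by leave-one-out cross-validation. This is a minimal one-dimensional sketch with an invented "crystal pattern score" per sample; the actual study used multivariate structure-analysis variables.

```python
def knn_predict(train_x, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (1-D feature for simplicity)."""
    dists = sorted((abs(tx - x), ty) for tx, ty in zip(train_x, train_y))
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

def loocv_accuracy(xs, ys, k=3):
    """Leave-one-out cross-validated classification accuracy."""
    hits = 0
    for i in range(len(xs)):
        tx = xs[:i] + xs[i + 1:]
        ty = ys[:i] + ys[i + 1:]
        hits += knn_predict(tx, ty, xs[i], k) == ys[i]
    return hits / len(xs)

# Hypothetical 1-D "crystal pattern texture score" per wheat sample.
scores = [0.10, 0.15, 0.20, 0.22, 0.70, 0.75, 0.80, 0.85]
labels = ["organic"] * 4 + ["non-organic"] * 4
print(f"LOOCV accuracy = {loocv_accuracy(scores, labels):.2f}")  # -> 1.00
```

With well-separated groups, as in this toy dataset, the cross-validated accuracy is perfect; the study's 84-95% figures reflect real overlap between treatment groups.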
The mean and variance of phylogenetic diversity under rarefaction
Matsen, Frederick A.
2013-01-01
Summary Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required. PMID:23833701
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth are required.
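The species-richness case that the abstract contrasts with PD has a classical exact solution (Hurlbert 1971), and the exact-versus-Monte-Carlo validation strategy it describes can be sketched for that simpler statistic. The abundance vector below is illustrative; the PD formulae themselves additionally require the phylogenetic tree, which is omitted here.

```python
import math
import random

def expected_richness(abundances, m):
    """Exact expected species richness in a rarefied sample of size m:
    E[S_m] = sum_i (1 - C(N - N_i, m) / C(N, m))  (Hurlbert 1971)."""
    N = sum(abundances)
    if m > N:
        raise ValueError("rarefaction depth exceeds total sample size")
    denom = math.comb(N, m)
    return sum(1.0 - math.comb(N - n_i, m) / denom for n_i in abundances)

def monte_carlo_richness(abundances, m, draws=20000, seed=0):
    """Estimate the same expectation by repeated random subsampling."""
    rng = random.Random(seed)
    pool = [sp for sp, n in enumerate(abundances) for _ in range(n)]
    total = 0
    for _ in range(draws):
        total += len(set(rng.sample(pool, m)))
    return total / draws

counts = [50, 20, 10, 5, 1]  # stem counts per species (illustrative)
exact = expected_richness(counts, 10)
approx = monte_carlo_richness(counts, 10)
print(f"exact E[S_10] = {exact:.3f}, Monte Carlo = {approx:.3f}")
```

As the abstract notes for PD, the analytical form is both exact and far cheaper than the repeated subsampling it replaces.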
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
7 CFR 94.303 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...
SAM Radiochemical Methods Query
Laboratories measuring target radiochemical analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select radiochemical analytes.
Küme, Tuncay; Sağlam, Barıs; Ergon, Cem; Sisman, Ali Rıza
2018-01-01
The aim of this study is to evaluate and compare the analytical performance characteristics of two creatinine methods, one based on the Jaffe reaction and one enzymatic. The two original creatinine methods, Jaffe and enzymatic, were evaluated on an Architect c16000 automated analyzer in terms of limit of detection (LOD), limit of quantitation (LOQ), linearity, intra-assay and inter-assay precision, and comparability in serum and urine samples. The method comparison and bias estimation using patient samples according to the CLSI guideline were performed on 230 serum and 141 urine samples analyzed on the same auto-analyzer. The LODs were determined as 0.1 mg/dL for both serum methods and as 0.25 and 0.07 mg/dL for the Jaffe and the enzymatic urine methods, respectively. The LOQs were similar, at 0.05 mg/dL, for both serum methods, and the enzymatic urine method had a lower LOQ than the Jaffe urine method (0.5 and 2 mg/dL, respectively). Both methods were linear up to 65 mg/dL for serum and 260 mg/dL for urine. The intra-assay and inter-assay precision data were under desirable levels for both methods. High correlations between the two methods were found in serum and urine (r=.9994 and r=.9998, respectively). On the other hand, the Jaffe method gave higher creatinine results than the enzymatic method, especially at low concentrations in both serum and urine. Both the Jaffe and enzymatic methods were found to meet the analytical performance requirements for routine use. However, the enzymatic method showed better performance at low creatinine levels. © 2017 Wiley Periodicals, Inc.
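A method-comparison analysis like the one described can be sketched in a few lines: paired results from the two methods are assessed via correlation and mean bias. The paired creatinine values below are invented for illustration; a full CLSI EP09-style evaluation would additionally use regression (e.g. Deming or Passing-Bablok) and difference plots.

```python
import math

def method_comparison(x, y):
    """Pearson correlation and mean bias (y - x) for paired results
    from two analytical methods measured on the same samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)
    bias = sum(b - a for a, b in zip(x, y)) / n
    return r, bias

# Hypothetical paired serum creatinine results (mg/dL): enzymatic vs
# Jaffe, with the Jaffe method reading slightly high at low levels.
enzymatic = [0.4, 0.7, 1.0, 1.8, 3.5, 6.0]
jaffe = [0.55, 0.82, 1.08, 1.85, 3.52, 6.01]
r, bias = method_comparison(enzymatic, jaffe)
print(f"r = {r:.4f}, mean bias (Jaffe - enzymatic) = {bias:+.3f} mg/dL")
```

This toy data reproduces the qualitative pattern reported: near-perfect correlation overall, with a positive Jaffe bias concentrated at low concentrations.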
The lost steps of infancy: symbolization, analytic process and the growth of the self.
Feldman, Brian
2002-07-01
In 'The Lost Steps' the Latin American novelist Alejo Carpentier describes the search by the protagonist for the origins of music among native peoples in the Amazon jungle. This metaphor can be utilized as a way of understanding the search for the pre-verbal origins of the self in analysis. The infant's experience of the tempo and rhythmicity of the mother/infant interaction and the bathing in words and sounds of the infant by the mother are at the core of the infant's development of the self. The infant observation method (Tavistock model) will be looked at as a way of developing empathy in the analyst to better understand infantile, pre-verbal states of mind. A case vignette from an adult analysis will be utilized to illustrate the theoretical concepts.
Origin of Analyte-Induced Porous Silicon Photoluminescence Quenching.
Reynard, Justin M; Van Gorder, Nathan S; Bright, Frank V
2017-09-01
We report on gaseous analyte-induced photoluminescence (PL) quenching of porous silicon, as-prepared (ap-pSi) and oxidized (ox-pSi). By using steady-state and emission wavelength-dependent time-resolved intensity luminescence measurements in concert with a global analysis scheme, we find that the analyte-induced quenching is best described by a three-component static quenching model. In the model, there are blue, green, and red emitters (associated with the nanocrystallite core and surface trap states) that each exhibit unique analyte-emitter association constants and these association constants are a consequence of differences in the pSi surface chemistries.
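The three-component static quenching model can be illustrated as follows: under static quenching, the analyte-bound fraction of each emitter population is dark, so each population contributes f_k / (1 + K_k[Q]) to the normalized steady-state intensity. The fractions and association constants below are illustrative, not the fitted values from the study.

```python
def static_quench_intensity(q, fractions, k_assoc):
    """Normalized steady-state PL intensity under a multi-component
    static quenching model: population k (fraction f_k, association
    constant K_k) contributes f_k / (1 + K_k * [Q])."""
    return sum(f / (1.0 + k * q) for f, k in zip(fractions, k_assoc))

# Hypothetical blue/green/red emitter fractions and analyte-emitter
# association constants (per unit analyte concentration).
fractions = [0.3, 0.4, 0.3]
k_assoc = [5.0, 1.0, 0.2]

for q in (0.0, 0.5, 1.0):
    i = static_quench_intensity(q, fractions, k_assoc)
    print(f"[Q] = {q:.1f}  ->  I/I0 = {i:.3f}")
```

Because each population has its own association constant, the overall Stern-Volmer plot of such a system curves rather than following the single-component linear form I0/I = 1 + K[Q].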
USDA-ARS?s Scientific Manuscript database
Analytical solutions of the advection-dispersion solute transport equation remain useful for a large number of applications in science and engineering. In this paper we extend the Duhamel theorem, originally established for diffusion type problems, to the case of advective-dispersive transport subj...
NASA Astrophysics Data System (ADS)
Oanta, Emil M.; Dascalescu, Anca-Elena; Sabau, Adrian
2016-12-01
The paper presents an original analytical model of the hydrodynamic loads applied on the half-bridge of a circular settling tank. The calculus domain is defined using analytical geometry, and the calculation of the local dynamic pressure is based on the radius from the center of the settling tank to the current area, i.e., the relative velocity of the fluid, and on the depth where the current area is located, i.e., the density of the fluid. The calculation of the local drag forces uses the discrete frontal cross-sectional areas of the submerged structure in contact with the fluid. In the last stage, the local drag forces are reduced to the appropriate points belonging to the main beam. This class of loads produces flexure of the main beam in a horizontal plane and additional twisting moments along this structure. By taking the hydrodynamic loads into account, the results of the theoretical models, i.e., the analytical model and the finite element model, may have an increased accuracy.
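A minimal numerical sketch of the load model described above, assuming the relative fluid velocity at radius r of the rotating half-bridge is v = ω·r and using the standard drag relation F = ½ρ·C_d·A·v². The discretization, drag coefficient and rotation rate are illustrative, not values from the paper.

```python
import math

def bridge_drag_loads(radii, areas, omega, rho=1000.0, cd=2.0):
    """Local drag forces on the submerged elements of a rotating
    half-bridge: at radius r the relative fluid velocity is v = omega*r,
    so each frontal area A carries F = 0.5 * rho * cd * A * v**2.
    Returns the per-element forces and their moment about the center."""
    forces = [0.5 * rho * cd * a * (omega * r) ** 2 for r, a in zip(radii, areas)]
    moment = sum(f * r for f, r in zip(forces, radii))
    return forces, moment

# Illustrative discretization: five elements along a 10 m half-bridge,
# each with 0.2 m^2 frontal area; one revolution per hour.
omega = 2 * math.pi / 3600.0       # rad/s
radii = [1.0, 3.0, 5.0, 7.0, 9.0]  # m, element centroids
areas = [0.2] * 5                  # m^2
forces, moment = bridge_drag_loads(radii, areas, omega)
print(f"total drag {sum(forces):.4f} N, moment about center {moment:.3f} N*m")
```

Because the velocity grows linearly with radius, the drag force grows quadratically along the beam, which is why the outer elements dominate the flexural and twisting loads.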
2011-01-01
Background This study aims to identify the statistical software applications most commonly employed for data analysis in health services research (HSR) studies in the U.S. The study also examines the extent to which information describing the specific analytical software utilized is provided in published articles reporting on HSR studies. Methods Data were extracted from a sample of 1,139 articles (including 877 original research articles) published between 2007 and 2009 in three U.S. HSR journals that were considered to be representative of the field based upon a set of selection criteria. Descriptive analyses were conducted to categorize patterns in statistical software usage in those articles. The data were stratified by calendar year to detect trends in software use over time. Results Only 61.0% of original research articles in prominent U.S. HSR journals identified the particular type of statistical software application used for data analysis. Stata and SAS were overwhelmingly the most commonly used software applications (in 46.0% and 42.6% of articles, respectively). However, SAS use grew considerably during the study period compared to other applications. Stratification of the data revealed that the type of statistical software used varied considerably by whether authors were from the U.S. or from other countries. Conclusions The findings highlight a need for HSR investigators to identify more consistently the specific analytical software used in their studies. That information can be important because different software packages might produce varying results, owing to differences in the software's underlying estimation methods. PMID:21977990
Meta-analysis as Statistical and Analytical Method of Journal's Content Scientific Evaluation.
Masic, Izet; Begic, Edin
2015-02-01
A meta-analysis is a statistical and analytical method which combines and synthesizes different independent studies and integrates their results into one common result. This study analyzed the journals "Medical Archives", "Materia Socio Medica" and "Acta Informatica Medica", which are indexed in the most eminent databases of the biomedical milieu. The study has a retrospective and descriptive character and covered the 2014 calendar year. It included six issues of all three journals (a total of 18 issues). In this period a total of 291 articles was published (110 in "Medical Archives", 97 in "Materia Socio Medica", and 84 in "Acta Informatica Medica"). Original articles made up the largest share; smaller numbers were published as professional papers, review articles and case reports. Clinical topics were most common in the first two journals, while articles in "Acta Informatica Medica" belonged to the field of medical informatics, part of the pre-clinical medical disciplines. Articles usually required a period of fifty to fifty-nine days for review. Articles were received from four continents, mostly from Europe. The authors were most often from the territory of Bosnia and Herzegovina, followed by Iran, Kosovo and Macedonia. The number of articles published each year is increasing, with greater participation of authors from different continents and from abroad. Clinical medical disciplines are the most common, with a broader spectrum of topics and a growing number of original articles. Greater support of the wider scientific community is needed for further development of all three of the aforementioned journals.
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...
7 CFR 98.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture... Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MREs are listed as follows: (1) Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
7 CFR 93.4 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...
Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura
2015-01-01
Introduction: We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists’ use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Materials and Methods: Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy’s focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Results: Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. Practice Implications: A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead. PMID:26834939
40 CFR 161.180 - Enforcement analytical method.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 25 2012-07-01 2012-07-01 false Enforcement analytical method. 161.180... DATA REQUIREMENTS FOR REGISTRATION OF ANTIMICROBIAL PESTICIDES Product Chemistry Data Requirements § 161.180 Enforcement analytical method. An analytical method suitable for enforcement purposes must be...
Madsen, James F.; Sandstrom, Mark W.; Zaugg, Steven D.
2002-01-01
A method for the isolation and determination of fipronil and four of its degradates has been developed. This method adapts an analytical method created by the U.S. Geological Survey National Water Quality Laboratory in 1995 for the determination of a broad range of high-use pesticides typically found in filtered natural-water samples. In 2000, fipronil and four of its degradates were extracted, analyzed, and validated using this method. The recoveries for these five compounds in reagent-water samples fortified at 1 microgram per liter (ug/L) averaged 98 percent. Initial method detection limits averaged 0.0029 ug/L. The performance of these five new compounds is consistent with the performance of the compounds in the initial method, making it possible to include them in addition to the other 41 pesticides and pesticide degradates in the original method.
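Method detection limits like those reported above are conventionally derived from replicate low-level spikes. A sketch of the usual EPA-style calculation (MDL = t(n-1, 0.99) × s over seven replicates) follows; the replicate values are invented for illustration.

```python
import statistics

# Student's t at the one-tailed 99% level for n - 1 = 6 degrees of
# freedom, as used in the EPA MDL procedure for n = 7 replicates.
T99_DF6 = 3.143

def method_detection_limit(replicates):
    """EPA-style MDL: t(n-1, 0.99) times the standard deviation of
    low-level spiked replicate measurements (7-replicate form)."""
    if len(replicates) != 7:
        raise ValueError("this sketch assumes the usual 7 replicates")
    return T99_DF6 * statistics.stdev(replicates)

# Hypothetical fipronil replicates (ug/L) spiked near 0.01 ug/L.
reps = [0.0095, 0.0102, 0.0098, 0.0101, 0.0097, 0.0104, 0.0099]
print(f"MDL = {method_detection_limit(reps):.4f} ug/L")
```

The spike level should sit a few times above the expected MDL, otherwise the replicate standard deviation understates the limit.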
40 CFR 158.355 - Enforcement analytical method.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 24 2014-07-01 2014-07-01 false Enforcement analytical method. 158.355... DATA REQUIREMENTS FOR PESTICIDES Product Chemistry § 158.355 Enforcement analytical method. An analytical method suitable for enforcement purposes must be provided for each active ingredient in the...
Oblique Impact Ejecta Flow Fields: An Application of Maxwell's Z Model
NASA Technical Reports Server (NTRS)
Anderson, J. L. B.; Schultz, P. H.; Heineck, J. T.
2001-01-01
Oblique impact flow fields show an evolution from asymmetric to symmetric ejecta flow. This evolution can be captured by a simple analytical description of the evolving flow-field origin using Maxwell's Z model. Additional information is contained in the original extended abstract.
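A minimal sketch of the Z-model description referred to above: flow speed decays as a power law in distance from the flow-field origin, and displacing that origin downrange (the oblique case) makes the surface flow asymmetric. The alpha, Z and offset values are illustrative, not fitted to the impact data.

```python
import math

def z_model_speed(r, alpha=1.0, Z=2.7):
    """Maxwell's Z model: subsurface radial flow speed decays with
    distance r from the flow-field origin as u = alpha / r**Z."""
    return alpha / r ** Z

def speed_at_point(x, y, origin_x, alpha=1.0, Z=2.7):
    """Flow speed at surface point (x, y) when the flow-field origin
    is displaced downrange by origin_x (the oblique-impact case)."""
    r = math.hypot(x - origin_x, y)
    return z_model_speed(r, alpha, Z)

# With the origin shifted downrange (origin_x > 0), uprange and
# downrange points at equal range from the impact point see different
# flow speeds -- the asymmetry that relaxes as the origin evolves.
uprange = speed_at_point(-2.0, 0.0, origin_x=0.5)
downrange = speed_at_point(2.0, 0.0, origin_x=0.5)
print(f"uprange u = {uprange:.4f}, downrange u = {downrange:.4f}")
```

As the effective origin migrates back toward the impact point, the two speeds converge, matching the asymmetric-to-symmetric evolution described in the abstract.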
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
7 CFR 94.103 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...
Pérez-Rodríguez, Michael; Pellerano, Roberto Gerardo; Pezza, Leonardo; Pezza, Helena Redigolo
2018-05-15
Tetracyclines are widely used both for the treatment and prevention of diseases in animals and for the promotion of rapid animal growth and weight gain. This practice may leave trace amounts of these drugs in products of animal origin, such as milk and eggs, posing serious risks to human health. The presence of tetracycline residues in foods can lead to the transmission of antibiotic-resistant pathogenic bacteria through the food chain. To ensure food safety and avoid exposure to these substances, national and international regulatory agencies have established tolerance levels for authorized veterinary drugs, including tetracycline antimicrobials. Accordingly, numerous sensitive and specific methods have been developed for the quantification of these compounds in different food matrices. However, the determination of trace residues in foods such as milk and eggs often requires extensive sample extraction and preparation prior to instrumental analysis. Sample pretreatment is usually the most complicated step in the analytical process and covers both clean-up and pre-concentration. Optimal sample preparation can reduce analysis time and sources of error, enhance sensitivity, and enable unequivocal identification, confirmation and quantification of target analytes. The development and implementation of more environmentally friendly analytical procedures, which use less hazardous solvents and smaller sample sizes than traditional methods, is a rapidly growing trend in analytical chemistry. This review seeks to provide an updated overview of the main trends in sample preparation for the determination of tetracycline residues in foodstuffs. The applicability of several extraction and clean-up techniques employed in the analysis of foodstuffs, especially milk and egg samples, is also thoroughly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.
Ślączka-Wilk, Magdalena M; Włodarczyk, Elżbieta; Kaleniecka, Aleksandra; Zarzycki, Paweł K
2017-07-01
There is increasing interest in the development of simple analytical systems enabling the fast screening of target components in complex samples. A number of newly invented protocols are based on quasi separation techniques involving microfluidic paper-based analytical devices and/or micro total analysis systems. Under such conditions, the quantification of target components can be performed mainly due to selective detection. The main goal of this paper is to demonstrate that miniaturized planar chromatography has the capability to work as an efficient separation and quantification tool for the analysis of multiple targets within complex environmental samples isolated and concentrated using an optimized SPE method. In particular, we analyzed various samples collected from surface water ecosystems (lakes, rivers, and the Baltic Sea of Middle Pomerania in the northern part of Poland) in different seasons, as well as samples collected during key wastewater technological processes (originating from the "Jamno" wastewater treatment plant in Koszalin, Poland). We documented that the multiple detection of chromatographic spots on RP-18W microplates-under visible light, fluorescence, and fluorescence quenching conditions, and using the visualization reagent phosphomolybdic acid-enables fast and robust sample classification. The presented data reveal that the proposed micro-TLC system is useful, inexpensive, and can be considered as a complementary method for the fast control of treated sewage water discharged by a municipal wastewater treatment plant, particularly for the detection of low-molecular mass micropollutants with polarity ranging from estetrol to progesterone, as well as chlorophyll-related dyes. Due to the low consumption of mobile phases composed of water-alcohol binary mixtures (less than 1 mL/run for the simultaneous separation of up to nine samples), this method can be considered an environmentally friendly and green chemistry analytical tool. 
The described analytical protocol can be complementary to those involving classical column chromatography (HPLC) or various planar microfluidic devices.
Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce
2018-05-30
Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50-75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92% respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. 
The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Setar, Katherine Marie
1997-08-01
This dissertation analytically and critically examines composer Pauline Oliveros's philosophy of 'listening' as it applies to selected works created between 1961 and 1984. The dissertation is organized through the application of two criteria: three perspectives of listening (empirical, phenomenal, and, to a lesser extent, personal), and categories derived, in part, from her writings and interviews (improvisational, traditional, theatrical, electronic, meditational, and interactive). In general, Oliveros's works may be categorized by one of two listening perspectives. The 'empirical' listening perspective, which generally includes pure acoustic phenomena, independent of human interpretation, is exemplified in the analyses of Sound Patterns (1961), OH HA AH (1968), and, to a lesser extent, I of IV (1966). The 'phenomenal' listening perspective, which involves the human interaction with pure acoustic phenomena, includes a critical examination of her post-1971 'meditation' pieces and an analytical and critical examination of her tonal 'interactive' improvisations in highly resonant space, such as Watertank Software (1984). The most pervasive element of Oliveros's stylistic evolution is her gradual change from the hierarchical aesthetic of the traditional composer to one in which creative control is more equally shared by all participants. Other significant contributions by Oliveros include the probable invention of the 'meditation' genre, an emphasis on the subjective perceptions of musical participants as a means to greater musical awareness, her musical exploration of highly resonant space, and her pioneering work in American electronic music. Both analytical and critical commentary were applied to selected representative works from Oliveros's six compositional categories. 
The analytical methods applied to Oliveros's works include Wayne Slawson's vowel/formant theory as described in his book Sound Color, an original method of categorizing consonants as noise sources based upon the principles of the International Phonetic Association, traditional morphological analyses, linear-extrapolation analyses derived from Schenker's theory, and discussions of acoustic phenomena as they apply to such practices as 1960s electronic studio techniques and the dynamics of room acoustics.
7 CFR 94.4 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
Three-Dimensional Piecewise-Continuous Class-Shape Transformation of Wings
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
Class-Shape Transformation (CST) is a popular method for creating analytical representations of the surface coordinates of various components of aerospace vehicles. A wide variety of two- and three-dimensional shapes can be represented analytically using only a modest number of parameters, and the surface representation is smooth and continuous to as fine a degree as desired. This paper expands upon the original two-dimensional representation of airfoils to develop a generalized three-dimensional CST parametrization scheme that is suitable for a wider range of aircraft wings than previous formulations, including wings with significant non-planar shapes such as blended winglets and box wings. The method uses individual functions for the spanwise variation of airfoil shape, chord, thickness, twist, and reference axis coordinates to build up the complete wing shape. An alternative formulation parameterizes the slopes of the reference axis coordinates in order to relate the spanwise variation to the tangents of the sweep and dihedral angles. Also discussed are methods for fitting existing wing surface coordinates, including the use of piecewise equations to handle discontinuities, and mathematical formulations of geometric continuity constraints. A subsonic transport wing model is used as an example problem to illustrate the application of the methodology and to quantify the effects of piecewise representation and curvature constraints.
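The two-dimensional kernel that the paper generalizes can be sketched compactly. The class function ψ^N1(1 − ψ)^N2 and the Bernstein-polynomial shape function follow Kulfan's original CST formulation; the function name and the default round-nose/sharp-tail exponents (N1 = 0.5, N2 = 1.0) shown here are illustrative choices.

```python
from math import comb

def cst_curve(psi, coeffs, n1=0.5, n2=1.0, z_te=0.0):
    """Evaluate a 2-D class-shape transformation (CST) curve at psi in [0, 1].

    coeffs weight the Bernstein shape function; n1 = 0.5, n2 = 1.0 gives the
    round-nose/sharp-tail class used for airfoils; z_te offsets the
    trailing edge."""
    n = len(coeffs) - 1
    class_fn = psi ** n1 * (1.0 - psi) ** n2
    shape_fn = sum(a * comb(n, i) * psi ** i * (1.0 - psi) ** (n - i)
                   for i, a in enumerate(coeffs))
    return class_fn * shape_fn + psi * z_te
```

A three-dimensional wing is then built by letting coeffs, chord, thickness, twist, and the reference-axis coordinates vary spanwise, as the paper describes.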
NASA Astrophysics Data System (ADS)
Constantinescu, E.; Oanta, E.; Panait, C.
2017-08-01
The paper presents an initial study of the form factors for shear of a rectangular and a circular cross section, using both an analytical method and a numerical study. The numerical study divides the cross section into small areas and applies the underlying definitions directly to compute the corresponding integrals. Accurate form-factor values improve the accuracy of displacements computed by strain-energy methods. The knowledge resulting from this study will be used for several directions of development: calculation of the form factors for a ring-type cross section with a variable ratio of inner to outer diameter; calculation of the geometrical characteristics of an inclined circular segment; and, using a Boolean algebra that operates on geometrical shapes, of an inclined circular ring segment. These shapes may be used to analytically define the geometrical model of a complex composite section, e.g. a ship hull cross section. The corresponding calculation relations are also useful for developing customized design commands in commercial CAD applications. The paper is a result of the authors' long-term development of original computer-based engineering instruments.
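The strip-summation idea described in the abstract can be sketched for the rectangular case. The formula k = (A/I²)·∫(Q(y)/b)² dA and the exact value 6/5 for a rectangle (10/9 for a circle) are standard strength-of-materials results; the discretization details and function name are assumptions.

```python
def shear_form_factor_rect(b, h, n=20000):
    """Shear form factor k = (A / I^2) * integral of (Q(y)/b)^2 dA for a
    b-wide, h-tall rectangle, evaluated by midpoint-rule strip summation,
    mirroring the 'division of the cross section in small areas'."""
    A = b * h
    I = b * h ** 3 / 12.0          # second moment of area
    dy = h / n
    total = 0.0
    for i in range(n):
        y = -h / 2.0 + (i + 0.5) * dy
        Q = b / 2.0 * (h * h / 4.0 - y * y)   # first moment of area above y
        total += (Q / b) ** 2 * b * dy        # (Q/b)^2 integrated over the strip
    return A / I ** 2 * total
```

The summation converges to the analytical value 6/5 regardless of b and h, which is exactly the kind of cross-check between analytical and numerical results the paper performs.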
Assessment of regional air quality by a concentration-dependent Pollution Permeation Index
Liang, Chun-Sheng; Liu, Huan; He, Ke-Bin; Ma, Yong-Liang
2016-01-01
Although air quality monitoring networks have been greatly improved, interpreting their expanding data in both simple and efficient ways remains challenging; new analytical methods are therefore needed. We developed such a method based on the comparison of pollutant concentrations between target and circum areas (circum comparison for short), and tested its applications by assessing the air pollution in Jing-Jin-Ji, Yangtze River Delta, Pearl River Delta and Cheng-Yu, China during 2015. We found the circum comparison can instantly judge whether a city is a pollution permeation donor or a pollution permeation receptor by means of a Pollution Permeation Index (PPI). Furthermore, a PPI-related estimated concentration (original concentration plus halved average concentration difference) can be used to identify some overestimations and underestimations. It can also help explain pollution processes (e.g., Beijing's PM2.5 may be largely promoted by non-local SO2), though it does not aim at this. Moreover, it is applicable to any region, easy to handle, and able to seed further analytical methods. These advantages, despite the method's limitations in treating the whole process jointly governed by complex physical and chemical factors, demonstrate that the PPI-based circum comparison can be efficiently used in assessing air pollution, yielding instructive results without the need for complex operations. PMID:27731344
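The PPI-related estimated concentration quoted in the abstract (original concentration plus halved average concentration difference) can be sketched directly. The sign convention and function names below are assumptions, since the abstract does not reproduce the paper's exact PPI definition.

```python
def ppi_estimate(c_target, c_circum):
    """Estimated concentration for a target city: its measured value plus
    half the mean difference with its circum (surrounding) areas.

    A positive mean difference (circum areas more polluted than the target)
    raises the estimate; this sign convention is an illustrative assumption."""
    mean_diff = sum(c - c_target for c in c_circum) / len(c_circum)
    return c_target + 0.5 * mean_diff
```

Comparing this estimate with the measured value is one simple way such a scheme could flag overestimated or underestimated cities.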
Evaluation of Analytical Modeling Functions for the Phonation Onset Process.
Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael
2016-01-01
The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
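The selected computation can be sketched as follows: fit a fourth-order polynomial to the glottal area waveform envelope and take the VOT as the time for the fit to rise from 32.2% to 67.8% of the saturation amplitude. The thresholds and polynomial order come from the abstract; the fitting window, evaluation grid, and function names are assumptions.

```python
import numpy as np

def vot_from_envelope(t, env, lo=0.322, hi=0.678, order=4):
    """Voice onset time from a GAW envelope via a polynomial approximation.

    Fits a polynomial of the given order to (t, env) and returns the time the
    fit needs to rise from lo to hi of the saturation (maximum) amplitude."""
    sat = env.max()
    fit = np.polynomial.Polynomial.fit(t, env, order)
    grid = np.linspace(t.min(), t.max(), 10000)
    vals = fit(grid)
    t_lo = grid[np.argmax(vals >= lo * sat)]   # first crossing of the lower threshold
    t_hi = grid[np.argmax(vals >= hi * sat)]   # first crossing of the upper threshold
    return t_hi - t_lo
```

Approximating the envelope before thresholding is what makes the estimate robust to cycle-to-cycle fluctuations of the raw GAW.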
Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor
2009-06-01
Potentiometric lipid membrane-water partition coefficient studies neglect electrostatic interactions to date; this leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs, bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating either from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from uncharged membranes results. Allowance for the Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.
Combined imaging and chemical sensing using a single optical imaging fiber.
Bronk, K S; Michael, K L; Pantano, P; Walt, D R
1995-09-01
Despite many innovations and developments in the field of fiber-optic chemical sensors, optical fibers have not been employed to both view a sample and concurrently detect an analyte of interest. While chemical sensors employing a single optical fiber or a noncoherent fiber-optic bundle have been applied to a wide variety of analytical determinations, they cannot be used for imaging. Similarly, coherent imaging fibers have been employed only for their originally intended purpose, image transmission. We herein report a new technique for viewing a sample and measuring surface chemical concentrations that employs a coherent imaging fiber. The method is based on the deposition of a thin, analyte-sensitive polymer layer on the distal surface of a 350-micron-diameter imaging fiber. We present results from a pH sensor array and an acetylcholine biosensor array, each of which contains approximately 6000 optical sensors. The acetylcholine biosensor has a detection limit of 35 microM and a fast (<1 s) response time. In association with an epifluorescence microscope and a charge-coupled device, these modified imaging fibers can display visual information from a remote sample with 4-micron spatial resolution, allowing for alternating acquisition of both chemical analysis and visual histology.
Analytic Causative Constructions in Medieval Spanish: The Origins of a Construction
ERIC Educational Resources Information Center
Sanaphre Villanueva, Monica
2011-01-01
The goal of this study is to provide an inventory of the Analytic Causative constructions that were in use in Peninsular Spanish from the 12th to the 16th centuries from the constructional perspective of Cognitive Grammar. A detailed profile of each construction was made including its constructional schema along with relevant semantic, syntactic,…
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate its efficacy and reliability. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may serve as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), and matrix effects and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
Mexican-Origin Parents' Involvement in Adolescent Peer Relationships: A Pattern Analytic Approach
ERIC Educational Resources Information Center
Updegraff, Kimberly A.; Killoren, Sarah E.; Thayer, Shawna M.
2007-01-01
The cultural backgrounds and experiences of Mexican-origin mothers and fathers (including their Anglo and Mexican cultural orientations and their familism values) and their socioeconomic background (parental education, family income, neighborhood poverty rate) are linked to the nature of their involvement in adolescent peer relationships.
Are K-12 Learners Motivated in Physical Education? A Meta-Analysis
ERIC Educational Resources Information Center
Chen, Senlin; Chen, Ang; Zhu, Xihe
2012-01-01
Previous studies devoted to K-12 learner motivation in physical education share a general assumption that students may lack motivation. This meta-analytic study examined published original studies (n = 79) to determine students' motivation level and the association between motivation and outcomes. Original means of motivation measures were…
Rapid Analysis of Copper Ore in Pre-Smelter Head Flow Slurry by Portable X-ray Fluorescence.
Burnett, Brandon J; Lawrence, Neil J; Abourahma, Jehad N; Walker, Edward B
2016-05-01
Copper-laden ore is often concentrated using flotation. Before the head flow slurry can be smelted, it is important to know the concentration of copper and contaminants. The concentrations of copper and other elements fluctuate significantly in the head flow, often requiring modification of the slurry concentrations prior to smelting. A rapid, real-time analytical method is needed to support on-site optimization of the smelter feedstock. A portable, handheld X-ray fluorescence spectrometer was utilized to determine the copper concentration in a head flow suspension at the slurry origin. The method requires only seconds and is reliable for copper concentrations of 2.0-25%, typically encountered in such slurries. © The Author(s) 2016.
In Search of a Pony: Sources, Methods, Outcomes, and Motivated Reasoning.
Stone, Marc B
2018-05-01
It is highly desirable to be able to evaluate the effect of policy interventions. Such evaluations should have expected outcomes based upon sound theory and be carefully planned, objectively evaluated and prospectively executed. In many cases, however, assessments originate with investigators' poorly substantiated beliefs about the effects of a policy. Instead of designing studies that test falsifiable hypotheses, these investigators adopt methods and data sources that serve as little more than descriptions of these beliefs in the guise of analysis. Interrupted time series analysis is one of the most popular forms of analysis used to present these beliefs. It is intuitively appealing but, in most cases, it is based upon false analogies, fallacious assumptions and analytical errors.
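For readers unfamiliar with the technique under critique, a minimal interrupted time series (segmented regression) model can be sketched. The level-change/slope-change design matrix below is the textbook form; the variable names and plain OLS fit are illustrative assumptions, not anything from the article.

```python
import numpy as np

def its_fit(y, t0):
    """Ordinary-least-squares fit of the basic interrupted-time-series model
    y ~ b0 + b1*t + b2*step + b3*(t - t0)*step, where step jumps from 0 to 1
    at the intervention index t0. Returns [b0, b1, b2, b3]: baseline level,
    baseline trend, level change at the intervention, and trend change."""
    t = np.arange(len(y), dtype=float)
    step = (t >= t0).astype(float)
    X = np.column_stack([np.ones_like(t), t, step, (t - t0) * step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

Fitting such a model is mechanical; the article's point is that the assumptions behind interpreting b2 and b3 as policy effects often go unexamined.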
Formal treatment of astronomical images with a spatially variable PSF
NASA Astrophysics Data System (ADS)
Sánchez, B. O.; Domínguez, M. J.; Lares, M.
2017-10-01
We present a Python implementation of a method for PSF determination in the context of optimal subtraction of astronomical images. We introduce an expansion of the spatially variant point spread function (PSF) in terms of the Karhunen-Loève basis. The advantage of this approach is that the basis naturally adapts to the data, instead of imposing a fixed ad hoc analytic form. Simulated image reconstruction was analyzed using the measured PSF, with good agreement in sky background level between the reconstructed and original images. The technique is simple enough to be implemented in more sophisticated image subtraction methods, since it improves their results without extra computational cost in a spatially variant PSF environment.
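A Karhunen-Loève basis adapted to a stack of measured PSF stamps can be obtained from the singular value decomposition of the stamp matrix. This is a generic sketch of the idea; the mean-centering convention and function names are assumptions, not details from the paper.

```python
import numpy as np

def kl_basis(psf_stamps, n_comp):
    """Leading Karhunen-Loeve components of a stack of PSF stamps.

    psf_stamps: array of shape (n_stamps, ny, nx). Each stamp is flattened,
    the stack is mean-centered, and the right singular vectors of the
    resulting matrix give an orthonormal basis adapted to the data."""
    X = psf_stamps.reshape(len(psf_stamps), -1).astype(float)
    X -= X.mean(axis=0)                      # centering convention (assumed)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_comp].reshape(n_comp, *psf_stamps.shape[1:])
```

A spatially variant PSF is then modeled by letting the coefficients of these components vary smoothly across the image.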
Earth recovery mode analysis for a Martian sample return mission
NASA Technical Reports Server (NTRS)
Green, J. P.
1978-01-01
The analysis has concerned itself with evaluating alternative methods of recovering a sample module from a trans-earth trajectory originating in the vicinity of Mars. The major modes evaluated are: (1) direct atmospheric entry from trans-earth trajectory; (2) earth orbit insertion by retropropulsion; and (3) atmospheric braking to a capture orbit. In addition, the question of guided vs. unguided entry vehicles was considered, as well as alternative methods of recovery after orbit insertion for modes (2) and (3). A summary of results and conclusions is presented. Analytical results for aerodynamic and propulsive maneuvering vehicles are discussed. System performance requirements and alternatives for inertial systems implementation are also discussed. Orbital recovery operations and further studies required to resolve the recovery mode issue are described.
Structure and Development of the List of Prohibited Substances and Methods.
Kinahan, Audrey; Budgett, Richard; Mazzoni, Irene
2017-01-01
The list of prohibited substances and methods (the List) is the international standard that determines what is prohibited in sport both in- and out-of-competition. Since 2004, the official text of the List is produced by the World Anti-Doping Agency (WADA), the international independent organization responsible for promoting, coordinating, and monitoring the fight against doping in sport. Originally based on the prohibited lists established by the International Olympic Committee, the List has evolved to incorporate new doping trends, distinguish permitted from prohibited routes of administration, and adjust to new analytical and pharmacological breakthroughs. In this chapter, the elements that compose the List as well as the updates over the years are presented. © 2017 S. Karger AG, Basel.
Gálvez, Akemi; Iglesias, Andrés
2013-01-01
Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
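The final linear step of the scheme — solving for control points once knots and data parameters are fixed — can be sketched as a least-squares problem. The Cox-de Boor recursion below is the standard B-spline basis construction, and np.linalg.lstsq (itself SVD-based) stands in for the paper's singular value decomposition step; the clamped-knot convention and names are illustrative.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: the i-th B-spline basis of order k (degree k-1),
    evaluated at parameter values t (half-open knot intervals)."""
    if k == 1:
        return np.where((knots[i] <= t) & (t < knots[i + 1]), 1.0, 0.0)
    out = np.zeros_like(t)
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        out += (t - knots[i]) / d1 * bspline_basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k] - t) / d2 * bspline_basis(i + 1, k - 1, t, knots)
    return out

def fit_spline(t, pts, knots, k=4):
    """Least-squares control points for fixed knots and data parameters t."""
    n = len(knots) - k
    B = np.column_stack([bspline_basis(i, k, t, knots) for i in range(n)])
    ctrl, *_ = np.linalg.lstsq(B, pts, rcond=None)
    return ctrl
```

With this linear solve in place, the remaining hard part — choosing the knots and parameterization — is exactly what the firefly metaheuristic in the paper optimizes.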
Multiplex biosensing with highly sensitive magnetic nanoparticle quantification method
NASA Astrophysics Data System (ADS)
Nikitin, M. P.; Orlov, A. V.; Znoyko, S. L.; Bragina, V. A.; Gorshkov, B. G.; Ksenevich, T. I.; Cherkasov, V. R.; Nikitin, P. I.
2018-08-01
Unique properties of magnetic nanoparticles (MNP) have provided many breakthrough solutions for life science. The immense potential of MNP as labels in advanced immunoassays stems from the fact that, unlike optical labels, they can be easily detected inside 3D opaque porous biosensing structures or in colored media, can be manipulated by an external magnetic field, and exhibit high stability and negligible background signal in biological samples. In this research, magnetic nanolabels and an original technique for their quantification by non-linear magnetization have permitted the development of novel methods of multiplex biosensing. Several types of highly sensitive multi-channel readers that offer an extremely wide linear dynamic range have been developed to count MNP in different recognition zones for quantitative concentration measurements of various analytes. Four approaches to multiplex biosensing based on MNP have been demonstrated in one-run tests: 3D porous structures; flat and micropillar microfluidic sensor chips; multi-line lateral flow strips; and a modular architecture of the strips, which is the first 3D multiplexing method that goes beyond the traditional planar techniques. Cardiac and cancer markers, small molecules, and oligonucleotides were detected in the experiments. The analytical characteristics of the developed multiplex methods are on a par with modern, time-consuming laboratory techniques. The developed multiplex biosensing platforms are promising for medical and veterinary diagnostics, food inspection, and environmental and security monitoring.
A Newton method for the magnetohydrodynamic equilibrium equations
NASA Astrophysics Data System (ADS)
Oliver, Hilary James
We have developed and implemented a (J, B)-space Newton method to solve the full nonlinear three-dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field yields the Newton gradient term (analogous to ∇f · δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux-surface local when expressed in terms of a vector potential in an A_r = 0 gauge. The equations are discretized by a hybrid spectral/offset-grid finite-difference technique, and the leading-order radial dependence is factored from the Fourier coefficients to improve finite-difference accuracy near the polar-like origin. After calculating the Newton gradient term, we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.
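The Newton update referred to in this abstract, solving the linearized system (analogous to ∇f · δx = -f) and stepping x ← x + δx, can be illustrated on a toy algebraic system. This is a generic sketch only, assuming a small dense analytic Jacobian; it is not the (J, B)-space equilibrium solver itself:

```python
import numpy as np

def newton(f, jac, x0, tol=1e-12, maxit=50):
    """Newton's method for f(x) = 0: solve J(x) dx = -f(x), then x <- x + dx."""
    x = np.asarray(x0, float)
    for _ in range(maxit):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(jac(x), -fx)
    return x

# toy system: intersection of the circle x^2 + y^2 = 2 with the line x = y
f = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton(f, jac, [2.0, 0.5])  # converges quadratically to (1, 1)
```

The quadratic convergence of this update is what makes Newton schemes attractive over Picard iteration when the gradient term can be computed cheaply.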
Improved characterization of the botanical origin of sugar by carbon-13 SNIF-NMR applied to ethanol.
Thomas, Freddy; Randet, Celia; Gilbert, Alexis; Silvestre, Virginie; Jamin, Eric; Akoka, Serge; Remaud, Gerald; Segebarth, Nicolas; Guillou, Claude
2010-11-24
Until now, no analytical method, not even an isotopic one, had been able to differentiate between sugars coming from C4-metabolism plants (cane, maize, etc.) and some crassulacean acid metabolism plants (e.g., pineapple, agave), because in both cases the isotope distributions of the overall carbon-13/carbon-12 and site-specific deuterium/hydrogen isotope ratios are very similar. Following recent advances in the field of quantitative isotopic carbon-13 NMR measurements, a procedure for the analysis of the positional carbon-13/carbon-12 isotope ratios of ethanol derived from the sugars of pineapples and agave using the site-specific natural isotopic fractionation-nuclear magnetic resonance (SNIF-NMR) method is presented. It is shown that reproducible results can be obtained when appropriate analytical conditions are used. When applied to pineapple juice, this new method demonstrates a unique ability to detect cane and maize sugar, which are major potential adulterants, with a detection limit on the order of 15% of the total sugars, which provides an efficient means of controlling the authenticity of juices made from this specific fruit. When applied to tequila products, this new method demonstrates a unique ability to unambiguously differentiate authentic 100% agave tequila, as well as misto tequila (made from at least 51% agave), from products made from a larger proportion of cane or maize sugar and therefore not complying with the legal definition of tequila.
Sobotník, Jan; Jirosová, Anna; Hanus, Robert
2010-09-01
The rapid development of analytical methods in the last four decades has led to the discovery of a fascinating diversity of defensive chemicals used by termites. The last exhaustive review on termite defensive chemicals was published by G.D. Prestwich in 1984. In this text, we aim to fill the gap of the past 25 years and review all of the relevant primary sources on the chemistry of termite defense (126 original papers, see Fig. 1 and online supplementary material), along with related biological aspects, such as the anatomy of defensive glands and their functional mechanisms, alarm communication, and the evolutionary significance of these defensive elements.
Bond order potential module for LAMMPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-11
pair_bop is a module for performing energy calculations using the Bond Order Potential (BOP) in the parallel molecular dynamics code LAMMPS. The bop pair style computes the BOP based upon quantum mechanical theory, incorporating both sigma and pi bonding. Because the BOP is derived analytically from quantum mechanical theory, its transferability to different phases can approach that of quantum mechanical methods. This potential is extremely effective at modeling III-V and II-VI compounds such as GaAs and CdTe. It is similar to the original BOP developed by Pettifor and later updated by Murdick et al. and Ward et al.
Sant'Ana, Luiza D'O; Sousa, Juliana P L M; Salgueiro, Fernanda B; Lorenzon, Maria Cristina Affonso; Castro, Rosane N
2012-01-01
Various bioactive chemical constituents were quantified in 21 honey samples obtained in Rio de Janeiro and Minas Gerais, Brazil. To evaluate their antioxidant activity, 3 different methods were used: the ferric reducing antioxidant power (FRAP), the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical-scavenging activity, and the 2,2'-azinobis(3-ethylbenzothiazoline)-6-sulfonate (ABTS) assays. Correlations between the parameters were statistically significant (-0.8410 ≤ r ≤ -0.6684, P < 0.05). Principal component analysis showed that honey samples of the same floral origin had more similar profiles, which made it possible to group the eucalyptus, morrão de candeia, and cambara honey samples in 3 distinct areas, while cluster analysis could separate the artificial honey from the floral honeys. This research might aid in the discrimination of honey floral origin by using simple analytical methods in association with multivariate analysis, which also revealed a clear difference between floral honeys and artificial honey, indicating a possible way to help identify artificial honeys. © 2011 Institute of Food Technologists®
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, applicable to a bounded medium, to supplement analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.
Estimating the R-curve from residual strength data
NASA Technical Reports Server (NTRS)
Orange, T. W.
1985-01-01
A method is presented for estimating the crack-extension resistance curve (R-curve) from residual-strength (maximum load against original crack length) data for precracked fracture specimens. The method allows additional information to be inferred from simple test results, and that information can be used to estimate the failure loads of more complicated structures of the same material and thickness. The fundamentals of the R-curve concept are reviewed first. Then the analytical basis for the estimation method is presented. The estimation method has been verified in two ways. Data from the literature (involving several materials and different types of specimens) are used to show that the estimated R-curve is in good agreement with the measured R-curve. A recent predictive blind round-robin program offers a more crucial test. When the actual failure loads are disclosed, the predictions are found to be in good agreement.
NASA Technical Reports Server (NTRS)
Kvaternik, Raymond G.; Silva, Walter A.
2008-01-01
A computational procedure for identifying the state-space matrices corresponding to discrete bilinear representations of nonlinear systems is presented. A key feature of the method is the use of first- and second-order Volterra kernels (first- and second-order pulse responses) to characterize the system. The present method is based on an extension of a continuous-time bilinear system identification procedure given in a 1971 paper by Bruni, di Pillo, and Koch. The analytical and computational considerations that underlie the original procedure and its extension to the title problem are presented and described, pertinent numerical considerations associated with the process are discussed, and results obtained from the application of the method to a variety of nonlinear problems from the literature are presented. The results of these exploratory numerical studies are decidedly promising and provide sufficient credibility for further examination of the applicability of the method.
Smoothed dissipative particle dynamics with angular momentum conservation
NASA Astrophysics Data System (ADS)
Müller, Kathrin; Fedosov, Dmitry A.; Gompper, Gerhard
2015-01-01
Smoothed dissipative particle dynamics (SDPD) combines two popular mesoscopic techniques, the smoothed particle hydrodynamics and dissipative particle dynamics (DPD) methods, and can be considered an improved dissipative particle dynamics approach. Despite several advantages of the SDPD method over the conventional DPD model, the original formulation of SDPD by Español and Revenga (2003) [9] lacks angular momentum conservation, leading to unphysical results for problems where the conservation of angular momentum is essential. To overcome this limitation, we extend the SDPD method by introducing a particle spin variable such that local and global angular momentum conservation is restored. The new SDPD formulation (SDPD+a) is directly derived from the Navier-Stokes equation for fluids with spin, while thermal fluctuations are incorporated similarly to the DPD method. We test the new SDPD method and demonstrate that it properly reproduces fluid transport coefficients. Also, SDPD with angular momentum conservation is validated using two problems: (i) the Taylor-Couette flow with two immiscible fluids and (ii) a tank-treading vesicle in shear flow with a viscosity contrast between the inner and outer fluids. For both problems, the new SDPD method leads to simulation predictions in agreement with the corresponding analytical theories, while the original SDPD method fails to properly capture physical characteristics of the systems due to violation of angular momentum conservation. In conclusion, the extended SDPD method with angular momentum conservation provides a new approach to tackle fluid problems such as multiphase flows and vesicle/cell suspensions, where the conservation of angular momentum is essential.
Revisiting the emission from relativistic blast waves in a density-jump medium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, J. J.; Huang, Y. F.; Dai, Z. G.
2014-09-01
Re-brightening bumps are frequently observed in gamma-ray burst afterglows. Many scenarios have been proposed to interpret the origin of these bumps, of which a blast wave encountering a density-jump in the circumburst environment has been questioned by recent works. We develop a set of differential equations to calculate the relativistic outflow encountering the density-jump by extending the work of Huang et al. This approach is a semi-analytic method and is very convenient. Our results show that late high-amplitude bumps cannot be produced under common conditions; rather, only a short plateau may emerge even when the encounter occurs at an early time (<10^4 s). In general, our results disfavor the density-jump origin for those observed bumps, which is consistent with the conclusion drawn from full hydrodynamics studies. The bumps thus should be caused by other scenarios.
Structural characterization of pharmaceutical heparins prepared from different animal tissues.
Fu, Li; Li, Guoyun; Yang, Bo; Onishi, Akihiro; Li, Lingyun; Sun, Peilong; Zhang, Fuming; Linhardt, Robert J
2013-05-01
Although most pharmaceutical heparin used today is obtained from porcine intestine, heparin has historically been prepared from bovine lung and ovine intestine. There is some regulatory concern about establishing the species origin of heparin. This concern began with the outbreak of mad cow disease in the 1990s and was exacerbated during the heparin shortage in the 2000s and the heparin contamination crisis of 2007-2008. Three heparins, from porcine, ovine, and bovine tissues, were characterized through state-of-the-art carbohydrate analysis methods with a view to profiling their physicochemical properties. Differences in molecular weight, monosaccharide and disaccharide composition, oligosaccharide sequence, and antithrombin III-binding affinity were observed. These data provide some insight into the variability of heparins obtained from these three species and suggest some analytical approaches that may be useful in confirming the species origin of a heparin active pharmaceutical ingredient. Copyright © 2013 Wiley Periodicals, Inc.
Magagna, Federico; Guglielmetti, Alessandro; Liberto, Erica; Reichenbach, Stephen E; Allegrucci, Elena; Gobino, Guido; Bicchi, Carlo; Cordero, Chiara
2017-08-02
This study investigates the chemical information of the volatile fractions of high-quality cocoa (Theobroma cacao L., Malvaceae) from different origins (Mexico, Ecuador, Venezuela, Colombia, Java, Trinidad, and São Tomé) produced for fine chocolate. This study explores the evolution of the entire pattern of volatiles in relation to cocoa processing (raw, roasted, steamed, and ground beans). Advanced chemical fingerprinting (e.g., combined untargeted and targeted fingerprinting) with comprehensive two-dimensional gas chromatography coupled with mass spectrometry allows advanced pattern recognition for classification, discrimination, and sensory-quality characterization. The entire data set is analyzed for 595 reliable two-dimensional peak regions, including 130 known analytes and 13 potent odorants. Multivariate analysis with unsupervised exploration (principal component analysis) and simple supervised discrimination methods (Fisher ratios and linear regression trees) reveals informative patterns of similarities and differences and identifies characteristic compounds related to sample origin and manufacturing step.
Uncertainty for Part Density Determination: An Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valdez, Mario Orlando
2016-12-14
Accurate and precise density measurement by hydrostatic weighing requires the use of an analytical balance, configured with a suspension system, to measure the weight of a part both in water and in air. Additionally, the densities of these media (water and air) must be precisely known for the part density determination. To validate the accuracy and precision of these measurements, uncertainty statements are required. The work in this report is a revision of an original report written more than a decade ago, specifically applying principles and guidelines suggested by the Guide to the Expression of Uncertainty in Measurement (GUM) for determining the part density uncertainty through sensitivity analysis. In this work, updated derivations are provided; an original example is revised with the updated derivations; and an appendix is devoted to uncertainty evaluations using Monte Carlo techniques, specifically the NIST Uncertainty Machine, as a viable alternative method.
Modification of LAMPF's magnet-mapping code for offsets of center coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.W.; Gomulka, S.; Merrill, F.
1991-01-01
One of the magnet measurements performed at LAMPF is the determination of the cylindrical harmonics of a quadrupole magnet using a rotating coil. The data are analyzed with the code HARMAL to derive the amplitudes of the harmonics. Initially, the origin of the polar coordinate system is the axis of the rotating coil. A new coordinate system is found by a simple translation of the old system such that the dipole moment in the new system is zero. The origin of this translated system is referred to as the magnetic center. Given this translation, the code calculates the coefficients of the cylindrical harmonics in the new system. The code has been modified to use an analytical calculation to determine these new coefficients. The method of calculation is described and some implications of this formulation are presented. 8 refs., 2 figs.
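As an illustrative sketch of this kind of translation (our own construction, not the HARMAL algorithm itself): if the field is represented by complex harmonic coefficients c_n with B(z) = Σ_n c_n z^n about the coil axis, a shift of origin z → z + d transforms the coefficients binomially, and the magnetic center is the d that zeros the new dipole (constant) term:

```python
import numpy as np
from math import comb

def recenter(c):
    """Shift the origin of a 2D multipole expansion B(z) = sum_n c[n] * z**n
    so that the dipole (constant) term vanishes.  Returns (shift d, new coeffs)."""
    # new dipole term c'_0(d) = sum_n c[n] * d**n is a polynomial in d;
    # pick the root nearest the coil axis
    roots = np.roots(c[::-1])
    d = roots[np.argmin(np.abs(roots))]
    N = len(c)
    cp = np.array([sum(c[n] * comb(n, m) * d ** (n - m) for n in range(m, N))
                   for m in range(N)])
    return d, cp

# hypothetical quadrupole data: a small dipole term from an off-axis coil,
# a dominant quadrupole, and a trace octupole (values are made up)
c = np.array([0.02 + 0.01j, 1.0, 0.0, 5e-4])
d, cp = recenter(c)          # |cp[0]| ~ 0; the quadrupole cp[1] stays ~ 1.0
```

For a quadrupole-dominated field this reduces, to leading order, to the familiar offset d ≈ -c_1/c_2, i.e. the point where the measured field vanishes.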
Estimation of chaotic coupled map lattices using symbolic vector dynamics
NASA Astrophysics Data System (ADS)
Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya
2010-01-01
In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original method based on symbolic vector dynamics was proposed for initial condition estimation in an additive white Gaussian noise environment. The estimation precision of this method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the estimated values by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulence in coupled map lattices.
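For readers unfamiliar with the model class, a coupled map lattice of the kind estimated here can be simulated in a few lines. This assumes the standard diffusively coupled logistic-map lattice with periodic boundaries; the map, coupling strength, and lattice size below are illustrative choices, not the Letter's settings:

```python
import numpy as np

def cml_step(x, eps=0.3, a=3.9):
    """One update of a diffusively coupled logistic-map lattice with periodic
    boundaries: x_i <- (1-eps) f(x_i) + (eps/2) (f(x_{i-1}) + f(x_{i+1}))."""
    f = a * x * (1.0 - x)
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(1)
x = rng.random(16)              # initial condition (the target of estimation)
for _ in range(100):            # iterate into the turbulent regime
    x = cml_step(x)
```

Estimating the initial lattice state from a noisy observation of such a trajectory is the inverse problem the Letter addresses via symbolic vector dynamics.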
Efficient visualization of urban spaces
NASA Astrophysics Data System (ADS)
Stamps, A. E.
2012-10-01
This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic system beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiments. We hope that our method can be used by non-path-integral experts or experimentalists as a "black box" for any given system.
Schuh, V.; Šír, J.; Galliová, J.; Švandová, E.
1966-01-01
A comparison of the weight and photometric methods of primary assay of BCG vaccine has been made, using a vaccine prepared in albumin-free medium but containing Tween 80. In the weight method, the bacteria were trapped on a membrane filter; for photometry a Pulfrich Elpho photometer and an instrument of Czech origin were used. The photometric results were the more precise, provided that the measurements were made within two days of completion of growth; after this time the optical density of the suspension began to decrease slowly. The lack of precision of the weighing method is probably due to the small weight of culture deposit (which was almost on the limit of accuracy of the analytical balance) and to difficulties in the manipulation of the ultrafilter. PMID:5335458
Wolf, Jan-Christoph; Gyr, Luzia; Mirabelli, Mario F; Schaer, Martin; Siegenthaler, Peter; Zenobi, Renato
2016-09-01
Active capillary plasma ionization is a highly efficient ambient ionization method. Its general principle of ion formation is closely related to atmospheric pressure chemical ionization (APCI). The method is based on dielectric barrier discharge ionization (DBDI), and can be constructed in the form of a direct flow-through interface to a mass spectrometer. Protonated species ([M + H]+) are predominantly formed, although in some cases radical cations are also observed. We investigated the underlying ionization mechanisms and reaction pathways for the formation of protonated analyte ([M + H]+). We found that ionization occurs in the presence and in the absence of water vapor. Therefore, the mechanism cannot exclusively rely on hydronium clusters, as generally accepted for APCI. Based on isotope labeling experiments, protons were shown to originate from various solvents (other than water) and, to a minor extent, from gaseous impurities and/or self-protonation. By using CO2 instead of air or N2 as plasma gas, additional species like [M + OH]+ and [M - H]+ were observed. These gas-phase reaction products of CO2 with the analyte (tertiary amines) indicate the presence of a radical-mediated ionization pathway, which proceeds by direct reaction of the ionized plasma gas with the analyte. The proposed reaction pathway is supported with density functional theory (DFT) calculations. These findings add a new ionization pathway leading to the protonated species to those currently known for APCI.
Ariyama, Kaoru; Kadokura, Masashi; Suzuki, Tadanao
2008-01-01
Techniques to determine the geographic origin of foods, based on various principles, have been developed for various agricultural and fishery products. Some of these techniques are already in use for checking the authenticity of labeling. Many are based on multielement analysis and chemometrics. We have developed such a technique to determine the geographic origin of onions (Allium cepa L.). This technique, which determines whether an onion is from outside Japan, is designed for onions labeled as having a geographic origin of Hokkaido, Hyogo, or Saga, the main onion production areas in Japan. However, estimations of discrimination errors for this technique had not been fully conducted; they had been limited to discrimination models and did not include analytical errors. Interlaboratory studies were conducted to estimate the analytical errors of the technique. Four collaborators each determined 11 elements (Na, Mg, P, Mn, Zn, Rb, Sr, Mo, Cd, Cs, and Ba) in 4 test materials of fresh and dried onions. Discrimination errors in this technique were estimated by summing (1) individual differences within lots, (2) variations between lots from the same production area, and (3) analytical errors. The discrimination errors for onions from Hokkaido, Hyogo, and Saga were estimated to be 2.3, 9.5, and 8.0%, respectively. Those for onions from abroad in determinations targeting Hokkaido, Hyogo, and Saga were estimated to be 28.2, 21.6, and 21.9%, respectively.
Solid-phase extraction versus matrix solid-phase dispersion: Application to white grapes.
Dopico-García, M S; Valentão, P; Jagodzińska, A; Klepczyńska, J; Guerra, L; Andrade, P B; Seabra, R M
2007-11-15
The use of matrix solid-phase dispersion (MSPD) was tested to separately extract phenolic compounds and organic acids from white grapes. This method was compared with a more conventional analytical method, developed previously, that combines solid-liquid extraction (SL) to simultaneously extract phenolic compounds and organic acids, followed by solid-phase extraction (SPE) to separate the two types of compounds. Although the results were qualitatively similar for both techniques, the levels of extracted compounds were in general considerably lower with MSPD, especially for organic acids. Therefore, the SL-SPE method was preferred for analysing white "Vinho Verde" grapes. Twenty samples of 10 different varieties (Alvarinho, Avesso, Asal-Branco, Batoca, Douradinha, Esganoso de Castelo Paiva, Loureiro, Pedernã, Rabigato and Trajadura) from four different locations in Minho (Portugal) were analysed in order to study the effects of variety and origin on the profiles of the above-mentioned compounds. Principal component analysis (PCA) was applied separately to establish the main sources of variability present in the data sets for phenolic compounds, for organic acids and for the global data. PCA of phenolic compounds accounted for the highest variability (77.9%) with two PCs, enabling characterization of the sample varieties according to their higher content of flavonol derivatives or epicatechin. Additionally, a strong effect of sample origin was observed. Stepwise linear discriminant analysis (SLDA) was used to differentiate grapes according to origin and variety, resulting in correct classification rates of 100 and 70%, respectively.
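The PCA step used in studies like this one reduces a samples-by-compounds concentration matrix to a few components. A minimal sketch with a plain SVD (the data below are synthetic stand-ins for two hypothetical sample groups, not the grape measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic concentration matrix: 20 samples x 6 analytes, two groups
# with different mean profiles (stand-ins for two origins/varieties)
g1 = rng.normal([5.0, 1.0, 3.0, 0.5, 2.0, 1.0], 0.3, size=(10, 6))
g2 = rng.normal([2.0, 4.0, 1.0, 2.0, 2.0, 3.0], 0.3, size=(10, 6))
X = np.vstack([g1, g2])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # autoscale each analyte
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt.T                          # sample coordinates on the PCs
explained = s ** 2 / np.sum(s ** 2)        # variance fraction per component
# with structure this strong, the two groups separate along PC1
```

Autoscaling (mean-centering plus unit variance) is the usual choice here because analyte concentrations span very different ranges; without it, the highest-concentration compound would dominate the first component.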
Isotopic tracing of perchlorate in the environment
Sturchio, Neil C.; Böhlke, John Karl; Gu, Baohua; Hatzinger, Paul B.; Jackson, W. Andrew; Baskaran, Mark
2012-01-01
Isotopic measurements can be used for tracing the sources and behavior of environmental contaminants. Perchlorate (ClO4−) has been detected widely in groundwater, soils, fertilizers, plants, milk, and human urine since 1997, when improved analytical methods for analyzing ClO4− concentration became available for routine use. Perchlorate ingestion poses a risk to human health because of its interference with thyroidal hormone production. Consequently, methods for isotopic analysis of ClO4− have been developed and applied to assist evaluation of the origin and migration of this common contaminant. Isotopic data are now available for stable isotopes of oxygen and chlorine, as well as 36Cl isotopic abundances, in ClO4− samples from a variety of natural and synthetic sources. These isotopic data provide a basis for distinguishing sources of ClO4− found in the environment, and for understanding the origin of natural ClO4−. In addition, the isotope effects of microbial ClO4− reduction have been measured in laboratory and field experiments, providing a tool for assessing ClO4− attenuation in the environment. Isotopic data have been used successfully in some areas for identifying major sources of ClO4− contamination in drinking water supplies. Questions about the origin and global biogeochemical cycle of natural ClO4− remain to be addressed; such work would benefit from the development of methods for preparation and isotopic analysis of ClO4− in samples with low concentrations and complex matrices.
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2011 CFR
2011-04-01
21 Food and Drugs 6 2011-04-01 false. § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or ...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2014 CFR
2014-04-01
21 Food and Drugs 6 2014-04-01 false. § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe ... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or ...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2012 CFR
2012-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2012-04-01 2012-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
21 CFR 530.22 - Safe levels and analytical methods for food-producing animals.
Code of Federal Regulations, 2013 CFR
2013-04-01
... analytical method; or (3) Establish a safe level based on other appropriate scientific, technical, or... 21 Food and Drugs 6 2013-04-01 2013-04-01 false Safe levels and analytical methods for food... § 530.22 Safe levels and analytical methods for food-producing animals. (a) FDA may establish a safe...
Abushareeda, Wadha; Vonaparti, Ariadni; Saad, Khadija Al; Almansoori, Moneera; Meloug, Mbarka; Saleh, Amal; Aguilera, Rodrigo; Angelis, Yiannis; Horvatovich, Peter L; Lommen, Arjen; Alsayrafi, Mohammed; Georgakopoulos, Costas
2018-03-20
The aim of this paper is to present the development and validation of a high-resolution full scan (HR-FS) electrospray ionization (ESI) liquid chromatography coupled to quadrupole Orbitrap mass spectrometer (LC/Q/Orbitrap MS) platform for the screening of prohibited substances in human urine according to World Anti-Doping Agency (WADA) requirements. The method was also validated for quantitative analysis of six endogenous steroids (epitestosterone, testosterone, 5α-dihydrotestosterone, dehydroepiandrosterone, androsterone and etiocholanolone) in their intact sulfate form. The sample preparation combined liquid-liquid extraction of hydrolyzed urine with dilute-and-shoot addition of original urine to the extracted aliquot. The HR-FS MS acquisition mode with polarity switching was applied in combination with the quadrupole-Orbitrap mass filter. HR-FS acquisition of the analytical signal for known and unknown small molecules allows all analytes detectable by LC-MS to be included in antidoping investigations, so that the use of known or novel prohibited substances and metabolites can be identified after reprocessing of the electronic data files. The method has been validated as fit-for-purpose for antidoping analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Sebastian, J. D.; Weatherill, W. H.
1979-01-01
Analytical and empirical studies of a finite difference method for the solution of the transonic flow about harmonically oscillating wings and airfoils are presented. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady equations for small disturbances. Since sinusoidal motion is assumed, the unsteady equation is independent of time. Three finite difference investigations are discussed including a new operator for mesh points with supersonic flow, the effects on relaxation solution convergence of adding a viscosity term to the original differential equation, and an alternate and relatively simple downstream boundary condition. A method is developed which uses a finite difference procedure over a limited inner region and an approximate analytical procedure for the remaining outer region. Two investigations concerned with three-dimensional flow are presented. The first is the development of an oblique coordinate system for swept and tapered wings. The second derives the additional terms required to make row relaxation solutions converge when mixed flow is present. A finite span flutter analysis procedure is described using the two-dimensional unsteady transonic program with a full three-dimensional steady velocity potential.
Stebelska, Katarzyna
2013-08-01
Psychoactive drugs of fungal origin, among them psilocin, ibotenic acid, and muscimol, have been proposed for recreational use and popularized since the 1960s. Despite their well-documented neurotoxicity, they have gained a reputation for being safe and nonaddictive. Scientific efforts to find any medical application for these hallucinogens in psychiatry, psychotherapy, and even in support of religious rituals are highly controversial. Even if they show some healing potential, their usage in psychotherapy is in some cases inadequate and may additionally harm seriously suffering patients. Hallucinogens are thought to reduce cognitive functions. However, in the case of indolealkylamines such as psilocin, some recent findings suggest an ability to improve perception and mental skills, which would motivate the consumption of "magic mushrooms." The present article offers an opportunity to learn the main symptoms of intoxication with mushrooms containing psilocybin/psilocin, muscimol, and ibotenic acid. The progress in analytical methods for their detection in fungal material, food, and body fluids is reviewed. Findings on the mechanisms of their biologic activity are summarized. Additionally, the therapeutic potential of these fungal psychoactive compounds and the health risk associated with their abuse are discussed.
Cacho, J I; Campillo, N; Viñas, P; Hernández-Córdoba, M
2015-06-19
Headspace sorptive extraction (HSSE) was used to preconcentrate seven monoterpenes (eucalyptol, linalool, menthol, geraniol, carvacrol, thymol and eugenol) for separation by gas chromatography and mass spectrometry (GC-MS). Three commercially available coatings for the stir bars, namely polydimethylsiloxane (PDMS), polyacrylate (PA) and ethylene glycol-silicone (EG-Silicone), were tested, and the influential parameters in both the sorption and thermal desorption steps were optimized. PDMS provided the best sensitivity for linalool, geraniol, menthol and eucalyptol, whereas EG-Silicone was best for extracting the phenolic monoterpenes studied. Considering the average slopes obtained for all compounds, PDMS emerged as the best option, and the analytical characteristics of the HSSE-TD-GC-MS method using this coating were obtained. Quantification of the samples was carried out by matrix-matched calibration using a synthetic honey. Detection limits ranged between 0.007 and 0.032 ng g(-1), depending on the compound. Twelve honey samples of different floral origins were analyzed using the HSSE-GC-MS method, the analytes being detected at concentrations up to 64 ng g(-1). Copyright © 2015 Elsevier B.V. All rights reserved.
Fast and Analytical EAP Approximation from a 4th-Order Tensor.
Ghosh, Aurobrata; Deriche, Rachid
2012-01-01
Generalized diffusion tensor imaging (GDTI) was developed to model complex apparent diffusivity coefficient (ADC) using higher-order tensors (HOTs) and to overcome the inherent single-peak shortcoming of DTI. However, the geometry of a complex ADC profile does not correspond to the underlying structure of fibers. This tissue geometry can be inferred from the shape of the ensemble average propagator (EAP). Though interesting methods for estimating a positive ADC using 4th-order diffusion tensors were developed, GDTI in general was overtaken by other approaches, for example, the orientation distribution function (ODF), since it is considerably difficult to recuperate the EAP from a HOT model of the ADC in GDTI. In this paper, we present a novel closed-form approximation of the EAP using Hermite polynomials from a modified HOT model of the original GDTI-ADC. Since the solution is analytical, it is fast, differentiable, and the approximation converges well to the true EAP. This method also makes the effort of computing a positive ADC worthwhile, since now both the ADC and the EAP can be used and have closed forms. We demonstrate our approach with 4th-order tensors on synthetic data and in vivo human data.
Rahman, Md Musfiqur; Lee, Han Sol; Abd El-Aty, A M; Kabir, Md Humayun; Chung, Hyung Suk; Park, Jong-Hyouk; Kim, Mi-Ra; Kim, Ji-Hyun; Shin, Ho-Chul; Shin, Sung Shik; Shim, Jae-Han
2018-10-15
A simple quick, easy, cheap, effective, rugged, and safe (QuEChERS)-based method was developed for the analysis of endrin and its metabolite, δ-keto endrin, in five animal-derived food products (chicken, pork, beef, egg, and milk) using a gas chromatography-micro electron capture detector (GC-μECD). Samples were extracted with acidified acetonitrile, salted out with magnesium sulfate and sodium acetate, and finally purified with a dual-layer solid-phase extraction (SPE) cartridge that contains both Supelclean ENVI-Carb (upper layer) and primary secondary amine (lower layer) SPE sorbents. A seven-point external calibration curve was constructed in both solvent and matrix for both compounds. Good linearity was achieved for both analytes, with coefficients of determination (R²) ≥ 0.9960. The limits of detection (LODs) were 0.003 mg/kg, whereas the limits of quantification (LOQ) were 0.01 mg/kg, which were 10 times lower than the extraneous maximum residue limit (EMRL) designated by CODEX Alimentarius for the specified matrices. The method was validated via recovery performances in triplicate, with three fortification levels equivalent to LOQ, 2 × LOQ, and 10 × LOQ. The method provided excellent recoveries, ranging between 75.63 and 117.92%, with relative standard deviations (RSD) ≤ 8.52% for both analytes in various matrices. The developed method was successfully applied to monitor market samples collected from 20 different places throughout the Republic of Korea, and none of the tested analytes were found in the analyzed samples. In conclusion, the current method can be used for routine analysis of endrin and δ-keto endrin in any type of fatty food matrix. Copyright © 2018 Elsevier Ltd. All rights reserved.
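The seven-point calibration and R² ≥ 0.9960 acceptance criterion described above amount to an ordinary least-squares fit; a sketch with synthetic, idealized detector responses (not the paper's data):

```python
# Sketch of a seven-point external calibration check (synthetic data, not
# the paper's measurements): fit signal = a*conc + b by least squares and
# verify the coefficient of determination R^2 against the 0.9960 criterion.
conc = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0]    # mg/kg, hypothetical levels
signal = [1000.0 * c + 5.0 for c in conc]        # idealized detector response

n = len(conc)
mean_x = sum(conc) / n
mean_y = sum(signal) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, signal)) / \
        sum((x - mean_x) ** 2 for x in conc)
intercept = mean_y - slope * mean_x
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, signal))
ss_tot = sum((y - mean_y) ** 2 for y in signal)
r_squared = 1.0 - ss_res / ss_tot
print(r_squared >= 0.9960)
```

With real, noisy responses the same computation yields the R² that is checked against the linearity criterion.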
Molecular characterization of dissolved organic matter (DOM): a critical review.
Nebbioso, Antonio; Piccolo, Alessandro
2013-01-01
Advances in water chemistry in the last decade have improved our knowledge about the genesis, composition, and structure of dissolved organic matter, and its effect on the environment. Improvements in analytical technology, for example Fourier-transform ion cyclotron (FT-ICR) mass spectrometry (MS), homo and hetero-correlated multidimensional nuclear magnetic resonance (NMR) spectroscopy, and excitation emission matrix fluorimetry (EEMF) with parallel factor (PARAFAC) analysis for UV-fluorescence spectroscopy have resulted in these advances. Improved purification methods, for example ultrafiltration and reverse osmosis, have enabled facile desalting and concentration of freshly collected DOM samples, thereby complementing the analytical process. Although its molecular weight (MW) remains undefined, DOM is described as a complex mixture of low-MW substances and larger-MW biomolecules, for example proteins, polysaccharides, and exocellular macromolecules. There is a general consensus that marine DOM originates from terrestrial and marine sources. A combination of diagenetic and microbial processes contributes to its origin, resulting in refractory organic matter which acts as a carbon sink in the ocean. Ocean DOM is derived partially from humified products of plant decay dissolved in fresh water and transported to the ocean, and partially from proteinaceous and polysaccharide material from phytoplankton metabolism, which undergoes in-situ microbial processes, becoming refractory. Some of the DOM interacts with radiation and is, therefore, defined as chromophoric DOM (CDOM). CDOM is classified as terrestrial, marine, anthropogenic, or mixed, depending on its origin. Terrestrial CDOM reaches the oceans via estuaries, whereas autochthonous CDOM is formed in sea water by microbial activity; anthropogenic CDOM is a result of human activity. CDOM also affects the quality of water, by shielding it from solar radiation, and constitutes a carbon sink pool.
Evidence in support of the hypothesis that part of marine DOM is of terrestrial origin, being the result of a long-term carbon sedimentation, has been obtained from several studies discussed herein.
Gao, Fei; Xu, Lingzhi; Zhang, Yuejing; Yang, Zengling; Han, Lujia; Liu, Xian
2018-02-01
The objectives of the current study were to explore the correlation between Raman spectroscopy and lipid characteristics and to assess the potential of Raman spectroscopic methods for distinguishing different sources of animal-originated feed based on lipid characteristics. A total of 105 lipid samples derived from five animal species were analyzed by gas chromatography (GC) and FT-Raman spectroscopy. High correlations (r² > 0.94) were found between the characteristic peak ratios of the Raman spectra (1654/1748 and 1654/1445) and the degree of unsaturation of the animal lipids. The results of FT-Raman data combined with chemometrics showed that the fishmeal and the poultry, porcine and ruminant (bovine and ovine) meat and bone meals (MBMs) could be well separated based on their lipid spectral characteristics. This study demonstrated that FT-Raman spectroscopy can largely capture the lipid structural specificity of different species of animal-originated feed and can be used to discriminate different animal-originated feed samples. Copyright © 2017. Published by Elsevier Ltd.
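The peak-ratio correlation reported above is an ordinary Pearson correlation between a band-intensity ratio and the degree of unsaturation; a minimal sketch with synthetic values (not the study's measurements):

```python
import math

# Sketch of correlating a Raman peak ratio (e.g. I(1654)/I(1445)) with the
# degree of unsaturation; all values below are synthetic, for illustration.
unsaturation = [0.4, 0.6, 0.8, 1.0, 1.2]            # hypothetical units
peak_ratio = [0.5 * u + 0.1 for u in unsaturation]  # idealized linear relation

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r_squared = pearson_r(unsaturation, peak_ratio) ** 2
print(r_squared > 0.94)
```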
Rossier, Joël S; Maury, Valérie; de Voogd, Blaise; Pfammatter, Elmar
2014-10-01
Here we present the use of isotope ratio mass spectrometry (IRMS) for the detection of mislabelling of food produced in Switzerland. The system is based on the analysis of the oxygen isotope distribution in water (δ(18)O). Depending on the location on the earth, lake or groundwater has a specific isotopic distribution, which can serve as a fingerprint to verify whether a product was grown using the corresponding water. This report presents the IRMS technique and the results obtained in detecting the origin of fish grown in selected Swiss lakes as well as asparagus grown in Valais soil. Strengths and limitations of the method are presented for both products: on the one hand, the technique is relatively universal for any product containing significant water; on the other hand, it requires a rather heavy workload to build up a database of water δ(18)O values for products of different origins. This analytical tool is part of the concept of combating fraud currently in use in Switzerland.
Kortesniemi, Maaria; Rosenvald, Sirli; Laaksonen, Oskar; Vanag, Anita; Ollikka, Tarja; Vene, Kristel; Yang, Baoru
2018-04-25
The sensory-chemical profiles of Finnish honeys (labeled as buckwheat, cloudberry-bog, lingonberry, sweet clover, willowherb and multifloral honeys) were investigated using a multi-analytical approach. The sensory test (untrained panel, n = 62) was based on scaling and check-all-that-apply (CATA) methods accompanied by questions on preference and usage of honey. The results were correlated with corresponding profiles of odor-active compounds, determined using gas chromatography coupled with mass spectrometry/olfactometry (GC-MS/O). Botanical origins and chemical compositions including sugars were evaluated using NMR spectroscopy. A total of 73 odor-active compounds were listed based on GC-O. Sweet and mild honeys with familiar sensory properties were preferred by the panelists (PCA, R²X(1) = 0.7), while buckwheat and cloudberry-bog honeys with strong odor, flavor and color were regarded as unfamiliar and unpleasant. The data will give the honey industry novel information on honey properties in relation to botanical origin and consumer preference. Copyright © 2017 Elsevier Ltd. All rights reserved.
Time reversal invariance for a nonlinear scatterer exhibiting contact acoustic nonlinearity
NASA Astrophysics Data System (ADS)
Blanloeuil, Philippe; Rose, L. R. Francis; Veidt, Martin; Wang, Chun H.
2018-03-01
The time reversal invariance of an ultrasonic plane wave interacting with a contact interface characterized by a unilateral contact law is investigated analytically and numerically. It is shown analytically that despite the contact nonlinearity, the re-emission of a time reversed version of the reflected and transmitted waves can perfectly recover the original pulse shape, thereby demonstrating time reversal invariance for this type of contact acoustic nonlinearity. With the aid of finite element modelling, the time-reversal analysis is extended to finite-size nonlinear scatterers such as closed cracks. The results show that time reversal invariance holds provided that all the additional frequencies generated during the forward propagation, such as higher harmonics, sub-harmonics and zero-frequency component, are fully included in the retro-propagation. If the scattered waves are frequency filtered during receiving or transmitting, such as through the use of narrowband transducers, the recombination of the time-reversed waves will not exactly recover the original incident wave. This discrepancy due to incomplete time invariance can be exploited as a new method for characterizing damage by defining damage indices that quantify the departure from time reversal invariance. The sensitivity of these damage indices for various crack lengths and contact stress levels is investigated computationally, indicating some advantages of this narrowband approach relative to the more conventional measurement of higher harmonic amplitude, which requires broadband transducers.
ERIC Educational Resources Information Center
Oliver, Martin
2005-01-01
This article reviews the concept of "affordance", a term widely used in the literature on learning and technology to try to explain the properties technologies have. It is argued that the concept has drifted so far from its origins that it is now too ambiguous to be analytically valuable. In addition, it is suggested that its origins in…
ERIC Educational Resources Information Center
Updegraff, Kimberly A.; Perez-Brena, Norma J.; Baril, Megan E.; McHale, Susan M.; Umana-Taylor, Adriana J.
2012-01-01
Using latent profile analysis, the authors examined patterns of mother-father involvement in adolescents' peer relationships along three dimensions--support, guidance, and restrictions--in 240 Mexican-origin families. Three profiles were identified: (a) High Mother Involvement (mothers higher than fathers on all three dimensions), (b) High…
Erasmus, Sara Wilhelmina; Muller, Magdalena; van der Rijst, Marieta; Hoffman, Louwrens Christiaan
2016-02-01
Stable isotope ratios ((13)C/(12)C and (15)N/(14)N) of South African Dorper lambs from farms with different vegetation types were measured by isotope ratio mass spectrometry (IRMS), to evaluate it as a tool for the authentication of origin and feeding regime. Homogenised and defatted meat of the Longissimus lumborum (LL) muscle of lambs from seven different farms was assessed. The δ(13)C values were affected by the origin of the meat, mainly reflecting the diet. The Rûens and Free State farms had the lowest (p ⩽ 0.05) δ(15)N values, followed by the Northern Cape farms, with Hantam Karoo/Calvinia having the highest δ(15)N values. Discriminant analysis showed δ(13)C and δ(15)N differences as promising results for the use of IRMS as a reliable analytical tool for lamb meat authentication. The results suggest that diet, linked to origin, is an important factor to consider regarding region of origin classification for South African lamb. Copyright © 2015 Elsevier Ltd. All rights reserved.
Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation
Sanchez, Clinton; Sundermeier, Brian; Gray, Kenneth; Calin-Jageman, Robert J
2017-01-01
Gervais & Norenzayan (2012) reported in Science a series of 4 experiments in which manipulations intended to foster analytic thinking decreased religious belief. We conducted a precise, large, multi-site pre-registered replication of one of these experiments. We observed little to no effect of the experimental manipulation on religious belief (d = 0.07 in the wrong direction, 95% CI[-0.12, 0.25], N = 941). The original finding does not seem to provide reliable or valid evidence that analytic thinking causes a decrease in religious belief. PMID:28234942
Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L
2014-01-01
The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I), was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: an analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSD(sampling) in the same region as the RSD(analysis) for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs.
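The RSD(sampling) and RSD(analysis) compared above are plain relative standard deviations; a minimal sketch of the computation, with hypothetical increment purities rather than the study's measurements:

```python
import statistics

# Relative standard deviation (RSD, %) of drug purity across sample
# increments; the purity values below are hypothetical, for illustration.
def rsd_percent(values):
    """Sample standard deviation as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

increment_purities = [5.1, 4.8, 5.3, 4.9, 5.0]  # % purity of 1 g increments
print(round(rsd_percent(increment_purities), 2))
```

Comparing this figure for replicate increments against the RSD of replicate analyses of one increment indicates whether sampling or analysis dominates the overall uncertainty.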
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
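The lod score compares the likelihood of the observed meioses at a candidate recombination fraction θ against free recombination (θ = 0.5). A minimal sketch of the phase-known, fully informative case (the textbook formula, not Morton's full sequential framework):

```python
import math

# Phase-known lod score: r recombinants out of n informative meioses.
# LOD(theta) = log10( theta^r * (1 - theta)^(n - r) / 0.5^n )
def lod_score(r: int, n: int, theta: float) -> float:
    return math.log10((theta ** r) * ((1 - theta) ** (n - r)) / (0.5 ** n))

# Hypothetical example: 2 recombinants in 10 meioses, evaluated at theta = 0.2.
print(round(lod_score(2, 10, 0.2), 3))
```

Because lod scores are log-likelihood ratios, scores from independent pedigrees simply add, which is what makes the pooling of published lod curves possible.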
Hybrid Method for Mobile learning Cooperative: Study of Timor Leste
NASA Astrophysics Data System (ADS)
da Costa Tavares, Ofelia Cizela; Suyoto; Pranowo
2018-02-01
In the modern world, decision support systems (DSS) are very useful for helping to solve problems; this study therefore discusses the learning process of savings and loan cooperatives in Timor Leste. The observation shows that the people of Timor Leste are still in the process of learning to use a DSS for sound savings and loan cooperative processes. Based on existing research on credit cooperatives in the Timor Leste community, a mobile application will be built to support the cooperative learning process in East Timorese society. The methods used for decision making are the AHP (Analytical Hierarchy Process) and SAW (Simple Additive Weighting) methods, which yield the result for each criterion and the weight of its value. The result of this research is a mobile learning cooperative decision support system using the SAW and AHP methods. Originality/Value: the two methods, AHP and SAW, are combined in a mobile application to support the decision-making process of a savings and credit cooperative in Timor Leste.
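The SAW step ranks alternatives by the weighted sum of normalized criterion values; a minimal sketch with hypothetical applicants, criteria and weights (in a full AHP/SAW pipeline the weights would come from AHP pairwise comparisons, omitted here):

```python
# Simple Additive Weighting (SAW) sketch: alternatives, criteria and
# weights below are hypothetical; in practice weights come from AHP.
def saw_scores(matrix, weights, benefit):
    # Normalize each criterion column: benefit -> x / max, cost -> min / x.
    cols = list(zip(*matrix))
    norm = []
    for row in matrix:
        norm.append([
            (x / max(col)) if is_benefit else (min(col) / x)
            for x, col, is_benefit in zip(row, cols, benefit)
        ])
    # Score each alternative as the weighted sum of its normalized values.
    return [sum(w * x for w, x in zip(weights, row)) for row in norm]

# Two loan applicants scored on (income, collateral, debt); debt is a cost.
matrix = [[300.0, 50.0, 20.0],
          [240.0, 80.0, 10.0]]
weights = [0.5, 0.3, 0.2]        # hypothetical AHP-derived weights
benefit = [True, True, False]
scores = saw_scores(matrix, weights, benefit)
print(scores.index(max(scores)))  # index of the best-ranked alternative
```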
Laboratories measuring target pathogen analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select pathogens.
Le, Laetitia Minh Maï; Kégl, Balázs; Gramfort, Alexandre; Marini, Camille; Nguyen, David; Cherti, Mehdi; Tfaili, Sana; Tfayli, Ali; Baillet-Guffroy, Arlette; Prognon, Patrice; Chaminade, Pierre; Caudron, Eric
2018-07-01
The use of monoclonal antibodies (mAbs) constitutes one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors. These antibodies are prescribed by the physician and prepared by hospital pharmacists. An analytical control enables the quality of the preparations to be ensured. The aim of this study was to explore the development of a rapid analytical method for quality control. The method used four mAbs (Infliximab, Bevacizumab, Rituximab and Ramucirumab) at various concentrations and was based on recording Raman data and coupling them to a traditional chemometric and machine learning approach for data analysis. Compared to a conventional linear approach, prediction errors are reduced with a data-driven approach using statistical machine learning methods, in which preprocessing and predictive models are jointly optimized. An additional original aspect of the work involved submitting the problem to a collaborative data challenge platform called Rapid Analytics and Model Prototyping (RAMP), which made it possible to use solutions from about 300 data scientists working collaboratively. Using machine learning, the prediction of the four mAb samples was considerably improved. The best predictive model showed a combined error of 2.4% versus 14.6% using the linear approach. The concentration and classification errors were 5.8% and 0.7%; only three spectra were misclassified among the 429 spectra of the test set. This large improvement obtained with machine learning techniques was uniform for all molecules but maximal for Bevacizumab, with an 88.3% reduction in combined errors (2.1% versus 17.9%). Copyright © 2018 Elsevier B.V. All rights reserved.
Restaino, Odile Francesca; Finamore, Rosario; Diana, Paola; Marseglia, Mariacarmela; Vitiello, Mario; Casillo, Angela; Bedini, Emiliano; Parrilli, Michelangelo; Corsaro, Maria Michela; Trifuoggi, Marco; De Rosa, Mario; Schiraldi, Chiara
2017-03-15
Chondroitin sulfate is a glycosaminoglycan widely used as the active principle of anti-osteoarthritis drugs and nutraceuticals, manufactured by extraction from animal cartilaginous tissues. During the manufacturing procedures, another glycosaminoglycan, keratan sulfate (KS), may be co-extracted, eventually constituting a contaminant that is difficult to determine because of its structural similarity. Considering the strict regulatory rules on the purity of pharmaceutical-grade chondroitin sulfate, there is an urgent need and interest to determine residual keratan sulfate with specific, sensitive and reliable methods. To pursue this aim, in this paper, for the first time, we set up a multi-analytical and preparative approach based on: i) a newly developed method by high performance anion-exchange chromatography with pulsed amperometric detection, ii) gas chromatography-mass spectrometry analyses, iii) size exclusion chromatography analyses coupled with a triple detector array module and iv) strong anion exchange chromatography separation. Varied KS percentages, in the range from 0.1 to 19.0% (w/w), were determined in seven pharmacopeia and commercial standards and nine commercial samples of different animal origin and manufacturers. Strong anion exchange chromatography profiles of the samples showed three or four different peaks. These peaks, analyzed by high performance anion-exchange chromatography with pulsed amperometric detection, size exclusion chromatography with triple detector array, ion chromatography, and mono- or two-dimensional nuclear magnetic resonance, revealed a heterogeneous composition of both glycosaminoglycans in terms of sulfation grade and molecular weight. High molecular weight species (>100 kDa) were also present in the samples, which accounted for chains still partially linked to a proteoglycan core. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan
2017-09-01
Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
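The per-p-value posterior in such a two-component mixture follows from Bayes' rule: under H₀, significant p values are uniform on (0, α), while H₁ contributes a density concentrated near zero. A sketch with a hypothetical truncated-Beta alternative and an arbitrary mixture weight (not the fitted model from the paper):

```python
# Posterior probability that a significant p value stems from H0 in a
# two-component mixture. The alternative density (truncated Beta(a, 1))
# and the mixture weight are hypothetical choices, not fitted values.
ALPHA = 0.05

def h0_density(p):
    # Under H0, significant p values are uniform on (0, ALPHA).
    return 1.0 / ALPHA

def h1_density(p, a=0.3):
    # Hypothetical alternative: Beta(a, 1) truncated to (0, ALPHA),
    # which piles mass near zero as real effects tend to do.
    return a * p ** (a - 1.0) / ALPHA ** a

def posterior_h0(p, pi0):
    # Bayes' rule: P(H0 | p) = pi0*f0(p) / (pi0*f0(p) + (1-pi0)*f1(p)).
    num = pi0 * h0_density(p)
    return num / (num + (1.0 - pi0) * h1_density(p))

# A p value near the threshold is more likely to come from H0 than a tiny one.
print(posterior_h0(0.04, 0.5) > posterior_h0(0.001, 0.5))
```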
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method... (analytical method) provided that the chemistry of the method or the determinative technique is not changed... prevent efficient recovery of organic pollutants and prevent the method from meeting QC requirements, the...
Laboratories measuring target biotoxin analytes in environmental samples can use this online query tool to identify analytical methods included in EPA's Selected Analytical Methods for Environmental Remediation and Recovery for select biotoxins.
Laboratories measuring target chemical, radiochemical, pathogens, and biotoxin analytes in environmental samples can use this online query tool to identify analytical methods in EPA's Selected Analytical Methods for Environmental Remediation and Recovery
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
7 CFR 91.23 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 91.23 Section 91.23 Agriculture... SERVICES AND GENERAL INFORMATION Method Manuals § 91.23 Analytical methods. Most analyses are performed according to approved procedures described in manuals of standardized methodology. These standard methods...
NASA Astrophysics Data System (ADS)
Asadpour-Zeynali, Karim; Bastami, Mohammad
2010-02-01
In this work, a new modification of the standard addition method, called the "net analyte signal standard addition method" (NASSAM), is presented for simultaneous spectrofluorimetric and spectrophotometric analysis. The proposed method combines the advantages of the standard addition method with those of the net analyte signal concept. It can be applied to the determination of an analyte in the presence of known interferents. In contrast to the H-point standard addition method, the accuracy of the predictions does not depend on the shape of the analyte and interferent spectra. The method was successfully applied to the simultaneous spectrofluorimetric and spectrophotometric determination of pyridoxine (PY) and melatonin (MT) in synthetic mixtures and in a pharmaceutical formulation.
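The core idea (net analyte signal combined with standard addition) can be sketched with made-up Gaussian spectra: project each measured spectrum onto the orthogonal complement of the known interferent's spectrum, then apply the usual standard-addition extrapolation to the norm of that projection. This is a schematic of the concept only, not the authors' procedure.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)                 # arbitrary wavelength axis
s_analyte = np.exp(-((x - 0.4) / 0.10) ** 2)  # hypothetical analyte spectrum
s_interf = np.exp(-((x - 0.6) / 0.15) ** 2)   # hypothetical known interferent

# Projector onto the orthogonal complement of the interferent subspace.
S = s_interf[:, None]
P = np.eye(x.size) - S @ np.linalg.pinv(S)

c_true, c_interf = 2.0, 1.5                   # unknown analyte and interferent levels
added = np.array([0.0, 1.0, 2.0, 3.0])        # standard additions of analyte
signals = np.array([np.linalg.norm(P @ ((c_true + add) * s_analyte
                                        + c_interf * s_interf))
                    for add in added])        # net analyte signal magnitudes

# Standard addition: the fitted line's intercept/slope recovers the concentration.
slope, intercept = np.polyfit(added, signals, 1)
print(round(intercept / slope, 2))            # → 2.0
```

Because the projector annihilates the interferent exactly, the net signal is linear in the added analyte, so the extrapolation returns the true concentration regardless of spectral overlap.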
CMCpy: Genetic Code-Message Coevolution Models in Python
Becich, Peter J.; Stark, Brian P.; Bhat, Harish S.; Ardell, David H.
2013-01-01
Code-message coevolution (CMC) models represent coevolution of a genetic code and a population of protein-coding genes (“messages”). Formally, CMC models are sets of quasispecies coupled together for fitness through a shared genetic code. Although CMC models display plausible explanations for the origin of multiple genetic code traits by natural selection, useful modern implementations of CMC models are not currently available. To meet this need we present CMCpy, an object-oriented Python API and command-line executable front-end that can reproduce all published results of CMC models. CMCpy implements multiple solvers for leading eigenpairs of quasispecies models. We also present novel analytical results that extend and generalize applications of perturbation theory to quasispecies models and pioneer the application of a homotopy method for quasispecies with non-unique maximally fit genotypes. Our results therefore facilitate the computational and analytical study of a variety of evolutionary systems. CMCpy is free open-source software available from http://pypi.python.org/pypi/CMCpy/. PMID:23532367
Turbulent Motion of Liquids in Hydraulic Resistances with a Linear Cylindrical Slide-Valve
Velescu, C.; Popa, N. C.
2015-01-01
We analyze the motion of viscous and incompressible liquids in the annular space of controllable hydraulic resistances with a cylindrical linear slide-valve. This theoretical study focuses on the turbulent and steady-state motion regimes. The hydraulic resistances mentioned above are the most frequent type of hydraulic resistances used in hydraulic actuators and automation systems. To study the liquids' motion in the controllable hydraulic resistances with a linear cylindrical slide-valve, the report proposes an original analytic method. This study can similarly be applied to any other type of hydraulic resistance. Another purpose of this study is to determine certain mathematical relationships useful to approach the theoretical functionality of hydraulic resistances with magnetic controllable fluids as incompressible fluids in the presence of a controllable magnetic field. In this report, we established general analytic equations to calculate (i) velocity and pressure distributions, (ii) average velocity, (iii) volume flow rate of the liquid, (iv) pressures difference, and (v) radial clearance. PMID:26167532
NASA Astrophysics Data System (ADS)
Gulin, O. E.; Yaroshchuk, I. O.
2017-03-01
The paper is devoted to the analytic study and numerical simulation of mid-frequency acoustic signal propagation in a two-dimensional inhomogeneous random shallow-water medium. The study was carried out by the cross-section method (local modes). We present original theoretical estimates for the behavior of the average acoustic field intensity and show that, at different distances, the features of propagation-loss behavior are determined by the intensity of the fluctuations and their horizontal scale, and depend on regular parameters such as the emission frequency and the magnitude of sound losses in the bottom. We establish analytically that, for the considered waveguide and sound-frequency parameters, the mode coupling effect has a local character and weakly influences the statistics. We also establish that the specific form of the spatial spectrum of sound-velocity inhomogeneities is insignificant for the statistical patterns of the field intensity observed over the range of shallow-water distances of practical interest.
NASA Astrophysics Data System (ADS)
Wang, Hongmei; Zhang, Yafei; Xu, Huaizhe
2007-01-01
The effect of transverse wave vector and magnetic fields on resonant tunneling times in double-barrier structures, which is significant but has frequently been omitted in previous theoretical methods, is reported in this paper. Analytical expressions for the longitudinal energies of quasibound levels (LEQBL) and the lifetimes of quasibound levels (LQBL) in symmetrical double-barrier (SDB) structures have been derived as functions of the transverse wave vector and of longitudinal magnetic fields perpendicular to the interfaces. Based on these analytical expressions, the dependence of the LEQBL and LQBL on transverse wave vector and longitudinal magnetic fields has been explored numerically for a SDB structure. Model calculations show that the LEQBL decrease monotonically and the LQBL shorten with increasing transverse wave vector, and that each original LEQBL splits into a series of sub-LEQBL which shift nearly linearly toward the well bottom, with the lifetimes of the quasibound level series (LQBLS) shortening with increasing Landau-level indices and magnetic fields.
Oud, Bart; Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-01-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independency of genome sequences on experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. PMID:22152095
NASA Astrophysics Data System (ADS)
Tsivilskiy, I. V.; Nagulin, K. Yu.; Gilmutdinov, A. Kh.
2016-02-01
A full three-dimensional nonstationary numerical model of graphite electrothermal atomizers of various types is developed. The model is based on solution of a heat equation within the solid walls of the atomizer with a radiative heat transfer and numerical solution of a full set of Navier-Stokes equations with an energy equation for a gas. Governing equations for the behavior of a discrete phase, i.e., atomic particles suspended in a gas (including gas-phase processes of evaporation and condensation), are derived from the formal equations of molecular kinetics by numerical solution of the Hertz-Langmuir equation. The model is tested on the following atomizers: a Varian standard heated electrothermal vaporizer (ETV), a Perkin Elmer standard transversely heated graphite tube with integrated platform (THGA), and the original double-stage tube-helix atomizer (DSTHA). The experimental verification of the computer calculations is carried out by a method of shadow spectral visualization of the spatial distributions of atomic and molecular vapors in the analytical space of an atomizer.
Fast and global authenticity screening of honey using ¹H-NMR profiling.
Spiteri, Marc; Jamin, Eric; Thomas, Freddy; Rebours, Agathe; Lees, Michèle; Rogers, Karyne M; Rutledge, Douglas N
2015-12-15
An innovative analytical approach was developed to tackle the most common adulterations and quality deviations in honey. Using proton-NMR profiling coupled to suitable quantification procedures and statistical models, analytical criteria were defined to check the authenticity of both mono- and multi-floral honey. The reference data set used was a worldwide collection of more than 800 honeys, covering most of the economically significant botanical and geographical origins. Typical plant nectar markers can be used to check monofloral honey labeling. Spectral patterns and natural variability were established for multifloral honeys, and marker signals for sugar syrups were identified by statistical comparison with a commercial dataset of ca. 200 honeys. Although the results are qualitative, spiking experiments have confirmed the ability of the method to detect sugar addition down to 10% levels in favorable cases. Within the same NMR experiments, quantification of glucose, fructose, sucrose and 5-HMF (regulated parameters) was performed. Finally markers showing the onset of fermentation are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Realini, Marco; Botteon, Alessandra; Colombo, Chiara; Noll, Sarah; Elliott, Stephen R.; Matousek, Pavel
2016-01-01
A recently developed micrometer-scale spatially offset Raman spectroscopy (μ-SORS) method provides a new analytical capability for investigating non-destructively the chemical composition of sub-surface, micrometer-scale thickness, diffusely scattering layers at depths beyond the reach of conventional confocal Raman microscopy. Here, we demonstrate experimentally, for the first time, the capability of μ-SORS to determine whether two detected chemical components originate from two separate layers or whether the two components are mixed together in a single layer. Such information is important in a number of areas, including conservation of cultural heritage objects, and is not available, for highly turbid media, from conventional Raman microscopy, where axial (confocal) scanning is not possible due to an inability to facilitate direct imaging within the highly scattering sample. This application constitutes an additional capability for μ-SORS in addition to its basic capacity to determine the overall chemical make-up of layers in a turbid system. PMID:26767641
NASA Astrophysics Data System (ADS)
Abbas, O.; Fernández Pierna, J. A.; Dardenne, P.; Baeten, V.
2010-04-01
Since the BSE crisis, research has mainly concerned the detection, identification, and quantification of meat and bone meal, with an important focus on the development of new analytical methods. Microscopy-based spectroscopic methods (NIR microscopy, NIRM, and/or NIR hyperspectral imaging) have been proposed as complementary to the official method, optical microscopy. NIR spectroscopy offers the advantage of being rapid, accurate, and independent of the analyst's skills. The combination of an NIR detector and a microscope or a camera allows the collection of high-quality spectra for small feed particles larger than 50 μm. Several studies have demonstrated the clear potential of NIR microscopic methods for the detection of animal particles in both raw and sediment fractions. Samples are sieved and only the coarse fraction (larger than 250 μm) is investigated. The proposed methodologies have been developed to assure, with an acceptable level of confidence (95%), the detection of at least one animal particle when a feed sample is adulterated at a level of 0.1%. NIRM and NIR hyperspectral imaging have been run under ISO 17025 accreditation at CRA-W since 2005. A quantitative NIRM approach has been developed in order to fulfill the new requirements of European Commission policies. The capabilities of the NIRM method have been improved: only the raw fraction is analyzed, both the coarse and the fine fractions of the samples are considered, and the acquisition parameters (the aperture, the gap, and the composition of the animal feed) are optimized. A mapping method for faster collection of spectra has also been developed. The aim of this work is to show the new advances in the analytical methods developed in the frame of the feed ban applied in Europe.
PCR technology for screening and quantification of genetically modified organisms (GMOs).
Holst-Jensen, Arne; Rønning, Sissel B; Løvseth, Astrid; Berdal, Knut G
2003-04-01
Although PCR technology has obvious limitations, the potentially high degree of sensitivity and specificity explains why it has been the first choice of most analytical laboratories interested in detection of genetically modified (GM) organisms (GMOs) and derived materials. Because the products that laboratories receive for analysis are often processed and refined, the quality and quantity of target analyte (e.g. protein or DNA) frequently challenges the sensitivity of any detection method. Among the currently available methods, PCR methods are generally accepted as the most sensitive and reliable methods for detection of GM-derived material in routine applications. The choice of target sequence motif is the single most important factor controlling the specificity of the PCR method. The target sequence is normally a part of the modified gene construct, for example a promoter, a terminator, a gene, or a junction between two of these elements. However, the elements may originate from wildtype organisms, they may be present in more than one GMO, and their copy number may also vary from one GMO to another. They may even be combined in a similar way in more than one GMO. Thus, the choice of method should fit the purpose. Recent developments include event-specific methods, particularly useful for identification and quantification of GM content. Thresholds for labelling are now in place in many countries including those in the European Union. The success of the labelling schemes is dependent upon the efficiency with which GM-derived material can be detected. We will present an overview of currently available PCR methods for screening and quantification of GM-derived DNA, and discuss their applicability and limitations. In addition, we will discuss some of the major challenges related to determination of the limits of detection (LOD) and quantification (LOQ), and to validation of methods.
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
Visual Analytics and Storytelling through Video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.
2005-10-31
This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.
ERIC Educational Resources Information Center
Teplovs, Chris
2015-01-01
This commentary reflects on the contributions to learning analytics and theory by a paper that describes how multiple theoretical frameworks were woven together to inform the creation of a new, automated discourse analysis tool. The commentary highlights the contributions of the original paper, provides some alternative approaches, and touches on…
NASA Astrophysics Data System (ADS)
Li, Jiangui; Wang, Junhua; Zhigang, Zhao; Yan, Weili
2012-04-01
In this paper, an analytical analysis of the permanent magnet vernier (PMV) machine is presented. The key is to analytically solve the governing Laplacian/quasi-Poissonian field equations in the motor regions. The analytical method is verified by using the time-stepping finite element method, and the performance of the PMV machine is quantitatively compared with the analytical results. The analytical results agree well with the finite element method results. Finally, experimental results are given to further show the validity of the analysis.
An Analytical Framework for Soft and Hard Data Fusion: A Dempster-Shafer Belief Theoretic Approach
2012-08-01
fusion. Therefore, we provide a detailed discussion on uncertain data types, their origins and three uncertainty processing formalisms that are popular...suitable membership functions corresponding to the fuzzy sets. 3.2.3 DS Theory The DS belief theory, originally proposed by Dempster, can be thought of as... originated and various imperfections of the source. Uncertainty handling formalisms provide techniques for modeling and working with these uncertain data types
Andrew Fowler
2015-10-01
Compilation of rare earth element and associated major and minor dissolved constituent analytical data for USA geothermal fields and global seafloor hydrothermal vents. Data is in original units. Reference to and use of this data should be attributed to the original authors and publications according to the provisions outlined therein.
ERIC Educational Resources Information Center
Cruz, Rick A.; Wilkinson, Anna V.; Bondy, Melissa L.; Koehly, Laura M.
2012-01-01
Reliability and validity evidence is provided for the Demographic Index of Cultural Exposure (DICE), consisting of six demographic proxy indicators of acculturation, within two community samples of Mexican-origin adults (N= 497 for each sample). Factor analytic procedures were used to examine the common variance shared between the six demographic…
Immediacy, Emotion, and the Filling of Glasses: Next Round's on You
ERIC Educational Resources Information Center
Heinrichs, R. Walter
2006-01-01
In this article, I respond to comments made by K. Salzinger and A. Aleman and A. S. David on my original article. The constructive, reconstructive, and interpretive nature of human cognition is well illustrated by these two responses to my recent article on schizophrenia. In the original article, I used meta-analytic summaries of the published…
A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS.
Lou, Xianwen; de Waal, Bas F M; Milroy, Lech-Gustav; van Dongen, Joost L J
2015-05-01
In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte suppression effect (ASE). In our method, analytes are deposited on top of the surface of matrix preloaded on the MALDI plate. To prevent embedding of the analyte into the matrix crystals, the sample solutions were prepared without matrix and care was taken not to re-dissolve the preloaded matrix. The results with model mixtures of peptides, synthetic polymers, and lipids show that the detection of analyte ions that were completely suppressed using the conventional dried-droplet method could be effectively recovered by using our method. Our findings suggest that the incorporation of analytes in the matrix crystals has an important contributory effect on ASE. By reducing ASE, our method should be useful for the direct MALDI MS analysis of multicomponent mixtures. Copyright © 2015 John Wiley & Sons, Ltd.
Characterization of Organic and Conventional Coffee Using Neutron Activation Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
E. A. De Nadai Fernandes; P. Bode; F. S. Tagliaferro
2000-11-12
Countries importing organic coffee are facing the difficulty of assessing the quality of the product to distinguish original organic coffee from other coffees, thereby eliminating possible fraud. Many analytical methods are matrix sensitive and require matrix-matching reference materials for validation, which are currently nonexistent. This work aims to establish the trace element characterization of organic and conventional Brazilian coffees and to establish correlations with the related soil and the type of fertilizer and agrochemicals applied. It was observed that the variability in element concentrations between the various types of coffee is not so large, which emphasizes the need for analytical methods of high accuracy, reproducibility, and a well-known uncertainty. Moreover, the analyses indicate that sometimes the coffee packages may contain some soil remnants.
Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light
NASA Astrophysics Data System (ADS)
Cai, Jianjun; Shen, Xueju; Lei, Ming
2017-11-01
We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. First, the original image is analytically separated into two phase-only masks; the two masks are then encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light beam. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented with commonly used optical elements, and it combines a diffusion process and a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptography.
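The first step, splitting an amplitude image into two phase-only masks, has a standard closed form: any normalized amplitude A ∈ (0, 1] can be written as A = |e^{iφ₁} + e^{iφ₂}|/2 with φ₁,₂ = θ ± arccos(A) for an arbitrary common phase θ. Below is a minimal sketch of that decomposition only (the polarization encoding and the optical setup of the paper are not modeled, and the random phase θ is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.uniform(0.05, 1.0, (4, 4))            # normalized amplitude image in (0, 1]

theta = rng.uniform(0.0, 2 * np.pi, img.shape)  # arbitrary common phase (acts as a key)
delta = np.arccos(img)                          # half the phase difference
phi1, phi2 = theta + delta, theta - delta       # the two phase-only masks

# Reconstruction: the two unit-amplitude waves interfere back to the image.
recon = np.abs(np.exp(1j * phi1) + np.exp(1j * phi2)) / 2
print(np.allclose(recon, img))                  # → True
```

Either mask alone carries a random-looking phase pattern, which is the intuition behind the silhouette-free property claimed in the abstract.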
NASA Technical Reports Server (NTRS)
Burton, Aaron S.; Stern, Jennifer C.; Elsila, Jamie E.; Glavin, Daniel P.; Dworkin, Jason P.
2012-01-01
The discoveries of amino acids of extraterrestrial origin in many meteorites over the last 40 years have revolutionized the Astrobiology field. A variety of non-terrestrial amino acids similar to those found in life on Earth have been detected in meteorites. A few amino acids have even been found with chiral excesses, suggesting that meteorites could have contributed to the origin of homochirality in life on Earth. In addition to amino acids, which have been productively studied for years, sugar-like molecules, activated phosphates, and nucleobases have also been determined to be indigenous to numerous meteorites. Because these molecules are essential for life as we know it, and meteorites have been delivering them to the Earth since accretion, it is plausible that the origin(s) of life on Earth were aided by extraterrestrially-synthesized molecules. Understanding the origins of life on Earth guides our search for life elsewhere, helping to answer the question of whether biology is unique to Earth. This tutorial review focuses on meteoritic amino acids and nucleobases, exploring modern analytical methods and possible formation mechanisms. We will also discuss the unique window that meteorites provide into the chemistry that preceded life on Earth, a chemical record we do not have access to on Earth due to geologic recycling of rocks and the pervasiveness of biology across the planet. Finally, we will address the future of meteorite research, including asteroid sample-return missions.
Zhou, Y.; Ren, Y.; Tang, D.; Bohor, B.
1994-01-01
Kaolinitic tonsteins of altered synsedimentary volcanic ash-fall origin are well developed in the Late Permian coal-bearing formations of eastern Yunnan Province. Because of their unique origin, wide lateral extent, relatively constant thickness and sharp contacts with enclosing strata, great importance has been attached to these isochronous petrographic markers. In order to compare tonsteins with co-existing, non-cineritic claystones and characterize the individuality of tonsteins from different horizons for coal bed correlation, a semi-quantitative method was developed that is based on statistical analyses of the concentration and morphology of zircons and their spatial distribution patterns. This zircon-based analytical method also serves as a means for reconstructing volcanic ash-fall dispersal patterns. The results demonstrate that zircons from claystones of two different origins (i.e., tonstein and non-cineritic claystone) differ greatly in their relative abundances, crystal morphologies and spatial distribution patterns. Tonsteins from the same area but from different horizons are characterized by their own unique statistical patterns in terms of zircon concentration values and morphologic parameters (crystal length, width and the ratio of these values), thus facilitating stratigraphic correlation. Zircons from the same tonstein horizon also show continuous variation in these statistical patterns as a function of areal distribution, making it possible to identify the main path and direction in which the volcanic source materials were transported by prevailing winds. © 1994.
Wetherbee, Gregory A.; Latysh, Natalie E.; Chesney, Tanya A.
2010-01-01
The U.S. Geological Survey (USGS) used six distinct programs to provide external quality-assurance monitoring for the National Atmospheric Deposition Program / National Trends Network (NTN) and Mercury Deposition Network (MDN) during 2007-08. The field-audit program assessed the effects of onsite exposure, sample handling, and shipping on the chemistry of NTN samples, and a system-blank program assessed the same effects for MDN. Two interlaboratory-comparison programs assessed the bias and variability of the chemical analysis data from the Central Analytical Laboratory (CAL), Mercury (Hg) Analytical Laboratory (HAL), and 12 other participating laboratories. A blind-audit program was also implemented for the MDN to evaluate analytical bias in HAL total Hg concentration data. A co-located-sampler program was used to identify and quantify potential shifts in NADP data resulting from replacement of original network instrumentation with new electronic recording rain gages (E-gages) and prototype precipitation collectors. The results indicate that NADP data continue to be of sufficient quality for the analysis of spatial distributions and time trends of chemical constituents in wet deposition across the U.S. NADP data-quality objectives continued to be achieved during 2007-08. Results also indicate that retrofit of the NADP networks with the new E-gages is not likely to create step-function type shifts in NADP precipitation-depth records, except for sites where annual precipitation depth is dominated by snow because the E-gages tend to catch more snow than the original NADP rain gages. Evaluation of prototype precipitation collectors revealed no difference in sample volumes and analyte concentrations between the original NADP collectors and modified, deep-bucket collectors, but the Yankee Environmental Systems, Inc. (YES) collector obtained samples of significantly higher volumes and analyte concentrations than the standard NADP collector.
Physical-geometric optics method for large size faceted particles.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong
2017-10-02
A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and consequently yields analytical formulas that are both accurate and computationally efficient for absorbing particles. A bundle of rays incident on a single facet can be traced as one beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics splits the original beam into several sub-beams so that each sub-beam is incident on only one facet. This beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method generalizes to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared with their counterparts from two other methods, including a numerically rigorous one.
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method based on Bayesian analysis for time-series data from an agent-based simulation of the international emissions trading market, and compares it with a discrete Fourier transform (DFT) analysis. The purpose is to demonstrate analytical methods that map time-series data, such as market prices, into a lower-dimensional representation. These analyses revealed the following: (1) the classification methods express the time series as distances in the mapped space, which is easier to understand and reason about than the raw series; (2) the methods can analyze uncertain time-series data, including both stationary and non-stationary processes, via the distances obtained from the agent-based simulation; and (3) the Bayesian method can resolve a 1% difference in the agents' emission-reduction targets.
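The DFT-based mapping that the abstract uses as a point of comparison can be sketched generically: each series is reduced to the magnitudes of its lowest Fourier coefficients and classified by distance in that feature space. This is an illustrative reconstruction, not the authors' code; the series, frequencies, and `n_coeffs` choice are all hypothetical.

```python
import numpy as np

def dft_features(series, n_coeffs=4):
    """Map a time series to the magnitudes of its lowest DFT coefficients."""
    spectrum = np.fft.rfft(series - np.mean(series))
    return np.abs(spectrum[1:n_coeffs + 1])

def nearest_class(series, labelled):
    """Assign the class whose exemplar is closest in DFT-feature space."""
    feats = dft_features(series)
    return min(labelled, key=lambda name: np.linalg.norm(feats - labelled[name]))

# Hypothetical market-price-like exemplars: a stationary and a trending series.
t = np.arange(64)
exemplars = {
    "stationary": dft_features(np.sin(0.5 * t)),
    "trending": dft_features(0.1 * t),
}
print(nearest_class(np.sin(0.5 * t) + 0.05, exemplars))  # -> stationary
```

Mean subtraction makes the constant offset irrelevant, so the shifted sinusoid lands at zero distance from the "stationary" exemplar.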
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 23 2014-07-01 2014-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 24 2013-07-01 2013-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
77 FR 41336 - Analytical Methods Used in Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-13
... Methods Used in Periodic Reporting AGENCY: Postal Regulatory Commission. ACTION: Notice of filing. SUMMARY... proceeding to consider changes in analytical methods used in periodic reporting. This notice addresses... informal rulemaking proceeding to consider changes in the analytical methods approved for use in periodic...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
40 CFR 141.704 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...
DOT National Transportation Integrated Search
1974-10-01
The author has brought the review of published analytical methods for determining alcohol in body materials up-to-date. The review deals with analytical methods for alcohol in blood and other body fluids and tissues; breath alcohol methods; factors ...
Automated Clean Chemistry for Bulk Analysis of Environmental Swipe Samples - FY17 Year End Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ticknor, Brian W.; Metzger, Shalina C.; McBay, Eddy H.
Sample preparation methods for mass spectrometry are being automated using commercial-off-the-shelf (COTS) equipment to shorten lengthy and costly manual chemical purification procedures. This development addresses a serious need in the International Atomic Energy Agency’s Network of Analytical Laboratories (IAEA NWAL) to increase efficiency in the Bulk Analysis of Environmental Samples for Safeguards program with a method that allows unattended, overnight operation. In collaboration with Elemental Scientific Inc., the prepFAST-MC2 was designed based on COTS equipment. It was modified for uranium/plutonium separations using renewable columns packed with Eichrom TEVA and UTEVA resins, with a chemical separation method based on the Oak Ridge National Laboratory (ORNL) NWAL chemical procedure. The newly designed prepFAST-SR has had several upgrades compared with the original prepFAST-MC2. Both systems are currently installed in the Ultra-Trace Forensics Science Center at ORNL.
Uncertainties in Atomic Data and Their Propagation Through Spectral Models. I.
NASA Technical Reports Server (NTRS)
Bautista, M. A.; Fivet, V.; Quinet, P.; Dunn, J.; Gull, T. R.; Kallman, T. R.; Mendoza, C.
2013-01-01
We present a method for computing uncertainties in spectral models, i.e., level populations, line emissivities, and emission line ratios, based upon the propagation of uncertainties originating from atomic data. We provide analytic expressions, in the form of linear sets of algebraic equations, for the coupled uncertainties among all levels. These equations can be solved efficiently for any set of physical conditions and uncertainties in the atomic data. We illustrate our method applied to spectral models of O III and Fe II and discuss the impact of the uncertainties on atomic systems under different physical conditions. As to intrinsic uncertainties in theoretical atomic data, we propose that these uncertainties can be estimated from the dispersion in the results from various independent calculations. This technique provides excellent results for the uncertainties in A-values of forbidden transitions in [Fe II]. Key words: atomic data - atomic processes - line: formation - methods: data analysis - molecular data - molecular processes - techniques: spectroscopic
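The paper derives analytic linear equations for the coupled uncertainties; the same propagation can be cross-checked by Monte Carlo sampling of the rate coefficients. The sketch below uses a toy 3-level system with invented rates and an assumed 5% fractional uncertainty, purely to illustrate the idea of pushing atomic-data errors through a steady-state level-population solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-level system: steady-state populations solve A @ n = b, where A is
# built from hypothetical excitation/decay rates (not real atomic data).
A0 = np.array([[ 1.0, -0.2, -0.1],
               [-0.3,  1.0, -0.2],
               [ 0.0,  0.0,  1.0]])   # last row: normalization-like closure
b = np.array([0.0, 0.0, 1.0])
sigma_frac = 0.05                      # assumed 5% uncertainty on each rate

samples = []
for _ in range(2000):
    # Perturb every nonzero rate by its fractional uncertainty and re-solve.
    A = A0 * (1 + sigma_frac * rng.standard_normal(A0.shape))
    samples.append(np.linalg.solve(A, b))
samples = np.asarray(samples)

n_mean = samples.mean(axis=0)   # mean level populations
n_std = samples.std(axis=0)     # propagated uncertainty per level
print(n_mean, n_std)
```

The spread `n_std` plays the role of the coupled level-population uncertainties that the paper obtains analytically.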
NASA Astrophysics Data System (ADS)
Seadawy, Aly R.
2017-09-01
The nonlinear two-dimensional Kadomtsev-Petviashvili (KP) equation governs the behaviour of nonlinear waves in dusty plasmas with variable dust charge and two-temperature ions. By using the reductive perturbation method, the two-dimensional dust-acoustic solitary waves (DASWs) in an unmagnetized cold plasma consisting of dust fluid, ions and electrons are shown to obey a KP equation. We derived the solitary travelling-wave solutions of the two-dimensional nonlinear KP equation by implementing the sech-tanh, sinh-cosh, extended direct algebraic and fraction direct algebraic methods. We found the electrostatic field potential and electric field in the form of travelling-wave solutions of the two-dimensional nonlinear KP equation. The solutions obtained by these methods can be evaluated precisely and efficiently. As an illustration, we used the Mathematica 10.1 package to solve the original problem. These solutions are in good agreement with the analytical ones.
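For reference, the constant-coefficient KP equation and its line-soliton solution take the standard form below. The dusty-plasma KP equation derived in the paper has coefficients fixed by the plasma parameters, so this is only the generic template, not the authors' specific result:

```latex
\left(u_t + 6\,u\,u_x + u_{xxx}\right)_x + 3\sigma^2 u_{yy} = 0,
\qquad \sigma^2 = \pm 1 \;\; (\text{KP-I}/\text{KP-II}),
```

with the line soliton

```latex
u(x,y,t) = 2\kappa^2 \operatorname{sech}^2\!\left[\kappa x + \lambda y
  - \left(4\kappa^3 + \frac{3\sigma^2\lambda^2}{\kappa}\right) t\right],
```

which reduces to the familiar KdV soliton of amplitude $2\kappa^2$ and speed $4\kappa^2$ when $\lambda = 0$.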
Google matrix analysis of directed networks
NASA Astrophysics Data System (ADS)
Ermann, Leonardo; Frahm, Klaus M.; Shepelyansky, Dima L.
2015-10-01
In the past decade modern societies have developed enormous communication and social networks. Their classification and information retrieval processing has become a formidable task for the society. Because of the rapid growth of the World Wide Web, and social and communication networks, new mathematical methods have been invented to characterize the properties of these networks in a more detailed and precise way. Various search engines extensively use such methods. It is highly important to develop new tools to classify and rank a massive amount of network information in a way that is adapted to internal network structures and characteristics. This review describes the Google matrix analysis of directed complex networks demonstrating its efficiency using various examples including the World Wide Web, Wikipedia, software architectures, world trade, social and citation networks, brain neural networks, DNA sequences, and Ulam networks. The analytical and numerical matrix methods used in this analysis originate from the fields of Markov chains, quantum chaos, and random matrix theory.
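The core object of the review, the Google matrix G = αS + (1−α)E/N built from a network's adjacency structure, and its leading left eigenvector (PageRank) can be sketched in a few lines. The tiny 3-node network below is illustrative only.

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Build the Google matrix G = alpha*S + (1-alpha)/N from adjacency rows
    (adj[i, j] = 1 if node i links to node j)."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Dangling nodes (no out-links) are treated as linking uniformly everywhere.
    S = np.where(out > 0, adj / np.where(out == 0, 1, out), 1.0 / n)
    return alpha * S + (1 - alpha) / n

def pagerank(G, tol=1e-10):
    """Stationary distribution of G (left eigenvector for eigenvalue 1),
    computed by power iteration."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    while True:
        p_next = p @ G
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Tiny directed network: node 0 is linked to by both other nodes.
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
p = pagerank(google_matrix(adj))
print(p)  # node 0 receives the highest rank
```

The damping factor α = 0.85 is the conventional choice; the power iteration converges geometrically at rate α.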
NASA Technical Reports Server (NTRS)
King, R. B.; Fordyce, J. S.; Antoine, A. C.; Leibecki, H. F.; Neustadter, H. E.; Sidik, S. M.
1976-01-01
Concentrations of 75 chemical constituents in the airborne particulate matter were measured in Cleveland, Ohio, during 1971 and 1972. Values covering a 1-year period (45 to 50 sampling days) at each of 16 sites are presented for 60 elements. A lesser number of values is given for sulfate, nitrate, fluoride, acidity, 10 polynuclear aromatic hydrocarbon compounds, and the aliphatic hydrocarbon compounds as a group. Methods used included instrumental neutron activation, emission spectroscopy, gas chromatography, combustion techniques, and colorimetry. Uncertainties in the concentrations associated with the sampling procedures, the analysis methods, the use of several analytical facilities, and samples with concentrations below the detection limits are evaluated in detail. The data is discussed in relation to other studies and source origins. The trace constituent concentrations as a function of wind direction are used to suggest a practical method for air pollution source identification.
Horacek, Micha; Hansel-Hohl, Karin; Burg, Kornel; Soja, Gerhard; Okello-Anyanga, Walter; Fluch, Silvia
2015-01-01
The indication of origin of sesame seeds and sesame oil is one of the important factors influencing its price, as it is produced in many regions worldwide and certain provenances are especially sought after. We joined stable carbon and hydrogen isotope analysis with DNA based molecular marker analysis to study their combined potential for the discrimination of different origins of sesame seeds. For the stable carbon and hydrogen isotope data a positive correlation between both isotope parameters was observed, indicating a dominant combined influence of climate and water availability. This enabled discrimination between sesame samples from tropical and subtropical/moderate climatic provenances. Carbon isotope values also showed differences between oil from black and white sesame seeds from identical locations, indicating higher water use efficiency of plants producing black seeds. DNA based markers gave independent evidence for geographic variation as well as provided information on the genetic relatedness of the investigated samples. Depending on the differences in ambient environmental conditions and in the genotypic fingerprint, a combination of both analytical methods is a very powerful tool to assess the declared geographic origin. To our knowledge this is the first paper on food authenticity combining the stable isotope analysis of bio-elements with DNA based markers and their combined statistical analysis. PMID:25831054
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2017-05-09
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
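The dynamic method of the patent extends the earlier static quantum-clustering idea (Horn and Gottlieb), in which a Parzen estimator ψ(x) built from Gaussians on the data points defines a potential V(x), and points slide downhill on V into cluster minima. The sketch below implements that static step with finite-difference gradients; the data, σ, and step sizes are illustrative, and this is not the patented time-evolution algorithm itself.

```python
import numpy as np

def potential(x, data, sigma):
    """Quantum-clustering potential V(x) (up to an additive constant), derived
    from the Parzen estimator psi(x) = sum_i exp(-|x - x_i|^2 / (2 sigma^2))."""
    r2 = np.sum((data - x) ** 2, axis=1)
    w = np.exp(-r2 / (2 * sigma ** 2))
    return (w @ r2) / (2 * sigma ** 2 * w.sum())

def descend(data, sigma=0.5, steps=200, lr=0.05, eps=1e-4):
    """Move a copy of each point downhill on V via central-difference gradients;
    points in the same basin of attraction converge to the same minimum."""
    pts = data.copy()
    d = data.shape[1]
    for _ in range(steps):
        for k, x in enumerate(pts):
            grad = np.array([
                (potential(x + eps * e, data, sigma) -
                 potential(x - eps * e, data, sigma)) / (2 * eps)
                for e in np.eye(d)])
            pts[k] = x - lr * grad
    return pts

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (10, 2)),
                  rng.normal(3, 0.1, (10, 2))])
final = descend(data)
# Points from each original blob should collapse onto a common minimum.
print(final[:10].std(axis=0), final[10:].std(axis=0))
```

The dynamical-distance picture of the patent replaces this gradient flow with quantum time evolution of coherent states, but the clustering intuition (convergence of points into basins) is the same.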
Method for discovering relationships in data by dynamic quantum clustering
Weinstein, Marvin; Horn, David
2014-10-28
Data clustering is provided according to a dynamical framework based on quantum mechanical time evolution of states corresponding to data points. To expedite computations, we can approximate the time-dependent Hamiltonian formalism by a truncated calculation within a set of Gaussian wave-functions (coherent states) centered around the original points. This allows for analytic evaluation of the time evolution of all such states, opening up the possibility of exploration of relationships among data-points through observation of varying dynamical-distances among points and convergence of points into clusters. This formalism may be further supplemented by preprocessing, such as dimensional reduction through singular value decomposition and/or feature filtering.
The Modified Hartmann Potential Effects on γ-rigid Bohr Hamiltonian
NASA Astrophysics Data System (ADS)
Suparmi, A.; Cari, C.; Nur Pratiwi, Beta
2018-04-01
In this paper, we present the solution of the Bohr Hamiltonian in the γ-rigid case for the modified Hartmann potential. The modified Hartmann potential is formed from the original Hartmann potential and consists of a β-dependent part and a θ-dependent part. Using the separation of variables, the three-dimensional Bohr Hamiltonian equation was reduced to three one-dimensional Schrodinger-like equations, which were solved analytically. The wavefunctions were obtained in closed form, while the binding energies were computed numerically. The numerical binding energy in the presence of the modified Hartmann potential is lower than the binding energy in its absence.
CHClF2 (F-22) in the earth's atmosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rasmussen, R.A.; Khalil, M.A.K.; Penkett, S.A.
1980-10-01
Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, approx. 45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, approx. 30 pptv). This difference is probably due to underestimates of F-22 emissions.
CHClF2 (F-22) in the Earth's atmosphere
NASA Astrophysics Data System (ADS)
Rasmussen, R. A.; Khalil, M. A. K.; Penkett, S. A.; Prosser, N. J. D.
1980-10-01
Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, ˜45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, ˜30 pptv). This difference is probably due to underestimates of F-22 emissions.
CHClF2 (F-22) in the earth's atmosphere
NASA Technical Reports Server (NTRS)
Rasmussen, R. A.; Khalil, M. A. K.; Penkett, S. A.; Prosser, N. J. D.
1980-01-01
Recent global measurements of CHClF2 (F-22) are reported. Originally, GC/MS techniques were used to obtain these data. Since then, significant advances using an O2-doped electron capture detector have been made in the analytical techniques, so that F-22 can be measured by EC/GC methods at ambient concentrations. The atmospheric burden of F-22 calculated from these measurements (average mixing ratio, mid-1979, approximately 45 pptv) is considerably greater than that expected from the estimates of direct industrial emissions (average mixing ratio, mid-1979, approximately 30 pptv). This difference is probably due to underestimates of F-22 emissions.
NASA Astrophysics Data System (ADS)
Yu, Bo; Ning, Chao-lie; Li, Bing
2017-03-01
A probabilistic framework for durability assessment of concrete structures in marine environments was proposed in terms of reliability and sensitivity analysis, which takes into account the uncertainties under the environmental, material, structural and executional conditions. A time-dependent probabilistic model of chloride ingress was established first to consider the variations in various governing parameters, such as the chloride concentration, chloride diffusion coefficient, and age factor. Then the Nataf transformation was adopted to transform the non-normal random variables from the original physical space into the independent standard Normal space. After that the durability limit state function and its gradient vector with respect to the original physical parameters were derived analytically, based on which the first-order reliability method was adopted to analyze the time-dependent reliability and parametric sensitivity of concrete structures in marine environments. The accuracy of the proposed method was verified by comparing with the second-order reliability method and the Monte Carlo simulation. Finally, the influences of environmental conditions, material properties, structural parameters and execution conditions on the time-dependent reliability of concrete structures in marine environments were also investigated. The proposed probabilistic framework can be implemented in the decision-making algorithm for the maintenance and repair of deteriorating concrete structures in marine environments.
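The first-order reliability step of the framework can be sketched with the classic Hasofer-Lind/Rackwitz-Fiessler iteration. The toy limit state and distribution parameters below are invented; the sketch also assumes independent normal variables, whereas the paper first applies the Nataf transformation to handle correlated, non-normal inputs.

```python
import numpy as np
from math import erf, sqrt

def hlrf_beta(g, grad_g, mu, sigma, iters=30):
    """Hasofer-Lind/Rackwitz-Fiessler iteration for the FORM reliability index
    beta of the limit state g(x) = 0, independent normal variables only."""
    u = np.zeros_like(mu)                       # start at the mean point
    for _ in range(iters):
        x = mu + sigma * u                      # back to physical space
        grad_u = grad_g(x) * sigma              # chain rule into standard space
        u = ((grad_u @ u - g(x)) / (grad_u @ grad_u)) * grad_u
    return np.linalg.norm(u)

# Toy durability margin: resistance R ~ N(5, 1) versus load S ~ N(2, 1).
g = lambda x: x[0] - x[1]
grad_g = lambda x: np.array([1.0, -1.0])
beta = hlrf_beta(g, grad_g, np.array([5.0, 2.0]), np.array([1.0, 1.0]))
pf = 0.5 * (1 - erf(beta / sqrt(2)))            # Pf = Phi(-beta)
print(beta, pf)  # beta = 3/sqrt(2) ~= 2.121 for this linear limit state
```

For a linear limit state the iteration converges in one step to the exact reliability index (μR − μS)/√(σR² + σS²), which is how the method is usually verified against Monte Carlo simulation.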
Li, Zhong; Liu, Ming-de; Ji, Shou-xiang
2016-03-01
Fourier transform infrared spectroscopy (FTIR) was used to establish a rapid method for identifying the geographic origin of Chinese wolfberry. In this work, 45 wolfberry samples from different locations in Qinghai Province were measured by FTIR. The original FTIR data matrix was pretreated with common preprocessing methods and with the wavelet transform. Compared with moving-window smoothing, standard normal variate correction, and multiplicative scatter correction, the wavelet transform proved to be an effective spectral preprocessing method. Before modeling with artificial neural networks, the spectral variables were compressed by the wavelet transform to speed up network training; the relevant parameters of the neural network model are also discussed in detail. The study shows that even when the infrared spectral data are compressed to 1/8 of their original size, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables served as inputs to a back-propagation artificial neural network (BP-ANN) model, with the geographic origin of the wolfberry as the output. A three-layer network, built with the MATLAB neural network toolbox using error back-propagation, was used to predict 10 unknown samples. The hidden layer has 5 neurons and the output layer 1 neuron; the hidden-layer transfer function is tansig and the output-layer transfer function is purelin; the training function is trainl and the weight/threshold learning function is learngdm, with net.trainParam.epochs = 1000 and net.trainParam.goal = 0.001. A recognition rate of 100% was achieved. It can be concluded that the method is well suited for rapid discrimination of the producing areas of Chinese wolfberry.
The infrared spectral analysis technology combined with the artificial neural networks is proved to be a reliable and new method for the identification of the original place of Traditional Chinese Medicine.
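The wavelet-compression step can be illustrated with a plain Haar transform: three levels of averaging keep 1/8 of the spectral variables, matching the compression ratio quoted in the abstract. The abstract does not name the wavelet used, so Haar is an assumption here, and the "spectrum" below is synthetic.

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar transform: scaled pairwise sums
    (approximation) and differences (detail), each half the input length."""
    x = x.reshape(-1, 2)
    return (x[:, 0] + x[:, 1]) / np.sqrt(2), (x[:, 0] - x[:, 1]) / np.sqrt(2)

def compress(spectrum, levels=3):
    """Keep only the approximation after `levels` Haar steps, i.e. a
    1/2**levels reduction of the spectral variables (1/8 for levels=3)."""
    a = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        a, _ = haar_level(a)
    return a

# Synthetic "spectrum": two broad absorption bands on 1024 points.
x = np.linspace(0, 1, 1024)
spectrum = (np.exp(-((x - 0.3) / 0.05) ** 2)
            + 0.5 * np.exp(-((x - 0.7) / 0.08) ** 2))
features = compress(spectrum)
print(len(spectrum), "->", len(features))  # 1024 -> 128
```

Because the transform is orthonormal and the bands are smooth, the discarded detail coefficients carry almost no energy, which is why the compressed variables can feed the BP-ANN without loss of analytical accuracy.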
Toledano, R M; Díaz-Plaza, E M; Cortes, J M; Aragón, A; Vázquez, A M; Villén, J; Muñoz-Guerra, J
2014-11-28
Boldenone (Bo), androsta-1,4-dien-17β-ol-3-one, is an anabolic androgenic steroid not clinically approved for human application. Despite this, many cases are reported every year of athletes testing positive for Bo or its main metabolite 5β-androst-1-en-17β-ol-3-one (BoM). Recently the capability of different human intestinal bacteria to produce enzymes able to modify endogenous steroids into Bo has been demonstrated. When a urinary concentration of Bo and/or BoM between 5 and 30 ng/mL is measured, a complementary analysis by gas chromatography combustion isotope ratio mass spectrometry (GC-C-IRMS) must be carried out to discriminate the endogenous or exogenous origin. In the present work, a novel analytical method that couples LC-GC by means of the TOTAD interface with C-IRMS is described. The method is based on a first RPLC separation of unacetylated steroids, followed by acetylation and automated on-line LC-GC-C-IRMS, which includes a second RPLC clean-up of acetyl Bo and BoM, isolation of the two fractions in a fraction collector and their consecutive analysis by GC-C-IRMS. The method has been applied to the analysis of urine samples fortified at 5 and 10 ng/mL, where it showed good performance. Copyright © 2014 Elsevier B.V. All rights reserved.
A literature review of empirical research on learning analytics in medical education
Saqr, Mohammed
2018-01-01
The number of publications in the field of medical education is still markedly low, despite recognition of the value of the discipline in the medical education literature, and exponential growth of publications in other fields. This necessitates raising awareness of the research methods and potential benefits of learning analytics (LA). The aim of this paper was to offer a methodological systemic review of empirical LA research in the field of medical education and a general overview of the common methods used in the field in general. Search was done in Medline database using the term “LA.” Inclusion criteria included empirical original research articles investigating LA using qualitative, quantitative, or mixed methodologies. Articles were also required to be written in English, published in a scholarly peer-reviewed journal and have a dedicated section for methods and results. A Medline search resulted in only six articles fulfilling the inclusion criteria for this review. Most of the studies collected data about learners from learning management systems or online learning resources. Analysis used mostly quantitative methods including descriptive statistics, correlation tests, and regression models in two studies. Patterns of online behavior and usage of the digital resources as well as predicting achievement was the outcome most studies investigated. Research about LA in the field of medical education is still in infancy, with more questions than answers. The early studies are encouraging and showed that patterns of online learning can be easily revealed as well as predicting students’ performance. PMID:29599699
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
40 CFR 141.25 - Analytical methods for radioactivity.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 24 2012-07-01 2012-07-01 false Analytical methods for radioactivity... § 141.25 Analytical methods for radioactivity. (a) Analysis for the following contaminants shall be conducted to determine compliance with § 141.66 (radioactivity) in accordance with the methods in the...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
7 CFR 93.13 - Analytical methods.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Analytical methods. 93.13 Section 93.13 Agriculture... PROCESSED FRUITS AND VEGETABLES Peanuts, Tree Nuts, Corn and Other Oilseeds § 93.13 Analytical methods... manuals: (a) Approved Methods of the American Association of Cereal Chemists (AACC), American Association...
Baltussen, E; Snijders, H; Janssen, H G; Sandra, P; Cramers, C A
1998-04-10
A recently developed method for the extraction of organic micropollutants from aqueous samples based on sorptive enrichment in columns packed with 100% polydimethylsiloxane (PDMS) particles was coupled on-line with HPLC analysis. The sorptive enrichment procedure originally developed for relatively nonpolar analytes was used to preconcentrate polar phenylurea herbicides from aqueous samples. PDMS extraction columns of 5, 10 and 25 cm were used to extract the herbicides from distilled, tap and river water samples. A model that allows prediction of retention and breakthrough volumes is presented. Despite the essentially apolar nature of the PDMS material, it is possible to concentrate sample volumes up to 10 ml on PDMS cartridges without losses of the most polar analyte under investigation, fenuron. For less polar analytes significantly larger sample volumes can be applied. Since standard UV detection does not provide adequate selectivity for river water samples, an electrospray (ES)-MS instrument was used to determine phenylurea herbicides in a water sample from the river Dommel. Methoxuron was present at a level of 80 ng/l. The detection limit of the current set-up, using 10 ml water samples and ES-MS detection is 10 ng/l in river water samples. Strategies for further improvement of the detection limits are identified.
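The retention/breakthrough model mentioned in the abstract is not reproduced there, but the textbook retention relation it builds on can be illustrated: the retention volume on a sorptive cartridge scales with the analyte's sorbent-water partition coefficient through the phase ratio. All numbers below are hypothetical, and V_R = V_0(1 + k) is the standard chromatographic relation, not the paper's exact model.

```python
# Hypothetical numbers; V_R = V_0 * (1 + K/beta) is generic retention theory.
K = 500.0        # assumed PDMS-water partition coefficient of the analyte
V_pdms = 0.050   # mL of PDMS in the cartridge (assumed)
V_void = 0.100   # mL interstitial (void) volume of the column (assumed)

beta = V_void / V_pdms   # phase ratio (mobile/stationary volume)
k = K / beta             # retention factor on the PDMS bed
V_R = V_void * (1 + k)   # retention volume in mL
print(round(V_R, 1))     # -> 25.1; safe sample volume grows with K
```

This captures why polar analytes such as fenuron (low K on apolar PDMS) limit the sample volume to about 10 ml, while less polar herbicides tolerate much larger volumes.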
Carbon based sample supports and matrices for laser desorption/ ionization mass spectrometry.
Rainer, Matthias; Najam-ul-Haq, Muhammad; Huck, Christian W; Vallant, Rainer M; Heigl, Nico; Hahn, Hans; Bakry, Rania; Bonn, Günther K
2007-01-01
Laser desorption/ionization mass spectrometry (LDI-MS) is a widespread and powerful technique for mass analysis allowing the soft ionization of molecules such as peptides, proteins and carbohydrates. In many applications, an energy-absorbing matrix has to be added to the analytes in order to protect them from fragmentation by the direct laser beam. LDI-MS in conjunction with a matrix is commonly referred to as matrix-assisted LDI (MALDI). One of the striking disadvantages of this method is the desorption of matrix molecules, which causes interferences originating from matrix background ions in the lower mass range (< 1000 Da). This has led to the development of a variety of carbon-based LDI sample supports, which are capable of absorbing laser light and simultaneously transferring energy to the analytes for desorption. Furthermore, carbon-containing sample supports are used as carrier materials for the specific binding and preconcentration of molecules from complex samples; their subsequent analysis by MALDI mass spectrometry enables studies in metabolomics and proteomics. Finally, a thin layer of carbon significantly improves detection sensitivity: analytes in the low-femtomole and attomole range can be detected. In the present article, these aspects are reviewed from patents in which nano-based carbon materials are comprehensively utilized.
Thevis, Mario; Schänzer, Wilhelm
2014-12-01
The number and diversity of potentially performance-enhancing substances is continuously growing, fueled by new pharmaceutical developments but also by the inventiveness and, at the same time, unscrupulousness of black-market (designer) drug producers and providers. In terms of sports drug testing, this situation necessitates reactive as well as proactive research and expansion of the analytical armamentarium to ensure timely, adequate, and comprehensive doping controls. This review summarizes literature published over the past 5 years on new drug entities, discontinued therapeutics, and 'tailored' compounds classified as doping agents according to the regulations of the World Anti-Doping Agency, with particular attention to analytical strategies enabling their detection in human blood or urine. Among these compounds, low- and high-molecular mass substances of peptidic (e.g. modified insulin-like growth factor-1, TB-500, hematide/peginesatide, growth hormone releasing peptides, AOD-9604, etc.) and non-peptidic (selective androgen receptor modulators, hypoxia-inducible factor stabilizers, siRNA, S-107 and ARM036/aladorian, etc.) as well as inorganic (cobalt) nature are considered and discussed in terms of specific requirements originating from physicochemical properties, concentration levels, metabolism, and their amenability for chromatographic-mass spectrometric or alternative detection methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Lou, Xianwen; van Dongen, Joost L J; Milroy, Lech-Gustav; Meijer, E W
2016-12-30
Ionization in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a very complicated process. It has been reported that quaternary ammonium salts show extremely strong matrix and analyte suppression effects which cannot satisfactorily be explained by charge transfer reactions. Further investigation of the reasons causing these effects can be useful to improve our understanding of the MALDI process. The dried-droplet and modified thin-layer methods were used as sample preparation methods. In the dried-droplet method, analytes were co-crystallized with matrix, whereas in the modified thin-layer method analytes were deposited on the surface of matrix crystals. Model compounds, tetrabutylammonium iodide ([N(Bu)4]I), cesium iodide (CsI), trihexylamine (THA) and polyethylene glycol 600 (PEG 600), were selected as the test analytes given their ability to generate exclusively pre-formed ions, protonated ions and metal ion adducts, respectively, in MALDI. The strong matrix suppression effect (MSE) observed using the dried-droplet method might disappear using the modified thin-layer method, which suggests that the incorporation of analytes in matrix crystals contributes to the MSE. By depositing analytes on the matrix surface instead of incorporating them in the matrix crystals, the competition for evaporation/ionization from charged matrix/analyte clusters could be weakened, resulting in reduced MSE. Further supporting evidence for this inference was found by studying the analyte suppression effect using the same two sample deposition methods. By comparing differences between the mass spectra obtained via the two sample preparation methods, we present evidence suggesting that the generation of gas-phase ions from charged matrix/analyte clusters may induce significant suppression of matrix and analyte ions. The results suggest that the generation of gas-phase ions from charged matrix/analyte clusters is an important ionization step in MALDI-MS.
Copyright © 2016 John Wiley & Sons, Ltd.
Analytical methods applied to diverse types of Brazilian propolis
2011-01-01
Propolis is a bee product composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green-colored propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resin, wax and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification and quantification of Brazilian propolis components, and their relative strengths, are discussed, as is direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied, but, more recently, anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant and analgesic activities of diverse types of Brazilian propolis have also been evaluated. The most common methods employed and overviews of their relative results are presented. PMID:21631940
Fukushima, Romualdo S; Kerley, Monty S
2011-04-27
A nongravimetric acetyl bromide lignin (ABL) method was evaluated to quantify lignin concentration in a variety of plant materials. The traditional approach to lignin quantification required extraction of lignin with acidic dioxane and its isolation from each plant sample to construct a standard curve via spectrophotometric analysis; lignin concentration was then measured in pre-extracted plant cell walls. However, this presented a methodological complexity because the extraction and isolation procedures are lengthy and tedious, particularly when many samples are involved. This work aimed to simplify lignin quantification. Our hypothesis was that any lignin, regardless of its botanical origin, could be used to construct a standard curve for the purpose of determining lignin concentration in a variety of plants. To test our hypothesis, lignins were isolated from a range of diverse plants and, along with three commercial lignins, standard curves were built and compared among them. Slopes and intercepts derived from these standard curves were close enough to allow utilization of a mean extinction coefficient in the regression equation to estimate lignin concentration in any plant, independent of its botanical origin. Lignin quantification by use of a common regression equation obviates the steps of lignin extraction, isolation, and standard curve construction, which substantially expedites the ABL method. The acetyl bromide lignin method is thus a fast, convenient analytical procedure that may routinely be used to quantify lignin.
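The spectrophotometric step that the common regression equation replaces is, at bottom, a Beer-Lambert calculation. The sketch below shows that arithmetic under assumed values; the extinction coefficient, dilution volume and sample mass are illustrative, not the study's:

```python
# Illustrative sketch of the spectrophotometric step in an acetyl bromide
# lignin (ABL) determination: lignin concentration follows Beer-Lambert,
# A = epsilon * b * c, with a common (mean) extinction coefficient standing
# in for per-species standard curves. All numbers are assumed.

def lignin_percent(absorbance: float, epsilon_l_per_g_cm: float,
                   path_cm: float, volume_l: float, sample_mass_g: float) -> float:
    """Percent lignin in the cell-wall sample from a single absorbance reading."""
    conc_g_per_l = absorbance / (epsilon_l_per_g_cm * path_cm)
    return 100.0 * conc_g_per_l * volume_l / sample_mass_g

if __name__ == "__main__":
    # e.g. A = 0.45 in a 1 cm cell, assumed mean epsilon = 17.75 L g^-1 cm^-1,
    # 10 ml final volume, 1.5 mg of cell-wall sample
    pct = lignin_percent(0.45, 17.75, 1.0, 0.010, 0.0015)
    print(f"lignin: {pct:.1f}% of cell wall")
```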
Amendola, Graziella; Pelosi, Patrizia; Attard Barbini, Danilo
2015-01-01
A simple, fast and multiresidue method for the determination of pesticide residues in baby foods of animal origin has been developed in order to check compliance with the Maximum Residue Levels (MRLs) set at a general value of 0.01 mg/kg by Commission Directive 2006/125/EC for infant foods. The main classes of organochlorine, organophosphorus and pyrethroid compounds, which are mainly fat-soluble pesticides, have been considered. The analytical procedure consists of the extraction of baby food samples with acetonitrile (ACN) followed by clean-up on a C18 solid-phase extraction column eluted with ACN. The compounds were determined by gas chromatography-triple quadrupole mass spectrometry equipped with Programmed Temperature Vaporizer (PTV) injection and a backflush system. In order to compensate for matrix effects, PTV and matrix-matched standard calibrations were used. The method has been fully validated for 57 pesticides according to Document SANCO/12571/2013. Accuracy and precision (repeatability) were studied by recoveries at two spiking levels, the Limit of Quantitation (LOQ) (0.003-0.008 mg/kg) and 10 times greater (0.03-0.08 mg/kg), and the results were in the acceptable range of 70-120% with Relative Standard Deviations (RSD) ≤20%. Selectivity, linearity, LOQ and uncertainty of measurement were also determined for all the compounds. The method has also been applied to the analysis of 18 baby food samples of animal origin, bought from the local market in Rome (Italy), and no pesticide within the scope of the method was found above the MRL or the LOQ.
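The recovery and repeatability acceptance check described above (70-120% mean recovery, RSD ≤ 20%) can be sketched as follows; the replicate concentrations are invented for illustration:

```python
# Sketch of the recovery/precision check used in pesticide-method validation
# (SANCO-style criteria: mean recovery 70-120%, RSD <= 20%). The replicate
# values are invented for illustration.
from statistics import mean, stdev

def recovery_stats(measured_mg_kg, spiked_mg_kg):
    """Return (mean recovery %, RSD %) for a set of spiked replicates."""
    recoveries = [100.0 * m / spiked_mg_kg for m in measured_mg_kg]
    avg = mean(recoveries)
    rsd = 100.0 * stdev(recoveries) / avg
    return avg, rsd

def passes_sanco(avg_recovery: float, rsd: float) -> bool:
    """Acceptance window used by SANCO-type guidance documents."""
    return 70.0 <= avg_recovery <= 120.0 and rsd <= 20.0

if __name__ == "__main__":
    replicates = [0.0046, 0.0051, 0.0048, 0.0044, 0.0049]  # mg/kg at an LOQ-level spike
    avg, rsd = recovery_stats(replicates, spiked_mg_kg=0.005)
    print(f"mean recovery {avg:.0f}%, RSD {rsd:.1f}%, pass={passes_sanco(avg, rsd)}")
```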
Finite-analytic numerical solution of heat transfer in two-dimensional cavity flow
NASA Technical Reports Server (NTRS)
Chen, C.-J.; Naseri-Neshat, H.; Ho, K.-S.
1981-01-01
Heat transfer in cavity flow is numerically analyzed by a new numerical method called the finite-analytic method. The basic idea of the finite-analytic method is the incorporation of local analytic solutions in the numerical solutions of linear or nonlinear partial differential equations. In the present investigation, the local analytic solutions for temperature, stream function, and vorticity distributions are derived. When the local analytic solution is evaluated at a given nodal point, it gives an algebraic relationship between a nodal value in a subregion and its neighboring nodal points. A system of algebraic equations is solved to provide the numerical solution of the problem. The finite-analytic method is used to solve heat transfer in the cavity flow at high Reynolds number (1000) for Prandtl numbers of 0.1, 1, and 10.
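The solution stage described above, in which each nodal value is an algebraic combination of its neighbors and the coupled system is solved iteratively, can be sketched generically. Here the finite-analytic coefficients are replaced by the simple 5-point Laplace stencil (equal weights of 1/4) purely for illustration, so this is only the shape of the method, not the paper's local analytic solution:

```python
# Generic sketch: every interior nodal value is a weighted combination of
# its neighbors, and the coupled algebraic system is swept to convergence.
# The equal 1/4 weights below are the plain Laplace stencil, standing in
# for the finite-analytic coefficients.

def solve_nodal_system(grid, tol=1e-10, max_iter=20000):
    """Iterate the nodal relations in place until successive changes are tiny."""
    n = len(grid)
    for _ in range(max_iter):
        delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = 0.25 * (grid[i-1][j] + grid[i+1][j]
                              + grid[i][j-1] + grid[i][j+1])
                delta = max(delta, abs(new - grid[i][j]))
                grid[i][j] = new
        if delta < tol:
            break
    return grid

if __name__ == "__main__":
    n = 11
    # cavity-like boundary condition: hot lid (T = 1) on top, T = 0 elsewhere
    T = [[0.0] * n for _ in range(n)]
    T[0] = [1.0] * n
    solve_nodal_system(T)
    print(f"centre temperature ~ {T[n // 2][n // 2]:.3f}")
```

By the four-fold rotation symmetry of the square, the converged centre value is exactly 1/4 of the lid temperature, which is a handy sanity check for any such nodal solver.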
Selected Analytical Methods for Environmental Remediation and Recovery (SAM) - Home
The SAM Home page provides access to all information provided in EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM), and includes a query function allowing users to search methods by analyte, sample type and instrumentation.
40 CFR 141.89 - Analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-16
... Currently Approved Total Coliform Analytical Methods AGENCY: Environmental Protection Agency (EPA). ACTION... of currently approved Total Coliform Rule (TCR) analytical methods. At these meetings, stakeholders will be given an opportunity to discuss potential elements of a method re-evaluation study, such as...
Fingerprint Analysis: Moving Toward Multiattribute Determination via Individual Markers.
Brunelle, Erica; Huynh, Crystal; Alin, Eden; Eldridge, Morgan; Le, Anh Minh; Halámková, Lenka; Halámek, Jan
2018-01-02
Forensic science will be forever revolutionized if law enforcement can identify personal attributes of a person of interest solely from a fingerprint. For the past 2 years, the goal of our group has been to establish a way to identify originator attributes, specifically biological sex, from a single analyte. To date, an enzymatic assay and two chemical assays have been developed for the analysis of multiple analytes. In this manuscript, two additional assays have been developed. This time, however, the assays utilize only one amino acid each. The enzymatic assay targets alanine and employs alanine transaminase (ALT), pyruvate oxidase (POx), and horseradish peroxidase (HRP). The other, a chemical assay, is known as the Sakaguchi test and targets arginine. It is important to note that alanine has a significantly higher concentration than arginine in the fingerprint content of both males and females. Both assays proved to be capable of accurately differentiating between male and female fingerprints, regardless of their respective average concentration. The ability to target a single analyte will transform forensic science as each originator attribute can be correlated to a different analyte. This would then lead to the possibility of identifying multiple attributes from a single fingerprint sample. Ultimately, this would allow for a profile of a person of interest to be established without the need for time-consuming lab processes.
NASA Astrophysics Data System (ADS)
Maspero, A.
2018-05-01
For the defocusing nonlinear Schrödinger equation on the circle, we construct a Birkhoff map Φ which is tame majorant analytic in a neighborhood of the origin. Roughly speaking, majorant analytic means that replacing the coefficients of the Taylor expansion of Φ by their absolute values gives rise to a series (the majorant map) which is uniformly and absolutely convergent, at least in a small neighborhood. Tame majorant analytic means that the majorant map of Φ fulfills tame estimates. The proof is based on a new tame version of the Kuksin–Perelman theorem (2010 Discrete Contin. Dyn. Syst. 1 1–24), which is an infinite dimensional Vey type theorem.
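Written out schematically, the majorant construction the abstract describes is the coefficient-wise absolute value of the Taylor expansion; the tame-estimate display below is an assumed generic shape, not the paper's exact statement:

```latex
% Taylor expansion of the Birkhoff map at the origin and its majorant map:
\Phi(z) \;=\; \sum_{\alpha} \Phi_{\alpha}\, z^{\alpha}
\qquad\longmapsto\qquad
\underline{\Phi}(z) \;:=\; \sum_{\alpha} \bigl|\Phi_{\alpha}\bigr|\, z^{\alpha}.
% \Phi is majorant analytic near 0 if \underline{\Phi} converges absolutely
% and uniformly on some small ball B(0,\rho); "tame" asks in addition for
% estimates in a scale of norms of the schematic form
\bigl\|\underline{\Phi}(z)\bigr\|_{s} \;\le\; C_{s}\,\|z\|_{s}
\qquad \text{for } \|z\|_{s_{0}} \le \rho,
% with s_0 a fixed low regularity index.
```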
A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics
NASA Technical Reports Server (NTRS)
Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan
2013-01-01
In the era of petascale computing, more scientific applications are being deployed on leadership-scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not been shown to be adequate for many mission-critical applications, particularly in the data post-processing stage. For example, some scientific applications generate datasets composed of a vast number of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Including such dimensional knowledge in the data organization can benefit the efficiency of data post-processing, but it is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high-performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before writing them to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR enables efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with GEOS-5, a critical climate modeling application, experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.
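The core reorganization idea, regrouping many small per-timestep elements so that queries along the time dimension touch contiguous data, can be sketched in a few lines. The variable names and values are invented, and this is only a toy stand-in for STAR's actual on-disk layout:

```python
# Toy sketch of temporal aggregation: raw output arrives as one record per
# timestep holding many small variables; regrouping per variable before
# writing makes time-dimension queries contiguous reads instead of one
# small read per timestep. Names and values are invented for illustration.

def aggregate_by_variable(timestep_records):
    """[{var: value, ...} per step] -> {var: [value over time]}."""
    series = {}
    for record in timestep_records:
        for var, value in record.items():
            series.setdefault(var, []).append(value)
    return series

if __name__ == "__main__":
    steps = [{"T": 288.1, "P": 1012.0},
             {"T": 288.3, "P": 1011.4},
             {"T": 288.0, "P": 1010.9}]
    series = aggregate_by_variable(steps)
    print(series["T"])  # whole time series of one variable, contiguous
```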
Cuadros-Rodríguez, Luis; Ruiz-Samblás, Cristina; Valverde-Som, Lucia; Pérez-Castaño, Estefanía; González-Casado, Antonio
2016-02-25
Fingerprinting methods describe a variety of analytical methods that provide analytical signals related to the composition of foodstuffs in a non-selective way, such as by collecting a spectrum or a chromatogram. Mathematical processing of the information in such fingerprints may allow the characterisation and/or authentication of foodstuffs. In this context, the particular meaning of 'fingerprinting', in conjunction with 'profiling', is different from the original meanings used in metabolomics. This fact has produced some confusion in the use of these terms in analytical papers: researchers coming from the metabolomics field may use 'profiling' or 'fingerprinting' in a different way from researchers devoted to food science. The arrival of an eclectic discipline named 'foodomics' has not been enough to allay this terminological problem, since authors keep using the terms with both meanings. Thus, a first goal of this tutorial is to clarify the difference between the two terms. In addition, the chemical approaches for food authentication, i.e., chemical markers, component profiling and instrumental fingerprinting, are described. A new term, 'food identitation', has been introduced in order to complete the life cycle of the chemical-based food authentication process. Chromatographic fingerprinting is explained in detail, and some strategies that can be applied are clarified and discussed. In particular, the strategies for chromatographic signal acquisition and chromatographic data handling are unified in a single framework. Finally, an overview of the applications of chromatographic (GC and LC) fingerprints in food authentication using different chemometric techniques is included. Copyright © 2016 Elsevier B.V. All rights reserved.
Faigen, Zachary; Deyneka, Lana; Ising, Amy; Neill, Daniel; Conway, Mike; Fairchild, Geoffrey; Gunn, Julia; Swenson, David; Painter, Ian; Johnson, Lauren; Kiley, Chris; Streichert, Laura; Burkom, Howard
2015-01-01
We document a funded effort to bridge the gap between constrained scientific challenges of public health surveillance and methodologies from academia and industry. Component tasks are the collection of epidemiologists' use case problems, multidisciplinary consultancies to refine them, and dissemination of problem requirements and shareable datasets. We describe an initial use case and consultancy as a concrete example and challenge to developers. Supported by the Defense Threat Reduction Agency Biosurveillance Ecosystem project, the International Society for Disease Surveillance formed an advisory group to select tractable use case problems and convene inter-disciplinary consultancies to translate analytic needs into well-defined problems and to promote development of applicable solution methods. The initial consultancy's focus was a problem originated by the North Carolina Department of Health and its NC DETECT surveillance system: Derive a method for detection of patient record clusters worthy of follow-up based on free-text chief complaints and without syndromic classification. Direct communication between public health problem owners and analytic developers was informative to both groups and constructive for the solution development process. The consultancy achieved refinement of the asyndromic detection challenge and of solution requirements. Participants summarized and evaluated solution approaches and discussed dissemination and collaboration strategies. A solution meeting the specification of the use case described above could improve human monitoring efficiency with expedited warning of events requiring follow-up, including otherwise overlooked events with no syndromic indicators. This approach can remove obstacles to collaboration with efficient, minimal data-sharing and without costly overhead.
Connatser, Raynella M.; Lewis, Sr., Samuel Arthur; Keiser, James R.; ...
2014-10-03
Integrating biofuels with conventional petroleum products requires improvements in processing to increase blendability with existing fuels. This work demonstrates analysis techniques for more hydrophilic bio-oil liquids that give improved quantitative and qualitative description of the total acid content and organic acid profiles. To protect infrastructure from damage and reduce the cost associated with upgrading, accurate determination of acid content and representative chemical compound analysis are central imperatives to assessing both the corrosivity and the progress toward removing oxygen and acidity in processed biomass liquids. Established techniques form an ample basis for bio-liquids evaluation. However, early in the upgrading process, the unique physical phases and varied hydrophilicity of many pyrolysis liquids can render analytical methods originally designed for use in petroleum-derived oils inadequate. In this work, the water solubility of the organic acids present in bio-oils is exploited in a novel extraction and titration technique followed by analysis on the water-based capillary electrophoresis (CE) platform. The modification of ASTM D664, the standard for Total Acid Number (TAN), to include aqueous carrier solvents improves the utility of that approach for quantifying acid content in hydrophilic bio-oils. Termed AMTAN (modified Total Acid Number), this technique offers 1.2% relative standard deviation and dynamic range comparable to the conventional ASTM method. Furthermore, the results of corrosion product evaluations using several different sources of real bio-oil are discussed in the context of the unique AMTAN and CE analytical approaches developed to facilitate those measurements.
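The acid-number arithmetic shared by ASTM D664 and an AMTAN-style variant, milligrams of KOH required to neutralize the acids in one gram of sample, looks like this; the titration volumes, titrant strength and sample mass below are invented for illustration:

```python
# Sketch of a total-acid-number calculation of the kind underlying
# ASTM D664 and its aqueous AMTAN modification: acid number =
# (titrant - blank) * molarity * MW_KOH / sample mass. Values are invented.

MW_KOH = 56.106  # g/mol

def acid_number_mg_koh_per_g(titrant_ml: float, titrant_molarity: float,
                             blank_ml: float, sample_mass_g: float) -> float:
    """Milligrams of KOH needed to neutralize the acids in 1 g of sample."""
    return (titrant_ml - blank_ml) * titrant_molarity * MW_KOH / sample_mass_g

if __name__ == "__main__":
    tan = acid_number_mg_koh_per_g(titrant_ml=8.40, titrant_molarity=0.10,
                                   blank_ml=0.15, sample_mass_g=0.50)
    print(f"AMTAN-style acid number: {tan:.1f} mg KOH / g")
```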
NASA Astrophysics Data System (ADS)
Lu, Zheng; Huang, Biao; Zhang, Qi; Lu, Xilin
2018-05-01
Eddy-current tuned mass dampers (EC-TMDs) are non-contacting passive control devices and are developed on the basis of conventional tuned mass dampers. They comprise a solid mass, a stiffness element, and a damping element, wherein the damping mechanism originates from eddy currents. By relative motion between a non-magnetic conductive metal and a permanent magnet in a dynamic system, a time-varying magnetic field is induced in the conductor, thereby generating eddy currents. The eddy currents induce a magnetic field with opposite polarity, causing repulsive forces, i.e., damping forces. This technology can overcome the drawbacks of conventional tuned mass dampers, such as limited service life, deterioration of mechanical properties, and undesired additional stiffness. The experimental and analytical study of this system installed on a multi-degree-of-freedom structure is presented in this paper. A series of shaking table tests were conducted on a five-story steel-frame model with/without an EC-TMD to evaluate the effectiveness and performance of the EC-TMD in suppressing the vibration of the model under seismic excitations. The experimental results show that the EC-TMD can effectively reduce the displacement response, acceleration response, interstory drift ratio, and maximum strain of the columns under different earthquake excitations. Moreover, an analytical method was proposed on the basis of electromagnetic and structural dynamic theories. A comparison between the test and simulation results shows that the simulation method can be used to estimate the response of structures with an EC-TMD under earthquake excitations with acceptable accuracy.
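A rough sketch of the damping element described above: for a conductor moving through a static field, the eddy-current force is approximately linear-viscous, F = -cv, and a common order-of-magnitude estimate for the coefficient is c ≈ σδB²S. The numbers below are illustrative, not the test-model parameters:

```python
# Order-of-magnitude sketch of eddy-current damping: c ~ sigma * delta *
# B^2 * S for a conductive plate of conductivity sigma and thickness delta
# moving through field B over pole area S. All numbers are illustrative.

def eddy_damping_coefficient(sigma: float, delta: float, b_field: float,
                             pole_area: float) -> float:
    """Approximate linear-viscous damping coefficient, N s/m."""
    return sigma * delta * b_field ** 2 * pole_area

if __name__ == "__main__":
    c = eddy_damping_coefficient(sigma=3.5e7,    # aluminium conductivity, S/m
                                 delta=0.005,    # 5 mm plate
                                 b_field=0.5,    # tesla
                                 pole_area=0.01) # m^2
    print(f"c ~ {c:.0f} N s/m; damping force at 0.2 m/s ~ {c * 0.2:.0f} N")
```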
Fast 2D NMR Spectroscopy for In vivo Monitoring of Bacterial Metabolism in Complex Mixtures.
Dass, Rupashree; Grudziąż, Katarzyna; Ishikawa, Takao; Nowakowski, Michał; Dębowska, Renata; Kazimierczuk, Krzysztof
2017-01-01
The biological toolbox is full of techniques originally developed for analytical chemistry. Among them, spectroscopic experiments are a very important source of atomic-level structural information. Nuclear magnetic resonance (NMR) spectroscopy, although very advanced in chemical and biophysical applications, has been used in microbiology only in a limited manner. So far, mostly one-dimensional ¹H experiments have been reported in studies of bacterial metabolism monitored in situ. However, low spectral resolution and limited information on molecular topology limit the usability of these methods. These problems are particularly evident in the case of complex mixtures, where spectral peaks originating from many compounds overlap and make the interpretation of changes in a spectrum difficult or even impossible. Often a suite of two-dimensional (2D) NMR experiments is used to improve resolution and extract structural information from internuclear correlations. However, for a dynamically changing sample, like a bacterial culture, the time-consuming sampling of the so-called indirect time dimensions in 2D experiments is inefficient. Here, we propose a technique known from analytical chemistry and the structural biology of proteins, i.e., time-resolved non-uniform sampling. The method allows the application of 2D (and multi-D) experiments to quickly varying samples. The indirect dimension is sparsely sampled, resulting in a significant reduction of experimental time. Compared to the conventional approach based on a series of 1D measurements, this method provides extraordinary resolution and is a real-time approach to process monitoring. In this study, we demonstrate the usability of the method on a sample of Escherichia coli culture affected by ampicillin and on a sample of Propionibacterium acnes, an acne-causing bacterium, mixed with a dose of face tonic, a complicated, multi-component mixture yielding a complex NMR spectrum.
Through our experiments we determine the exact concentration and time at which the anti-bacterial agents affect bacterial metabolism. We show that it is worthwhile extending the NMR toolbox for microbiology with techniques such as 2D z-TOCSY, for total "fingerprinting" of a sample, and 2D ¹³C-edited HSQC, to monitor changes in the concentrations of metabolites in selected metabolic pathways.
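The time saving from sparse sampling of the indirect dimension comes simply from acquiring only a subset of the t₁ increments. A toy schedule generator follows; the sizes and the uniform-random scheme are illustrative (practical NUS schedules are usually weighted, e.g. toward early increments):

```python
# Toy sketch of non-uniform sampling (NUS) in the indirect dimension of a
# 2D NMR experiment: acquire a fixed random subset of the t1 increments,
# cutting experiment time proportionally. Sizes are illustrative.
import random

def nus_schedule(n_increments: int, fraction: float, seed: int = 7):
    """Sorted list of t1 increment indices to acquire."""
    rng = random.Random(seed)
    n_keep = max(1, round(n_increments * fraction))
    return sorted(rng.sample(range(n_increments), n_keep))

if __name__ == "__main__":
    sched = nus_schedule(256, 0.25)
    print(f"acquire {len(sched)} of 256 increments -> ~{len(sched)/256:.0%} of full time")
```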
Scheduled Relaxation Jacobi method: Improvements and applications
NASA Astrophysics Data System (ADS)
Adsuara, J. E.; Cordero-Carrión, I.; Cerdá-Durán, P.; Aloy, M. A.
2016-09-01
Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential system of equations from which they result becomes notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to 2¹⁵ points per dimension, allowing for acceleration factors larger than several hundred with respect to the Jacobi method for typical resolutions and, in some high-resolution cases, close to 1000. Most of the success in finding SRJ optimal schemes with more than 10 levels is based on an analytic reduction of the complexity of the previously mentioned system of equations. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.
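A minimal sketch of the scheduled-relaxation idea: weighted-Jacobi sweeps whose relaxation factor cycles through a prescribed schedule, here applied to a 1-D Poisson problem. The two-level schedule below is an illustrative choice, not an optimal parameter set from the paper:

```python
# Minimal sketch of Scheduled Relaxation Jacobi (SRJ) for u'' = f on [0, 1]
# with u(0) = u(1) = 0: plain weighted-Jacobi sweeps, but the relaxation
# factor omega cycles through a schedule mixing over- and under-relaxation.
# The schedule [1.9, 0.6] is illustrative, not an optimized SRJ scheme.

def srj_sweeps(u, f, h, schedule, cycles):
    n = len(u)
    for _ in range(cycles):
        for omega in schedule:
            jacobi = [0.0] * n
            for i in range(1, n - 1):
                # one unweighted Jacobi update of the 3-point stencil
                jacobi[i] = 0.5 * (u[i-1] + u[i+1] - h * h * f[i])
            for i in range(1, n - 1):
                u[i] = (1.0 - omega) * u[i] + omega * jacobi[i]
    return u

if __name__ == "__main__":
    n, h = 33, 1.0 / 32
    f = [2.0] * n                 # u'' = 2, whose exact solution is u(x) = x(x - 1)
    u = srj_sweeps([0.0] * n, f, h, schedule=[1.9, 0.6], cycles=400)
    print(f"u(0.5) = {u[16]:.4f}  (exact {0.5 * (0.5 - 1.0):.4f})")
```

The over-relaxed step (omega > 1) alone would diverge on high-frequency error modes; pairing it with an under-relaxed step keeps the per-cycle amplification below one for all modes while damping smooth error faster than plain Jacobi, which is the essence of the scheduled approach.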
NASA Astrophysics Data System (ADS)
Sajjadi, S. Maryam; Abdollahi, Hamid; Rahmanian, Reza; Bagheri, Leila
2016-03-01
A rapid, simple and inexpensive method using fluorescence spectroscopy coupled with multi-way methods for the determination of aflatoxins B1 and B2 in peanuts has been developed. In this method, aflatoxins are extracted with a mixture of water and methanol (90:10) and then monitored by fluorescence spectroscopy, producing excitation-emission matrices (EEMs). Although the combination of EEMs and multi-way methods is commonly used to determine analytes in complex chemical systems with unknown interference(s), the rank-overlap problem in excitation and emission profiles may restrict the application of this strategy. If there is rank overlap in one mode, several three-way algorithms, such as PARAFAC under some constraints, can resolve this kind of data successfully. However, the analysis of EEM data is impossible when some species show rank overlap in both modes, because the information in the data matrix is then equivalent to zero-order data for those species, which is the case in our study. Aflatoxins B1 and B2 have the same shape of spectral profiles in both excitation and emission modes, and we therefore propose creating third-order data for each sample using the solvent as a new, additional selectivity mode. This third-order data set is, in turn, converted to second-order data by augmentation, which restores the second-order advantage of the original EEMs. The three-way data set is constructed by stacking the augmented data along the third way, and then analyzed by two powerful second-order calibration methods (BLLS-RBL and PARAFAC) to quantify the analytes in four kinds of peanut samples. The results of both methods are in good agreement and reasonable recoveries were obtained.
A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.
Yang, Harry; Zhang, Jianchun
2015-01-01
The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics, such as accuracy and precision, are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternative "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow method validity to be assessed while taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority are not designed to protect against the risk of accepting unsuitable methods, and thus can leave the consumer's risk uncontrolled. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical method is biased, and more conservative when it is unbiased. Therefore, the choice between the generalized pivotal quantity and β-content (0.9) methods for an analytical method validation depends on the accuracy of the analytical method.
It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current methods. Analytical methods are often used to ensure the safety, efficacy, and quality of medicinal products. According to government regulations and regulatory guidelines, these methods need to be validated through well-designed studies to minimize the risk of accepting unsuitable methods. This article describes a novel statistical test for analytical method validation, which provides better protection against the risk of accepting unsuitable analytical methods. © PDA, Inc. 2015.
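The total-error notion of fitness for purpose, a high fraction of future results falling within ±λ of the true value, can be checked by direct Monte Carlo for a method with known bias and precision. The bias, precision and acceptance settings below are illustrative, and this simulation is only the criterion itself, not the paper's generalized pivotal quantity test:

```python
# Monte Carlo sketch of the "fit for purpose" total-error criterion: a
# method is acceptable if a high fraction of future results falls within
# +/- lambda of the true value. Bias, precision, and the acceptance
# settings are illustrative, not the paper's.
import random

def coverage_probability(bias: float, sd: float, lam: float,
                         n_sim: int = 200_000, seed: int = 1) -> float:
    """Estimate P(|bias + N(0, sd)| <= lam) by simulation."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_sim) if abs(bias + rng.gauss(0.0, sd)) <= lam)
    return hits / n_sim

if __name__ == "__main__":
    # e.g. accept if >= 90% of future results lie within 15% of the true value
    p_good = coverage_probability(bias=2.0, sd=5.0, lam=15.0)
    p_bad = coverage_probability(bias=10.0, sd=8.0, lam=15.0)
    print(f"method A coverage: {p_good:.3f} -> accept = {p_good >= 0.90}")
    print(f"method B coverage: {p_bad:.3f} -> accept = {p_bad >= 0.90}")
```

The example illustrates why accuracy and precision must be judged jointly: method B's bias and spread are each individually unremarkable, yet together they push too many results outside the ±λ window.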
ERIC Educational Resources Information Center
Taub, Edward
2012-01-01
Constraint-induced (CI) therapy is a term given to a family of efficacious neurorehabilitation treatments including to date: upper extremity CI movement therapy, lower extremity CI movement therapy, pediatric CI therapy, and CI aphasia therapy. The purpose of this article is to outline the behavior analysis origins of CI therapy and the ways in…