NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Walker, Bruce E.
2014-01-01
An Ultrasonic Configurable Fan Artificial Noise Source (UCFANS) was designed, built, and tested in support of the NASA Langley Research Center's 14x22 wind tunnel test of the Hybrid Wing Body (HWB) full 3-D 5.8% scale model. The UCFANS is a 5.8% rapid prototype scale model of a high-bypass turbofan engine that can generate the tonal signature of proposed engines using artificial sources (no flow). The purpose of the program was to provide an estimate of the acoustic shielding benefits possible from mounting an engine on the upper surface of a wing; a flat plate model was used as the shielding surface. Simple analytical simulations were used to preview the radiation patterns - Fresnel knife-edge diffraction was coupled with a dense phased array of point sources to compute shielded and unshielded sound pressure distributions for potential test geometries and excitation modes. Contour plots of sound pressure levels, and integrated power levels, from nacelle alone and shielded configurations for both the experimental measurements and the analytical predictions are presented in this paper.
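The simulation approach described (Fresnel knife-edge diffraction coupled with a dense array of point sources) can be sketched compactly. The snippet below is an illustration only, not the authors' code; the tone frequency, geometry, and source spacing are assumed values.

```python
import numpy as np
from scipy.special import fresnel

def knife_edge_factor(v):
    """Complex Fresnel knife-edge diffraction factor E/E0 for the
    dimensionless parameter v (v > 0 means the receiver is shadowed)."""
    S, C = fresnel(v)  # scipy returns (S(v), C(v))
    return (1 + 1j) / 2 * ((0.5 - C) - 1j * (0.5 - S))

def diffraction_parameter(h, d1, d2, lam):
    """v for edge clearance h (m), source-to-edge and edge-to-receiver
    distances d1 and d2 (m), and wavelength lam (m)."""
    return h * np.sqrt(2 * (d1 + d2) / (lam * d1 * d2))

# Assumed geometry: 25 point sources mimicking one fan tone at 8 kHz,
# radiating past a flat-plate edge to an observer in the shadow zone.
lam = 343.0 / 8000.0
xs = np.linspace(-0.05, 0.05, 25)
p = sum(knife_edge_factor(diffraction_parameter(0.3, 1.0 + x, 3.0, lam))
        * np.exp(-2j * np.pi * (4.0 + x) / lam) / (4.0 + x)
        for x in xs)
print(20 * np.log10(abs(p)))  # shielded level, dB re unit-source free field
```

Summing the complex factors over the source array reproduces the interference structure of the shielded field; the unshielded case is recovered by setting the diffraction factor to 1.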
An Open-source Community Web Site To Support Ground-Water Model Testing
NASA Astrophysics Data System (ADS)
Kraemer, S. R.; Bakker, M.; Craig, J. R.
2007-12-01
A community wiki web site has been created as a resource to support ground-water model development and testing. The Groundwater Gourmet wiki is a repository for user-supplied analytical and numerical recipes, how-tos, and examples. Members are encouraged to submit analytical solutions, including source code and documentation. A diversity of code snippets is sought in a variety of languages, including Fortran, C, C++, Matlab, and Python. In the spirit of a wiki, all contributions may be edited and altered by other users, and open-source licensing is promoted. Community-accepted contributions are graduated into the library of analytic solutions and organized under either the Strack (Groundwater Mechanics, 1989) or Bruggeman (Analytical Solutions of Geohydrological Problems, 1999) classification. The examples section of the wiki is meant to include laboratory experiments (e.g., Hele-Shaw), classical benchmark problems (e.g., the Henry problem), and controlled field experiments (e.g., the Borden landfill and Cape Cod tracer tests). Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The treatment of dimensional symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded, and a statistical correlation program was installed as a utility. The theory is demonstrated with small classical structures.
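The abstract does not name the statistic used, but a common present-day choice for comparing analytical and experimental mode shapes reduced to common degrees of freedom is the modal assurance criterion (MAC); a minimal sketch:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion matrix between analytical mode shapes
    phi_a (n_dof x m_a) and experimental mode shapes phi_e (n_dof x m_e),
    both sampled at the same measurement degrees of freedom."""
    num = np.abs(phi_a.T @ phi_e) ** 2
    den = np.outer(np.sum(phi_a ** 2, axis=0), np.sum(phi_e ** 2, axis=0))
    return num / den  # entries near 1 flag well-correlated mode pairs
```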
System and method for laser assisted sample transfer to solution for chemical analysis
Van Berkel, Gary J; Kertesz, Vilmos
2014-01-28
A system and method for laser desorption of an analyte from a specimen and capture of the analyte in a suspended solvent to form a testing solution are described. The method can include providing a specimen supported by a desorption region of a specimen stage and desorbing an analyte from a target site of the specimen with a laser beam centered at a radiation wavelength (λ). The desorption region is transparent to the radiation wavelength (λ), and the sampling probe and a laser source emitting the laser beam are on opposite sides of a primary surface of the specimen stage. The system can also be arranged so that the laser source and the sampling probe are on the same side of a primary surface of the specimen stage. The testing solution can then be analyzed using an analytical instrument or undergo further processing.
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...
Army Digital Test Requirements Analytic Report.
1983-07-01
Research and Development Technical Report CECOM-80-0520-1: Army Digital Test Requirements Analytic Report. Table of contents (continued): 6.0 Data Review; 6.1 Comprehensive Review; 6.2 Review Conclusions; 7.0 Special Research; 8.0 ... Program steps: identification of information sources; data collection; data organization; data review; special research; technology analysis; test ...
Evaluation of seismic spatial interaction effects through an impact testing program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, B.D.; Driesen, G.E.
The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects are analytically difficult to assess. Analytical solutions to impact problems are conservative and available only for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto instrumented targets. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.
Abdel Massih, M; Planchon, V; Polet, M; Dierick, K; Mahillon, J
2016-02-01
Based on the results of 19 food microbiology proficiency testing (PT) schemes, this study aimed to assess laboratory performance, to highlight the main sources of unsatisfactory analytical results, and to suggest areas of improvement. The 2009-2015 results of REQUASUD and IPH PT, involving a total of 48 laboratories, were analysed. On average, the laboratories failed to detect or enumerate foodborne pathogens in 3·0% of the tests. Thanks to a close collaboration with the PT participants, the causes of outliers could be identified in 74% of the cases. The main causes of erroneous PT results were pre-analytical (handling of the samples, timing of analysis), analytical (unsuitable methods, confusion of samples, errors in colony counting or confirmation) or post-analytical mistakes (calculation and encoding of results). PT schemes are a privileged observation post from which to highlight analytical problems that would otherwise remain unnoticed. In this perspective, this comprehensive study of PT results provides insight into the sources of systematic errors encountered during the analyses. This study draws the attention of laboratories to the main causes of analytical errors and, for educational purposes, suggests practical solutions to avoid them. The observations support the hypothesis that regular participation in PT, when followed by feedback and appropriate corrective actions, can play a key role in quality improvement and provide more confidence in laboratory testing results.
Hens, Koen; Berth, Mario; Armbruster, Dave; Westgard, Sten
2014-07-01
Six Sigma metrics were used to assess the analytical quality of automated clinical chemistry and immunoassay tests in a large Belgian clinical laboratory and to explore the importance of the source used for estimation of the allowable total error. Clinical laboratories are continually challenged to maintain analytical quality, but it is difficult to measure assay quality objectively and quantitatively. The Sigma metric is a single number that estimates quality from the traditional parameters used in the clinical laboratory: allowable total error (TEa), precision, and bias. In this study, Sigma metrics were calculated for 41 clinical chemistry assays for serum and urine on five ARCHITECT c16000 chemistry analyzers. Controls at two analyte concentrations were tested, and Sigma metrics were calculated using three different TEa targets (Ricos biological variability, CLIA, and RiliBÄK). Sigma metrics varied with analyte concentration and TEa target, and differed among analyzers. Sigma values identified those assays that are analytically robust and require minimal quality control rules, as well as those that exhibit more variability and require more complex rules. Analyzer-to-analyzer variability was assessed on the basis of Sigma metrics. Six Sigma is a more efficient way to control quality, but the lack of TEa targets for many analytes, and the sometimes inconsistent TEa targets from different sources, are important variables for the interpretation and application of Sigma metrics in a routine clinical laboratory. Sigma metrics are a valuable means of comparing the analytical quality of two or more analyzers to ensure the comparability of patient test results.
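The Sigma metric itself reduces to a one-line computation once TEa, bias, and precision are expressed on a common (percent) scale; a minimal sketch with invented example numbers:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Six Sigma metric from allowable total error (TEa), bias, and
    imprecision (CV), all in percent at the same analyte concentration."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# hypothetical glucose control: TEa 10% (CLIA), bias 1.5%, CV 1.8%
print(sigma_metric(10.0, 1.5, 1.8))  # ~4.7 sigma
```

Assays scoring above 6 sigma tolerate simple quality control rules, while those below 3 sigma call for the most stringent multi-rule QC.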
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Liu, Chen-Wuing; Liang, Ching-Ping; Lai, Keng-Hsin
2012-08-01
Multi-species advective-dispersive transport equations sequentially coupled with first-order decay reactions are widely used to describe the transport and fate of decay-chain contaminants such as radionuclides, chlorinated solvents, and nitrogen species. Although researchers have presented various methods for analytically solving this transport equation system, the currently available solutions are mostly limited to an infinite or a semi-infinite domain. A generalized analytical solution for the coupled multi-species transport problem in a finite domain with an arbitrary time-dependent source boundary is not available in the published literature. In this study, we first derive generalized analytical solutions for this transport problem in a finite domain involving an arbitrary number of species subject to an arbitrary time-dependent source boundary. We then use these generalized solutions to obtain explicit analytical solutions for a special-case transport scenario involving an exponentially decaying Bateman-type time-dependent source boundary. We verify the derived special-case solutions against a previously published coupled 4-species transport solution and against a corresponding numerical solution for coupled 10-species transport. Finally, we compare the new analytical solutions for a finite domain against published analytical solutions for a semi-infinite domain to illustrate the effect of the exit boundary condition on coupled multi-species transport with an exponentially decaying source boundary. The results show noticeable discrepancies between the breakthrough curves of all species in the immediate vicinity of the exit boundary obtained from the finite-domain and semi-infinite-domain solutions under dispersion-dominated conditions.
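For orientation, the sequentially coupled system referred to here is commonly written as below (a sketch; the placement of retardation factors and branching yields varies between formulations):

```latex
R_i \frac{\partial c_i}{\partial t}
  = D \frac{\partial^2 c_i}{\partial x^2}
  - v \frac{\partial c_i}{\partial x}
  - k_i c_i + k_{i-1} c_{i-1},
  \qquad i = 1,\dots,N, \quad k_0 \equiv 0, \\[4pt]
c_1(0,t) = c_{1,0}\, e^{-\gamma t},
  \qquad c_i(0,t) = 0 \quad (i > 1),
```

where the second line is the exponentially decaying (Bateman-type) inlet boundary treated in the special case above.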
How Reliable Is Laboratory Testing?
... laboratory testing. (See Who's Who in the Lab.) Post-analytic activities: after the test is completed, the result must be delivered in ...
Nationwide Multicenter Reference Interval Study for 28 Common Biochemical Analytes in China.
Xia, Liangyu; Chen, Ming; Liu, Min; Tao, Zhihua; Li, Shijun; Wang, Liang; Cheng, Xinqi; Qin, Xuzhen; Han, Jianhua; Li, Pengchang; Hou, Li'an; Yu, Songlin; Ichihara, Kiyoshi; Qiu, Ling
2016-03-01
A nationwide multicenter study was conducted in China to explore sources of variation of reference values and to establish reference intervals for 28 common biochemical analytes, as part of the International Federation of Clinical Chemistry and Laboratory Medicine, Committee on Reference Intervals and Decision Limits (IFCC/C-RIDL) global study on reference values. A total of 3148 apparently healthy volunteers were recruited in 6 cities covering a wide area of China. Blood samples were tested in 2 central laboratories using Beckman Coulter AU5800 chemistry analyzers. Certified reference materials and a value-assigned serum panel were used for standardization of test results. Multiple regression analysis was performed to explore sources of variation. The need for partition of reference intervals was evaluated based on 3-level nested ANOVA. After secondary exclusion using the latent abnormal values exclusion method, reference intervals were derived by a parametric method using the modified Box-Cox formula. Test results of 20 analytes were made traceable to reference measurement procedures. By the ANOVA, significant sex-related and age-related differences were observed in 12 analytes each. A small regional difference was observed in the results for albumin, glucose, and sodium. Multiple regression analysis revealed BMI-related changes in the results of 9 analytes for men and 6 for women. Reference intervals of 28 analytes were computed, with 17 analytes partitioned by sex and/or age. In conclusion, reference intervals of 28 common chemistry analytes applicable to the Chinese Han population were established using the latest methodology. Reference intervals of the 20 analytes traceable to reference measurement procedures can be used as common reference intervals, whereas the others can be used as assay system-specific reference intervals in China.
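The parametric derivation step (Box-Cox transformation, Gaussian limits, back-transformation) can be sketched generically as follows; this is not the IFCC/C-RIDL code, and it omits the latent abnormal values exclusion stage that precedes it:

```python
import numpy as np
from scipy import stats

def reference_interval(values, coverage=0.95):
    """Parametric reference interval via a Box-Cox transform: transform to
    near-normality, take mean +/- z*SD, back-transform to original units.
    Assumes strictly positive values; sketch only."""
    x, lam = stats.boxcox(np.asarray(values, dtype=float))
    z = stats.norm.ppf(0.5 + coverage / 2)            # 1.96 for 95%
    lo, hi = x.mean() - z * x.std(ddof=1), x.mean() + z * x.std(ddof=1)
    def inv(y):                                       # inverse Box-Cox
        return np.exp(y) if lam == 0 else (lam * y + 1) ** (1 / lam)
    return inv(lo), inv(hi)

# e.g. a right-skewed analyte distribution
rng = np.random.default_rng(0)
print(reference_interval(rng.lognormal(3.0, 0.3, 500)))
```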
Physics of cosmological cascades and observable properties
NASA Astrophysics Data System (ADS)
Fitoussi, T.; Belmont, R.; Malzac, J.; Marcowith, A.; Cohen-Tanugi, J.; Jean, P.
2017-04-01
TeV photons from extragalactic sources are absorbed in the intergalactic medium and initiate electromagnetic cascades. These cascades offer a unique tool to probe the properties of the universe at cosmological scales. We present a new Monte Carlo code dedicated to the physics of such cascades. This code has been tested against both published results and analytical approximations, and is made publicly available. Using this numerical tool, we investigate the main cascade properties (spectrum, halo extension and time delays), and study in detail their dependence on the physical parameters (extragalactic magnetic field, extragalactic background light, source redshift, source spectrum and beaming emission). The limitations of analytical solutions are emphasized. In particular, analytical approximations account only for the first generation of photons and higher branches of the cascade tree are neglected.
A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.
Yao, Yijun; Verginelli, Iason; Suuberg, Eric M
2017-05-01
In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor-air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogeneous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogeneous groundwater sources. By contrast, with the two-layer approach (capillary fringe and vadose zone) employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can be higher than those estimated by the numerical model by up to two orders of magnitude. In short, the model proposed in this work is an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogeneous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.
In situ impulse test: an experimental and analytical evaluation of data interpretation procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1975-08-01
Special experimental field testing and analytical studies were undertaken at Fort Lawton in Seattle, Washington, to study "close-in" wave propagation and evaluate data interpretation procedures for a new in situ impulse test. This test was developed to determine the shear wave velocity and dynamic modulus of soils underlying potential nuclear power plant sites. The test differs from conventional geophysical testing in that the velocity variation with strain is determined for each test. In general, strains between 10⁻¹ and 10⁻³ percent are achieved. The experimental field work consisted of performing special tests in a large test sand fill to obtain detailed "close-in" data. Six recording transducers were placed at various points on the energy source, while approximately 37 different transducers were installed within the soil fill, all within 7 feet of the energy source. Velocity measurements were then taken simultaneously under controlled test conditions to study shear wave propagation phenomenology and help evaluate data interpretation procedures. Typical test data are presented along with detailed descriptions of the results.
SOURCE SAMPLING AND ANALYSIS GUIDANCE - A METHODS DIRECTORY
Sampling and analytical methodologies are needed by EPA and industry for testing stationary sources for specific organic compounds such as those listed under the Resource Conservation and Recovery Act (RCRA) Appendix VIII and Appendix IX and the Clean Air Act of 1990. Computerized...
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
ANALYTICAL METHOD DEVELOPMENT FOR THE ANALYSIS OF N-NITROSODIMETHYLAMINE (NDMA) IN DRINKING WATER
N-Nitrosodimethylamine (NDMA), a by-product of the manufacture of liquid rocket fuel, has recently been identified as a contaminant in several California drinking water sources. The initial source of the contamination was identified as an aerospace facility. Subsequent testing ...
Vapor phase diamond growth technology
NASA Technical Reports Server (NTRS)
Angus, J. C.
1981-01-01
Ion beam deposition chambers used for carbon film generation were designed and constructed. Features of the developed equipment include: (1) carbon ion energies down to approximately 50 eV; (2) in situ surface monitoring with HEED; (3) provision for flooding the surface with ultraviolet radiation; (4) infrared laser heating of the substrate; (5) residual gas monitoring; (6) provision for several source gases, including diborane for doping studies; and (7) growth from either hydrocarbon source gases or carbon/argon arc sources. Various analytical techniques used to characterize the ion-deposited carbon films and establish the nature of their chemical bonding and crystallographic structure are discussed. These include: H2SO4/HNO3 etch; resistance measurements; hardness tests; Fourier transform infrared spectroscopy; scanning Auger microscopy; electron spectroscopy for chemical analysis; electron diffraction and energy dispersive X-ray analysis; electron energy loss spectroscopy; density measurements; secondary ion mass spectroscopy; high energy electron diffraction; and electron spin resonance. Results of the tests are summarized.
On precisely modelling surface deformation due to interacting magma chambers and dykes
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora
2014-01-01
Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due to the lack of tomography studies, for example. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources, embedded in a homogeneous, isotropic and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources and their displacement fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the resulting interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits within which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and have tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or pressurized crack. For a tensile Okada source (representing an opening dyke) aligned with or superposed on a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent) independently of the source separation. However, if a Mogi source is placed side by side with an Okada source (in the strike-perpendicular direction), we find the discrepancies become significant for source separations less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond 8 radii, however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.
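For context, the Mogi member of the source pair has a simple closed form for surface displacements, and the summation being tested is just linear superposition of such terms. A sketch (illustrative parameter values, not those of the paper):

```python
import numpy as np

def mogi(dV, d, r, nu=0.25):
    """Surface displacements (uz, ur) of a Mogi point source with volume
    change dV (m^3) at depth d (m), radial distance r (m), Poisson ratio nu."""
    c = (1 - nu) * dV / np.pi
    R3 = (d ** 2 + r ** 2) ** 1.5
    return c * d / R3, c * r / R3

# Summing two chambers assumes they do not interact mechanically -- the
# approximation whose error the paper quantifies against 3-D FEM models.
x = np.linspace(-10e3, 10e3, 401)
uz = mogi(1e6, 3e3, np.abs(x))[0] + mogi(5e5, 4e3, np.abs(x - 2e3))[0]
print(uz.max())
```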
Hardcastle, Chris D; Harris, Joel M
2015-08-04
The ability of a vesicle membrane to preserve a pH gradient, while allowing diffusion of neutral molecules across the phospholipid bilayer, can provide isolation and preconcentration of ionizable compounds within the vesicle interior. In this work, confocal Raman microscopy is used to observe (in situ) the pH-gradient preconcentration of compounds into individual optically trapped vesicles that provide sub-femtoliter collectors for small-volume samples. The concentration of analyte accumulated in the vesicle interior is determined relative to a perchlorate-ion internal standard, preloaded into the vesicle along with a high-concentration buffer. As a guide to the experiments, a model for the transfer of analyte into the vesicle based on acid-base equilibria is developed to predict the concentration enrichment as a function of source-phase pH and analyte concentration. To test the concept, the accumulation of benzyldimethylamine (BDMA) was measured within individual 1 μm phospholipid vesicles having a stable initial pH 7 units lower than that of the source phase. For low analyte concentrations in the source phase (100 nM), a concentration enrichment into the vesicle interior of (5.2 ± 0.4) × 10⁵ was observed, in agreement with the model predictions. Detection of BDMA from a 25 nM source-phase sample was demonstrated, a noteworthy result for an unenhanced Raman scattering measurement. The developed model accurately predicts the falloff of enrichment (and measurement sensitivity) at higher analyte concentrations, where the transfer of greater amounts of BDMA into the vesicle titrates the internal buffer and decreases the pH gradient. The predictable calibration response over 4 orders of magnitude in source-phase concentration makes the method suitable for quantitative analysis of ionizable compounds from small-volume samples. The kinetics of analyte accumulation are relatively fast (∼15 min) and are consistent with the rate of transfer of a polar aromatic molecule across a gel-phase phospholipid membrane.
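Assuming that only the neutral form of the weak base crosses the bilayer and that both compartments remain buffered, the equilibrium enrichment described above takes the familiar form

```latex
\frac{C_{\mathrm{in}}}{C_{\mathrm{out}}}
  = \frac{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}_{\mathrm{in}}}}
         {1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}_{\mathrm{out}}}} ,
```

so the attainable enrichment across a 7-unit gradient depends on where the analyte's pKa falls relative to the two pH values, and it collapses once the accumulated base titrates the internal buffer, consistent with the falloff reported above.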
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2013 CFR
2013-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2011 CFR
2011-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2012 CFR
2012-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2010 CFR
2010-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
40 CFR 63.7142 - What are the requirements for claiming area source status?
Code of Federal Regulations, 2014 CFR
2014-07-01
... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...
Interferences from blood collection tube components on clinical chemistry assays
Bowen, Raffick A.R.; Remaley, Alan T.
2014-01-01
Improper design or use of blood collection devices can adversely affect the accuracy of laboratory test results. Vascular access devices, such as catheters and needles, exert shear forces during blood flow, which creates a predisposition to cell lysis. Components from blood collection tubes, such as stoppers, lubricants, surfactants, and separator gels, can leach into specimens and/or adsorb analytes from a specimen; special tube additives may also alter analyte stability. Because of these interactions with blood specimens, blood collection devices are a potential source of pre-analytical error in laboratory testing. Accurate laboratory testing requires an understanding of the complex interactions between collection devices and blood specimens. Manufacturers, vendors, and clinical laboratorians must consider the pre-analytical challenges in laboratory testing. Although other authors have described the effects of endogenous substances on clinical assay results, the effects of blood collection tube additives and components have not been systematically described or explained. This review aims to identify and describe blood collection tube additives and their components and the strategies used to minimize their effects on clinical chemistry assays.
NASA Technical Reports Server (NTRS)
Kuhlman, E. A.; Baranowski, L. C.
1977-01-01
The effects of Thermal Protection Subsystem (TPS) contamination on the space shuttle orbiter S-band quad antenna due to multiple-mission buildup are discussed. A test fixture was designed, fabricated, and exposed to ten cycles of simulated ground and flight environments. Radiation pattern and impedance tests were performed to measure the effects of the contaminants. The degradation in antenna performance was attributed to the silicone waterproofing in the TPS tiles rather than to exposure to the contaminating sources used in the test program. Validation of the accuracy of an analytical thermal model is also discussed. Thermal vacuum tests with a test fixture and a representative S-band quad antenna were conducted to evaluate the predictions of the analytical thermal model for two orbital heating conditions and entry from each orbit. The results show that the accuracy of predicting the test fixture thermal responses depends largely on the ability to define the boundary and ambient conditions. When the test conditions were accurately included in the analytical model, the predictions were in excellent agreement with measurements.
AN OPEN-SOURCE COMMUNITY WEB SITE TO SUPPORT GROUND-WATER MODEL TESTING
A community wiki wiki web site has been created as a resource to support ground-water model development and testing. The Groundwater Gourmet wiki is a repository for user supplied analytical and numerical recipes, how-to's, and examples. Members are encouraged to submit analyti...
An analytical and experimental evaluation of a Fresnel lens solar concentrator
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Allums, S. A.; Cosby, R. M.
1976-01-01
Line-focusing Fresnel lenses with application potential in the 200 to 370 C range were evaluated analytically and experimentally. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based on a 56 cm wide, f/1.0 lens. A sun-tracking heliostat provided a nonmoving solar source. Measured data indicated more spreading at the profile base than analytically predicted, resulting in a peak concentration 18 percent lower than the computed peak of 57. The measured and computed transmittances were 85 and 87 percent, respectively. Preliminary testing with a subsequent lens indicated that modified manufacturing techniques corrected the profile-spreading problem and should enable improved analytical-experimental correlation.
Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten
2012-11-01
In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations, the output rate of the test gas devices can easily be predicted based only on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for compounds not typically listed in physico-chemical handbooks or internet databases, adjusting the test gas source to the concentration range required for an individual analytical application is straightforward. The agreement of predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds, and for different dimensions of the diffusion capillaries. The limits of predictability of the output rates are explored; the rates are underpredicted when very thin capillaries are used, and pressure variations are shown to be responsible for the observed deviation. To overcome the influence of pressure variations, and at the same time to establish a suitable test gas source for highly volatile compounds, the usability of permeation sources is also explored, for example for the generation of molecular bromine test gases.
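The underlying output-rate expression is the classical diffusion-tube equation. The sketch below assumes the diffusion coefficient D and vapor pressure p_vap have already been estimated (e.g., from the molecular formula and boiling point, as the paper describes); the numbers are illustrative, toluene-like values:

```python
import numpy as np

R = 8.314  # J/(mol K)

def diffusion_rate(D, M, p_vap, A, L, P=101325.0, T=298.15):
    """Mass output rate (kg/s) of a diffusion capillary: D diffusion
    coefficient in air (m^2/s), M molar mass (kg/mol), p_vap analyte vapor
    pressure (Pa), A cross-section (m^2), L length (m), P, T ambient."""
    return (D * M * P * A) / (R * T * L) * np.log(P / (P - p_vap))

A = np.pi * (0.5e-3) ** 2           # 1 mm i.d. capillary
q = diffusion_rate(D=8.5e-6, M=0.0921, p_vap=3800.0, A=A, L=0.05)
print(f"{q * 1e12:.1f} ng/s")       # ~19 ng/s for these values
```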
Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics
2017-04-19
... research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files, as well as a commercial Video Management System supporting more than 100 surveillance cameras, as video sources. The architectural considerations of this system are presented. Issues to be reckoned with in implementing a scalable ...
Implications on 1+1 D runup modeling due to time features of the earthquake source
NASA Astrophysics Data System (ADS)
Fuentes, M.; Riquelme, S.; Campos, J. A.
2017-12-01
The time characteristics of the seismic source are usually neglected in tsunami modeling because of the difference in the time scales of the two processes. Nonetheless, only a few analytical studies have attempted to explain separately the roles of the rise time and the rupture velocity. In this work, we extend an analytical 1+1D solution for the shoreline motion time series from the static case to the dynamic case by including both rise time and rupture velocity. Results show that the static case corresponds to a limit of zero rise time and infinite rupture velocity. Both parameters contribute to shifting the arrival time, but the maximum run-up may be affected by very slow ruptures and long rise times. The analytical solution has been tested for the Nicaraguan tsunami earthquake, suggesting that the rupture was not slow enough to cause the wave amplification needed to explain the high run-up observations.
Pre-analytical issues in the haemostasis laboratory: guidance for the clinical laboratories.
Magnette, A; Chatelain, M; Chatelain, B; Ten Cate, H; Mullier, F
2016-01-01
Ensuring quality has become a daily requirement in laboratories. In haemostasis, even more than in other disciplines of biology, quality is determined by a pre-analytical step that encompasses all procedures, starting with the formulation of the medical question and including patient preparation, sample collection, handling, transportation, processing, and storage until the time of analysis. This step, based on a variety of manual activities, is the most vulnerable part of the total testing process, is a major determinant of the reliability and validity of results in haemostasis, and constitutes the most important source of erroneous or uninterpretable results. Pre-analytical errors may occur throughout the testing process and arise from unsuitable, inappropriate or wrongly handled procedures. Problems may arise during the collection of blood specimens, such as misidentification of the sample, use of inadequate devices or needles, incorrect order of draw, prolonged tourniquet placement, unsuccessful attempts to locate the vein, incorrect use of additive tubes, collection of samples unsuitable in quality or quantity, and inappropriate mixing of a sample. Some factors can alter the result of a sample constituent after collection, during transportation, preparation and storage. Laboratory errors can often have serious adverse consequences. Lack of standardized procedures for sample collection accounts for most of the errors encountered within the total testing process. These errors can also have clinical consequences as well as a significant impact on patient care, especially for specialized tests, as these are often considered "diagnostic". Controlling pre-analytical variables is critical, since this has a direct influence on the quality of results and on their clinical reliability. Accurate standardization of the pre-analytical phase is of pivotal importance for achieving reliable coagulation test results and should reduce the effects of the influencing factors. This review summarizes the most important recommendations regarding pre-analytical factors for coagulation testing and should serve as a tool to increase awareness of their importance.
Analytical solutions describing the time-dependent DNAPL source-zone mass and contaminant discharge rate are used as a flux-boundary condition in a semi-analytical contaminant transport model. These analytical solutions assume a power relationship between the flow-averaged sourc...
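The power relationship referred to above is usually stated as a source-depletion law coupling discharge to remaining mass (a sketch; Γ is the empirical depletion exponent and Q_s the water flux through the source zone):

```latex
\frac{C_s(t)}{C_0} = \left(\frac{M(t)}{M_0}\right)^{\Gamma},
\qquad \frac{dM}{dt} = -\,Q_s\, C_s(t),
```

which admits closed-form solutions for M(t): exponential decay of the discharge for Γ = 1 and power-law behavior otherwise, making it convenient as a time-dependent flux boundary for the transport model.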
Clinical laboratory analytics: Challenges and promise for an emerging discipline.
Shirts, Brian H; Jackson, Brian R; Baird, Geoffrey S; Baron, Jason M; Clements, Bryan; Grisson, Ricky; Hauser, Ronald George; Taylor, Julie R; Terrazas, Enrique; Brimhall, Brad
2015-01-01
The clinical laboratory is a major source of health care data. Increasingly these data are being integrated with other data to inform health system-wide actions meant to improve diagnostic test utilization, service efficiency, and "meaningful use." The Academy of Clinical Laboratory Physicians and Scientists hosted a satellite meeting on clinical laboratory analytics in conjunction with its annual meeting on May 29, 2014 in San Francisco. There were 80 registrants for the clinical laboratory analytics meeting. The meeting featured short presentations on current trends in clinical laboratory analytics and several panel discussions on data science in laboratory medicine, laboratory data and its role in the larger healthcare system, integrating laboratory analytics, and data sharing for collaborative analytics. One main goal of the meeting was to provide an open forum for leaders who work with the "big data" that clinical laboratories produce. This article summarizes the proceedings of the meeting and the content discussed.
FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0
Durbin, Timothy J.; Bond, Linda D.
1998-01-01
This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a FEMFLOW3D simulation were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.
AiResearch QCGAT engine: Acoustic test results
NASA Technical Reports Server (NTRS)
Kisner, L. S.
1980-01-01
The noise levels of the quiet, clean general aviation turbofan (QCGAT) engine were measured in ground static noise tests. The static noise levels were found to be markedly lower than those of the demonstrably quiet AiResearch Model TFE731 engine. The measured QCGAT noise levels were correlated with analytical noise source predictions to derive free-field component noise predictions. These component noise sources were used to predict the QCGAT flyover noise levels at FAR Part 36 conditions. The predicted flyover noise levels are about 10 decibels lower than those of the current quietest business jets.
Sorbello, Alfred; Ripple, Anna; Tonning, Joseph; Munoz, Monica; Hasan, Rashedul; Ly, Thomas; Francis, Henry; Bodenreider, Olivier
2017-03-22
We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers' capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. A prototype, open source, web-based, software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. All usability test participants cited the tool's ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool's automated literature search relative to a manual 'all fields' PubMed search, missing drugs and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction.
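Disproportionality scores of this kind are typically simple 2×2 statistics over drug-event co-occurrence counts. The abstract does not specify which score the prototype uses, so the proportional reporting ratio (PRR) below is illustrative, with hypothetical counts:

```python
import math

def prr(a, b, c, d):
    """Proportional reporting ratio for a 2x2 table of citation counts:
    a = drug & event, b = drug w/o event, c = event w/o drug, d = neither."""
    return (a / (a + b)) / (c / (c + d))

def prr_ci95(a, b, c, d):
    """Approximate 95% CI from the standard error of ln(PRR)."""
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    ln_prr = math.log(prr(a, b, c, d))
    return math.exp(ln_prr - 1.96 * se), math.exp(ln_prr + 1.96 * se)

# hypothetical MeSH co-indexing counts for one drug-event pair
print(prr(40, 1960, 200, 97800), prr_ci95(40, 1960, 200, 97800))
```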
An analytical and experimental evaluation of the plano-cylindrical Fresnel lens solar concentrator
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Allums, S. L.; Cosby, R. M.
1976-01-01
Plastic Fresnel lenses for solar concentration are attractive because of their potential for low-cost mass production. An analytical and experimental evaluation of line-focusing Fresnel lenses with application potential in the 200 to 370 C range is reported. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based primarily on a 56 cm-wide lens with an f-number of 1.0. A sun-tracking heliostat provided a non-moving solar source. Measured data indicated more spreading at the profile base than analytically predicted. The measured and computed transmittances were 85 and 87%, respectively. Preliminary testing with a second lens (1.85 m) indicated that modified manufacturing techniques corrected the profile-spreading problem.
NASA Technical Reports Server (NTRS)
Coggi, J. V.; Loscutoff, A. V.; Barker, R. S.
1973-01-01
An analytical simulation of the RITE Integrated Waste Management and Water Recovery System, which uses radioisotopes for thermal energy, was prepared for the NASA Marshall Space Flight Center (MSFC). The RITE system is the most advanced water-waste management system concept currently under development and has undergone extended-duration testing. It can dispose of nearly all spacecraft wastes, including feces and trash, and can recover water from the usual waste water sources: urine, condensate, wash water, etc. All of the process heat normally used in the system is produced from low-penalty radioisotope heat sources. The analytical simulation was developed with the G189A computer program. The objective was to obtain an analytical simulation that can be used to (1) evaluate the current RITE system steady-state and transient performance during normal operating conditions, as well as during off-normal operating conditions including failure modes; and (2) evaluate the effects of variations in component design parameters and vehicle interface parameters on system performance.
Swing arm profilometer: analytical solutions of misalignment errors for testing axisymmetric optics
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Liu, Zhenyu; Wang, Xiaokun; Hu, Haixiang; Zhang, Feng; Zheng, Ligong; Zhang, Xuejun
2016-07-01
The swing arm profilometer (SAP) has been playing a very important role in testing large aspheric optics. As one of the most significant error sources affecting test accuracy, misalignment leads to low-order errors such as aspherical aberration and coma, apart from power. In order to analyze the effect of misalignment errors, the relation between alignment parameters and the test results of axisymmetric optics is presented. Analytical solutions of SAP system errors arising from tested-mirror misalignment, arm length L deviation, tilt-angle θ deviation, air-table spin error, and air-table misalignment are derived, respectively, and misalignment tolerances are given to guide surface measurement. In addition, experiments on a 2-m diameter parabolic mirror are demonstrated to verify the model; according to the error budget, we achieve the SAP test for low-order errors, except power, with an accuracy of 0.1 μm root mean square.
Harries, Megan; Bukovsky-Reyes, Santiago; Bruno, Thomas J
2016-01-15
This paper details the sampling methods used with the field-portable porous layer open tubular cryoadsorption (PLOT-cryo) approach, described in Part I of this two-part series, applied to several analytes of interest. We conducted tests with coumarin and 2,4,6-trinitrotoluene (two solutes that were used in the initial development of PLOT-cryo technology), naphthalene, aviation turbine kerosene, and diesel fuel, on a variety of matrices and test beds. We demonstrated that these analytes can be easily detected and reliably identified using the portable unit for analyte collection. By leveraging efficiency-boosting temperature control and the high-flow-rate multiple capillary wafer, very short collection times (as low as 3 s) yielded accurate detection. For diesel fuel spiked on glass beads, we determined a method detection limit below 1 ppm. We observed greater variability among separate samples analyzed with the portable unit than previously documented in work using the laboratory-based PLOT-cryo technology. We identify three likely sources that may help explain the additional variation: the use of a compressed air source to generate suction, matrix geometry, and variability in the local vapor concentration around the sampling probe as solute depletion occurs both locally around the probe and in the test bed as a whole. This field-portable adaptation of the PLOT-cryo approach has numerous and diverse potential applications.
openECA Platform and Analytics Alpha Test Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Russell
The objective of the Open and Extensible Control and Analytics (openECA) Platform for Phasor Data project is to develop an open source software platform that significantly accelerates the production, use, and ongoing development of real-time decision support tools, automated control systems, and off-line planning systems that (1) incorporate high-fidelity synchrophasor data and (2) enhance system reliability while enabling the North American Electric Reliability Corporation (NERC) operating functions of reliability coordinator, transmission operator, and/or balancing authority to be executed more effectively.
Loit, Evelin; Tricco, Andrea C; Tsouros, Sophia; Sears, Margaret; Ansari, Mohammed T; Booth, Ronald A
2011-07-01
Low thiopurine S-methyltransferase (TPMT) enzyme activity is associated with increased thiopurine drug toxicity, particularly myelotoxicity. Pre-analytic and analytic variables for TPMT genotype and phenotype (enzyme activity) testing were reviewed. A systematic literature review was performed, and diagnostic laboratories were surveyed. Thirty-five studies reported relevant data for pre-analytic variables (patient age, gender, race, hematocrit, co-morbidity, co-administered drugs and specimen stability) and thirty-three for analytic variables (accuracy, reproducibility). TPMT is stable in blood when stored for up to 7 days at room temperature, and 3 months at -30°C. Pre-analytic patient variables do not affect TPMT activity. Fifteen drugs studied to date exerted no clinically significant effects in vivo. Enzymatic assay is the preferred technique. Radiochemical and HPLC techniques had intra- and inter-assay coefficients of variation (CVs) below 10%. TPMT is a stable enzyme, and its assay is not affected by age, gender, race or co-morbidity. Copyright © 2011. Published by Elsevier Inc.
Waste Characterization Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle
2016-02-02
This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.
Westenberger, Benjamin J; Ellison, Christopher D; Fussner, Andrew S; Jenney, Susan; Kolinski, Richard E; Lipe, Terra G; Lyon, Robbe C; Moore, Terry W; Revelle, Larry K; Smith, Anjanette P; Spencer, John A; Story, Kimberly D; Toler, Duckhee Y; Wokovich, Anna M; Buhse, Lucinda F
2005-12-08
This work investigated the use of non-traditional analytical methods to evaluate the quality of a variety of pharmaceutical products purchased via internet sites from foreign sources and compared the results with those obtained from conventional quality assurance methods. Traditional analytical techniques employing HPLC for potency, content uniformity, chromatographic purity and drug release profiles were used to evaluate the quality of five selected drug products (fluoxetine hydrochloride, levothyroxine sodium, metformin hydrochloride, phenytoin sodium, and warfarin sodium). Non-traditional techniques, such as near infrared spectroscopy (NIR), NIR imaging and thermogravimetric analysis (TGA), were employed to verify the results and investigate their potential as alternative testing methods. Two of 20 samples failed USP monographs for quality attributes. The additional analytical methods found 11 of 20 samples had different formulations when compared to the U.S. product. Seven of the 20 samples arrived in questionable containers, and 19 of 20 had incomplete labeling. Only 1 of the 20 samples had final packaging similar to the U.S. products. The non-traditional techniques complemented the traditional techniques used and highlighted additional quality issues for the products tested. For example, these methods detected suspect manufacturing issues (such as blending), which were not evident from traditional testing alone.
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations.
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-10-21
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy, and resided in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Either a single 2D Gaussian distribution or a mixture of multiple Gaussian components was employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously were of the same type and close in energy, to alleviate GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum dose difference within 1.7%. The maximum relative difference of output factors was within 0.5%. A passing rate of over 98.5% was achieved in 3D gamma-index tests with 2%/2 mm criteria in both an IMRT prostate patient case and a head-and-neck case. These results demonstrated the efficacy of our model in accurately representing a reference phase-space file. We have also tested the efficiency gain of our source model over our previously developed phase-space-let file source model. The overall efficiency of dose calculation was found to be improved by ~1.3-2.2 times in water and patient cases using our analytical model.
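To make the thread-divergence point concrete, the following minimal NumPy sketch (not the authors' code) samples particle histories by phase-space ring and then sorts them so that histories processed consecutively share a type and energy; the ring weights and mean energies are invented for illustration and are not the published model parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical PSR table: sampling probability and mean energy per ring
    # (invented values; the real model parameterizes location, direction
    # and energy densities for each ring).
    psr_weights = np.array([0.55, 0.25, 0.12, 0.08])
    psr_energy_mev = np.array([5.8, 2.1, 0.9, 0.3])

    n_particles = 1_000_000
    # Draw a ring index per history, then sort so that consecutive
    # particles (which a GPU would transport in the same warp) share
    # type and energy, mitigating thread divergence as described above.
    ring = rng.choice(len(psr_weights), size=n_particles, p=psr_weights)
    ring = np.sort(ring)

    # Within each ring, jitter energy around the ring mean as a stand-in
    # for sampling the ring's parameterized energy density.
    energy = psr_energy_mev[ring] * rng.normal(1.0, 0.02, n_particles)
    print(np.bincount(ring) / n_particles)   # recovers ~psr_weights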
Investigation of Acoustical Shielding by a Wedge-Shaped Airframe
NASA Technical Reports Server (NTRS)
Gerhold, Carl H.; Clark, Lorenzo R.; Dunn, Mark H.; Tweed, John
2006-01-01
Experiments on a scale model of an advanced unconventional subsonic transport concept, the Blended Wing Body (BWB), have demonstrated significant shielding of inlet-radiated noise. A computational model of the shielding mechanism has been developed using a combination of the boundary integral equation method (BIEM) and the equivalent source method (ESM). The computation models the incident sound from a point source in a nacelle and determines the scattered sound field. In this way the sound fields with and without the airfoil can be estimated for comparison to experiment. An experimental test bed using a simplified wedge-shaped airfoil and a broadband point noise source in a simulated nacelle has been developed for the purposes of verifying the analytical model and studying the effect of engine nacelle placement on shielding. The experimental study is conducted in the Anechoic Noise Research Facility at NASA Langley Research Center. The analytic and experimental results are compared at 6300 and 8000 Hz. These frequencies correspond to approximately 150 Hz on the full-scale aircraft. Comparison between the experimental and analytic results is quite good, not only for the noise scattering by the airframe, but also for the total sound pressure in the far field. Many of the details of the sound field that the analytic model predicts are seen or indicated in the experiment, within the spatial resolution limitations of the experiment. Changing nacelle location produces comparable changes in noise shielding contours evaluated analytically and experimentally. Future work in the project will be the enhancement of the analytic model to extend the analysis to higher frequencies corresponding to the blade passage frequency of the high bypass ratio ducted fan engines that are expected to power the BWB.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: it can accurately determine pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for identifying the amount and position of a single source, with relative errors of no more than 5%. For cases with multiple point sources and multiple unknowns, the computed results contain some errors because many possible combinations of pollution sources exist; however, when prior experience is used to narrow the search scope, the relative errors of the identification results are below 5%, demonstrating that the established source identification model can be used to direct emergency responses.
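As a rough sketch of this approach (not the authors' code), the following Python example couples the standard analytic solution for an instantaneous point source in a 1D river (advection, dispersion, first-order decay) with a minimal genetic-style search over source mass and position; all hydraulic parameters and "observations" are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    def c_analytic(x, t, mass, x0, u=0.5, D=10.0, k=1e-5, A=50.0):
        # 1D unsteady water-quality solution for an instantaneous point
        # source: advection u (m/s), dispersion D (m^2/s), first-order
        # decay k (1/s), cross-section A (m^2). Values are illustrative.
        return (mass / (A * np.sqrt(4.0 * np.pi * D * t))
                * np.exp(-(x - x0 - u * t) ** 2 / (4.0 * D * t))
                * np.exp(-k * t))

    # Synthetic observations at three stations from a "true" source
    # (mass 5000 at x0 = 300 m), standing in for field data.
    x_obs, t_obs = np.array([800.0, 1000.0, 1200.0]), 1800.0
    c_obs = c_analytic(x_obs, t_obs, mass=5.0e3, x0=300.0)

    def misfit(pop):
        # Objective: squared error between modeled and observed values.
        return np.array([np.sum((c_analytic(x_obs, t_obs, m, x0) - c_obs) ** 2)
                         for m, x0 in pop])

    # Minimal GA: population of 10 (as in the abstract), elitist
    # selection plus Gaussian mutation of (mass, position) candidates.
    pop = np.column_stack([rng.uniform(1e3, 1e4, 10),
                           rng.uniform(0.0, 600.0, 10)])
    for _ in range(200):
        best = pop[np.argsort(misfit(pop))[:5]]
        pop = np.vstack([best, best + rng.normal(0.0, [100.0, 10.0], (5, 2))])
    print(pop[np.argmin(misfit(pop))])   # should approach (5000, 300)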
Antibody profiling sensitivity through increased reporter antibody layering
Apel, William A.; Thompson, Vicki S.
2013-02-26
A method for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an embodiment of the invention, the analyte is a drug, such as marijuana, cocaine (crystalline tropane alkaloid), methamphetamine, methyltestosterone, or mesterolone. The method comprises attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to the antigens in the array to form immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, to form an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to the subject's identity.
Rapid classification of biological components
Thompson, Vicki S.; Barrett, Karen B.; Key, Diane E.
2013-10-15
A method is disclosed for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an illustrative embodiment of the invention, the analyte is a drug, such as marijuana, cocaine (crystalline tropane alkaloid), methamphetamine, methyltestosterone, or mesterolone. The method involves attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein the locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to antigens in the array, thereby forming immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, thereby forming an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to a subject's identity.
Antibody profiling sensitivity through increased reporter antibody layering
Apel, William A.; Thompson, Vicki S.
2017-03-28
A method for analyzing a biological sample by antibody profiling for identifying forensic samples or for detecting the presence of an analyte. In an embodiment of the invention, the analyte is a drug, such as marijuana, cocaine (crystalline tropane alkaloid), methamphetamine, methyltestosterone, or mesterolone. The method comprises attaching antigens to a surface of a solid support in a preselected pattern to form an array wherein locations of the antigens are known; contacting the array with the biological sample such that a portion of antibodies in the sample reacts with and binds to the antigens in the array to form immune complexes; washing away antibodies that do not form immune complexes; and detecting the immune complexes, to form an antibody profile. Forensic samples are identified by comparing a sample from an unknown source with a sample from a known source. Further, an assay, such as a test for illegal drug use, can be coupled to a test for identity such that the results of the assay can be positively correlated to the subject's identity.
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
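The screening-plus-sparse-storage idea generalizes beyond PyVCI. The sketch below, which does not use the PyVCI API, drops weak couplings of a stand-in symmetric Hamiltonian below a threshold, stores the result in SciPy sparse format, and extracts only the lowest eigenvalues; the matrix, units, and threshold are illustrative assumptions, not a molecular force field.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    rng = np.random.default_rng(2)

    # Stand-in "VCI-like" Hamiltonian: harmonic state energies on the
    # diagonal, weak random anharmonic couplings off the diagonal.
    n = 2000
    diag = np.sort(rng.uniform(1000.0, 12000.0, n))   # cm^-1, illustrative
    coup = rng.normal(0.0, 0.5, (n, n))
    H = np.diag(diag) + np.triu(coup, 1) + np.triu(coup, 1).T

    # Numerical screening: keep only non-negligible couplings.
    threshold = 1.0                                   # cm^-1, an assumption
    Hs = np.where(np.abs(H) >= threshold, H, 0.0)
    np.fill_diagonal(Hs, diag)                        # always keep diagonal
    Hs = sp.csr_matrix(Hs)
    print(f"retained {Hs.nnz / n**2:.2%} of matrix elements")

    # Diagonalize for the lowest few vibrational levels only.
    evals = eigsh(Hs, k=6, which="SA", return_eigenvectors=False)
    print(np.sort(evals))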
Witinski, Mark F; Blanchard, Romain; Pfluegl, Christian; Diehl, Laurent; Li, Biao; Krishnamurthy, Kalyani; Pein, Brandt C; Azimi, Masud; Chen, Peili; Ulu, Gokhan; Vander Rhodes, Greg; Howle, Chris R; Lee, Linda; Clewes, Rhea J; Williams, Barry; Vakhshoori, Daryoosh
2018-04-30
This article presents new spectroscopic results in standoff chemical detection that are enabled by monolithic arrays of Distributed Feedback (DFB) Quantum Cascade Lasers (QCLs), with each array element at a slightly different wavelength than its neighbor. The standoff analysis of analyte/substrate pairs requires a laser source with characteristics offered uniquely by a QCL Array. This is particularly true for time-evolving liquid chemical warfare agent (CWA) analysis. In addition to describing the QCL array source developed for long wave infrared coverage, a description of an integrated prototype standoff detection system is provided. Experimental standoff detection results using the man-portable system for droplet examination from 1.3 meters are presented using the CWAs VX and T-mustard as test cases. Finally, we consider three significant challenges to working with droplets and liquid films in standoff spectroscopy: substrate uptake of the analyte, time-dependent droplet spread of the analyte, and variable substrate contributions to retrieved signals.
Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M
2014-12-01
The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures (personality, cognitive style, motivated cognition) predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.
Median of patient results as a tool for assessment of analytical stability.
Jørgensen, Lars Mønster; Hansen, Steen Ingemann; Petersen, Per Hyltoft; Sölétormos, György
2015-06-15
In spite of the well-established external quality assessment and proficiency testing surveys of analytical quality performance in laboratory medicine, a simple tool to monitor long-term analytical stability as a supplement to the internal control procedures is often needed. Patient data from daily internal control schemes were used for monthly appraisal of analytical stability. This was accomplished by using the monthly medians of patient results to disclose deviations from analytical stability, and by comparing divergences with the quality specifications for allowable analytical bias based on biological variation. Seventy-five percent of the twenty analytes run on two COBAS INTEGRA 800 instruments performed in accordance with the optimum and the desirable specifications for bias. Patient results applied in analytical quality performance control procedures are the most reliable source of material, as they represent the genuine substance of the measurements and therefore circumvent the problems associated with non-commutable materials in external assessment. Patient medians in the monthly monitoring of analytical stability in laboratory medicine are an inexpensive, simple and reliable tool to monitor the steadiness of analytical practice. Copyright © 2015 Elsevier B.V. All rights reserved.
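A minimal Python sketch of the patient-median monitoring idea follows. It assumes simulated daily results, a six-month baseline, and an allowable bias derived from assumed within- and between-subject biological CVs via the common desirable-bias criterion 0.25*sqrt(CVI^2 + CVG^2); none of these numbers come from the study, and real data would pool many patients per month.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)

    # One year of daily patient results for a single analyte; spread is
    # kept small so the monthly median is stable, and a 6% shift is
    # injected mid-year as the instability to be caught.
    days = pd.date_range("2014-01-01", periods=365, freq="D")
    results = rng.lognormal(mean=np.log(5.0), sigma=0.05, size=365)
    results[200:] *= 1.06
    series = pd.Series(results, index=days)

    monthly_median = series.groupby(series.index.to_period("M")).median()
    baseline = monthly_median.iloc[:6].mean()       # assumed stable period
    pct_dev = 100 * (monthly_median - baseline) / baseline

    # Desirable specification for bias from biological variation;
    # the CVs below are assumptions, not measured values.
    cvi, cvg = 5.0, 10.0
    allowable = 0.25 * np.sqrt(cvi**2 + cvg**2)     # ~2.8%
    print(pct_dev.round(2))
    print("months exceeding allowable bias:",
          list(pct_dev.index[pct_dev.abs() > allowable]))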
NASA Astrophysics Data System (ADS)
Ellis, Wade C.; Lewis, Charlotte R.; Openshaw, Anna P.; Farnsworth, Paul B.
2016-09-01
We demonstrate the effectiveness of using hydrogen-doped argon as the support gas for the dielectric barrier discharge (DBD) ambient desorption/ionization (ADI) source in mass spectrometry. Also, we explore the chemistry responsible for the signal enhancement observed when using both hydrogen-doped argon and hydrogen-doped helium. The hydrogen-doped argon was tested for five analytes representing different classes of molecules. Addition of hydrogen to the argon plasma gas enhanced signals for gas-phase analytes and for analytes coated onto glass slides in positive and negative ion mode. The enhancements ranged from factors of 4 to 5 for gas-phase analytes and factors of 2 to 40 for coated slides. There was no significant increase in the background. The limit of detection for caffeine was lowered by a factor of 79 using H2/Ar and by a factor of 2 using H2/He. Results are shown that help explain the fundamental differences between the pure-gas discharges and those that are hydrogen-doped, for both argon and helium. Experiments with different discharge geometries and grounding schemes indicate that observed signal enhancements are strongly dependent on discharge configuration.
Helios: Understanding Solar Evolution Through Text Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randazzese, Lucien
This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or “breakthroughs,” in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.
NASA Astrophysics Data System (ADS)
Bernegger, R.; Altenburg, S. J.; Röllig, M.; Maierhofer, C.
2018-03-01
Pulse thermography (PT) has proven to be a valuable non-destructive testing method to identify and quantify defects in fiber-reinforced polymers. To perform a quantitative defect characterization, the heat diffusion within the material as well as the material parameters must be known. The heterogeneous material structure of glass fiber-reinforced polymers (GFRP), as well as the semitransparency of the material to the optical excitation sources used in PT, remains challenging. For homogeneous semitransparent materials, 1D analytical models describing the temperature distribution are available. Here, we present an analytical approach to model PT for laterally inhomogeneous semitransparent materials. We show the validity of the model by considering different configurations of the optical heating source, the IR camera, and the differently coated GFRP sample. The model accounts for the lateral inhomogeneity of the semitransparency through an additional absorption coefficient. It includes additional effects such as thermal losses at the sample surfaces, multilayer systems with thermal contact resistance, and a finite duration of the heating pulse. By using an analytical model of sufficient complexity, similar values of the material parameters were found for all six investigated configurations by numerical fitting.
Fatigue crack localization with near-field acoustic emission signals
NASA Astrophysics Data System (ADS)
Zhou, Changjiang; Zhang, Yunfeng
2013-04-01
This paper presents an AE source localization technique using near-field acoustic emission (AE) signals induced by crack growth and propagation. The proposed AE source localization technique is based on the phase difference in the AE signals measured by two identical AE sensing elements spaced apart at a pre-specified distance. This phase difference results in the canceling-out of certain frequency contents of the signals, which can be related to the AE source direction. Experimental data from simulated AE sources such as pencil breaks were used along with analytical results from moment tensor analysis. It is observed that the theoretical predictions, numerical simulations, and experimental test results are in good agreement. Real data from field monitoring of an existing fatigue crack on a bridge were also used to test the system. Results show that the proposed method is fairly effective in determining the AE source direction in the thick plates commonly encountered in civil engineering structures.
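The phase-cancellation idea can be illustrated with a short simulation; the wave speed, sensor spacing, and source angle below are invented, and a real measurement would average or smooth spectra rather than normalize one clean burst as done here.

    import numpy as np

    rng = np.random.default_rng(4)

    fs = 2_000_000                  # sampling rate (Hz), illustrative
    c = 3000.0                      # assumed plate wave speed (m/s)
    d = 0.02                        # assumed sensor spacing (m)
    theta_true = np.deg2rad(35.0)   # AE source direction to recover

    # Broadband AE-like burst; the second sensor sees it delayed by dt.
    dt = d * np.cos(theta_true) / c
    n = 8192
    burst = rng.normal(size=n) * np.exp(-np.arange(n) / 400.0)
    f = np.fft.rfftfreq(n, 1 / fs)
    S = np.fft.rfft(burst)
    s2 = np.fft.irfft(S * np.exp(-2j * np.pi * f * dt), n)

    # Summing the two channels cancels content whose inter-sensor phase
    # difference is pi; normalizing by the single-sensor spectrum
    # isolates the comb |1 + exp(-2*pi*i*f*dt)|, whose first notch sits
    # at f = 1/(2*dt).
    comb = np.abs(np.fft.rfft(burst + s2)) / (np.abs(S) + 1e-12)
    k0 = np.argmin(comb[1:600]) + 1          # search below the 2nd notch
    dt_est = 1.0 / (2.0 * f[k0])
    theta_est = np.degrees(np.arccos(np.clip(c * dt_est / d, -1.0, 1.0)))
    print(f"true 35.0 deg, estimated {theta_est:.1f} deg")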
Active Control of Inlet Noise on the JT15D Turbofan Engine
NASA Technical Reports Server (NTRS)
Smith, Jerome P.; Hutcheson, Florence V.; Burdisso, Ricardo A.; Fuller, Chris R.
1999-01-01
This report presents the key results obtained by the Vibration and Acoustics Laboratories at Virginia Tech over the year from November 1997 to December 1998 on the Active Noise Control of Turbofan Engines research project funded by NASA Langley Research Center. The concept of implementing active noise control techniques with fuselage-mounted error sensors is investigated both analytically and experimentally. The analytical part of the project involves the continued development of an advanced modeling technique to provide prediction and design guidelines for application of active noise control techniques to large, realistic high bypass engines of the type on which active control methods are expected to be applied. Results from the advanced analytical model are presented that show the effectiveness of the control strategies, and the analytical results presented for fuselage error sensors show good agreement with the experimentally observed results and provide additional insight into the control phenomena. Additional analytical results are presented for active noise control used in conjunction with a wavenumber sensing technique. The experimental work is carried out on a running JT15D turbofan jet engine in a test stand at Virginia Tech. The control strategy used in these tests was the feedforward Filtered-X LMS algorithm. The control inputs were supplied by single and multiple circumferential arrays of acoustic sources equipped with neodymium iron cobalt magnets mounted upstream of the fan. The reference signal was obtained from an inlet mounted eddy current probe. The error signals were obtained from a number of pressure transducers flush-mounted in a simulated fuselage section mounted in the engine test cell. The active control methods are investigated when implemented with the control sources embedded within the acoustically absorptive material on a passively-lined inlet. The experimental results show that the combination of active control techniques with fuselage-mounted error sensors and passive control techniques is an effective means of reducing radiated noise from turbofan engines. Strategic selection of the location of the error transducers is shown to be effective for reducing the radiation towards particular directions in the farfield. An analytical model is used to predict the behavior of the control system and to guide the experimental design configurations, and the analytical results presented show good agreement with the experimentally observed results.
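For readers unfamiliar with the control strategy named above, here is a single-channel Filtered-X LMS sketch in Python; the tonal reference, primary path, and secondary path are short invented FIR filters, not measurements from the JT15D rig.

    import numpy as np

    rng = np.random.default_rng(5)

    fs, n = 8000, 40000
    t = np.arange(n) / fs
    # Reference: a tonal disturbance (e.g., a fan tone) plus sensor noise.
    x = np.sin(2 * np.pi * 250 * t) + 0.1 * rng.normal(size=n)

    # Primary and secondary paths as short FIRs (illustrative values).
    P = np.array([0.0, 0.6, 0.3, 0.1])   # noise source -> error sensor
    S = np.array([0.0, 0.5, 0.25])       # control source -> error sensor
    S_hat = S.copy()                     # secondary-path estimate

    L, mu = 32, 0.01                     # filter length, step size
    w = np.zeros(L)
    xbuf = np.zeros(L)                   # reference history
    fxbuf = np.zeros(L)                  # filtered-reference history
    ybuf = np.zeros(len(S))              # control-output history
    d = np.convolve(x, P)[:n]            # disturbance at the error sensor
    e_log = np.zeros(n)

    for i in range(n):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[i]
        y = w @ xbuf                     # anti-noise output
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e = d[i] + S @ ybuf              # residual at the error sensor
        fx = S_hat @ xbuf[:len(S_hat)]   # reference filtered by S_hat
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
        w -= mu * e * fxbuf              # FxLMS weight update
        e_log[i] = e

    print("residual power, first vs last second:",
          np.mean(e_log[:fs]**2), np.mean(e_log[-fs:]**2))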
Fluorescence tomography using synchrotron radiation at the NSLS
NASA Astrophysics Data System (ADS)
Boisseau, P.; Grodzins, L.
1987-03-01
Fluorescence tomography utilizing focussed, tunable, monoenergetic X-rays from synchrotron light sources holds the promise of a non-invasive analytic tool for studying trace elements in specimens, particularly biological ones, at spatial resolutions of the order of micrometers. This note reports an early test at the National Synchrotron Light Source at Brookhaven National Laboratory in which fluorescence tomographic scans were successfully made of trace elements of iron and titanium in NBS standard glass and in a bee.
Ozarda, Yesim; Ichihara, Kiyoshi; Aslan, Diler; Aybek, Hulya; Ari, Zeki; Taneli, Fatma; Coker, Canan; Akan, Pinar; Sisman, Ali Riza; Bahceci, Onur; Sezgin, Nurzen; Demir, Meltem; Yucel, Gultekin; Akbas, Halide; Ozdem, Sebahat; Polat, Gurbuz; Erbagci, Ayse Binnur; Orkmez, Mustafa; Mete, Nuriye; Evliyaoglu, Osman; Kiyici, Aysel; Vatansev, Husamettin; Ozturk, Bahadir; Yucel, Dogan; Kayaalp, Damla; Dogan, Kubra; Pinar, Asli; Gurbilek, Mehmet; Cetinkaya, Cigdem Damla; Akin, Okhan; Serdar, Muhittin; Kurt, Ismail; Erdinc, Selda; Kadicesme, Ozgur; Ilhan, Necip; Atali, Dilek Sadak; Bakan, Ebubekir; Polat, Harun; Noyan, Tevfik; Can, Murat; Bedir, Abdulkerim; Okuyucu, Ali; Deger, Orhan; Agac, Suret; Ademoglu, Evin; Kaya, Ayşem; Nogay, Turkan; Eren, Nezaket; Dirican, Melahat; Tuncer, GulOzlem; Aykus, Mehmet; Gunes, Yeliz; Ozmen, Sevda Unalli; Kawano, Reo; Tezcan, Sehavet; Demirpence, Ozlem; Degirmen, Elif
2014-12-01
A nationwide multicenter study was organized to establish reference intervals (RIs) in the Turkish population for 25 commonly tested biochemical analytes and to explore sources of variation in reference values, including regionality. Blood samples were collected nationwide in 28 laboratories from the seven regions (≥400 samples/region, 3066 in all). The sera were analyzed collectively at Uludag University in Bursa using Abbott reagents and an Abbott analyzer. Reference materials were used for standardization of test results. After secondary exclusion using the latent abnormal values exclusion method, RIs were derived by a parametric method employing the modified Box-Cox formula and compared with the RIs obtained by the non-parametric method. Three-level nested ANOVA was used to evaluate variations among sexes, ages and regions. Associations between test results and age, body mass index (BMI) and region were determined by multiple regression analysis (MRA). By ANOVA, differences of reference values among the seven regions were significant for none of the 25 analytes. Significant sex-related and age-related differences were observed for 10 and 7 analytes, respectively. MRA revealed BMI-related changes in results for uric acid, glucose, triglycerides, high-density lipoprotein (HDL)-cholesterol, alanine aminotransferase, and γ-glutamyltransferase. Their RIs were thus derived by applying stricter criteria that excluded individuals with BMI >28 kg/m2. Ranges of RIs by the non-parametric method were wider than those by the parametric method, especially for those analytes affected by BMI. Given the lack of regional differences and the well-standardized status of test results, the RIs derived from this nationwide study can be used for the entire Turkish population.
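The parametric-versus-non-parametric comparison can be sketched as follows; this simplified example uses SciPy's ordinary Box-Cox transform on simulated right-skewed data and omits the study's modified Box-Cox formula and the latent abnormal values exclusion step.

    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    rng = np.random.default_rng(6)

    # Simulated reference values for one analyte (illustrative only).
    values = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=3000)

    # Parametric RI: Box-Cox transform toward normality, take the
    # central 95% interval, then back-transform.
    y, lam = stats.boxcox(values)
    lo = inv_boxcox(y.mean() - 1.96 * y.std(ddof=1), lam)
    hi = inv_boxcox(y.mean() + 1.96 * y.std(ddof=1), lam)

    # Non-parametric RI: central 95% by rank.
    np_lo, np_hi = np.percentile(values, [2.5, 97.5])
    print(f"parametric RI: {lo:.1f}-{hi:.1f}, "
          f"non-parametric RI: {np_lo:.1f}-{np_hi:.1f}")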
Sensitive glow discharge ion source for aerosol and gas analysis
Reilly, Peter T. A. [Knoxville, TN
2007-08-14
A high sensitivity glow discharge ion source system for analyzing particles includes an aerodynamic lens having a plurality of constrictions for receiving an aerosol including at least one analyte particle in a carrier gas and focusing the analyte particles into a collimated particle beam. A separator separates the carrier gas from the analyte particle beam, wherein the analyte particle beam or vapors derived from the analyte particle beam are selectively transmitted out of the separator. A glow discharge ionization source includes a discharge chamber having an entrance orifice for receiving the analyte particle beam or analyte vapors, and a target electrode and discharge electrode therein. An electric field applied between the target electrode and discharge electrode generates an analyte ion stream from the analyte vapors, which is directed out of the discharge chamber through an exit orifice, such as to a mass spectrometer. High analyte sensitivity is obtained by pumping the discharge chamber exclusively through the exit orifice and the entrance orifice.
NASA Technical Reports Server (NTRS)
Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.
1976-01-01
Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap and of applying partial covers to the slots between the wing and flaps was evaluated. Overall-sound-pressure-level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.
NASA Astrophysics Data System (ADS)
King, J. N.; Walsh, V.; Cunningham, K. J.; Evans, F. S.; Langevin, C. D.; Dausman, A.
2009-12-01
The Miami-Dade Water and Sewer Department (MDWASD) injects buoyant effluent from the North District Wastewater Treatment Plant (NDWWTP) through four Class I injection wells into the Boulder Zone---a saline (35 parts per thousand) and transmissive (10^5 to 10^6 square meters per day) hydrogeologic unit located approximately 1000 meters below land surface. Miami-Dade County is located in southeast Florida, U.S.A. Portions of the Floridan and Biscayne aquifers are located above the Boulder Zone. The Floridan and Biscayne aquifers---underground sources of drinking water---are protected by U.S. Federal Laws and Regulations, Florida Statutes, and Miami-Dade County ordinances. In 1998, MDWASD began to observe effluent constituents within the Floridan aquifer. Continuous-source and impulse-source analytical models for advective and diffusive transport of effluent are used in the present work to test contaminant flow-path hypotheses, suggest transport mechanisms, and estimate dispersivity. MDWASD collected data in the Floridan aquifer between 1996 and 2007. A parameter estimation code is used to optimize analytical model parameters by fitting model data to collected data. These simple models will be used to develop conceptual and numerical models of effluent transport at the NDWWTP, and in the vicinity of the NDWWTP.
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
New Cs sputter ion source with polyatomic ion beams for secondary ion mass spectrometry applications
NASA Astrophysics Data System (ADS)
Belykh, S. F.; Palitsin, V. V.; Veryovkin, I. V.; Kovarsky, A. P.; Chang, R. J. H.; Adriaens, A.; Dowsett, M. G.; Adams, F.
2007-08-01
A simple design for a cesium sputter ion source compatible with the vacuum and ion-optical systems as well as the electronics of the commercially available Cameca IMS-4f instrument is reported. This ion source has been tested with cluster primary ions of Si_n^- and Cu_n^-. Our experiments with surface characterization and depth profiling conducted to date demonstrate improvements in the analytical capabilities of the secondary ion mass spectrometry instrument due to the nonadditive enhancement of secondary ion emission and the shorter ion ranges of polyatomic projectiles compared to atomic ones at the same impact energy.
Directly comparing gravitational wave data to numerical relativity simulations: systematics
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Zlochower, Yosef; Shoemaker, Deirdre; Lovelace, Geoffrey; Pankow, Christopher; Brady, Patrick; Scheel, Mark; Pfeiffer, Harald; Ossokine, Serguei
2017-01-01
We compare synthetic data directly to complete numerical relativity simulations of binary black holes. In doing so, we circumvent ad-hoc approximations introduced in semi-analytical models previously used in gravitational wave parameter estimation and compare the data against the most accurate waveforms including higher modes. In this talk, we focus on the synthetic studies that test potential sources of systematic errors. We also run ``end-to-end'' studies of intrinsically different synthetic sources to show we can recover parameters for different systems.
Deciphering Sources of Variability in Clinical Pathology.
Tripathi, Niraj K; Everds, Nancy E; Schultze, A Eric; Irizarry, Armando R; Hall, Robert L; Provencher, Anne; Aulbach, Adam
2017-01-01
The objectives of this session were to explore causes of variability in clinical pathology data due to preanalytical and analytical variables as well as study design and other procedures that occur in toxicity testing studies. The presenters highlighted challenges associated with such variability in differentiating test article-related effects from the effects of experimental procedures and its impact on overall data interpretation. These presentations focused on preanalytical and analytical variables and study design-related factors and their influence on clinical pathology data, and the importance of various factors that influence data interpretation including statistical analysis and reference intervals. Overall, these presentations touched upon potential effect of many variables on clinical pathology parameters, including animal physiology, sample collection process, specimen handling and analysis, study design, and some discussion points on how to manage those variables to ensure accurate interpretation of clinical pathology data in toxicity studies. This article is a brief synopsis of presentations given in a session entitled "Deciphering Sources of Variability in Clinical Pathology-It's Not Just about the Numbers" that occurred at the 35th Annual Symposium of the Society of Toxicologic Pathology in San Diego, California.
Physics of thermo-acoustic sound generation
NASA Astrophysics Data System (ADS)
Daschewski, M.; Boehm, R.; Prager, J.; Kreutzbruck, M.; Harrer, A.
2013-09-01
We present a generalized analytical model of thermo-acoustic sound generation based on the analysis of thermally induced energy density fluctuations and their propagation into the adjacent matter. The model provides exact analytical prediction of the sound pressure generated in fluids and solids; consequently, it can be applied to arbitrary thermal power sources such as thermophones, plasma firings, laser beams, and chemical reactions. Unlike existing approaches, our description also includes acoustic near-field effects and sound-field attenuation. Analytical results are compared with measurements of sound pressures generated by thermo-acoustic transducers in air for frequencies up to 1 MHz. The tested transducers consist of titanium and indium tin oxide coatings on quartz glass and polycarbonate substrates. The model reveals that thermo-acoustic efficiency increases linearly with the supplied thermal power and quadratically with thermal excitation frequency. Comparison of the efficiency of our thermo-acoustic transducers with those of piezoelectric-based airborne ultrasound transducers using impulse excitation showed comparable sound pressure values. The present results show that thermo-acoustic transducers can be applied as broadband, non-resonant, high-performance ultrasound sources.
Analysis and Synthesis of Tonal Aircraft Noise Sources
NASA Technical Reports Server (NTRS)
Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen
2012-01-01
Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.
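The analytic-signal step can be illustrated with SciPy's Hilbert transform; the synthetic tone and its fluctuation levels below are invented, and the actual turbofan analysis would first isolate the tone with a bandpass filter before extracting the envelope and instantaneous frequency.

    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(7)

    fs = 44100
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic fan tone with slow random amplitude and frequency wander,
    # standing in for the short-term fluctuations discussed above.
    am = 1.0 + 0.15 * np.cumsum(rng.normal(0.0, 1e-3, t.size))
    fm = 5.0 * np.cumsum(rng.normal(0.0, 1e-3, t.size))
    tone = am * np.sin(2 * np.pi * 2000 * t + 2 * np.pi * np.cumsum(fm) / fs)

    # Analytic signal: envelope = |z|; instantaneous frequency from the
    # derivative of the unwrapped phase.
    z = hilbert(tone)
    envelope = np.abs(z)
    inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
    print(envelope.mean(), inst_freq.mean())   # ~1.0 and ~2000 Hz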
Polymerase chain reaction technology as analytical tool in agricultural biotechnology.
Lipp, Markus; Shillito, Raymond; Giroux, Randal; Spiegelhalter, Frank; Charlton, Stacy; Pinero, David; Song, Ping
2005-01-01
The agricultural biotechnology industry applies polymerase chain reaction (PCR) technology at numerous points in product development. Commodity and food companies as well as third-party diagnostic testing companies also rely on PCR technology for a number of purposes. The primary use of the technology is to verify the presence or absence of genetically modified (GM) material in a product or to quantify the amount of GM material present in a product. This article describes the fundamental elements of PCR analysis and its application to the testing of grains. The document highlights the many areas to which attention must be paid in order to produce reliable test results. These include sample preparation, method validation, choice of appropriate reference materials, and biological and instrumental sources of error. The article also discusses issues related to the analysis of different matrixes and the effect they may have on the accuracy of the PCR analytical results.
Harnessing Scientific Literature Reports for Pharmacovigilance
Ripple, Anna; Tonning, Joseph; Munoz, Monica; Hasan, Rashedul; Ly, Thomas; Francis, Henry; Bodenreider, Olivier
2017-01-01
Objectives: We seek to develop a prototype software analytical tool to augment FDA regulatory reviewers’ capacity to harness scientific literature reports in PubMed/MEDLINE for pharmacovigilance and adverse drug event (ADE) safety signal detection. We also aim to gather feedback through usability testing to assess design, performance, and user satisfaction with the tool. Methods: A prototype, open source, web-based, software analytical tool generated statistical disproportionality data mining signal scores and dynamic visual analytics for ADE safety signal detection and management. We leveraged Medical Subject Heading (MeSH) indexing terms assigned to published citations in PubMed/MEDLINE to generate candidate drug-adverse event pairs for quantitative data mining. Six FDA regulatory reviewers participated in usability testing by employing the tool as part of their ongoing real-life pharmacovigilance activities to provide subjective feedback on its practical impact, added value, and fitness for use. Results: All usability test participants cited the tool’s ease of learning, ease of use, and generation of quantitative ADE safety signals, some of which corresponded to known established adverse drug reactions. Potential concerns included the comparability of the tool’s automated literature search relative to a manual ‘all fields’ PubMed search, missing drugs and adverse event terms, interpretation of signal scores, and integration with existing computer-based analytical tools. Conclusions: Usability testing demonstrated that this novel tool can automate the detection of ADE safety signals from published literature reports. Various mitigation strategies are described to foster improvements in design, productivity, and end user satisfaction. PMID:28326432
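As an illustration of the disproportionality scoring mentioned above (the abstract does not specify which statistic; the proportional reporting ratio shown here is one standard choice), with hypothetical citation counts:

    import math

    def prr(a, b, c, d):
        # Proportional reporting ratio for a drug-event pair from a 2x2
        # table of citation counts:
        #   a: drug + event            b: drug, other events
        #   c: other drugs + event     d: other drugs, other events
        # Returns PRR with an approximate 95% CI (log-normal method).
        val = (a / (a + b)) / (c / (c + d))
        se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
        lo = math.exp(math.log(val) - 1.96 * se)
        hi = math.exp(math.log(val) + 1.96 * se)
        return val, lo, hi

    # Hypothetical counts of MeSH-indexed citations mentioning a drug
    # and an adverse event term (not real PubMed numbers).
    print(prr(a=40, b=1960, c=200, d=97800))   # PRR ~ 9.8 with CI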
NASA Astrophysics Data System (ADS)
Addari, Daniele
The term microvibrations generally refers to accelerations in the order of micro-gs which manifest in a bandwidth from a few Hz up to 500-1000 Hz. The need to accurately characterise these small disturbances acting on board modern satellites, thus allowing the design of dedicated minimisation and control systems, is nowadays a major concern for the success of some space missions. The main issues related to microvibrations are the feasibility of analytically describing the microvibration sources using a series of analysis tools and test experiments, and the prediction of how the dynamics of the microvibration sources couple with those of the satellite structure. In this thesis, a methodology to facilitate the modelling of these phenomena is described. Two aspects are investigated: the characterisation of microvibration sources with a semi-empirical procedure, which allows derivation of the dynamic mass properties of the source, including the gyroscopic effect, with a significantly simpler test configuration and lower computational effort than traditional approaches; and the modelling of the coupled dynamics when the source is mounted on a representative supporting structure of a spacecraft, including the passive and active effects of the source, which allows prediction of the structure response at any location. The methodology was defined through an extensive study, both experimental and numerical, of a reaction wheel assembly, as this is usually identified as the main contributory factor among all microvibration sources. The contributions to the state of the art made during this work include: i) the development of an analytical model of a cantilever-configured reaction wheel able to reproduce all the configurations in which the mechanism may operate, inclusive of the gyroscopic effect; ii) the reformulation of the coupling theory, which allows retrieving the dynamic mass of a microvibration source over a wide range of frequencies and speeds by means of experimental data obtained from measurements of the forces generated when the source is rigidly secured on a dynamometric platform and measurements of the accelerations at the source mounting interface in a free-free suspended boundary condition; iii) a practical example of coupling between a reaction wheel and a honeycomb structural panel, where the coupled loads and the panel response were estimated using the mathematical model and compared with test results obtained during physical microvibration testing of the structural panel, showing a good level of agreement when the gyroscopic effect is also taken into account.
A two-dimensional solution of the FW-H equation for rectilinear motion of sources
NASA Astrophysics Data System (ADS)
Bozorgi, Alireza; Siozos-Rousoulis, Leonidas; Nourbakhsh, Seyyed Ahmad; Ghorbaniasl, Ghader
2017-02-01
In this paper, a subsonic solution of the two-dimensional Ffowcs Williams and Hawkings (FW-H) equation is presented for calculation of noise generated by sources moving with constant velocity in a medium at rest or in a moving medium. The solution is represented in the frequency domain and is valid for observers located far from the noise sources. In order to verify the validity of the derived formula, three test cases are considered, namely a monopole, a dipole, and a quadrupole source in a medium at rest or in motion. The calculated results coincide well with the analytical solutions, validating the applicability of the formula to rectilinear subsonic motion problems.
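For the medium-at-rest monopole case, the analytical reference solution used in such verifications is the free-field Green's function of the 2D Helmholtz equation, a zeroth-order Hankel function. A minimal sketch, with an assumed amplitude convention and source frequency:

```python
import numpy as np
from scipy.special import hankel1

# Free-field 2D monopole (line source) in a medium at rest, e^{-i w t} convention:
# p(r) = (i/4) * H0^(1)(k r); a typical reference for verifying 2D FW-H solvers.
# Amplitude convention and source strength here are illustrative assumptions.
c0 = 340.0                  # speed of sound [m/s]
f = 100.0                   # source frequency [Hz]
k = 2 * np.pi * f / c0      # acoustic wavenumber [1/m]

r = np.linspace(5.0, 100.0, 500)      # far-field observer distances [m]
p = 0.25j * hankel1(0, k * r)         # complex acoustic pressure

# Far-field check: |p| should decay like 1/sqrt(r) in 2D
print(np.allclose(np.abs(p) * np.sqrt(r), np.abs(p[0]) * np.sqrt(r[0]), rtol=0.02))
```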
Risk management of drinking water relies on quality analytical data. Analytical methodology can often be adapted from environmental monitoring sources. However, risk management sometimes presents special analytical challenges because data may be needed from a source for which n...
NASA Astrophysics Data System (ADS)
Falta, R. W.
2004-05-01
Analytical solutions are developed that relate changes in the contaminant mass in a source area to the behavior of biologically reactive dissolved contaminant groundwater plumes. Based on data from field experiments, laboratory experiments, numerical streamtube models, and numerical multiphase flow models, the chemical discharge from a source region is assumed to be a nonlinear power function of the fraction of contaminant mass removed from the source zone. This function can approximately represent source zone mass discharge behavior over a wide range of site conditions ranging from simple homogeneous systems, to complex heterogeneous systems. A mass balance on the source zone with advective transport and first order decay leads to a nonlinear differential equation that is solved analytically to provide a prediction of the time-dependent contaminant mass discharge leaving the source zone. The solution for source zone mass discharge is coupled semi-analytically with a modified version of the Domenico (1987) analytical solution for three-dimensional reactive advective and dispersive transport in groundwater. The semi-analytical model then employs the BIOCHLOR (Aziz et al., 2000; Sun et al., 1999) transformations to model sequential first order parent-daughter biological decay reactions of chlorinated ethenes and ethanes in the groundwater plume. The resulting semi-analytic model thus allows for transient simulation of complex source zone behavior that is fully coupled to a dissolved contaminant plume undergoing sequential biological reactions. Analyses of several realistic scenarios show that substantial changes in the ground water plume can result from the partial removal of contaminant mass from the source zone. These results, however, are sensitive to the nature of the source mass reduction-source discharge reduction curve, and to the rates of degradation of the primary contaminant and its daughter products in the ground water plume.
References:
Aziz, C.E., C.J. Newell, J.R. Gonzales, P. Haas, T.P. Clement, and Y. Sun, 2000, BIOCHLOR Natural Attenuation Decision Support System User's Manual Version 1.0, US EPA Report EPA/600/R-00/008.
Domenico, P.A., 1987, An analytical model for multidimensional transport of a decaying contaminant species, J. Hydrol., 91: 49-58.
Sun, Y., J.N. Petersen, T.P. Clement, and R.S. Skeen, 1999, A new analytical solution for multi-species transport equations with serial and parallel reactions, Water Resour. Res., 35(1): 185-190.
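The core of the model is the nonlinear mass balance dM/dt = -Q*C0*(M/M0)^Gamma, which admits a closed-form solution. A minimal sketch under assumed illustrative parameters (the notation mirrors the abstract, not the paper's code):

```python
import numpy as np

def source_mass(t, M0, C0, Q, gamma):
    """Closed-form source mass under the power-function discharge model.

    dM/dt = -Q * C0 * (M/M0)**gamma   (illustrative notation)
    """
    k = Q * C0 / M0                      # first-order-like rate constant [1/time]
    if np.isclose(gamma, 1.0):
        return M0 * np.exp(-k * t)       # exponential decline for gamma = 1
    m = 1.0 - (1.0 - gamma) * k * t      # dimensionless mass fraction^(1-gamma)
    return M0 * np.clip(m, 0.0, None) ** (1.0 / (1.0 - gamma))

# Assumed illustrative parameters: 1000 kg source, 0.01 kg/m^3 (10 mg/L)
# discharge concentration at full mass, 1000 m^3/yr through the source zone.
t = np.linspace(0.0, 50.0, 6)            # years
for g in (0.5, 1.0, 2.0):
    M = source_mass(t, M0=1000.0, C0=0.01, Q=1000.0, gamma=g)
    print(g, np.round(M, 1))
```

Gamma below 1 depletes the source in finite time, while Gamma above 1 produces a long tail, which is why the results are sensitive to the shape of the mass reduction-discharge reduction curve.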
NASA Technical Reports Server (NTRS)
Rentz, P. E.
1976-01-01
Experimental evaluations of the acoustical characteristics and source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated or hardwall configuration were performed. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of source direct field acoustic radiation were found to be limited by the test section reverberant characteristics to 3.0 feet for omni-directional and highly directional sources. Wind-on noise measurements in the test section, settling chamber and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.
Finite Element modelling of deformation induced by interacting volcanic sources
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jürgen; Rivalta, Eleonora
2010-05-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system comprises more than one source, the assumption of homogeneity in the half-space is violated and several sources are combined, their respective deformation fields being summed. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying their relative position. Furthermore, we considered the impact of topography, loading, and magma compressibility. To quantify the discrepancies and compare the various models, we calculated the difference between analytical and numerical maximum horizontal or vertical surface displacements. We demonstrate that, under certain conditions, combining analytical sources can cause an error of up to 20%.
References:
McTigue, D. F. (1987), Elastic Stress and Deformation Near a Finite Spherical Magma Body: Resolution of the Point Source Paradox, J. Geophys. Res. 92, 12931-12940.
Mogi, K. (1958), Relations between the eruptions of various volcanoes and the deformations of the ground surfaces around them, Bull. Earthquake Res. Inst., Univ. Tokyo 36, 99-134.
Okada, Y. (1992), Internal Deformation Due to Shear and Tensile Faults in a Half-Space, Bull. Seismol. Soc. Am. 82(2), 1018-1040.
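The point-source building block in such comparisons is the Mogi (1958) model, whose surface displacement formulas (for Poisson's ratio 0.25) are compact enough to sketch directly; the source and rock parameters below are illustrative assumptions:

```python
import numpy as np

def mogi_displacements(r, d, a, dP, mu):
    """Surface displacements for a Mogi (1958) point pressure source.

    Classic half-space formulas for Poisson's ratio 0.25; r is radial distance
    from the source axis, d the source depth, a the source radius, dP the
    pressure change, mu the shear modulus. Valid for a << d.
    """
    R3 = (r**2 + d**2) ** 1.5
    uz = 3.0 * a**3 * dP * d / (4.0 * mu * R3)   # uplift
    ur = 3.0 * a**3 * dP * r / (4.0 * mu * R3)   # radial displacement
    return ur, uz

# Assumed illustrative values: 500 m radius source, 5 km deep, 10 MPa, mu = 30 GPa
r = np.linspace(0.0, 20e3, 5)
ur, uz = mogi_displacements(r, d=5e3, a=500.0, dP=10e6, mu=30e9)
print(np.round(uz * 1e3, 2))   # uplift in mm along the profile
```

Superposing two such sources by summing their fields is exactly the approximation whose error the finite element comparison quantifies.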
A Methodology in the Teaching Process of the Derivative and Its Motivation.
ERIC Educational Resources Information Center
Vasquez-Martinez, Claudio-Rafael
Because the derivative is part of a calculus in permanent dialectic, its development demands, on the one hand, an analytical, deductive study and, on the other, the application of rochrematic methods and sources of resources within the calculus of the derivative, which allows knowledge to be confronted dialectically in its different phases and the results to be tested.…
Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique
2018-03-01
Point-of-care blood gas test results may benefit therapeutic decision making by their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. Obtained data were compared against preset quality specifications. Interference of potential pre-analytical confounders on co-oximetry and electrolyte concentrations was studied. The analytical performance was acceptable for all parameters tested. Method comparison demonstrated good agreement with the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate and tHb; i-STAT CG8+: pO2, Na+, iCa2+ and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue on co-oximetry results was found. On the contrary, significant interference from benzalkonium and hemolysis on electrolyte measurements was found, for which the user is notified by an interferent-specific flag. Identification of sample errors from pre-analytical sources, such as interferences, and automatic corrective actions, along with the analytical performance, ease of use and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Nuclear Forensics and Attribution: A National Laboratory Perspective
NASA Astrophysics Data System (ADS)
Hall, Howard L.
2008-04-01
Current capabilities in technical nuclear forensics - the extraction of information from nuclear and/or radiological materials to support the attribution of a nuclear incident to material sources, transit routes, and ultimately perpetrator identity - derive largely from three sources: nuclear weapons testing and surveillance programs of the Cold War, advances in analytical chemistry and materials characterization techniques, and abilities to perform "conventional" forensics (e.g., fingerprints) on radiologically contaminated items. Leveraging that scientific infrastructure has provided a baseline capability to the nation, but we are only beginning to explore the scientific challenges that stand between today's capabilities and tomorrow's requirements. These scientific challenges include radically rethinking radioanalytical chemistry approaches, developing rapidly deployable sampling and analysis systems for field applications, and improving analytical instrumentation. Coupled with the ability to measure a signature faster or more exquisitely, we must also develop the ability to interpret those signatures for meaning. This requires understanding of the physics and chemistry of nuclear materials processes well beyond our current level - especially since we are unlikely to ever have direct access to all potential sources of nuclear threat materials.
NASA Astrophysics Data System (ADS)
Zhang, Yong; Sun, HongGuang; Lu, Bingqing; Garrard, Rhiannon; Neupauer, Roseanna M.
2017-09-01
Backward models have been applied for four decades by hydrologists to identify the source of pollutants undergoing Fickian diffusion, while analytical tools are not available for source identification of super-diffusive pollutants undergoing decay. This technical note evaluates analytical solutions for the source location and release time of a decaying contaminant undergoing super-diffusion using backward probability density functions (PDFs), where the forward model is the space-fractional advection-dispersion equation with decay. A revisit of the well-known MADE-2 tracer test using parameter analysis shows that the peak backward location PDF can predict the tritium source location, while the peak backward travel time PDF underestimates the tracer release time due to the early arrival of tracer particles at the detection well in the maximally skewed, super-diffusive transport. In addition, the first-order decay adds skewness toward earlier arrival times in backward travel time PDFs, resulting in a younger release time, although this impact is minimized at the MADE-2 site because tritium's half-life is long relative to the monitoring period. The main conclusion is that, while non-trivial backward techniques are required to identify the pollutant source location, the pollutant release time can and should be directly estimated from the speed of the peak resident concentration for super-diffusive pollutants with or without decay.
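The idea of a backward location PDF can be sketched in one dimension: given a detection at a known location and elapsed time, the forward transition density is evaluated as a function of the candidate source location. The sketch below assumes a maximally skewed alpha-stable forward density with an illustrative scale relation; it is a cartoon of the approach, not the note's parameterization:

```python
import numpy as np
from scipy.stats import levy_stable

# Minimal 1D sketch of a backward location PDF for a super-diffusive plume.
# Assumptions (illustrative only): the forward transition density from source
# x0 to x after time t is a maximally skewed (beta = 1) alpha-stable density
# with drift v and scale (D*t)**(1/alpha).
alpha, beta = 1.5, 1.0
v, D = 0.5, 0.1                 # mean velocity [m/d], dispersion coefficient
x_d, t = 100.0, 150.0           # detection location [m] and elapsed time [d]

x = np.linspace(0.0, 120.0, 241)                  # candidate source locations
scale = (D * t) ** (1.0 / alpha)
# Backward location PDF: forward density viewed as a function of the source
fwd = levy_stable.pdf(x_d, alpha, beta, loc=x + v * t, scale=scale)
backward = fwd / np.trapz(fwd, x)                 # normalize over the domain
print(f"peak backward source location ~ {x[np.argmax(backward)]:.1f} m")
```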
Suh, Joon Hyuk; Niu, Yue S; Hung, Wei-Lun; Ho, Chi-Tang; Wang, Yu
2017-06-01
Lipid peroxidation gives rise to carbonyl species, some of which are reactive and play a role in the pathogenesis of numerous human diseases. Oils are ubiquitous sources that can be easily oxidized to generate these compounds under oxidative stress. In the present work, we developed a targeted lipidomic method for the simultaneous determination of thirty-five aldehydes and ketones derived from fish oil, an omega-3 fatty acid-rich source, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The analytes include highly toxic reactive carbonyl species (RCS) such as acrolein, crotonaldehyde, trans-4-hydroxy-2-hexenal (HHE), trans-4-hydroxy-2-nonenal (HNE), trans-4-oxo-2-nonenal (ONE), glyoxal and methylglyoxal, all of which are promising biomarkers of lipid peroxidation. They were formed using in vitro Fe(II)-mediated oxidation and derivatized with 2,4-dinitrophenylhydrazine (DNPH) to enable quantitative assay. Before analysis, solid-phase extraction (SPE) was used to further clean the samples. Uniquely different patterns of carbonyl compound generation between omega-3 and omega-6 fatty acids were observed using this lipidomic approach. The method was validated and successfully applied to monitor the formation of carbonyl species by lipid peroxidation in ten different fish oil products. Hypotheses of correlations between the monitored dataset of analytes and their parent fatty acids were also tested using Pearson's correlation test. The results indicate our method is a useful analytical tool for lipid peroxidation studies. Copyright © 2017 Elsevier B.V. All rights reserved.
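The correlation step can be reproduced with a short sketch; the analyte and fatty acid values below are invented solely to show the test, not data from the study:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: HHE (an omega-3 derived aldehyde) level in ten fish oil
# products versus their parent fatty acid content; values are illustrative.
hhe = np.array([12.1, 15.3, 9.8, 20.4, 18.7, 11.0, 14.2, 22.9, 16.5, 10.3])
fa  = np.array([180., 220., 150., 300., 280., 160., 210., 330., 240., 155.])

r, p = pearsonr(fa, hhe)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")  # tests the linear association
```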
NASA Technical Reports Server (NTRS)
Walter, T. J.
1978-01-01
Vibration characteristics for overhauled T53 engines, including rejection rate, principal sources of vibration, and normal procedures taken by the overhaul center to reduce engine vibration are summarized. Analytical and experimental data were compared to determine the engine's dynamic response to unbalance forces with results showing that the engine operates through bending critical speeds. Present rigid rotor balancing techniques are incapable of compensating for the flexible rotor unbalance. A comparison of typical test cell and aircraft vibration levels disclosed significant differences in the engine's dynamic response. A probable spline shift phenomenon was uncovered and investigated. Action items to control costs and reduce vibration levels were identified from analytical and experimental studies.
Tests and applications of nonlinear force-free field extrapolations in spherical geometry
NASA Astrophysics Data System (ADS)
Guo, Y.; Ding, M. D.
2013-07-01
We test a nonlinear force-free field (NLFFF) optimization code in spherical geometry with an analytical solution from Low and Lou. The potential field source surface (PFSS) model serves as the initial and boundary conditions where observed data are not available. The analytical solution can be well recovered if the boundary and initial conditions are properly handled. Next, we discuss the preprocessing procedure for the noisy bottom boundary data, and find that preprocessing is necessary for NLFFF extrapolations when we use the observed photospheric magnetic field as the bottom boundary. Finally, we apply the NLFFF model to a solar area where four active regions interact with each other. An M8.7 flare occurred in one of the active regions. NLFFF modeling in spherical geometry simultaneously constructs the small- and large-scale magnetic field configurations better than the PFSS model does.
NASA Technical Reports Server (NTRS)
Mosher, Marianne
1990-01-01
The principal objective is to assess the adequacy of linear acoustic theory with an impedance wall boundary condition to model the detailed sound field of an acoustic source in a duct. Measurements and calculations of a simple acoustic source in a rectangular concrete duct, lined with foam on the walls and with anechoic end terminations, are compared. Measurement of acoustic pressure for twelve wave numbers provides variation in frequency and in the absorption characteristics of the duct walls. Close to the source, where the interference of wall reflections is minimal, correlation is very good. Away from the source, correlation degrades, especially for the lower frequencies. Sensitivity studies show little effect on the predicted results for changes in impedance boundary condition values, source location, measurement location, temperature, and source model for variations spanning the expected measurement error.
Evaluation of gamma dose effect on PIN photodiode using analytical model
NASA Astrophysics Data System (ADS)
Jafari, H.; Feghhi, S. A. H.; Boorboor, S.
2018-03-01
PIN silicon photodiodes are widely used in applications that may be found in radiation environments, such as space missions, medical imaging and non-destructive testing. Radiation-induced damage in these devices degrades the photodiode parameters. In this work, we have used a new approach to evaluate gamma dose effects on a commercial PIN photodiode (BPX65) based on an analytical model. In this approach, the NIEL parameter has been calculated for gamma rays from a 60Co source by GEANT4. The radiation damage mechanisms have been considered by numerically solving the Poisson and continuity equations with the appropriate boundary conditions, parameters and physical models. Defects caused by radiation in silicon have been formulated in terms of the damage coefficient for the minority carriers' lifetime. The gamma-induced degradation of the silicon PIN photodiode parameters has been analyzed in detail, and the results were compared with experimental measurements as well as with the results of the ATLAS semiconductor simulator to verify and parameterize the analytical model calculations. The results showed reasonable agreement for the BPX65 silicon photodiode irradiated by a 60Co gamma source at total doses up to 5 kGy under different reverse voltages.
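The lifetime damage-coefficient formulation mentioned above is commonly written as 1/tau = 1/tau0 + k*D. A minimal sketch with assumed (not BPX65-specific) numbers shows how the minority-carrier diffusion length degrades with dose:

```python
import numpy as np

# Minimal sketch of radiation-induced minority-carrier lifetime degradation,
# the mechanism the analytical model formulates via a damage coefficient.
# All numbers are assumed for illustration, not measured BPX65 data.
tau0 = 10e-6          # pre-irradiation minority-carrier lifetime [s]
k_tau = 50.0          # assumed lifetime damage coefficient [1/(s*Gy)]

dose = np.array([0.0, 1e3, 2e3, 5e3])        # absorbed dose [Gy], up to 5 kGy
tau = 1.0 / (1.0 / tau0 + k_tau * dose)      # 1/tau = 1/tau0 + k*D

# Diffusion length scales as sqrt(D_n * tau); photocurrent tracks it closely
D_n = 36.0                                   # electron diffusivity [cm^2/s]
L_n = np.sqrt(D_n * tau)
print(np.round(L_n * 1e4, 1))                # diffusion length in micrometres
```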
NASA Astrophysics Data System (ADS)
Nunes, Josane C.
1991-02-01
This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments, and iodine-131 is used to test thyroid function. More recent medical procedures under development which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer treatment modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever radioactive material is handled. Determination of absorbed doses in the examples of the preceding paragraph requires considering boundaries of interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated: results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with the experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate experimental results to within 30%. This is encouraging, but it remains to be determined whether this method performs equally well for other source energies.
Over the last 10 years the EPA has invested in analytic elements as a computational method used in public domain software supporting capture zone delineation for source water assessments and wellhead protection. The current release is called WhAEM2000 (wellhead analytic element ...
Crawford, C L; Hill, H H
2013-03-30
63Ni radioactive ionization is the most common and widely used ion source for ion mobility spectrometry (IMS). Regulatory, financial, and operational concerns with this source have promoted recent development of non-radioactive sources, such as corona discharge ionization (CD), for stand-alone IMS systems. However, there has been no comparison of the negative ion species produced by all three sources in the literature. This study compares the negative reactant and analyte ions produced by three sources on an ion mobility-mass spectrometer: conventional 63Ni, CD, and secondary electrospray ionization (SESI). Results showed that 63Ni and SESI produced the same reactant ion species while CD produced only the nitrate monomer and dimer ions. The analyte ions produced by each ion source were the same, except that the CD source produced a different ion species for the explosive RDX than either the 63Ni or SESI source. Accurate and reproducible reduced mobility (K0) values, including several values reported here for the first time, were found for each explosive with each ion source. Overall, the SESI source most closely reproduced the reactant and analyte ion species profiles of 63Ni. This source may serve as a non-radioactive, robust, and flexible alternative to 63Ni. Copyright © 2013 Elsevier B.V. All rights reserved.
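Reduced mobility values of the kind reported are obtained by normalizing the measured drift-tube mobility to standard temperature and pressure. A minimal sketch with assumed instrument settings:

```python
# Reduced mobility from drift-tube IMS measurements: K = L^2 / (V * t_d),
# normalized to standard temperature and pressure. The instrument values
# below are assumed for illustration only.
L = 7.0          # drift length [cm]
V = 3500.0       # drift voltage [V]
t_d = 8.5e-3     # measured drift time [s]
T = 423.0        # drift-gas temperature [K]
P = 690.0        # drift-gas pressure [Torr]

K = L**2 / (V * t_d)                       # mobility [cm^2/(V*s)]
K0 = K * (273.15 / T) * (P / 760.0)        # reduced mobility
print(f"K = {K:.2f} cm^2/(V s), K0 = {K0:.2f} cm^2/(V s)")
```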
Calculation of Dynamic Loads Due to Random Vibration Environments in Rocket Engine Systems
NASA Technical Reports Server (NTRS)
Christensen, Eric R.; Brown, Andrew M.; Frady, Greg P.
2007-01-01
An important part of rocket engine design is the calculation of random dynamic loads resulting from internal engine "self-induced" sources. These loads are random in nature and can greatly influence the weight of many engine components. Several methodologies for calculating random loads are discussed and then compared to test results using a dynamic testbed consisting of a 60K thrust engine. The engine was tested in a free-free condition with known random force inputs from shakers attached to three locations near the main noise sources on the engine. Accelerations and strains were measured at several critical locations on the engines and then compared to the analytical results using two different random response methodologies.
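One widely used hand-calculation for such random loads is Miles' equation, which estimates the RMS response of a single-degree-of-freedom system to a flat base-input acceleration spectral density. It is shown here only as a representative method; the abstract does not specify the two methodologies compared, and the inputs below are assumed:

```python
import math

def miles_grms(f_n, q, asd):
    """Miles' equation: RMS acceleration of a 1-DOF system under a flat
    random base input. f_n: natural frequency [Hz]; q: amplification factor
    (1 / (2*zeta)); asd: input acceleration spectral density at f_n [g^2/Hz].
    """
    return math.sqrt((math.pi / 2.0) * f_n * q * asd)

# Assumed illustrative inputs, not the engine test values:
g_rms = miles_grms(f_n=150.0, q=10.0, asd=0.05)
print(f"Miles estimate: {g_rms:.1f} g RMS; 3-sigma design load ~ {3 * g_rms:.1f} g")
```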
Flight research on natural laminar flow nacelles - A progress report
NASA Technical Reports Server (NTRS)
Hastings, E. C., Jr.; Schoenster, J. A.; Obara, C. J.; Dodbele, S. S.
1986-01-01
This paper presents a progress report on an ongoing flight experiment for natural laminar flow nacelles. The results given herein were obtained during the first phase of the experiment, in which an instrumented natural laminar flow nacelle fairing was flight tested in the presence of turbofan engine noise and a controlled noise source. The results indicate that with the controlled noise source off, natural laminar flow was measured as far aft as 37 percent of the fairing length. The transition front was irregular in contour, and the extent of natural laminar flow was significantly affected by the relative flow angle for the fairing. In addition to these test results, the paper discusses the results of some recent computational analyses to predict pressure distributions and transition location, and to explain some of the data trends. Comparisons between measured and predicted data indicate that the analytical methods successfully predicted trends for the baseline (no controlled noise source) studies.
What makes us think? A three-stage dual-process model of analytic engagement.
Pennycook, Gordon; Fugelsang, Jonathan A; Koehler, Derek J
2015-08-01
The distinction between intuitive and analytic thinking is common in psychology. However, while often being quite clear on the characteristics of the two processes ('Type 1' processes are fast, autonomous, intuitive, etc. and 'Type 2' processes are slow, deliberative, analytic, etc.), dual-process theorists have been heavily criticized for being unclear on the factors that determine when an individual will think analytically or rely on their intuition. We address this issue by introducing a three-stage model that elucidates the bottom-up factors that cause individuals to engage Type 2 processing. According to the model, multiple Type 1 processes may be cued by a stimulus (Stage 1), leading to the potential for conflict detection (Stage 2). If successful, conflict detection leads to Type 2 processing (Stage 3), which may take the form of rationalization (i.e., the Type 1 output is verified post hoc) or decoupling (i.e., the Type 1 output is falsified). We tested key aspects of the model using a novel base-rate task where stereotypes and base-rate probabilities cued the same (non-conflict problems) or different (conflict problems) responses about group membership. Our results support two key predictions derived from the model: (1) conflict detection and decoupling are dissociable sources of Type 2 processing and (2) conflict detection sometimes fails. We argue that considering the potential stages of reasoning allows us to distinguish early (conflict detection) and late (decoupling) sources of analytic thought. Errors may occur at both stages and, as a consequence, bias arises from both conflict monitoring and decoupling failures. Copyright © 2015 Elsevier Inc. All rights reserved.
Badal, Sunil P; Michalak, Shawn D; Chan, George C-Y; You, Yi; Shelley, Jacob T
2016-04-05
Plasma-based ambient desorption/ionization sources are versatile in that they enable direct ionization of gaseous samples as well as desorption/ionization of analytes from liquid and solid samples. However, ionization matrix effects, caused by competitive ionization processes, can worsen sensitivity or even inhibit detection altogether. The present study is focused on expanding the analytical capabilities of the flowing atmospheric-pressure afterglow (FAPA) source by exploring additional types of ionization chemistry. Specifically, it was found that the abundance and type of reagent ions produced by the FAPA source, and thus the corresponding ionization pathways of analytes, can be altered by changing the source working conditions. A high abundance of proton-transfer reagent ions was observed at relatively high gas flow rates and low discharge currents. Conversely, charge-transfer reagent species were most abundant at low gas flows and high discharge currents. A rather nonpolar model analyte, biphenyl, was found to change ionization pathway significantly based on source operating parameters. Different analyte ions (e.g., MH+ via proton transfer and M+• via charge transfer) were formed under distinct operating parameters, demonstrating two different operating regimes. These tunable ionization modes of the FAPA were used to enable or enhance detection of analytes that traditionally exhibit low sensitivity in plasma-based ADI-MS analyses. In one example, 2,2'-dichloroquaterphenyl, a species difficult or impossible to detect with proton-transfer FAPA or direct analysis in real time (DART), was detected under charge-transfer FAPA conditions. Overall, this unique mode of operation increases the number and range of detectable analytes and has the potential to lessen ionization matrix effects in ADI-MS analyses.
A Generic analytical solution for modelling pumping tests in wells intersecting fractures
NASA Astrophysics Data System (ADS)
Dewandel, Benoît; Lanini, Sandra; Lachassagne, Patrick; Maréchal, Jean-Christophe
2018-04-01
The behaviour of transient flow due to pumping in fractured rocks has been studied for at least the past 80 years. Analytical solutions have been proposed for a well intersecting and pumping from one vertical, horizontal or inclined fracture in homogeneous aquifers, but their domain of application, even if covering various fracture geometries, was restricted to isotropic or anisotropic aquifers whose potential boundaries had to be parallel or orthogonal to the fracture direction. The issue thus remained unsolved for many field cases: for example, a well intersecting and pumping from a fracture in a multilayer or dual-porosity aquifer, where intersected fractures are not necessarily parallel or orthogonal to aquifer boundaries; several fractures with various orientations intersecting the well; or pumping not only from the fractures, but also from the aquifer through the screened interval of the well. Using a mathematical demonstration, we show that integrating the well-known Theis analytical solution (Theis, 1935) along the fracture axis is identical to the equally well-known analytical solution of Gringarten et al. (1974) for a uniform-flux fracture fully penetrating a homogeneous aquifer. This result implies that any existing line- or point-source solution can be used for implementing one or more discrete fractures that are intersected by the well. Several theoretical examples are presented and discussed: a single vertical fracture in a dual-porosity aquifer or in a multi-layer system (with a partially intersecting fracture); and one and two inclined fractures in a leaky-aquifer system with pumping either only from the fracture(s), or also from the aquifer between fracture(s) in the screened interval of the well. For the cases with several pumping sources, analytical solutions for the flow-rate contribution from each individual source (fractures and well) are presented, and the drawdown behaviour according to the length of the pumped screened interval of the well is discussed. Other advantages of the proposed generic analytical solution are also given. The application of this solution to field data should provide additional information on fracture geometry, as well as identifying the connectivity between the pumped fractures and other aquifers.
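The central identity, integrating Theis point sinks along the fracture to recover the uniform-flux fracture solution, is easy to sketch numerically; the aquifer and fracture parameters below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def fracture_drawdown(x, y, t, Q, T, S, xf):
    """Drawdown for a uniform-flux vertical fracture of half-length xf,
    built by integrating Theis line sinks along the fracture axis.

    Illustrative implementation of the superposition idea: each point of
    the fracture pumps Q / (2 * xf) per unit length.
    """
    def theis(xp):
        r2 = (x - xp) ** 2 + y ** 2
        u = r2 * S / (4.0 * T * t)
        return exp1(u)                      # Theis well function W(u)
    integral, _ = quad(theis, -xf, xf)
    return integral / (4.0 * np.pi * T) * (Q / (2.0 * xf))

# Assumed aquifer and fracture parameters (T [m^2/d], S [-], Q [m^3/d], xf [m])
s = fracture_drawdown(x=10.0, y=5.0, t=1.0, Q=100.0, T=50.0, S=1e-4, xf=20.0)
print(f"drawdown = {s:.3f} m")
```

Replacing the Theis kernel with a leaky or dual-porosity point-source solution extends the same superposition to the more general cases the paper discusses.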
Caesium sputter ion source compatible with commercial SIMS instruments
NASA Astrophysics Data System (ADS)
Belykh, S. F.; Palitsin, V. V.; Veryovkin, I. V.; Kovarsky, A. P.; Chang, R. J. H.; Adriaens, A.; Dowsett, M.; Adams, F.
2006-07-01
A simple design for a caesium sputter cluster ion source compatible with commercially available secondary ion mass spectrometers is reported. This source has been tested with the Cameca IMS 4f instrument using cluster Si_n- and Cu_n- ions, and will shortly be retrofitted to the floating low energy ion gun (FLIG) of the type used on the Cameca 4500/4550 quadrupole instruments. Our experiments with surface characterization and depth profiling conducted to date demonstrate improvements in the analytical capabilities of the SIMS instrument due to the non-additive enhancement of secondary ion emission and the shorter ion ranges of polyatomic projectiles compared to atomic ions at the same impact energy.
Recommended OSC design and analysis of AMTEC power system for outer-planet missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schock, A.; Noravian, H.; Or, C.
1999-01-01
The paper describes OSC designs and analyses of AMTEC cells and radioisotope power systems for possible application to NASA's Europa Orbiter and Pluto Kuiper Express missions, and compares their predicted performance with JPL's preliminary mission goals. The latest cell and generator designs presented here were the culmination of studies covering a wide variety of generator configurations and operating parameters. The many steps and rationale leading to OSC's design evolution and materials selection were discussed in earlier publications and will not be repeated here, except for a description of OSC's latest design, including a recent heat source support scheme and cell configuration that have not been described in previous publications. As shown, that heat source support scheme eliminates all contact between the heat source and the AMTEC (Alkali Metal Thermal-to-Electrical Conversion) cells, which simplifies the generator's structural design as well as its fabrication and assembly procedure. An additional purpose of the paper is to describe a revised cell design and fabrication procedure which represent a major departure from previous OSC designs. Previous cells had a uniform diameter, but in the revised design the cell wall beyond the BASE tubes has a greatly reduced diameter. The paper presents analytical performance predictions which show that the revised ("chimney") cell design yields substantially higher efficiencies than the previous (cylindrical) design. This makes it possible to meet and substantially exceed the JPL-stipulated EOM power goal with four instead of six General Purpose Heat Source (GPHS) modules, resulting in a one-third reduction in the heat source mass, cost, and fuel requirements. OSC's performance predictions were based on its techniques for the coupled thermal, electrical, and fluid flow analyses of AMTEC generators. Those analytical techniques have been partially validated by tests of prototypic test assemblies designed by OSC, built by AMPS, and tested by AFRL. The analytical results indicate that the OSC power system design, operating within the stipulated evaporator and clad temperature limits and well within its mass goals, can yield EOM power outputs and system efficiencies that substantially exceed the JPL-specified goals for the Europa and Pluto missions. However, those results only account for radioisotope decay. Other degradation mechanisms are still under study, and their short- and long-term effects must be quantified and understood before final conclusions about the adequacy and competitiveness of the AMTEC system can be drawn. © 1999 American Institute of Physics.
Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S
2016-08-01
A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code and the versatility of a practical analytical multisource model, which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so that variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Dose calculations of the depth dose and profiles for field sizes from 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, specifically it was developed to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.
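The gamma criterion used in the film comparison combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D sketch of a global gamma analysis (synthetic profiles, not the study's data):

```python
import numpy as np

def gamma_index_1d(x, d_meas, d_calc, dta_mm=2.0, dd_frac=0.03):
    """1D gamma analysis (global): dose criterion as a fraction of the
    maximum measured dose, distance-to-agreement in the units of x (mm).
    Brute-force search over all calculated points; fine for coarse grids.
    """
    dD = dd_frac * d_meas.max()
    gam = np.empty_like(d_meas)
    for i, (xm, dm) in enumerate(zip(x, d_meas)):
        term = ((x - xm) / dta_mm) ** 2 + ((d_calc - dm) / dD) ** 2
        gam[i] = np.sqrt(term.min())
    return gam

# Synthetic example: calculated profile shifted 1 mm from measurement
x = np.arange(0.0, 100.0, 0.5)                   # position [mm]
d_meas = np.exp(-((x - 50.0) / 15.0) ** 2)       # measured dose (a.u.)
d_calc = np.exp(-((x - 51.0) / 15.0) ** 2)       # calculated dose (a.u.)
gam = gamma_index_1d(x, d_meas, d_calc)
print(f"pass rate (gamma <= 1): {100.0 * np.mean(gam <= 1.0):.1f}%")
```

A point passes when gamma is at most 1, i.e., the calculated distribution agrees with the measurement within the combined dose and distance tolerances.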
Manickum, Thavrin; John, Wilson
2015-07-01
The availability of national test centers to offer a routine service for analysis and quantitation of some selected steroid hormones [natural estrogens (17-β-estradiol, E2; estrone, E1; estriol, E3), synthetic estrogen (17-α-ethinylestradiol, EE2), androgen (testosterone), and progestogen (progesterone)] in wastewater matrix was investigated; corresponding internationally used chemical- and immuno-analytical test methods were reviewed. The enzyme-linked immunosorbent assay (ELISA, an immuno-analytical technique) was also assessed for its suitability as a routine test method to quantitate the levels of these hormones at a sewage/wastewater treatment plant (WTP) (Darvill, Pietermaritzburg, South Africa) over a 2-year period. The method performance and other relevant characteristics of the immuno-analytical ELISA method were compared to conventional chemical-analytical methodology, such as gas/liquid chromatography-mass spectrometry (GC/LC-MS) and GC/LC-tandem mass spectrometry (MS/MS), for quantitation of the steroid hormones in wastewater and environmental waters. The national immuno-analytical ELISA technique was found to be sensitive (LOQ 5 ng/L, LOD 0.2-5 ng/L), accurate (mean recovery 96%), precise (RSD 7-10%), and cost-effective for screening and quantitation of these steroid hormones in wastewater and environmental water matrix. A survey of the most current international literature indicates a fairly equal use of the LC-MS/MS, GC-MS/MS (chemical-analytical), and ELISA (immuno-analytical) test methods for screening and quantitation of the target steroid hormones in both water and wastewater matrix. Internationally, the observed sensitivity, based on LOQ (ng/L), for the steroid estrogens E1, E2, and EE2 is, in decreasing order: LC-MS/MS (0.08-9.54) > GC-MS (1) > ELISA (5) (chemical-analytical > immuno-analytical). At the national level, the routine, unoptimized chemical-analytical LC-MS/MS method was found to lack the sensitivity required to meet environmental requirements for steroid hormone quantitation. Further optimization of the sensitivity of the chemical-analytical LC-tandem mass spectrometry methods, especially for wastewater screening, is required in South Africa. Risk assessment studies showed that it was not practical to propose standards or allowable limits for the steroid estrogens E1, E2, EE2, and E3; the use of predicted-no-effect concentrations of the steroid estrogens appears to be appropriate for their risk assessment in relation to aquatic organisms. For raw water sources, drinking water, and raw and treated wastewater, the use of bioassays with trigger values is a useful screening option to decide whether further examination of specific endocrine activity may be warranted, or whether concentrations of such activity are of low priority with respect to health concerns in the human population. The achievement of improved quantitation limits for immuno-analytical methods like ELISA used for compound quantitation, and standardization of the method for measuring E2 equivalents (EEQs) used for biological activity (endocrine, e.g., estrogenic), are areas for future EDC research.
Whittlesea, B W; Price, J R
2001-03-01
In studies of the mere exposure effect, rapid presentation of items can increase liking without accurate recognition. The effect on liking has been explained as a misattribution of fluency caused by prior presentation. However, fluency is also a source of feelings of familiarity. It is, therefore, surprising that prior experience can enhance liking without also causing familiarity-based recognition. We suggest that when study opportunities are minimal and test items are perceptually similar, people adopt an analytic approach, attempting to recognize distinctive features. That strategy fails because rapid presentation prevents effective encoding of such features; it also prevents people from experiencing fluency and a consequent feeling of familiarity. We suggest that the liking-without-recognition effect results from using an effective (nonanalytic) strategy in judging pleasantness, but an ineffective (analytic) strategy in recognition. Explanations of the mere exposure effect based on a distinction between implicit and explicit memory are unnecessary.
Investigation of fluorine content in PM2.5 airborne particles of Istanbul, Turkey.
Ozbek, Nil; Baltaci, Hakki; Baysal, Asli
2016-07-01
Fluorine determination in airborne samples is important due to its release into the air from both natural and artificial sources. It can travel by wind over large distances before depositing on the Earth's surface. Its concentration in various matrices is limited and controlled by regulations because of the health risks associated with environmental exposure. In this work, fluorine was determined in PM2.5 airborne samples by high-resolution continuum source electrothermal atomic absorption spectrometry. For this purpose, the PM2.5 airborne particulates were collected on quartz filters using high-volume samplers (500 L/min) in Istanbul (Turkey) for 96 h during January to June in two consecutive years. Then, instrumental and experimental parameters were optimized for the analyte in airborne samples. The validity of the method was tested using a standard reference material, and results agreed with certified values at the 95% confidence level. The fluorine concentrations and meteorological conditions were compared statistically.
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
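A minimal sketch of the defining correction, under an assumed notation where f is the source-surface distance and d_m the depth of maximum dose (the paper's exact convention may differ):

```python
import numpy as np

# Inverse-square correction that defines the NFD (assumed notation):
# NFD(d) = FDD(d) * ((f + d) / (f + d_m))**2, which strips the beam-divergence
# part of the depth dependence out of the measured fractional depth dose.
f, d_m = 100.0, 1.5                        # SSD [cm], depth of dose maximum [cm]
d = np.array([1.5, 5.0, 10.0, 20.0])       # depths [cm]
fdd = np.array([100.0, 86.0, 67.0, 40.0])  # illustrative measured FDD [%]

nfd = fdd * ((f + d) / (f + d_m)) ** 2
print(np.round(nfd, 1))   # slower-varying quantity fit by the 7-parameter model
```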
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
DOE Office of Scientific and Technical Information (OSTI.GOV)
van de Geijn, J.; Fraass, B.A.
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
NASA Technical Reports Server (NTRS)
Bennett, G.; Koenig, K.; Miley, S. J.; Mcwhorter, J.; Wells, G.
1981-01-01
A bibliography was compiled of all readily available sources of propeller analytical and experimental studies conducted during the 1930 through 1960 period. A propeller test stand was developed for the measurement of thrust and torque characteristics of full-scale general aviation propellers and installed in the LaRC 30 x 60 foot full-scale wind tunnel. A tunnel entry was made during the January through February 1980 period. Several propellers were tested, but unforeseen difficulties with the shaft thrust-torque balance severely degraded the data quality.
Assessment of COPD-related outcomes via a national electronic medical record database.
Asche, Carl; Said, Quayyim; Joish, Vijay; Hall, Charles Oaxaca; Brixner, Diana
2008-01-01
The technology and sophistication of healthcare utilization databases have expanded over the last decade to include results of lab tests, vital signs, and other clinical information. This review provides an assessment of the methodological and analytical challenges of conducting chronic obstructive pulmonary disease (COPD) outcomes research in a national electronic medical records (EMR) dataset and its potential application to the assessment of national health policy issues, together with a description of the challenges and limitations. An EMR database and its application to measuring outcomes for COPD are described. The ability to measure adherence to the COPD evidence-based practice guidelines generated by the NIH, and to HEDIS quality indicators, in this database was examined. Case studies, before and after their publication, were used to assess adherence to guidelines and gauge conformity to quality indicators. The EMR was the only source of information for pulmonary function tests, but the low frequency of test ordering by primary care providers was an issue. The EMR data can be used to explore the impact of variation in healthcare provision on clinical outcomes. The EMR database permits access to specific lab data and biometric information. The richness and depth of information on "real world" use of health services for large population-based analytical studies at relatively low cost render such databases an attractive resource for outcomes research. Various sources of information exist to perform outcomes research. It is important to understand the desired endpoints of such research and choose the appropriate database source.
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
Testing and Analytical Modeling for Purging Process of a Cryogenic Line
NASA Technical Reports Server (NTRS)
Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.
2013-01-01
The purging operations for cryogenic main propulsion systems of upper stage are usually carried out for the following cases: 1) Purging of the Fill/Drain line after completion of propellant loading. This operation allows the removal of residual propellant mass; and 2) Purging of the Feed/Drain line if the mission is scrubbed. The lines would be purged by connections to a ground high-pressure gas storage source. The flowrate of purge gas should be regulated such that the pressure in the line will not exceed the required maximum allowable value. Exceeding the maximum allowable pressure may lead to structural damage in the line. To gain confidence in analytical models of the purge process, a test series was conducted. The test article, a 20-cm incline line, was filled with liquid hydrogen and then purged with gaseous helium (GHe). The influences of GHe flowrates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program, an in-house general-purpose computer program for flow network analysis, was utilized to model and simulate the testing. The test procedures, modeling descriptions, and the results will be presented in the final paper.
Testing and Analytical Modeling for Purging Process of a Cryogenic Line
NASA Technical Reports Server (NTRS)
Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.
2015-01-01
The purging operations for cryogenic main propulsion systems of upper stage are usually carried out for the following cases: 1) Purging of the Fill/Drain line after completion of propellant loading. This operation allows the removal of residual propellant mass; and 2) Purging of the Feed/Drain line if the mission is scrubbed. The lines would be purged by connections to a ground high-pressure gas storage source. The flow-rate of purge gas should be regulated such that the pressure in the line will not exceed the required maximum allowable value. Exceeding the maximum allowable pressure may lead to structural damage in the line. To gain confidence in analytical models of the purge process, a test series was conducted. The test article, a 20-cm incline line, was filled with liquid hydrogen and then purged with gaseous helium (GHe). The influences of GHe flow-rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program, an in-house general-purpose computer program for flow network analysis, was utilized to model and simulate the testing. The test procedures, modeling descriptions, and the results will be presented in the final paper.
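The constraint that motivates regulating the purge flow rate can be bounded with a lumped ideal-gas sketch of the line volume; all values below are assumptions for illustration, not test-article data:

```python
# Minimal lumped-volume sketch of the purge pressure constraint: an ideal-gas
# mass balance on the line volume, ignoring venting and heat transfer (a
# conservative bounding assumption, not the GFSSP network model).
R_he = 2077.0        # helium gas constant [J/(kg*K)]
T = 250.0            # assumed purge gas temperature in the line [K]
V = 0.05             # assumed internal line volume [m^3]
P0 = 101.3e3         # initial line pressure [Pa]
P_max = 500.0e3      # assumed maximum allowable line pressure [Pa]

mdot = 0.01          # candidate GHe purge flow rate [kg/s]
dPdt = mdot * R_he * T / V                 # isothermal pressure-rise rate [Pa/s]
t_limit = (P_max - P0) / dPdt              # time before the limit is reached
print(f"dP/dt = {dPdt / 1e3:.1f} kPa/s; limit reached in {t_limit:.1f} s")
```

In practice the line also vents and exchanges heat with residual propellant, so a network code such as GFSSP is used for the actual transient; the sketch only bounds why the flow rate must be throttled.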
Electrospray ion source with reduced analyte electrochemistry
Kertesz, Vilmos [Knoxville, TN; Van Berkel, Gary [Clinton, TN
2011-08-23
An electrospray ion (ESI) source and method capable of ionizing an analyte molecule without oxidizing or reducing the analyte of interest. The ESI source can include an emitter having a liquid conduit, a working electrode having a liquid contacting surface, a spray tip, a secondary working electrode, and a charge storage coating covering partially or fully the liquid contacting surface of the working electrode. The liquid conduit, the working electrode and the secondary working electrode can be in liquid communication. The electrospray ion source can also include a counter electrode proximate to, but separated from, said spray tip. The electrospray ion source can also include a power system for applying a voltage difference between the working electrodes and a counter-electrode. The power system can deliver pulsed voltage changes to the working electrodes during operation of said electrospray ion source to minimize the surface potential of the charge storage coating.
Electrospray ion source with reduced analyte electrochemistry
Kertesz, Vilmos; Van Berkel, Gary J
2013-07-30
An electrospray ion (ESI) source and method capable of ionizing an analyte molecule without oxidizing or reducing the analyte of interest. The ESI source can include an emitter having a liquid conduit, a working electrode having a liquid contacting surface, a spray tip, a secondary working electrode, and a charge storage coating covering partially or fully the liquid contacting surface of the working electrode. The liquid conduit, the working electrode and the secondary working electrode can be in liquid communication. The electrospray ion source can also include a counter electrode proximate to, but separated from, said spray tip. The electrospray ion source can also include a power system for applying a voltage difference between the working electrodes and a counter-electrode. The power system can deliver pulsed voltage changes to the working electrodes during operation of said electrospray ion source to minimize the surface potential of the charge storage coating.
The PAC-MAN model: Benchmark case for linear acoustics in computational physics
NASA Astrophysics Data System (ADS)
Ziegelwanger, Harald; Reiter, Paul
2017-10-01
Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well-known example of such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut-out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
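For a concrete feel for this kind of modal formulation, the sketch below computes the simpler textbook case of plane-wave scattering from a full rigid cylinder as a Bessel/Hankel modal sum; the PAC-MAN model generalizes this machinery to a cylinder with a variable cut-out sector and arbitrary line-source positions. This is an illustration under standard assumptions, not the paper's code.

# Modal sum for a unit plane wave scattered by a rigid cylinder of radius a.
import numpy as np
from scipy.special import jvp, hankel1, h1vp

def scattered_pressure(k, a, r, theta, n_modes=40):
    """Scattered field at (r, theta); rigid-wall boundary condition at r=a."""
    p = np.zeros_like(theta, dtype=complex)
    for n in range(n_modes):
        eps = 1.0 if n == 0 else 2.0                     # Neumann factor
        coeff = -eps * (1j ** n) * jvp(n, k * a) / h1vp(n, k * a)
        p += coeff * hankel1(n, k * r) * np.cos(n * theta)
    return p

theta = np.linspace(0.0, 2.0 * np.pi, 361)
p_sc = scattered_pressure(k=5.0, a=1.0, r=3.0, theta=theta)
print(f"|p| at theta=0: {abs(p_sc[0]):.3f}")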
High Frequency Plasma Generators for Ion Thrusters
NASA Technical Reports Server (NTRS)
Divergilio, W. F.; Goede, H.; Fosnight, V. V.
1981-01-01
The results of a one-year program to experimentally adapt two new types of high frequency plasma generators to argon ion thrusters and to analytically study a third high frequency source concept are presented. Conventional 30 cm two-grid ion extraction was utilized or proposed for all three sources. The two plasma generating methods selected for experimental study were a radio frequency induction (RFI) source, operating at about 1 MHz, and an electron cyclotron heated (ECH) plasma source operating at about 5 GHz. Both sources utilize multi-line-cusp permanent magnet configurations for plasma confinement. The plasma characteristics, plasma loading of the rf antenna, and the rf frequency dependence of source efficiency and antenna circuit efficiency are described for the RFI multi-cusp source. In a series of tests of this source at Lewis Research Center, minimum discharge losses of 220+/-10 eV/ion were obtained with a propellant utilization of 0.45 at a beam current of 3 amperes. Possible improvement modifications are discussed.
ESIP Earth Sciences Data Analytics (ESDA) Cluster - Work in Progress
NASA Technical Reports Server (NTRS)
Kempler, Steven
2015-01-01
The purpose of this poster is to promote a common understanding of the usefulness of, and activities that pertain to, data analytics and, more broadly, the data scientist; to facilitate collaborations to better understand the cross usage of heterogeneous datasets and to provide accommodating data analytics expertise, now and as the needs evolve into the future; and to identify gaps that, once filled, will further collaborative activities. Objectives: provide a forum for academic discussions that gives ESIP members a better understanding of the various aspects of Earth science data analytics; bring in guest speakers to describe external efforts and further teach us about the broader use of data analytics; and perform activities that compile use cases generated from specific community needs to cross-analyze heterogeneous data, compile sources of analytics tools (in particular, to satisfy the needs of the above data users), examine gaps between needs and sources, examine gaps between needs and community expertise, document the specific data analytics expertise needed to perform Earth science data analytics, and seek graduate data analytics/data science student internship opportunities.
Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Shimizu, Yoshihisa; Xia, Liangyu; Hoffmann, Mariza; Shah, Swarup; Matsha, Tandi; Wassung, Janette; Smit, Francois; Ruzhanskaya, Anna; Straseski, Joely; Bustos, Daniel N; Kimura, Shogo; Takahashi, Aki
2017-04-01
The intent of this study, based on a global multicenter study of reference values (RVs) for serum analytes, was to explore biological sources of variation (SVs) of the RVs among 12 countries around the world. As described in the first part of this paper, RVs of 50 major serum analytes from 13,396 healthy individuals living in 12 countries were obtained. Analyzed in this study were 23 clinical chemistry analytes and 8 analytes measured by immunoturbidimetry. Multiple regression analysis was performed for each gender, country by country, analyte by analyte, by setting four major SVs (age, BMI, and levels of drinking and smoking) as a fixed set of explanatory variables. For analytes with skewed distributions, log-transformation was applied. The association of each source of variation with RVs was expressed as the partial correlation coefficient (rp). Obvious gender and age-related changes in the RVs were observed in many analytes, almost consistently between countries. Compilation of age-related variations of RVs after adjusting for between-country differences revealed peculiar patterns specific to each analyte. Judged from the rp, BMI-related changes were observed for many nutritional and inflammatory markers in almost all countries. However, the slope of linear regression of BMI vs. RV differed greatly among countries for some analytes. Alcohol- and smoking-related changes were observed less conspicuously in a limited number of analytes. The features of sex, age, alcohol, and smoking-related changes in RVs of the analytes were largely comparable worldwide. The finding of differences in BMI-related changes among countries in some analytes is quite relevant to understanding ethnic differences in susceptibility to nutritionally related diseases. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
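The regression-to-partial-correlation step described above can be sketched with the standard identity rp = t / sqrt(t^2 + df), applied to each explanatory variable's t-statistic; the column names and synthetic data below are hypothetical stand-ins for one country/gender subset.

# Sketch: partial correlations of reference values with four SVs via OLS.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def partial_correlations(df, analyte, log_transform=False):
    y = np.log(df[analyte]) if log_transform else df[analyte]
    X = sm.add_constant(df[["age", "bmi", "drinking", "smoking"]])
    fit = sm.OLS(y, X).fit()
    t, dof = fit.tvalues.drop("const"), fit.df_resid
    return t / np.sqrt(t**2 + dof)     # one rp per source of variation

rng = np.random.default_rng(0)
demo = pd.DataFrame({"age": rng.uniform(20, 65, 500),
                     "bmi": rng.normal(25, 4, 500),
                     "drinking": rng.integers(0, 4, 500),
                     "smoking": rng.integers(0, 4, 500)})
demo["crp"] = np.exp(0.05 * demo["bmi"] + rng.normal(0, 0.5, 500))  # synthetic
print(partial_correlations(demo, "crp", log_transform=True))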
Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration
NASA Technical Reports Server (NTRS)
Merritt, D. A.; Brand, W. A.; Hayes, J. M.
1994-01-01
In trial analyses of a series of n-alkanes, precise determinations of 13C contents were based on isotopic standards introduced by five different techniques and results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO2 was supplied from a conventional inlet and mixed with the analyte in the ion source; or CO2 was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO2 standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (sigma = 0.00006 at.% 13C, or 0.06‰, for 250 pmol C as CO2 reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ sigma ≤ 0.0002 at.%, or 0.1‰ ≤ sigma ≤ 0.2‰).
OLED-based biosensing platform with ZnO nanoparticles for enzyme immobilization
NASA Astrophysics Data System (ADS)
Cai, Yuankun; Shinar, Ruth; Shinar, Joseph
2009-08-01
Organic light-emitting diode (OLED)-based sensing platforms are attractive for photoluminescence (PL)-based monitoring of a variety of analytes. Among the promising OLED attributes for sensing applications are the thinness and design flexibility of the OLED pixel array that is used for PL excitation. To generate a compact, field-deployable sensor, the other major sensor components, such as the sensing probe and the photodetector, in addition to the thin excitation source, should be compact. To this end, the OLED-based sensing platform was tested with composite thin biosensing films, where oxidase enzymes were immobilized on ZnO nanoparticles, rather than dissolved in solution, to generate a more compact device. The analytes tested, glucose, cholesterol, and lactate, were monitored by following their oxidation reactions in the presence of oxygen and their respective oxidase enzymes. During such reactions, oxygen is consumed and its residual concentration, which is determined by the initial concentration of the above-mentioned analytes, is monitored. The sensors utilized the oxygen-sensitive dye Pt octaethylporphyrin, embedded in polystyrene. The enzymes were sandwiched between two thin ZnO layers, an approach that was found to improve the stability of the sensing probes.
2006-10-17
Name, address, telephone number, and technical point of contact at company supplying product. (3) Material safety data sheet (MSDS) and label...existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments...Depot level maintenance cleaning. Data analysis and interpretation are based on analytical test results as well as visual inspections performed on
BARTTest: Community-Standard Atmospheric Radiative-Transfer and Retrieval Tests
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Himes, Michael D.; Cubillos, Patricio E.; Blecic, Jasmina; Challener, Ryan C.
2018-01-01
Atmospheric radiative transfer (RT) codes are used both to predict planetary and brown-dwarf spectra and in retrieval algorithms to infer atmospheric chemistry, clouds, and thermal structure from observations. Observational plans, theoretical models, and scientific results depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. The community needs a suite of test calculations with analytically, numerically, or at least community-verified results. We therefore present the Bayesian Atmospheric Radiative Transfer Test Suite, or BARTTest. BARTTest has four categories of tests: analytically verified RT tests of simple atmospheres (single line in single layer, line blends, saturation, isothermal, multiple line-list combination, etc.), community-verified RT tests of complex atmospheres, synthetic retrieval tests on simulated data with known answers, and community-verified real-data retrieval tests. BARTTest is open-source software intended for community use and further development. It is available at https://github.com/ExOSPORTS/BARTTest. We propose this test suite as a standard for verifying atmospheric RT and retrieval codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G, NASA Astrophysics Data Analysis Program grant NNX13AF38G, and NASA Exoplanets Research Program grant NNX17AB62G.
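To make the simplest test category concrete, here is a minimal single-line, single-layer transmission check of the Beer-Lambert type, I(nu) = I0 exp(-tau(nu)); the line parameters are illustrative and are not taken from BARTTest itself.

# Transmission through one isothermal layer with a single Lorentz line.
import numpy as np

def lorentz_tau(nu, nu0, strength, gamma, column):
    """Optical depth of a single Lorentz-broadened line (illustrative units)."""
    profile = (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
    return strength * column * profile

nu = np.linspace(990.0, 1010.0, 2001)            # wavenumber grid [cm^-1]
tau = lorentz_tau(nu, nu0=1000.0, strength=1e-22, gamma=0.1, column=1e21)
transmission = np.exp(-tau)                      # analytic answer to verify against
print(f"line-center transmission: {transmission[tau.argmax()]:.4f}")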
Vacata, Vladimir; Jahns-Streubel, Gerlinde; Baldus, Mirjana; Wood, William Graham
2007-01-01
This report was written in response to the article by Wood published recently in this journal. It describes a practical solution to the problems of controlling the pre-analytical phase in the clinical diagnostic laboratory. As an indicator of quality in the pre-analytical phase of sample processing, a target analyte was chosen which is sensitive to delay in centrifugation and/or analysis. The results of analyses of the samples sent by satellite medical practitioners were compared with those from an on-site hospital laboratory with a controllable optimized pre-analytical phase. The aim of the comparison was: (a) to identify those medical practices whose mean/median sample values significantly deviate from those of the control situation in the hospital laboratory due to the possible problems in the pre-analytical phase; (b) to aid these laboratories in the process of rectifying these problems. A Microsoft Excel-based Pre-Analytical Survey tool (PAS tool) has been developed which addresses the above mentioned problems. It has been tested on serum potassium which is known to be sensitive to delay and/or irregularities in sample treatment. The PAS tool has been shown to be one possibility for improving the quality of the analyses by identifying the sources of problems within the pre-analytical phase, thus allowing them to be rectified. Additionally, the PAS tool has an educational value and can also be adopted for use in other decentralized laboratories.
Llorente-Mirandes, Toni; Rubio, Roser; López-Sánchez, José Fermín
2017-01-01
Here we review recent developments in analytical proposals for the assessment of inorganic arsenic (iAs) content in food products. Interest in the determination of iAs in products for human consumption such as food commodities, wine, and seaweed, among others, is fueled by the wide recognition of its toxic effects on humans, even at low concentrations. Currently, the need for robust and reliable analytical methods is recognized by various international safety and health agencies, and by organizations in charge of establishing acceptable tolerance levels of iAs in food. This review summarizes the state of the art of analytical methods while highlighting tools for the quality assessment of the results, such as the production and evaluation of certified reference materials (CRMs) and the availability of specific proficiency testing (PT) programmes. Because the number of studies dedicated to the subject of this review has increased considerably over recent years, the sources consulted and cited here are limited to those from 2010 to the end of 2015.
RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.
Varghese, Blesson; Patel, Ishan; Barker, Adam
2015-01-01
Large-scale ad hoc analytics of genomic data is popular using the R programming language supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R scripts using Bioconductor packages, referred to as 'RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.
100-B/C Target Analyte List Development for Soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
R.W. Ovink
2010-03-18
This report documents the process used to identify source area target analytes in support of the 100-B/C remedial investigation/feasibility study addendum to DOE/RL-2008-46. This report also establishes the analyte exclusion criteria applicable for 100-B/C use and the analytical methods needed to analyze the target analytes.
Performance criteria and quality indicators for the post-analytical phase.
Sciacovelli, Laura; Aita, Ada; Padoan, Andrea; Pelloso, Michela; Antonelli, Giorgia; Piva, Elisa; Chiozza, Maria Laura; Plebani, Mario
2016-07-01
Quality indicators (QIs) used as performance measurements are an effective tool in accurately estimating quality, identifying problems that may need to be addressed, and monitoring the processes over time. In Laboratory Medicine, QIs should cover all steps of the testing process, as error studies have confirmed that most errors occur in the pre- and post-analytical phases of testing. The aim of the present study is to provide preliminary results on QIs and related performance criteria in the post-analytical phase. This work was conducted according to a previously described study design based on the voluntary participation of clinical laboratories in the project on QIs of the Working Group "Laboratory Errors and Patient Safety" (WG-LEPS) of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). Overall, the data collected highlighted an improvement or stability in performance over time for all reported indicators, thus demonstrating that the use of QIs is effective in a quality improvement strategy. Moreover, QI data are an important source for defining the state of the art concerning the error rate in the total testing process. The definition of performance specifications based on the state of the art, as suggested by consensus documents, is a valuable benchmark in evaluating the performance of each laboratory. Laboratory tests play a relevant role in monitoring and evaluating patient outcome, thus assisting clinicians in decision-making. Laboratory performance evaluation is therefore crucial to providing patients with safe, effective and efficient care.
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2011 CFR
2011-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC(R)/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
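The data-points-per-peak trade-off described above follows from simple cycle-time arithmetic; the dwell and interscan times below are assumed values for illustration, not the paper's instrument settings.

# Back-of-envelope: points per LC peak when the MS cycles through several
# ionization modes. Timing numbers are assumptions, not the paper's settings.
def points_per_peak(peak_width_s, n_modes, dwell_s, interscan_s):
    cycle = n_modes * (dwell_s + interscan_s)   # time for one full mode cycle
    return peak_width_s / cycle

# A ~3 s wide UPLC peak: four-mode switching yields roughly 9 points
# (marginal for quantification), while a single mode yields about 4x more.
print(points_per_peak(3.0, 4, 0.05, 0.03))   # ~9.4
print(points_per_peak(3.0, 1, 0.05, 0.03))   # ~37.5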
NASA Technical Reports Server (NTRS)
Woronowicz, Michael; Abel, Joshua; Autrey, David; Blackmon, Rebecca; Bond, Tim; Brown, Martin; Buffington, Jesse; Cheng, Edward; DeLatte, Danielle; Garcia, Kelvin;
2014-01-01
The International Space Station program is developing a robotically-operated leak locator tool to be used externally. The tool would consist of a Residual Gas Analyzer for partial pressure measurements and a full range pressure gauge for total pressure measurements. The primary application is to detect NH3 coolant leaks in the ISS thermal control system. An analytical model of leak plume physics is presented that can account for effusive flow as well as plumes produced by sonic orifices and thruster operations. This model is used along with knowledge of typical RGA and full range gauge performance to analyze the expected instrument sensitivity to ISS leaks of various sizes and relative locations ("directionality"). The paper also presents experimental results of leak simulation testing in a large thermal vacuum chamber at NASA Goddard Space Flight Center. This test characterized instrument sensitivity as a function of leak rates ranging from 1 lb-mass/yr. to about 1 lb-mass/day. This data may represent the first measurements collected by an RGA or ion gauge system monitoring off-axis point sources as a function of location and orientation. Test results are compared to the analytical model and used to propose strategies for on-orbit leak location and environment characterization using the proposed instrument while taking into account local ISS conditions and the effects of ram/wake flows and structural shadowing within low Earth orbit.
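For a sense of scale, the effusive (cosine-law) part of such a plume model can be sketched as follows; the free-molecular flux expression and the density estimate n ~ flux/v_mean are rough approximations, and all numbers are illustrative rather than ISS values (1 lbm/yr is about 1.4e-8 kg/s).

# Rough estimate of what an effusive leak looks like to a remote gauge.
import numpy as np

K_B = 1.380649e-23          # J/K
AMU = 1.66053907e-27        # kg

def effusive_density(mdot_kg_s, molar_mass_amu, T, r, theta):
    m = molar_mass_amu * AMU
    ndot = mdot_kg_s / m                                # molecules/s
    v_mean = np.sqrt(8.0 * K_B * T / (np.pi * m))       # mean thermal speed
    flux = ndot * np.cos(theta) / (np.pi * r ** 2)      # cosine-law flux density
    return flux / v_mean                                # approx. number density

# 1 lbm/yr of NH3 viewed from 1 m, on axis, at 300 K:
n = effusive_density(1.4e-8, 17.0, 300.0, 1.0, 0.0)
print(f"density ~ {n:.2e} m^-3, equivalent pressure ~ {n*K_B*300:.1e} Pa")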
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Abel, Joshua C.; Autrey, David; Blackmon, Rebecca; Bond, Tim; Brown, Martin; Buffington, Jesse; Cheng, Edward; DeLatte, Danielle; Garcia, Kelvin;
2014-01-01
The International Space Station program is developing a robotically-operated leak locator tool to be used externally. The tool would consist of a Residual Gas Analyzer for partial pressure measurements and a full range pressure gauge for total pressure measurements. The primary application is to detect NH3 coolant leaks in the ISS thermal control system. An analytical model of leak plume physics is presented that can account for effusive flow as well as plumes produced by sonic orifices and thruster operations. This model is used along with knowledge of typical RGA and full range gauge performance to analyze the expected instrument sensitivity to ISS leaks of various sizes and relative locations ("directionality"). The paper also presents experimental results of leak simulation testing in a large thermal vacuum chamber at NASA Goddard Space Flight Center. This test characterized instrument sensitivity as a function of leak rates ranging from 1 lb-mass/yr to about 1 lb-mass/day. This data may represent the first measurements collected by an RGA or ion gauge system monitoring off-axis point sources as a function of location and orientation. Test results are compared to the analytical model and used to propose strategies for on-orbit leak location and environment characterization using the proposed instrument while taking into account local ISS conditions and the effects of ram/wake flows and structural shadowing within low Earth orbit.
Quality Assurance of RNA Expression Profiling in Clinical Laboratories
Tang, Weihua; Hu, Zhiyuan; Muallem, Hind; Gulley, Margaret L.
2012-01-01
RNA expression profiles are increasingly used to diagnose and classify disease, based on expression patterns of as many as several thousand RNAs. To ensure quality of expression profiling services in clinical settings, a standard operating procedure incorporates multiple quality indicators and controls, beginning with preanalytic specimen preparation and proceeding through analysis, interpretation, and reporting. Before testing, histopathological examination of each cellular specimen, along with optional cell enrichment procedures, ensures adequacy of the input tissue. Other tactics include endogenous controls to evaluate adequacy of RNA and exogenous or spiked controls to evaluate run- and patient-specific performance of the test system, respectively. Unique aspects of quality assurance for array-based tests include controls for the pertinent outcome signatures that often supersede controls for each individual analyte, built-in redundancy for critical analytes or biochemical pathways, and software-supported scrutiny of abundant data by a laboratory physician who interprets the findings in a manner facilitating appropriate medical intervention. Access to high-quality reagents, instruments, and software from commercial sources promotes standardization and adoption in clinical settings, once an assay is vetted in validation studies as being analytically sound and clinically useful. Careful attention to the well-honed principles of laboratory medicine, along with guidance from government and professional groups on strategies to preserve RNA and manage large data sets, promotes clinical-grade assay performance. PMID:22020152
A New Time Domain Formulation for Broadband Noise Predictions
NASA Technical Reports Server (NTRS)
Casper, J.; Farassat, F.
2002-01-01
A new analytic result in acoustics called "Formulation 1B," proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is analytically specified from a result based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous, isotropic turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.
A New Time Domain Formulation for Broadband Noise Predictions
NASA Technical Reports Server (NTRS)
Casper, Jay H.; Farassat, Fereidoun
2002-01-01
A new analytic result in acoustics called "Formulation 1B," proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is analytically specified from a result based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B and to demonstrate its equivalence to Formulation 1A of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous, isotropic turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.
Structural Benchmark Creep Testing for the Advanced Stirling Convertor Heater Head
NASA Technical Reports Server (NTRS)
Krause, David L.; Kalluri, Sreeramesh; Bowman, Randy R.; Shah, Ashwin R.
2008-01-01
The National Aeronautics and Space Administration (NASA) has identified the high efficiency Advanced Stirling Radioisotope Generator (ASRG) as a candidate power source for use on long duration science missions such as lunar applications, Mars rovers, and deep space missions. For the inherently long lifetimes required, a structurally significant design limit for the heater head component of the ASRG Advanced Stirling Convertor (ASC) is creep deformation induced at low stress levels and high temperatures. Demonstrating proof of adequate margins on creep deformation and rupture for the operating conditions and the MarM-247 material of construction is a challenge that the NASA Glenn Research Center is addressing. The combined analytical and experimental program ensures integrity and high reliability of the heater head for its 17-year design life. The life assessment approach starts with an extensive series of uniaxial creep tests on thin MarM-247 specimens that comprise the same chemistry, microstructure, and heat treatment processing as the heater head itself. This effort addresses a scarcity of openly available creep properties for the material as well as the virtual absence of understanding of the effect on creep properties of very thin walls, fine grains, low stress levels, and high-temperature fabrication steps. The approach continues with a considerable analytical effort, both deterministically, to evaluate the median creep life using nonlinear finite element analysis, and probabilistically, to calculate the heater head's reliability to a higher degree. Finally, the approach includes a substantial structural benchmark creep testing activity to calibrate and validate the analytical work. This last element provides high fidelity testing of prototypical heater head test articles; the testing includes the relevant material issues and the essential multiaxial stress state, and applies prototypical and accelerated temperature profiles for timely results in a highly controlled laboratory environment. This paper focuses on the last element and presents a preliminary methodology for creep rate prediction, the experimental methods, test challenges, and results from benchmark testing of a trial MarM-247 heater head test article. The results compare favorably with the analytical strain predictions. A description of other test findings is provided, and recommendations for future test procedures are suggested. The manuscript concludes by describing the potential impact of the heater head creep life assessment and benchmark testing effort on the ASC program.
Developing retinal biomarkers of neurological disease: an analytical perspective
MacCormick, Ian JC; Czanner, Gabriela; Faragher, Brian
2015-01-01
The inaccessibility of the brain poses a problem for neuroscience. Scientists have traditionally responded by developing biomarkers for brain physiology and disease. The retina is an attractive source of biomarkers since it shares many features with the brain. Some even describe the retina as a ‘window’ to the brain, implying that retinal signs are analogous to brain disease features. However, new analytical methods are needed to show whether or not retinal signs really are equivalent to brain abnormalities, since this requires greater evidence than direct associations between retina and brain. We therefore propose a new way to think about, and test, how clearly one might see the brain through the retinal window, using cerebral malaria as a case study. PMID:26174843
Field validation of the dnph method for aldehydes and ketones. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Workman, G.S.; Steger, J.L.
1996-04-01
A stationary source emission test method for selected aldehydes and ketones has been validated. The method employs a sampling train with impingers containing 2,4-dinitrophenylhydrazine (DNPH) to derivatize the analytes. The resulting hydrazones are recovered and analyzed by high performance liquid chromatography. Nine analytes were studied; the method was validated for formaldehyde, acetaldehyde, propionaldehyde, acetophenone and isophorone. Acrolein, methyl ethyl ketone, methyl isobutyl ketone, and quinone did not meet the validation criteria. The study employed the validation techniques described in EPA Method 301, which uses train spiking to determine bias, and collocated sampling trains to determine precision. The studies were carried out at a plywood veneer dryer and a polyester manufacturing plant.
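A simplified sketch of the Method 301-style statistics mentioned (bias from spiked versus unspiked trains, precision from collocated pairs) might look like the following; it is an illustration of the general approach with made-up numbers, not the regulatory procedure itself.

# Hedged sketch: spike-recovery bias and collocated-train precision.
import numpy as np
from scipy import stats

def bias_check(spiked, unspiked, spike_amount, alpha=0.05):
    """Bias B = mean(spiked) - mean(unspiked) - spike; t-test for B != 0."""
    b = np.mean(spiked) - np.mean(unspiked) - spike_amount
    se = np.sqrt(np.var(spiked, ddof=1) / len(spiked)
                 + np.var(unspiked, ddof=1) / len(unspiked))
    dof = len(spiked) + len(unspiked) - 2
    significant = abs(b / se) > stats.t.ppf(1 - alpha / 2, dof)
    return b, significant

def paired_rsd(train_a, train_b):
    """Percent RSD from collocated train pairs: s = sqrt(sum(d^2)/(2n))."""
    d = np.asarray(train_a) - np.asarray(train_b)
    sd = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    return 100.0 * sd / np.mean(np.concatenate([train_a, train_b]))

spiked = np.array([10.2, 9.8, 10.5, 9.9, 10.4, 10.1])
unspiked = np.array([5.1, 4.9, 5.2, 5.0, 4.8, 5.1])
print(bias_check(spiked, unspiked, spike_amount=5.0))
print(paired_rsd([5.1, 4.9, 5.2], [5.0, 4.8, 5.1]))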
NASA Hydrogen Peroxide Propellant Hazards Technical Manual
NASA Technical Reports Server (NTRS)
Baker, David L.; Greene, Ben; Frazier, Wayne
2005-01-01
The Fire, Explosion, Compatibility and Safety Hazards of Hydrogen Peroxide NASA technical manual was developed at the NASA Johnson Space Center White Sands Test Facility. NASA Technical Memorandum TM-2004-213151 covers topics concerning high-concentration hydrogen peroxide including fire and explosion hazards, material and fluid reactivity, materials selection information, personnel and environmental hazards, physical and chemical properties, analytical spectroscopy, specifications, analytical methods, and material compatibility data. A summary of hydrogen peroxide-related accidents, incidents, close calls, mishaps and lessons learned is included. The manual draws from an extensive literature base and includes recent applicable regulatory compliance documentation. The manual may be obtained by United States government agencies from NASA Johnson Space Center and used as a reference source for hazards and safe handling of hydrogen peroxide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, Scott E., E-mail: sedavids@utmb.edu
Purpose: A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code and the versatility of a practical analytical multisource model reported previously, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans that have been submitted to the clinical trial community from any institution. The calculation tool would provide for a more meaningful outcome analysis. Methods: The analytical source model was described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects were also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions was used to correct for the rudimentary assumed primary point source. Results: Dose calculations of the depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, specifically it was developed to provide a singular methodology to independently assess treatment plan dose distributions from those clinical institutions participating in National Cancer Institute trials.
Zhang, Yilong; Han, Sung Won; Cox, Laura M; Li, Huilin
2017-12-01
The human microbiome is the collection of microbes living in and on the various parts of our body. The microbes living on our body do not live alone; they act as an integrated microbial community, with extensive competition and cooperation, and contribute to human health in important ways. Most current analyses focus on examining microbial differences at a single time point, which does not adequately capture the dynamic nature of microbiome data. With the advent of high-throughput sequencing and analytical tools, we are able to probe the interdependent relationships among microbial species through longitudinal studies. Here, we propose a multivariate distance-based test to evaluate the association between key phenotypic variables and microbial interdependence utilizing repeatedly measured microbiome data. Extensive simulations were performed to evaluate the validity and efficiency of the proposed method. We also demonstrate the utility of the proposed test using a well-designed longitudinal murine experiment and a longitudinal human study. The proposed methodology has been implemented in a freely distributed open-source R package and Python code. © 2017 WILEY PERIODICALS, INC.
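One hedged way to picture such a distance-based test: summarize each subject's longitudinal taxa series by its taxa-taxa correlation matrix, measure distances between those matrices, and compare between-group distances under label permutation. This is a simplified stand-in for the authors' method, run on synthetic data.

# Sketch of a distance-based permutation test on microbial interdependence.
import numpy as np

def subject_distance(series_a, series_b):
    """Distance between two subjects' (time x taxa) correlation structures."""
    ca, cb = np.corrcoef(series_a.T), np.corrcoef(series_b.T)
    return np.linalg.norm(ca - cb)

def permutation_test(series_list, labels, n_perm=999, seed=0):
    n = len(series_list)
    d = np.array([[subject_distance(series_list[i], series_list[j])
                   for j in range(n)] for i in range(n)])
    labels = np.asarray(labels)
    stat = lambda lab: d[np.ix_(lab == 0, lab == 1)].mean()  # between-group mean
    obs = stat(labels)
    rng = np.random.default_rng(seed)
    perms = [stat(rng.permutation(labels)) for _ in range(n_perm)]
    return obs, (1 + sum(p >= obs for p in perms)) / (n_perm + 1)

rng = np.random.default_rng(1)
subjects = [rng.normal(size=(8, 5)) for _ in range(12)]  # 12 subjects, 8 times, 5 taxa
obs, pval = permutation_test(subjects, labels=[0] * 6 + [1] * 6)
print(f"statistic={obs:.3f}, p={pval:.3f}")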
Upon the Shoulders of Giants: Open-Source Hardware and Software in Analytical Chemistry.
Dryden, Michael D M; Fobel, Ryan; Fobel, Christian; Wheeler, Aaron R
2017-04-18
Isaac Newton famously observed that "if I have seen further it is by standing on the shoulders of giants." We propose that this sentiment is a powerful motivation for the "open-source" movement in scientific research, in which creators provide everything needed to replicate a given project online, as well as providing explicit permission for users to use, improve, and share it with others. Here, we write to introduce analytical chemists who are new to the open-source movement to best practices and concepts in this area and to survey the state of open-source research in analytical chemistry. We conclude by considering two examples of open-source projects from our own research group, with the hope that a description of the process, motivations, and results will provide a convincing argument about the benefits that this movement brings to both creators and users.
Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker; Fredenslund, Anders M; Scheutz, Charlotte
2018-09-01
The tracer gas dispersion method (TDM) is a remote sensing method used for quantifying fugitive emissions by relying on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas, with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio), resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument and tracer gas combinations. Analytical instruments with a high detection frequency and good precision were established as the most suitable for successful TDM application. The application of an instrument with a poor precision could only to some extent be overcome by applying a higher tracer gas release rate. A sideward misplacement of the tracer gas release point of about 250 m resulted in an emission rate comparable to those obtained using a tracer gas correctly simulating the methane emission. Conversely, an upwind misplacement of about 150 m resulted in an emission rate overestimation of almost 50%, showing the importance of proper emission source simulation when applying the TDM. Copyright © 2018 Elsevier B.V. All rights reserved.
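The core TDM calculation is standard for this method: the target-gas emission equals the tracer release rate scaled by the ratio of plume-integrated concentrations and by the molar-mass ratio. A minimal sketch with illustrative numbers follows (since both plumes are sampled on the same transect grid, the integral ratio reduces to a ratio of sums).

# TDM emission estimate from a single plume transect (illustrative data).
import numpy as np

def tdm_emission(q_tracer_kg_h, c_target, c_tracer, mw_target, mw_tracer):
    """E_target = Q_tracer * (integrated target / integrated tracer) * MW ratio."""
    ratio = c_target.sum() / c_tracer.sum()       # same transect grid for both
    return q_tracer_kg_h * ratio * (mw_target / mw_tracer)

x = np.linspace(-2, 2, 200)
ch4 = 40.0 * np.exp(-x ** 2)        # methane above background [ppb]
c2h2 = 20.0 * np.exp(-x ** 2)       # acetylene tracer above background [ppb]
print(f"CH4 emission ~ {tdm_emission(0.5, ch4, c2h2, 16.04, 26.04):.2f} kg/h")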
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
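The verification pattern described, a numerical solution checked against a closed-form benchmark with a quantified error, can be illustrated in miniature with 1-D diffusion against the erfc solution; the parameters below are arbitrary demo values, not a PFLOTRAN test case.

# Miniature code-verification example: explicit FD diffusion vs. analytic erfc.
import numpy as np
from scipy.special import erfc

D, L, nx, t_end = 1e-9, 0.02, 201, 1.0e4       # m^2/s, m, -, s
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2 / D                          # stable explicit time step
c = np.zeros(nx)
c[0] = 1.0                                      # fixed-concentration boundary

t = 0.0
while t < t_end:
    c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    t += dt

exact = erfc(x / (2.0 * np.sqrt(D * t)))        # semi-infinite analytic solution
print(f"max abs error vs analytic: {np.max(np.abs(c - exact)):.2e}")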
Itzï (version 17.1): an open-source, distributed GIS model for dynamic flood simulation
NASA Astrophysics Data System (ADS)
Guillaume Courty, Laurent; Pedrozo-Acuña, Adrián; Bates, Paul David
2017-05-01
Worldwide, floods are acknowledged as one of the most destructive hazards. In human-dominated environments, their negative impacts are ascribed not only to the increase in frequency and intensity of floods but also to a strong feedback between the hydrological cycle and anthropogenic development. In order to advance a more comprehensive understanding of this complex interaction, this paper presents the development of a new open-source tool named Itzï that enables the 2-D numerical modelling of rainfall-runoff processes and surface flows integrated with the open-source geographic information system (GIS) software known as GRASS. Therefore, it takes advantage of the ability given by GIS environments to handle datasets with variations in both temporal and spatial resolutions. Furthermore, the presented numerical tool can handle datasets from different sources with varied spatial resolutions, facilitating the preparation and management of input and forcing data. This ability reduces the preprocessing time usually required by other models. Itzï uses a simplified form of the shallow water equations, the damped partial inertia equation, for the resolution of surface flows, and the Green-Ampt model for the infiltration. The source code is now publicly available online, along with complete documentation. The numerical model is verified against three different test cases: firstly, a comparison with an analytic solution of the shallow water equations is introduced; secondly, a hypothetical flooding event in an urban area is implemented, where results are compared to those from an established model using a similar approach; and lastly, the reproduction of a real inundation event that occurred in the city of Kingston upon Hull, UK, in June 2007, is presented. The numerical approach proved its ability at reproducing the analytic and synthetic test cases. Moreover, simulation results of the real flood event showed its suitability at identifying areas affected by flooding, which were verified against those recorded after the event by local authorities.
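The Green-Ampt infiltration step mentioned above can be sketched as follows; the hydraulic parameters are illustrative assumptions, not Itzï defaults.

# Minimal Green-Ampt sketch: capacity f = K*(1 + psi*dtheta/F), supply-limited.
def green_ampt(rain_rate, dt, t_end, K=1e-6, psi=0.1, dtheta=0.3):
    """Explicit Green-Ampt over a storm; returns cumulative infiltration/runoff."""
    F, t, runoff = 1e-6, 0.0, 0.0          # cumulative infiltration [m]
    while t < t_end:
        f = K * (1.0 + psi * dtheta / F)   # infiltration capacity [m/s]
        actual = min(f, rain_rate)         # cannot exceed rainfall supply
        F += actual * dt
        runoff += (rain_rate - actual) * dt
        t += dt
    return F, runoff

F, runoff = green_ampt(rain_rate=2e-5, dt=1.0, t_end=3600.0)  # ~72 mm/h storm
print(f"infiltrated {F*1000:.1f} mm, runoff {runoff*1000:.1f} mm")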
Evaluation of a total energy-rate sensor on a transport airplane
NASA Technical Reports Server (NTRS)
Ostroff, A. J.; Hueschen, R. M.; Hellbaum, R. F.; Belcastro, C. M.; Creedon, J. F.
1983-01-01
A sensor that measures the rate of change of total energy of an airplane with respect to the airstream has been evaluated. The sensor consists of two cylindrical probes located on the fuselage of a transport airplane, an in-line acoustic filter, and a pressure-sensing altitude rate transducer. Sections of this report include the sensor description and experimental configuration, frequency response tests, analytical model development, and flight test results for several airplane maneuvers. The results section includes time history comparisons between data generated by the total energy rate sensor and calculated data derived from independent sources.
Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions
2014-12-05
test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates... assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions
NASA Technical Reports Server (NTRS)
King, R. B.; Fordyce, J. S.; Antoine, A. C.; Leibecki, H. F.; Neustadter, H. E.; Sidik, S. M.
1976-01-01
Concentrations of 60 chemical elements in the airborne particulate matter were measured at 16 sites in Cleveland, OH over a 1 year period during 1971 and 1972 (45 to 50 sampling days). Analytical methods used included instrumental neutron activation, emission spectroscopy, and combustion techniques. Uncertainties in the concentrations associated with the sampling procedures, the analytical methods, the use of several analytical facilities, and samples with concentrations below the detection limits are evaluated in detail. The data are discussed in relation to other studies and source origins. The trace constituent concentrations as a function of wind direction are used to suggest a practical method for air pollution source identification.
Analysis of structural dynamic data from Skylab. Volume 2: Skylab analytical and test model data
NASA Technical Reports Server (NTRS)
Demchak, L.; Harcrow, H.
1976-01-01
The orbital configuration test modal data, analytical test correlation modal data, and analytical flight configuration modal data are presented. Tables showing the generalized mass contributions (GMCs) for each of the thirty tests modes are given along with the two dimensional mode shape plots and tables of GMCs for the test correlated analytical modes. The two dimensional mode shape plots for the analytical modes and uncoupled and coupled modes of the orbital flight configuration at three development phases of the model are included.
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
Nanometric depth resolution from multi-focal images in microscopy
Dalgarno, Heather I. C.; Dalgarno, Paul A.; Dada, Adetunmise C.; Towers, Catherine E.; Gibson, Gavin J.; Parton, Richard M.; Davis, Ilan; Warburton, Richard J.; Greenaway, Alan H.
2011-01-01
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels. PMID:21247948
Andersson, Maria; Stephanson, Nikolai; Ohman, Inger; Terzuoli, Tommy; Lindh, Jonatan D; Beck, Olof
2014-04-01
Opiates comprise a class of abused drugs that is of primary interest in clinical and forensic urine drug testing. Determination of heroin, codeine, or a multi-drug ingestion is complicated since both heroin and codeine can lead to urinary excretion of free and conjugated morphine. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) offers advantage over gas chromatography-mass spectrometry by simplifying sample preparation but increases the number of analytes. A method based on direct injection of five-fold diluted urine for confirmation of morphine, morphine-3-glucuronide, morphine-6-glucuronide, codeine, codeine-6-glucuronide and 6-acetylmorphine was validated using LC-MS/MS in positive electrospray mode monitoring two transitions using selected reaction monitoring. The method was applied for the analysis of 3155 unknown urine samples which were positive for opiates in immunochemical screening. A linear response was observed for all compounds in the calibration curves covering more than three orders of magnitude. Cutoff was set to 2 ng/ml for 6-acetylmorphine and 150 ng/ml for the other analytes. 6-Acetylmorphine was found to be effective (sensitivity 82%) in detecting samples as heroin intake. Morphine-3-glucuronide and codeine-6-glucuronide were the predominant components of total morphine and codeine, 84% and 93%, respectively. The authors have validated a robust LC-MS/MS method for rapid qualitative and quantitative analysis of opiates in urine. 6-Acetylmorphine has been demonstrated as a sensitive and important parameter for a heroin intake. A possible interpretation strategy to conclude the source of detected analytes was proposed. The method might be further developed by reducing the number of analytes to morphine-3-glucuronide, codeine-6-glucuronide and 6-acetylmorphine without compromising test performance. Copyright © 2013 John Wiley & Sons, Ltd.
Optimization of a coaxial electron cyclotron resonance plasma thruster with an analytical model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannat, F., E-mail: felix.cannat@onera.fr, E-mail: felix.cannat@gmail.com; Lafleur, T.; Laboratoire de Physique des Plasmas, CNRS, Sorbonne Universites, UPMC Univ Paris 06, Univ Paris-Sud, Ecole Polytechnique, 91128 Palaiseau
2015-05-15
A new cathodeless plasma thruster currently under development at Onera is presented and characterized experimentally and analytically. The coaxial thruster consists of a microwave antenna immersed in a magnetic field, which allows electron heating via cyclotron resonance. The magnetic field diverges at the thruster exit and forms a nozzle that accelerates the quasi-neutral plasma to generate a thrust. Different thruster configurations are tested, and in particular, the influence of the source diameter on the thruster performance is investigated. At microwave powers of about 30 W and a xenon flow rate of 0.1 mg/s (1 SCCM), a mass utilization of 60% and a thrust of 1 mN are estimated based on angular electrostatic probe measurements performed downstream of the thruster in the exhaust plume. Results are found to be in fair agreement with a recent analytical helicon thruster model that has been adapted for the coaxial geometry used here.
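A quick consistency check on the quoted figures (illustrative arithmetic only): thrust divided by total propellant flow gives the effective exhaust velocity, and dividing by g0 gives the specific impulse.

# Effective exhaust velocity and Isp from the quoted thrust and flow rate.
F = 1.0e-3            # N, quoted thrust
mdot = 0.1e-6         # kg/s, quoted xenon flow (0.1 mg/s)
g0 = 9.80665          # m/s^2
v_eff = F / mdot
print(f"v_eff = {v_eff/1e3:.1f} km/s, Isp = {v_eff/g0:.0f} s")  # ~10 km/s, ~1020 s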
Three-dimensional analytical solutions of the atmospheric diffusion equation with multiple sources and height-dependent wind speed and eddy diffusivities are derived in a systematic fashion. For homogeneous Neumann (total reflection), Dirichlet (total adsorpti...
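For reference, the governing equation presumably meant in the entry above is the steady advection-diffusion equation with height-dependent coefficients, stated here in standard form under the usual conventions, with the two named boundary conditions at the ground:

% Steady atmospheric advection-diffusion equation with height-dependent
% wind u(z) and eddy diffusivities K_y(z), K_z(z), for point sources Q_i:
u(z)\,\frac{\partial C}{\partial x}
  = \frac{\partial}{\partial y}\!\left(K_y(z)\,\frac{\partial C}{\partial y}\right)
  + \frac{\partial}{\partial z}\!\left(K_z(z)\,\frac{\partial C}{\partial z}\right)
  + \sum_i Q_i\,\delta(x - x_i)\,\delta(y - y_i)\,\delta(z - z_i),
\qquad
\left.K_z\,\frac{\partial C}{\partial z}\right|_{z=0} = 0
\ \ \text{(Neumann, total reflection)}
\quad\text{or}\quad
C\big|_{z=0} = 0
\ \ \text{(Dirichlet, total adsorption)}.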
Search Analytics: Automated Learning, Analysis, and Search with Open Source
NASA Astrophysics Data System (ADS)
Hundman, K.; Mattmann, C. A.; Hyon, J.; Ramirez, P.
2016-12-01
The sheer volume of unstructured scientific data makes comprehensive human analysis impossible, resulting in missed opportunities to identify relationships, trends, gaps, and outliers. As the open source community continues to grow, tools like Apache Tika, Apache Solr, Stanford's DeepDive, and Data-Driven Documents (D3) can help address this challenge. With a focus on journal publications and conference abstracts often in the form of PDF and Microsoft Office documents, we've initiated an exploratory NASA Advanced Concepts project aiming to use the aforementioned open source text analytics tools to build a data-driven justification for the HyspIRI Decadal Survey mission. We call this capability Search Analytics, and it fuses and augments these open source tools to enable the automatic discovery and extraction of salient information. In the case of HyspIRI, a hyperspectral infrared imager mission, key findings resulted from the extractions and visualizations of relationships from thousands of unstructured scientific documents. The relationships include links between satellites (e.g. Landsat 8), domain-specific measurements (e.g. spectral coverage) and subjects (e.g. invasive species). Using the above open source tools, Search Analytics mined and characterized a corpus of information that would be infeasible for a human to process. More broadly, Search Analytics offers insights into various scientific and commercial applications enabled through missions and instrumentation with specific technical capabilities. For example, the following phrases were extracted in close proximity within a publication: "In this study, hyperspectral images…with high spatial resolution (1 m) were analyzed to detect cutleaf teasel in two areas. …Classification of cutleaf teasel reached a users accuracy of 82 to 84%." Without reading a single paper we can use Search Analytics to automatically identify that a 1 m spatial resolution provides a cutleaf teasel detection users accuracy of 82-84%, which could have tangible, direct downstream implications for crop protection. Automatically assimilating this information expedites and supplements human analysis, and, ultimately, Search Analytics and its foundation of open source tools will result in more efficient scientific investment and research.
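A hedged sketch of the kind of open-source pipeline described, Apache Tika for text and metadata extraction feeding Apache Solr for indexed search, is shown below; the server URL, the "documents" core, the field names, and the file are assumptions for illustration, not details from the project.

# Extract text from a PDF with Tika and index it into Solr for search.
from tika import parser        # pip install tika (starts/uses a Tika server)
import pysolr                  # pip install pysolr

solr = pysolr.Solr("http://localhost:8983/solr/documents", timeout=30)

def index_pdf(path, doc_id):
    parsed = parser.from_file(path)      # dict with 'content' and 'metadata'
    solr.add([{
        "id": doc_id,
        "text": parsed.get("content") or "",
        "title": (parsed.get("metadata") or {}).get("title", ""),
    }])

index_pdf("hyspiri_abstract.pdf", "doc-0001")    # hypothetical file
results = solr.search("spectral AND resolution", rows=5)
print(f"{len(results)} hits")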
Gonthier, Gerald J.
2011-01-01
Two test wells were completed at Fort Stewart, coastal Georgia, to investigate the potential for using the Lower Floridan aquifer as a source of water to satisfy anticipated, increased water needs. The U.S. Geological Survey, in cooperation with the U.S. Department of the Army, completed hydrologic testing of the Floridan aquifer system at the study site, including flowmeter surveys, slug tests, and 24- and 72-hour aquifer tests, by mid-March 2010. Analytical approaches and model simulation were applied to aquifer-test results to provide estimates of transmissivity and hydraulic conductivity of the multilayered Floridan aquifer system. Data from a 24-hour aquifer test of the Upper Floridan aquifer were evaluated by using the straight-line Cooper-Jacob analytical method. Data from a 72-hour aquifer test of the Lower Floridan aquifer were simulated by using axisymmetric model simulations. Results of aquifer testing indicated that the Upper Floridan aquifer has a transmissivity of 100,000 feet squared per day, and the Lower Floridan aquifer has a transmissivity of 7,000 feet squared per day. A specific storage for the Floridan aquifer system obtained from model calibration was 3 × 10⁻⁶ ft⁻¹. Additionally, during a 72-hour aquifer test of the Lower Floridan aquifer, a drawdown response was observed in two Upper Floridan aquifer wells, one of which was more than 1 mile away from the pumped well.
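For readers unfamiliar with the straight-line Cooper-Jacob method mentioned above, the arithmetic is compact enough to show directly: transmissivity follows from the pumping rate and the drawdown change per log10 cycle of time, T = 2.3Q/(4πΔs). The numbers below are illustrative, not the Fort Stewart data.

import math

Q = 192_500.0   # pumping rate, ft^3/day (illustrative; roughly 1,000 gal/min)
delta_s = 3.5   # drawdown change per log10 cycle of time, ft (illustrative)

T = 2.303 * Q / (4.0 * math.pi * delta_s)  # transmissivity, ft^2/day
print(f"T = {T:,.0f} ft^2/day")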
Planning for Low End Analytics Disruptions in Business School Curricula
ERIC Educational Resources Information Center
Rienzo, Thomas; Chen, Kuanchin
2018-01-01
Analytics is getting a great deal of attention in both industrial and academic venues. Organizations of all types are becoming more serious about transforming data from a variety of sources into insight, and analytics is the key to that transformation. Academic institutions are rapidly responding to the demand for analytics talent, with hundreds…
Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs
NASA Astrophysics Data System (ADS)
Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane
2016-12-01
The Regge-Wheeler-Zerilli (RWZ) wave equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method, at second order, to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH); such systems are considered the best probes for testing gravitation in the strong-field regime. The gravitational wave-forms and the energy and angular momentum radiated to infinity are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).
Lippi, Giuseppe; Montagnana, Martina; Giavarina, Davide
2006-01-01
Owing to remarkable advances in automation, laboratory technology and informatics, the pre-analytical phase has become the major source of variability in laboratory testing. The present survey investigated the development of several pre-analytical processes within a representative cohort of Italian clinical laboratories. A seven-point questionnaire was designed to investigate the following issues: 1a) the mean outpatient waiting time before check-in and 1b) the mean time from check-in to sample collection; 2) the mean time from sample collection to analysis; 3) the type of specimen collected for clinical chemistry testing; 4) the degree of pre-analytical automation; 5a) the number of samples shipped to other laboratories and 5b) the availability of standardised protocols for transportation; 6) the conditions for specimen storage; and 7) the availability and type of guidelines for management of unsuitable specimens. The questionnaire was administered to 150 laboratory specialists attending the SIMEL (Italian Society of Laboratory Medicine) National Meeting in June 2006. 107 questionnaires (71.3%) were returned. Data analysis revealed a high degree of variability among laboratories in the time required for check-in, outpatient sampling, sample transportation to the referral laboratory and analysis upon arrival. Only 31% of laboratories have automated some pre-analytical steps. Of the 87% of laboratories that ship specimens to other facilities without sample preparation, 19% have no standardised protocol for transportation. For conventional clinical chemistry testing, 74% of the laboratories use serum evacuated tubes (59% with and 15% without serum separator), whereas the remaining 26% use lithium-heparin evacuated tubes (11% with and 15% without plasma separator). The storage period and conditions for rerun/retest vary widely. Only 63% of laboratories have a codified procedure for the identification of unsuitable specimens, which are recognised by visual inspection (69%) or automatic detection (29%), and only 56% have standardised procedures for the management of such specimens, which vary widely on a local basis. The survey highlights broad heterogeneity in several pre-analytical processes among Italian laboratories. The lack of reliable guidelines encompassing evidence-based practice is a major problem for the standardisation of this crucial part of the testing process and represents a major challenge for laboratory medicine in the 2000s.
Remane, Daniela; Wissenbach, Dirk K; Meyer, Markus R; Maurer, Hans H
2010-04-15
In clinical and forensic toxicology, multi-analyte procedures are very useful to quantify drugs and poisons of different classes in one run. For liquid chromatographic/tandem mass spectrometric (LC/MS/MS) multi-analyte procedures, often only a limited number of stable-isotope-labeled internal standards (SIL-ISs) are available. If an SIL-IS is used for quantification of other analytes, it must be excluded that the co-eluting native analyte influences its ionization. Therefore, the effect of ion suppression and enhancement of fourteen SIL-ISs caused by their native analogues has been studied. It could be shown that the native analyte concentration influenced the extent of ion suppression and enhancement effects leading to more suppression with increasing analyte concentration especially when electrospray ionization (ESI) was used. Using atmospheric-pressure chemical ionization (APCI), methanolic solution showed mainly enhancement effects, whereas no ion suppression and enhancement effect, with one exception, occurred when plasma extracts were used under these conditions. Such differences were not observed using ESI. With ESI, eleven SIL-ISs showed relevant suppression effects, but only one analyte showed suppression effects when APCI was used. The presented study showed that ion suppression and enhancement tests using matrix-based samples of different sources are essential for the selection of ISs, particularly if used for several analytes to avoid incorrect quantification. In conclusion, only SIL-ISs should be selected for which no suppression and enhancement effects can be observed. If not enough ISs are free of ionization interferences, a different ionization technique should be considered. 2010 John Wiley & Sons, Ltd.
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
Muehlwald, S; Buchner, N; Kroh, L W
2018-03-23
Because of the high number of possible pesticide residues and their chemical complexity, it is necessary to develop methods which cover a broad range of pesticides. In this work, a qualitative multi-screening method for pesticides was developed using HPLC-ESI-Q-TOF. 110 pesticides were chosen for the creation of a personal compound database and library (PCDL). The MassHunter Qualitative Analysis software from Agilent Technologies was used to identify the analytes. The software parameter settings were optimised to produce a low number of false positive as well as false negative results. Validation was performed for 78 selected pesticides; however, the validation criteria were not fulfilled for 45 analytes. Due to this result, investigations were started to elucidate the reasons for the low detectability. It could be demonstrated that the three main causes of signal suppression were the co-eluting matrix (matrix effect), the low sensitivity of the analyte in standard solution and the fragmentation of the analyte in the ion source (in-source collision-induced dissociation). In this paper different examples are discussed showing that the impact of these three causes is different for each analyte. For example, it is possible that an analyte with low signal intensity and intense fragmentation in the ion source is detectable in a difficult matrix, whereas an analyte with high sensitivity and low fragmentation is not detectable in a simple matrix. Additionally, it could be shown that in-source fragments are a helpful tool for unambiguous identification. Copyright © 2018 Elsevier B.V. All rights reserved.
Mohammed, Emad A; Naugler, Christopher
2017-01-01
Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. This tool will allow anyone with historic test volume data to model future demand.
Mohammed, Emad A.; Naugler, Christopher
2017-01-01
Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand. PMID:28400996
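A minimal sketch of the forecasting approach the tool implements, written against statsmodels rather than the web tool itself; the monthly volumes, trend, and 12-month seasonality are synthetic assumptions.

import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# synthetic monthly test volumes with a linear trend and annual seasonality
rng = np.random.default_rng(0)
months = pd.date_range("2012-01-01", periods=60, freq="MS")
season = 1.0 + 0.15 * np.sin(2.0 * np.pi * np.arange(60) / 12.0)
volumes = pd.Series((5000.0 + 25.0 * np.arange(60)) * season
                    + rng.normal(0.0, 80.0, 60), index=months)

# Holt-Winters with additive trend and multiplicative seasonality
fit = ExponentialSmoothing(volumes, trend="add", seasonal="mul",
                           seasonal_periods=12).fit()
print(fit.forecast(12).round(0))  # next year's expected monthly test volumes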
De Vore, Karl W; Fatahi, Nadia M; Sass, John E
2016-08-01
Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or the radical equation y = B1·x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was fit best by a radical equation of the form y = B1·x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than in a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
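The two extrapolation strategies compared above are easy to reproduce; the sketch below fits illustrative early real-time recovery data to a straight line and to the radical form y = B1·x^0.5 + B0, then extrapolates both to the end of shelf life. The data points are invented for the example.

import numpy as np
from scipy.optimize import curve_fit

months = np.array([0.0, 1, 2, 3, 4, 5, 6])  # ~20% of a 30-month shelf life
recovery = np.array([100.0, 98.9, 98.4, 98.0, 97.7, 97.4, 97.2])  # % of baseline

linear = lambda x, m, b: m * x + b
radical = lambda x, b1, b0: b1 * np.sqrt(x) + b0

p_lin, _ = curve_fit(linear, months, recovery)
p_rad, _ = curve_fit(radical, months, recovery)
for name, f, p in [("linear", linear, p_lin), ("radical", radical, p_rad)]:
    print(f"{name:7s} -> predicted recovery at 30 months: {f(30.0, *p):5.1f}%")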
The characterization of photographic materials as substrates for surface enhanced Raman spectroscopy
NASA Astrophysics Data System (ADS)
Vaughan, J.; Hortin, N.; Christie, S.; Kvasnik, F.; Scully, P. J.
2005-06-01
In this study, five types of photographic materials were obtained from commercial sources and characterized for use as substrates for surface enhanced Raman spectroscopy. The substrates are photographic emulsions coated on glass or paper support. The emulsions were developed to maximize the amount of metallic silver aggregated into clusters. The test analyte, Cresyl Violet, was deposited directly onto the substrate surface. The permeable nature of the supporting gelatin matrix enables the interaction between the target analyte and the solid silver clusters. The surface enhanced Raman spectra of a 2.75 × 10⁻⁷ M concentration of Cresyl Violet in ethanol were obtained using these photographic substrates. The Raman and resonant Raman enhancement of Cresyl Violet varies from substrate to substrate, as does the ratio of Raman to resonant Raman peak heights.
NASA Astrophysics Data System (ADS)
Torres Astorga, Romina; Velasco, Hugo; Dercon, Gerd; Mabit, Lionel
2017-04-01
Soil erosion and associated sediment transportation and deposition processes are key environmental problems in Central Argentinian watersheds. Several land use practices - such as intensive grazing and crop cultivation - are considered likely to increase significantly land degradation and soil/sediment erosion processes. Characterized by highly erodible soils, the sub-catchment Estancia Grande (12.3 km²), located 23 km northeast of San Luis, has been investigated by using sediment source fingerprinting techniques to identify critical hot spots of land degradation. The authors created 4 artificial mixtures using known quantities of the most representative sediment sources of the studied catchment. The first mixture was made using four rotation crop soil sources. The second and the third mixture were created using different proportions of 4 different soil sources including soils from a feedlot, a rotation crop, a walnut forest and a grazing soil. The last tested mixture contained the same sources as the third mixture but with the addition of a fifth soil source (i.e. a native bank soil). The Energy Dispersive X-Ray Fluorescence (EDXRF) analytical technique has been used to reconstruct the source sediment proportions of the original mixtures. Besides using traditional methods of fingerprint selection such as the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA), the authors used the actual source proportions in the mixtures and selected, from the subset of tracers that passed the statistical tests, specific elemental tracers that were in agreement with the expected mixture contents. The selection process ended by testing, in a mixing model, all possible combinations of the reduced set of tracers obtained. Alkaline earth metals, especially Strontium (Sr) and Barium (Ba), were identified as the most effective fingerprints and provided a reduced Mean Absolute Error (MAE) of approximately 2% when reconstructing the 4 artificial mixtures. This study demonstrates that the EDXRF fingerprinting approach performed very well in reconstructing our original mixtures, especially in identifying and quantifying the contribution of the 4 rotation crop soil sources in the first mixture.
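A hedged sketch of the un-mixing step such an approach relies on: given mean tracer concentrations for each source (e.g. Sr and Ba from EDXRF) and the tracer signature of a mixture, solve for non-negative source proportions that sum to one. All concentrations below are illustrative, not the study's data.

import numpy as np
from scipy.optimize import minimize

# rows: tracers (Sr, Ba); columns: sources (crop, feedlot, forest, grazing)
S = np.array([[120.0, 180.0,  90.0, 150.0],
              [310.0, 260.0, 420.0, 280.0]])
mix = np.array([137.0, 318.0])  # tracer signature of the mixture

def misfit(p):                  # sum of squared tracer residuals
    return float(np.sum((S @ p - mix) ** 2))

n = S.shape[1]
res = minimize(misfit, np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}])
print("estimated source proportions:", res.x.round(3))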
A Comparison of Analytical and Experimental Data for a Magnetic Actuator
NASA Technical Reports Server (NTRS)
Groom, Nelson J.; Bloodgood, V. Dale, Jr.
2000-01-01
Theoretical and experimental force-displacement and force-current data are compared for two configurations of a simple horseshoe, or bipolar, magnetic actuator. One configuration utilizes permanent magnet wafers to provide a bias flux and the other configuration has no source of bias flux. The theoretical data are obtained from two analytical models of each configuration. One is an ideal analytical model which is developed under the following assumptions: (1) zero fringing and leakage flux, (2) zero actuator coil mmf loss, and (3) infinite permeability of the actuator core and suspended element flux return path. The other analytical model, called the extended model, is developed by adding loss and leakage factors to the ideal model. The values of the loss and leakage factors are calculated from experimental data. The experimental data are obtained from a magnetic actuator test fixture, which is described in detail. Results indicate that the ideal models for both configurations do not match the experimental data very well. However, except for the range around zero force, the extended models produce a good match. The best match is produced by the extended model of the configuration with permanent magnet flux bias.
40 CFR 141.402 - Ground water source microbial monitoring and analytical methods.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for the presence of E. coli, enterococci, or coliphage: Analytical Methods for Source Water Monitoring... Microbiology, 62:3881-3884. 10 EPA Method 1601: Male-specific (F+) and Somatic Coliphage in Water by Two-step... 20460. 11 EPA Method 1602: Male-specific (F+) and Somatic Coliphage in Water by Single Agar Layer (SAL...
40 CFR 141.402 - Ground water source microbial monitoring and analytical methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for the presence of E. coli, enterococci, or coliphage: Analytical Methods for Source Water Monitoring... Microbiology, 62:3881-3884. 10 EPA Method 1601: Male-specific (F+) and Somatic Coliphage in Water by Two-step... 20460. 11 EPA Method 1602: Male-specific (F+) and Somatic Coliphage in Water by Single Agar Layer (SAL...
Humidity Effects on Fragmentation in Plasma-Based Ambient Ionization Sources
NASA Astrophysics Data System (ADS)
Newsome, G. Asher; Ackerman, Luke K.; Johnson, Kevin J.
2016-01-01
Post-plasma ambient desorption/ionization (ADI) sources are fundamentally dependent on surrounding water vapor to produce protonated analyte ions. There are two reports of humidity effects on ADI spectra. However, it is unclear whether humidity will affect all ADI sources and analytes, and by what mechanism humidity affects spectra. Flowing atmospheric pressure afterglow (FAPA) ionization and direct analysis in real time (DART) mass spectra of various surface-deposited and gas-phase analytes were acquired at ambient temperature and pressure across a range of observed humidity values. A controlled humidity enclosure around the ion source and mass spectrometer inlet was used to create programmed humidity and temperatures. The relative abundance and fragmentation of molecular adduct ions for several compounds consistently varied with changing ambient humidity and also were controlled with the humidity enclosure. For several compounds, increasing humidity decreased protonated molecule and other molecular adduct ion fragmentation in both FAPA and DART spectra. For others, humidity increased fragment ion ratios. The effects of humidity on molecular adduct ion fragmentation were caused by changes in the relative abundances of different reagent protonated water clusters and, thus, a change in the average difference in proton affinity between an analyte and the population of water clusters. Control of humidity in ambient post-plasma ion sources is needed to create spectral stability and reproducibility.
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-09-01
The Halbach type hollow cylindrical permanent magnet array (HCPMA) is a volume-compact and energy-conserving field source, which has attracted intense interest in many practical applications. Here, using the complex variable integration method based on the Biot-Savart Law (including current distributions inside the body and on the surfaces of the magnet), we derive analytical field solutions for an ideal multipole HCPMA in the entire space, including the interior of the magnet. The analytic field expression inside the array material is used to construct an analytic demagnetization function, with which we can explain the origin of demagnetization phenomena in HCPMA by taking into account an ideal magnetic hysteresis loop with finite coercivity. These analytical field expressions and demagnetization functions provide deeper insight into the nature of such permanent magnet array systems and offer guidance in designing optimized array systems.
Analytical approach of laser beam propagation in the hollow polygonal light pipe.
Zhu, Guangzhi; Zhu, Xiao; Zhu, Changhong
2013-08-10
An analytical method is developed for studying the light distribution at the output end of a hollow n-sided polygonal light pipe illuminated by a light source with a Gaussian distribution. The mirror transformation matrices and a special algorithm for removing void virtual images are created to acquire the location and direction vector of each effective virtual image on the entrance plane. The analytical method is validated by Monte Carlo ray tracing. At the same time, four typical cases are discussed. The analytical results indicate that the uniformity of the light distribution varies with the structural and optical parameters of the hollow n-sided polygonal light pipe and of the Gaussian light source. The analytical approach will be useful for designing and choosing hollow n-sided polygonal light pipes, especially for high-power laser beam homogenization techniques.
Sampling probe for microarray read out using electrospray mass spectrometry
Van Berkel, Gary J.
2004-10-12
An automated electrospray based sampling system and method for analysis obtains samples from surface array spots having analytes. The system includes at least one probe, the probe including an inlet for flowing at least one eluting solvent to respective ones of a plurality of spots and an outlet for directing the analyte away from the spots. An automatic positioning system is provided for translating the probe relative to the spots to permit sampling of any spot. An electrospray ion source having an input fluidicly connected to the probe receives the analyte and generates ions from the analyte. The ion source provides the generated ions to a structure for analysis to identify the analyte, preferably being a mass spectrometer. The probe can be a surface contact probe, where the probe forms an enclosing seal along the periphery of the array spot surface.
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2014-12-01
Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. When access to the source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification. In contrast, employing the Method of Exact Solution (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections of advection-diffusion-reaction (ADR) solvers, such as nonlinear advection, diffusion or source terms, as well as non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy the continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in test complexity. The test suite includes hundreds of unit tests and system tests to check individual portions of the code. The tests start from a simple case of unidirectional advection, proceed through bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollutant transport. We also convey our experiences in finding several errors which were not detectable with routine verification techniques.
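In the same spirit, a compact example of what such a verification test looks like: an MES-style order-of-accuracy check of a textbook FTCS scheme for 1-D advection-diffusion against an exact travelling-Gaussian solution. The scheme and solution are standard choices for illustration, not the authors' test suite.

import numpy as np

a, D, L, T = 1.0, 0.01, 10.0, 1.0   # advection speed, diffusivity, domain, end time

def exact(x, t, x0=2.0, s2=0.05):   # advected, spreading Gaussian
    width = s2 + 2.0 * D * t
    return np.sqrt(s2 / width) * np.exp(-(x - x0 - a * t) ** 2 / (2.0 * width))

def solve(nx):
    x = np.linspace(0.0, L, nx, endpoint=False)   # periodic grid
    dx = L / nx
    dt = 0.1 * dx * dx / (2.0 * D)  # diffusive step keeps the O(dt) error at O(dx^2)
    u, t = exact(x, 0.0), 0.0
    while t < T:
        step = min(dt, T - t)
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)          # central advection
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2  # central diffusion
        u, t = u + step * (D * uxx - a * ux), t + step
    return np.max(np.abs(u - exact(x, T)))

e1, e2 = solve(200), solve(400)
print(f"errors {e1:.2e}, {e2:.2e}; observed order ~ {np.log2(e1 / e2):.2f}")  # expect ~2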
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woronowicz, Michael; Blackmon, Rebecca; Brown, Martin
2014-12-09
The International Space Station program is developing a robotically-operated leak locator tool to be used externally. The tool would consist of a Residual Gas Analyzer (RGA) for partial pressure measurements and a full range pressure gauge for total pressure measurements. The primary application is to demonstrate the ability to detect NH3 coolant leaks in the ISS thermal control system. An analytical model of leak plume physics is presented that can account for effusive flow as well as plumes produced by sonic orifices and thruster operations. This model is used along with knowledge of typical RGA and full range gauge performance to analyze the expected instrument sensitivity to ISS leaks of various sizes and relative locations ("directionality"). The paper also presents experimental results of leak simulation testing in a large thermal vacuum chamber at NASA Goddard Space Flight Center. This test characterized instrument sensitivity as a function of leak rates ranging from 1 lbm/yr to about 1 lbm/day. This data may represent the first measurements collected by an RGA or ion gauge system monitoring off-axis point sources as a function of location and orientation. Test results are compared to the analytical model and used to propose strategies for on-orbit leak location and environment characterization using the proposed instrument while taking into account local ISS conditions and the effects of ram/wake flows and structural shadowing within low Earth orbit.
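A back-of-envelope check of the detectability claim is possible with the textbook free-molecular effusion model (cosine-law point source), one of the plume types the authors' model covers; the geometry, temperature, and the crude density approximation n = flux/v̄ below are assumptions of this sketch, not the paper's model.

import math

k, N_A = 1.380649e-23, 6.02214076e23
T, M = 300.0, 17.031e-3            # gas temperature (K); NH3 molar mass (kg/mol)
mdot = 0.45359 / (365.25 * 86400)  # 1 lbm/yr expressed in kg/s
Ndot = mdot / M * N_A              # leak rate in molecules/s
r, theta = 1.0, math.radians(30.0) # gauge 1 m away, 30 deg off the plume axis

flux = Ndot * math.cos(theta) / (math.pi * r ** 2)       # cosine-law number flux
v_mean = math.sqrt(8.0 * k * T / (math.pi * (M / N_A)))  # mean thermal speed
n = flux / v_mean                                        # approximate number density
p = n * k * T                                            # NH3 partial pressure, Pa
print(f"~{p:.1e} Pa ({p / 133.322:.1e} Torr)")           # well within RGA sensitivity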
Broadband Noise Predictions Based on a New Aeroacoustic Formulation
NASA Technical Reports Server (NTRS)
Casper, J.; Farassat, F.
2002-01-01
A new analytic result in acoustics called 'Formulation 1B,' proposed by Farassat, is used to compute the loading noise from an unsteady surface pressure distribution on a thin airfoil in the time domain. This formulation is a new solution of the Ffowcs Williams-Hawkings equation with the loading source term. The formulation contains a far-field surface integral that depends on the time derivative and the surface gradient of the pressure on the airfoil, as well as a contour integral on the boundary of the airfoil surface. As a first test case, the new formulation is used to compute the noise radiated from a flat plate, moving through a sinusoidal gust of constant frequency. The unsteady surface pressure for this test case is specified analytically from a result that is based on linear airfoil theory. This test case is used to examine the velocity scaling properties of Formulation 1B, and to demonstrate its equivalence to Formulation 1A, of Farassat. The new acoustic formulation, again with an analytic surface pressure, is then used to predict broadband noise radiated from an airfoil immersed in homogeneous turbulence. The results are compared with experimental data previously reported by Paterson and Amiet. Good agreement between predictions and measurements is obtained. The predicted results also agree very well with those of Paterson and Amiet, who used a frequency-domain approach. Finally, an alternative form of Formulation 1B is described for statistical analysis of broadband noise.
Block, Darci R; Algeciras-Schimnich, Alicia
2013-01-01
Requests for testing various analytes in serous fluids (e.g., pleural, peritoneal, pericardial effusions) are submitted daily to clinical laboratories. Testing of these fluids deviates from assay manufacturers' specifications, as most laboratory assays are optimized for testing blood or urine specimens. These requests add a burden to clinical laboratories, which need to validate assay performance characteristics in these fluids to exclude matrix interferences (given the different composition of body fluids) while maintaining regulatory compliance. Body fluid testing for a number of analytes has been reported in the literature; however, understanding the clinical utility of these analytes is critical because laboratories must address the analytic and clinical validation requirements, while educating clinicians on proper test utilization. In this article, we review the published data to evaluate the clinical utility of testing for numerous analytes in body fluid specimens. We also highlight the pre-analytic and analytic variables that need to be considered when reviewing published studies in body fluid testing. Finally, we provide guidance on how published studies might (or might not) guide interpretation of test results in today's clinical laboratories.
Staggs, Sarah E.; Beckman, Erin M.; Keely, Scott P.; Mackwan, Reena; Ware, Michael W.; Moyer, Alan P.; Ferretti, James A.; Sayed, Abu; Xiao, Lihua; Villegas, Eric N.
2013-01-01
Quantitative real-time polymerase chain reaction (qPCR) assays to detect Cryptosporidium oocysts in clinical samples are increasingly being used to diagnose human cryptosporidiosis, but a parallel approach for detecting and identifying Cryptosporidium oocyst contamination in surface water sources has yet to be established for current drinking water quality monitoring practices. It has been proposed that Cryptosporidium qPCR-based assays could be used as viable alternatives to current microscopic-based detection methods to quantify levels of oocysts in drinking water sources; however, data on specificity, analytical sensitivity, and the ability to accurately quantify low levels of oocysts are limited. The purpose of this study was to provide a comprehensive evaluation of TaqMan-based qPCR assays, which were developed for either clinical or environmental investigations, for detecting Cryptosporidium oocyst contamination in water. Ten different qPCR assays, six previously published and four developed in this study, were analyzed for specificity and analytical sensitivity. Specificity varied among the ten assays; one particular assay, which targeted the Cryptosporidium 18S rRNA gene, successfully detected all Cryptosporidium spp. tested but also cross-amplified T. gondii, fungi, algae, and dinoflagellates. When evaluating the analytical sensitivity of these qPCR assays, results showed that eight of the assays could reliably detect ten flow-sorted oocysts in reagent water or environmental matrix. This study revealed that while a qPCR-based detection assay can be useful for detecting and differentiating different Cryptosporidium species in environmental samples, it cannot accurately measure the low levels of oocysts that are typically found in drinking water sources. PMID:23805235
Pekar, Heidi; Westerberg, Erik; Bruno, Oscar; Lääne, Ants; Persson, Kenneth M; Sundström, L Fredrik; Thim, Anna-Maria
2016-01-15
Freshwater blooms of cyanobacteria (blue-green algae) in source waters are generally composed of several different strains with the capability to produce a variety of toxins. The major exposure routes for humans are direct contact with recreational waters and ingestion of drinking water not efficiently treated. The ultra high pressure liquid chromatography tandem mass spectrometry based analytical method presented here allows simultaneous analysis of 22 cyanotoxins from different toxin groups, including anatoxins, cylindrospermopsins, nodularin and microcystins, in raw water and drinking water. The use of reference standards enables correct identification of toxins as well as precise quantification; owing to matrix effects, recovery correction is required. The multi-toxin group method presented here does not compromise sensitivity, despite the large number of analytes. The limit of quantification was set to 0.1 μg/L for 75% of the cyanotoxins in drinking water and 0.5 μg/L for all cyanotoxins in raw water, which is compliant with the WHO guidance value for microcystin-LR. The matrix effects experienced during analysis were reasonable for most analytes, considering the large volume injected into the mass spectrometer. The time of analysis, including lysing of cell-bound toxins, is less than three hours. Furthermore, the method was tested in Swedish source waters and infiltration ponds, providing evidence of the presence of anatoxin, homo-anatoxin, cylindrospermopsin and several variants of microcystins for the first time in Sweden and proving its usefulness. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
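A small worked example of the recovery correction the method calls for; the concentrations are invented for illustration.

spiked_known = 1.00     # microcystin-LR spiked into raw-water matrix, ug/L
spiked_measured = 0.72  # measured in the spiked extract, ug/L
recovery = spiked_measured / spiked_known

sample_measured = 0.31  # apparent concentration in a field sample, ug/L
corrected = sample_measured / recovery
print(f"recovery {recovery:.0%} -> corrected {corrected:.2f} ug/L")
# ~0.43 ug/L: still below the 1 ug/L WHO guidance value for microcystin-LR,
# but the uncorrected figure would understate the concentration by ~28%.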
Technique to determine location of radio sources from measurements taken on spinning spacecraft
NASA Technical Reports Server (NTRS)
Fainberg, J.
1979-01-01
The procedure developed to extract average source direction and average source size from spin-modulated radio astronomy data measured on the IMP-6 spacecraft is described. Because all measurements are used, rather than just finding maxima or minima in the data, the method is very sensitive, even in the presence of large amounts of noise. The technique is applicable to all experiments with directivity characteristics. It is suitable for onboard processing on satellites to reduce the data flow to Earth. The application to spin-modulated nonpolarized radio astronomy data is made and includes the effects of noise, background, and second source interference. The analysis was tested with computer simulated data and the results agree with analytic predictions. Applications of this method with IMP-6 radio data have led to: (1) determination of source positions of traveling solar radio bursts at large distances from the Sun; (2) mapping of magnetospheric radio emissions by radio triangulation; and (3) detection of low frequency radio emissions from Jupiter and Saturn.
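The core idea, using every sample of the spin-modulated signal rather than just its maxima and minima, reduces to a linear least-squares fit of a sinusoid. The sketch below uses a simplified first-harmonic model on synthetic data standing in for IMP-6 telemetry; real antenna responses and harmonics differ.

import numpy as np

rng = np.random.default_rng(1)
phi = np.linspace(0.0, 4.0 * np.pi, 400)  # spin phase over two rotations
true_az, depth = np.radians(55.0), 0.6
signal = 10.0 * (1.0 + depth * np.cos(phi - true_az)) + rng.normal(0.0, 1.0, phi.size)

# fit s(phi) = a0 + a1*cos(phi) + b1*sin(phi) using all samples at once
A = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
a0, a1, b1 = np.linalg.lstsq(A, signal, rcond=None)[0]

azimuth = np.degrees(np.arctan2(b1, a1))  # recovered mean source direction
mod_depth = np.hypot(a1, b1) / a0         # decreases as the source broadens
print(f"azimuth ~ {azimuth:.1f} deg, modulation depth ~ {mod_depth:.2f}")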
The sound field of a rotating dipole in a plug flow.
Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H
2018-04-01
An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
Usmanov, Dilshadbek T; Yu, Zhan; Chen, Lee Chuin; Hiraoka, Kenzo; Yamabe, Shinichi
2016-02-01
In this work, a low-pressure air dielectric-barrier discharge (DBD) ion source, using a capillary with an inner diameter of 0.115 mm and a length of 12 mm and applicable to miniaturized mass spectrometers, was developed. The analytes, trinitrotoluene (TNT), 1,3,5-trinitroperhydro-1,3,5-triazine (RDX), 1,3,5,7-tetranitroperhydro-1,3,5,7-tetrazocine (HMX), pentaerythritol tetranitrate (PETN), nitroglycerine (NG), hexamethylene triperoxide diamine (HMTD), caffeine, cocaine and morphine, introduced through the capillary, were ionized by the low-pressure air DBD. The ion source pressure was varied by using various sizes of the ion sampling orifice. The signal intensities of the analytes showed marked pressure dependence: TNT was detected with higher sensitivity at lower pressure, whereas the opposite held for the other analytes. For all analytes, a marked signal enhancement was observed when a grounded cylindrical mesh electrode was installed in the DBD ion source. Among the nine analytes, RDX, HMX, NG and PETN could be detected as cluster ions [analyte + NO3]⁻ even at low pressure and at high temperatures up to 180 °C. This detection indicates that these cluster ions are stable enough to survive under the present experimental conditions. The unexpectedly high stabilities of these cluster ions were verified by density functional theory calculations. Copyright © 2016 John Wiley & Sons, Ltd.
Source-term development for a contaminant plume for use by multimedia risk assessment models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.
1999-12-01
Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.
Analytic Expressions for the Gravity Gradient Tensor of 3D Prisms with Depth-Dependent Density
NASA Astrophysics Data System (ADS)
Jiang, Li; Liu, Jie; Zhang, Jianzhong; Feng, Zhibing
2017-12-01
Variable-density sources have received increasing attention in gravity modeling. In this paper, we compute the gravity gradient tensor of mass sources with variable density. 3D rectangular prisms, as simple building blocks, can be used to approximate 3D irregular-shaped sources well. A polynomial function of depth can flexibly represent the complicated density variations in each prism. Hence, we derive closed-form analytic expressions for computing all components of the gravity gradient tensor due to a 3D right rectangular prism with an arbitrary-order polynomial density function of depth. The singularity of the expressions is analyzed. The singular points are distributed at the corners of the prism or on some of the lines through the edges of the prism in the lower half-space containing the prism. The expressions are validated, and their numerical stability is evaluated through numerical tests. The numerical examples with variable-density prism and basin models show that the expressions, within their range of numerical stability, are superior in computational accuracy and efficiency to the common solution that sums up the effects of a collection of uniform subprisms, and they provide an effective method for computing the gravity gradient tensor of 3D irregular-shaped sources with complicated density variation. In addition, the tensor computed with variable density differs in magnitude from that computed with constant density, demonstrating the importance of gravity gradient tensor modeling with variable density.
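For orientation, the "common solution" the paper benchmarks against can be sketched directly: discretize the prism into small cells, evaluate the depth-dependent density in each, and sum point-mass gradient-tensor contributions T_ij = Gm(3·x_i·x_j − r²δ_ij)/r⁵. The prism geometry and density law below are illustrative assumptions.

import numpy as np

G = 6.674e-11  # gravitational constant, SI

def tensor_point(m, src, obs):  # gradient tensor of a point mass
    d = obs - src
    r = np.linalg.norm(d)
    return G * m * (3.0 * np.outer(d, d) - r ** 2 * np.eye(3)) / r ** 5

def tensor_prism(x0, x1, y0, y1, z0, z1, rho_of_z, n=30):
    xs = np.linspace(x0, x1, n, endpoint=False) + (x1 - x0) / (2 * n)
    ys = np.linspace(y0, y1, n, endpoint=False) + (y1 - y0) / (2 * n)
    zs = np.linspace(z0, z1, n, endpoint=False) + (z1 - z0) / (2 * n)
    dv = (x1 - x0) * (y1 - y0) * (z1 - z0) / n ** 3
    obs, T = np.array([0.0, 0.0, 0.0]), np.zeros((3, 3))  # observer at origin
    for x in xs:
        for y in ys:
            for z in zs:
                T += tensor_point(rho_of_z(z) * dv, np.array([x, y, z]), obs)
    return T

# 200 m x 200 m prism, 50-250 m deep, density increasing linearly with depth
T = tensor_prism(-100, 100, -100, 100, 50, 250, lambda z: 2500.0 + 0.8 * z)
print(np.round(T * 1e9, 3), "(Eotvos)")  # 1 Eotvos = 1e-9 s^-2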
Stoeckel, D.M.; Stelzer, E.A.; Stogner, R.W.; Mau, D.P.
2011-01-01
Protocols for microbial source tracking of fecal contamination generally are able to identify when a source of contamination is present, but thus far have been unable to evaluate what portion of fecal-indicator bacteria (FIB) came from various sources. A mathematical approach to estimate relative amounts of FIB, such as Escherichia coli, from various sources, based on the concentration and distribution of microbial source tracking markers in feces, was developed. The approach was tested using dilute fecal suspensions, then applied as part of an analytical suite to a contaminated headwater stream in the Rocky Mountains (Upper Fountain Creek, Colorado). In one single-source fecal suspension, a source that was not present could not be excluded because of incomplete marker specificity; however, human and ruminant sources were detected whenever they were present. In the mixed-feces suspension (pet and human), the minority contributor (human) was detected at a concentration low enough to preclude human contamination as the dominant source of E. coli to the sample. Without the semi-quantitative approach described, simple detections of human-associated marker in stream samples would have provided inaccurate evidence that human contamination was a major source of E. coli to the stream. In samples from Upper Fountain Creek the pattern of E. coli, general and host-associated microbial source tracking markers, nutrients, and wastewater-associated chemical detections, augmented with local observations and land-use patterns, indicated that, contrary to expectations, birds rather than humans or ruminants were the predominant source of fecal contamination to Upper Fountain Creek. This new approach to E. coli allocation, validated by a controlled study and tested by application in a relatively simple setting, represents a widely applicable step forward in the field of microbial source tracking of fecal contamination. © 2011 Elsevier Ltd.
Dual metal gate tunneling field effect transistors based on MOSFETs: A 2-D analytical approach
NASA Astrophysics Data System (ADS)
Ramezani, Zeinab; Orouji, Ali A.
2018-01-01
A novel 2-D analytical drain current model of Dual Metal Gate Tunneling Field Effect Transistors based on MOSFETs (DMG-TFETs) is presented in this paper. The proposed tunneling FET is derived from a MOSFET structure by employing an additional electrode in the source region with an appropriate work function to induce holes in the N+ source region and hence convert it into a P+ source region. The electric field is derived and is utilized to extract an expression for the drain current by analytically integrating the band-to-band tunneling generation rate in the tunneling region, based on the potential profile obtained by solving Poisson's equation. Through this model, the effects of the thin-film thickness and gate voltage on the potential and the electric field, and the effect of the thin-film thickness on the tunneling current, can be studied. To validate the present model, the analytical results have been compared with simulations from the SILVACO ATLAS device simulator, and good agreement was found.
GRACKLE: a chemistry and cooling library for astrophysics
NASA Astrophysics Data System (ADS)
Smith, Britton D.; Bryan, Greg L.; Glover, Simon C. O.; Goldbaum, Nathan J.; Turk, Matthew J.; Regan, John; Wise, John H.; Schive, Hsi-Yu; Abel, Tom; Emerick, Andrew; O'Shea, Brian W.; Anninos, Peter; Hummels, Cameron B.; Khochfar, Sadegh
2017-04-01
We present the GRACKLE chemistry and cooling library for astrophysical simulations and models. GRACKLE provides a treatment of non-equilibrium primordial chemistry and cooling for H, D and He species, including H2 formation on dust grains; tabulated primordial and metal cooling; multiple ultraviolet background models; and support for radiation transfer and arbitrary heat sources. The library has an easily implementable interface for simulation codes written in C, C++ and FORTRAN as well as a PYTHON interface with added convenience functions for semi-analytical models. As an open-source project, GRACKLE provides a community resource for accessing and disseminating astrochemical data and numerical methods. We present the full details of the core functionality, the simulation and PYTHON interfaces, testing infrastructure, performance and range of applicability. GRACKLE is a fully open-source project and new contributions are welcome.
Analytical methods for gelatin differentiation from bovine and porcine origins and food products.
Nhari, Raja Mohd Hafidz Raja; Ismail, Amin; Che Man, Yaakob B
2012-01-01
The use of gelatin in food products has been widely debated for several years; the debate concerns the animal source of the gelatin, religious dietary requirements, and health. As a consequence, various analytical methods have been introduced and developed to determine whether gelatin is made from porcine or bovine sources. The analytical methods comprise a diverse range of equipment and techniques, including spectroscopy, chemical precipitation, chromatography, and immunochemical methods. Each technique can differentiate gelatins to a certain extent, with its own advantages and limitations. This review focuses on an overview of the analytical methods available for differentiation of bovine and porcine gelatin, and of gelatin in food products, so that new method development can be established. © 2011 Institute of Food Technologists®
Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Qiu, Ling; Erasmus, Rajiv; Borai, Anwar; Evgina, Svetlana; Ashavaid, Tester; Khan, Dilshad; Schreier, Laura; Rolle, Reynan; Shimizu, Yoshihisa; Kimura, Shogo; Kawano, Reo; Armbruster, David; Mori, Kazuo; Yadav, Binod K
2017-04-01
The IFCC Committee on Reference Intervals and Decision Limits coordinated a global multicenter study on reference values (RVs) to explore rational and harmonizable procedures for derivation of reference intervals (RIs) and investigate the feasibility of sharing RIs through evaluation of sources of variation of RVs on a global scale. For the common protocol, rather lenient criteria for reference individuals were adopted to facilitate harmonized recruitment with planned use of the latent abnormal values exclusion (LAVE) method. As of July 2015, 12 countries had completed their study with total recruitment of 13,386 healthy adults. 25 analytes were measured chemically and 25 immunologically. A serum panel with assigned values was measured by all laboratories. RIs were derived by parametric and nonparametric methods. The effect of LAVE methods is prominent in analytes which reflect nutritional status, inflammation and muscular exertion, indicating that inappropriate results are frequent in any country. The validity of the parametric method was confirmed by the presence of analyte-specific distribution patterns and successful Gaussian transformation using the modified Box-Cox formula in all countries. After successful alignment of RVs based on the panel test results, nearly half the analytes showed variable degrees of between-country differences. This finding, however, requires confirmation after adjusting for BMI and other sources of variation. The results are reported in the second part of this paper. The collaborative study enabled us to evaluate rational methods for deriving RIs and comparing the RVs based on real-world datasets obtained in a harmonized manner. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
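A minimal sketch of the parametric derivation step on synthetic data: Box-Cox-transform the reference values toward a Gaussian (scipy's standard Box-Cox standing in for the modified formula the study uses), take mean ± 1.96 SD, and back-transform; the nonparametric 2.5th-97.5th percentile interval is shown for comparison.

import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(7)
values = rng.lognormal(mean=0.3, sigma=0.35, size=800)  # synthetic skewed RVs

transformed, lam = stats.boxcox(values)
mu, sd = transformed.mean(), transformed.std(ddof=1)
lower, upper = inv_boxcox(np.array([mu - 1.96 * sd, mu + 1.96 * sd]), lam)
print(f"parametric RI: {lower:.2f} - {upper:.2f}")

print("nonparametric RI:", np.round(np.percentile(values, [2.5, 97.5]), 2))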
ERIC Educational Resources Information Center
Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin
2016-01-01
In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…
Phosphorus in recycling fertilizers - analytical challenges.
Krüger, Oliver; Adam, Christian
2017-05-01
The importance of secondary raw materials for phosphorus (P) fertilizer production is expected to increase in the future due to resource depletion, supply risks, and heavy metal contamination of fossil phosphate resources. Municipal wastewater is a promising source for P recovery. In Germany, for instance, it contains almost 50% of the total amount of P that is currently applied as mineral fertilizer. Several procedures have been developed to recover and re-use P, resulting in a growing number of recycling fertilizers that are currently not regulated in terms of fertilizer efficiency. We tested various materials and matrices for their total P content, solubility of P in neutral ammonium citrate (P_nac) and in water, and performed robustness tests to check if existing analytical methods are suitable for those new materials. Digestion with inverse aqua regia was best suited to determine the total P content. P_nac sample preparation and analyses were feasible for all matrices. However, we found significant time and temperature dependencies, especially for materials containing organic matter. Furthermore, several materials did not reach equilibrium during the extractions. Thus, strict compliance with the test conditions is strongly recommended to achieve comparable results. Copyright © 2017 Elsevier Inc. All rights reserved.
Coupling of Helmholtz resonators to improve acoustic liners for turbofan engines at low frequency
NASA Technical Reports Server (NTRS)
Dean, L. W.
1975-01-01
An analytical and test program was conducted to evaluate means for increasing the effectiveness of low frequency sound absorbing liners for aircraft turbine engines. Three schemes for coupling low frequency absorber elements were considered. These schemes were analytically modeled and their impedance was predicted over a frequency range of 50 to 1,000 Hz. An optimum and two off-optimum designs of the most promising, a parallel coupled scheme, were fabricated and tested in a flow duct facility. Impedance measurements were in good agreement with predicted values and validated the procedure used to transform modeled parameters to hardware designs. Measurements of attenuation for panels of coupled resonators were consistent with predictions based on measured impedance. All coupled resonator panels tested showed an increase in peak attenuation of about 50% and an increase in attenuation bandwidth of one one-third octave band over that measured for an uncoupled panel. These attenuation characteristics equate to about 35% greater reduction in source perceived noise level (PNL), relative to the uncoupled panel, or a reduction in treatment length of about 24% for constant PNL reduction. The increased effectiveness of the coupled resonator concept for attenuation of low frequency broad spectrum noise is demonstrated.
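For scale, the element behind such liners is the Helmholtz resonator, whose resonance frequency is f0 = (c/2π)·√(A/(V·L_eff)); the cell dimensions below are illustrative of a low-frequency (~200 Hz) design, not the tested hardware. Coupling neighboring cells shifts and splits f0, which is what broadens the attenuation band.

import math

c = 343.0            # speed of sound, m/s
d, t = 0.004, 0.001  # orifice diameter and face-sheet thickness, m
V = 2.0e-4           # cavity volume, m^3 (200 cm^3)

A = math.pi * (d / 2.0) ** 2  # orifice area
L_eff = t + 0.85 * d          # end-corrected neck length
f0 = (c / (2.0 * math.pi)) * math.sqrt(A / (V * L_eff))
print(f"f0 ~ {f0:.0f} Hz")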
Borai, Anwar; Ichihara, Kiyoshi; Al Masaud, Abdulaziz; Tamimi, Waleed; Bahijri, Suhad; Armbuster, David; Bawazeer, Ali; Nawajha, Mustafa; Otaibi, Nawaf; Khalil, Haitham; Kawano, Reo; Kaddam, Ibrahim; Abdelaal, Mohamed
2016-05-01
This study is part of the IFCC global study to derive reference intervals (RIs) for 28 chemistry analytes in Saudis. Healthy individuals (n=826) aged ≥18 years were recruited using the global study protocol. All specimens were measured using an Architect analyzer. RIs were derived by both parametric and non-parametric methods for comparison. The need for secondary exclusion of reference values based on the latent abnormal values exclusion (LAVE) method was examined. The magnitude of variation attributable to gender, age and region was calculated as the standard deviation ratio (SDR). Sources of variation (age, BMI, physical exercise and smoking level) were investigated using multiple regression analysis. SDRs for gender, age and regional differences were significant for 14, 8 and 2 analytes, respectively. BMI-related changes in test results were most conspicuous for CRP. For some metabolism-related parameters the RIs were wider by the non-parametric method than by the parametric method, and RIs derived using the LAVE method differed significantly from those derived without it. RIs were derived with and without gender partition (BMI, drugs and supplements were considered). RIs applicable to Saudis were established for the majority of chemistry analytes, whereas gender, regional and age partitioning was required for some analytes. The elevated upper limits of metabolic analytes reflect the high prevalence of metabolic syndrome in the Saudi population.
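The SDR referred to above is conventionally computed as the between-subgroup SD from a one-way random-effects ANOVA divided by the between-individual SD. The sketch below follows that convention on mock data; the SDR ≥ 0.4 partitioning threshold is quoted as the common convention in this literature, not from this abstract.

```python
# Standard-deviation ratio (SDR) for judging whether a factor (e.g. sex)
# warrants partitioned reference intervals. Mock data; balanced-design
# approximation of the between-group variance component.
import numpy as np

def sdr(groups):
    """Between-subgroup SD over between-individual (within-group) SD."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([g.mean() for g in groups])
    grand = np.concatenate(groups).mean()
    msb = np.sum(n * (means - grand) ** 2) / (k - 1)        # between-group MS
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n.sum() - k)
    var_between = max((msb - msw) / n.mean(), 0.0)          # variance component
    return np.sqrt(var_between) / np.sqrt(msw)

rng = np.random.default_rng(1)
men   = rng.normal(14.5, 1.0, 400)   # mock haemoglobin-like values
women = rng.normal(13.0, 1.0, 400)
print(f"SDR(sex) = {sdr([men, women]):.2f}  (partition if >= 0.4, by convention)")
```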
Contamination of dried blood spots - an underestimated risk in newborn screening.
Winter, Theresa; Lange, Anja; Hannemann, Anke; Nauck, Matthias; Müller, Cornelia
2018-01-26
Newborn screening (NBS) is an established screening procedure in many countries worldwide, aiming at the early detection of inborn errors of metabolism. For decades, dried blood spots have been the standard specimen for NBS. The procedure of blood collection is well described and standardized and includes many critical pre-analytical steps. We examined the impact of contamination with some common substances on NBS results obtained from dried blood spot samples; this possible pre-analytical source of uncertainty has been poorly examined in the past. Capillary blood was obtained from 15 adult volunteers and applied to 10 screening filter papers per volunteer. Nine filter papers were contaminated without visible trace. The contaminants were baby diaper rash cream, baby wet wipes, disinfectant, liquid infant formula, liquid infant formula hypoallergenic (HA), ultrasonic gel, breast milk, feces, and urine. The differences between control and contaminated samples were evaluated for 45 NBS quantities, and we assessed whether the contaminations might lead to false-positive NBS results. Eight of the nine investigated contaminants significantly altered NBS analyte concentrations and potentially caused false-positive screening outcomes. Contamination with feces was most influential, affecting 24 of 45 tested analytes, followed by liquid infant formula (HA) and urine, affecting 19 and 13 of 45 analytes, respectively. Contamination of filter paper samples can thus have a substantial effect on NBS results. Our results underline the importance of good pre-analytical training to make the staff aware of this threat and to ensure reliable screening results.
Simulation supported POD for RT test case-concept and modeling
NASA Astrophysics Data System (ADS)
Gollwitzer, C.; Bellon, C.; Deresch, A.; Ewert, U.; Jaenisch, G.-R.; Zscherpel, U.; Mistral, Q.
2012-05-01
Within the framework of the European project PICASSO, the radiographic simulator aRTist (analytical Radiographic Testing inspection simulation tool) developed by BAM has been extended for reliability assessment of film and digital radiography. NDT of safety-relevant components in the aerospace industry requires proof of the probability of detection (POD) of the inspection. Modeling tools can reduce the expense of such extended, time-consuming NDT trials if the simulation results fit the experiment. Our analytic simulation tool consists of three modules for the description of the radiation source, the interaction of radiation with test pieces and flaws, and the detection process, with special focus on film and digital industrial radiography. It features high processing speed with near-interactive frame rates and a high level of realism. A concept has been developed, together with a software extension for reliability investigations, completed by a user interface for planning automatic simulations with varying parameters and defects. Furthermore, an automatic image analysis procedure is included to evaluate defect visibility. Radiographic models generated from the 3D CAD of aero engine components and quality test samples are compared as a precondition for real trials. This enables the evaluation and optimization of film replacement with modern digital equipment for economical NDT and a defined POD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertesz, Vilmos; Van Berkel, Gary J
2011-01-01
Analyte electrolysis using a repetitively pulsed high voltage ion source was investigated and compared to that using a regular, continuously operating direct current high voltage ion source in electrospray ionization mass spectrometry. The extent of analyte electrolysis was explored as a function of the length and frequency of the high voltage pulse using the model compound reserpine in positive ion mode. Using +5 kV as the maximum high voltage amplitude, reserpine was oxidized to its 2-, 4-, 6- and 8-electron oxidation products when direct current high voltage was employed. In contrast, when using a pulsed high voltage, oxidation of reserpine was eliminated by employing the appropriate high voltage pulse length and frequency. This effect was caused by inefficient mass transport of the analyte to the electrode surface during the duration of the high voltage pulse and the subsequent relaxation of the emitter electrode/electrolyte interface during the time period when the high voltage was turned off. This mode of ESI source operation allows analyte electrolysis to be quickly and simply switched on or off electronically via a change in voltage pulse variables.
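The mass-transport argument can be made concrete with the planar diffusion-layer estimate δ ≈ √(πDt): for short pulses the depleted layer at the emitter is too thin for appreciable electrolysis. The numbers below are illustrative, not taken from the study.

```python
# Rough diffusion-layer thickness at the emitter electrode during an HV
# pulse, delta ~ sqrt(pi * D * t): short pulses deplete only a very thin
# layer of analyte. Illustrative numbers only.
import math

D = 5e-10          # m^2/s, typical small-molecule diffusion coefficient
for pulse_ms in (0.1, 1, 10, 100, 1000):
    t = pulse_ms / 1000.0
    delta_um = math.sqrt(math.pi * D * t) * 1e6
    print(f"{pulse_ms:7.1f} ms pulse -> diffusion layer ~ {delta_um:6.2f} um")
```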
Well water quality in rural Nicaragua using a low-cost bacterial test and microbial source tracking.
Weiss, Patricia; Aw, Tiong Gim; Urquhart, Gerald R; Galeano, Miguel Ruiz; Rose, Joan B
2016-04-01
Water-related diseases, particularly diarrhea, are major contributors to morbidity and mortality in developing countries. Monitoring water quality on a global scale is crucial to making progress in population health. Traditional analytical methods are difficult to use in low-resource settings that face severe water quality issues, owing to the inaccessibility of laboratories. This study aimed to evaluate a new low-cost method, the compartment bag test (CBT), in rural Nicaragua. The CBT was used to quantify Escherichia coli in drinking water wells, with the further aim of determining the source(s) of any microbial contamination. Results indicate that the CBT is a viable method for use in remote rural regions. The overall quality of well water in Pueblo Nuevo, Nicaragua was deemed unsafe, and the results suggest that animal fecal wastes may be one of the leading causes of well contamination. Elevation and depth of wells were not found to affect overall water quality. However, rope-pump wells had a 64.1% reduction in contamination when compared with simple wells.
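The CBT reports E. coli as a most-probable-number (MPN) estimate from the pattern of positive compartments. The sketch below shows the underlying maximum-likelihood calculation for a generic compartment design; the volumes are illustrative and not the CBT's actual geometry.

```python
# Most-probable-number (MPN) estimate from a compartmentalized presence/
# absence test: choose the concentration lambda (per mL) that maximizes
# the likelihood of the observed positive/negative pattern.
import numpy as np
from scipy.optimize import minimize_scalar

volumes = np.array([1.0, 3.0, 10.0, 30.0, 56.0])   # mL per compartment (illustrative)
positive = np.array([0, 0, 1, 1, 1], dtype=bool)   # observed outcomes

def neg_log_likelihood(lam):
    p_pos = np.clip(1.0 - np.exp(-lam * volumes), 1e-12, 1 - 1e-12)
    # negatives contribute log P(no cell) = -lam * v exactly
    ll = np.where(positive, np.log(p_pos), -lam * volumes).sum()
    return -ll

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
print(f"MPN ~ {res.x * 100:.1f} per 100 mL")
```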
Propellant grain dynamics in aft attach ring of shuttle solid rocket booster
NASA Technical Reports Server (NTRS)
Verderaime, V.
1979-01-01
An analytical technique is presented for simultaneously incorporating the temperature, dynamic strain, real modulus, and frequency properties of solid propellant in an unsymmetrical vibrating ring mode. All dynamic parameters and sources are defined for a free vibrating ring-grain structure with initial displacement and related to a forced vibrating system to determine the change in real modulus. Application of propellant test data is discussed. The technique was developed to determine the aft attach ring stiffness of the shuttle booster at lift-off.
NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS
This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...
Recent UCN source developments at Los Alamos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seestrom, S.J.; Anaya, J.M.; Bowles, T.J.
The most intense sources of ultracold neutrons (UCN) have been built at reactors, where the high average thermal neutron flux can overcome the low UCN production rate to achieve usable densities of UCN. At spallation neutron sources the average flux available is much lower than at a reactor, though the peak flux can be comparable or higher. The authors have built a UCN source that attempts to take advantage of the high peak flux available at the short-pulse spallation neutron source at the Los Alamos Neutron Science Center (LANSCE) to generate a useful number of UCN. In the source, UCN are produced by Doppler-shifted Bragg scattering, which converts 400-m/s neutrons down into the UCN regime. This source was initially tested in 1996, and various improvements were made based on the results of the 1996 running. These improvements were implemented and tested in 1997. In sections 2 and 3 they discuss the improvements that have been made and the resulting source performance. Recently an even more interesting concept was put forward by Serebrov et al., which involves combining a solid deuterium UCN source, previously studied by Serebrov et al., with a pulsed spallation source to achieve world-record UCN densities. They have initiated a program of calculations and measurements aimed at verifying the solid deuterium UCN source concept. The approach has been to develop an analytical capability, combine it with Monte Carlo calculations of neutron production, and perform benchmark experiments to verify the validity of the calculations. Based on the calculations and measurements, they plan to test a modified version of the Serebrov UCN factory. They estimate that they could produce over 1,000 UCN/cc in a 15-liter volume, using 1 microamp of 800 MeV protons for two seconds every 500 seconds. The results of UCN production measurements are discussed in section 4.
Atayero, Aderemi A; Popoola, Segun I; Egeonu, Jesse; Oludayo, Olumuyiwa
2018-08-01
Citation is one of the important metrics used in measuring the relevance and impact of research publications. The potential of citation analytics may be exploited to understand the gains of publishing scholarly peer-reviewed research outputs in either Open Access (OA) sources or Subscription-Based (SB) sources, in the bid to increase citation impact. However, relevant data required for such comparative analysis must be freely accessible for evidence-based findings and conclusions. In this data article, citation scores (CiteScores) of 2542 OA sources and 15,040 SB sources indexed in Scopus from 2014 to 2016 are presented and analyzed based on a set of five inclusion criteria. A robust dataset, which contains the CiteScores of the OA and SB publication sources included, is attached as supplementary material to this data article to facilitate further reuse. Descriptive statistics and frequency distributions of OA CiteScores and SB CiteScores are presented in tables. Boxplot representations and scatter plots are provided to show the statistical distributions of OA CiteScores and SB CiteScores across the three sub-categories (Book Series, Journal, and Trade Journal). Correlation coefficient and p-value matrices are made available within the data article. In addition, Probability Density Functions (PDFs) and Cumulative Distribution Functions (CDFs) of OA CiteScores and SB CiteScores are computed and the results are presented in tables and graphs. Furthermore, Analysis of Variance (ANOVA) and multiple comparison post-hoc tests are conducted to understand the statistical difference (and its significance, if any) in the citation impact of OA and SB publication sources based on CiteScore. In the long run, the data provided in this article will help policy makers and researchers in Higher Education Institutions (HEIs) to identify the appropriate publication source type and category for dissemination of scholarly research findings with maximum citation impact.
Controlling the spectral shape of nonlinear Thomson scattering with proper laser chirping
Rykovanov, S. G.; Geddes, C. G. R.; Schroeder, C. B.; ...
2016-03-18
Effects of nonlinearity in Thomson scattering of a high intensity laser pulse from electrons are analyzed. Analytic expressions for laser pulse shaping in frequency (chirping) are obtained which control spectrum broadening for high laser pulse intensities. These analytic solutions allow prediction of the spectral form and required laser parameters to avoid broadening. Results of analytical and numerical calculations agree well. The control over the scattered radiation bandwidth allows narrow bandwidth sources to be produced using high scattering intensities, which in turn greatly improves scattering yield for future x- and gamma-ray sources.
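The compensation idea can be illustrated with the commonly quoted scaling for on-axis Thomson backscatter, ω_s(t) ≈ 4γ²ω_L(t)/(1 + a(t)²/2): chirping the laser frequency in proportion to 1 + a(t)²/2 holds the scattered frequency fixed across the pulse. This restated scaling is an assumption for illustration, not the paper's full derivation.

```python
# Illustration of chirp compensation in nonlinear Thomson scattering,
# assuming the commonly quoted on-axis backscatter scaling
#   w_s(t) ~ 4 * gamma^2 * w_L(t) / (1 + a(t)^2 / 2).
import numpy as np

gamma = 50.0
w0 = 1.0                                # laser frequency, arbitrary units
t = np.linspace(-2, 2, 9)
a = 1.5 * np.exp(-t**2)                 # Gaussian intensity envelope

w_unchirped = 4 * gamma**2 * w0 / (1 + a**2 / 2)
w_chirped = 4 * gamma**2 * (w0 * (1 + a**2 / 2)) / (1 + a**2 / 2)  # constant

print("spread, no chirp :", f"{np.ptp(w_unchirped) / w_unchirped.mean():.1%}")
print("spread, chirped  :", f"{np.ptp(w_chirped) / w_chirped.mean():.1%}")
```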
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
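The four-sphere formulas are too lengthy to reproduce here, but a common sanity check for EEG forward solvers is the elementary current-dipole potential in an unbounded homogeneous conductor, V = p·r̂/(4πσr²), evaluated in the sketch below with illustrative values.

```python
# Current-dipole potential in an infinite homogeneous conductor,
#   V(r) = (p . r) / (4 * pi * sigma * |r|^3),
# often used as a baseline before testing layered models such as the
# four-sphere model. Values are illustrative.
import numpy as np

sigma = 0.3                               # S/m, brain-like conductivity
p = np.array([0.0, 0.0, 1e-8])            # dipole moment, A*m (radial)
r_dipole = np.array([0.0, 0.0, 0.078])    # dipole location, m
r_electrode = np.array([0.0, 0.0, 0.10])  # measurement point, m

r = r_electrode - r_dipole
V = p @ r / (4 * np.pi * sigma * np.linalg.norm(r) ** 3)
print(f"potential ~ {V * 1e6:.2f} uV")
```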
An Analysis of Rocket Propulsion Testing Costs
NASA Technical Reports Server (NTRS)
Ramirez-Pagan, Carmen P.; Rahman, Shamim A.
2009-01-01
The primary mission at NASA Stennis Space Center (SSC) is rocket propulsion testing. Such testing is generally performed within two arenas: (1) production testing for certification and acceptance, and (2) developmental testing for prototype or experimental purposes. The customer base consists of NASA programs, DOD programs, and commercial programs. Resources in place to perform on-site testing include both civil servants and contractor personnel, hardware and software including data acquisition and control, and 6 test stands with a total of 14 test positions/cells. For several business reasons there is a need to augment understanding of the test costs for the various types of test campaigns. Historical propulsion test data were evaluated and analyzed in many different ways with the intent of finding any correlation or statistics that could help produce more reliable and accurate cost estimates and projections. The analytical efforts included timeline trends, statistical curve fitting, average cost per test, cost per test second, test cost timelines, and test cost envelopes. Further, the analytical effort included examining test cost from the perspective of thrust level and test article characteristics. Some of the analytical approaches did not produce evidence strong enough for further analysis; others yielded promising results and are candidates for further development and focused study. Information was organized into three elements: a Project Profile, a Test Cost Timeline, and a Cost Envelope. The Project Profile is a snapshot of the project life cycle on a timeline, which includes various statistical analyses. The Test Cost Timeline shows the cumulative average test cost, for each project, at each month in which there was test activity. The Test Cost Envelope shows a range of cost for a given number of tests. The supporting information upon which this study was performed came from diverse sources, so it was necessary to build several intermediate databases in order to understand, validate, and manipulate the data. These intermediate databases (validated historical accounts of schedule, test activity, and cost) are by themselves of great value and utility. For example, for the Project Profile we were able to merge schedule, cost, and test activity. This kind of historical account conveys important information about the sequence of events, lead times, and opportunities for improvement in future propulsion test projects. The Product Requirement Document (PRD) file is a collection of data extracted from each project PRD (technical characteristics, test requirements, and projections of cost, schedule, and test activity). This information could help expedite the development of future PRDs (or equivalent documents) on similar projects, and could also, when compared with actual results, help improve projections of cost and schedule. The file can also be sorted by the parameter of interest to perform a visual review of potential common themes or trends. The process of searching, collecting, and validating propulsion test data encountered many difficulties, which led to a set of recommendations for improvements to facilitate future data gathering and analysis.
SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation
NASA Technical Reports Server (NTRS)
Suarez, Vicente J.; Lewandowski, Edward J.; Callahan, John
2006-01-01
The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical RPS launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.
SRG110 Stirling Generator Dynamic Simulator Vibration Test Results and Analysis Correlation
NASA Technical Reports Server (NTRS)
Lewandowski, Edward J.; Suarez, Vicente J.; Goodnight, Thomas W.; Callahan, John
2007-01-01
The U.S. Department of Energy (DOE), Lockheed Martin (LM), and NASA Glenn Research Center (GRC) have been developing the Stirling Radioisotope Generator (SRG110) for use as a power system for space science missions. The launch environment enveloping potential missions results in a random input spectrum that is significantly higher than historical radioisotope power system (RPS) launch levels and is a challenge for designers. Analysis presented in prior work predicted that tailoring the compliance at the generator-spacecraft interface reduced the dynamic response of the system thereby allowing higher launch load input levels and expanding the range of potential generator missions. To confirm analytical predictions, a dynamic simulator representing the generator structure, Stirling convertors and heat sources was designed and built for testing with and without a compliant interface. Finite element analysis was performed to guide the generator simulator and compliant interface design so that test modes and frequencies were representative of the SRG110 generator. This paper presents the dynamic simulator design, the test setup and methodology, test article modes and frequencies and dynamic responses, and post-test analysis results. With the compliant interface, component responses to an input environment exceeding the SRG110 qualification level spectrum were all within design allowables. Post-test analysis included finite element model tuning to match test frequencies and random response analysis using the test input spectrum. Analytical results were in good overall agreement with the test results and confirmed previous predictions that the SRG110 power system may be considered for a broad range of potential missions, including those with demanding launch environments.
Gilbert-López, Bienvenida; García-Reyes, Juan F; Meyer, Cordula; Michels, Antje; Franzke, Joachim; Molina-Díaz, Antonio; Hayen, Heiko
2012-11-21
A dielectric barrier discharge ionization (DBDI) LC/MS interface is based on the use of a low-temperature helium plasma, which offers the possibility of simultaneous ionization of species with a wide variety of physicochemical properties. In this work, the performance of LC/DBDI-MS for trace analysis of highly relevant species in food and the environment has been examined. Over 75 relevant species, including multiclass priority organic contaminants and residues such as pesticides, polycyclic aromatic hydrocarbons, organochlorine species, pharmaceuticals, personal care products, and drugs of abuse, were tested. LC/DBDI-MS performance for this application was assessed and compared with standard LC/MS sources (electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI)). The benchtop Orbitrap mass spectrometer used features a 10 Hz polarity-switching mode, so that both positive- and negative-ion acquisitions are possible with acquisition cycles matching the requirements of fast liquid chromatography. Both polar and nonpolar species (including those typically analyzed by GC/electron ionization-MS) can be tested in a single run using the polarity-switching mode. The methodology was found to be effective in detecting a wide array of organic compounds at concentration levels in the low ng L(-1) to μg kg(-1) range in wastewater and food matrices, respectively. Linearity was evaluated in an olive oil extract, with good correlation coefficients over the studied range. Additionally, minor matrix effects (≤15% signal suppression or enhancement) were observed for most of the studied analytes in this complex fatty matrix. The results were compared with data from both ESI and APCI sources: the proposed DBD-based ion source merged the coverage of ESI and APCI in terms of analyte ionization and showed higher overall sensitivity. The use of this approach further extends the coverage of current LC/MS methods towards an even larger variety of chemical species, including both polar and nonpolar (non-ESI-amenable) species, and may find applications in fields such as food and environmental testing or metabolomics, where GC/MS and LC/MS are combined to cover as many different species as possible.
The interpretation of hair analysis for drugs and drug metabolites.
Cuypers, Eva; Flanagan, Robert J
2018-02-01
Head hair analysis for drugs and drug metabolites has been used widely with the aim of detecting exposure in the weeks or months prior to sample collection. However, inappropriate interpretation of results has likely led to serious miscarriages of justice, especially in child custody cases. The aim of this review is to assess critically what can, and perhaps more importantly, what cannot be claimed as regards the interpretation of hair test results in a given set of circumstances in order to inform future testing. We searched the PubMed database for papers published 2010-2016 using the terms "hair" and "drug" and "decontamination", the terms "hair" and "drug" and "contamination", the terms "hair" and "drug-facilitated crime", the terms "hair" and "ethyl glucuronide", and the terms "hair", "drug testing" and "analysis". Study of the reference lists of the 46 relevant papers identified 25 further relevant citations, giving a total of 71 citations. Hair samples: Drugs, drug metabolites and/or decomposition products may arise not only from deliberate drug administration, but also via deposition from a contaminated atmosphere if drug(s) have been smoked or otherwise vaporized in a confined area, transfer from contaminated surfaces via food/fingers, etc., and transfer from sweat and other secretions after a single large exposure, which could include anesthesia. Excretion in sweat of endogenous analytes such as γ-hydroxybutyric acid is a potential confounder if its use is to be investigated. Cosmetic procedures such as bleaching or heat treatment of hair may remove analytes prior to sample collection. Hair color and texture, the area of the head the sample is taken from, the growth rate of individual hairs, and how the sample has been stored, may also affect the interpretation of results. Toxicological analysis: Immunoassay results alone do not provide reliable evidence on which to base judicial decisions. Gas or liquid chromatography with mass spectrometric detection (GC- or LC-MS), if used with due caution, can give accurate analyte identification and high sensitivity, but many problems remain. Firstly, it is not possible to prepare assay calibrators or quality control material except by soaking "blank" hair in solutions of appropriate analytes, drying, and then subjecting the dried material to an analysis. The fact that solvents can be used to add analytes to hair points to the fact that analytes can arrive not only on, but also in hair from exogenous sources. A range of solvent-washing procedures have been advocated to "decontaminate" hair by removing adsorbed analytes, but these carry the risk of transporting adsorbed analytes into the medulla of the hair therefore confounding the whole procedure. This is especially true if segmental analysis is being undertaken in order to provide a "time course" of drug exposure. Proposed clinical applications of hair analysis: There have been a number of reports where drugs seemingly administered during the perpetration of a crime have been detected in head hair. However, detailed evaluation of these reports is difficult without full understanding of the possible effects of any "decontamination" procedures used and of other variables such as hair color or cosmetic hair treatment. 
Similarly, in child custody cases and where the aim is to demonstrate abstinence from drug or alcohol use, the issues of possible exogenous sources of analyte, and of the large variations in analyte concentrations reported in known users, continue to confound the interpretation of results in individual cases. Interpretation of results of head hair analysis must take into account all the available circumstantial and other evidence especially as regards the methodology employed and the possibility of surface contamination of the hair prior to collection.
DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis
NASA Astrophysics Data System (ADS)
Pernigotti, D.; Belis, C. A.
2018-05-01
DeltaSA is an R package and a Java online tool developed at the EC Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor-analytical models (source identification) and model performance evaluation. Source identification is based on the similarity between a given factor and source chemical profiles from public databases. Model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profile and time series similarity. In this study, a sensitivity analysis of the model performance criteria is carried out using the results of a synthetic dataset for which "a priori" references are available. The consensus-modulated standard deviation p_unc is the best choice for model performance evaluation when a conservative approach is adopted.
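As a generic illustration of the source-identification step (not DeltaSA's exact indicators), the sketch below scores a factor profile against candidate source profiles using Pearson correlation over chemical species.

```python
# Generic factor-to-source matching by chemical-profile similarity.
# Pearson correlation here is an illustrative stand-in for DeltaSA's
# indicators. Profiles are mass fractions over species
# (OC, EC, NO3, SO4, K, Fe), all values invented.
import numpy as np

factor = np.array([0.45, 0.30, 0.05, 0.05, 0.10, 0.05])   # unknown factor
library = {
    "traffic":         np.array([0.40, 0.35, 0.05, 0.05, 0.05, 0.10]),
    "biomass burning": np.array([0.50, 0.10, 0.05, 0.05, 0.25, 0.05]),
    "secondary":       np.array([0.10, 0.02, 0.40, 0.45, 0.02, 0.01]),
}

for name, profile in library.items():
    r = np.corrcoef(factor, profile)[0, 1]
    print(f"{name:16s} r = {r:+.2f}")
```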
NASA Astrophysics Data System (ADS)
Shan, Zhendong; Ling, Daosheng
2018-02-01
This article develops an analytical solution for the transient wave propagation generated by a cylindrical P-wave line source in a semi-infinite elastic solid with a fluid layer. The analytical solution is presented in a simple closed form in which each term represents a transient physical wave. The Scholte equation is derived, through which the Scholte wave velocity can be determined; the Scholte wave is the wave that propagates along the interface between the fluid and the solid. To develop the analytical solution, the wave fields in the fluid and solid are defined, their analytical solutions in the Laplace domain are derived using the boundary and interface conditions, and the solutions are then decomposed into series form according to the power series expansion method. Each term of the series solution has a clear physical meaning and represents a transient wave path. Finally, by applying Cagniard's method and the convolution theorem, the analytical solutions are transformed into the time domain. Numerical examples are provided to illustrate some interesting features in the fluid layer, at the interface and in the semi-infinite solid. When the P-wave velocity in the fluid is higher than that in the solid, two head waves in the solid, one head wave in the fluid and a Scholte wave at the interface are observed for the cylindrical P-wave line source.
42 CFR 493.859 - Standard; ABO group and D (Rho) typing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...
42 CFR 493.859 - Standard; ABO group and D (Rho) typing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...
42 CFR 493.859 - Standard; ABO group and D (Rho) typing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...
NASA Astrophysics Data System (ADS)
Quinta-Nova, Luis; Fernandez, Paulo; Pedro, Nuno
2017-12-01
This work develops a decision support system based on multicriteria spatial analysis to assess the potential for generating biomass residues from forestry sources in a region of Portugal (Beira Baixa). A set of environmental, economic and social criteria was defined, evaluated and weighted in the context of Saaty's analytic hierarchies, and the best alternatives were obtained by applying the Analytic Hierarchy Process (AHP). The model was applied to this central region of Portugal, where forest and agriculture are the most representative land uses. Finally, a sensitivity analysis of the set of factors and their associated weights was performed to test the robustness of the model. The proposed evaluation model provides a valuable reference for decision makers in establishing a standardized means of selecting the optimal location for new biomass plants.
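The AHP step reduces a pairwise-comparison matrix of criteria to weights via its principal eigenvector and checks judgment consistency with Saaty's consistency ratio. The matrix below is illustrative, not the study's actual judgments.

```python
# AHP: criterion weights from the principal eigenvector of a pairwise-
# comparison matrix, plus Saaty's consistency ratio (CR < 0.1 is
# conventionally acceptable). The 3x3 matrix is illustrative.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # environmental vs economic vs social
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)        # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```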
NASA Technical Reports Server (NTRS)
Millard, J. P.; Green, M. J.; Sommer, S. C.
1972-01-01
An analytical study was conducted to develop a sensor for measuring the temperature of a planetary atmosphere from an entry vehicle traveling at supersonic speeds and having a detached shock. Such a sensor has been used in the Planetary Atmosphere Experiments Test Probe (PAET) mission and is planned for the Viking-Mars mission. The study specifically considered butt-welded thermocouple sensors stretched between two support posts; however, the factors considered are sufficiently general to apply to other sensors as well. This study included: (1) an investigation of the relation between sensor-measured temperature and free-stream conditions; (2) an evaluation of the effects of extraneous sources of heat; (3) the development of a computer program for evaluating sensor response during entry; and (4) a parametric study of sensor design characteristics.
NASA Astrophysics Data System (ADS)
Ramezani, Zeinab; Orouji, Ali A.
2017-08-01
This paper proposes and investigates a double-gate (DG) MOSFET that emulates a tunnel field-effect transistor (M-TFET). This novel concept combines a double-gate MOSFET with work-function engineering so that the device behaves as a tunneling field-effect transistor. In the proposed structure, in addition to the main gate, another gate is placed over the source region with zero applied voltage and a proper work function to convert the source region from N+ to P+. We examine the impact of varying the source-gate work function and source doping on the device parameters. The simulation results indicate that the M-TFET is well suited to switching applications. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the model, the analytical results have been compared with simulations from the SILVACO ATLAS device simulator.
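Two-dimensional analytical potential models of double-gate devices are typically organized around the body's natural length, λ = √(ε_si·t_si·t_ox/(2ε_ox)). The sketch below evaluates this standard quantity for illustrative dimensions; it is not the paper's specific model.

```python
# Natural (characteristic) length of a double-gate MOSFET,
#   lambda = sqrt( eps_si * t_si * t_ox / (2 * eps_ox) ),
# the lateral decay scale that appears in standard 2-D analytical
# potential models of DG devices. Dimensions are illustrative.
import math

eps0 = 8.854e-12                       # F/m
eps_si, eps_ox = 11.7 * eps0, 3.9 * eps0
t_si, t_ox = 10e-9, 1.5e-9             # body and oxide thickness, m

lam = math.sqrt(eps_si * t_si * t_ox / (2 * eps_ox))
print(f"natural length ~ {lam * 1e9:.1f} nm")
```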
Performance of the MOMA Gas Chromatograph-Mass Spectrometer onboard the 2018 ExoMars Mission
NASA Astrophysics Data System (ADS)
Buch, Arnaud; Pinnick, Veronica; Szopa, Cyril; Grand, Noël; Freissinet, Caroline; Danell, Ryan; van Ameron, Friso; Arevalo, Ricardo; Brinckerhoff, William; Raulin, François; Mahaffy, Paul; Goesmann, Fred
2015-04-01
The Mars Organic Molecule Analyzer (MOMA) is a dual ion source linear ion trap mass spectrometer designed for the 2018 joint ESA-Roscosmos mission to Mars. The main scientific aim of the mission is to search for signs of extant or extinct life in the near subsurface of Mars by acquiring samples from as deep as 2 m below the surface. MOMA will be a key analytical tool in providing chemical (molecular) information from the solid samples, with particular focus on the characterization of organic content. The MOMA instrument itself is a joint venture of NASA and ESA to develop a mass spectrometer capable of analyzing samples from a pyrolysis gas chromatograph (GC) as well as by ambient-pressure laser desorption ionization (LDI). The combination of the two analytical techniques allows the chemical characterization of a broad range of compounds, including volatile and non-volatile species. Generally, MOMA can provide information on the elemental and molecular makeup, polarity, chirality and isotopic patterns of analyte species. Here we report on the current performance of the MOMA prototype instruments, specifically the demonstration of the gas chromatography-mass spectrometry (GC-MS) mode of operation. Both instruments were first tested separately and then coupled in order to test the efficiency of the future MOMA GC-MS instrument. The main objective of the second step was to test the quantitative response of both instruments while coupled and to characterize the combined instrument's detection limit for several compounds. A final experiment was conducted to test the feasibility of separating and detecting a mixture contained in a soil sample introduced into the MOMA oven.
WIPP waste characterization program sampling and analysis guidance manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-01-01
The Waste Isolation Pilot Plant (WIPP) Waste Characterization Program Sampling and Analysis Guidance Manual (Guidance Manual) provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Quality Assurance Program Plan (QAPP) for the WIPP Experimental-Waste Characterization Program (the Program). This Guidance Manual includes all of the sampling and testing methodologies accepted by the WIPP Project Office (DOE/WPO) for use in implementing the Program requirements specified in the QAPP. This includes methods for characterizing representative samples of transuranic (TRU) wastes at DOE generator sites with respect to the gas generation controlling variables defined in the WIPP bin-scale and alcove test plans, as well as waste container headspace gas sampling and analytical procedures to support waste characterization requirements under the WIPP test program and the Resource Conservation and Recovery Act (RCRA). The procedures in this Guidance Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site-specific procedures. The use of these procedures is intended to provide the necessary sensitivity, specificity, precision, and comparability of analyses and test results. The solutions to achieving specific program objectives will depend upon facility constraints, compliance with DOE Orders and DOE facilities' operating contractor requirements, and the knowledge and experience of the TRU waste handlers and analysts. With some analytical methods, such as gas chromatography/mass spectrometry, the Guidance Manual procedures may be used directly. With other methods, such as nondestructive/destructive characterization, the Guidance Manual provides guidance rather than a step-by-step procedure.
Kleine-Tebbe, Jörg; Jakob, Thilo
Allergen molecules (synonyms: single allergens, allergen components) open up new horizons for the targeted allergen-specific diagnostics of immunoglobulin E (IgE) in singleplex determination. The following rationales support the targeted use of allergen molecules and, more importantly, improve test properties: (1) increased test sensitivity ("analytical sensitivity"), particularly when important allergens are under-represented or lacking in the extract; (2) improved test selectivity (analytical specificity), particularly when the selected IgE repertoire against an allergen yields additional information on: (a) potential risk, (b) possible cross-reactivity, or (c) primary (species-specific) sensitization. However, the appropriate indication for the use of single allergens can only be established on a case-by-case basis (depending on the clinical context and previous history) and in an allergen-specific manner (depending on the allergen source and the single allergens available), rather than in a standardized way. Numerous investigations on suspected food allergy, insect venom allergy, or sensitization to respiratory allergens have meanwhile demonstrated the successful use of defined molecules for allergen-specific singleplex IgE diagnosis. Specific IgE to single allergens is limited in its suitability to predict the clinical relevance of sensitivity on an individual basis. In food allergies, one can at best identify the relative risk of a clinical reaction on the basis of an IgE profile, but no absolutely reliable prediction on (future) tolerance can be made. Ultimately, the clinical relevance of all IgE findings depends on the presence of corresponding symptoms and can only be assessed on an individual basis (previous history, symptom log, and provocation testing with the relevant allergen source where appropriate). Thus, also in molecular allergology, the treating physician and not the test result should determine the clinical relevance of diagnostic findings. Supplementary material is available for this article at 10.1007/s40629-015-0067-z and is accessible for authorized users.
NASA Astrophysics Data System (ADS)
Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-09-01
Tracking edge-reflected acoustic emission (AE) waves allows the localization of their sources; in bounded isotropic plate structures, a single sensor may suffice for such source localizations. The primary goal of this paper is to develop a three-step probabilistic framework to quantify the uncertainties associated with single-sensor localizations. According to this framework, a probabilistic approach is first used to estimate the direct distances between AE sources and the sensor. Then, an analytical model is used to reconstruct the envelope of edge-reflected AE signals based on the source-to-sensor distance estimates and their first arrivals. Finally, the correlation between the probabilistically reconstructed envelopes and recorded AE signals is used to estimate confidence contours for the location of AE sources. To validate the proposed framework, Hsu-Nielsen pencil lead break (PLB) tests were performed on the surface as well as on the edges of an aluminum plate. The localization results show that the estimated confidence contours surround the actual source locations. In addition, the performance of the framework was tested in a noisy environment simulated by two dummy transducers and an arbitrary wave generator. The results show that in low-noise environments, the shape and size of the confidence contours depend on the sources and their locations; in highly noisy environments, however, the size of the confidence contours increases monotonically with the noise floor. These results suggest that the proposed probabilistic framework can provide comprehensive information regarding the location of AE sources.
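The correlation step of such a framework can be sketched generically: take Hilbert-transform envelopes of a model waveform and a recorded waveform, then score them with a normalized correlation. The signals below are synthetic; the paper's model-reconstructed envelopes differ.

```python
# Envelope comparison of the kind used to score candidate source
# locations: Hilbert-transform envelopes, then normalized correlation.
import numpy as np
from scipy.signal import hilbert

fs = 1_000_000                        # 1 MHz sampling
t = np.arange(0, 0.002, 1 / fs)
burst = np.exp(-((t - 0.0005) / 0.0002) ** 2) * np.sin(2 * np.pi * 150e3 * t)
recorded = burst + 0.05 * np.random.default_rng(2).normal(size=t.size)

env_model = np.abs(hilbert(burst))    # stand-in for reconstructed envelope
env_rec = np.abs(hilbert(recorded))

rho = np.corrcoef(env_model, env_rec)[0, 1]
print(f"envelope correlation = {rho:.3f}")
```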
Klassen, Tara L.; von Rüden, Eva-Lotta; Drabek, Janice; Noebels, Jeffrey L.; Goldman, Alica M.
2013-01-01
Genetic testing and research have increased the demand for high-quality DNA that has traditionally been obtained by venipuncture. However, venous blood collection may prove difficult in special populations and when large-scale specimen collection or exchange is prerequisite for international collaborative investigations. Guthrie/FTA card–based blood spots, buccal scrapes, and finger nail clippings are DNA-containing specimens that are uniquely accessible and thus attractive as alternative tissue sources (ATS). The literature details a variety of protocols for extraction of nucleic acids from a singular ATS type, but their utility has not been systematically analyzed in comparison with conventional sources such as venous blood. Additionally, the efficacy of each protocol is often equated with the overall nucleic acid yield but not with the analytical performance of the DNA during mutation detection. Together with a critical in-depth literature review of published extraction methods, we developed and evaluated an all-inclusive approach for serial, systematic, and direct comparison of DNA utility from multiple biological samples. Our results point to the often underappreciated value of these alternative tissue sources and highlight ways to maximize the ATS-derived DNA for optimal quantity, quality, and utility as a function of extraction method. Our comparative analysis clarifies the value of ATS in genomic analysis projects for population-based screening, diagnostics, molecular autopsy, medico-legal investigations, or multi-organ surveys of suspected mosaicisms. PMID:22796560
Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.
Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli
2018-03-13
The pre-pre-analytical and pre-analytical phases account for a major share of laboratory errors. This study considered a very common procedure, the oral glucose tolerance test, to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability and help in the decision making of laboratory personnel. The aim of this research is to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested their knowledge of patient preparation. QI-1, the appropriateness of the test result, had the highest error rate. Although QI-5, for sample collection, had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely on a yearly basis to identify errors, take corrective action and facilitate the gradual introduction of quality indicators into routine practice.
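Quality indicators of this kind are usually reported as simple error rates, errors over opportunities, sometimes rescaled to defects-per-million. A minimal sketch with invented counts:

```python
# Per-indicator pre-analytical error rates: errors / opportunities,
# reported as a percentage and as defects-per-million. Counts invented.
indicators = {                      # QI: (errors, total samples)
    "QI-1 test appropriateness": (42, 500),
    "QI-5 sample collection":    (3, 500),
    "QI-7 sample transport":     (11, 500),
}
for name, (err, total) in indicators.items():
    rate = err / total
    print(f"{name:28s} {rate:6.1%}  ({rate * 1e6:,.0f} per million)")
```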
An ion source for radiofrequency-pulsed glow discharge time-of-flight mass spectrometry
NASA Astrophysics Data System (ADS)
González Gago, C.; Lobo, L.; Pisonero, J.; Bordel, N.; Pereiro, R.; Sanz-Medel, A.
2012-10-01
A Grimm-type glow discharge (GD) has been designed and constructed as an ion source for pulsed radiofrequency GD spectrometry coupled to an orthogonal time-of-flight mass spectrometer. Pulse shapes of argon species and analytes were studied as a function of the discharge conditions using the new in-house ion source (UNIOVI GD), and the results were compared with a previous design (PROTOTYPE GD). Different behavior and shapes of the pulse profiles were observed for the two sources evaluated, particularly for the ionic plasma gas species detected. In the more analytically relevant region (the afterglow), signals for 40Ar+ with the new design were negligible, while maximum intensity was reached earlier in time for 41(ArH)+ than when using the PROTOTYPE GD. Moreover, while the maximum 40Ar+ signals measured along the pulse period were similar in both sources, the 41(ArH)+ and 80(Ar2)+ signals tended to be noticeably higher using the PROTOTYPE chamber. The UNIOVI GD design was shown to be adequate for sensitive direct analysis of solid samples, offering linear calibration graphs and good crater shapes. Limits of detection (LODs) are of the same order of magnitude for both sources, although the UNIOVI source provides slightly better LODs for analytes with masses slightly higher than that of 41(ArH)+.
Shelley, Jacob T.; Wiley, Joshua S.; Hieftje, Gary M.
2011-01-01
The advent of ambient desorption/ionization mass spectrometry has resulted in a strong interest in ionization sources that are capable of direct analyte sampling and ionization. One source that has enjoyed increasing interest is the Flowing Atmospheric-Pressure Afterglow (FAPA). FAPA has been proven capable of directly desorbing/ionizing samples in any phase (solid, liquid, or gas) and with impressive limits of detection (<100 fmol). The FAPA was also shown to be less affected by competitive-ionization matrix effects than other plasma-based sources. However, the original FAPA design exhibited substantial background levels, cluttered background spectra in the negative-ion mode, and significant oxidation of aromatic analytes, which ultimately compromised analyte identification and quantification. In the present study, a change in the FAPA configuration from a pin-to-plate to a pin-to-capillary geometry was found to vastly improve performance. Background signals in positive- and negative-ionization modes were reduced by 89% and 99%, respectively. Additionally, the capillary anode strongly reduced the amount of atomic oxygen that could cause oxidation of analytes. Temperatures of the gas stream that interacts with the sample, which heavily influences desorption capabilities, were compared between the two sources by means of IR thermography. The performance of the new FAPA configuration is evaluated through the determination of a variety of compounds in positive- and negative-ion mode, including agrochemicals and explosives. A detection limit of 4 amol was found for the direct determination of the agrochemical ametryn, and appears to be spectrometer-limited. The ability to quickly screen for analytes in bulk liquid samples with the pin-to-capillary FAPA is also shown. PMID:21627097
Battaglia, Maurizio; Hill, D.P.
2009-01-01
Joint measurements of ground deformation and micro-gravity changes are an indispensable component of any volcano monitoring strategy. A number of analytical mathematical models are available in the literature that can be used to fit geodetic data and infer source location, depth and density. Bootstrap statistical methods allow estimation of the range of the inferred parameters. Although analytical models often assume that the crust is elastic, homogeneous and isotropic, they can take into account different source geometries, the influence of topography, and gravity background noise. The careful use of analytical models, together with high-quality data sets, can produce valuable insights into the nature of the deformation/gravity source. Here we present a review of various modeling methods, and use the historical unrest at Long Valley caldera (California) from 1982 to 1999 to illustrate the practical application of analytical modeling and bootstrap methods to constrain the source of unrest. A key question is whether the unrest at Long Valley since the late 1970s can be explained without calling upon an intrusion of magma. The answer, apparently, is no. Our modeling indicates that the inflation source is a slightly tilted prolate ellipsoid (dip angle between 91° and 105°) at a depth of 6.5 to 7.9 km beneath the caldera resurgent dome, with an aspect ratio between 0.44 and 0.60, a volume change from 0.161 to 0.173 km3 and a density of 1241 to 2093 kg/m3. The larger uncertainty of the density estimate reflects the higher noise of the gravity measurements. These results are consistent with the intrusion of silicic magma with a significant amount of volatiles beneath the caldera resurgent dome. © 2008 Elsevier B.V.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROCEDURES Equipment, Fuel, and Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and analytical gases. This section addresses emission sampling and analytical equipment, test fuels, and analytical gases. (b) The provisions of 40 CFR part 1065...
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROCEDURES Equipment, Fuel, and Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and analytical gases. This section addresses emission sampling and analytical equipment, test fuels, and analytical gases. (b) The provisions of 40 CFR part 1065...
Analytical solutions for efficient interpretation of single-well push-pull tracer tests
Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations descr...
Concentration history during pumping from a leaky aquifer with stratified initial concentration
Goode, Daniel J.; Hsieh, Paul A.; Shapiro, Allen M.; Wood, Warren W.; Kraemer, Thomas F.
1993-01-01
Analytical and numerical solutions are employed to examine the concentration history of a dissolved substance in water pumped from a leaky aquifer. Many aquifer systems are characterized by stratification, for example, a sandy layer overlain by a clay layer. To obtain information about separate hydrogeologic units, aquifer pumping tests are often conducted with a well penetrating only one of the layers. When the initial concentration distribution is also stratified (the concentration varies with elevation only), the concentration breakthrough in the pumped well may be interpreted to provide information on aquifer hydraulic and transport properties. To facilitate this interpretation, we present some simple analytical and numerical solutions for limiting cases and illustrate their application to a fractured bedrock/glacial drift aquifer system where the solute of interest is dissolved radon gas. In addition to qualitative information on water source, this method may yield estimates of effective porosity and saturated thickness (or fracture transport aperture) from a single-hole test. Little information about dispersivity is obtained because the measured concentration is not significantly affected by dispersion in the aquifer.
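A minimal sketch of the flavor of limiting case the abstract mentions, assuming purely advective radial plug flow to a fully penetrating well: a solute front initially at radius r arrives once the pumped volume equals the pore volume of the intervening cylinder, t = pi*b*n*r^2/Q, which can be inverted for effective porosity. The function and the numbers are illustrative, not the authors' solutions:

import math

def effective_porosity(Q, t_arrival, b, r):
    # Q: pumping rate (m^3/s); t_arrival: observed breakthrough time (s);
    # b: saturated thickness (m); r: initial radius of the solute front (m).
    return Q * t_arrival / (math.pi * b * r**2)

# Illustrative numbers only; a few-percent result is plausible for fractured rock.
print(effective_porosity(Q=2e-3, t_arrival=3.6e5, b=10.0, r=30.0))   # ~0.025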
Experimental and Analytical Performance of a Dual Brayton Power Conversion System
NASA Technical Reports Server (NTRS)
Lavelle, Thomas A.; Hervol, David S.; Briggs, Maxwell; Owen, A. Karl
2009-01-01
The interactions between two closed Brayton cycle (CBC) power conversion units (PCU) which share a common gas inventory and heat source have been studied experimentally using the Dual Brayton Power Conversion System (DBPCS) and analytically using the Closed-Cycle System Simulation (CCSS) computer code. Selected operating modes include steady-state operation at equal and unequal shaft speeds and various start-up scenarios. Equal shaft speed steady-state tests were conducted for heater exit temperatures of 840 to 950 K and speeds of 50 to 90 krpm, providing a system performance map. Unequal shaft speed steady-state testing over the same operating conditions shows that the power produced by each Brayton is sensitive to the operating conditions of the other due to redistribution of gas inventory. Startup scenarios show that starting the engines one at a time can dramatically reduce the required motoring energy. Although the DBPCS is not considered a flight-like system, these insights, as well as the operational experience gained from operating and modeling this system, provide valuable information for the future development of Brayton systems.
The partial coherence modulation transfer function in testing lithography lens
NASA Astrophysics Data System (ADS)
Huang, Jiun-Woei
2018-03-01
Because lithography demands high performance in projecting the semiconductor mask onto the wafer, the lens must be nearly free of spherical and coma aberration; in situ optical testing for diagnosing lens performance therefore has to be established to verify performance and to suggest further improvements before the lens is built and integrated with the light source. The modulation transfer function at the critical dimension (CD) is the main performance parameter for evaluating the smallest line width that a semiconductor platform can fabricate when producing tiny integrated circuits. Although the modulation transfer function (MTF) is popularly used to evaluate optical systems, in lithography the contrast of each line pair is measured in one or two dimensions while the lens stands alone on a test bench, integrated with a coherent or nearly coherent light source, at small dimensions near the optical diffraction limit; the measured MTF is then contributed not only by the lens but also by the illumination of the platform. In this study, the partial coherence modulation transfer function (PCMTF) is proposed for testing a lithography lens by measuring the MTF of the in situ lens at high spatial frequencies, blended with illumination from partially coherent and incoherent light sources. The PCMTF can serve as one measurement for evaluating an imperfect lithography lens and guiding further improvement of lens performance.
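A small sketch of the contrast measurement that underlies any MTF determination: the Michelson modulation of an imaged line-pair profile, normalized by the object's modulation, gives the MTF at that spatial frequency. The synthetic profile and names are illustrative assumptions; the partial-coherence treatment itself is not reproduced here:

import numpy as np

def modulation(profile):
    # Michelson contrast of a 1-D intensity profile across line pairs.
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

def mtf_at_frequency(image_profile, object_modulation=1.0):
    # MTF at one spatial frequency: image modulation over object modulation.
    return modulation(image_profile) / object_modulation

# e.g. a sinusoidal line-pair image after blurring by lens plus illumination:
x = np.linspace(0.0, 10.0, 1000)
blurred = 0.5 + 0.3 * np.sin(2.0 * np.pi * x)
print(mtf_at_frequency(blurred))   # 0.6 for a unit-modulation object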
Analytical and multibody modeling for the power analysis of standing jumps.
Palmieri, G; Callegari, M; Fioretti, S
2015-01-01
Two methods for the power analysis of standing jumps are proposed and compared in this article. The first method is based on a simple analytical formulation which requires as input the coordinates of the center of gravity at three specified instants of the jump. The second method is based on a multibody model that simulates the jump, processing the data obtained by a three-dimensional (3D) motion capture system and the dynamometric measurements obtained by the force platforms. The multibody model is developed with OpenSim, an open-source software package which provides tools for the kinematic and dynamic analysis of 3D human body models. The study focuses on two of the typical tests used to evaluate the muscular activity of the lower limbs: the countermovement jump and the standing long jump. The comparison between the results obtained by the two methods confirms that the proposed analytical formulation is correct and represents a simple tool suitable for a preliminary analysis of the total mechanical work and the mean power exerted in standing jumps.
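A generic energy-balance reconstruction of what an analytical method of this kind can look like for the countermovement jump, using the center-of-gravity height at three instants (lowest crouch h0, takeoff h1, apex h2) plus the push-off duration; this is a hedged sketch of the reasoning, not the authors' exact formulation:

import math

def jump_work_power(m, h0, h1, h2, t_push, g=9.81):
    # m: body mass (kg); h0, h1, h2: CoG heights (m); t_push: push-off time (s).
    v_takeoff = math.sqrt(2.0 * g * (h2 - h1))   # apex reached with zero vertical speed
    work = m * g * (h1 - h0) + 0.5 * m * v_takeoff**2
    return work, work / t_push                   # total work (J), mean power (W)

print(jump_work_power(m=75.0, h0=0.80, h1=1.05, h2=1.45, t_push=0.30))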
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
In Search of a Pony: Sources, Methods, Outcomes, and Motivated Reasoning.
Stone, Marc B
2018-05-01
It is highly desirable to be able to evaluate the effect of policy interventions. Such evaluations should have expected outcomes based upon sound theory and be carefully planned, objectively evaluated and prospectively executed. In many cases, however, assessments originate with investigators' poorly substantiated beliefs about the effects of a policy. Instead of designing studies that test falsifiable hypotheses, these investigators adopt methods and data sources that serve as little more than descriptions of these beliefs in the guise of analysis. Interrupted time series analysis is one of the most popular forms of analysis used to present these beliefs. It is intuitively appealing but, in most cases, it is based upon false analogies, fallacious assumptions and analytical errors.
Analytic Methods Used in Quality Control in a Compounding Pharmacy.
Allen, Loyd V
2017-01-01
Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
CheapStat: An Open-Source, “Do-It-Yourself” Potentiostat for Analytical and Educational Applications
Rowe, Aaron A.; Bonham, Andrew J.; White, Ryan J.; Zimmer, Michael P.; Yadgar, Ramsin J.; Hobza, Tony M.; Honea, Jim W.; Ben-Yaacov, Ilan; Plaxco, Kevin W.
2011-01-01
Although potentiostats are the foundation of modern electrochemical research, they have seen relatively little application in resource poor settings, such as undergraduate laboratory courses and the developing world. One reason for the low penetration of potentiostats is their cost, as even the least expensive commercially available laboratory potentiostats sell for more than one thousand dollars. An inexpensive electrochemical workstation could thus prove useful in educational labs, and increase access to electrochemistry-based analytical techniques for food, drug and environmental monitoring. With these motivations in mind, we describe here the CheapStat, an inexpensive (<$80), open-source (software and hardware), hand-held potentiostat that can be constructed by anyone who is proficient at assembling circuits. This device supports a number of potential waveforms necessary to perform cyclic, square wave, linear sweep and anodic stripping voltammetry. As we demonstrate, it is suitable for a wide range of applications ranging from food- and drug-quality testing to environmental monitoring, rapid DNA detection, and educational exercises. The device's schematics, parts lists, circuit board layout files, sample experiments, and detailed assembly instructions are available in the supporting information and are released under an open hardware license. PMID:21931613
Moskovets, Eugene; Misharin, Alexander; Laiko, Viktor; Doroshenko, Vladimir
2016-07-15
A comparative MS study was conducted on the analytical performance of two matrix-assisted laser desorption/ionization (MALDI) sources that operated at either low pressure (∼1 Torr) or at atmospheric pressure. In both cases, the MALDI sources were attached to a linear ion trap mass spectrometer equipped with a two-stage ion funnel. The obtained results indicate that the limits of detection, in the analysis of identical peptide samples, were much lower with the source operated slightly below 1 Torr. In the low-pressure (LP) MALDI source, ion signals were observed at a laser fluence considerably lower than that at which ion signals appeared in the atmospheric-pressure (AP) MALDI source. When near-threshold laser fluences were used to record MALDI MS spectra at 1 Torr and 750 Torr, the level of chemical noise at 1 Torr was much lower than that at AP. The dependency of the analyte ion signals on the accelerating field which dragged the ions from the MALDI plate to the MS analyzer is presented for the LP and AP MALDI sources. The study indicates that the laser fluence, background gas pressure, and field accelerating the ions away from the MALDI plate were the main parameters determining the ion yield, signal-to-noise (S/N) ratios, fragmentation of the analyte ions, and adduct formation in the LP and AP MALDI MS methods. The presented results can be helpful for a deeper insight into the mechanisms responsible for ion formation in MALDI. Copyright © 2016 Elsevier Inc. All rights reserved.
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2010 CFR
2010-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F.
2011-12-01
Almost all natural phenomena on Earth are highly nonlinear; even simplified descriptions of nature usually end up as nonlinear partial differential equations. The advection-diffusion-reaction (ADR) transport equation is pivotal in atmospheric sciences and water quality. Because this nonlinear equation must be solved numerically for practical purposes, academics and engineers rely heavily on numerical codes, and those codes require verification before they are applied across science and engineering. Model verification is a mathematical procedure whereby a numerical code is checked to assure that the governing equation is solved as described in the design document. CFD verification is not a straightforward, well-defined process: only a complete test suite can uncover all the limitations and bugs, and results must be assessed to distinguish bug-induced defects from the innate limitations of a numerical scheme. As Roache (2009) said, numerical verification is a state-of-the-art procedure; sometimes novel tricks work out. This study conveys a synopsis of the experience we gained during a comprehensive verification process for a transport solver. A test suite was designed, including unit tests and algorithmic tests, layered in complexity along several dimensions from simple to complex. Acceptance criteria were defined for the desired capabilities of the transport code, such as order of accuracy, mass conservation, handling of stiff source terms, spurious oscillation, and initial shape preservation. First, a mesh convergence study, the main craft of verification, was performed; to that end, analytical solutions of the ADR equation were gathered and a new solution was derived. In more general cases, the lack of an analytical solution can be overcome through Richardson extrapolation and manufactured solutions. Two bugs that remained concealed during the mesh convergence study were then uncovered through the method of false injection and visualization of the results. Symmetry had a dual role: one bug was hidden by the symmetric nature of a test (it was detected afterward using artificial false injection), while self-symmetry was used to design a new test in a case where the analytical solution of the ADR equation was unknown. Assisting subroutines were designed to check and post-process mass conservation and oscillatory behavior. Finally, the capability of the solver was checked for stiff reaction source terms. The test suite was not only a decent tool for error detection but also provided thorough feedback on the ADR solver's limitations; such information is the crux of rigorous numerical modeling for anyone dealing with surface/subsurface pollution transport.
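A minimal sketch of the mesh-convergence bookkeeping at the heart of such a study: the observed order of accuracy computed from errors measured against an analytical (or manufactured) solution on systematically refined grids. The error values here are illustrative:

import math

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    # Observed order of accuracy p from two grid levels:
    # p = log(e_coarse / e_fine) / log(r).
    return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

# Errors on grids of spacing dx, dx/2, dx/4 against the exact solution:
errors = [4.0e-3, 1.1e-3, 2.9e-4]
for e_c, e_f in zip(errors, errors[1:]):
    print(observed_order(e_c, e_f))   # should approach the scheme's formal order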
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding a corrected acoustic propagation time delay and path. The corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct the sound source signal in 3D space in an environment with airflow, instead of numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between the AP-TR and time-domain beamforming in reconstructing the sound source signal is also discussed.
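A hedged sketch of the passive back-propagation step at the core of such a reconstruction: advance each microphone signal by its propagation delay and average. The shear-flow correction that would supply corrected_delays is the paper's contribution and is not reproduced; the integer-sample shift is a deliberate simplification:

import numpy as np

def passive_tr_reconstruct(signals, corrected_delays, fs):
    # signals: (n_mics, n_samples) array of microphone records;
    # corrected_delays: flow-corrected travel times (s), one per microphone;
    # fs: sampling rate (Hz). Returns the delay-and-sum source estimate.
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for sig, tau in zip(signals, corrected_delays):
        shift = int(round(tau * fs))              # advance by the travel time
        out[:n_samples - shift] += sig[shift:]
    return out / n_mics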
Designing a Marketing Analytics Course for the Digital Age
ERIC Educational Resources Information Center
Liu, Xia; Burns, Alvin C.
2018-01-01
Marketing analytics is receiving great attention because of evolving technology and the radical changes in the marketing environment. This study aims to assist the design and implementation of a marketing analytics course. We assembled a rich data set from four sources: business executives, 400 employers' job postings, one million tweets about…
Introducing Text Analytics as a Graduate Business School Course
ERIC Educational Resources Information Center
Edgington, Theresa M.
2011-01-01
Text analytics refers to the process of analyzing unstructured data from documented sources, including open-ended surveys, blogs, and other types of web dialog. Text analytics has enveloped the concept of text mining, an analysis approach influenced heavily from data mining. While text mining has been covered extensively in various computer…
Shelley, Jacob T; Hieftje, Gary M
2010-04-01
The recent development of ambient desorption/ionization mass spectrometry (ADI-MS) has enabled fast, simple analysis of many different sample types. The ADI-MS sources have numerous advantages, including little or no required sample pre-treatment, simple mass spectra, and direct analysis of solids and liquids. However, problems of competitive ionization and limited fragmentation require sample-constituent separation, high mass accuracy, and/or tandem mass spectrometry (MS/MS) to detect, identify, and quantify unknown analytes. To maintain the inherent high throughput of ADI-MS, it is essential for the ion source/mass analyzer combination to measure fast transient signals and provide structural information. In the current study, the flowing atmospheric-pressure afterglow (FAPA) ionization source is coupled with a time-of-flight mass spectrometer (TOF-MS) to analyze fast transient signals (<500 ms FWHM). It was found that gas chromatography (GC) coupled with the FAPA source resulted in a reproducible (<5% RSD) and sensitive (detection limits of <6 fmol for a mixture of herbicides) system with analysis times of ca. 5 min. Introducing analytes to the FAPA as a transient was also shown to significantly reduce matrix effects caused by competitive ionization, by minimizing the number and amount of constituents introduced into the ionization source. Additionally, MS/MS with FAPA-TOF-MS, enabling analyte identification, was performed via first-stage collision-induced dissociation (CID). Lastly, molecular and structural information was obtained across a fast transient peak by modulating the conditions that caused the first-stage CID.
NASA Technical Reports Server (NTRS)
Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.
1976-01-01
Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented and it is shown that, provided the sound source is well-defined and flow environment is known, conventional infinite duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.
Simulation of transvertron high power microwave sources
NASA Astrophysics Data System (ADS)
Sullivan, Donald J.; Walsh, John E.; Arman, M. Joseph; Godfrey, Brendan B.
1989-07-01
The transvertron oscillator or amplifier is a new and efficient type of intense relativistic electron-beam-driven microwave radiation source. In the m = 0 axisymmetric version, it consists of single or multiple cylindrical cavities driven at one of the TM(0np) resonances by a high-voltage, low-impedance electron beam. There is no applied magnetic field, and the oscillatory transverse motion acquired by the axially-injected electron beam is an essential part of the drive mechanism. The transvertron theory was systematically tested for a wide range of parameters and two possible applications. The simulations were designed to verify the theoretical predictions, assess the transvertron as a possible source of intense microwave radiation, and study its potential as a microwave amplifier. Numerical results agree well in all regards with the analytical theory. Simulations were carried out in two dimensions using CCUBE, with the exception of radial loading cases, where the three-dimensional code SOS was required.
Kalpathy-Cramer, Jayashree; Awan, Musaddiq; Bedrick, Steven; Rasch, Coen R N; Rosenthal, David I; Fuller, Clifton D
2014-02-01
Modern radiotherapy requires accurate region of interest (ROI) inputs for plan optimization and delivery. Target delineation, however, remains operator-dependent and potentially serves as a major source of treatment delivery error. In order to optimize this critical, yet observer-driven process, a flexible web-based platform for individual and cooperative target delineation analysis and instruction was developed in order to meet the following unmet needs: (1) an open-source/open-access platform for automated/semiautomated quantitative interobserver and intraobserver ROI analysis and comparison, (2) a real-time interface for radiation oncology trainee online self-education in ROI definition, and (3) a source for pilot data to develop and validate quality metrics for institutional and cooperative group quality assurance efforts. The resultant software, Target Contour Testing/Instructional Computer Software (TaCTICS), developed using Ruby on Rails, has since been implemented and proven flexible, feasible, and useful in several distinct analytical and research applications.
NASA Astrophysics Data System (ADS)
Singh, Manjeet; Singh, Jaswant; Singh, Baljit; Ghanshyam, C.
2016-11-01
The aim of this study is to quantify the effect of finite spectral bandwidth on laser absorption spectroscopy for a wide-band laser source. Experimental analysis reveals that the extinction coefficient of an analyte is affected by the spectral bandwidth of the source, which may lead to erroneous conclusions. An approximate mathematical model has been developed for optical intensities having a Gaussian line shape, which includes the impact of the source's spectral bandwidth in the equation for spectroscopic absorption. This is done by introducing suitable first-order and second-order bandwidth approximations into the Beer-Lambert law for the finite-bandwidth case. The derived expressions were validated by spectroscopic analysis with a higher spectral bandwidth (SBW) on a test sample, Rhodamine B. The concentrations calculated using the proposed approximation were in significant agreement with the true values when compared with those calculated using the conventional approach.
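A hedged reconstruction of the kind of bandwidth correction described above, assuming a normalized Gaussian source line shape S(\lambda) of width \sigma centered at \lambda_0; this is a generic second-order Taylor expansion, not necessarily the authors' exact expressions:

A_{\mathrm{meas}} = -\log_{10}\int S(\lambda)\,10^{-\varepsilon(\lambda) l c}\,d\lambda \;\approx\; \varepsilon(\lambda_0)\,l\,c + \frac{\sigma^{2}}{2}\Big[\varepsilon''(\lambda_0)\,l\,c - \ln 10\,\big(\varepsilon'(\lambda_0)\,l\,c\big)^{2}\Big]

At an absorption maximum \varepsilon'(\lambda_0) = 0 and \varepsilon''(\lambda_0) < 0, so the measured absorbance falls below the ideal Beer-Lambert value \varepsilon(\lambda_0) l c, the familiar flattening of calibration curves at large bandwidth.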
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
40 CFR 600.108-08 - Analytical gases.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.108-08 Analytical gases. The analytical gases for all fuel economy testing...
40 CFR 600.108-08 - Analytical gases.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.108-08 Analytical gases. The analytical gases for all fuel economy testing...
NASA Astrophysics Data System (ADS)
Bagnardi, M.; Hooper, A. J.
2017-12-01
Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform slip, embedded in a isotropic elastic half-space. However, the software architecture allows the user to easily add any other analytical or numerical forward models to calculate displacements at the surface. GBIS is delivered with a detailed user manual and three synthetic datasets for testing and practical training.
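A minimal sketch, in Python rather than GBIS's Matlab, of the sampler family the abstract names: Metropolis-Hastings with a proposal step size adapted toward a target acceptance rate. The tuning constants and the toy posterior are illustrative assumptions, not GBIS's actual implementation:

import numpy as np

def mh_adaptive(log_post, x0, n_iter=20000, target_acc=0.25, seed=0):
    # Random-walk Metropolis-Hastings; every 500 steps the proposal step
    # size is scaled toward the target acceptance rate.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    step = np.full(x.size, 0.1)
    lp = log_post(x)
    chain, accepted = [], 0
    for i in range(1, n_iter + 1):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        if i % 500 == 0:
            step *= np.exp(accepted / 500 - target_acc)   # grow if over-accepting
            accepted = 0
        chain.append(x.copy())
    return np.array(chain)

# e.g. sampling a 2-D standard-normal "posterior":
chain = mh_adaptive(lambda p: -0.5 * float(np.sum(p**2)), x0=[1.0, -1.0])
print(chain[10000:].mean(axis=0), chain[10000:].std(axis=0))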
NASA Astrophysics Data System (ADS)
Hor, Yew Fong
2002-08-01
This thesis involves the design, fabrication and characterization of an integrated optical waveguide sensor. Prior to fabrication, the design parameters of the waveguide need to be determined and optimized. Waveguide parameters such as the waveguide dimension and the refractive indices of the core and cladding are obtained from the single-mode cutoff frequency, calculated using either analytical or numerical methods. In this thesis, details of the analytical calculations used to determine the cutoff frequency in terms of the waveguide parameters are presented. The method discussed here is Marcatili's approximation. The purpose is to solve the scalar wave equation derived from Maxwell's equations, because it describes the mode properties inside the waveguides. The Finite Element Method is used to simulate the electric and magnetic fields inside the waveguides and to determine the propagation characteristics of optical waveguides; this method is suited to problems involving complicated geometries and a variable index of refraction. Fabrication of the integrated Mach-Zehnder interferometer sensor involves several important standard processes such as Chemical Vapor Deposition (CVD) for thin-film fabrication, photolithography for mask transfer, and etching for ridge waveguide formation. The detailed fabrication procedures of the tested Mach-Zehnder interferometer sensors are discussed. After completion of the sensor fabrication processes, characterizations were carried out separately for the SiO2 and PSG thin films, the waveguides, and the Y-junction. The waveguides were analyzed to make sure that the sensors work as expected. The experimental testing of the separate waveguide portions of the first batch of integrated Mach-Zehnder interferometer (MZI) sensors is described; these testing procedures were also performed on subsequently fabricated batches of the integrated MZI sensors until optimum performance was achieved. A new concept has been proposed for chemical sensing applications. The novelty of the approach is mainly based on utilizing a multi-wavelength or broadband source instead of a single-wavelength input to the integrated MZI. The shifting of the output spectra resulting from the interference demonstrates the ability of the MZI to analyze different concentrations of a chemical analyte. The sensitivity of the sensor, determined from the plot of intensity versus concentration, is around 0.013 (%ml)-1 and 0.007 (%ml)-1 for the white light source and the 1.5 μm broadband source, respectively, while the lowest detectable concentration of ethanol is around 8% using an intensity variation method and 0.6% using a peak-wavelength variation method.
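A hedged sketch of the sensing principle described: the two-beam interference transfer function of an MZI and the spectral peak shift produced by an analyte-induced change in the effective index of the sensing arm. All parameter values are illustrative, not the thesis's device values:

import numpy as np

def mzi_transmission(wavelengths, delta_n_eff, sens_length):
    # Normalized MZI output for a phase imbalance 2*pi*dn*L/lambda.
    dphi = 2.0 * np.pi * delta_n_eff * sens_length / wavelengths
    return 0.5 * (1.0 + np.cos(dphi))

wl = np.linspace(1.40e-6, 1.60e-6, 2001)          # broadband source window, m
for dn in (1.00e-3, 1.05e-3):                     # analyte-induced index change
    peak = wl[np.argmax(mzi_transmission(wl, dn, sens_length=6e-3))]
    print(f"dn = {dn:.2e} -> transmission peak near {peak * 1e9:.0f} nm")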
Concurrence of big data analytics and healthcare: A systematic review.
Mehta, Nishita; Pandit, Anil
2018-06-01
The application of Big Data analytics in healthcare has immense potential for improving the quality of care, reducing waste and error, and reducing the cost of care. This systematic review of the literature aims to determine the scope of Big Data analytics in healthcare, including its applications and the challenges in its adoption, and to identify strategies to overcome those challenges. A systematic search of articles was carried out on five major scientific databases: ScienceDirect, PubMed, Emerald, IEEE Xplore and Taylor & Francis. Articles on Big Data analytics in healthcare published in the English-language literature from January 2013 to January 2018 were considered; descriptive articles and usability studies of Big Data analytics in healthcare and medicine were selected. Two reviewers independently extracted information on definitions of Big Data analytics; sources and applications of Big Data analytics in healthcare; and challenges and strategies to overcome the challenges in healthcare. A total of 58 articles were selected per the inclusion criteria and analyzed. The analyses of these articles found that: (1) researchers lack consensus about the operational definition of Big Data in healthcare; (2) Big Data in healthcare comes from internal sources within hospitals or clinics as well as external sources including government, laboratories, pharma companies, data aggregators, medical journals, etc.; (3) natural language processing (NLP) is the most widely used Big Data analytical technique for healthcare, and most of the processing tools used for analytics are based on Hadoop; (4) Big Data analytics finds application in clinical decision support, optimization of clinical operations and reduction of the cost of care; and (5) the major challenge in the adoption of Big Data analytics is the non-availability of evidence of its practical benefits in healthcare. This review unveils a paucity of information on evidence of real-world use of Big Data analytics in healthcare, because the usability studies have taken only a qualitative approach that describes potential benefits without quantitative study. Also, the majority of the studies were from developed countries, which brings out the need to promote research on healthcare Big Data analytics in developing countries. Copyright © 2018 Elsevier B.V. All rights reserved.
Trace metal speciation in natural waters: Computational vs. analytical
Nordstrom, D. Kirk
1996-01-01
Improvements in the field sampling, preservation, and determination of trace metals in natural waters have made many analyses more reliable and less affected by contamination. The speciation of trace metals, however, remains controversial. Chemical model speciation calculations do not necessarily agree with voltammetric, ion exchange, potentiometric, or other analytical speciation techniques. When metal-organic complexes are important, model calculations are not usually helpful and on-site analytical separations are essential. Many analytical speciation techniques have serious interferences and only work well for a limited subset of water types and compositions. A combined approach to the evaluation of speciation could greatly reduce these uncertainties. The approach proposed would be to (1) compare and contrast different analytical techniques with each other and with computed speciation, (2) compare computed trace metal speciation with reliable measurements of solubility, potentiometry, and mean activity coefficients, and (3) compare different model calculations with each other for the same set of water analyses, especially where supplementary data on speciation already exist. A comparison and critique of analytical with chemical model speciation for a range of water samples would delineate the useful range and limitations of these different approaches to speciation. Both model calculations and analytical determinations have useful and different constraints on the range of possible speciation such that they can provide much better insight into speciation when used together. Major discrepancies in the thermodynamic databases of speciation models can be evaluated with the aid of analytical speciation, and when the thermodynamic models are highly consistent and reliable, the sources of error in the analytical speciation can be evaluated. Major thermodynamic discrepancies also can be evaluated by simulating solubility and activity coefficient data and testing various chemical models for their range of applicability. Until a comparative approach such as this is taken, trace metal speciation will remain highly uncertain and controversial.
NASA Technical Reports Server (NTRS)
Walker, A. B. C., Jr.
1975-01-01
Techniques for the study of the solar corona are reviewed as an introduction to a discussion of modifications required for the study of cosmic sources. Spectroscopic analysis of individual sources and the interstellar medium is considered. The latter was studied via analysis of its effect on the spectra of selected individual sources. The effects of various characteristics of the ISM, including the presence of grains, molecules, and ionization, are first discussed, and the development of ISM models is described. The expected spectral structure of individual cosmic sources is then reviewed with emphasis on supernovae remnants and binary X-ray sources. The observational and analytical requirements imposed by the characteristics of these sources are identified, and prospects for the analysis of abundances and the study of physical parameters within them are assessed. Prospects for the spectroscopic study of other classes of X-ray sources are also discussed.
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
Analytic solution of magnetic induction distribution of ideal hollow spherical field sources
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-12-01
The Halbach-type hollow spherical permanent magnet arrays (HSPMA) are volume-compacted, energy-efficient field sources capable of producing multi-Tesla fields in the cavity of the array, and they have attracted intense interest in many practical applications. Here, we present analytical solutions for the magnetic induction of the ideal HSPMA in the entire space: outside the array, within the cavity of the array, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve the Poisson and Laplace equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing an optimized one.
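For orientation, the classical result for the ideal Halbach ("magic") sphere, quoted from the standard magnet literature rather than derived from the paper's full solution: the cavity field is uniform and set by the remanence B_r and the radius ratio,

B_{\mathrm{cavity}} = \frac{4}{3}\, B_r \ln\!\left( \frac{R_o}{R_i} \right),

so the attainable field grows logarithmically with the outer-to-inner radius ratio R_o / R_i, which is why multi-Tesla fields are possible from a compact volume.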
Verification of a SEU model for advanced 1-micron CMOS structures using heavy ions
NASA Technical Reports Server (NTRS)
Cable, J. S.; Carter, J. R.; Witteles, A. A.
1986-01-01
Modeling and test results are reported for 1 micron CMOS circuits. Analytical predictions are correlated with experimental data, and sensitivities to process and design variations are discussed. Unique features involved in predicting the SEU performance of these devices are described. The results show that the critical charge for upset exhibits a strong dependence on pulse width for very fast devices, and upset predictions must factor in the pulse shape. Acceptable SEU error rates can be achieved for a 1 micron bulk CMOS process. A thin retrograde well provides complete SEU immunity for N-channel hits at normal incidence angle. Source interconnect resistance can be an important parameter in determining upset rates, and Cf-252 testing can be a valuable tool for cost-effective SEU testing.
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
Selection and application of microbial source tracking tools for water-quality investigations
Stoeckel, Donald M.
2005-01-01
Microbial source tracking (MST) is a complex process that includes many decision-making steps. Once a contamination problem has been defined, the potential user of MST tools must thoroughly consider study objectives before deciding upon a source identifier, a detection method, and an analytical approach to apply to the problem. Regardless of which MST protocol is chosen, underlying assumptions can affect the results and interpretation. It is crucial to incorporate tests of those assumptions in the study quality-control plan to help validate results and facilitate interpretation. Detailed descriptions of MST objectives, protocols, and assumptions are provided in this report to assist in selection and application of MST tools for water-quality investigations. Several case studies illustrate real-world applications of MST protocols over a range of settings, spatial scales, and types of contamination. Technical details of many available source identifiers and detection methods are included as appendixes. By use of this information, researchers should be able to formulate realistic expectations for the information that MST tools can provide and, where possible, successfully execute investigations to characterize sources of fecal contamination to resource waters.
NASA Astrophysics Data System (ADS)
Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.
1995-01-01
The diffusion approximation, which is often used to describe the propagation of light in biological tissues, is only good at a sufficient distance from sources and boundaries. Light-tissue interaction is, however, most intense in the region close to the source, so it is of interest to study this region more closely. Although scattering in biological tissues is predominantly forward peaked, explicit solutions to the transport equation have only been obtained in the case of isotropic scattering. In particular, for the case of an isotropic point source in an unbounded, isotropically scattering medium the solution is well known. We show that this problem can also be solved analytically if the scattering is no longer isotropic, while everything else remains the same.
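For context, the standard diffusion-approximation result for an isotropic point source of power P in an infinite homogeneous medium, in common tissue-optics notation (absorption coefficient \mu_a, reduced scattering coefficient \mu_s'):

\Phi(r) = \frac{P}{4 \pi D r}\, e^{-\mu_{\mathrm{eff}} r},
\qquad D = \frac{1}{3 (\mu_a + \mu_s')},
\qquad \mu_{\mathrm{eff}} = \sqrt{3 \mu_a (\mu_a + \mu_s')}.

Within roughly one transport mean free path of the source this expression becomes unreliable, which is precisely the near-source regime the transport-equation treatment above addresses.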
Newspaper Reading among College Students in Development of Their Analytical Ability
ERIC Educational Resources Information Center
Kumar, Dinesh
2009-01-01
The study investigated the newspaper reading among college students in development of their analytical ability. Newspapers are one of the few sources of information that are comprehensive, interconnected and offered in one format. The main objective of the study was to find out the development of the analytical ability among college students by…
Early Alert of Academically At-Risk Students: An Open Source Analytics Initiative
ERIC Educational Resources Information Center
Jayaprakash, Sandeep M.; Moody, Erik W.; Lauría, Eitel J. M.; Regan, James R.; Baron, Joshua D.
2014-01-01
The Open Academic Analytics Initiative (OAAI) is a collaborative, multi-year grant program aimed at researching issues related to the scaling up of learning analytics technologies and solutions across all of higher education. The paper describes the goals and objectives of the OAAI, depicts the process and challenges of collecting, organizing and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrath, Christopher A.
2015-04-01
The presence of radioactive xenon isotopes indicates that fission events have occurred, and is used to help enforce the Comprehensive Test Ban Treaty. Idaho National Laboratory (INL) produces 135Xe, 133mXe, 133Xe, and 131mXe standards used for the calibration and testing of collection equipment and analytical techniques used to monitor radioxenon emissions. At INL, xenon is produced and collected as one of several spontaneous fission products from a 252Cf source. Further chromatographic purification of the fission gases ensures the separation of the xenon fraction for selective collection. An explanation of the fission gas collection, separation and purification is presented. Additionally, the range of 135Xe to 133Xe ratios that can be isolated is explained. This is an operational update on the work introduced previously, now that the system is in operation and has been recharged with a second 252Cf source.
NASA Astrophysics Data System (ADS)
Kralik, Martin
2017-04-01
The application of nitrogen and oxygen isotopes in nitrate allows, under favourable circumstances, the identification of potential sources such as precipitation, chemical fertilisers, and manure or sewage water. Without an additional tracer, distinguishing nitrate from manure from that in sewage water remains difficult, and even the application of boron isotopes cannot always avoid ambiguous interpretation. The Environment Agency Austria therefore developed a new multi-parameter indicator test to allow the identification and quantification of pollution by domestic sewage water. The test analyses 8 substances well known to occur in sewage water: acesulfame and sucralose (two artificial, calorie-free sweeteners), benzotriazole and tolyltriazole (two industrial chemicals/corrosion inhibitors), and metoprolol, sotalol, carbamazepine and the metabolite 10,11-dihydro-10,11-dihydroxycarbamazepine (pharmaceuticals) [1]. These substances are polar, their degradation in the aquatic system by microbiological processes is not documented, and they do not occur naturally, which makes them ideal tracers. The test can detect wastewater in the analysed water sample down to 0.1%. The coupling of these analytical tests helps to identify the nitrogen sources in the groundwater body Marchfeld, east of Vienna, with high confidence. In addition, the results allow a reasonable quantification of nitrogen sources from different types of fertilizers, as well as of sewage-water contributions close to villages and in wells recharged by bank filtration. Recent investigations of groundwater in selected wells in Marchfeld [2] indicated a clear nitrogen contribution by wastewater leakages (sewers or septic tanks) to the total nitrogen budget; however, this contribution is shrinking, and the main source still comes from agricultural activities. [1] Humer, F.; Weiss, S.; Reinnicke, S.; Clara, M.; Grath, J.; Windhofer, G. (2013): Multi parametrical indicator test for urban wastewater influence. EGU General Assembly 2013, 7-12 April 2013, Vienna, Austria, id. EGU2013-5332. [2] Kralik, M.; Humer, F. & Grath, J. (2008): Pilotprojekt Grundwasseralter: Herkunftsanalyse von Nitrat mittels Stickstoff-, Sauerstoff-, Schwefel- und Kohlenstoffisotopen. 57 pp., Environment Agency Austria/Ministry of Agriculture, Forestry, Environment and Water Management, Vienna.
An Improved Method of AGM for High Precision Geolocation of SAR Images
NASA Astrophysics Data System (ADS)
Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.
2018-05-01
To take full advantage of SAR images, the high-precision geolocation of each image must be obtained. During geometric correction, precise image geolocation is important to ensure the accuracy of the correction and to extract effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims at solving the Range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocations with positions determined from a high-precision orthophoto indicates that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and some recommendations for improving image location accuracy in future spaceborne SARs are given.
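A hedged sketch of the Range-Doppler system that any analytical geolocation method in this family must solve per pixel, with deliberate simplifications that are not part of the paper (a spherical Earth instead of an ellipsoid, and a zero Doppler centroid); the satellite state and wavelength are invented for illustration:

import numpy as np
from scipy.optimize import fsolve

RE = 6371000.0   # spherical-Earth radius (m); the real model uses an ellipsoid

def rd_equations(p, sat_pos, sat_vel, slant_range, f_dc, wavelength):
    # The unknown ground point p must satisfy the range equation, the
    # Doppler equation, and the Earth-surface constraint simultaneously.
    d = np.asarray(p) - sat_pos
    r = np.linalg.norm(d)
    return [
        r - slant_range,                                     # range
        2.0 * np.dot(sat_vel, d) / (wavelength * r) - f_dc,  # Doppler
        np.linalg.norm(p) - RE,                              # on the surface
    ]

sat_pos = np.array([0.0, 0.0, RE + 800e3])    # satellite position (m)
sat_vel = np.array([7500.0, 0.0, 0.0])        # satellite velocity (m/s)
args = (sat_pos, sat_vel, 850e3, 0.0, 0.056)  # range, Doppler centroid, wavelength
ground = fsolve(rd_equations, x0=[10e3, 300e3, 6.36e6], args=args)
print(ground, rd_equations(ground, *args))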
Quantitative Detection of Horse Contamination in Cooked Meat Products by ELISA.
Thienes, Cortlandt P; Masiri, Jongkit; Benoit, Lora A; Barrios-Lopez, Brianda; Samuel, Santosh A; Cox, David P; Dobritsa, Anatoly P; Nadala, Cesar; Samadpour, Mansour
2018-05-01
Concerns about the contamination of meat products with horse meat and new regulations for the declaration of meat adulterants have highlighted the need for a rapid test to detect horse meat adulteration. To address this need, Microbiologique, Inc., has developed a sandwich ELISA that can quantify the presence of horse meat down to 0.1% (w/w) in cooked pork, beef, chicken, goat, and lamb meats. This horse meat authentication ELISA has an analytical sensitivity of 0.000030 and 0.000046% (w/v) for cooked and autoclaved horse meat, respectively, and an analytical range of quantitation of 0.05-0.8% (w/v) in the absence of other meats. The assay is rapid and can be completed in 1 h and 10 min. Moreover, the assay is specific for cooked horse meat and does not demonstrate any cross-reactivity with xenogeneic cooked meat sources.
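A hedged sketch of how a sandwich-ELISA readout is typically converted into a concentration: fit a four-parameter logistic (4PL) standard curve to the calibrators, then invert it for unknowns. The standards and absorbances below are invented for illustration, not the kit's calibration data:

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero dose; d: response at infinite dose;
    # c: inflection concentration; b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])    # horse-meat standards, % (w/v)
od = np.array([0.12, 0.25, 0.55, 1.05, 1.60])  # hypothetical absorbances
popt, _ = curve_fit(four_pl, conc, od, p0=(0.05, 1.0, 0.3, 2.0),
                    bounds=(0.0, [5.0, 10.0, 10.0, 10.0]))

def invert_4pl(y, a, b, c, d):
    # Back-calculate the concentration that produces response y.
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(invert_4pl(0.80, *popt))   # unknown sample's horse-meat content, % (w/v)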
Potential microbial risk factors related to soil amendments and irrigation water of potato crops.
Selma, M V; Allende, A; López-Gálvez, F; Elizaquível, P; Aznar, R; Gil, M I
2007-12-01
This study assesses the potential microbial risk factors related to the use of soil amendments and irrigation water on potato crops, cultivated in one traditional and two intensive farms during two harvest seasons. The natural microbiota and potentially pathogenic micro-organisms were evaluated in the soil amendment, irrigation water, soil and produce. Uncomposted amendments and residual and creek water samples showed the highest microbial counts. The microbial load of potatoes harvested in spring was similar among the tested farms despite the diverse microbial levels of Listeria spp. and faecal coliforms in the potential risk sources. However, differences in total coliform load of potato were found between farms cultivated in the autumn. Immunochromatographic rapid tests and the BAM's reference method (Bacteriological Analytical Manual; AOAC International) were used to detect Escherichia coli O157:H7 from the potential risk sources and produce. Confirmation of the positive results by polymerase chain reaction procedures showed that the immunochromatographic assay was not reliable as it led to false-positive results. The potentially pathogenic micro-organisms of soil amendment, irrigation water and soil samples changed with the harvest seasons and the use of different agricultural practices. However, the microbial load of the produce was not always influenced by these risk sources. Improvements in environmental sample preparation are needed to avoid interferences in the use of immunochromatographic rapid tests. The potential microbial risk sources of fresh produce should be regularly controlled using reliable detection methods to guarantee their microbial safety.
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio, and has made progress in single channels, excluding text. Visual analytics has focused on the user's interaction with data during the analytic process, plus the fundamental mathematics, and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is combining multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
2015-05-01
application, while the simulated PLC software is the open-source ModbusPal Java application. When queried using the Modbus TCP protocol, ModbusPal reports ... and programmable logic controller (PLC) components. The HMI and PLC components were instantiated with software and installed in multiple virtual ... creating and capturing HMI-PLC network traffic over a 24-h period in the virtualized network and inspecting the packets for errors. Test the
NASA Astrophysics Data System (ADS)
Kokorina, Alina A.; Goryacheva, Irina Y.; Sapelkin, Andrei V.; Sukhorukov, Gleb B.
2018-04-01
Photoluminescent (PL) carbon nanoparticles (CNPs) have been synthesized by one-step microwave irradiation of an aqueous solution of sodium dextran sulfate (DSS) as the sole carbon source. The microwave (MW) method is very simple and cheap, and it provides fast synthesis of CNPs. We varied the synthesis time to obtain highly luminescent CNPs. The synthesized CNPs exhibit excitation-dependent photoluminescence, and the final aqueous CNP solution has a blue-green luminescence. The CNPs have low cytotoxicity and good photostability, and are potentially suitable candidates for bioimaging, analysis, and analytical tests.
Noise characteristics of upper surface blown configurations: Analytical Studies
NASA Technical Reports Server (NTRS)
Reddy, N. N.; Tibbetts, J. G.; Pennock, A. P.; Tam, C. K. W.
1978-01-01
Noise and flow results for upper surface blown (USB) configurations were analyzed, and the dominant noise source mechanisms were identified from experimental data. From far-field noise data covering various geometric and operational parameters, an empirical noise prediction program was developed and evaluated by comparing predicted results with experimental data from other tests. USB aircraft compatibility studies were conducted using the described noise prediction program and a cruise performance data base. A final design aircraft was selected, and a theory was developed for the noise from the trailing-edge wake, modeling it as a highly sheared layer.
NASA Astrophysics Data System (ADS)
Sarmah, Ratan; Tiwari, Shubham
2018-03-01
An analytical solution is developed for predicting two-dimensional transient seepage into a ditch drainage network receiving water from a non-uniform steady ponding field at the soil surface, under the influence of a source/sink in the flow domain. The flow domain is assumed to be saturated, homogeneous and anisotropic, with finite extents in the horizontal and vertical directions. The drains are assumed to be vertical and to penetrate to the impervious layer. The water levels in the drains are unequal and invariant with time. The flow field is also assumed to be under the continuous influence of a time- and space-dependent arbitrary source/sink term. The correctness of the proposed model is checked against a numerical code developed for this purpose and against an existing analytical solution for a simplified case. The study highlights the significance of the source/sink influence on subsurface flow. With the imposition of the source and sink terms in the flow domain, the pathlines and travel times of water particles deviate from their original values, and the side and top discharges to the drains are also strongly influenced by the source/sink terms. The travel times and pathlines of water particles are also observed to depend on the height of water in the ditches and on the location of the source/sink activation area.
A results-based process for evaluation of diverse visual analytics tools
NASA Astrophysics Data System (ADS)
Rubin, Gary; Berger, David H.
2013-05-01
With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.
NASA Technical Reports Server (NTRS)
Lopes, Leonard; Redonnet, Stephane; Imamura, Taro; Ikeda, Tomoaki; Zawodny, Nikolas; Cunha, Guilherme
2015-01-01
The usage of Computational Fluid Dynamics (CFD) in noise prediction has typically been a two-part process: accurately predicting the flow conditions in the near field, and then propagating the noise from the near field to the observer. Due to the increase in computing power and the cost benefit when weighed against wind tunnel testing, the usage of CFD to estimate the local flow field of complex geometrical structures has become more routine. Recently, the Benchmark problems in Airframe Noise Computation (BANC) workshops have provided a community focus on accurately simulating the local flow field near the body with various CFD approaches. However, to date, little effort has been devoted to assessing the impact of the propagation phase of noise prediction. This paper includes results from the BANC-III workshop, which explores variability in the propagation phase of CFD-based noise prediction. This includes two test cases: an analytical solution of a quadrupole source near a sphere and a computational solution around a nose landing gear. Agreement among three codes was very good for the analytic test case, but CFD-based noise predictions indicate that the propagation phase can introduce 3 dB or more of variability in noise predictions.
NASA Astrophysics Data System (ADS)
Barnsley, Lester C.; Carugo, Dario; Aron, Miles; Stride, Eleanor
2017-03-01
The aim of this study was to characterize the behaviour of superparamagnetic particles in magnetic drug targeting (MDT) schemes. A 3-dimensional mathematical model was developed, based on the analytical derivation of the trajectory of a magnetized particle suspended inside a fluid channel carrying laminar flow and in the vicinity of an external source of magnetic force. Semi-analytical expressions to quantify the proportion of captured particles, and their relative accumulation (concentration) as a function of distance along the wall of the channel were also derived. These were expressed in terms of a non-dimensional ratio of the relevant physical and physiological parameters corresponding to a given MDT protocol. The ability of the analytical model to assess magnetic targeting schemes was tested against numerical simulations of particle trajectories. The semi-analytical expressions were found to provide good first-order approximations for the performance of MDT systems in which the magnetic force is relatively constant over a large spatial range. The numerical model was then used to test the suitability of a range of different designs of permanent magnet assemblies for MDT. The results indicated that magnetic arrays that emit a strong magnetic force that varies rapidly over a confined spatial range are the most suitable for concentrating magnetic particles in a localized region. By comparison, commonly used magnet geometries such as button magnets and linear Halbach arrays result in distributions of accumulated particles that are less efficient for delivery. The trajectories predicted by the numerical model were verified experimentally by acoustically focusing magnetic microbeads flowing in a glass capillary channel, and optically tracking their path past a high field gradient Halbach array.
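The trajectory picture underlying such models is that, at low Reynolds number, a particle moves with the local fluid velocity plus a magnetophoretic drift in which Stokes drag balances the magnetic force. A minimal sketch of that idea follows, assuming a 2-D parabolic channel flow and an invented force field; it is not the authors' 3-D model or their capture-fraction expressions.

    import numpy as np

    ETA = 1e-3      # fluid viscosity [Pa s]
    A = 1e-6        # particle radius [m]
    H = 200e-6      # channel height [m]
    U_MAX = 1e-3    # centreline flow speed [m/s]

    def poiseuille(y):
        """Parabolic axial velocity profile across the channel."""
        return U_MAX * 4.0 * y * (H - y) / H**2

    def magnetic_force(x, y):
        """Hypothetical force field pulling particles toward the wall y = 0."""
        return np.array([0.0, -1e-12 * np.exp(-y / H)])  # [N]

    def trajectory(x0, y0, dt=1e-4, t_end=5.0):
        """Euler integration: velocity = flow + F_m / (6*pi*eta*a)."""
        drag = 6.0 * np.pi * ETA * A
        x, y = x0, y0
        path = [(x, y)]
        for _ in range(int(t_end / dt)):
            f = magnetic_force(x, y)
            x += (poiseuille(y) + f[0] / drag) * dt
            y += (f[1] / drag) * dt
            path.append((x, y))
            if y <= 0.0:      # particle reaches the wall: captured
                break
        return np.array(path)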
Ishibashi, Midori
2015-01-01
The cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and of drug efficacy. Providing high-quality laboratory tests is therefore mandatory. For adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, a portion of laboratory tests is done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference intervals are also important factors. It is concluded that, to overcome the problems arising from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.
Reproducibility of Interferon Gamma (IFN-γ) Release Assays. A Systematic Review
Tagmouti, Saloua; Slater, Madeline; Benedetti, Andrea; Kik, Sandra V.; Banaei, Niaz; Cattamanchi, Adithya; Metcalfe, John; Dowdy, David; van Zyl Smit, Richard; Dendukuri, Nandini
2014-01-01
Rationale: Interferon gamma (IFN-γ) release assays for latent tuberculosis infection result in a larger-than-expected number of conversions and reversions in occupational screening programs, and reproducibility of test results is a concern. Objectives: Knowledge of the relative contribution and extent of the individual sources of variability (immunological, preanalytical, or analytical) could help optimize testing protocols. Methods: We performed a systematic review of studies published by October 2013 on all potential sources of variability of commercial IFN-γ release assays (QuantiFERON-TB Gold In-Tube and T-SPOT.TB). The included studies assessed test variability under identical conditions and under different conditions (the latter both overall and stratified by individual sources of variability). Linear mixed effects models were used to estimate within-subject SD. Measurements and Main Results: We identified a total of 26 articles, including 7 studies analyzing variability under the same conditions, 10 studies analyzing variability with repeat testing over time under different conditions, and 19 studies reporting individual sources of variability. Most data were on QuantiFERON (only three studies on T-SPOT.TB). A considerable number of conversions and reversions were seen around the manufacturer-recommended cut-point. The estimated range of variability of IFN-γ response in QuantiFERON under identical conditions was ±0.47 IU/ml (coefficient of variation, 13%) and ±0.26 IU/ml (30%) for individuals with an initial IFN-γ response in the borderline range (0.25–0.80 IU/ml). The estimated range of variability in noncontrolled settings was substantially larger (±1.4 IU/ml; 60%). Blood volume inoculated into QuantiFERON tubes and preanalytic delay were identified as key sources of variability. Conclusions: This systematic review shows substantial variability with repeat IFN-γ release assays testing even under identical conditions, suggesting that reversions and conversions around the existing cut-point should be interpreted with caution. PMID:25188809
Grabowski, Krzysztof; Gawronski, Mateusz; Baran, Ireneusz; Spychalski, Wojciech; Staszewski, Wieslaw J; Uhl, Tadeusz; Kundu, Tribikram; Packo, Pawel
2016-05-01
Acoustic Emission, as used in Non-Destructive Testing, is focused on the analysis of elastic waves propagating in mechanical structures. The information carried by the generated acoustic waves, recorded by a set of transducers, allows the integrity of these structures to be determined. Material properties and geometry clearly have a strong impact on the result. In this paper a method for Acoustic Emission source localization in thin plates is presented. The approach is based on the Time-Distance Domain Transform, a wavenumber-frequency mapping technique for precise event localization. The major advantage of the technique is dispersion compensation through phase-shifting of the investigated waveforms to obtain the most accurate output, allowing source-sensor distance estimation using a single transducer. The accuracy and robustness of the above process are also investigated, including the influence of the Young's modulus value and of numerical parameters on damage detection. By merging the Time-Distance Domain Transform with an optimal distance selection technique, an identification-localization algorithm is achieved. The method is investigated analytically, numerically and experimentally; the latter involves both laboratory and large-scale industrial tests. Copyright © 2016 Elsevier B.V. All rights reserved.
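The general mechanism of dispersion compensation by phase shifting can be sketched as follows: back-propagate the recorded spectrum over trial distances using a known dispersion curve, and take the distance that best recompresses the wave packet. The dispersion curve and the compactness criterion below are illustrative assumptions, not the paper's exact Time-Distance Domain Transform.

    import numpy as np

    def wavenumber(freqs):
        """Hypothetical flexural-mode dispersion curve k(f) for a thin plate."""
        c = 1000.0 * np.sqrt(np.maximum(freqs, 1.0) / 1e5)  # phase speed [m/s]
        return 2.0 * np.pi * freqs / c

    def compensate(signal, fs, distance):
        """Remove the propagation phase k(f)*d from the measured spectrum."""
        spec = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        return np.fft.irfft(spec * np.exp(1j * wavenumber(freqs) * distance),
                            n=len(signal))

    def estimate_distance(signal, fs, trial_distances):
        """Pick the trial distance giving the most recompressed envelope."""
        scores = [np.max(np.abs(compensate(signal, fs, d)))
                  for d in trial_distances]
        return trial_distances[int(np.argmax(scores))]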
Battaglia, Maurizio; Gottsmann, J.; Carbone, D.; Fernandez, J.
2008-01-01
Time-dependent gravimetric measurements can detect subsurface processes long before magma flow leads to earthquakes or other eruption precursors. The ability of gravity measurements to detect subsurface mass flow is greatly enhanced if gravity measurements are analyzed and modeled with ground-deformation data. Obtaining the maximum information from microgravity studies requires careful evaluation of the layout of network benchmarks, the gravity environmental signal, and the coupling between gravity changes and crustal deformation. When changes in the system under study are fast (hours to weeks), as in hydrothermal systems and restless volcanoes, continuous gravity observations at selected sites can help to capture many details of the dynamics of the intrusive sources. Despite the instrumental effects, mainly caused by atmospheric temperature, results from monitoring at Mt. Etna volcano show that continuous measurements are a powerful tool for monitoring and studying volcanoes. Several analytical and numerical mathematical models can be used to fit gravity and deformation data. Analytical models offer a closed-form description of the volcanic source. In principle, this allows one to readily infer the relative importance of the source parameters. In active volcanic sites such as Long Valley caldera (California, U.S.A.) and Campi Flegrei (Italy), careful use of analytical models and high-quality data sets has produced good results. However, the simplifications that make analytical models tractable might result in misleading volcanological interpretations, particularly when the real crust surrounding the source is far from the homogeneous/isotropic assumption. Using numerical models allows consideration of more realistic descriptions of the sources and of the crust where they are located (e.g., vertical and lateral mechanical discontinuities, complex source geometries, and topography). Applications at Teide volcano (Tenerife) and Campi Flegrei demonstrate the importance of this more realistic description in gravity calculations. © 2008 Society of Exploration Geophysicists. All rights reserved.
Immobilized aptamer paper spray ionization source for ion mobility spectrometry.
Zargar, Tahereh; Khayamian, Taghi; Jafari, Mohammad T
2017-01-05
A selective thin-film microextraction based on an aptamer immobilized on cellulose paper was used, for the first time, as a paper spray ionization source for ion mobility spectrometry (PSI-IMS). In this method, the paper is not only used as an ionization source but is also utilized for the selective extraction of the analyte, based on the immobilized aptamer. This combination integrates both sample preparation and analyte ionization on a Whatman paper. To that end, an appropriate sample introduction system with a novel design was constructed for the paper spray ionization source. Using this system, a continuous solvent flow works simultaneously as an elution and spray solvent. The analyte is adsorbed on a triangular paper with immobilized aptamer and is then desorbed and ionized by the elution solvent and the high voltage applied to the paper, respectively. The effects of different experimental parameters, such as applied voltage, angle of the paper tip, distance between the paper tip and the counter electrode, elution solvent type, and solvent flow rate, were optimized. The proposed method was exhaustively validated in terms of sensitivity and reproducibility by analyzing standard solutions of codeine and acetamiprid. The analytical results obtained are promising enough to support the use of immobilized-aptamer paper spray as both the extraction and ionization technique in IMS for direct analysis of biomedicines. Copyright © 2016 Elsevier B.V. All rights reserved.
Stoeckel, Donald M; Stelzer, Erin A; Stogner, Robert W; Mau, David P
2011-05-01
Protocols for microbial source tracking of fecal contamination generally are able to identify when a source of contamination is present, but thus far have been unable to evaluate what portion of fecal-indicator bacteria (FIB) came from various sources. A mathematical approach to estimate relative amounts of FIB, such as Escherichia coli, from various sources based on the concentration and distribution of microbial source tracking markers in feces was developed. The approach was tested using dilute fecal suspensions, then applied as part of an analytical suite to a contaminated headwater stream in the Rocky Mountains (Upper Fountain Creek, Colorado). In one single-source fecal suspension, a source that was not present could not be excluded because of incomplete marker specificity; however, human and ruminant sources were detected whenever they were present. In the mixed-feces suspension (pet and human), the minority contributor (human) was detected at a concentration low enough to preclude human contamination as the dominant source of E. coli to the sample. Without the semi-quantitative approach described, simple detects of human-associated marker in stream samples would have provided inaccurate evidence that human contamination was a major source of E. coli to the stream. In samples from Upper Fountain Creek the pattern of E. coli, general and host-associated microbial source tracking markers, nutrients, and wastewater-associated chemical detections--augmented with local observations and land-use patterns--indicated that, contrary to expectations, birds rather than humans or ruminants were the predominant source of fecal contamination to Upper Fountain Creek. This new approach to E. coli allocation, validated by a controlled study and tested by application in a relatively simple setting, represents a widely applicable step forward in the field of microbial source tracking of fecal contamination. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman
2017-07-01
This paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems exist in three-dimensional reaction-diffusion phenomena, which are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of the L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results show that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
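For readers unfamiliar with ADI-type schemes, the sketch below shows the basic structure for a Fisher-type equation u_t = D*Laplacian(u) + r*u*(1-u), a standard population-genetics example: the nonlinear source is advanced explicitly and the diffusion operator is split into three one-dimensional implicit (tridiagonal) sweeps. This is a simple first-order splitting for illustration only, not the paper's fourth-order Douglas scheme.

    import numpy as np
    from scipy.linalg import solve_banded

    def implicit_sweep(u, axis, alpha):
        """Solve (I - alpha*D2) u_new = u along one axis (u = 0 at walls)."""
        n = u.shape[axis]
        ab = np.zeros((3, n))      # tridiagonal system in banded storage
        ab[0, 1:] = -alpha         # super-diagonal
        ab[1, :] = 1.0 + 2.0 * alpha
        ab[2, :-1] = -alpha        # sub-diagonal
        u = np.moveaxis(u, axis, 0)
        flat = solve_banded((1, 1), ab, u.reshape(n, -1))
        return np.moveaxis(flat.reshape(u.shape), 0, axis)

    def step(u, D, r, dt, h):
        """One time step: explicit reaction, then an implicit sweep per axis."""
        u = u + dt * r * u * (1.0 - u)     # nonlinear source term
        alpha = D * dt / h**2
        for axis in range(3):
            u = implicit_sweep(u, axis, alpha)
        return u

    # Example: evolve a small random perturbation on a 32^3 grid
    u = 0.01 * np.random.rand(32, 32, 32)
    for _ in range(100):
        u = step(u, D=0.1, r=1.0, dt=0.01, h=1.0 / 31)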
The Milky Way rotation curve revisited
NASA Astrophysics Data System (ADS)
Russeil, D.; Zavagno, A.; Mège, P.; Poulin, Y.; Molinari, S.; Cambresy, L.
2017-05-01
The Herschel survey of the Galactic Plane (Hi-GAL) is a continuum Galactic plane survey in five wavebands at 70, 160, 250, 350 and 500 μm. From such images, about 150 000 sources have been extracted, for which distance determination is a challenge. In this context the velocity of these sources has been determined thanks to a large number of molecular data cubes, but to convert velocity to kinematic distance one needs to adopt a rotation curve for our Galaxy. For three different samples of tracers, we test different analytical forms. We find that the power-law expression θ(R)/θ0 = 1.022 (R/R0)^0.0803, with R0 = 8.34 kpc and θ0 = 240 km s^-1, is a good and easily manipulated expression for the distance determination process.
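In code, the quoted fit and the kinematic-distance relation it feeds are compact. The snippet below transcribes the abstract's values directly; the near/far distance formula is the standard inner-Galaxy geometry, added as a usage note (resolving the ambiguity between the two solutions is a separate step).

    import numpy as np

    R0, THETA0 = 8.34, 240.0      # kpc, km/s (values quoted above)

    def theta(R):
        """Rotation speed [km/s] at Galactocentric radius R [kpc]."""
        return THETA0 * 1.022 * (R / R0) ** 0.0803

    def v_lsr(R, l_deg):
        """LSR radial velocity toward Galactic longitude l [deg]."""
        l = np.radians(l_deg)
        return (theta(R) / R - THETA0 / R0) * R0 * np.sin(l)

    def kinematic_distances(R, l_deg):
        """Near/far heliocentric distances [kpc] for an inner-Galaxy source."""
        l = np.radians(l_deg)
        half_chord = np.sqrt(R**2 - (R0 * np.sin(l))**2)
        return R0 * np.cos(l) - half_chord, R0 * np.cos(l) + half_chord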
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using the experimental results of tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Deriving Earth Science Data Analytics Tools/Techniques Requirements
NASA Astrophysics Data System (ADS)
Kempler, S. J.
2015-12-01
Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry; in fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.
Deriving Earth Science Data Analytics Requirements
NASA Technical Reports Server (NTRS)
Kempler, Steven J.
2015-01-01
Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry; in fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tools/techniques requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.
BLUES function method in computational physics
NASA Astrophysics Data System (ADS)
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
Description of small-scale fluctuations in the diffuse X-ray background.
NASA Technical Reports Server (NTRS)
Cavaliere, A.; Friedland, A.; Gursky, H.; Spada, G.
1973-01-01
An analytical study of the fluctuations on a small angular scale expected in the diffuse X-ray background in the presence of unresolved sources is presented. The source population is described by a function N(S), giving the number of sources per unit solid angle and unit apparent flux S. The distribution of observed flux, s, in each angular resolution element of a complete sky survey is represented by a function Q(s). The analytical relation between the successive, higher-order moments of N(S) and Q(s) is described. The goal of reconstructing the source population from the study of the moments of Q(s) of order higher than the second (i.e., the rms fluctuations) is discussed.
Background Radioactivity in River and Reservoir Sediments near Los Alamos, New Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. G. McLin; D. W. Lyons
2002-05-05
As part of its continuing Environmental Surveillance Program, regional river and lake-bottom sediments have been collected annually by Los Alamos National Laboratory (the Laboratory) since 1974 and 1979, respectively. These background samples are collected from three drainage basins at ten different river stations and five reservoirs located throughout northern New Mexico and southern Colorado. Radiochemical analyses for these sediments include tritium, strontium-90, cesium-137, total uranium, plutonium-238, plutonium-239,-240, americium-241, gross alpha, gross beta, and gross gamma radioactivity. Detection-limit radioactivity originates as worldwide fallout from aboveground nuclear weapons testing and satellite reentry into Earth's atmosphere. Spatial and temporal variations in individual analyte levels originate from atmospheric point-source introductions and natural rate differences in airborne deposition and soil erosion. Background radioactivity values on sediments reflect this variability, and grouped river and reservoir sediment samples show a range of statistical distributions that appear to be analyte dependent. Traditionally, both river and reservoir analyte data were blended together to establish background levels. In this report, however, we group background sediment data according to two criteria: sediment source (either river or reservoir sediments) and station location relative to the Laboratory (either upstream or downstream). These grouped data are statistically evaluated through 1997, and background radioactivity values are established for individual analytes in upstream river and reservoir sediments. This information may be used to establish the existence and areal extent of trace-level environmental contamination resulting from historical Laboratory research activities since the early 1940s.
Toya, Yusuke; Itagaki, Toshiko; Wagatsuma, Kazuaki
2017-01-01
We investigated a simultaneous internal standard method in flame atomic absorption spectrometry (FAAS) in order to improve the analytical precision for 3d-transition metals contained in steel materials. For this purpose, a new spectrometer system for FAAS, comprising a bright xenon lamp as the primary radiation source and a high-resolution Echelle monochromator, was employed to measure several absorption lines within a wavelength width of ca. 0.3 nm at the same time, which enables the absorbances of an analytical line and an internal standard line to be estimated. Considering several criteria for selecting an internal standard element and its absorption line, platinum-group elements (ruthenium, rhodium, or palladium) are suitable internal standards for determining 3d-transition metals such as titanium, iron, and nickel, by measuring an appropriate pair of absorption lines simultaneously. Several variances of the absorption signal, such as variation in the aspirated amount of sample solution and short-period drift of the primary light source, are corrected and thus reduced when the absorbance ratio of the analytical line to the internal standard line is measured. In Ti-Pd, Ni-Rh, and Fe-Ru systems chosen as typical test samples, the repeatability of the signal responses was investigated with and without the internal standard method, resulting in better precision when the internal standard method was applied in FAAS with a nitrous oxide-acetylene flame rather than an air-acetylene flame.
(U) An Analytic Study of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-16
We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
Testing of the analytical anisotropic algorithm for photon dose calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esch, Ann van; Tillikainen, Laura; Pyykkonen, Jukka
2006-11-15
The analytical anisotropic algorithm (AAA) was implemented in the Eclipse (Varian Medical Systems) treatment planning system to replace the single pencil beam (SPB) algorithm for the calculation of dose distributions for photon beams. AAA was developed to improve the dose calculation accuracy, especially in heterogeneous media. The total dose deposition is calculated as the superposition of the dose deposited by two photon sources (primary and secondary) and by an electron contamination source. The photon dose is calculated as a three-dimensional convolution of Monte-Carlo precalculated scatter kernels, scaled according to the electron density matrix. For the configuration of AAA, an optimization algorithm determines the parameters characterizing the multiple source model by optimizing the agreement between the calculated and measured depth dose curves and profiles for the basic beam data. We have combined the acceptance tests obtained in three different departments for 6, 15, and 18 MV photon beams. The accuracy of AAA was tested for different field sizes (symmetric and asymmetric) for open fields, wedged fields, and static and dynamic multileaf collimation fields. Depth dose behavior at different source-to-phantom distances was investigated. Measurements were performed on homogeneous, water-equivalent phantoms, on simple phantoms containing cork inhomogeneities, and on the thorax of an anthropomorphic phantom. Comparisons were made among measurements, AAA, and SPB calculations. The optimization procedure for the configuration of the algorithm was successful in reproducing the basic beam data with an overall accuracy of 3%, 1 mm in the build-up region, and 1%, 1 mm elsewhere. Testing of the algorithm in more clinical setups showed comparable results for depth dose curves, profiles, and monitor units of symmetric open and wedged beams below d_max. The electron contamination model was found to be suboptimal in modeling the dose around d_max, especially for physical wedges at smaller source-to-phantom distances. For the asymmetric field verification, absolute dose differences of up to 4% were observed for the most extreme asymmetries. Compared to the SPB, the penumbra modeling is considerably improved (1%, 1 mm). At the interface between solid water and cork, profiles show a better agreement with AAA. Depth dose curves in the cork are substantially better with AAA than with SPB. Improvements are more pronounced for 18 MV than for 6 MV. Point dose measurements in the thoracic phantom are mostly within 5%. In general, we can conclude that, compared to SPB, AAA improves the accuracy of dose calculations. Particular progress was made with respect to the penumbra and low dose regions. In heterogeneous materials, improvements are substantial and more pronounced for high (18 MV) than for low (6 MV) energies.
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. There is therefore a strong incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance, combined with significant improvements in both the chemical and instrumental methods, have allowed improved analytical performance to be achieved for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value-added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
Batt, Angela L; Furlong, Edward T; Mash, Heath E; Glassmeyer, Susan T; Kolpin, Dana W
2017-02-01
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs, which allowed us to assess and compare the performance of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that did not appear suitable for these analytes are discussed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. Published by Elsevier B.V.
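The spiked-recovery comparison at the heart of this performance assessment uses the standard formula sketched below; the replicate values and the acceptance window are illustrative assumptions, not the study's predetermined quality standards.

    import numpy as np

    def percent_recovery(spiked, unspiked, amount_added):
        """Mean recovery [%] of an analyte spiked into a water matrix."""
        return 100.0 * (np.mean(spiked) - np.mean(unspiked)) / amount_added

    # Hypothetical replicate measurements (ng/L) in treated water
    rec = percent_recovery(spiked=[48.0, 52.0, 50.5],
                           unspiked=[1.0, 0.8, 1.1],
                           amount_added=50.0)
    acceptable = 70.0 <= rec <= 130.0   # assumed window, for illustration only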
LISA Pathfinder Instrument Data Analysis
NASA Technical Reports Server (NTRS)
Guzman, Felipe
2010-01-01
LISA Pathfinder (LPF) is an ESA-launched demonstration mission of key technologies required for the joint NASA-ESA gravitational wave observatory in space, LISA. As part of the LPF interferometry investigations, analytic models of noise sources and corresponding noise subtraction techniques have been developed to correct for effects like the coupling of test mass jitter into displacement readout, and fluctuations of the laser frequency or optical pathlength difference. Ground testing of pre-flight hardware of the Optical Metrology subsystem is currently ongoing at the Albert Einstein Institute Hannover. In collaboration with NASA Goddard Space Flight Center, the LPF mission data analysis tool LTPDA is being used to analyze the data product of these tests. Furthermore, the noise subtraction techniques and in-flight experiment runs for noise characterization are being defined as part of the mission experiment master plan. We will present the data analysis outcome of preflight hardware ground tests and possible noise subtraction strategies for in-flight instrument operations.
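A toy version of such a noise-subtraction step is a linear fit of a witness channel (for example, measured test-mass jitter) against the displacement readout, removing the fitted coupling. This sketch is illustrative only; the LTPDA analyses are considerably more elaborate, with frequency-dependent transfer functions and multiple witness channels.

    import numpy as np

    def subtract_coupling(readout, witness):
        """Least-squares estimate of kappa in readout = signal + kappa*witness,
        returning the corrected readout and the coupling coefficient."""
        kappa = np.dot(witness, readout) / np.dot(witness, witness)
        return readout - kappa * witness, kappa

    # Example with synthetic data: jitter couples in at kappa = 0.3
    t = np.linspace(0.0, 10.0, 10000)
    jitter = np.sin(2 * np.pi * 3.0 * t)
    readout = 1e-3 * np.sin(2 * np.pi * 0.1 * t) + 0.3 * jitter
    cleaned, kappa_hat = subtract_coupling(readout, jitter)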
Source Methodology for Turbofan Noise Prediction (SOURCE3D Technical Documentation)
NASA Technical Reports Server (NTRS)
Meyer, Harold D.
1999-01-01
This report provides the analytical documentation for the SOURCE3D Rotor Wake/Stator Interaction Code. It derives the equations for the rotor scattering coefficients and the stator source vector and scattering coefficients that are needed for use in TFANS (Theoretical Fan Noise Design/Prediction System). SOURCE3D treats the rotor and stator as isolated source elements. TFANS uses this information, along with scattering coefficients for inlet and exit elements, and provides complete noise solutions for turbofan engines. SOURCE3D is composed of a collection of FORTRAN programs that have been obtained by extending the approach of the earlier V072 Rotor Wake/Stator Interaction Code. Similar to V072, it treats the rotor and stator as a collection of blades and vanes having zero thickness and camber contained in an infinite, hardwall annular duct. SOURCE3D adds important features to the V072 capability: a rotor element, swirl flow and vorticity waves, actuator disks for flow turning, and combined rotor/actuator disk and stator/actuator disk elements. These items allow reflections from the rotor, frequency scattering, and mode trapping, thus providing more complete noise predictions than previously. The code has been thoroughly verified through comparison with D. B. Hanson's CUP2D two-dimensional code using a narrow annulus test case.
An analytic solution for numerical modeling validation in electromagnetics: the resistive sphere
NASA Astrophysics Data System (ADS)
Swidinsky, Andrei; Liu, Lifei
2017-11-01
We derive the electromagnetic response of a resistive sphere to an electric dipole source buried in a conductive whole space. The solution consists of an infinite series of spherical Bessel functions and associated Legendre polynomials, and follows the well-studied problem of a conductive sphere buried in a resistive whole space in the presence of a magnetic dipole. Our result is particularly useful for controlled-source electromagnetic problems using a grounded electric dipole transmitter and can be used to check numerical methods of calculating the response of resistive targets (such as finite difference, finite volume, finite element and integral equation). While we elect to focus on the resistive sphere in our examples, the expressions in this paper are completely general and allow for arbitrary source frequency, sphere radius, transmitter position, receiver position and sphere/host conductivity contrast so that conductive target responses can also be checked. Commonly used mesh validation techniques consist of comparisons against other numerical codes, but such solutions may not always be reliable or readily available. Alternatively, the response of simple 1-D models can be tested against well-known whole space, half-space and layered earth solutions, but such an approach is inadequate for validating models with curved surfaces. We demonstrate that our theoretical results can be used as a complementary validation tool by comparing analytic electric fields to those calculated through a finite-element analysis; the software implementation of this infinite series solution is made available for direct and immediate application.
MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.
Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui
2015-12-12
Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To describe peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads respectively.
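Two of the computed measures can be illustrated with common textbook definitions; MRMPlus' exact formulas follow the CPTAC working-group recommendations and may differ in detail, so treat this only as a sketch of the idea.

    import numpy as np

    def limit_of_detection(blank_responses, low_conc_responses):
        """Simplified two-sample LOD: blank mean plus scaled pooled spread."""
        return (np.mean(blank_responses)
                + 1.645 * (np.std(blank_responses, ddof=1)
                           + np.std(low_conc_responses, ddof=1)))

    def linearity_r2(concentrations, responses):
        """R^2 of a linear calibration fit over the assayed range."""
        responses = np.asarray(responses, dtype=float)
        fit = np.polyval(np.polyfit(concentrations, responses, 1),
                         concentrations)
        ss_res = np.sum((responses - fit) ** 2)
        ss_tot = np.sum((responses - np.mean(responses)) ** 2)
        return 1.0 - ss_res / ss_tot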
Temperature distribution of a simplified rotor due to a uniform heat source
NASA Astrophysics Data System (ADS)
Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver
2018-03-01
In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.
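A classical building block for such analytical approaches is the quasi-steady moving line-source (Rosenthal) solution, in which the temperature field depends on the group v*r/(2*alpha). The sketch below implements that textbook solution with assumed material constants; it is not the paper's periodic-source formulation.

    import numpy as np
    from scipy.special import k0

    K_COND = 20.0    # thermal conductivity [W/(m K)] (assumed)
    ALPHA = 5e-6     # thermal diffusivity [m^2/s] (assumed)

    def temperature_rise(xi, y, q_line, v):
        """Quasi-steady temperature rise [K] around a moving line source.

        xi: coordinate along the sliding direction in the source frame [m]
        y:  transverse coordinate [m]
        q_line: heat input per unit length [W/m]; v: sliding velocity [m/s]
        """
        r = np.maximum(np.hypot(xi, y), 1e-9)   # avoid the r = 0 singularity
        return (q_line / (2.0 * np.pi * K_COND)
                * np.exp(-v * xi / (2.0 * ALPHA))
                * k0(v * r / (2.0 * ALPHA)))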
NASA Astrophysics Data System (ADS)
Priya, Anjali; Mishra, Ram Awadh
2016-04-01
In this paper, an analytical model of the surface potential is proposed for a new Triple Metal Gate (TMG) fully depleted Recessed-Source/Drain Silicon On Insulator (SOI) Metal Oxide Semiconductor Field Effect Transistor (MOSFET). The metal with the highest work function is arranged near the source region and that with the lowest near the drain. The Recessed-Source/Drain SOI MOSFET has higher drain current than the conventional SOI MOSFET due to its large source and drain regions. The surface potential model, developed from the 2D Poisson's equation, is verified by comparison with simulation results from the 2-dimensional ATLAS simulator. The model is compared with DMG and SMG devices and analysed for different device parameters. The ratio of the metal gate lengths is varied to optimize the result.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
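The computational burden described above comes from evaluating, for every hour of meteorology, an integral of point-source kernels over the source geometry. The sketch below shows that brute-force baseline for a ground-level rectangular area source with crude power-law dispersion coefficients; the paper's closed-form hypergeometric solutions replace exactly this kind of loop. All coefficients are illustrative assumptions, not values from AERMOD or ADMS.

    import numpy as np

    def sigma_y(x):
        return 0.08 * x ** 0.9     # crosswind spread [m] (illustrative)

    def sigma_z(x):
        return 0.06 * x ** 0.8     # vertical spread [m] (illustrative)

    def point_kernel(x, y, q, u):
        """Ground-level concentration from a ground-level point source."""
        if x <= 0.0:
            return 0.0             # receptor upwind of the source element
        sy, sz = sigma_y(x), sigma_z(x)
        return q / (np.pi * u * sy * sz) * np.exp(-0.5 * (y / sy) ** 2)

    def area_source_concentration(xr, yr, q_per_area, u,
                                  x0, x1, y0, y1, n=100):
        """Numerically integrate the kernel over a rectangular area source."""
        xs = np.linspace(x0, x1, n)
        ys = np.linspace(y0, y1, n)
        dA = (xs[1] - xs[0]) * (ys[1] - ys[0])
        return sum(point_kernel(xr - xi, yr - yj, q_per_area * dA, u)
                   for xi in xs for yj in ys)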
Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators
NASA Technical Reports Server (NTRS)
Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)
2002-01-01
Ground-based testing is a critical and costly part of component, assembly, and system verification of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold-operation optical throughput are supplemented by segments for analytical verification of specific structural, thermal, and optical parameters. Drawing on integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution as gleaned by the authors from similar analytical verification support of a previous large space telescope, and closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.
Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.
1997-01-01
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus are also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.
Turner, T.D.; Beller, L.S.; Clark, M.L.; Klingler, K.M.
1997-10-14
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: (a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; (b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; (c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and (d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container. 8 figs.
Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris
2017-12-15
Storage is important for flood mitigation and non-point source pollution control; however, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework for finding an optimal storage tank scheme using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load, and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); and (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives, and different rainfall scenarios were considered to test its robustness. The results demonstrate that the optimization framework is feasible and that the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
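The AHP step in the analytical module reduces to computing criterion weights from a pairwise comparison matrix and scoring each flooding node. A minimal sketch follows, with an invented comparison judgment (flood depth three times as important as flood duration) purely for illustration; the paper's actual judgments are not reproduced here.

    import numpy as np

    def ahp_weights(pairwise):
        """Criterion weights = normalized principal eigenvector of the
        pairwise comparison matrix."""
        vals, vecs = np.linalg.eig(pairwise)
        w = np.real(vecs[:, np.argmax(np.real(vals))])
        return w / w.sum()

    # Hypothetical judgment: depth is 3x as important as duration
    M = np.array([[1.0, 3.0],
                  [1.0 / 3.0, 1.0]])
    weights = ahp_weights(M)                      # -> [0.75, 0.25]

    # Score one flooding node from its normalized indicators
    depth_norm, duration_norm = 0.8, 0.4          # example values
    node_score = weights @ np.array([depth_norm, duration_norm])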
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
NASA Astrophysics Data System (ADS)
Cucu, Daniela; Woods, Mike
2008-08-01
The paper presents a practical approach for testing laboratories to ensure the quality of their test results. It is based on experience gained in assessing a large number of testing laboratories, discussing with management and staff, reviewing results obtained in national and international PTs and ILCs, and exchanging information in the EA laboratory committee. According to EN ISO/IEC 17025, an accredited laboratory has to implement a programme to ensure the quality of its test results for each measurand. Pre-analytical, analytical, and post-analytical measures shall be applied in a systematic manner, and shall include both quality control and quality assurance measures. When designing the quality assurance programme, a laboratory should consider pre-analytical activities (such as personnel training, selection and validation of test methods, and qualification of equipment), analytical activities ranging from sampling and sample preparation to instrumental analysis, and post-analytical activities (such as decoding, calculation, use of statistical tests or packages, and management of results). Designed on different levels (analyst, quality manager, and technical manager) and including a variety of measures, the programme shall ensure the validity and accuracy of test results, demonstrate the adequacy of the management system, prove the laboratory's competence in performing tests under accreditation, and, not least, show the comparability of test results. Laboratory management should establish performance targets, review QC/QA results against them periodically, and implement appropriate measures in case of non-compliance.
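To make the quality-control side of such a programme concrete, here is a minimal sketch of one common internal QC measure: flagging control results against Levey-Jennings style limits. The 1-2s/1-3s rules follow the widely used Westgard convention, and the target mean and SD are hypothetical, not from the paper.

```python
# A minimal sketch of a Levey-Jennings style QC check (illustrative only).
# 1-2s acts as a warning rule and 1-3s as a rejection rule, following the
# common Westgard convention; mean and sd are assumed target values.
def qc_flags(results, mean, sd):
    """Return a warning/rejection flag per control result."""
    flags = []
    for x in results:
        z = (x - mean) / sd
        if abs(z) > 3:
            flags.append("reject (1-3s)")
        elif abs(z) > 2:
            flags.append("warn (1-2s)")
        else:
            flags.append("ok")
    return flags

print(qc_flags([101.0, 104.5, 93.5], mean=100.0, sd=2.0))
# -> ['ok', 'warn (1-2s)', 'reject (1-3s)']
```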
Enabling Analytics on Sensitive Medical Data with Secure Multi-Party Computation.
Veeningen, Meilof; Chatterjea, Supriyo; Horváth, Anna Zsófia; Spindler, Gerald; Boersma, Eric; van der Spek, Peter; van der Galiën, Onno; Gutteling, Job; Kraaij, Wessel; Veugen, Thijs
2018-01-01
While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multi-party computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance with European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead in making fully privacy-preserving data analytics in the medical sector commonplace.
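As a minimal sketch of the core idea, and assuming nothing about the specific protocol used in the pilots, additive secret sharing lets several parties compute a sum without any party seeing another's input:

```python
import random

# Additive secret sharing over a prime field (illustrative only, not the
# paper's protocol). Each data holder splits its value into random shares;
# parties sum shares locally, and only the total is ever revealed.
P = 2**61 - 1  # a Mersenne prime as the field modulus

def share(value, n_parties):
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reveal(shares):
    return sum(shares) % P

hospital_values = [120, 85, 342]            # sensitive inputs, never pooled
all_shares = [share(v, 3) for v in hospital_values]
# party i holds one share of every input and adds them locally
party_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]
assert reveal(party_sums) == sum(hospital_values)
```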
Comparison of Three Plasma Sources for Ambient Desorption/Ionization Mass Spectrometry
NASA Astrophysics Data System (ADS)
McKay, Kirsty; Salter, Tara L.; Bowfield, Andrew; Walsh, James L.; Gilmore, Ian S.; Bradley, James W.
2014-09-01
Plasma-based desorption/ionization sources are an important ionization technique for ambient surface analysis mass spectrometry. In this paper, we compare and contrast three competing plasma based desorption/ionization sources: a radio-frequency (rf) plasma needle, a dielectric barrier plasma jet, and a low-temperature plasma probe. The ambient composition of the three sources and their effectiveness at analyzing a range of pharmaceuticals and polymers were assessed. Results show that the background mass spectrum of each source was dominated by air species, with the rf needle producing a richer ion spectrum consisting mainly of ionized water clusters. It was also seen that each source produced different ion fragments of the analytes under investigation: this is thought to be due to different substrate heating, different ion transport mechanisms, and different electric field orientations. The rf needle was found to fragment the analytes least and as a result it was able to detect larger polymer ions than the other sources.
75 FR 5722 - Procedures for Transportation Workplace Drug and Alcohol Testing Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
... drugs in a DOT drug test. You must not test ``DOT specimens'' for any other drugs. (a) Marijuana... [table fragment: initial test analyte, marijuana metabolites, 50 ng/mL; confirmatory test analyte, THCA, 15 ng/mL] ...
NASA Astrophysics Data System (ADS)
Liang, Hui; Chen, Xiaobo
2017-10-01
A novel multi-domain method based on an analytical control surface is proposed by combining the use of the free-surface Green function and the Rankine source function. A cylindrical control surface is introduced to subdivide the fluid domain into external and internal domains. Unlike the traditional domain-decomposition strategy or multi-block method, the control surface here is not panelized; on it, the velocity potential and normal velocity components are analytically expressed as a series of base functions composed of Laguerre functions in the vertical coordinate and Fourier series around the circumference. The free-surface Green function is applied in the external domain, and the boundary integral equation is constructed on the control surface in the sense of Galerkin collocation, by integrating test functions orthogonal to the base functions over the control surface. The external solution gives rise to the so-called Dirichlet-to-Neumann [DN2] and Neumann-to-Dirichlet [ND2] relations on the control surface. Irregular frequencies, which depend only on the radius of the control surface, are present in the external solution; they are removed by extending the boundary integral equation to the interior free surface (a circular disc) on which a null normal derivative of the potential is imposed, with the dipole distribution expressed as a Fourier-Bessel expansion on the disc. In the internal domain, where the Rankine source function is adopted, new boundary integral equations are formulated. Point collocation is imposed over the body surface and free surface, while collocation of the Galerkin type is applied on the control surface. The present method is valid for computing both linear and second-order mean drift wave loads. Furthermore, the second-order mean drift force based on the middle-field formulation can be calculated analytically by using the coefficients of the Fourier-Laguerre expansion.
NASA Astrophysics Data System (ADS)
Jougnot, D.; Guarracino, L.
2016-12-01
The self-potential (SP) method is considered by most researchers the only geophysical method that is directly sensitive to groundwater flow. One source of SP signals, the so-called streaming potential, results from the presence of an electrical double layer at the mineral-pore water interface. When water flows through the pore space, it gives rise to a streaming current and a resulting measurable electrical voltage. Different approaches have been proposed to predict streaming potentials in porous media. One approach is based on the excess charge which is effectively dragged through the medium by the water flow. Following a recent theoretical framework, we developed a physically based analytical model to predict the effective excess charge in saturated porous media. In this study, the porous medium is described by a bundle of capillary tubes with a fractal pore-size distribution. First, an analytical relationship is derived to determine the effective excess charge for a single capillary tube as a function of the pore water salinity. Then, this relationship is used to obtain both exact and approximated expressions for the effective excess charge at the representative elementary volume (REV) scale. The resulting analytical relationship gives the effective excess charge as a function of pore water salinity, fractal dimension, and hydraulic parameters like porosity and permeability, which are also obtained at the REV scale. The new model has been successfully tested against literature data from different sources. One of the main findings of this study is that it provides a mechanistic explanation for the empirical dependence between effective excess charge and permeability found by various researchers. The proposed petrophysical relationship also contributes to understanding the roles of porosity and water salinity in the effective excess charge, and will help push further the use of streaming potentials to monitor groundwater flow.
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
Meier, D C; Benkstein, K D; Hurst, W S; Chu, P M
2017-05-01
Performance standard specifications for point chemical vapor detectors are established in ASTM E 2885-13 and ASTM E 2933-13. The performance evaluation of the detectors requires the accurate delivery of known concentrations of the chemical target to the system under test. Referee methods enable the analyte test concentration and associated uncertainties in the analyte test concentration to be validated by independent analysis, which is especially important for reactive analytes. This work extends the capability of a previously demonstrated method for using Fourier transform infrared (FT-IR) absorption spectroscopy for quantitatively evaluating the composition of vapor streams containing hazardous materials at Acute Exposure Guideline Levels (AEGL) to include test conditions colder than laboratory ambient temperatures. The described method covers the use of primary reference spectra to establish analyte concentrations, the generation of secondary reference spectra suitable for measuring analyte concentrations under specified testing environments, and the use of additional reference spectra and spectral profile strategies to mitigate the uncertainties due to impurities and water condensation within the low-temperature (7 °C, -5 °C) test cell. Important benefits of this approach include verification of the test analyte concentration with characterized uncertainties by in situ measurements co-located with the detector under test, near-real-time feedback, and broad applicability to toxic industrial chemicals.
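For a flavor of how reference spectra yield analyte concentrations, here is a minimal classical-least-squares sketch under the Beer-Lambert assumption; the synthetic band shapes, the interferent, and all numbers are hypothetical, and this is not the NIST procedure itself.

```python
import numpy as np

# Classical least squares: a measured absorbance spectrum is modelled as a
# concentration-weighted sum of unit-concentration reference spectra plus a
# baseline term, then solved for the concentrations.
wavenumbers = np.linspace(800.0, 1200.0, 400)

def band(center, width):
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

K = np.column_stack([
    band(950.0, 15.0),          # unit-concentration analyte reference
    band(1100.0, 25.0),         # unit-concentration interferent (e.g. water)
    np.ones_like(wavenumbers),  # constant baseline term
])

true_conc = np.array([2.5, 0.8, 0.05])
measured = K @ true_conc + np.random.default_rng(0).normal(0, 0.01, K.shape[0])

est, *_ = np.linalg.lstsq(K, measured, rcond=None)
print(est)  # recovered concentrations close to [2.5, 0.8, 0.05]
```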
In-air RBS measurements at the LAMFI external beam setup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, T. F.; Added, N.; Moro, M. V.
2014-11-11
This work describes new developments in the external beam setup of the Laboratory of Material Analysis with Ion Beams of the University of São Paulo (LAMFI-USP). This setup was designed to be a versatile analytical station for analyzing a broad range of samples. In recent developments, external-beam Rutherford Backscattering Spectroscopy (RBS) analysis was added to complement the Particle Induced X-ray Emission (PIXE) measurements. This work presents the initial results of the external-beam RBS analysis, as well as recent developments to improve the energy resolution of the RBS measurements, in particular tests to identify sources of resolution degradation. These aspects are discussed and preliminary results of in-air RBS analysis of some test samples are presented.
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramér-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments (the angular power spectrum and the two-dimensional power spectrum) using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
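As a generic illustration of the Cramér-Rao bound used as a precision metric (a toy 1-D stand-in, not the paper's covariant visibility formalism), the sketch below computes the position-error floor for a hypothetical point source observed through a Gaussian response with Gaussian noise:

```python
import numpy as np

# For a model m(theta) observed with independent Gaussian noise sigma, the
# Fisher information is I = sum((dm/dtheta)^2) / sigma**2 and the variance
# of any unbiased estimator satisfies var(theta_hat) >= 1/I.
x = np.linspace(-5.0, 5.0, 201)     # pixel coordinates (hypothetical)
sigma_noise = 0.05                  # per-pixel noise rms
flux, width, theta0 = 1.0, 1.0, 0.3

def model(theta):
    return flux * np.exp(-0.5 * ((x - theta) / width) ** 2)

# numerical derivative of the model with respect to source position
eps = 1e-6
dm_dtheta = (model(theta0 + eps) - model(theta0 - eps)) / (2 * eps)

fisher = np.sum(dm_dtheta ** 2) / sigma_noise ** 2
crlb_std = 1.0 / np.sqrt(fisher)
print(f"position error floor: {crlb_std:.4f} (same units as x)")
```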
NASA Astrophysics Data System (ADS)
Lee, S. S.; Kim, H. J.; Kim, M. O.; Lee, K.; Lee, K. K.
2016-12-01
A study seeking evidence of remediation in monitoring data collected before and after intensive in-situ remedial action was performed at a DNAPL-contaminated site in Wonju, Korea, using various quantitative evaluation methods such as mass discharge analysis, tracer data, statistical trend analysis, and analytical solutions. Remediation technologies such as soil vapor extraction, soil flushing, biostimulation, and pump-and-treat have been applied to eliminate the contaminant sources of trichloroethylene (TCE) and to prevent migration of the TCE plume from remediation target zones. Prior to the remedial action, the concentrations and mass discharges of TCE at all transects were affected by seasonal recharge variation and residual DNAPL sources. After the remediation, the effect of remediation was clear at the main source zone and the industrial complex. By tracing a time series of plume evolution, a greater variation in TCE concentrations was detected in the plumes near the source zones compared with the relatively stable plumes downstream. The amount of residual source mass removed during the intensive remedial action was estimated using an analytical solution in order to evaluate the action's efficiency. The results indicate that the intensive remedial action removed roughly 70% of the residual source mass during the remediation period. Analytical solutions that can quantify the impacts of partial mass reduction have proven to be useful tools for quantifying unknown contaminant source mass, verifying dissolved concentrations at the DNAPL-contaminated site, and evaluating remediation efficiency from long-term monitoring data. Acknowledgement: This work was supported by the Korea Ministry of Environment under the "GAIA project" (173-092-009 and 201400540010) and the "R&D Project on Environmental Management of Geologic CO2 Storage" from KEITI (project number 2014001810003).
NASA Astrophysics Data System (ADS)
Donohue, Randall; Yang, Yuting; McVicar, Tim; Roderick, Michael
2016-04-01
A fundamental question in climate and ecosystem science is "how does climate regulate the land surface carbon budget?" To better answer that question, here we develop an analytical model for estimating mean annual terrestrial gross primary productivity (GPP), which is the largest carbon flux over land, based on a rate-limitation framework. Actual GPP (climatological mean from 1982 to 2010) is calculated as a function of the balance between two GPP potentials defined by the climate (i.e., precipitation and solar radiation) and a third parameter that encodes other environmental variables and modifies the GPP-climate relationship. The developed model was tested at three spatial scales using different GPP sources: (1) observed GPP from 94 flux sites, (2) modelled GPP (using the model-tree-ensemble approach) at 48,654 (0.5°) grid cells, and (3) 32 large catchments across the globe. Results show that the proposed model could account for the spatial GPP patterns, with root-mean-square errors of 0.70, 0.65, and 0.3 g C m⁻² d⁻¹ and R² of 0.79, 0.92, and 0.97 for the flux-site, grid-cell, and catchment scales, respectively. This analytical GPP model shares a similar form with the Budyko hydroclimatological model, which opens the possibility of a general analytical framework to analyze the linked carbon-water-energy cycles.
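The abstract does not give the model's closed form. As a rough illustration of a Budyko-style rate limitation between two potentials, one can use a generalized-mean form; this is an assumed shape for illustration, not necessarily the authors' published equation.

```python
import numpy as np

# Hypothetical rate-limitation curve: actual GPP limited jointly by a
# precipitation-defined potential (gpp_w) and a radiation-defined potential
# (gpp_e). The generalized-mean form below is an assumed Budyko-like shape,
# NOT the authors' equation; n stands in for the third parameter that
# "encodes other environmental variables". Inputs must be positive.
def gpp_rate_limited(gpp_w, gpp_e, n=2.0):
    gpp_w, gpp_e = np.asarray(gpp_w, float), np.asarray(gpp_e, float)
    return (gpp_w ** -n + gpp_e ** -n) ** (-1.0 / n)

# behaves sensibly at the limits: water-limited when gpp_w << gpp_e, and
# approaching min(gpp_w, gpp_e) as n grows large
print(gpp_rate_limited(1.0, 8.0))            # ~0.99 (water-limited)
print(gpp_rate_limited(5.0, 5.0, n=10.0))    # ~4.67, near min(5, 5)
```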
Analytical optical scattering in clouds
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.
1989-01-01
An analytical optical model for the scattering of light due to lightning by clouds of different geometry is being developed. The self-consistent approach and the equivalent-medium concept of Twersky were used to treat the case corresponding to outside illumination. The resulting multiple scattering problem is thus transformed, with knowledge of the bulk parameters, into scattering by a single obstacle in isolation. Based on the size parameter of a typical water droplet compared with the incident wavelength, the problem for the single scatterer equivalent to the distribution of cloud particles can be solved by either Mie or Rayleigh scattering theory. The supercomputing code of Wiscombe can be used immediately to produce results that can be compared with the Monte Carlo computer simulation for outside incidence. A fairly reasonable inverse approach using the solution of the outside-illumination case was proposed to model analytically the situation for point sources located inside an optically thick cloud; its mathematical details are still being investigated. When finished, it will give scientists an enhanced capability to study more realistic clouds. For testing purposes, the direct approach to the inside illumination of clouds by lightning is under consideration. An analytical solution for the cubic cloud will soon be obtained; for cylindrical or spherical clouds, preliminary results are needed for scattering by bounded obstacles above or below a penetrable surface interface.
Enabling fluorescent biosensors for the forensic identification of body fluids.
Frascione, Nunzianda; Gooch, James; Daniel, Barbara
2013-11-12
The search for body fluids often forms a crucial element of many forensic investigations. Confirming fluid presence at a scene can not only support or refute the circumstantial claims of a victim, suspect or witness, but may additionally provide a valuable source of DNA for further identification purposes. However, current biological fluid testing techniques are impaired by a number of well-characterised limitations; they often give false positives, cannot be used simultaneously, are sample destructive and lack the ability to visually locate fluid depositions. These disadvantages can negatively affect the outcome of a case through missed or misinterpreted evidence. Biosensors are devices able to transduce a biological recognition event into a measurable signal, resulting in real-time analyte detection. The use of innovative optical sensing technology may enable the highly specific and non-destructive detection of biological fluid depositions through interaction with several fluid-endogenous biomarkers. Despite considerable impact in a variety of analytical disciplines, biosensor application within forensic analyses may be considered extremely limited. This article aims to explore a number of prospective biosensing mechanisms and to outline the challenges associated with their adaptation towards detection of fluid-specific analytes.
Proactive Supply Chain Performance Management with Predictive Analytics
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on a specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model, which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.
Propagation of propeller tone noise through a fuselage boundary layer
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Magliozzi, B.
1984-01-01
In earlier experimental and analytical studies, it was found that the boundary layer on an aircraft could provide significant shielding from propeller noise at typical transport airplane cruise Mach numbers. In this paper a new three-dimensional theory is described that treats the combined effects of refraction and scattering by the fuselage and boundary layer. The complete wave field is solved by matching analytical expressions for the incident and scattered waves in the outer flow to a numerical solution in the boundary layer flow. The model for the incident waves is a near-field frequency-domain propeller source theory developed previously for free field studies. Calculations for an advanced turboprop (Prop-Fan) model flight test at 0.8 Mach number show a much smaller than expected pressure amplification at the noise directivity peak, strong boundary layer shielding in the forward quadrant, and shadowing around the fuselage. Results are presented showing the difference between fuselage surface and free-space noise predictions as a function of frequency and Mach number. Comparison of calculated and measured effects obtained in a Prop-Fan model flight test show good agreement, particularly near and aft of the plane of rotation at high cruise Mach number.
Analysis of three tests of the unconfined aquifer in southern Nassau County, Long Island, New York
Lindner, J.B.; Reilly, T.E.
1982-01-01
Drawdown and recovery data from three 2-day aquifer tests of the unconfined (water-table) aquifer in southern Nassau County, N.Y., conducted during the fall of 1979, were analyzed. Several simple analytical solutions, a type-curve-matching procedure, and a Galerkin finite-element radial-flow model were used to determine hydraulic conductivity, the ratio of horizontal to vertical hydraulic conductivity, and specific yield. Results of the curve-matching procedure covered a broad range of values that could be narrowed by considering data from other sources such as published reports, drillers' logs, or values determined by analytical solutions. Analysis by the radial-flow model was preferred because it allows for vertical variability in aquifer properties and solves the system for all observation points simultaneously, whereas the other techniques treat the aquifer as homogeneous and must treat each observation well separately. All methods produced fairly consistent results. The ranges of aquifer values at the three sites were: horizontal hydraulic conductivity, 140 to 380 feet per day; transmissivity, 11,200 to 17,100 feet squared per day; ratio of horizontal to vertical hydraulic conductivity, 2.4:1 to 7:1; and specific yield, 0.13 to 0.23. (USGS)
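For context, a minimal sketch of the kind of simple analytical solution used in such analyses is the classical Theis well function; the parameter values below are hypothetical, not the Nassau County results, and applying Theis to a water-table aquifer is itself an approximation.

```python
import numpy as np
from scipy.special import exp1

# Theis (1935) drawdown around a pumping well:
#   s = Q / (4*pi*T) * W(u),  u = r^2 * S / (4*T*t)
# where W is the well function, equal to the exponential integral E1.
Q = 1000.0   # pumping rate, ft^3/day (hypothetical)
T = 15000.0  # transmissivity, ft^2/day
S = 0.18     # storage coefficient (specific yield for the water-table case)
r = 100.0    # distance to observation well, ft

t = np.array([0.01, 0.1, 0.5, 1.0, 2.0])  # days since pumping began
u = r**2 * S / (4.0 * T * t)
s = Q / (4.0 * np.pi * T) * exp1(u)       # drawdown, ft
for ti, si in zip(t, s):
    print(f"t = {ti:5.2f} d  drawdown = {si:.4f} ft")
```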
NASA Astrophysics Data System (ADS)
Jawitz, J. W.; Basu, N.; Chen, X.
2007-05-01
Interwell application of coupled nonreactive and reactive tracers through aquifer contaminant source zones enables quantitative characterization of aquifer heterogeneity and contaminant architecture. Parameters obtained from tracer tests are presented here in a Lagrangian framework that can be used to predict the dissolution of nonaqueous phase liquid (NAPL) contaminants. Nonreactive tracers are commonly used to provide information about travel time distributions in hydrologic systems. Reactive tracers have more recently been introduced as a tool to quantify the amount of NAPL contaminant present within the tracer swept volume. Our group has extended reactive tracer techniques to also characterize NAPL spatial distribution heterogeneity. By conceptualizing the flow field through an aquifer as a collection of streamtubes, the aquifer hydrodynamic heterogeneities may be characterized by a nonreactive tracer travel time distribution, and NAPL spatial distribution heterogeneity may be similarly described using reactive travel time distributions. The combined statistics of these distributions are used to derive a simple analytical solution for contaminant dissolution. This analytical solution, and the tracer techniques used for its parameterization, were validated both numerically and experimentally. Illustrative applications are presented from numerical simulations using the multiphase flow and transport simulator UTCHEM, and laboratory experiments of surfactant-enhanced NAPL remediation in two-dimensional flow chambers.
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory
Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L.; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721
Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-380
NASA Technical Reports Server (NTRS)
Kramer, W. H.; Smith, J. J.
1978-01-01
The performance testing and analytical teardown of CF6-6D engine serial number 451-380, which was recently removed from a DC-10 aircraft, are summarized. The investigative test program was conducted inbound, prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading-edge rework and cleaning of the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Aspects discussed include the analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both specific fuel consumption and exhaust gas temperature.
Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-479
NASA Technical Reports Server (NTRS)
Kramer, W. H.; Smith, J. J.
1978-01-01
The performance testing and analytical teardown of a CF6-6D engine are summarized. This engine had completed its initial installation on a DC-10 aircraft. The investigative test program was conducted inbound, prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading-edge rework and cleaning of the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Included in this report are a detailed analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both SFC (specific fuel consumption) and EGT (exhaust gas temperature).
NASA Technical Reports Server (NTRS)
Sadunas, J. A.; French, E. P.; Sexton, H.
1973-01-01
A 1/25-scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effects of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full-scale flight data, model test data, and analytical results. The report is prepared in two volumes. The analytical predictions and their comparisons with flight data are described here, and a tabulation of the test data is provided.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory is indicated when the laboratory mean for that analyte is significantly different from the mean of the most probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods that were also in use during the study period; comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precisions for laboratory pairs are reported. (USGS)
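A minimal sketch of this kind of interlaboratory comparison follows: one-way ANOVA across laboratories analyzing identical samples, then a pairwise multiple-comparison test. SciPy has no Duncan's multiple range test, so Tukey's HSD stands in here, and the sulfate values are fabricated for illustration.

```python
import numpy as np
from scipy.stats import f_oneway, tukey_hsd

# Hypothetical sulfate results (mg/L) from three labs on identical samples.
lab_a = np.array([2.01, 1.98, 2.05, 2.02])
lab_b = np.array([2.10, 2.14, 2.08, 2.12])
lab_c = np.array([2.00, 2.03, 1.99, 2.04])

# One-way ANOVA: do the lab means differ at all?
F, p = f_oneway(lab_a, lab_b, lab_c)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Pairwise comparisons to locate which labs differ (Tukey HSD as a
# stand-in for Duncan's multiple range test).
res = tukey_hsd(lab_a, lab_b, lab_c)
print(res)
```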
Sierra/Aria 4.48 Verification Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal Fluid Development Team
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results of each test are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, the derivation of the analytic solution, and a comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or referenced as a compilation of example problems.
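A minimal sketch of what "checked under mesh refinement against the correct analytic result" typically means in practice (an assumed workflow, not Sierra's actual harness): compute the error norm on successively refined meshes and confirm the observed order of convergence.

```python
import math

# Observed order of convergence from errors on two meshes:
#   p = log(e_coarse / e_fine) / log(h_coarse / h_fine)
# which is then compared against the scheme's theoretical order.
def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# hypothetical L2 errors vs. the analytic solution for a 2nd-order scheme
e1, e2 = 4.0e-3, 1.0e-3     # errors on the h and h/2 meshes
p = observed_order(e1, e2, h_coarse=0.1, h_fine=0.05)
assert abs(p - 2.0) < 0.1, "capability fails its verification tolerance"
print(f"observed order of convergence: {p:.2f}")
```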
An Improved Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.
2000-01-01
A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
Schramm, Elisabeth; Kürten, Andreas; Hölzer, Jasper; Mitschke, Stefan; Mühlberger, Fabian; Sklorz, Martin; Wieser, Jochen; Ulrich, Andreas; Pütz, Michael; Schulte-Ladbeck, Rasmus; Schultze, Rainer; Curtius, Joachim; Borrmann, Stephan; Zimmermann, Ralf
2009-06-01
An in-house-built ion trap mass spectrometer combined with a soft ionization source has been set up and tested. As ionization source, an electron beam pumped vacuum UV (VUV) excimer lamp (EBEL) was used for single-photon ionization. It was shown that soft ionization allows the reduction of fragmentation of the target analytes and the suppression of most matrix components. Therefore, the combination of photon ionization with the tandem mass spectrometry (MS/MS) capability of an ion trap yields a powerful tool for molecular ion peak detection and identification of organic trace compounds in complex matrixes. This setup was successfully tested for two different applications. The first one is the detection of security-relevant substances like explosives, narcotics, and chemical warfare agents. One test substance from each of these groups was chosen and detected successfully with single photon ionization ion trap mass spectrometry (SPI-ITMS) MS/MS measurements. Additionally, first tests were performed, demonstrating that this method is not influenced by matrix compounds. The second field of application is the detection of process gases. Here, exhaust gas from coffee roasting was analyzed in real time, and some of its compounds were identified using MS/MS studies.
Design and Test Plans for a Non-Nuclear Fission Power System Technology Demonstration Unit
NASA Technical Reports Server (NTRS)
Mason, Lee; Palac, Donald; Gibson, Marc; Houts, Michael; Warren, John; Werner, James; Poston, David; Qualls, Arthur Lou; Radel, Ross; Harlow, Scott
2012-01-01
A joint National Aeronautics and Space Administration (NASA) and Department of Energy (DOE) team is developing concepts and technologies for affordable nuclear Fission Power Systems (FPSs) to support future exploration missions. A key deliverable is the Technology Demonstration Unit (TDU). The TDU will assemble the major elements of a notional FPS with a non-nuclear reactor simulator (Rx Sim) and demonstrate system-level performance in thermal vacuum. The Rx Sim includes an electrical resistance heat source and a liquid metal heat transport loop that simulates the reactor thermal interface and expected dynamic response. A power conversion unit (PCU) generates electric power utilizing the liquid metal heat source and rejects waste heat to a heat rejection system (HRS). The HRS includes a pumped water heat removal loop coupled to radiator panels suspended in the thermal-vacuum facility. The basic test plan is to subject the system to realistic operating conditions and gather data to evaluate performance sensitivity, control stability, and response characteristics. Upon completion of the testing, the technology is expected to satisfy the requirements for Technology Readiness Level 6 (System Demonstration in an Operational and Relevant Environment) based on the use of high-fidelity hardware and prototypic software tested under realistic conditions and correlated with analytical predictions.
Lessons learned in preparing method 29 filters for compliance testing audits.
Martz, R F; McCartney, J E; Bursey, J T; Riley, C E
2000-01-01
Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples, if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA's Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested through the regulatory agency and include spiked audit materials for EPA Method 29 (Metals Emissions from Stationary Sources), as well as for other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as to agencies performing environmental activities and conducting emission compliance tests, ERG has recently tested blank filter materials and prepared spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, filters without organic binders containing less than 1.3 µg/in.² of each of the metals to be measured are required. Risk-assessment testing imposes even stricter requirements on clean-filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see whether a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors, and within lots of filters from the same vendor, showed wide variation; this was confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. More than one compliance test has had to be repeated because of suspect metals background contamination levels. An acid cleaning step improved contamination levels, but the difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: filters for Method 29 testing should be purchased in lots as large as possible; testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing; random analysis of three filters (top, middle, and bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach; and a box of filters from a given vendor should be screened, with filters from that screened box used both for testing and as field blanks in each test scenario, to provide the level of quality assurance required for stationary source testing.
Orejas, Jaime; Pfeuffer, Kevin P; Ray, Steven J; Pisonero, Jorge; Sanz-Medel, Alfredo; Hieftje, Gary M
2014-11-01
Ambient desorption/ionization (ADI) sources coupled to mass spectrometry (MS) offer outstanding analytical features: direct analysis of real samples without sample pretreatment, combined with the selectivity and sensitivity of MS. Since ADI sources typically work in the open atmosphere, ambient conditions can affect the desorption and ionization processes. Here, the effects of internal source parameters and ambient humidity on the ionization processes of the flowing atmospheric pressure afterglow (FAPA) source are investigated. The interaction of reagent ions with a range of analytes is studied in terms of sensitivity and based upon the processes that occur in the ionization reactions. The results show that internal parameters which lead to higher gas temperatures afforded higher sensitivities, although fragmentation is also affected. In the case of humidity, only extremely dry conditions led to higher sensitivities, while fragmentation remained unaffected.
Integral-moment analysis of the BATSE gamma-ray burst intensity distribution
NASA Technical Reports Server (NTRS)
Horack, John M.; Emslie, A. Gordon
1994-01-01
We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as ⟨V/Vmax⟩. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.
Krakow conference on low emissions sources: Proceedings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pierce, B.L.; Butcher, T.A.
1995-12-31
The Krakow Conference on Low Emission Sources presented the information produced and the analytical tools developed in the first phase of the Krakow Clean Fossil Fuels and Energy Efficiency Program. This phase included: field testing to provide quantitative data on emissions and efficiencies as well as on opportunities for building energy conservation; engineering analysis to determine the costs of implementing pollution control; and incentives analysis to identify actions required to create a market for the equipment, fuels, and services needed to reduce pollution. Collectively, these Proceedings contain reports that summarize the phase-one information, present the status of energy system management in Krakow, provide information on financing pollution control projects in Krakow and elsewhere, and highlight the capabilities and technologies of Polish and American companies that are working to reduce pollution from low emission sources. It is intended that the US reader will find in these Proceedings useful results and plans for control of pollution from low emission sources that are representative of heating systems in Central and Eastern Europe. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.
Domain-Invariant Partial-Least-Squares Regression.
Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne
2018-05-11
Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.
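To illustrate the setting di-PLS addresses, here is a baseline sketch using ordinary PLS only; di-PLS itself adds a domain regularizer that aligns the source and target distributions in the latent-variable space, and is not implemented here. The synthetic "spectra", band positions, and noise levels are all assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic calibration-transfer scenario: the target domain contains an
# interferent band (overlapping the analyte band) that is absent from the
# labeled source data, so an unadapted PLS model degrades on the target.
rng = np.random.default_rng(1)
wl = np.linspace(0, 1, 50)
analyte_band = np.exp(-0.5 * ((wl - 0.40) / 0.05) ** 2)
interferent = np.exp(-0.5 * ((wl - 0.45) / 0.05) ** 2)   # overlaps analyte

y_src = rng.uniform(0, 1, 100)
X_src = np.outer(y_src, analyte_band) + rng.normal(0, 0.01, (100, 50))

y_tgt = rng.uniform(0, 1, 100)
X_tgt = (np.outer(y_tgt, analyte_band)
         + np.outer(rng.uniform(0, 1, 100), interferent)  # unseen interferent
         + rng.normal(0, 0.01, (100, 50)))

pls = PLSRegression(n_components=2).fit(X_src, y_src)
print("source R^2:", pls.score(X_src, y_src))
print("target R^2:", pls.score(X_tgt, y_tgt))  # degrades without adaptation
```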
Managing laboratory test ordering through test frequency filtering.
Janssens, Pim M W; Wasser, Gerd
2013-06-01
Modern computer systems allow limits to be set on the periods allowed for repetitive testing. We investigated a computerised system for managing potentially overly frequent laboratory testing, calculating the financial savings obtained. In consultation with hospital physicians, tests were selected for which 'spare periods' (periods during which tests are barred) might be set to control repetitive testing. The tests were selected and spare periods determined based on known analyte variations in health and disease, the variety of tissues or cells giving rise to analytes, the clinical conditions and rate of change determining analyte levels, the frequency with which doctors need information about the analytes, and the logistical needs of the clinic. The operation and acceptance of the system was explored with 23 analytes. Frequency filtering was subsequently introduced for 44 tests, each with their own spare periods. The proportion of tests barred was 0.56%, the most frequent of these being for total cholesterol, uric acid and HDL-cholesterol. The financial savings were 0.33% of the costs of all testing, with HbA1c, HDL-cholesterol and vitamin B12 yielding the largest savings. Following the introduction of the system the number of barred tests ultimately decreased, suggesting accommodation by the test requestors. Managing laboratory testing through computerised limits to prevent overly frequent testing is feasible. The savings were relatively low, but sustaining the system takes little effort, giving little reason not to apply it. The findings will serve as a basis for improving the system and may guide others in introducing similar systems.
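A minimal sketch of such a spare-period filter is shown below; the analyte names and barring windows are invented for illustration, not the values used in the study.

```python
# Sketch of a "spare period" filter for repeat test requests.
from datetime import date, timedelta

SPARE_PERIODS = {                 # hypothetical per-analyte barring windows
    "HbA1c": timedelta(days=60),
    "total cholesterol": timedelta(days=90),
    "vitamin B12": timedelta(days=180),
}

def is_barred(analyte, last_result_date, request_date):
    """Return True if a new request falls inside the analyte's spare period."""
    period = SPARE_PERIODS.get(analyte)
    if period is None or last_result_date is None:
        return False              # analyte not managed, or first request
    return request_date - last_result_date < period

# Example: an HbA1c requested 30 days after the previous one is barred.
print(is_barred("HbA1c", date(2013, 1, 1), date(2013, 1, 31)))  # True
```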
100-F Target Analyte List Development for Soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovink, R.
2012-09-18
This report documents the process used to identify source area target analytes in support of the 100-F Area remedial investigation/feasibility study (RI/FS) addendum to the Integrated 100 Area Remedial Investigation/Feasibility Study Work Plan (DOE/RL-2008-46, Rev. 0).
100-K Target Analyte List Development for Soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovink, R.
2012-09-18
This report documents the process used to identify source area target analytes in support of the 100-K Area remedial investigation/feasibility study (RI/FS) addendum to the Integrated 100 Area Remedial Investigation/Feasibility Study Work Plan (DOE/RL-2008-46, Rev. 0).
Analytical technique characterizes all trace contaminants in water
NASA Technical Reports Server (NTRS)
Foster, J. N.; Lysyj, I.; Nelson, K. H.
1967-01-01
Properly programmed combination of advanced chemical and physical analytical techniques characterize critically all trace contaminants in both the potable and waste water from the Apollo Command Module. This methodology can also be applied to the investigation of the source of water pollution.
NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1994-01-01
Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is by classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross orthogonality check. In most situations response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
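For reference, the cross-orthogonality check mentioned here is commonly written with respect to a (reduced) FEM mass matrix; the notation below is the generic textbook form, not necessarily the report's.

```latex
% Cross orthogonality between test mode i and analytical mode j,
% computed with the reduced FEM mass matrix M:
\mathrm{XOR}_{ij} = \boldsymbol{\phi}_{t,i}^{\mathsf{T}}\, \mathbf{M}\, \boldsymbol{\phi}_{a,j}
% With mass-normalized shapes, diagonal terms near 1 and off-diagonal
% terms near 0 indicate good test/analysis agreement.
```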
Post-analytical Issues in Hemostasis and Thrombosis Testing.
Favaloro, Emmanuel J; Lippi, Giuseppe
2017-01-01
Analytical concerns within hemostasis and thrombosis testing are continuously decreasing. This is essentially attributable to modern instrumentation, improvements in test performance and reliability, as well as the application of appropriate internal quality control and external quality assurance measures. Pre-analytical issues are also being dealt with in some newer instrumentation, which is able to detect hemolysis, icterus and lipemia, and, in some cases, other issues related to sample collection such as tube under-filling. Post-analytical issues are generally related to appropriate reporting and interpretation of test results, and these are the focus of the current overview, which provides a brief description of these events, as well as guidance for their prevention or minimization. In particular, we propose several strategies for improved post-analytical reporting of hemostasis assays and advise that this may provide the final opportunity to prevent serious clinical errors in diagnosis.
Beloglazova, N V; Goryacheva, I Yu; Rusanova, T Yu; Yurasov, N A; Galve, R; Marco, M-P; De Saeger, S
2010-07-05
A new rapid method which allows simultaneous one-step detection of two analytes of different nature (2,4,6-trichlorophenol (TCP) and ochratoxin A (OTA)) in red wine was developed. It was based on a column test with three separate immunolayers: two test layers and one control layer. Each layer consisted of sepharose gel with immobilized anti-OTA (OTA test layer), anti-TCP (TCP test layer) or anti-HRP (control layer) antibodies. Analytes bind to the antibodies in the corresponding test layer while the sample flows through the column. Then a mixture of OTA-HRP and TCP-HRP in appropriate dilutions was used, followed by the application of chromogenic substrate. Colour development of the test layer occurred when the corresponding analyte was absent in the sample. HRP-conjugates bound to the anti-HRP antibody in the control layer independently of the presence or absence of the analytes, and a blue colour developed in the control layer. Cut-off values for both analytes were 2 microg L(-1). The described method was applied to the simultaneous detection of TCP and OTA in wine samples. To screen the analytes in red wine samples, clean-up columns were used for sample pre-treatment in combination with the test column. Results were confirmed by chromatographic methods. Copyright 2010 Elsevier B.V. All rights reserved.
Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce
2018-05-30
Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W
2016-01-01
A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data-large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources-all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. 
The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
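The workflow described (cohort rebalancing, classification, n-fold cross-validation) can be sketched generically as below; the synthetic data, the upsampling scheme, and the AdaBoost choice are illustrative stand-ins, not the PPMI pipeline itself.

```python
# Generic sketch of rebalanced, cross-validated classification.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_validate
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))               # stand-in for imaging/clinical features
y = (rng.random(500) < 0.15).astype(int)     # imbalanced cohort (~15% cases)

# Rebalance by upsampling the minority cohort to the majority size.
# (In practice, rebalancing inside each training fold avoids leakage.)
X_min, X_maj = X[y == 1], X[y == 0]
X_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
Xb = np.vstack([X_maj, X_up])
yb = np.array([0] * len(X_maj) + [1] * len(X_up))

# n-fold cross-validated accuracy / sensitivity-style scoring.
scores = cross_validate(AdaBoostClassifier(), Xb, yb, cv=5,
                        scoring=("accuracy", "recall", "roc_auc"))
print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```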
Measurement of curium in marine samples
NASA Astrophysics Data System (ADS)
Schneider, D. L.; Livingston, H. D.
1984-06-01
Measurement of the small but detectable amounts of curium present in the environment requires reliable, accurate, and sensitive analytical methods. The radiochemical separation developed at Woods Hole is briefly reviewed with specific reference to radiochemical interferences in the alpha spectrometric measurement of curium nuclides and to the relative amounts of interferences expected in different oceanic regimes and sample types. Detection limits for 242Cm and 244Cm are ultimately limited by their presence in the 243Am used as the curium yield monitor. Environmental standard reference materials are evaluated with regard to curium. The marine literature is reviewed and curium measurements are discussed in relation to their source of introduction to the environment. Sources include ocean dumping of low-level radioactive wastes and discharges from nuclear fuel reprocessing activities. In particular, the question of a detectable presence of 244Cm in global fallout from nuclear weapons testing is addressed and shown to be essentially negligible. Analyses of Scottish coastal sediments show traces of 242Cm and 244Cm activity which are believed to originate from transport from sources in the Irish Sea.
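As background on why tracer impurity sets the floor here: for a counting measurement the detection limit is often estimated with Currie's expression, where B is the blank contribution (here, the Cm counts carried by the 243Am tracer itself). This is the standard formula, not one quoted from the paper.

```latex
% Currie detection limit (approx. 95% confidence) for counts above a
% blank contributing B counts, e.g. Cm impurity in the 243Am tracer:
L_D \approx 2.71 + 4.65\sqrt{B}
```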
An analysis of source structure effects in radio interferometry measurements
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1980-01-01
To begin a study of structure effects, this report presents a theoretical framework, proposes an effective position approach to structure corrections based on brightness distribution measurements, and analyzes examples of analytical and measured brightness distributions. Other topics include the effect of the frequency dependence of a brightness distribution on bandwidth synthesis (BWS) delay, the determination of the absolute location of a measured brightness distribution, and structure effects in dual frequency calibration of charged particle delays. For the 10 measured distributions analyzed, it was found that the structure effect in BWS delay at X-band (3.6 cm) can reach 30 cm, but typically falls in the range of 0 to 5 cm. A trial limit equation that is dependent on visibility was successfully tested against the 10 measured brightness distributions (seven sources). If the validity of this particular equation for an upper limit can be established for nearly all sources, the structure effect in BWS delay could be greatly reduced without supplementary measurements of brightness distributions.
Unwin, Ian; Jansen-van der Vliet, Martine; Westenbrink, Susanne; Presser, Karl; Infanger, Esther; Porubska, Janka; Roe, Mark; Finglas, Paul
2016-02-15
The EuroFIR Document and Data Repositories are being developed as accessible collections of source documents, including grey literature, and the food composition data reported in them. These Repositories will contain source information available to food composition database compilers when selecting their nutritional data. The Document Repository was implemented as searchable bibliographic records in the Europe PubMed Central database, which links to the documents online. The Data Repository will contain original data from source documents in the Document Repository. Testing confirmed the FoodCASE food database management system as a suitable tool for the input, documentation and quality assessment of Data Repository information. Data management requirements for the input and documentation of reported analytical results were established, including record identification and method documentation specifications. Document access and data preparation using the Repositories will provide information resources for compilers, eliminating duplicated work and supporting unambiguous referencing of data contributing to their compiled data. Copyright © 2014 Elsevier Ltd. All rights reserved.
Narrow band noise response of a Belleville spring resonator.
Lyon, Richard H
2013-09-01
This study of nonlinear dynamics includes (i) an identification of quasi-steady states of response using equivalent linearization, (ii) the temporal simulation of the system using Heun's time step procedure on time domain analytic signals, and (iii) a laboratory experiment. An attempt has been made to select material and measurement parameters so that nearly the same systems are used and analyzed for all three parts of the study. This study illustrates important features of nonlinear response to narrow band excitation: (a) states of response that the system can acquire with transitions of the system between those states, (b) the interaction between the noise source and the vibrating load in which the source transmits energy to or draws energy from the load as transitions occur; (c) the lag or lead of the system response relative to the source as transitions occur that causes the average frequencies of source and response to differ; and (d) the determination of the state of response (mass or stiffness controlled) by observation of the instantaneous phase of the influence function. These analyses take advantage of the use of time domain analytic signals that have a complementary role to functions that are analytic in the frequency domain.
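For readers unfamiliar with the integrator named here, the sketch below is a generic Heun predictor-corrector stepper applied to a complex (analytic-signal) state; the damped oscillator with a cubic term is an invented stand-in for the Belleville spring's nonlinearity, not the paper's model.

```python
# Minimal Heun (improved Euler) time stepper for x' = f(t, x).
import numpy as np

def heun(f, x0, t):
    """Integrate x' = f(t, x) with Heun's predictor-corrector scheme.
    Works for complex (analytic-signal) states as well as real ones."""
    x = np.empty((len(t),) + np.shape(x0), dtype=complex)
    x[0] = x0
    for k in range(len(t) - 1):
        h = t[k + 1] - t[k]
        k1 = f(t[k], x[k])                     # slope at start of step
        pred = x[k] + h * k1                   # Euler predictor
        k2 = f(t[k + 1], pred)                 # slope at predicted end point
        x[k + 1] = x[k] + 0.5 * h * (k1 + k2)  # trapezoidal corrector
    return x

# Example: lightly damped oscillator with a softening cubic term,
# driven by a narrow-band analytic signal exp(i*w*t).
def rhs(t, s):
    y, v = s
    return np.array([v, -0.05 * v - y + 0.1 * y**3 + np.exp(1j * 1.2 * t)])

t = np.linspace(0.0, 200.0, 20001)
traj = heun(rhs, np.array([0.0 + 0j, 0.0 + 0j]), t)
```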
Line-source excitation of realistic conformal metasurface cloaks
NASA Astrophysics Data System (ADS)
Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea
2012-11-01
Following our recently introduced analytical tools to model and design conformal mantle cloaks based on metasurfaces [Padooru et al., J. Appl. Phys. 112, 034907 (2012)], we investigate their performance and physical properties when excited by an electric line source placed in their close proximity. We consider metasurfaces formed by 2-D arrays of slotted (meshes and Jerusalem cross slots) and printed (patches and Jerusalem crosses) sub-wavelength elements. The electromagnetic scattering analysis is carried out using a rigorous analytical model, which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. It is shown that the homogenized grid-impedance expressions, originally derived for planar arrays of sub-wavelength elements and plane-wave excitation, may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks illuminated by near-field sources. Our closed-form analytical results are in good agreement with full-wave numerical simulations, up to sub-wavelength distances from the metasurface, confirming that mantle cloaks may be very effective to suppress the scattering of moderately sized objects, independent of the type of excitation and point of observation. We also discuss the dual functionality of these metasurfaces to boost radiation efficiency and directivity from confined near-field sources.
Quantifying the errors due to the superposition of analytical deformation sources
NASA Astrophysics Data System (ADS)
Neuberg, J. W.; Pascal, K.
2012-04-01
The displacement field due to magma movement in the subsurface is often modelled using a Mogi point source or a dislocation Okada source embedded in a homogeneous elastic half-space. When the magmatic system cannot be modelled by a single source, it is often represented by several sources whose respective deformation fields are superimposed. However, in such a case the assumption of homogeneity in the half-space is violated and the interaction between sources in an elastic medium is neglected. In this investigation we have quantified the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and we tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying the pressure or dislocation of the sources and their relative position. We also investigated three numerical methods to model a dike as a dislocation tensile source or as a pressurized tabular crack. We found that the discrepancies between simple superposition of the displacement field and a fully interacting numerical solution depend mostly on the source types and on their spacing. The errors induced when neglecting the source interaction are expected to vary greatly with the physical and geometrical parameters of the model. We demonstrated that for certain scenarios these discrepancies can be neglected (<5%) when the sources are separated by at least 4 radii for two combined Mogi sources and by at least 3 radii for juxtaposed Mogi and Okada sources.
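The superposition being tested can be sketched in a few lines: standard half-space Mogi surface displacements (with nu the Poisson ratio and dV the source volume change, in the form given in standard references such as Segall's textbook) are simply added for two sources, which is exactly the approximation whose error the study quantifies. The geometry and source strengths below are invented for illustration.

```python
# Surface displacements from Mogi point sources, superposed linearly.
import numpy as np

def mogi_surface(x, y, xs, ys, depth, dV, nu=0.25):
    """Radial/vertical surface displacement of one Mogi source."""
    dx, dy = x - xs, y - ys
    r = np.hypot(dx, dy)
    R3 = (r**2 + depth**2) ** 1.5
    c = (1.0 - nu) * dV / np.pi
    ur = c * r / R3                  # horizontal (radial) displacement
    uz = c * depth / R3              # vertical displacement (uplift)
    # Resolve the radial component into x/y, guarding r = 0.
    with np.errstate(invalid="ignore", divide="ignore"):
        ux = np.where(r > 0, ur * dx / r, 0.0)
        uy = np.where(r > 0, ur * dy / r, 0.0)
    return ux, uy, uz

# Two adjacent sources: fields simply added, ignoring the mechanical
# interaction between the cavities (the neglected effect in question).
x = np.linspace(-5e3, 5e3, 201); y = np.zeros_like(x)
u1 = mogi_surface(x, y, -1e3, 0.0, 2e3, 1e6)
u2 = mogi_surface(x, y, +1e3, 0.0, 3e3, -5e5)
uz_total = u1[2] + u2[2]
```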
The importance of quality control in validating concentrations ...
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods were used to determine these CECs, including six analytical methods to measure 174 pharmaceuticals, personal care products, and pesticides. A three-component quality assurance/quality control (QA/QC) program was designed for the subset of 174 CECs, which allowed us to assess and compare the performances of the methods used. The three components included: 1) a common field QA/QC protocol and sample design, 2) individual investigator-developed method-specific QA/QC protocols, and 3) a suite of 46 method comparison analytes that were determined in two or more analytical methods. Overall method performance for the 174 organic chemical CECs was assessed by comparing spiked recoveries in reagent, source, and treated water over a two-year period. In addition to the 247 CECs reported in the larger drinking water study, another 48 pharmaceutical compounds measured did not consistently meet predetermined quality standards. Methodologies that proved unsuitable for these analytes are reviewed. The need to exclude analytes based on method performance demonstrates the importance of additional QA/QC protocols. This paper compares the method performance of six analytical methods used to measure 174 emerging contaminants.
In-orbit evaluation of the control system/structural mode interactions of the OSO-8 spacecraft
NASA Technical Reports Server (NTRS)
Slafer, L. I.
1979-01-01
The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests were conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. The paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests was used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments, and have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system.
Fluorescence In Situ Hybridization Probe Validation for Clinical Use.
Gu, Jun; Smith, Janice L; Dowling, Patricia K
2017-01-01
In this chapter, we provide a systematic overview of the published guidelines and validation procedures for fluorescence in situ hybridization (FISH) probes for clinical diagnostic use. FISH probes-which are classified as molecular probes or analyte-specific reagents (ASRs)-have been extensively used in vitro for both clinical diagnosis and research. Most commercially available FISH probes in the United States are strictly regulated by the U.S. Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), the Centers for Medicare & Medicaid Services (CMS), the Clinical Laboratory Improvement Amendments (CLIA), and the College of American Pathologists (CAP). Although home-brewed FISH probes-defined as probes made in-house or acquired from a source that does not supply them to other laboratories-are not regulated by these agencies, they too must undergo the same individual validation process prior to clinical use as their commercial counterparts. Validation of a FISH probe involves initial validation and ongoing verification of the test system. Initial validation includes assessment of a probe's technical specifications, establishment of its standard operational procedure (SOP), determination of its clinical sensitivity and specificity, development of its cutoff, baseline, and normal reference ranges, gathering of analytics, confirmation of its applicability to a specific research or clinical setting, testing of samples with or without the abnormalities that the probe is meant to detect, staff training, and report building. Ongoing verification of the test system involves testing additional normal and abnormal samples using the same method employed during the initial validation of the probe.
Caustic Singularities Of High-Gain, Dual-Shaped Reflectors
NASA Technical Reports Server (NTRS)
Galindo, Victor; Veruttipong, Thavath W.; Imbriale, William A.; Rengarajan, Sambiam
1991-01-01
Report presents study of some sources of error in analysis, by geometric theory of diffraction (GTD), of performance of high-gain, dual-shaped antenna reflector. Study probes into underlying analytic causes of singularity, with view toward devising and testing practical methods to avoid problems caused by singularity. Hybrid physical optics (PO) approach used to study near-field spillover or noise-temperature characteristics of high-gain reflector antenna efficiently and accurately. Report illustrates this approach and underlying principles by presenting numerical results, for both offset and symmetrical reflector systems, computed by GTD, PO, and PO/GO methods.
Uralets, Victor; App, Mike; Rana, Sumandeep; Morgan, Stewart; Ross, Wayne
2014-03-01
2-Ethylamino-1-phenylbutane (EAPB) and 2-amino-1-phenylbutane (APB) were identified by gas chromatography-mass spectrometry in multiple urine samples submitted for stimulant drug testing that had screened positive for amphetamines by enzyme immunoassay. Forty-two samples containing both analytes were found from across the USA during a 3-month period (May-July 2013). A sports dietary supplement, 'CRAZE', was determined to be one of the sources of EAPB supply. EAPB, along with its suggested metabolite APB, was detected in a urine sample obtained from a person known to use 'CRAZE'.
Cryogenic Autogenous Pressurization Testing for Robotic Refueling Mission 3
NASA Technical Reports Server (NTRS)
Boyle, R.; DiPirro, M.; Tuttle, J.; Francis, J.; Mustafi, S.; Li, X.; Barfknecht, P.; DeLee, C. H.; McGuire, J.
2015-01-01
A wick-heater system has been selected for use to pressurize the Source Dewar of the Robotic Refueling Mission Phase 3 on-orbit cryogen transfer experiment payload for the International Space Station. Experimental results of autogenous pressurization of liquid argon and liquid nitrogen using a prototype wick-heater system are presented. The wick-heater generates gas to increase the pressure in the tank while maintaining a low bulk fluid temperature. Pressurization experiments were performed in 2013 to characterize the performance of the wick heater. This paper describes the experimental setup, pressurization results, and analytical model correlations.
Method and apparatus for simultaneous spectroelectrochemical analysis
Chatterjee, Sayandev; Bryan, Samuel A; Schroll, Cynthia A; Heineman, William R
2013-11-19
An apparatus and method of simultaneous spectroelectrochemical analysis is disclosed. A transparent surface is provided. An analyte solution on the transparent surface is contacted with a working electrode and at least one other electrode. Light from a light source is focused on either a surface of the working electrode or the analyte solution. The light reflected from either the surface of the working electrode or the analyte solution is detected. The potential of the working electrode is adjusted, and spectroscopic changes of the analyte solution that occur with changes in thermodynamic potentials are monitored.
Space simulation techniques and facilities for SAX STM test campaign
NASA Technical Reports Server (NTRS)
Giordano, Pietro; Raimondo, Giacomo; Messidoro, Piero
1994-01-01
SAX is a satellite for X-Ray astronomy. It is a major element of the overall basic Science Program of the Italian Space Agency (ASI) and is being developed with the contribution of the Netherlands Agency for Aerospace Programs (NIVR). The scientific objectives of SAX are to carry out systematic and comprehensive observations of celestial X-Ray sources over the 0.1 - 300 KeV energy range with special emphasis on spectral and timing measurements. The satellite will also monitor the X-Ray sky to investigate long-term source variability and to permit localization and study of X-Ray transients. Alenia Spazio is developing the satellite that is intended for launch in the second half of 1995 in a low, near-equatorial Earth orbit. At system level a Structural Thermal Model (STM) has been conceived to verify the environmental requirements by validating the mechanical and thermal analytical models and qualifying satellite structure and thermal control. In particular, the following tests have been carried out in Alenia Spazio, CEA/CESTA and ESTEC facilities: Modal Survey, Centrifuge, Acoustic, Sinusoidal/Random Vibration and Thermal Balance. The paper, after a short introduction of the SAX satellite, summarizes the environmental qualification program performed on the SAX STM. It presents test objectives, methodologies and relevant test configurations. Peculiar aspects of the test campaign are highlighted. Problems encountered and solutions adopted in performing the tests are described as well. Furthermore, test results are presented and assessed.
Annual banned-substance review: Analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans
2018-01-01
Several high-profile revelations concerning anti-doping rule violations over the past 12 months have outlined the importance of tackling prevailing challenges and reducing the limitations of the current anti-doping system. At this time, the necessity to enhance, expand, and improve analytical test methods in response to the substances outlined in the World Anti-Doping Agency's (WADA) Prohibited List represents an increasingly crucial task for modern sports drug-testing programs. The ability to improve analytical testing methods often relies on the expedient application of novel information regarding superior target analytes for sports drug-testing assays, drug elimination profiles, alternative test matrices, together with recent advances in instrumental developments. This annual banned-substance review evaluates literature published between October 2016 and September 2017 offering an in-depth evaluation of developments in these arenas and their potential application to substances reported in WADA's 2017 Prohibited List. Copyright © 2017 John Wiley & Sons, Ltd.
Incomplete Intelligence: Is the Information Sharing Environment an Effective Platform?
2012-09-01
Acronyms: NYFD, New York Fire Department; NYPD, New York Police Department; OLAP, On-Line Analytics Processing; OSINT, Open Source Intelligence. ... Open Source Intelligence (OSINT), from public websites, media sources, and other unclassified events and reports. Although some of these sources do not have a direct
Recognition and source memory as multivariate decision processes.
Banks, W P
2000-07-01
Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
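A toy illustration of the multivariate signal-detection idea follows: each item is a point in a 2-D "memory strength" space, and each task corresponds to a different decision axis through that space. All parameters are invented for illustration, not fitted values from the article.

```python
# Toy 2-D signal-detection model: tasks as decision axes through one space.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
new_items = rng.multivariate_normal([0.0, 0.0], np.eye(2), n)
source_A  = rng.multivariate_normal([1.0,  0.8], np.eye(2), n)
source_B  = rng.multivariate_normal([1.0, -0.8], np.eye(2), n)

def decide(points, axis, criterion):
    """Project onto a unit decision axis and compare with a criterion."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return points @ axis > criterion

# Recognition: old vs. new along the (1, 0) "item strength" axis.
hits = decide(np.vstack([source_A, source_B]), (1, 0), 0.5).mean()
fas  = decide(new_items, (1, 0), 0.5).mean()
# Source memory: A vs. B along the orthogonal (0, 1) axis, old items only.
src  = decide(source_A, (0, 1), 0.0).mean()
print(f"recognition hits {hits:.2f}, false alarms {fas:.2f}, source-A correct {src:.2f}")
```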
A higher order panel method for linearized supersonic flow
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Epton, M. A.; Johnson, F. T.; Magnus, A. E.; Rubbert, P. E.
1979-01-01
The basic integral equations of linearized supersonic theory for an advanced supersonic panel method are derived. Methods using only linearly varying source strength over each panel or only quadratic doublet strength over each panel gave good agreement with analytic solutions over cones and zero-thickness cambered wings. For three-dimensional bodies and wings of general shape, combined source and doublet panels with interior boundary conditions to eliminate the internal perturbations lead to a stable method providing good agreement with experiment. A panel system with all edges contiguous resulted from dividing the basic four-point non-planar panel into eight triangular subpanels, and the doublet strength was made continuous at all edges by a quadratic distribution over each subpanel. Superinclined panels were developed and tested on a simple nacelle and on an airplane model having engine inlets, with excellent results.
WELLHEAD ANALYTIC ELEMENT MODEL FOR WINDOWS
WhAEM2000 (wellhead analytic element model for Win 98/00/NT/XP) is a public domain, ground-water flow model designed to facilitate capture zone delineation and protection area mapping in support of the State's and Tribe's Wellhead Protection Programs (WHPP) and Source Water Asses...
Analytical performance of a bronchial genomic classifier.
Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean
2016-02-26
The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow-up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015, BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity, defined as input RNA mass; analytical specificity (i.e. potentially interfering substances) as tested on blood and genomic DNA; and assay performance studies including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer positive and cancer negative samples mixed with either blood (up to 10% input mass) or genomic DNA (up to 10% input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on a >6 unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.
Werber, D; Bernard, H
2014-02-27
Thousands of infectious food-borne disease outbreaks (FBDO) are reported annually to the European Food Safety Authority within the framework of the zoonoses Directive (2003/99/EC). Most recognised FBDO occur locally following point source exposure, but only a few are investigated using analytical epidemiological studies. In Germany, and probably also in other countries of the European Union, this seems to be particularly true for those investigated by local health authorities. Analytical studies, usually cohort studies or case–control studies, are a powerful tool to identify suspect food vehicles. Therefore, from a public health and food safety perspective, their more frequent usage is highly desirable. We have developed a small toolbox consisting of a strategic concept and a simple software tool for data entry and analysis, with the objective to increase the use of analytical studies in the investigation of local point source FBDO in Germany.
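The core calculation such a tool automates can be sketched as follows: attack rates and the risk ratio for one suspect food item in a cohort study. The counts in the example are invented, and this is a generic epidemiological formula, not code from the toolbox described.

```python
# Attack rates and risk ratio for a 2x2 cohort-study table.
import math

def risk_ratio(ill_exp, well_exp, ill_unexp, well_unexp):
    """Attack-rate ratio with a 95% CI (Wald interval on the log scale)."""
    ar_exp = ill_exp / (ill_exp + well_exp)
    ar_unexp = ill_unexp / (ill_unexp + well_unexp)
    rr = ar_exp / ar_unexp
    se = math.sqrt(1 / ill_exp - 1 / (ill_exp + well_exp)
                   + 1 / ill_unexp - 1 / (ill_unexp + well_unexp))
    lo, hi = (rr * math.exp(s * 1.96 * se) for s in (-1, 1))
    return ar_exp, ar_unexp, rr, (lo, hi)

# Example: 30 of 40 who ate the dish fell ill vs. 10 of 50 who did not.
print(risk_ratio(30, 10, 10, 40))   # risk ratio 3.75
```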
On-orbit evaluation of the control system/structural mode interactions on OSO-8
NASA Technical Reports Server (NTRS)
Slafer, L. I.
1980-01-01
The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests were conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. This paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests was used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments. The test results have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system, and also verified the approach taken to vehicle and servo ground testing.
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.
Krleza, Jasna Lenicek; Dorotic, Adrijana; Grzunov, Ana
2017-02-15
Proper standardization of laboratory testing requires assessment of performance after the tests are performed, known as the post-analytical phase. A nationwide external quality assessment (EQA) scheme implemented in Croatia in 2014 includes a questionnaire on post-analytical practices, and the present study examined laboratory responses in order to identify current post-analytical phase practices and identify areas for improvement. In four EQA exercises between September 2014 and December 2015, 145-174 medical laboratories across Croatia were surveyed using the Module 11 questionnaire on the post-analytical phase of testing. Based on their responses, the laboratories were evaluated on four quality indicators: turnaround time (TAT), critical values, interpretative comments and procedures in the event of abnormal results. Results were presented as absolute numbers and percentages. Just over half of the laboratories (56.3%) monitored TAT. Laboratories varied substantially in how they dealt with critical values. Most laboratories (65-97%) issued interpretative comments with test results. One third of medical laboratories (30.6-33.3%) issued abnormal test results without confirming them in additional testing. Our results suggest that the nationwide post-analytical EQA scheme launched in 2014 in Croatia has yet to be fully implemented. To close the gaps between existing recommendations and laboratory practice, laboratory professionals should focus on ensuring that TAT is monitored and that lists of critical values are established within laboratories. Professional bodies and institutions should focus on clarifying and harmonizing the rules for adding interpretative comments to laboratory test results and for dealing with abnormal test results, so that these practices become standardized.
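A minimal sketch of the first quality indicator, TAT monitoring, is shown below; the 60-minute target and the percentile choices are illustrative, not values from the survey.

```python
# Sketch of a turnaround-time (TAT) quality indicator.
import numpy as np

def tat_indicator(received, reported, target_minutes=60.0):
    """received/reported: parallel arrays of timestamps in minutes."""
    tat = np.asarray(reported, dtype=float) - np.asarray(received, dtype=float)
    return {
        "median_tat": float(np.median(tat)),
        "p90_tat": float(np.percentile(tat, 90)),
        "pct_over_target": float((tat > target_minutes).mean() * 100.0),
    }

print(tat_indicator([0, 5, 12, 20], [45, 80, 50, 95]))
```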
Curating and Integrating Data from Multiple Sources to Support Healthcare Analytics.
Ng, Kenney; Kakkanatt, Chris; Benigno, Michael; Thompson, Clay; Jackson, Margaret; Cahan, Amos; Zhu, Xinxin; Zhang, Ping; Huang, Paul
2015-01-01
As the volume and variety of healthcare related data continues to grow, the analysis and use of this data will increasingly depend on the ability to appropriately collect, curate and integrate disparate data from many different sources. We describe our approach to and highlight our experiences with the development of a robust data collection, curation and integration infrastructure that supports healthcare analytics. This system has been successfully applied to the processing of a variety of data types including clinical data from electronic health records and observational studies, genomic data, microbiomic data, self-reported data from surveys and self-tracked data from wearable devices from over 600 subjects. The curated data is currently being used to support healthcare analytic applications such as data visualization, patient stratification and predictive modeling.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
NASA Astrophysics Data System (ADS)
Samborski, Sylwester; Valvo, Paolo S.
2018-01-01
The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
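For orientation, the VCCT estimate of the strain energy release rate at a crack-front node is commonly written as below; this is the generic textbook form with the usual symbols, not notation taken from the paper. In the end-loaded split test the response is mode II dominated.

```latex
% One-step VCCT estimate of the mode II SERR at a crack-front node:
%   F_x  : shear force transmitted at the crack-tip node
%   du_x : relative sliding displacement of the node pair behind the tip
%   da   : length of the elements adjacent to the front
%   b    : width associated with the node along the front
G_{II} = \frac{F_x\,\Delta u_x}{2\,b\,\Delta a}
```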
Theoretical studies of tone noise from a fan rotor
NASA Technical Reports Server (NTRS)
Rao, G. V. R.; Chu, W. T.; Digumarthi, R. V.
1973-01-01
An analytical study was made of some possible rotor-alone noise sources of dipole, quadrupole and monopole character which generate discrete tone noise. Particular emphasis is given to the tone noise caused by fan inlet flow distortion and turbulence. Analytical models are developed to allow prediction of absolute levels. Experimental data measured on a small-scale fan are presented which indicate that inlet turbulence interaction with a fan rotor can be a source of tone noise. Predicted and measured tone noise for the small-scale rotor are shown to be in reasonable agreement.
Analytic model for low-frequency noise in nanorod devices.
Lee, Jungil; Yu, Byung Yong; Han, Ilki; Choi, Kyoung Jin; Ghibaudo, Gerard
2008-10-01
In this work, an analytic model for the generation of excess low-frequency noise in nanorod devices such as field-effect transistors is developed. In back-gate field-effect transistors, where most of the surface area of the nanorod is exposed to the ambient, the surface states could be the major source of low-frequency or 1/f noise via random walk of electrons. In dual-gate transistors, the interface states and oxide traps can compete with each other as the main noise source via random walk and tunneling, respectively.
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests were performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/589,510). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90). Analytical errors are embedded within our total process setup, especially the pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified.
Wang, Huiyong; Bussy, Ugo; Chung-Davidson, Yu-Wen; Li, Weiming
2016-01-15
This study aims to provide a rapid, sensitive and precise UPLC-MS/MS method for target steroid quantitation in biological matrices. We developed and validated a UPLC-MS/MS method to simultaneously determine 16 steroids in plasma and tissue samples. The Electrospray Ionization (ESI) and Atmospheric Pressure Chemical Ionization (APCI) sources were compared in this study by testing their mass spectrometric performance under the same chromatographic conditions, and the ESI source was found to be up to five times more sensitive than APCI. Different sample preparation techniques were investigated for an optimal extraction of steroids from the biological matrices. The developed method exhibited excellent linearity for all analytes, with regression coefficients higher than 0.99 over broad concentration ranges. The limit of detection (LOD) was from 0.003 to 0.1 ng/mL. The method was validated according to FDA guidance and applied to determine steroids in sea lamprey plasma and tissues (fat and testes). Copyright © 2015. Published by Elsevier B.V.
Weisshaupt, Petra; Pritzkow, Wolfgang; Noll, Matthias
2012-01-01
Four full-factorial 2(3) experimental plans were applied to evaluate the nitrogen (N) sources of Oligoporus placenta and Trametes versicolor and their interaction with the atmospheric N(2)-assimilating bacterium Beijerinckia acida. The effects of N from peptone, of sapwood and of N from gaseous N(2) on fungal, bacterial and fungal-bacterial activity were investigated. The activities were determined by quantification of biomass, formation of CO(2), consumption of O(2) and laccase activity. The significance of each effect was tested using a t-test. The activity of both fungi was enhanced by peptone rather than sapwood or gaseous N(2). Nevertheless, comparative studies under an N(2)-free gas mixture as well as under air revealed that the presence of N(2) affected bacterial growth and bacterial-fungal cocultivations. Elemental analysis isotope ratio mass spectrometry (IRMS) of the bacterial and fungal biomass enabled estimation of N transfer and underlined gaseous N(2) as requisite for fungal-bacterial interactions. Combining full-factorial experimental plans with an analytical set-up comprising gas chromatography, IRMS and enzymatic activity allowed synergistic effects to be revealed, fungal N sources to be traced, and symbiotic fungal-bacterial interactions to be investigated. Copyright © 2011 British Mycological Society. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Claycomb, James Ronald
1998-10-01
Several High-Tc Superconducting (HTS) eddy current probes have been developed for applications in electromagnetic nondestructive evaluation (NDE) of conducting materials. The probes utilize high-Tc Superconducting QUantum Interference Device (SQUID) magnetometers to detect the fields produced by the perturbation of induced eddy currents resulting from subsurface flaws. Localized HTS shields are incorporated to selectively screen out environmental electromagnetic interference and enable movement of the instrument in the Earth's magnetic field. High permeability magnetic shields are employed to focus flux into, and thereby increase the eddy current density in, the metallic test samples. NDE test results are presented, in which machined flaws in aluminum alloy are detected by probes of different design. A novel current injection technique for performing NDE of wires using SQUIDs is also discussed. The HTS and high permeability shields are designed based on analytical and numerical finite element method (FEM) calculations presented here. Superconducting and high permeability magnetic shields are modeled in uniform noise fields and in the presence of dipole fields characteristic of flaw signals. Several shield designs are characterized in terms of (1) their ability to screen out uniform background noise fields; (2) the resultant improvement in signal-to-noise ratio; and (3) the extent to which dipole source fields are distorted. An analysis of eddy current induction is then presented for low frequency SQUID NDE. Analytical expressions are developed for the induced eddy currents and resulting magnetic fields produced by excitation sources above conducting plates of varying thickness. The expressions derived here are used to model the SQUID's response to material thinning. An analytical defect model is also developed, taking into account the attenuation of the defect field through the conducting material, as well as the current flow around the edges of the flaw. Time harmonic FEM calculations are then used to model the electromagnetic response of eight probe designs, consisting of an eddy current drive coil coupled to a SQUID surrounded by superconducting and/or high permeability magnetic shielding. Simulations are carried out with the eddy current probes located a finite distance above a conducting surface. Results are quantified in terms of shielding and focus factors for each probe design.
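As a reminder of the length scale governing such low-frequency eddy current analyses (a standard result, not a formula quoted from the dissertation), the field of a time-harmonic source decays inside a conductor over the skin depth:

```latex
% Skin depth in a conductor of conductivity sigma and relative
% permeability mu_r, at angular frequency omega:
\delta = \sqrt{\frac{2}{\mu_0\,\mu_r\,\sigma\,\omega}}
```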
NHEXAS PHASE I ARIZONA STUDY--METALS IN WATER ANALYTICAL RESULTS
The Metals in Water data set contains analytical results for measurements of up to 11 metals in 314 water samples over 211 households. Sample collection was undertaken at the tap and any additional drinking water source used extensively within each residence. The primary metals...
Denitrification is a significant process for the removal of nitrate transported in groundwater drainage from agricultural watersheds. In this paper analytical solutions are developed for advective-reactive and nonpoint-source contaminant transport in a two-layer unconfined aquife...
100-N Area Decision Unit Target Analyte List Development for Soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovink, R.
2012-09-18
This report documents the process used to identify source area target analytes in support of the 100-N Area remedial investigation/feasibility study (RI/FS) addendum to the Integrated 100 Area Remedial Investigation/Feasibility Study Work Plan (DOE/RL-2008-46, Rev. 0).
Turbomachinery noise studies of the AiResearch QCGAT engine with inflow control
NASA Technical Reports Server (NTRS)
Mcardle, J. G.; Homyak, L.; Chrulski, D. D.
1981-01-01
The AiResearch Quiet Clean General Aviation Turbofan engine was tested on an outdoor test stand to compare the acoustic performance of two inflow control devices (ICD's) of similar design, and three inlet lips of different external shape. Only small performance differences were found. Far-field directivity patterns calculated by applicable existing analyses were compared with the measured tone and broadband patterns. For some of these comparisons, tests were made with an ICD to reduce rotor/inflow disturbance interaction noise, or with the acoustic suppression panels in the inlet or bypass duct covered with aluminum tape to determine hard wall acoustic performance. The comparisons showed that the analytical expressions used predict many directivity pattern features and trends, but can deviate in shape from the measured patterns under certain engine operating conditions. Some patterns showed lobes from modes attributable to rotor/engine strut interaction sources.
A preliminary study of a cryogenic equivalence principle experiment on Shuttle
NASA Technical Reports Server (NTRS)
Everitt, C. W. F.; Worden, P. W., Jr.
1985-01-01
The Weak Equivalence Principle is the hypothesis that all test bodies fall with the same acceleration in the same gravitational field. The current limit on violations of the Weak Equivalence Principle, measured by the ratio of the difference in acceleration of two test masses to their average acceleration, is about 3 parts in one hundred billion. It is anticipated that this can be improved in a shuttle experiment to a part in one quadrillion. Topics covered include: (1) studies of the shuttle environment, including interference with the experiment, interfacing to the experiment, and possible alternatives; (2) numerical simulations of the proposed experiment, including analytic solutions for special cases of the mass motion and preliminary estimates of sensitivity and time required; (3) error analysis of several noise sources such as thermal distortion, gas and radiation pressure effects, and mechanical distortion; and (4) development and performance tests of a laboratory version of the instrument.
NASA Astrophysics Data System (ADS)
Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang
2009-02-01
We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem, applied to the same data set. The analytical method is more exact, but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign, which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China, where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.
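For intuition, the toy sketch below solves a small version of the same Bayesian problem two ways: the closed-form (analytical) estimate, and gradient descent on the cost function, which is the computation an adjoint model makes affordable when the state is too large to form matrices explicitly. Every matrix and number here is a stand-in, not the GEOS-Chem/MOPITT configuration.

```python
# Sketch: one Bayesian inverse problem, solved analytically and by
# gradient descent on the cost function J(x). Toy stand-in numbers.
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 12                      # n sources, m observations (toy sizes)
H = rng.normal(size=(m, n))       # linearized observation operator
x_true = np.array([1.5, 0.7, 1.2, 0.9])
y = H @ x_true + 0.05 * rng.normal(size=m)

xa = np.ones(n)                   # a priori source scaling factors
Sa_inv = np.eye(n) / 0.5**2       # inverse prior error covariance
So_inv = np.eye(m) / 0.05**2      # inverse observation error covariance

# Analytical maximum a posteriori solution:
A = H.T @ So_inv @ H + Sa_inv
x_analytic = np.linalg.solve(A, H.T @ So_inv @ y + Sa_inv @ xa)

# Gradient descent on J(x); the term H^T So^-1 (Hx - y) is what the
# adjoint of the transport model computes without forming H explicitly.
x = xa.copy()
for _ in range(2000):
    grad = H.T @ So_inv @ (H @ x - y) + Sa_inv @ (x - xa)
    x -= 1e-4 * grad

print("analytic:             ", np.round(x_analytic, 3))
print("adjoint-style descent:", np.round(x, 3))
```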
Downscattering due to Wind Outflows in Compact X-ray Sources: Theory and Interpretation
NASA Technical Reports Server (NTRS)
Titarchuk, Lev; Shrader, Chris
2004-01-01
A number of recent lines of evidence point towards the presence of hot, outflowing plasma from the central regions of compact Galactic and extragalactic X-ray sources. Additionally, it has long been noted that many of these sources exhibit an "excess" continuum component, above approx. 10 keV, usually attributed to Compton reflection from a static medium. Motivated by these facts, as well as by recent observational constraints on the Compton reflection models - specifically, apparently discrepant variability timescales for line and continuum components in some cases - we consider possible effects of outflowing plasma on the high-energy continuum spectra of accretion-powered compact objects. We present a general formulation for photon downscattering diffusion which includes recoil and Comptonization effects due to divergence of the flow. We then develop an analytical theory for the spectral formation in such systems that allows us to derive formulae for the emergent spectrum. Finally, we perform analytical model fitting on several Galactic X-ray binaries. Objects which have been modeled with high-covering-fraction Compton reflectors, such as GS1353-64, are included in our analysis. In addition, Cyg X-3, which is widely believed to be characterized by dense circumstellar winds with temperatures of order 10⁶ K, provides an interesting test case. Data from INTEGRAL and RXTE covering the approx. 3-300 keV range are used in our analysis. We further consider the possibility that the widely noted distortion of the power-law continuum above 10 keV may in some cases be explained by these spectral softening effects.
Li, Ming; Hu, Bin; Li, Jianqiang; Chen, Rong; Zhang, Xie; Chen, Huanwen
2009-09-15
A novel homemade nanoextractive electrospray ionization (nanoEESI) source has been characterized for in situ mass spectrometric analysis of ambient samples without sample pretreatment. The primary ions generated using a nanospray emitter interact with the neutral sample plume created by manually nebulizing liquid samples, allowing production of the analyte ions in the spatial cross section of the nanoEESI source. The performance of nanoEESI is experimentally investigated by coupling the nanoEESI source to a commercial LTQ mass spectrometer for rapid analysis of various ambient samples using positive/negative ion detection modes. Compounds of interest in actual samples such as aerosol drug preparations, beverages, milk suspensions, farmland water, and groundwater were unambiguously detected using tandem nanoEESI ion trap mass spectrometry. The limit of detection was at low picogram-per-milliliter levels for the compounds tested. Acceptable relative standard deviation (RSD) values (5-10%) were obtained for direct measurement of analytes in complex matrixes, providing linear dynamic signal responses using manual sample introduction. A single sample analysis was completed within 1.2 s. Requiring no sheath gas for either primary ion production or neutral sample introduction, the nanoEESI has advantages including readiness for miniaturization and integration, simple maintenance, easy operation, and low cost. The experimental data demonstrate that the nanoEESI is a promising tool for high-throughput, sensitive, quantitative, in situ analysis of ambient complex samples, showing potential applications for in situ analysis in multiple disciplines including but not limited to pharmaceutical analysis, food quality control, pesticide residue detection, and homeland security.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROCEDURES Equipment, Measurement Instruments, Fuel, and Analytical Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and... specifications for fuels, engine fluids, and analytical gases; these specifications apply for testing under this...
MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION
Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...
Najat, Dereen
2017-01-01
Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The proportion of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
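A minimal sketch of the two-proportion z-test of the kind used to compare error frequencies between labs; the counts below are hypothetical.

```python
# Sketch: pooled two-proportion z-test for comparing rejection rates
# between two labs (hypothetical counts).
import math

def two_proportion_z(x1, n1, x2, n2):
    """Compare rates x1/n1 vs x2/n2 with a pooled z statistic."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p_two_sided

# e.g. 54 hemolyzed samples of 600 in lab A vs 31 of 550 in lab B:
z, p = two_proportion_z(54, 600, 31, 550)
print(f"z = {z:.2f}, p = {p:.4f}")
```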
Laboratory evaluation of the pointing stability of the ASPS Vernier System
NASA Technical Reports Server (NTRS)
1980-01-01
The annular suspension and pointing system (ASPS) is an end-mount experiment pointing system designed for use in the space shuttle. The results of the ASPS Vernier System (AVS) pointing stability tests conducted in a laboratory environment are documented. A simulated zero-G suspension was used to support the test payload in the laboratory. The AVS and the suspension were modelled and incorporated into a simulation of the laboratory test. Error sources were identified and pointing stability sensitivities were determined via simulation. Statistical predictions of laboratory test performance were derived and compared to actual laboratory test results. The predicted mean pointing stability during simulated shuttle disturbances was 1.22 arc seconds; the actual mean laboratory test pointing stability was 1.36 arc seconds. The successful prediction of laboratory test results provides increased confidence in the analytical understanding of the AVS magnetic bearing technology and allows confident prediction of in-flight performance. Computer simulations of ASPS, operating in the shuttle disturbance environment, predict in-flight pointing stability errors less than 0.01 arc seconds.
Applications of nuclear analytical techniques to environmental studies
NASA Astrophysics Data System (ADS)
Freitas, M. C.; Pacheco, A. M. G.; Marques, A. P.; Barros, L. I. C.; Reis, M. A.
2001-07-01
A few examples of application of nuclear-analytical techniques to biological monitors—natives and transplants—are given herein. Parmelia sulcata Taylor transplants were set up in a heavily industrialized area of Portugal—the Setúbal peninsula, about 50 km south of Lisbon—where indigenous lichens are rare. The whole area was 10×15 km around an oil-fired power station, and a 2.5×2.5 km grid was used. In north-western Portugal, native thalli of the same epiphytes (Parmelia spp., mostly Parmelia sulcata Taylor) and bark from olive trees (Olea europaea) were sampled across an area of 50×50 km, using a 10×10 km grid. This area is densely populated and features a blend of rural, urban-industrial and coastal environments, together with the country's second-largest metro area (Porto). All biomonitors have been analyzed by INAA and PIXE. Results were put through nonparametric tests and factor analysis for trend significance and emission sources, respectively.
National Transonic Facility model and model support vibration problems
NASA Technical Reports Server (NTRS)
Young, Clarence P., Jr.; Popernack, Thomas G., Jr.; Gloss, Blair B.
1990-01-01
Vibrations of models and the model support system were encountered during testing in the National Transonic Facility. Model support system yaw-plane vibrations have resulted in model strain-gage balance design load limits being reached. These high vibration levels resulted in limited aerodynamic testing for several wind tunnel models. The yaw vibration problem was the subject of an intensive experimental and analytical investigation which identified the primary source of the yaw excitation and resulted in attenuation of the yaw oscillations to acceptable levels. This paper presents the principal results of the analyses and experimental investigation of the yaw-plane vibration problems. An overview of plans for development and installation of a permanent model system dynamic and aeroelastic response measurement and monitoring system for the National Transonic Facility is also presented.
Endocrine Disruptors and Asthma-Associated Chemicals in Consumer Products
Nishioka, Marcia; Standley, Laurel J.; Perovich, Laura J.; Brody, Julia Green; Rudel, Ruthann A.
2012-01-01
Background: Laboratory and human studies raise concerns about endocrine disruption and asthma resulting from exposure to chemicals in consumer products. Limited labeling or testing information is available to evaluate products as exposure sources. Objectives: We analytically quantified endocrine disruptors and asthma-related chemicals in a range of cosmetics, personal care products, cleaners, sunscreens, and vinyl products. We also evaluated whether product labels provide information that can be used to select products without these chemicals. Methods: We selected 213 commercial products representing 50 product types. We tested 42 composited samples of high-market-share products, and we tested 43 alternative products identified using criteria expected to minimize target compounds. Analytes included parabens, phthalates, bisphenol A (BPA), triclosan, ethanolamines, alkylphenols, fragrances, glycol ethers, cyclosiloxanes, and ultraviolet (UV) filters. Results: We detected 55 compounds, indicating a wide range of exposures from common products. Vinyl products contained > 10% bis(2-ethylhexyl) phthalate (DEHP) and could be an important source of DEHP in homes. In other products, the highest concentrations and numbers of detects were in the fragranced products (e.g., perfume, air fresheners, and dryer sheets) and in sunscreens. Some products that did not contain the well-known endocrine-disrupting phthalates contained other less-studied phthalates (dicyclohexyl phthalate, diisononyl phthalate, and di-n-propyl phthalate; also endocrine-disrupting compounds), suggesting a substitution. Many detected chemicals were not listed on product labels. Conclusions: Common products contain complex mixtures of EDCs and asthma-related compounds. Toxicological studies of these mixtures are needed to understand their biological activity. Regarding epidemiology, our findings raise concern about potential confounding from co-occurring chemicals and misclassification due to variability in product composition. Consumers should be able to avoid some target chemicals—synthetic fragrances, BPA, and regulated active ingredients—using purchasing criteria. More complete product labeling would enable consumers to avoid the rest of the target chemicals. PMID:22398195
42 CFR 493.1289 - Standard: Analytic systems quality assessment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a) The... through 493.1283. (b) The analytic systems quality assessment must include a review of the effectiveness...
Downward continuation of airborne gravity data by means of the change of boundary approach
NASA Astrophysics Data System (ADS)
Mansi, A. H.; Capponi, M.; Sampietro, D.
2018-03-01
Within the modelling of gravity data, a common practice is the upward/downward continuation of the signal, i.e. the process of continuing the gravitational signal in the vertical direction away from or closer to the sources, respectively. The gravity field, being a potential field, satisfies Laplace's equation outside the masses, which means that this analytical continuation can be performed unambiguously only in a source-free domain. The analytical continuation problem has been solved both in the space and spectral domains by exploiting different algorithms. As is well known, the downward continuation operator, unlike the upward one, is unstable, owing to spectral characteristics similar to those of a high-pass filter, and several regularization methods have been proposed to stabilize it. In this work, an iterative procedure to downward/upward continue gravity field observations acquired at different altitudes is proposed. The methodology is based on the change of boundary principle and has been expressly conceived for aerogravimetric observations for geophysical exploration purposes. Within this field of application, several simplifications can usually be applied, basically due to the specific characteristics of airborne surveys, which are usually flown at almost constant altitude as close as possible to the terrain. For instance, these characteristics, as shown in the present work, allow the downward continuation to be performed without any regularization. The performance of the proposed methodology has been evaluated by means of a numerical test on real data acquired in the South of Australia. The test shows that it is possible to move the aerogravimetric data, acquired along tracks with a maximum height difference of about 250 m, with accuracies of the order of 10^{-3} mGal.
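A minimal spectral-domain sketch of the continuation operator shows why downward continuation is unstable in general and why small continuation distances help: the exp(+k·dz) factor amplifies high wavenumbers. The profile and spacing below are synthetic, not the Australian survey data.

```python
# Sketch: upward/downward continuation of a 1-D gravity profile in the
# wavenumber domain via the exp(-k*dz) operator (synthetic profile).
import numpy as np

dx, n = 100.0, 512                        # 100 m sampling, illustrative
x = np.arange(n) * dx
g = np.exp(-((x - 25.6e3) / 3e3) ** 2)    # synthetic anomaly (mGal)

k = np.abs(np.fft.fftfreq(n, d=dx)) * 2 * np.pi   # |k| in rad/m

def continue_field(g, dz):
    """dz > 0: upward (smoothing); dz < 0: downward (amplifying)."""
    return np.real(np.fft.ifft(np.fft.fft(g) * np.exp(-k * dz)))

g_up = continue_field(g, +250.0)          # continue 250 m upward
g_back = continue_field(g_up, -250.0)     # ...and back down again
print("max round-trip error:", np.max(np.abs(g_back - g)))
# With noisy data, the same exp(+k*250) factor would blow up the noise;
# small continuation distances keep that amplification bounded.
```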
Gidwani, Kamlesh; Huhtinen, Kaisa; Kekki, Henna; van Vliet, Sandra; Hynninen, Johanna; Koivuviita, Niina; Perheentupa, Antti; Poutanen, Matti; Auranen, Annika; Grenman, Seija; Lamminmäki, Urpo; Carpen, Olli; van Kooyk, Yvette; Pettersson, Kim
2016-10-01
Measurement of serum cancer antigen 125 (CA125) is the standard approach for epithelial ovarian cancer (EOC) diagnostics and follow-up. However, the clinical specificity is not optimal because increased values are also detected in healthy controls and in benign diseases. CA125 is known to be differentially glycosylated in EOC, potentially offering a way to construct CA125 assays with improved cancer specificity. Our goal was to identify carbohydrate-reactive lectins for discriminating between CA125 originating from EOC and noncancerous sources. CA125 from the OVCAR-3 cancer cell line, placental homogenate, and ascites fluid from patients with cirrhosis was captured on anti-CA125 antibody immobilized on microtitration wells. A panel of lectins, each coated onto fluorescent europium-chelate-doped 97-nm nanoparticles (Eu³⁺-NPs), was tested for detection of the immobilized CA125. Serum samples from patients with high-grade serous EOC or endometriosis and from healthy controls were analyzed. By using macrophage galactose-type lectin (MGL)-coated Eu³⁺-NPs, an analytically sensitive CA125 assay (CA125-MGL) was achieved that specifically recognized the CA125 isoform produced by EOC, whereas the recognition of CA125 from nonmalignant conditions was reduced. Serum CA125-MGL measurement discriminated patients with EOC from those with endometriosis better than the conventional immunoassay. The discrimination was particularly improved for marginally increased CA125 values and for earlier detection of EOC progression. The new CA125-MGL assay concept could help reduce the false-positive rates of conventional CA125 immunoassays. The improved analytical specificity of this test approach depends on a discriminating lectin immobilized in large numbers on Eu³⁺-NPs, providing both an avidity effect and signal amplification. © 2016 American Association for Clinical Chemistry.
NASA Astrophysics Data System (ADS)
Verginelli, Iason; Capobianco, Oriana; Hartog, Niels; Baciocchi, Renato
2017-02-01
In this work we introduce a 1-D analytical solution that can be used for the design of horizontal permeable reactive barriers (HPRBs) as a vapor mitigation system at sites contaminated by chlorinated solvents. The developed model incorporates transient diffusion-dominated transport with a second-order reaction rate constant. Furthermore, the model accounts for the HPRB lifetime as a function of the oxidant consumption by reaction with upward vapors and its progressive dissolution and leaching by infiltrating water. Simulation results from this new model closely replicate previous lab-scale tests carried out on trichloroethylene (TCE) using a HPRB containing a mixture of potassium permanganate, water and sand. In view of field applications, design criteria, in terms of the minimum HPRB thickness required to attenuate vapors to acceptable risk-based levels and the expected HPRB lifetime, are determined from site-specific conditions such as vapor source concentration, water infiltration rate and HPRB mixture. The results clearly show the field-scale feasibility of this alternative vapor mitigation system for the treatment of chlorinated solvents. Depending on the oxidation kinetics of the target contaminant, a 1 m thick HPRB can ensure attenuation of vapor concentrations by orders of magnitude for up to 20 years, even for vapor source concentrations up to 10 g/m³. A demonstrative application for representative contaminated-site conditions also shows the feasibility of this mitigation system from an economic point of view, with capital costs potentially somewhat lower than those of other remediation options, such as soil vapor extraction systems. Overall, based on the experimental and theoretical evaluation thus far, field-scale tests are warranted to verify the potential and cost-effectiveness of HPRBs for vapor mitigation control under various conditions of application.
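As a rough illustration of the thickness scaling, the sketch below uses steady diffusion with pseudo first-order consumption, a deliberate simplification of the paper's transient second-order model; attenuation across a thick reactive layer then goes as exp(-L·sqrt(k/D)). The diffusivity and rate constant are illustrative assumptions, not fitted values.

```python
# Sketch: steady diffusion with pseudo first-order consumption through
# a reactive layer, a simplified stand-in for the paper's transient
# second-order model (all constants illustrative).
import numpy as np

D = 1.0e-6   # effective vapor diffusivity in the barrier, m^2/s (assumed)
k = 1.0e-4   # pseudo first-order oxidation rate, 1/s (assumed)

def attenuation(L):
    """C(L)/C(0) ~ exp(-L * sqrt(k/D)) for a thick reactive layer."""
    return np.exp(-L * np.sqrt(k / D))

for L in (0.25, 0.5, 1.0):   # barrier thickness in meters
    print(f"L = {L:4.2f} m -> attenuation factor ~ {attenuation(L):.1e}")
```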
Analytic tests and their relation to jet fuel thermal stability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heneghan, S.P.; Kauffman, R.E.
1995-05-01
The evaluation of jet fuel thermal stability (TS) by simple analytic procedures has long been a goal of fuels chemists. The reason is obvious: if the analytic chemist can determine which types of material cause a test to respond, the refiners will know which materials to remove to improve stability. Complicating this quest is the lack of an acceptable quantitative TS test with which to compare any analytic procedures. To circumvent this problem, we recently compiled the results of TS tests for 12 fuels using six separate test procedures. The results, covering a range of flow and temperature conditions, show that TS is not as dependent on test conditions as previously thought. Also, comparing the results from these tests with several analytic procedures shows that either a measure of the number of phenols or the total sulfur present in jet fuels is strongly indicative of the TS. The phenols have been measured using a cyclic voltammetry technique and the polar material by gas chromatography (atomic emission detection) following a solid-phase extraction on silica gel. The polar material has been identified as mainly phenols (by mass spectrometry identification). Measures of the total acid number or peroxide concentration have little correlation with TS.
An Analytical Framework for Evaluating E-Commerce Business Models and Strategies.
ERIC Educational Resources Information Center
Lee, Chung-Shing
2001-01-01
Considers electronic commerce as a paradigm shift, or a disruptive innovation, and presents an analytical framework based on the theories of transaction costs and switching costs. Topics include business transformation process; scale effect; scope effect; new sources of revenue; and e-commerce value creation model and strategy. (LRW)
U.S.-MEXICO BORDER PROGRAM ARIZONA BORDER STUDY--METALS IN WATER ANALYTICAL RESULTS
The Metals in Water data set contains analytical results for measurements of up to 11 metals in 98 water samples over 61 households. Sample collection was undertaken at the tap and any additional drinking water source used extensively within each residence. The primary metals o...
Internal Standards: A Source of Analytical Bias For Volatile Organic Analyte Determinations
The use of internal standards in the determination of volatile organic compounds as described in SW-846 Method 8260C introduces a potential for bias in results once the internal standards (ISTDs) are added to a sample for analysis. The bias is relative to the dissimilarity betw...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Youn, H; Jeon, H; Nam, J
Purpose: To investigate the feasibility of an analytic framework to estimate patients' absorbed dose distributions from daily cone-beam CT scans for image-guided radiation treatment. Methods: To compute the total absorbed dose distribution, we separated the framework into primary and scattered dose calculations. Using source parameters such as voltage, current, and bowtie filtration, for the primary dose calculation we simulated the forward projection from the source to each voxel of an imaging object including some inhomogeneous inserts. We then calculated the primary absorbed dose at each voxel based on the absorption probability deduced from the HU values and Beer's law. In sequence, all voxels constituting the phantom were regarded as secondary sources radiating scattered photons for the scattered dose calculation. Details of the forward projection were identical to those of the previous step. The secondary source intensities were given by using scatter-to-primary ratios provided by NIST. In addition, we compared the analytically calculated dose distribution with Monte Carlo simulation results. Results: The suggested framework for absorbed dose estimation successfully provided the primary and secondary dose distributions of the phantom. Moreover, our analytic dose calculations and the Monte Carlo calculations agreed well with each other, even near the inhomogeneous inserts. Conclusion: This work indicated that our framework can be an effective monitor of a patient's exposure from cone-beam CT scans for image-guided radiation treatment. Therefore, we expect that patient over-exposure during IGRT might be prevented by our framework.
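The core primary-dose step, Beer-Lambert attenuation of a ray through voxels, can be sketched as follows; the attenuation coefficients and path lengths are hypothetical, not values from the abstract.

```python
# Sketch: Beer-Lambert attenuation of a primary ray through voxels, the
# core step of an analytic primary-dose estimate (values hypothetical).
import numpy as np

mu = np.array([0.02, 0.02, 0.25, 0.25, 0.02])  # attenuation coeff. (1/mm)
dl = 2.0                                        # path length per voxel (mm)

# Transmitted fraction after each voxel, I/I0 = exp(-sum(mu_i * dl)):
transmission = np.exp(-np.cumsum(mu * dl))
# Energy deposited in voxel i is proportional to the fluence entering it
# times its absorption probability (1 - exp(-mu_i * dl)):
entering = np.concatenate(([1.0], transmission[:-1]))
absorbed = entering * (1.0 - np.exp(-mu * dl))
print("deposited fraction per voxel:", np.round(absorbed, 4))
```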
Sensor for detecting and differentiating chemical analytes
Yi, Dechang [Metuchen, NJ; Senesac, Lawrence R [Knoxville, TN; Thundat, Thomas G [Knoxville, TN
2011-07-05
A sensor for detecting and differentiating chemical analytes includes a microscale body having a first end and a second end and a surface between the ends for adsorbing a chemical analyte. The surface includes at least one conductive heating track for heating the chemical analyte and also a conductive response track, which is electrically isolated from the heating track, for producing a thermal response signal from the chemical analyte. The heating track is electrically connected with a voltage source and the response track is electrically connected with a signal recorder. The microscale body is restrained at the first end and the second end and is substantially isolated from its surroundings therebetween, thus having a bridge configuration.
Distributed data networks: a blueprint for Big Data sharing and healthcare analytics.
Popovic, Jennifer R
2017-01-01
This paper defines the attributes of distributed data networks and outlines the data and analytic infrastructure needed to build and maintain a successful network. We use examples from one successful implementation of a large-scale, multisite, healthcare-related distributed data network, the U.S. Food and Drug Administration-sponsored Sentinel Initiative. Analytic infrastructure-development concepts are discussed from the perspective of promoting six pillars of analytic infrastructure: consistency, reusability, flexibility, scalability, transparency, and reproducibility. This paper also introduces one use case for machine learning algorithm development to fully utilize and advance the portfolio of population health analytics, particularly those using multisite administrative data sources. © 2016 New York Academy of Sciences.
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
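A sketch of the analytic quantity such derivations build on: Fisher test information under a 2PL IRT model, whose inverse square root is the standard error of the ability estimate. The item parameters below are made up for illustration.

```python
# Sketch: analytic standard error of a theta estimate from Fisher test
# information under a 2PL IRT model (hypothetical item parameters).
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (made up)
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.2])  # difficulties (made up)

def test_information(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL response probability
    return np.sum(a**2 * p * (1.0 - p))         # sum of item informations

for theta in (-1.0, 0.0, 1.0):
    info = test_information(theta)
    print(f"theta = {theta:+.1f}: I = {info:.2f}, SE = {1/np.sqrt(info):.3f}")
```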
CERTS Microgrid Laboratory Test Bed - PIER Final Project Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eto, Joseph H.; Eto, Joseph H.; Lasseter, Robert
2008-07-25
The objective of the CERTS Microgrid Laboratory Test Bed project was to enhance the ease of integrating small energy sources into a microgrid. The project accomplished this objective by developing and demonstrating three advanced techniques, collectively referred to as the CERTS Microgrid concept, that significantly reduce the level of custom field engineering needed to operate microgrids consisting of small generating sources. The techniques comprising the CERTS Microgrid concept are: 1) a method for effecting automatic and seamless transitions between grid-connected and islanded modes of operation; 2) an approach to electrical protection within the microgrid that does not depend on high fault currents; and 3) a method for microgrid control that achieves voltage and frequency stability under islanded conditions without requiring high-speed communications. The techniques were demonstrated at a full-scale test bed built near Columbus, Ohio and operated by American Electric Power. The testing fully confirmed earlier research that had been conducted initially through analytical simulations, then through laboratory emulations, and finally through factory acceptance testing of individual microgrid components. The islanding and resynchronization method met all Institute of Electrical and Electronics Engineers 1547 and power quality requirements. The electrical protection system was able to distinguish between normal and faulted operation. The controls were found to be robust under all conditions, including difficult motor starts. The results from these tests are expected to lead to additional testing of enhancements to the basic techniques at the test bed to improve the business case for microgrid technologies, as well as to field demonstrations involving microgrids that employ one or more of the CERTS Microgrid concepts.
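Communication-free voltage and frequency stability in islanded microgrids is conventionally achieved with droop control; the sketch below is a generic droop illustration with made-up constants, not the CERTS implementation itself.

```python
# Sketch: generic P-f / Q-V droop, the standard mechanism for sharing
# load without high-speed communication (all constants illustrative).
def droop_setpoints(p_kw, q_kvar,
                    f0=60.0, v0=480.0,      # no-load frequency/voltage
                    mp=0.0005, mq=0.005,    # droop slopes (made up)
                    p0=0.0, q0=0.0):
    """Each source autonomously lowers frequency/voltage as it picks up
    real/reactive power; equal droops make units share load equally."""
    f = f0 - mp * (p_kw - p0)    # P-f droop
    v = v0 - mq * (q_kvar - q0)  # Q-V droop
    return f, v

for p in (0.0, 50.0, 100.0):     # kW loading of one source
    f, v = droop_setpoints(p, q_kvar=20.0)
    print(f"P = {p:5.1f} kW -> f = {f:.3f} Hz, V = {v:.1f} V")
```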
NASA Astrophysics Data System (ADS)
Leif, Robert C.; Spidlen, Josef; Brinkman, Ryan R.
2008-02-01
Introduction: The International Society for Analytical Cytology, ISAC, is developing a new combined flow and image Analytical Cytometry Standard (ACS). This standard needs to serve both the research and clinical communities. The clinical medicine and clinical research communities have a need to exchange information with hospital and other clinical information systems. Methods: 1) Prototype the standard by creating CytometryML and a RAW format for binary data. 2) Join the ISAC Data Standards Task Force. 3) Create essential project documentation. 4) Cooperate with other groups by assisting in the preparation of the DICOM Supplement 122: Specimen Module and Pathology Service-Object Pair Classes. Results: CytometryML has been created and serves as a prototype and source of experience for the following: the Analytical Cytometry Standard (ACS) 1.0, the ACS container, Minimum Information about a Flow Cytometry Experiment (MIFlowCyt), and Requirements for a Data File Standard Format to Describe Flow Cytometry and Related Analytical Cytology Data. These requirements provide a means to judge the appropriateness of design elements and to develop tests for the final ACS. The requirements include providing the information required for understanding and reproducing a cytometry experiment or clinical measurement, and for a single standard for both flow and digital microscopic cytometry. Schemas proposed by other members of the ISAC Data Standards Task Force (e.g., Gating-ML) have been independently validated and have been integrated with CytometryML. The use of netCDF as an element of the ACS container has been proposed by others, and a suggested method for its use is presented.
Development of airframe design technology for crashworthiness.
NASA Technical Reports Server (NTRS)
Kruszewski, E. T.; Thomson, R. G.
1973-01-01
This paper describes the NASA portion of a joint FAA-NASA General Aviation Crashworthiness Program leading to the development of improved crashworthiness design technology. The objectives of the program are to develop analytical technology for predicting crashworthiness of structures, provide design improvements, and perform full-scale crash tests. The analytical techniques which are being developed both in-house and under contract are described, and typical results from these analytical programs are shown. In addition, the full-scale testing facility and test program are discussed.
Self-consistent multidimensional electron kinetic model for inductively coupled plasma sources
NASA Astrophysics Data System (ADS)
Dai, Fa Foster
Inductively coupled plasma (ICP) sources have received increasing interest in microelectronics fabrication and the lighting industry. In 2-D configuration space (r, z) and the 2-D velocity domain (v_θ, v_z), a self-consistent electron kinetic analytic model is developed for various ICP sources. The electromagnetic (EM) model is established based on modal analysis, while the kinetic analysis gives the perturbed Maxwellian distribution of electrons by solving the Boltzmann-Vlasov equation. The self-consistent algorithm combines the EM model and the kinetic analysis by updating their results consistently until the solution converges. The closed-form solutions in the analytical model provide rigorous and fast computation of the EM fields and the electron kinetic behavior. The kinetic analysis shows that the RF energy in an ICP source is extracted by a collisionless dissipation mechanism if the electron thermal velocity is close to the RF phase velocities. A criterion for collisionless damping is thus given based on the analytic solutions. To achieve uniformly distributed plasma for plasma processing, we propose a novel discharge structure with both planar and vertical coil excitations. The theoretical results demonstrate improved uniformity of the excited azimuthal E-field in the chamber. Non-monotonic spatial decay in electric field and space current distributions was recently observed in weakly-collisional plasmas. The anomalous skin effect is found to be responsible for this phenomenon. The proposed model successfully captures the non-monotonic spatial decay effect and achieves good agreement with measurements for different applied RF powers. The proposed analytical model is compared with other theoretical models and different experimental measurements. The developed model is also applied to two kinds of ICP discharges used for electrodeless light sources. One structure uses a vertical internal coil antenna to excite plasmas and another has a metal shield to prevent electromagnetic radiation. The theoretical results delivered by the proposed model agree quite well with the experimental measurements in many aspects. Therefore, the proposed self-consistent model provides an efficient and reliable means for designing ICP sources in various applications such as VLSI fabrication and electrodeless light sources.
Ott, Wayne R; Klepeis, Neil E; Switzer, Paul
2003-08-01
This paper derives the analytical solutions to multi-compartment indoor air quality models for predicting indoor air pollutant concentrations in the home and evaluates the solutions using experimental measurements in the rooms of a single-story residence. The model uses Laplace transform methods to solve the mass balance equations for two interconnected compartments, obtaining analytical solutions that can be applied without a computer. Environmental tobacco smoke (ETS) sources such as the cigarette typically emit pollutants for relatively short times (7-11 min) and are represented mathematically by a "rectangular" source emission time function, or approximated by a short-duration source called an "impulse" time function. Other time-varying indoor sources can also be represented by Laplace transforms. The two-compartment model is more complicated than the single-compartment model and has more parameters, including the cigarette or combustion source emission rate as a function of time, room volumes, compartmental air change rates, and interzonal air flow factors expressed as dimensionless ratios. This paper provides analytical solutions for the impulse, step (Heaviside), and rectangular source emission time functions. It evaluates the indoor model in an unoccupied two-bedroom home using cigars and cigarettes as sources, with continuous measurements of carbon monoxide (CO), respirable suspended particles (RSP), and particulate polycyclic aromatic hydrocarbons (PPAH). Fine particle mass concentrations (RSP or PM3.5) are measured using real-time monitors. In our experiments, simultaneous measurements of concentrations at three heights in a bedroom confirm an important assumption of the model: spatial uniformity of mixing. The parameter values of the two-compartment model were obtained using a "grid search" optimization method, and the predicted solutions agreed well with the measured concentration time series in the rooms of the home. The door and window positions in each room had a considerable effect on the pollutant concentrations observed in the home. Because of the small volumes and low air change rates of most homes, indoor pollutant concentrations from smoking activity can be very high and can persist at measurable levels indoors for many hours.
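The underlying mass balance has a compact closed form. The sketch below writes the two-compartment system as dC/dt = AC + u and evaluates the piecewise analytic solution for a rectangular source using a matrix exponential; all volumes, flows and emission rates are hypothetical stand-ins, not the paper's fitted values.

```python
# Sketch: two-compartment well-mixed mass balance with a rectangular
# source, solved analytically via the matrix exponential
# (hypothetical parameters).
import numpy as np
from scipy.linalg import expm

V1, V2 = 30.0, 60.0      # room volumes (m^3)
q12, q21 = 15.0, 15.0    # interzonal flows (m^3/h)
a1, a2 = 0.5, 0.5        # air change with outdoors (1/h)
S = 50.0                 # emission rate (mg/h) into room 1
T_on = 9.0 / 60.0        # cigarette burns ~9 min (rectangular source)

# dC/dt = A C + u, with C = [C1, C2] in mg/m^3 and C(0) = 0:
A = np.array([[-(a1 + q12 / V1), q21 / V1],
              [q12 / V2, -(a2 + q21 / V2)]])
u = np.array([S / V1, 0.0])

def concentration(t):
    """Source on for t <= T_on: C(t) = (e^{At} - I) A^{-1} u;
    afterwards the concentrations decay as e^{A(t-T_on)} C(T_on)."""
    Ainv_u = np.linalg.solve(A, u)
    if t <= T_on:
        return (expm(A * t) - np.eye(2)) @ Ainv_u
    return expm(A * (t - T_on)) @ concentration(T_on)

for t in (0.1, T_on, 0.5, 2.0):   # hours
    c1, c2 = concentration(t)
    print(f"t = {t:4.2f} h: C1 = {c1:6.3f}, C2 = {c2:6.3f} mg/m^3")
```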
Permeation absorption sampler with multiple detection
Zaromb, Solomon
1990-01-01
A system for detecting analytes in air or aqueous systems includes a permeation absorption preconcentrator sampler for the analytes and analyte detectors. The preconcentrator has an inner fluid-permeable container into which a charge of analyte-sorbing liquid is intermittently injected, and a fluid-impermeable outer container. The sample is passed through the outer container and around the inner container for trapping and preconcentrating the analyte in the sorbing liquid. The analyte can be detected photometrically by injecting with the sorbing material a reagent which reacts with the analyte to produce a characteristic color or fluorescence which is detected by illuminating the contents of the inner container with a light source and measuring the absorbed or emitted light, or by producing a characteristic chemiluminescence which can be detected by a suitable light sensor. The analyte can also be detected amperometrically. Multiple inner containers may be provided into which a plurality of sorbing liquids are respectively introduced for simultaneously detecting different analytes. Baffles may be provided in the outer container. A calibration technique is disclosed.
Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience
Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK
2015-01-01
Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle, including the pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical, and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). Results: A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) over the same years. Conclusion: Analytical errors are embedded within our total process setup, especially the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified. PMID:25745569
NASA Astrophysics Data System (ADS)
Hedberg, Emma; Gidhagen, Lars; Johansson, Christer
Sampling of particles (PM10) was conducted during a one-year period at two rural sites in Central Chile, Quillota and Linares. The samples were analyzed for elemental composition. The data sets have undergone source-receptor analyses in order to estimate the sources and their abundances in the PM10 size fraction, using the factor-analytical method positive matrix factorization (PMF). The analysis showed that PM10 was dominated by soil resuspension at both sites during the summer months, while during winter traffic dominated the particle mass at Quillota and local wood burning dominated the particle mass at Linares. Two copper smelters impacted the Quillota station, contributing on average 10% and 16% of PM10 during summer and winter, respectively. One smelter impacted Linares, contributing 8% and 19% of PM10 in the summer and winter, respectively. For arsenic, the two smelters accounted for 87% of the monitored arsenic levels at Quillota, and at Linares one smelter contributed 72% of the measured mass. In comparison with PMF, the use of a dispersion model tended to overestimate the smelter contribution to arsenic levels at both sites. The robustness of the PMF model was tested by using randomly reduced data sets, in which 85%, 70%, 50% and 33% of the samples were included. In this way, the ability of the model to reconstruct the sources initially found with the original data set could be tested. On average over all sources, the relative standard deviation increased from 7% to 25% for the variables identifying the sources when the data set was decreased from 85% to 33% of the samples, indicating that the solution initially found was very stable. It was also noted, however, that sources due to industrial or combustion processes were more sensitive to the size of the data set than natural sources such as local soil and sea spray.
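The factorization at the heart of PMF can be sketched with a generic nonnegative matrix factorization as a stand-in (true PMF additionally weights residuals by per-value measurement uncertainties). The species, profiles and data below are synthetic.

```python
# Sketch: factoring a samples-by-species matrix into nonnegative source
# contributions (G) and profiles (F), X ~ G F. sklearn's NMF stands in
# for PMF, which also weights residuals by measurement uncertainty.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
profiles_true = np.array([[8.0, 1.0, 0.1, 0.0],    # "soil"-like signature
                          [0.1, 0.2, 5.0, 2.0]])   # "smelter"-like signature
contrib_true = rng.uniform(0, 1, size=(200, 2))    # 200 synthetic samples
X = contrib_true @ profiles_true + rng.uniform(0, 0.05, size=(200, 4))

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # source contributions per sample
F = model.components_        # source profiles (species signatures)
print("recovered profiles:\n", np.round(F, 2))
```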
42 CFR 493.941 - Hematology (including routine hematology and coagulation).
Code of Federal Regulations, 2013 CFR
2013-10-01
... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...
42 CFR 493.941 - Hematology (including routine hematology and coagulation).
Code of Federal Regulations, 2014 CFR
2014-10-01
... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...
42 CFR 493.941 - Hematology (including routine hematology and coagulation).
Code of Federal Regulations, 2010 CFR
2010-10-01
... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...
42 CFR 493.941 - Hematology (including routine hematology and coagulation).
Code of Federal Regulations, 2012 CFR
2012-10-01
... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...
42 CFR 493.941 - Hematology (including routine hematology and coagulation).
Code of Federal Regulations, 2011 CFR
2011-10-01
... of a laboratory's responses for qualitative and quantitative hematology tests or analytes, the...) of this section. (2) For quantitative hematology tests or analytes, the program must determine the...
Erythrocyte Sedimentation Rate (ESR)
... 3 screens]. Available from: https://labtestsonline.org/understanding/analytes/esr/tab/test/ Lab Tests Online [Internet]. Washington ... 2 screens]. Available from: https://labtestsonline.org/understanding/analytes/esr/tab/sample/ National Heart, Lung, and Blood ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akyol, Bora A.; Allwardt, Craig H.; Beech, Zachary W.
VOLTTRON is a flexible, reliable, and scalable platform for distributed control and sensing. VOLTTRON serves in four primary roles: a reference platform for researchers to quickly develop control applications for transactive energy; a reference platform with flexible data store support for energy analytics applications, whether in academia or in commercial enterprise; a platform from which commercial enterprises can develop products without license issues and easily integrate into their product lines; and an accelerator to drive industry adoption of transactive energy and advanced building energy analytics. Pacific Northwest National Laboratory, with funding from the U.S. Department of Energy's Building Technologies Office, developed and maintains VOLTTRON as an open-source community project. VOLTTRON source code includes agent execution software; agents that perform critical services that enable and enhance VOLTTRON functionality; and numerous agents that utilize the platform to perform a specific function (fault detection, demand response, etc.). The platform supports energy, operational, and financial transactions between networked entities (equipment, organizations, buildings, grid, etc.) and enhances the control infrastructure of existing buildings through the use of open-source device communication, control protocols, and integrated analytics.
Wexler, Eliezer J.
1992-01-01
Analytical solutions to the advective-dispersive solute-transport equation are useful in predicting the fate of solutes in ground water. Analytical solutions compiled from available literature or derived by the author are presented for a variety of boundary condition types and solute-source configurations in one-, two-, and three-dimensional systems having uniform ground-water flow. A set of user-oriented computer programs was created to evaluate these solutions and to display the results in tabular and computer-graphics format. These programs incorporate many features that enhance their accuracy, ease of use, and versatility. Documentation for the programs describes their operation and required input data, and presents the results of sample problems. Derivations of selected solutions, source codes for the computer programs, and samples of program input and output also are included.
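The classic one-dimensional member of this family of solutions is the Ogata-Banks expression for a continuous source in uniform flow; the sketch below evaluates it with illustrative parameter values.

```python
# Sketch: the classic 1-D advective-dispersive solution for a continuous
# source (Ogata & Banks, 1961), of the kind compiled in such programs.
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """C/C0 at distance x and time t; v = seepage velocity,
    D = longitudinal dispersion coefficient."""
    term1 = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    term2 = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * c0 * (term1 + term2)

# e.g. 100 m downgradient, v = 0.5 m/d, D = 1.0 m^2/d (illustrative):
for t in (100.0, 200.0, 400.0):   # days
    print(f"t = {t:5.0f} d: C/C0 = {ogata_banks(100.0, t, 0.5, 1.0):.3f}")
```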
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Chemical Technology (CMT) Division is a diverse technical organization with principal emphases in environmental management and development of advanced energy sources. The Division conducts research and development in three general areas: (1) development of advanced power sources for stationary and transportation applications and for consumer electronics, (2) management of high-level and low-level nuclear wastes and hazardous wastes, and (3) electrometallurgical treatment of spent nuclear fuel. The Division also performs basic research in catalytic chemistry involving molecular energy resources, mechanisms of ion transport in lithium battery electrolytes, and the chemistry of technology-relevant materials and electrified interfaces. In addition, the Division operates the Analytical Chemistry Laboratory, which conducts research in analytical chemistry and provides analytical services for programs at Argonne National Laboratory (ANL) and other organizations. Technical highlights of the Division's activities during 1997 are presented.
PB-AM: An open-source, fully analytical linear Poisson-Boltzmann solver
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felberg, Lisa E.; Brookes, David H.; Yap, Eng-Hui
2016-11-02
We present the open-source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and offers two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to a larger group of scientists, educators and students who are more familiar with the APBS framework.
Inertia-gravity wave radiation from the elliptical vortex in the f-plane shallow water system
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko
2017-04-01
Inertia-gravity wave (IGW) radiation from the elliptical vortex is investigated in the f-plane shallow water system. The far field of IGW is analytically derived for the case of an almost circular Kirchhoff vortex with a small aspect ratio. Cyclone-anticyclone asymmetry appears at finite values of the Rossby number (Ro), caused by the source originating in the Coriolis acceleration. While the intensity of IGWs from the cyclone monotonically decreases as f increases, that from the anticyclone increases as f increases for relatively small f and has a local maximum at intermediate f. A numerical experiment is conducted on a model using a spectral method in an unbounded domain. The numerical results agree quite well with the analytical ones for elliptical vortices with small aspect ratios, implying that the derived analytical forms are useful for the verification of the numerical model. For elliptical vortices with larger aspect ratios, however, significant deviation from the analytical estimates appears. The intensity of IGWs radiated in the numerical simulation is larger than that estimated analytically. The reason is that the source of IGWs is amplified during the time evolution because the shape of the vortex changes from an ideal ellipse to an elongated shape with filaments. Nevertheless, cyclone-anticyclone asymmetry similar to the analytical estimate appears over the whole range of aspect ratios, suggesting that this asymmetry is a robust feature.
Estimation of the limit of detection using information theory measures.
Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago
2014-01-31
Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed in recent years. Although such definitions are straightforward and valid for any kind of analytical system, the proposed methodologies to estimate the LOD are usually restricted to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source, regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of the LOD, built on information theory tools, that deals with noise of any kind and allows prior knowledge to be introduced easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that benchmarking analytical systems by their ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise.
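The binary-channel analogy above can be made concrete with a short sketch: score a detector by the mutual information between the source state (blank vs. analyte) and the detector output, given a false positive rate, a false negative rate, and a prior. This is an assumed minimal form of the idea, not the authors' exact estimator, and all rates below are invented.

```python
# Minimal sketch of the binary-channel view of an analytical system:
# alpha = false positive rate, beta = false negative rate, p = prior
# probability that the presented sample actually contains the analyte.
import numpy as np

def mutual_information(p, alpha, beta):
    """I(X;Y) in bits for a binary asymmetric channel."""
    # Joint distribution over (analyte present, detector fires)
    pxy = np.array([[(1 - p) * (1 - alpha), (1 - p) * alpha],
                    [p * beta,              p * (1 - beta)]])
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

# Two systems with identical error rates (hence identical classical LODs)
# convey very different amounts of information under different priors:
print(mutual_information(0.5, 0.05, 0.05))   # ~0.71 bits
print(mutual_information(0.01, 0.05, 0.05))  # ~0.04 bits
```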
Solute source depletion control of forward and back diffusion through low-permeability zones
NASA Astrophysics Data System (ADS)
Yang, Minjune; Annable, Michael D.; Jawitz, James W.
2016-10-01
Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
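For the step-change source case, the flavor of these dynamics can be reproduced with the classic one-dimensional superposition solution for a semi-infinite layer: diffusion in while the aquifer boundary concentration is held at C0, and back diffusion once it drops to zero. The sketch below is a textbook stand-in under assumed parameter values, not a reproduction of the paper's analytical solutions.

```python
# Hedged sketch of the step-change case: 1-D diffusion into a semi-infinite
# aquitard with aquifer concentration C0 until source removal at t0, then 0.
import numpy as np
from scipy.special import erfc

def aquitard_profile(z, t, C0, D, t0):
    """Aquitard concentration at depth z (m) and time t (s); source removed at t0 (s)."""
    c = C0 * erfc(z / (2.0 * np.sqrt(D * t)))
    if t > t0:  # superpose a negative step applied at t0 -> back diffusion begins
        c -= C0 * erfc(z / (2.0 * np.sqrt(D * (t - t0))))
    return c

D = 5e-10                      # effective diffusion coefficient, m^2/s (assumed)
z = np.linspace(0.0, 0.3, 7)   # depth into the low-permeability layer, m
years = 365.25 * 86400.0
print(aquitard_profile(z, 20 * years, 1.0, D, t0=10 * years))
```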
Epp: A C++ EGSnrc user code for x-ray imaging and scattering simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lippuner, Jonas; Elbakri, Idris A.; Cui Congwu
2011-03-15
Purpose: Easy particle propagation (Epp) is a user code for the EGSnrc code package based on the C++ class library egspp. A main feature of egspp (and Epp) is the ability to use analytical objects to construct simulation geometries. The authors developed Epp to facilitate the simulation of x-ray imaging geometries, especially for scatter studies. While direct use of egspp requires knowledge of C++, Epp requires no programming experience. Methods: Epp's features include calculation of dose deposited in a voxelized phantom and photon propagation to a user-defined imaging plane. Projection images of primary, single Rayleigh scattered, single Compton scattered, and multiple scattered photons may be generated. Epp input files can be nested, allowing for the construction of complex simulation geometries from more basic components. To demonstrate the imaging features of Epp, the authors simulate 38 keV x rays from a point source propagating through a water cylinder 12 cm in diameter, using both analytical and voxelized representations of the cylinder. The simulation generates projection images of primary and scattered photons at a user-defined imaging plane. The authors also simulate dose scoring in the voxelized version of the phantom in both Epp and DOSXYZnrc and examine the accuracy of Epp using the Kawrakow-Fippel test. Results: The results of the imaging simulations with Epp using voxelized and analytical descriptions of the water cylinder agree within 1%. The results of the Kawrakow-Fippel test suggest good agreement between Epp and DOSXYZnrc. Conclusions: Epp provides the user with useful features, including the ability to build complex geometries from simpler ones and the ability to generate images of scattered and primary photons. There is no inherent computational time saving from Epp beyond that arising from egspp's ability to use analytical representations of simulation geometries. Epp agrees with DOSXYZnrc in dose calculation, since both are based on the well-validated standard EGSnrc radiation transport physics model.
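A quick back-of-the-envelope check for the 38 keV water-cylinder case is the Beer-Lambert attenuation of the primary (unscattered) beam along the central ray. The attenuation coefficient below is an approximate lookup value (an assumption, not from the paper), so this only brackets the magnitude the Epp primary-photon image should show on axis.

```python
# Rough cross-check: primary transmission through 12 cm of water near 38 keV.
import numpy as np

mu = 0.28                 # approx. linear attenuation of water near 38 keV, 1/cm (assumed)
t = 12.0                  # path length through the cylinder on the central ray, cm
print(np.exp(-mu * t))    # ~0.03 -> only a few percent of photons arrive unscattered
```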
Source-term development for a contaminant plume for use by multimedia risk assessment models
NASA Astrophysics Data System (ADS)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.
2000-02-01
Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise among the four models. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for use with this class of analytical tool in a preliminary assessment.
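A generic sketch of the kind of source term such analytical tools consume is a first-order leaching release from a finite inventory, combined here with 90Sr radioactive decay. This is an assumed illustrative form, not the paper's Hanford-specific release model, and the leach rate is invented.

```python
# Hedged sketch: first-order leaching of a decaying 90Sr inventory.
import numpy as np

HALF_LIFE_SR90 = 28.8                  # years (physical constant)
lam = np.log(2.0) / HALF_LIFE_SR90     # radioactive decay constant, 1/yr
k = 0.05                               # first-order leach rate, 1/yr (assumed)

def release_rate(t, m0=1.0):
    """Activity flux leaving the source at time t (years) for initial inventory m0."""
    return m0 * k * np.exp(-(k + lam) * t)

print(release_rate(np.array([0.0, 10.0, 50.0])))
```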
Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.
Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L
2013-01-01
Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.
Correlations and analytical approaches to co-evolving voter models
NASA Astrophysics Data System (ADS)
Ji, M.; Xu, C.; Choi, C. W.; Hui, P. M.
2013-11-01
The difficulty of formulating analytical treatments of co-evolving networks is studied using the Vazquez-Eguíluz-San Miguel voter model (VM) and a modified VM (MVM) that introduces random mutation of opinions as a noise in the VM. The density of active links, which are links that connect nodes of opposite opinions, is shown to be highly sensitive to both the degree k of a node and the number of active links n among the neighbors of a node. We test the validity of the formalism of analytical approaches and show explicitly that the assumptions behind the commonly used homogeneous pair approximation scheme in formulating a mean-field theory are the source of the theory's failure, due to the strong correlations between k, n, and n². An improved approach that explicitly incorporates spatial correlations with the nearest neighbors, together with a random approximation for the next-nearest neighbors, is formulated for the VM and the MVM, and it gives better agreement with the simulation results. We introduce an empirical approach that quantifies the correlations more accurately and gives results in good agreement with the simulation results. The work clarifies why a simple mean-field theory fails and sheds light on how to analyze the correlations in the dynamic equations that are often generated in co-evolving processes.
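The quantity the analytical approximations above try to predict, the density of active links, is easy to measure by direct simulation. The sketch below implements the standard co-evolving VM rewiring rule (pick a node, pick a neighbor; if opinions differ, rewire the link with probability p, otherwise adopt); parameters and the specific rule variant are assumptions for illustration.

```python
# Compact co-evolving voter model simulation measuring active-link density.
import random
import networkx as nx

def coevolving_vm(n=500, k=4, p=0.3, steps=20000, seed=1):
    rng = random.Random(seed)
    g = nx.random_regular_graph(k, n, seed=seed)
    opinion = {v: rng.choice((0, 1)) for v in g}
    for _ in range(steps):
        i = rng.randrange(n)
        nbrs = list(g[i])
        if not nbrs:
            continue
        j = rng.choice(nbrs)
        if opinion[i] == opinion[j]:
            continue                      # link is inert, nothing happens
        if rng.random() < p:              # rewire the active link to a like-minded node...
            candidates = [v for v in g if opinion[v] == opinion[i]
                          and v != i and not g.has_edge(i, v)]
            if candidates:
                g.remove_edge(i, j)
                g.add_edge(i, rng.choice(candidates))
        else:                             # ...or adopt the neighbor's opinion
            opinion[i] = opinion[j]
    active = sum(opinion[u] != opinion[v] for u, v in g.edges())
    return active / g.number_of_edges()

print(coevolving_vm())
```

Adding a small per-step mutation of opinions would turn this VM into the MVM variant discussed in the abstract.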
Fibrinolysis standards: a review of the current status.
Thelwell, C
2010-07-01
Biological standards are used to calibrate measurements of components of the fibrinolytic system, either for assigning potency values to therapeutic products or to determine levels in human plasma as an indicator of thrombotic risk. Traditionally, WHO International Standards are calibrated in International Units based on consensus values from collaborative studies. The International Unit is defined by the activity of a given amount of the standard in a bioassay, independent of the method used. Assay validity is based on the assumption that both the standard and the test preparation contain the same analyte and that the response in an assay is a true function of this analyte. This principle is reflected in the diversity of source materials used to prepare fibrinolysis standards, which has depended on the contemporary preparations they were employed to measure. With advancing recombinant technology and improved analytical techniques, a reference system based on reference materials and associated reference methods has been recommended for future fibrinolysis standards. Careful consideration and scientific judgement must, however, be applied when deciding on an approach to develop a new standard, with decisions based on the suitability of a standard to serve its purpose and not just to satisfy a metrological ideal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asano, Keiji G; Ford, Michael J; Tomkins, Bruce A
A self-aspirating heated nebulizer probe is described and demonstrated for use in the direct analysis of analytes on surfaces and in liquid samples by atmospheric pressure chemical ionization (APCI) mass spectrometry. Functionality and performance of the probe as a self-aspirating APCI source are demonstrated using reserpine and progesterone as test compounds. The utility of the probe for sampling analytes directly from surfaces was demonstrated first by scanning development lanes of a reversed-phase thin-layer chromatography plate on which a three-component dye mixture, viz., Fat Red 7B, Solvent Green 3, and Solvent Blue 35, was spotted and the components were separated. Development lanes were scanned by the sampling probe operated under computer control (x, y plane) while full-scan mass spectra were recorded using a quadrupole ion trap mass spectrometer. In addition, the ability to sample the surface of pharmaceutical tablets (viz., Extra Strength Tylenol® and Evista® tablets) and to selectively detect the active ingredients (acetaminophen and raloxifene, respectively) was demonstrated using tandem mass spectrometry (MS/MS). Finally, the capability to sample analyte solutions from the wells of a 384-well microtiter plate and to perform quantitative analyses using MS/MS detection was illustrated with cotinine standards spiked with cotinine-d3 as an internal standard.
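The internal-standard quantitation step mentioned last reduces to calibrating the analyte/internal-standard response ratio against known standards and inverting for the unknown. The sketch below shows that arithmetic with invented numbers; it is a generic illustration, not data or code from the study.

```python
# Minimal internal-standard calibration: cotinine vs. cotinine-d3 response ratios.
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0])        # cotinine standards, ng/mL (invented)
ratio = np.array([0.11, 0.52, 1.05, 5.20])     # analyte/IS peak-area ratios (invented)

slope, intercept = np.polyfit(conc, ratio, 1)  # linear response model
unknown_ratio = 2.4                            # measured ratio for an unknown (invented)
print((unknown_ratio - intercept) / slope)     # back-calculated concentration, ng/mL
```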
General relativistic razor-thin disks with magnetically polarized matter
NASA Astrophysics Data System (ADS)
Navarro-Noguera, Anamaría; Lora-Clavijo, F. D.; González, Guillermo A.
2018-06-01
The origin of magnetic fields in the universe remains unknown and constitutes one of the most intriguing questions in astronomy and astrophysics. Their significance is enormous, since they strongly influence many astrophysical phenomena, and theoretical models of galactic disks with sources of magnetic field may contribute to understanding the physics behind them. Inspired by this, we present a new family of analytical models for thin disks composed of magnetized material. The solutions are axially symmetric and conformastatic, and are obtained by solving the Einstein-Maxwell field equations for continuous media, without the test-field approximation, under the assumption that the sources are razor-thin disks of magnetically polarized matter. We find analytical expressions for the surface energy density, the pressure, the polarization vector, the electromagnetic fields, the mass, and the rotational velocity for circular orbits, for two particular solutions. In each case, the energy-momentum tensor satisfies the energy conditions, and the convergence of the mass is proved for all the solutions. Since the solutions are well behaved, they may be used to model astrophysical thin disks and may also serve as initial data in numerical simulations. In addition, the process used to obtain the solutions is described in detail, which may serve as a guide to finding solutions with magnetized material in General Relativity.
Ternon, Eva; Tolosa, Imma
2015-07-24
Solid-phase extraction of both aliphatic (AHs) and polycyclic aromatic hydrocarbons (PAHs) from seawater samples was evaluated using a GFF filter stacked upon an octadecyl-bonded silica (C18) disk. Stable-isotope measurements were performed on hydrocarbons extracted from both the GFF filter and the C18 disks in order to characterize the source of the hydrocarbons. A clear partition of hydrocarbon compounds between the dissolved and particulate phases was highlighted: PAHs showed a higher affinity for the dissolved phase (recovery efficiencies of 48-71%), whereas AHs showed a strong affinity for the particulate phase (extraction efficiencies up to 76%). Medium volumes of seawater were tested, and no breakthrough was observed for a 5 L sample. Isotopic fractionation was investigated across all analytical steps, but none was observed. The method was applied to harbor seawater samples, and very low AH and PAH concentrations were measured. Because of the low hydrocarbon concentrations in the samples, the source of the hydrocarbons was determined by molecular indices rather than isotopic measurements, and a pyrolytic origin was evidenced. The aliphatic profile also revealed the presence of long-chain linear alkylbenzenes (LABs). The methodology presented here is best suited to polluted coastal environments affected by recent oil spills.
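One widely used molecular index of the kind invoked above is the fluoranthene/(fluoranthene + pyrene) diagnostic ratio, where values above ~0.5 are conventionally read as pyrolytic. The sketch below applies that convention with invented concentrations; the thresholds follow common (Yunker et al.-style) usage, not values reported in this study.

```python
# Hedged sketch of a PAH source-diagnostic ratio (thresholds are conventions).
def flt_pyr_index(flt, pyr):
    """Classify PAH origin from fluoranthene and pyrene concentrations (same units)."""
    r = flt / (flt + pyr)
    if r < 0.40:
        return r, "petrogenic"
    if r < 0.50:
        return r, "petroleum combustion"
    return r, "pyrolytic (biomass/coal combustion)"

print(flt_pyr_index(flt=2.3, pyr=1.6))   # illustrative harbor values, ng/L (invented)
```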
NASA Astrophysics Data System (ADS)
Batu, Vedat
2015-01-01
In this paper, a new generalized three-dimensional complete analytical solution for drawdown is presented for any well screen shape in a vertically and horizontally anisotropic confined aquifer, in the x-y-z Cartesian coordinate system, taking into account the three principal hydraulic conductivities (Kx, Ky, and Kz) along the x-y-z coordinate directions. The special solution covers a partially penetrating inclined parallelepiped as well as an inclined line-source well. It is shown that the rectangular parallelepiped screen solution of Batu (2012) is a special case of this general solution; as in Batu (2012), the horizontal well is likewise a special case. The solution accounts for both the vertical anisotropy (azx = Kz/Kx) and the horizontal anisotropy (ayx = Ky/Kx), and it has potential applications in analyzing pumping test drawdown data from partially penetrating inclined wells by representing them as tiny parallelepipeds or as line sources. Apart from other verifications, the inclined-well results have been compared with the results of MODFLOW, with very good agreement. The solution also has potential applications to a partially penetrating inclined parallelepiped fracture. With this new solution, both the horizontal anisotropy (ayx = Ky/Kx) and the vertical anisotropy (azx = Kz/Kx) can be determined from observed drawdown data.
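The anisotropy handling such solutions rely on can be illustrated with the textbook coordinate-stretching result for the simplest source geometry: the steady drawdown of a point sink in an infinite anisotropic medium. The sketch below shows that closed form only, under assumed parameter values; it is not Batu's parallelepiped-screen solution.

```python
# Hedged sketch: steady point-sink drawdown in an infinite anisotropic medium,
# obtained by stretching coordinates by sqrt(K) along each principal direction.
import numpy as np

def point_sink_drawdown(x, y, z, Q, Kx, Ky, Kz):
    """Drawdown (m) at (x, y, z) in m, pumping rate Q in m^3/s, conductivities in m/s."""
    return Q / (4.0 * np.pi * np.sqrt(Ky * Kz * x**2 + Kx * Kz * y**2 + Kx * Ky * z**2))

# Horizontal anisotropy ayx = Ky/Kx = 0.5 and vertical anisotropy azx = Kz/Kx = 0.1 (assumed)
Kx = 1e-4
print(point_sink_drawdown(10.0, 10.0, 2.0, Q=1e-3, Kx=Kx, Ky=0.5 * Kx, Kz=0.1 * Kx))
```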
Analytical and functional similarity of Amgen biosimilar ABP 215 to bevacizumab.
Seo, Neungseon; Polozova, Alla; Zhang, Mingxuan; Yates, Zachary; Cao, Shawn; Li, Huimin; Kuhns, Scott; Maher, Gwendolyn; McBride, Helen J; Liu, Jennifer
ABP 215 is a biosimilar product to bevacizumab. Bevacizumab acts by binding to vascular endothelial growth factor A, inhibiting endothelial cell proliferation and new blood vessel formation, thereby leading to tumor vasculature normalization. The ABP 215 analytical similarity assessment was designed to evaluate the structural and functional similarity of ABP 215 and bevacizumab sourced from both the United States (US) and the European Union (EU); the US- and EU-sourced bevacizumab were also compared with each other. The physicochemical properties and structural similarity of ABP 215 and bevacizumab were characterized using sensitive, state-of-the-art analytical techniques capable of detecting small differences in product attributes. ABP 215 has the same amino acid sequence and exhibits similar post-translational modification profiles compared to bevacizumab. The functional similarity assessment employed orthogonal assays designed to interrogate all expected biological activities, including those known to affect the mechanisms of action of ABP 215 and bevacizumab. More than 20 batches of bevacizumab (US) and bevacizumab (EU) and 13 batches of ABP 215, representing unique drug substance lots, were assessed for similarity. The large dataset allows meaningful comparisons and garners confidence in the overall conclusion of the analytical similarity assessment of ABP 215 to both US- and EU-sourced bevacizumab. The structural and purity attributes and biological properties of ABP 215 are demonstrated to be highly similar to those of bevacizumab.
Analytical techniques: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
42 CFR 493.803 - Condition: Successful participation.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...
42 CFR 493.803 - Condition: Successful participation.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...
42 CFR 493.803 - Condition: Successful participation.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., subspecialty, and analyte or test in which the laboratory is certified under CLIA. (b) Except as specified in... a given specialty, subspecialty, analyte or test, as defined in this section, or fails to take...