NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1994-01-01
Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of the analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be expended in generating the set of test-acquired mode shapes needed for the cross orthogonality check. In most situations response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test-acquired mode shapes can be achieved without conducting the modal survey. Instead, a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that, once normalized, can be used to represent the test-acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model and a complex space flight structure.
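As a rough illustration of the cross orthogonality check described above, the sketch below computes the cross orthogonality matrix between a set of test-derived shape vectors and analytical mode shapes using a reduced mass matrix. The matrices, normalization, and noise level are illustrative assumptions, not data from the study.

```python
import numpy as np

def cross_orthogonality(phi_test, phi_fem, M):
    """Cross orthogonality matrix C = Phi_test^T * M * Phi_fem.

    phi_test : (n_dof, n_test) test-derived shapes at the measurement DOFs
    phi_fem  : (n_dof, n_fem) analytical mode shapes at the same DOFs
    M        : (n_dof, n_dof) reduced (test-analysis) mass matrix
    Shapes are mass-normalized first, so diagonal terms near 1.0 indicate good
    correlation and off-diagonal terms near 0.0 indicate orthogonality.
    """
    def mass_normalize(phi):
        gen_mass = np.einsum('ij,jk,ki->i', phi.T, M, phi)   # generalized masses
        return phi / np.sqrt(gen_mass)

    pt = mass_normalize(phi_test)
    pa = mass_normalize(phi_fem)
    return pt.T @ M @ pa

# Tiny hypothetical 3-DOF example: test shapes are noisy copies of the FEM shapes.
M = np.diag([2.0, 1.0, 1.5])
phi_fem = np.array([[0.3, 0.8], [0.6, 0.1], [0.7, -0.5]])
phi_test = phi_fem + 0.02 * np.random.default_rng(0).normal(size=phi_fem.shape)
print(np.round(cross_orthogonality(phi_test, phi_fem, M), 3))
```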
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using it to predict the sensor activities for test analytes not considered in the training set for the model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict response of an existing sensing film to new target analytes.
Managing laboratory test ordering through test frequency filtering.
Janssens, Pim M W; Wasser, Gerd
2013-06-01
Modern computer systems allow limits to be set on the periods allowed for repetitive testing. We investigated a computerised system for managing potentially overly frequent laboratory testing, calculating the financial savings obtained. In consultation with hospital physicians, tests were selected for which 'spare periods' (periods during which tests are barred) might be set to control repetitive testing. The tests were selected and spare periods determined based on known analyte variations in health and disease, variety of tissues or cells giving rise to analytes, clinical conditions and rate of change determining analyte levels, frequency with which doctors need information about the analytes and the logistical needs of the clinic. The operation and acceptance of the system was explored with 23 analytes. Frequency filtering was subsequently introduced for 44 tests, each with their own spare periods. The proportion of tests barred was 0.56%, the most frequent of these being for total cholesterol, uric acid and HDL-cholesterol. The financial savings were 0.33% of the costs of all testing, with HbA1c, HDL-cholesterol and vitamin B12 yielding the largest savings. Following the introduction of the system, the number of barred tests ultimately decreased, suggesting accommodation by the test requestors. Managing laboratory testing through computerised limits to prevent overly frequent testing is feasible. The savings were relatively low, but sustaining the system takes little effort, giving little reason not to apply it. The findings will serve as a basis for improving the system and may guide others in introducing similar systems.
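A minimal sketch of the frequency-filtering logic described above: a repeat request for a test is barred if it falls inside that test's spare period since the last result. The analyte names and spare periods are illustrative, not the hospital's actual settings.

```python
from datetime import date, timedelta

# Illustrative spare periods (days during which a repeat request is barred).
SPARE_PERIODS = {
    "HbA1c": 60,
    "total cholesterol": 90,
    "vitamin B12": 180,
}

def is_barred(test, last_result_date, request_date):
    """Return True if the new request falls within the test's spare period."""
    spare = SPARE_PERIODS.get(test)
    if spare is None or last_result_date is None:
        return False                      # no filtering configured, or first request
    return request_date < last_result_date + timedelta(days=spare)

print(is_barred("HbA1c", date(2013, 1, 10), date(2013, 2, 1)))   # True: barred
print(is_barred("HbA1c", date(2013, 1, 10), date(2013, 4, 1)))   # False: allowed
```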
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
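The method works from the occurrence times and displacements of the extreme points of the oscillatory head record. The sketch below only shows how those inputs could be picked off a measured time series (sign changes of the first difference); it is not the van der Kamp-type conductivity formula itself, and the synthetic damped oscillation is an assumption for illustration.

```python
import numpy as np

# Synthetic oscillatory slug-test response (damped cosine) as stand-in data.
t = np.linspace(0.0, 30.0, 3001)
head = np.exp(-0.12 * t) * np.cos(1.1 * t)

# Locate the extreme points: sign changes in the first difference.
d = np.diff(head)
idx = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1

times = t[idx]          # occurrence times of the extremes
displ = head[idx]       # displacements at the extremes
print(np.round(times[:5], 2), np.round(displ[:5], 3))

# The ratio of successive extreme displacements reflects the damping,
# one of the quantities matched against the theoretical counterparts.
print(np.round(np.abs(displ[1:6] / displ[:5]), 3))
```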
Bolann, B J; Asberg, A
2004-01-01
The deviation of test results from patients' homeostatic set points in steady-state conditions may complicate interpretation of the results and the comparison of results with clinical decision limits. In this study the total deviation from the homeostatic set point is defined as the maximum absolute deviation for 95% of measurements, and we present analytical quality requirements that prevent analytical error from increasing this deviation to more than about 12% above the value caused by biology alone. These quality requirements are: 1) The stable systematic error should be approximately 0, and 2) a systematic error that will be detected by the control program with 90% probability, should not be larger than half the value of the combined analytical and intra-individual standard deviation. As a result, when the most common control rules are used, the analytical standard deviation may be up to 0.15 times the intra-individual standard deviation. Analytical improvements beyond these requirements have little impact on the interpretability of measurement results.
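A small numeric sketch of the quoted rule of thumb: with the analytical SD at 0.15 times the intra-individual SD and zero stable bias, the 95% dispersion widens only marginally over biology alone. The Gaussian 1.96 factor and the simple root-sum-of-squares combination are assumptions used for illustration; the roughly 12% figure in the abstract also budgets for the largest systematic error the control program may miss.

```python
import math

sd_within = 1.0                       # intra-individual (biological) SD, arbitrary units
sd_analytical = 0.15 * sd_within      # maximum allowed by the stated requirement

biology_only = 1.96 * sd_within
combined = 1.96 * math.sqrt(sd_within**2 + sd_analytical**2)

print(f"95% deviation, biology only : {biology_only:.3f}")
print(f"95% deviation, with analysis: {combined:.3f}")
print(f"relative increase           : {100 * (combined / biology_only - 1):.1f}%")
# Imprecision alone adds only ~1.1%; the remaining allowance in the abstract's
# ~12% figure covers the permitted undetected systematic error.
```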
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as a set of expansion functions for putting both sources in comparative form. The theory was developed for three general cases of dimensional symmetry: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory was verified with small classical structures.
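The abstract treats the analytical modes as expansion functions for the experimental modes. The sketch below shows one plain way to do that, a least-squares fit of a measured shape in the analytical modal basis plus a simple scalar correlation measure, with made-up vectors standing in for NASTRAN and test output; it is not the paper's statistical correlation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical analytical modes at the measured DOFs (columns) and one test mode.
phi_analytical = rng.normal(size=(10, 4))                 # 10 DOFs, 4 analytical modes
phi_test = phi_analytical @ np.array([0.9, 0.1, 0.0, 0.05])
phi_test += 0.01 * rng.normal(size=10)                    # measurement noise

# Expansion coefficients of the test mode in the analytical basis.
coeff, *_ = np.linalg.lstsq(phi_analytical, phi_test, rcond=None)
print(np.round(coeff, 3))          # dominated by the first analytical mode

# A simple scalar correlation between the fitted and measured shapes.
fit = phi_analytical @ coeff
mac = (fit @ phi_test) ** 2 / ((fit @ fit) * (phi_test @ phi_test))
print(round(float(mac), 4))
```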
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
Tests of a Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.; Hawes, Steve K.; Lee, Zhongping
1997-01-01
A semi-analytical algorithm was tested with a total of 733 points of either unpackaged or packaged-pigment data, with corresponding algorithm parameters for each data type. The 'unpackaged' type consisted of data sets that were generally consistent with the Case 1 CZCS algorithm and other well-calibrated data sets. The 'packaged' type consisted of data sets apparently containing somewhat more packaged pigments, requiring modification of the absorption parameters of the model consistent with the CalCOFI study area. This resulted in two equally divided data sets. A more thorough scrutiny of these and other data sets using a semianalytical model requires improved knowledge of the phytoplankton and gelbstoff of the specific environment studied. Since the semi-analytical algorithm is dependent upon four spectral channels including the 412 nm channel, while most other algorithms are not, a means of testing data sets for consistency was sought. A numerical filter was developed to classify data sets into the above classes. The filter uses reflectance ratios, which can be determined from space. The sensitivity of such numerical filters to measurement errors resulting from atmospheric correction and sensor noise requires further study. The semi-analytical algorithm performed superbly on each of the data sets after classification, resulting in RMS1 errors of 0.107 and 0.121, respectively, for the unpackaged and packaged data-set classes, with little bias and slopes near 1.0. In combination, the RMS1 performance was 0.114. While these numbers appear rather sterling, one must bear in mind what misclassification does to the results. Using an average or compromise parameterization on the modified global data set yielded an RMS1 error of 0.171, while using the unpackaged parameterization on the global evaluation data set yielded an RMS1 error of 0.284. So, without classification, the algorithm performs better globally using the average parameters than it does using the unpackaged parameters. Finally, the effects of even more extreme pigment packaging must be examined in order to improve algorithm performance at high latitudes. Note, however, that the North Sea and Mississippi River plume studies contributed data to the packaged and unpackaged classes, respectively, with little effect on algorithm performance. This suggests that gelbstoff-rich Case 2 waters do not seriously degrade performance of the semi-analytical algorithm.
ERIC Educational Resources Information Center
Saengprom, Narumon; Erawan, Waraporn; Damrongpanit, Suntonrapot; Sakulku, Jaruwan
2015-01-01
The purposes of this study were to 1) compare analytical thinking ability by testing the same sets of students 5 times and 2) develop and verify whether the analytical thinking ability of students corresponds to a second-order growth curve factors model. The sample consisted of 1,093 eighth-grade students. The results revealed that 1) analytical thinking ability scores…
Analytical modeling of flash-back phenomena. [premixed/prevaporized combustion system
NASA Technical Reports Server (NTRS)
Feng, C. C.
1979-01-01
To understand flame flash-back phenomena more extensively, an analytical model was formulated and a numerical program was written and tested to solve the set of differential equations describing the model. Results show that, under a given set of conditions, the flame propagates in the boundary layer on a flat plate when the free-stream velocity is at or below 1.8 m/s.
saltPAD: A New Analytical Tool for Monitoring Salt Iodization in Low Resource Settings
Myers, Nicholas M.; Strydom, Emmerentia Elza; Sweet, James; Sweet, Christopher; Spohrer, Rebecca; Dhansay, Muhammad Ali; Lieberman, Marya
2016-01-01
We created a paper test card that measures a common iodizing agent, iodate, in salt. To test the analytical metrics, usability, and robustness of the paper test card when it is used in low resource settings, the South African Medical Research Council and GroundWork performed independent validation studies of the device. The accuracy and precision metrics from both studies were comparable. In the SAMRC study, more than 90% of the test results (n=1704) were correctly classified as corresponding to adequately or inadequately iodized salt. The cards are suitable for market and household surveys to determine whether salt is adequately iodized. Further development of the cards will improve their utility for monitoring salt iodization during production. PMID:29942380
Wang, Li-Ju; Naudé, Nicole; Demissie, Misganaw; Crivaro, Anne; Kamoun, Malek; Wang, Ping; Li, Lei
2018-07-01
Most mobile health (mHealth) diagnostic devices for laboratory tests only analyze one sample at a time, which is not suitable for large volume serology testing, especially in low-resource settings with shortage of health professionals. In this study, we developed an ultra-low-cost clinically-accurate mobile phone microplate reader (mReader), and clinically validated this optical device for 12 infectious disease tests. The mReader optically reads 96 samples on a microplate at one time. 771 de-identified patient samples were tested for 12 serology assays for bacterial/viral infections. The mReader and the clinical instrument blindly read and analyzed all tests in parallel. The analytical accuracy and the diagnostic performance of the mReader were evaluated across the clinical reportable categories by comparison with clinical laboratorial testing results. The mReader exhibited 97.59-99.90% analytical accuracy and <5% coefficient of variation (CV). The positive percent agreement (PPA) in all 12 tests achieved 100%, negative percent agreement (NPA) was higher than 83% except for one test (42.86%), and overall percent agreement (OPA) ranged 89.33-100%. We envision the mReader can benefit underserved areas/populations and low-resource settings in rural clinics/hospitals at a low cost (~$50 USD) with clinical-level analytical quality. It has the potential to improve health access, speed up healthcare delivery, and reduce health disparities and education disparities by providing access to a low-cost spectrophotometer. Copyright © 2018 Elsevier B.V. All rights reserved.
Indoor Exposure Product Testing Protocols Version 2
EPA’s Office of Pollution Prevention and Toxics (OPPT) has developed a set of ten indoor exposure testing protocols intended to provide information on the purpose of the testing, general description of the sampling and analytical procedures, and references for tests that will be ...
Improvement of analytical dynamic models using modal test data
NASA Technical Reports Server (NTRS)
Berman, A.; Wei, F. S.; Rao, K. V.
1980-01-01
A method developed to determine maximum changes in analytical mass and stiffness matrices to make them consistent with a set of measured normal modes and natural frequencies is presented. The corrected model will be an improved base for studies of physical changes, boundary condition changes, and for prediction of forced responses. The method features efficient procedures not requiring solutions of the eigenvalue problem, and the ability to have more degrees of freedom than the test data. In addition, modal displacements are obtained for all analytical degrees of freedom, and the frequency dependence of the coordinate transformations is properly treated.
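The published method adjusts the analytical mass and stiffness matrices so that the measured modes satisfy orthogonality and the eigenproblem. The sketch below only computes the quantity such an update works from, the deviation of the mass-orthogonality matrix from the identity for measured modes checked against the analytical mass matrix, with small made-up matrices; it is not the paper's update formula.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical analytical mass matrix and a few "measured" mode shapes.
M = np.diag([2.0, 1.0, 1.5, 0.8])
phi_measured = rng.normal(size=(4, 2))

# Mass-normalize the measured shapes against the analytical mass matrix.
gen_mass = np.einsum('ij,jk,ki->i', phi_measured.T, M, phi_measured)
phi_measured = phi_measured / np.sqrt(gen_mass)

# Orthogonality check: for a fully consistent model this is the identity matrix.
ortho = phi_measured.T @ M @ phi_measured
print(np.round(ortho, 3))
off_diag = ortho - np.diag(np.diag(ortho))
print("max off-diagonal:", round(float(np.max(np.abs(off_diag))), 3))
# The improvement method seeks the smallest change to M (and K) that drives this
# matrix to the identity while reproducing the measured natural frequencies.
```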
A Ricin Forensic Profiling Approach Based on a Complex Set of Biomarkers
Fredriksson, Sten-Ake; Wunschel, David S.; Lindstrom, Susanne Wiklund; ...
2018-03-28
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved.
A pilot analytic study of a research-level, lower-cost human papillomavirus 16, 18, and 45 test.
Yang, Hannah P; Walmer, David K; Merisier, Delson; Gage, Julia C; Bell, Laura; Rangwala, Sameera; Shrestha, Niwashin; Kobayashi, Lori; Eder, Paul S; Castle, Philip E
2011-09-01
The analytic performance of a low-cost, research-stage DNA test for the most carcinogenic human papillomavirus (HPV) genotypes (HPV16, HPV18, and HPV45) in aggregate was evaluated among carcinogenic HPV-positive women, which might be used to decide who needs immediate colposcopy in low-resource settings ("triage test"). We found that the HPV16/18/45 test agreed well with two DNA tests, a GP5+/6+ genotyping assay (Kappa = 0.77) and a quantitative PCR assay (at a cutpoint of 5000 viral copies) (Kappa = 0.87). DNA sequencing of a subset of 16 HPV16/18/45-positive and 16 HPV16/18/45-negative specimens verified the analytic specificity of the research test. It is concluded that the HPV16/18/45 assay is a promising triage test with a minimum detection of approximately 5000 viral copies, the clinically relevant threshold. Published by Elsevier B.V.
Test-Taker Characteristics and Integrated Speaking Test Performance: A Path-Analytic Study
ERIC Educational Resources Information Center
Huang, Heng-Tsung Danny; Hung, Shao-Ting Alan; Hong, He-Ting Vivian
2016-01-01
This study explored the relationships among language proficiency, two selected test-taker characteristics (i.e., topical knowledge and anxiety), and integrated speaking test performance. Data collection capitalized on three sets of instruments: three integrated tasks derived from TOEFL-iBT preparation materials, the state anxiety inventory created…
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2014 CFR
2014-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... grades are specified in the following table: Table 1 of § 1065.703—Test Fuel Specifications for...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2011 CFR
2011-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2013 CFR
2013-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
40 CFR 1065.703 - Distillate diesel fuel.
Code of Federal Regulations, 2012 CFR
2012-07-01
... CONTROLS ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration... diesel fuel specified for use as a test fuel. See the standard-setting part to determine which grade to... inhibitor. (5) Pour depressant. (6) Dye. (7) Dispersant. (8) Biocide. Table 1 of § 1065.703—Test Fuel...
Three-dimensional eddy current solution of a polyphase machine test model (abstract)
NASA Astrophysics Data System (ADS)
Pahner, Uwe; Belmans, Ronnie; Ostovic, Vlado
1994-05-01
This abstract describes a three-dimensional (3D) finite element solution of a test model that has been reported in the literature. The model is a basis for calculating the current redistribution effects in the end windings of turbogenerators. The aim of the study is to see whether the analytical results of the test model can be found using a general purpose finite element package, thus indicating that the finite element model is accurate enough to treat real end winding problems. The real end winding problems cannot be solved analytically, as the geometry is far too complicated. The model consists of a polyphase coil set, containing 44 individual coils. This set generates a two pole mmf distribution on a cylindrical surface. The rotating field causes eddy currents to flow in the inner massive and conducting rotor. In the analytical solution a perfect sinusoidal mmf distribution is put forward. The finite element model contains 85824 tetrahedra and 16451 nodes. A complex single scalar potential representation is used in the nonconducting parts. The computation time required was 3 h and 42 min. The flux plots show that the field distribution is acceptable. Furthermore, the induced currents are calculated and compared with the values found from the analytical solution. The distribution of the eddy currents is very close to the distribution of the analytical solution. The most important results are the losses, both local and global. The value of the overall losses is less than 2% away from those of the analytical solution. Also the local distribution of the losses is at any given point less than 7% away from the analytical solution. The deviations of the results are acceptable and are partially due to the fact that the sinusoidal mmf distribution was not modeled perfectly in the finite element method.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
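A compact, generic particle swarm optimization loop of the kind the abstract describes, minimizing a stand-in objective. The inertia and acceleration coefficients and the toy quadratic "likelihood" are illustrative choices, not the SAG calibration set-up.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box-bounded parameter space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))     # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(np.min(pbest_val))

# Toy "negative log-likelihood" standing in for the model-vs-observables comparison.
target = np.array([0.3, -1.2, 2.0])
objective = lambda p: float(np.sum((p - target) ** 2))
bounds = np.array([[-5.0, 5.0]] * 3)
print(pso(objective, bounds))
```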
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
A ricin forensic profiling approach based on a complex set of biomarkers.
Fredriksson, Sten-Åke; Wunschel, David S; Lindström, Susanne Wiklund; Nilsson, Calle; Wahl, Karen; Åstot, Crister
2018-08-15
A forensic method for the retrospective determination of preparation methods used for illicit ricin toxin production was developed. The method was based on a complex set of biomarkers, including carbohydrates, fatty acids, and seed storage proteins, in combination with data on ricin and Ricinus communis agglutinin. The analyses were performed on samples prepared from four castor bean plant (R. communis) cultivars by four different sample preparation methods (PM1-PM4) ranging from simple disintegration of the castor beans to multi-step preparation methods including different protein precipitation methods. Comprehensive analytical data were collected by use of a range of analytical methods, and robust orthogonal partial least squares-discriminant analysis (OPLS-DA) models were constructed based on the calibration set. By the use of a decision tree and two OPLS-DA models, the sample preparation methods of test set samples were determined. The model statistics of the two models were good and a 100% rate of correct predictions of the test set was achieved. Copyright © 2018 Elsevier B.V. All rights reserved.
Goicoechea, H C; Olivieri, A C
2001-07-01
A newly developed multivariate method involving net analyte preprocessing (NAP) was tested, using central composite calibration designs of progressively decreasing size, for the simultaneous multivariate spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte-based methods.
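A minimal sketch of the net analyte signal idea that underlies NAP-style methods: the part of the analyte's signal orthogonal to the space spanned by the other components is proportional to the analyte concentration in a mixture. The spectra here are synthetic Gaussian bands and this is a generic illustration under those assumptions, not the paper's exact preprocessing.

```python
import numpy as np

wavelengths = np.linspace(200, 400, 201)
band = lambda center, width: np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

s_analyte = band(280, 15)                      # pure spectrum of the analyte of interest
S_others = np.column_stack([band(260, 20),     # spectra of the other components
                            band(310, 18),
                            band(240, 25)])

# Projector onto the orthogonal complement of the other components' spectral space.
P = np.eye(len(wavelengths)) - S_others @ np.linalg.pinv(S_others)
nas = P @ s_analyte                            # net analyte signal vector

# The projected mixture signal is proportional to the analyte concentration.
mixture = 0.7 * s_analyte + 0.4 * S_others[:, 0] + 0.2 * S_others[:, 1]
estimate = (nas @ mixture) / (nas @ s_analyte)
print(round(float(estimate), 3))               # ~0.7, the analyte's true level
```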
Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.
Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok
2015-01-01
Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with Graph Pad Prism 5(GraphPad Software Inc. CA USA). A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) along with the years. Analytical errors are embedded within our total process setup especially pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified.
Quality Assurance of RNA Expression Profiling in Clinical Laboratories
Tang, Weihua; Hu, Zhiyuan; Muallem, Hind; Gulley, Margaret L.
2012-01-01
RNA expression profiles are increasingly used to diagnose and classify disease, based on expression patterns of as many as several thousand RNAs. To ensure quality of expression profiling services in clinical settings, a standard operating procedure incorporates multiple quality indicators and controls, beginning with preanalytic specimen preparation and proceeding through analysis, interpretation, and reporting. Before testing, histopathological examination of each cellular specimen, along with optional cell enrichment procedures, ensures adequacy of the input tissue. Other tactics include endogenous controls to evaluate adequacy of RNA and exogenous or spiked controls to evaluate run- and patient-specific performance of the test system, respectively. Unique aspects of quality assurance for array-based tests include controls for the pertinent outcome signatures that often supersede controls for each individual analyte, built-in redundancy for critical analytes or biochemical pathways, and software-supported scrutiny of abundant data by a laboratory physician who interprets the findings in a manner facilitating appropriate medical intervention. Access to high-quality reagents, instruments, and software from commercial sources promotes standardization and adoption in clinical settings, once an assay is vetted in validation studies as being analytically sound and clinically useful. Careful attention to the well-honed principles of laboratory medicine, along with guidance from government and professional groups on strategies to preserve RNA and manage large data sets, promotes clinical-grade assay performance. PMID:22020152
Validation of the enthalpy method by means of analytical solution
NASA Astrophysics Data System (ADS)
Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika
2014-05-01
In recent years numerical simulations have moved from explicitly describing the cold-temperate transition surface (CTS) towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g. representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The proposed method is rather new in glaciology and, to our knowledge, has not been verified and validated against analytical solutions. Unfortunately, we still lack analytical solutions for sufficiently complex thermo-mechanically coupled polythermal ice flow. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary condition scheme and the corresponding basal melt rate calculation. Depending on the thermal situation that occurs at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients. A formerly cold ice body that has run through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice flow models (COMIce, ISSM, TIMFD3) are presented.
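A minimal sketch of the enthalpy bookkeeping such benchmarks rely on, following the common convention (e.g. Aschwanden et al., 2012) that cold ice carries enthalpy as sensible heat and temperate ice as latent heat of the water fraction. The constants and the reference state are illustrative assumptions, not the values used by the three models.

```python
# Illustrative constants (SI units); values are typical, not from the study.
C_ICE = 2009.0        # specific heat capacity of ice, J kg-1 K-1
L_FUSION = 3.34e5     # latent heat of fusion, J kg-1
T_REF = 223.15        # reference temperature for the enthalpy scale, K

def enthalpy_to_state(E, T_pmp):
    """Split enthalpy into temperature and liquid water content.

    E     : enthalpy (J kg-1) relative to T_REF
    T_pmp : pressure melting point (K)
    Below the melting-point enthalpy the ice is cold (water content 0);
    above it the ice is temperate, sitting at T_pmp with a liquid fraction.
    """
    E_pmp = C_ICE * (T_pmp - T_REF)
    if E < E_pmp:                            # cold ice
        return T_REF + E / C_ICE, 0.0
    return T_pmp, (E - E_pmp) / L_FUSION     # temperate ice

print(enthalpy_to_state(8.0e4, 273.13))      # cold: temperature below the melting point
print(enthalpy_to_state(1.05e5, 273.13))     # temperate: positive water fraction
```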
DEVELOPMENT OF DIAGNOSTIC ANALYTICAL AND MECHANICAL ABILITY TESTS THROUGH FACET DESIGN AND ANALYSIS.
ERIC Educational Resources Information Center
GUTTMAN, LOUIS,; SCHLESINGER, I.M.
Methodology based on facet theory (modified set theory) was used in test construction and analysis to provide an efficient tool of evaluation for vocational guidance and vocational school use. The type of test development undertaken was limited to the use of nonverbal pictorial items. Items for testing ability to identify elements belonging to an…
Performance specifications for the extra-analytical phases of laboratory testing: Why and how.
Plebani, Mario
2017-07-01
An important priority in the current healthcare scenario should be to address errors in laboratory testing, which account for a significant proportion of diagnostic errors. Efforts made in laboratory medicine to enhance the diagnostic process have been directed toward improving technology, achieving greater volumes and more accurate laboratory tests, but data collected in the last few years highlight the need to re-evaluate the total testing process (TTP) as the unique framework for improving quality and patient safety. Valuable quality indicators (QIs) and extra-analytical performance specifications are required for guidance in improving all TTP steps. Yet in the literature no data are available on extra-analytical performance specifications based on outcomes, nor is it possible to set any specification using calculations involving biological variability. The collection of data representing the state-of-the-art based on quality indicators is, therefore, underway. The adoption of a harmonized set of QIs, a common data collection and a standardised reporting method is mandatory, as it will not only allow the accreditation of clinical laboratories according to the International Standard, but also assure guidance for promoting improvement processes and guaranteeing quality care to patients. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Arnold, William A.; Roberts, Gary D.; Goldberg, Robert K.
2010-01-01
The reliability of impact simulations for aircraft components made with triaxial-braided carbon-fiber composites is currently limited by inadequate material property data and lack of validated material models for analysis. Methods to characterize the material properties used in the analytical models from a systematically obtained set of test data are also lacking. A macroscopic finite element based analytical model to analyze the impact response of these materials has been developed. The stiffness and strength properties utilized in the material model are obtained from a set of quasi-static in-plane tension, compression and shear coupon level tests. Full-field optical strain measurement techniques are applied in the testing, and the results are used to help in characterizing the model. The unit cell of the braided composite is modeled as a series of shell elements, where each element is modeled as a laminated composite. The braided architecture can thus be approximated within the analytical model. The transient dynamic finite element code LS-DYNA is utilized to conduct the finite element simulations, and an internal LS-DYNA constitutive model is utilized in the analysis. Methods to obtain the stiffness and strength properties required by the constitutive model from the available test data are developed. Simulations of quasi-static coupon tests and impact tests of a represented braided composite are conducted. Overall, the developed method shows promise, but improvements that are needed in test and analysis methods for better predictive capability are examined.
2011-09-01
by a single mean equinoctial element set. All of the EGP test cases employ the same observation...the non-singular equinoctial mean elements is more linear and this has positive implications for orbit determination processes based on the semi...by a single mean equinoctial element set. 5. CONCLUSIONS The GTDS Semi-analytical Satellite Theory (DSST) architecture has been extended to
A Guide for Setting the Cut-Scores to Minimize Weighted Classification Errors in Test Batteries
ERIC Educational Resources Information Center
Grabovsky, Irina; Wainer, Howard
2017-01-01
In this article, we extend the methodology of the Cut-Score Operating Function that we introduced previously and apply it to a testing scenario with multiple independent components and different testing policies. We derive analytically the overall classification error rate for a test battery under the policy when several retakes are allowed for…
Evaluation of Analytical Errors in a Clinical Chemistry Laboratory: A 3 Year Experience
Sakyi, AS; Laing, EF; Ephraim, RK; Asibey, OF; Sadique, OK
2015-01-01
Background: Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Aim: We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle including pre-, intra-, and post-analytical phases and discussed strategies pertinent to our settings to minimize their occurrence. Materials and Methods: We described the occurrence of pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with Graph Pad Prism 5(GraphPad Software Inc. CA USA). Results: A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests reduced significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) along with the years. Conclusion: Analytical errors are embedded within our total process setup especially pre-analytical and post-analytical phases. Strategic measures including quality assessment programs for staff involved in pre-analytical processes should be intensified. PMID:25745569
Poljak, Mario; Ostrbenk, Anja; Seme, Katja; Ucakar, Veronika; Hillemanns, Peter; Bokal, Eda Vrtacnik; Jancar, Nina; Klavs, Irena
2011-05-01
The clinical performance of the Abbott RealTime High Risk HPV (human papillomavirus) test (RealTime) and that of the Hybrid Capture 2 HPV DNA test (hc2) were prospectively compared in the population-based cervical cancer screening setting. In women >30 years old (n = 3,129), the clinical sensitivity of RealTime for detection of cervical intraepithelial neoplasia of grade 2 (CIN2) or worse (38 cases) and its clinical specificity for lesions of less than CIN2 (3,091 controls) were 100% and 93.3%, respectively, and those of hc2 were 97.4% and 91.8%, respectively. A noninferiority score test showed that the clinical specificity (P < 0.0001) and clinical sensitivity (P = 0.011) of RealTime were noninferior to those of hc2 at the recommended thresholds of 98% and 90%. In the total study population (women 20 to 64 years old; n = 4,432; 57 cases, 4,375 controls), the clinical sensitivity and specificity of RealTime were 98.2% and 89.5%, and those of hc2 were 94.7% and 87.7%, respectively. The analytical sensitivity and analytical specificity of RealTime in detecting targeted HPV types evaluated with the largest sample collection to date (4,479 samples) were 94.8% and 99.8%, and those of hc2 were 93.4% and 97.8%, respectively. Excellent analytical agreement between the two assays was obtained (kappa value, 0.84), while the analytical accuracy of RealTime was significantly higher than that of hc2. RealTime demonstrated high intralaboratory reproducibility and interlaboratory agreement with 500 samples retested 61 to 226 days after initial testing in two different laboratories. RealTime can be considered to be a reliable and robust HPV assay clinically comparable to hc2 for the detection of CIN2+ lesions in a population-based cervical cancer screening setting.
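The analytical agreement between the two assays above is summarized with a kappa statistic. The sketch below computes Cohen's kappa from a 2x2 agreement table; the counts are made-up placeholders, not the study's data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: assay A, columns: assay B)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    observed = np.trace(table) / n
    expected = np.sum(table.sum(axis=0) * table.sum(axis=1)) / n**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical positive/negative agreement counts for two HPV assays.
table = [[120, 10],    # A positive: B positive / B negative
         [15, 855]]    # A negative: B positive / B negative
print(round(cohens_kappa(table), 3))
```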
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
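A small sketch of the decision rule implied by the definition above: a result is flagged as an irregular analytical error when its deviation from the reference measurement procedure exceeds what routine measurement uncertainty plus method bias can explain. The coverage factor and the numbers are illustrative assumptions.

```python
def is_irregular_error(routine_result, reference_result, u_routine, bias, k=2.0):
    """Flag a single-sample result whose deviation from the reference measurement
    procedure exceeds the expanded routine uncertainty plus the known method bias
    (a linear combination with coverage factor k)."""
    allowed = k * u_routine + abs(bias)
    deviation = abs(routine_result - reference_result)
    return deviation > allowed, deviation, allowed

# Illustrative example: routine assay standard uncertainty 3 units, bias 1 unit.
print(is_irregular_error(112.0, 100.0, u_routine=3.0, bias=1.0))  # (True, 12.0, 7.0)
print(is_irregular_error(104.0, 100.0, u_routine=3.0, bias=1.0))  # (False, 4.0, 7.0)
```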
NASA Astrophysics Data System (ADS)
Johnston, Marty; Jalkio, Jeffrey
2013-04-01
By the time students have reached the intermediate-level physics courses they have been exposed to a broad set of analytical, experimental, and computational skills. However, their ability to independently integrate these skills into the study of a physical system is often weak. To address this weakness and assess their understanding of the underlying physical concepts, we have introduced laboratory homework into lecture-based, junior-level theoretical mechanics and electromagnetics courses. A laboratory homework set replaces a traditional one and emphasizes the analysis of a single system. In an exercise, students use analytical and computational tools to predict the behavior of a system and design a simple measurement to test their model. The laboratory portion of the exercises is straightforward and the emphasis is on concept integration and application. The short student reports we collect have revealed misconceptions that were not apparent in reviewing the traditional homework and test problems. Work continues on refining the current problems and expanding the problem sets.
An Empirical Evaluation of Factor Reliability.
ERIC Educational Resources Information Center
Jackson, Douglas N.; Morf, Martin E.
The psychometric reliability of a factor, defined as its generalizability across samples drawn from the same population of tests, is considered as a necessary precondition for the scientific meaningfulness of factor analytic results. A solution to the problem of generalizability is illustrated empirically on data from a set of tests designed to…
Borges, Chad R
2007-07-01
A chemometrics-based data analysis concept has been developed as a substitute for manual inspection of extracted ion chromatograms (XICs), which facilitates rapid, analyst-mediated interpretation of GC- and LC/MS(n) data sets from samples undergoing qualitative batchwise screening for prespecified sets of analytes. Automatic preparation of data into two-dimensional row space-derived scatter plots (row space plots) eliminates the need to manually interpret hundreds to thousands of XICs per batch of samples while keeping all interpretation of raw data directly in the hands of the analyst, saving great quantities of human time without loss of integrity in the data analysis process. For a given analyte, two analyte-specific variables are automatically collected by a computer algorithm and placed into a data matrix (i.e., placed into row space): the first variable is the ion abundance corresponding to scan number x and analyte-specific m/z value y, and the second variable is the ion abundance corresponding to scan number x and analyte-specific m/z value z (a second ion). These two variables serve as the two axes of the aforementioned row space plots. In order to collect appropriate scan number (retention time) information, it is necessary to analyze, as part of every batch, a sample containing a mixture of all analytes to be tested. When pure standard materials of tested analytes are unavailable, but representative ion m/z values are known and retention time can be approximated, data are evaluated based on two-dimensional scores plots from principal component analysis of small time range(s) of mass spectral data. The time-saving efficiency of this concept is directly proportional to the percentage of negative samples and to the total number of samples processed simultaneously.
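A minimal sketch of the two analyte-specific variables described above: for a given analyte, the ion abundances at its expected scan number for two characteristic m/z channels are pulled from each sample's scan-by-m/z abundance matrix and paired to form the row-space point. The array layout and analyte parameters are illustrative assumptions.

```python
import numpy as np

def row_space_point(abundance, scan, mz_index_1, mz_index_2):
    """Return the (x, y) point for one sample: ion abundances at the analyte's
    expected scan number for two analyte-specific m/z channels."""
    return abundance[scan, mz_index_1], abundance[scan, mz_index_2]

# Hypothetical batch: 5 samples, each a (scans x m/z channels) abundance matrix.
rng = np.random.default_rng(3)
batch = [rng.random((200, 50)) for _ in range(5)]
batch[2][120, 17] += 5.0        # spike one sample so it stands out (a "positive")
batch[2][120, 31] += 4.0

points = [row_space_point(s, scan=120, mz_index_1=17, mz_index_2=31) for s in batch]
for i, (x, y) in enumerate(points):
    print(f"sample {i}: ({x:.2f}, {y:.2f})")
# Plotting these pairs gives the row-space scatter plot: negatives cluster near the
# origin while samples containing the analyte fall away from the cluster.
```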
ERIC Educational Resources Information Center
Darwazeh, Afnan N.
The aim of this study was to investigate some of the learner variables that may have an influence on university academic achievement in a distance versus a conventional education setting. Descriptive and analytical statistics were used to analyze data by using "Pearson r," and "F-test." Results revealed that the university…
ERIC Educational Resources Information Center
Mowsesian, Richard; Hays, William L.
The Graduate Record Examination (GRE) Aptitude Test has been in use since 1938. In 1975 the GRE Aptitude Test was broadened to include an experimental set of items designed to tap a respondent's recognition of logical relationships and consistency of interrelated statements, and to make inferences from abstract relationships. To test the…
Cantrill, Richard C
2008-01-01
Methods of analysis for products of modern biotechnology are required for national and international trade in seeds, grain and food in order to meet the labeling or import/export requirements of different nations and trading blocks. Although many methods were developed by the originators of transgenic events, governments, universities, and testing laboratories, trade is less complicated if there exists a set of international consensus-derived analytical standards. In any analytical situation, multiple methods may exist for testing for the same analyte. These methods may be supported by regional preferences and regulatory requirements. However, tests need to be sensitive enough to determine low levels of these traits in commodity grain for regulatory purposes and also to indicate purity of seeds containing these traits. The International Organization for Standardization (ISO) and its European counterpart have worked to produce a suite of standards through open, balanced and consensus-driven processes. Presently, these standards are approaching the time for their first review. In fact, ISO 21572, the "protein standard" has already been circulated for systematic review. In order to expedite the review and revision of the nucleic acid standards an ISO Technical Specification (ISO/TS 21098) was drafted to set the criteria for the inclusion of precision data from collaborative studies into the annexes of these standards.
Maximum of a Fractional Brownian Motion: Analytic Results from Perturbation Theory.
Delorme, Mathieu; Wiese, Kay Jörg
2015-11-20
Fractional Brownian motion is a non-Markovian Gaussian process X_{t}, indexed by the Hurst exponent H. It generalizes standard Brownian motion (corresponding to H=1/2). We study the probability distribution of the maximum m of the process and the time t_{max} at which the maximum is reached. They are encoded in a path integral, which we evaluate perturbatively around a Brownian, setting H=1/2+ϵ. This allows us to derive analytic results beyond the scaling exponents. Extensive numerical simulations for different values of H test these analytical predictions and show excellent agreement, even for large ϵ.
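A small Monte Carlo sketch that can be used to check such predictions numerically: fractional Brownian motion sampled on a grid via a Cholesky factorization of its covariance, with empirical averages of the maximum and of the time at which it occurs. The grid size and sample count are illustrative assumptions.

```python
import numpy as np

def fbm_paths(H, n_steps=200, n_paths=2000, seed=0):
    """Sample fractional Brownian motion on (0, 1] via Cholesky factorization
    of the covariance 0.5 * (s^2H + t^2H - |t - s|^2H)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(1.0 / n_steps, 1.0, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))   # jitter for stability
    return t, rng.normal(size=(n_paths, n_steps)) @ L.T

for H in (0.5, 0.6):                       # H = 0.5 recovers standard Brownian motion
    t, paths = fbm_paths(H)
    m = paths.max(axis=1)                  # maximum of each path
    t_max = t[paths.argmax(axis=1)]        # time at which the maximum occurs
    print(f"H={H}: <m> = {m.mean():.3f}, <t_max> = {t_max.mean():.3f}")
```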
Parachute-deployment-parameter identification based on an analytical simulation of Viking BLDT AV-4
NASA Technical Reports Server (NTRS)
Talay, T. A.
1974-01-01
A six-degree-of-freedom analytical simulation of parachute deployment dynamics developed at the Langley Research Center is presented. A comparison study was made using flight results from the Viking Balloon Launched Decelerator Test (BLDT) AV-4. Since there are significant voids in the knowledge of vehicle and decelerator aerodynamics and suspension system physical properties, a set of deployment-parameter input has been defined which may be used as a basis for future studies of parachute deployment dynamics. The study indicates the analytical model is sufficiently sophisticated to investigate parachute deployment dynamics with reasonable accuracy.
Nikiforova, Marina N; Mercurio, Stephanie; Wald, Abigail I; Barbi de Moura, Michelle; Callenberg, Keith; Santana-Santos, Lucas; Gooding, William E; Yip, Linwah; Ferris, Robert L; Nikiforov, Yuri E
2018-04-15
Molecular tests have clinical utility for thyroid nodules with indeterminate fine-needle aspiration (FNA) cytology, although their performance requires further improvement. This study evaluated the analytical performance of the newly created ThyroSeq v3 test. ThyroSeq v3 is a DNA- and RNA-based next-generation sequencing assay that analyzes 112 genes for a variety of genetic alterations, including point mutations, insertions/deletions, gene fusions, copy number alterations, and abnormal gene expression, and it uses a genomic classifier (GC) to separate malignant lesions from benign lesions. It was validated in 238 tissue samples and 175 FNA samples with known surgical follow-up. Analytical performance studies were conducted. In the training tissue set of samples, ThyroSeq GC detected more than 100 genetic alterations, including BRAF, RAS, TERT, and DICER1 mutations, NTRK1/3, BRAF, and RET fusions, 22q loss, and gene expression alterations. GC cutoffs were established to distinguish cancer from benign nodules with 93.9% sensitivity, 89.4% specificity, and 92.1% accuracy. This correctly classified most papillary, follicular, and Hurthle cell lesions, medullary thyroid carcinomas, and parathyroid lesions. In the FNA validation set, the GC sensitivity was 98.0%, the specificity was 81.8%, and the accuracy was 90.9%. Analytical accuracy studies demonstrated a minimal required nucleic acid input of 2.5 ng, a 12% minimal acceptable tumor content, and reproducible test results under variable stress conditions. The ThyroSeq v3 GC analyzes 5 different classes of molecular alterations and provides high accuracy for detecting all common types of thyroid cancer and parathyroid lesions. The analytical sensitivity, specificity, and robustness of the test have been successfully validated and indicate its suitability for clinical use. Cancer 2018;124:1682-90. © 2018 American Cancer Society.
Fei, Yang; Zeng, Rong; Wang, Wei; He, Falin; Zhong, Kun; Wang, Zhiguo
2015-01-01
To investigate the state of the art of intra-laboratory turnaround time (intra-TAT), provide suggestions and find out whether laboratories accredited by International Organization for Standardization (ISO) 15189 or College of American Pathologists (CAP) will show better performance on intra-TAT than non-accredited ones. 479 Chinese clinical laboratories participating in the external quality assessment programs of chemistry, blood gas, and haematology tests organized by the National Centre for Clinical Laboratories in China were included in our study. General information and the median of intra-TAT of routine and stat tests in last one week were asked in the questionnaires. The response rate of clinical biochemistry, blood gas, and haematology testing were 36% (479/1307), 38% (228/598), and 36% (449/1250), respectively. More than 50% of laboratories indicated that they had set up intra-TAT median goals and almost 60% of laboratories declared they had monitored intra-TAT generally for every analyte they performed. Among all analytes we investigated, the intra-TAT of haematology analytes was shorter than biochemistry while the intra-TAT of blood gas analytes was the shortest. There were significant differences between median intra-TAT on different days of the week for routine tests. However, there were no significant differences in median intra-TAT reported by accredited laboratories and non-accredited laboratories. Many laboratories in China are aware of intra-TAT control and are making effort to reach the target. There is still space for improvement. Accredited laboratories have better status on intra-TAT monitoring and target setting than the non-accredited, but there are no significant differences in median intra-TAT reported by them.
Closing the brain-to-brain loop in laboratory testing.
Plebani, Mario; Lippi, Giuseppe
2011-07-01
The delivery of laboratory services was described 40 years ago and framed by the seminal concept of the "brain-to-brain turnaround time loop". This concept comprises several processes, including the final step, which is the action undertaken on the patient based on laboratory information. Unfortunately, the need for systematic feedback to improve the value of laboratory services has been poorly understood and, even more worryingly, poorly applied in daily laboratory practice. Currently, major problems arise from the unavailability of consensually accepted quality specifications for the extra-analytical phases of laboratory testing. This, in turn, prevents clinical laboratories from calculating a budget for the "patient-related total error". The term "total error" refers only to the analytical phase and should more precisely be called "total analytical error" to avoid confusion and misinterpretation. According to the hierarchical approach to classifying strategies for setting analytical quality specifications, the "assessment of the effect of analytical performance on specific clinical decision-making" sits at the top and should therefore be applied as far as possible to direct analytical efforts toward effective goals. In addition, an increasing number of laboratories worldwide are adopting risk management strategies such as FMEA, FRACAS, LEAN, and Six Sigma, since these techniques allow the identification of the most critical steps in the total testing process and a reduction of the patient-related risk of error. Indeed, an increasing number of laboratory professionals recognize the importance of understanding and monitoring every step in the total testing process, including the appropriateness of the test request as well as the appropriate interpretation and utilization of test results.
Johnstone, Daniel; Milward, Elizabeth A.; Berretta, Regina; Moscato, Pablo
2012-01-01
Background Recent Alzheimer's disease (AD) research has focused on finding biomarkers to identify disease at the pre-clinical stage of mild cognitive impairment (MCI), allowing treatment to be initiated before irreversible damage occurs. Many studies have examined brain imaging or cerebrospinal fluid but there is also growing interest in blood biomarkers. The Alzheimer's Disease Neuroimaging Initiative (ADNI) has generated data on 190 plasma analytes in 566 individuals with MCI, AD or normal cognition. We conducted independent analyses of this dataset to identify plasma protein signatures predicting pre-clinical AD. Methods and Findings We focused on identifying signatures that discriminate cognitively normal controls (n = 54) from individuals with MCI who subsequently progress to AD (n = 163). Based on p value, apolipoprotein E (APOE) showed the strongest difference between these groups (p = 2.3 × 10^-13). We applied a multivariate approach based on combinatorial optimization ((α,β)-k Feature Set Selection), which retains information about individual participants and maintains the context of interrelationships between different analytes, to identify the optimal set of analytes (signature) to discriminate these two groups. We identified 11-analyte signatures achieving values of sensitivity and specificity between 65% and 86% for both MCI and AD groups, depending on whether APOE was included and other factors. Classification accuracy was improved by considering "meta-features," representing the difference in relative abundance of two analytes, with an 8-meta-feature signature consistently achieving sensitivity and specificity both over 85%. Generating signatures based on longitudinal rather than cross-sectional data further improved classification accuracy, returning sensitivities and specificities of approximately 90%. Conclusions Applying these novel analysis approaches to the powerful and well-characterized ADNI dataset has identified sets of plasma biomarkers for pre-clinical AD. While studies of independent test sets are required to validate the signatures, these analyses provide a starting point for developing a cost-effective and minimally invasive test capable of diagnosing AD in its pre-clinical stages. PMID:22485168
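The "meta-features" described above, pairwise differences in relative abundance between two analytes, are straightforward to construct. The sketch below is a generic illustration of that construction only; it is not the (α,β)-k Feature Set Selection algorithm, and the array names and toy data are hypothetical.

```python
import numpy as np
from itertools import combinations

def meta_features(X, analyte_names):
    """Pairwise difference features: one column per analyte pair (i, j), value X[:, i] - X[:, j].

    Assumes X holds relative abundances (rows = samples, columns = analytes), e.g. log-scaled
    plasma levels, so each meta-feature captures the relative level of two analytes.
    """
    pairs = list(combinations(range(X.shape[1]), 2))
    M = np.column_stack([X[:, i] - X[:, j] for i, j in pairs])
    names = [f"{analyte_names[i]}-{analyte_names[j]}" for i, j in pairs]
    return M, names

# Toy data: 6 samples x 4 analytes (names are placeholders)
X = np.random.default_rng(1).normal(size=(6, 4))
M, names = meta_features(X, ["APOE", "A1", "A2", "A3"])
print(M.shape, names[:3])   # 4 analytes give 6 pairwise meta-features
```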
RUPTURES IN THE ANALYTIC SETTING AND DISTURBANCES IN THE TRANSFORMATIONAL FIELD OF DREAMS.
Brown, Lawrence J
2015-10-01
This paper explores some implications of Bleger's (1967, 2013) concept of the analytic situation, which he views as comprising the analytic setting and the analytic process. The author discusses Bleger's idea of the analytic setting as the depositary for painful aspects projected by either the analyst or the patient, or both: affects that are then rendered as nonprocess. In contrast, the contents of the analytic process are subject to an incessant process of transformation (Green 2005). The author goes on to enumerate various components of the analytic setting: the nonhuman, the object relational, and the analyst's "person" (including mental functioning). An extended clinical vignette is offered as an illustration. © 2015 The Psychoanalytic Quarterly, Inc.
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.
2017-09-01
In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulations of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method and the free energy principle, the FESF model offers an explicit, analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On the one hand, compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed, and mass conservation. Numerical tests also show that the FESF model can effectively simulate problems with high density/viscosity ratios, high Reynolds numbers, and severe topological interfacial changes.
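A minimal one-dimensional sketch of the contrast can be written along the signed distance d to the interface: the CSF model weights the surface tension σκ with a regularized delta function of finite support, whereas a free-energy/phase-field view supplies an analytical, smooth profile in d. The formulas below are the standard cosine-smoothed delta and a hyperbolic-tangent order-parameter profile; they only illustrate the idea and are not the exact FESF formulation of the paper, and all numerical values are hypothetical.

```python
import numpy as np

def delta_csf(d, eps):
    """Standard level-set regularized delta: (1/(2*eps)) * (1 + cos(pi*d/eps)) for |d| <= eps."""
    out = np.zeros_like(d)
    m = np.abs(d) <= eps
    out[m] = (1.0 + np.cos(np.pi * d[m] / eps)) / (2.0 * eps)
    return out

def delta_tanh(d, eps):
    """Analytical surface weight from the phase-field profile phi(d) = tanh(d / (sqrt(2)*eps)).

    0.5 * dphi/dd integrates to 1 over d in (-inf, inf), so it can stand in for the delta.
    """
    return np.cosh(d / (np.sqrt(2.0) * eps)) ** (-2) / (2.0 * np.sqrt(2.0) * eps)

# Surface tension body-force magnitude f = sigma * kappa * delta(d) along a 1-D cut
sigma, kappa, eps = 0.07, 50.0, 0.02            # hypothetical values
d = np.linspace(-0.1, 0.1, 401)                 # signed distance from the level set
f_csf = sigma * kappa * delta_csf(d, eps)
f_fesf = sigma * kappa * delta_tanh(d, eps)
dx = d[1] - d[0]
print((f_csf * dx).sum(), (f_fesf * dx).sum())  # both close to sigma * kappa
```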
Lack of grading agreement among international hemostasis external quality assessment programs
Olson, John D.; Jennings, Ian; Meijer, Piet; Bon, Chantal; Bonar, Roslyn; Favaloro, Emmanuel J.; Higgins, Russell A.; Keeney, Michael; Mammen, Joy; Marlar, Richard A.; Meley, Roland; Nair, Sukesh C.; Nichols, William L.; Raby, Anne; Reverter, Joan C.; Srivastava, Alok; Walker, Isobel
2018-01-01
Laboratory quality programs rely on internal quality control and external quality assessment (EQA). EQA programs provide unknown specimens for the laboratory to test, and the laboratory's result is compared with those of other (peer) laboratories performing the same test. EQA programs assign target values using a variety of statistical methods and tools, and a performance assessment of ‘pass’ or ‘fail’ is made. EQA provider members of the international organization, external quality assurance in thrombosis and hemostasis, took part in a study to compare the outcome of performance analysis using the same data set of laboratory results. Eleven EQA organizations using eight different analytical approaches participated. Data for a normal and a prolonged activated partial thromboplastin time (aPTT) and a normal and a reduced factor VIII (FVIII) from 218 laboratories were sent to the EQA providers, who analyzed the data set using their own method of evaluation for aPTT and FVIII, determining the performance for each laboratory record in the data set. Providers also summarized their statistical approach to the assignment of target values and laboratory performance. Each laboratory record in the data set was graded pass/fail by all EQA providers for each of the four analytes. There was a lack of agreement in pass/fail grading among EQA programs. Discordance in grading was 17.9% and 11% for normal and prolonged aPTT results, respectively, and 20.2% and 17.4% for normal and reduced FVIII results, respectively. All EQA programs in this study employed statistical methods compliant with International Organization for Standardization standard ISO 13528, yet the evaluation of laboratory results for all four analytes showed remarkable grading discordance. PMID:29232255
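A grading-discordance figure like those quoted above can be computed directly from a provider-by-laboratory pass/fail table. The sketch below uses hypothetical data and simply flags records on which providers disagree; it is not the evaluation procedure of any particular EQA provider.

```python
import numpy as np

def discordance(grades):
    """Fraction of laboratory records graded inconsistently across EQA providers.

    grades: bool array (n_labs x n_providers), True = pass. A record is discordant
    if at least one provider grades it differently from the others.
    """
    disagreement = grades.any(axis=1) & ~grades.all(axis=1)
    return disagreement.mean()

# Hypothetical pass/fail grades for 218 laboratory records from 11 providers
rng = np.random.default_rng(2)
grades = rng.random((218, 11)) > 0.1
print(f"discordant records: {100 * discordance(grades):.1f}%")
```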
The legal and ethical concerns that arise from using complex predictive analytics in health care.
Cohen, I Glenn; Amarasingham, Ruben; Shah, Anand; Xie, Bin; Lo, Bernard
2014-07-01
Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information. Project HOPE—The People-to-People Health Foundation, Inc.
Students' science process skill and analytical thinking ability in chemistry learning
NASA Astrophysics Data System (ADS)
Irwanto, Rohaeti, Eli; Widjajanti, Endang; Suyanta
2017-08-01
Science process skills and analytical thinking ability are needed in chemistry learning in the 21st century. Analytical thinking is related to science process skills, which students use to solve complex and unstructured problems. Thus, this research aims to determine the science process skills and analytical thinking ability of senior high school students in chemistry learning. The research was conducted at Tiga Maret Yogyakarta Senior High School, Indonesia, in the middle of the first semester of the 2015/2016 academic year using the survey method. The survey involved 21 grade XI students as participants. Students were given a set of 15 essay test questions. The results indicated that science process skills and analytical thinking ability were relatively low (30.67%). Therefore, teachers need to improve students' cognitive and psychomotor domains more effectively in the learning process.
Clinical implementation of RNA signatures for pharmacogenomic decision-making
Tang, Weihua; Hu, Zhiyuan; Muallem, Hind; Gulley, Margaret L
2011-01-01
RNA profiling is increasingly used to predict drug response, dose, or toxicity based on analysis of drug pharmacokinetic or pharmacodynamic pathways. Before implementing multiplexed RNA arrays in clinical practice, validation studies are carried out to demonstrate sufficient evidence of analytic and clinical performance, and to establish an assay protocol with quality assurance measures. Pathologists assure quality by selecting input tissue and by interpreting results in the context of the input tissue as well as the technologies that were used and the clinical setting in which the test was ordered. A strength of RNA profiling is the array-based measurement of tens to thousands of RNAs at once, including redundant tests for critical analytes or pathways to promote confidence in test results. Instrument and reagent manufacturers are crucial for supplying reliable components of the test system. Strategies for quality assurance include careful attention to RNA preservation and quality checks at pertinent steps in the assay protocol, beginning with specimen collection and proceeding through the various phases of transport, processing, storage, analysis, interpretation, and reporting. Specimen quality is checked by probing housekeeping transcripts, while spiked and exogenous controls serve as a check on analytic performance of the test system. Software is required to manipulate abundant array data and present it for interpretation by a laboratory physician who reports results in a manner facilitating therapeutic decision-making. Maintenance of the assay requires periodic documentation of personnel competency and laboratory proficiency. These strategies are shepherding genomic arrays into clinical settings to provide added value to patients and to the larger health care system. PMID:23226056
Investigation of characteristics of feed system instabilities
NASA Technical Reports Server (NTRS)
Vaage, R. D.; Fidler, L. E.; Zehnle, R. A.
1972-01-01
The relationship between the structural and feed system natural frequencies in structure-propulsion system coupled longitudinal oscillations (pogo) is investigated. The feed system frequencies are usually very dependent upon the compressibility (compliance) of cavitation bubbles that exist to some extent in all operating turbopumps. This document includes: a complete review of cavitation mechanisms; development of a turbopump cavitation compliance model; an accumulation and analysis of all available cavitation compliance test data; and a correlation of empirical-analytical results. The analytical model is based on the analysis of flow relative to a set of cascaded blades, having any described shape, and assumes phase changes occur under conditions of isentropic equilibrium. Analytical cavitation compliance predictions for the J-2 LOX, F-1 LOX, H-1 LOX and LR87 oxidizer turbopump inducers do not compare favorably with test data. The model predicts much less cavitation than is derived from the test data. This implies that mechanisms other than blade cavitation contribute significantly to the total amount of turbopump cavitation.
Hydrocarbon-Fueled Rocket Engine Plume Diagnostics: Analytical Developments and Experimental Results
NASA Technical Reports Server (NTRS)
Tejwani, Gopal D.; McVay, Gregory P.; Langford, Lester A.; St. Cyr, William W.
2006-01-01
A viewgraph presentation describing experimental results and analytical developments about plume diagnostics for hydrocarbon-fueled rocket engines is shown. The topics include: 1) SSC Plume Diagnostics Background; 2) Engine Health Monitoring Approach; 3) Rocket Plume Spectroscopy Simulation Code; 4) Spectral Simulation for 10 Atomic Species and for 11 Diatomic Molecular Electronic Bands; 5) "Best" Lines for Plume Diagnostics for Hydrocarbon-Fueled Rocket Engines; 6) Experimental Set Up for the Methane Thruster Test Program and Experimental Results; and 7) Summary and Recommendations.
The pitfalls of hair analysis for toxicants in clinical practice: three case reports.
Frisch, Melissa; Schwartz, Brian S
2002-01-01
Hair analysis is used to assess exposure to heavy metals in patients presenting with nonspecific symptoms and is a commonly used procedure in patients referred to our clinic. We are frequently called on to evaluate patients who have health-related concerns as a result of hair analysis. Three patients first presented to outside physicians with nonspecific, multisystemic symptoms. A panel of analytes was measured in hair, and one or more values were interpreted as elevated. As a result of the hair analysis and other unconventional diagnostic tests, the patients presented to us believing they suffered from metal toxicity. In this paper we review the clinical efficacy of this procedure within the context of a patient population with somatic disorders and no clear risk factors for metal intoxication. We also review limitations of hair analysis in this setting; these limitations include patient factors such as low pretest probability of disease and test factors such as the lack of validation of analytic techniques, the inability to discern between exogenous contaminants and endogenous toxicants in hair, the variability of analytic procedures, low interlaboratory reliability, and the increased likelihood of false positive test results in the measurement of panels of analytes. PMID:11940463
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
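A simplified sketch of the selection loop described above is given below. Here the projection matrix is built from spectra of analyte-free (background) samples rather than by the full net analyte signal construction of the paper, and the greedy step adds the candidate whose NAS norm lies farthest from the already selected set; all names and data are hypothetical.

```python
import numpy as np

def nas_projector(background_spectra):
    """Projector onto the subspace orthogonal to the background spectra (rows of B)."""
    B = np.atleast_2d(background_spectra)
    return np.eye(B.shape[1]) - np.linalg.pinv(B) @ B

def select_representative(candidates, selected_norms, projector, n_to_add):
    """Greedily add candidates whose scalar NAS is farthest from all selected values."""
    norms = np.linalg.norm(candidates @ projector.T, axis=1)   # scalar NAS per candidate
    selected = list(selected_norms)
    chosen, remaining = [], list(range(len(norms)))
    for _ in range(n_to_add):
        # distance of each remaining candidate to the nearest already-selected NAS value
        dist = [min(abs(norms[i] - s) for s in selected) for i in remaining]
        k = remaining.pop(int(np.argmax(dist)))
        chosen.append(k)
        selected.append(norms[k])
    return chosen

# Toy spectra: 5 background samples, 40 candidates, 120 wavelengths
rng = np.random.default_rng(3)
P = nas_projector(rng.normal(size=(5, 120)))
picked = select_representative(rng.normal(size=(40, 120)), [0.0], P, n_to_add=6)
print(picked)
```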
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC(R)/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for a quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
NASA Astrophysics Data System (ADS)
Carraro, F.; Valiani, A.; Caleffi, V.
2018-03-01
Within the framework of the de Saint Venant equations coupled with the Exner equation for morphodynamic evolution, this work presents a new efficient implementation of the Dumbser-Osher-Toro (DOT) scheme for non-conservative problems. The DOT path-conservative scheme is a robust upwind method based on a complete Riemann solver, but it has the drawback of requiring expensive numerical computations. Indeed, to compute the non-linear time evolution in each time step, the DOT scheme requires numerical computation of the flux matrix eigenstructure (the totality of eigenvalues and eigenvectors) several times at each cell edge. In this work, an analytical and compact formulation of the eigenstructure for the de Saint Venant-Exner (dSVE) model is introduced and tested in terms of numerical efficiency and stability. Using the original DOT and PRICE-C (a very efficient FORCE-type method) as reference methods, we present a convergence analysis (error against CPU time) to study the performance of the DOT method with our new analytical implementation of eigenstructure calculations (A-DOT). In particular, the numerical performance of the three methods is tested in three test cases: a movable bed Riemann problem with analytical solution; a problem with smooth analytical solution; a test in which the water flow is characterised by subcritical and supercritical regions. For a given target error, the A-DOT method is always the most efficient choice. Finally, two experimental data sets and different transport formulae are considered to test the A-DOT model in more practical case studies.
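The gain from an analytical eigenstructure can be illustrated in miniature: for a 3 x 3 flux Jacobian with three real eigenvalues (the hyperbolic regime), the characteristic cubic can be solved in closed form instead of calling a general eigensolver at every cell edge and time step. The sketch below applies the classical trigonometric (Viete) formulas to a generic matrix; it is not the dSVE eigenstructure derived in the paper.

```python
import numpy as np

def eigvals3_analytic(A):
    """Closed-form eigenvalues of a 3x3 matrix with three real eigenvalues (Viete's formulas)."""
    c2 = np.trace(A)
    c1 = 0.5 * (c2 ** 2 - np.trace(A @ A))
    c0 = np.linalg.det(A)
    # characteristic polynomial: lam^3 - c2*lam^2 + c1*lam - c0 = 0
    a, b, c = -c2, c1, -c0
    p = b - a ** 2 / 3.0
    q = 2.0 * a ** 3 / 27.0 - a * b / 3.0 + c
    # depressed cubic t^3 + p*t + q = 0 has three real roots when 4*p^3 + 27*q^2 <= 0
    r = 2.0 * np.sqrt(-p / 3.0)
    phi = np.arccos(np.clip(3.0 * q / (2.0 * p) * np.sqrt(-3.0 / p), -1.0, 1.0))
    k = np.arange(3)
    return r * np.cos(phi / 3.0 - 2.0 * np.pi * k / 3.0) - a / 3.0

# Check against a general eigensolver on a symmetric test matrix (real spectrum guaranteed)
rng = np.random.default_rng(4)
M = rng.normal(size=(3, 3))
M = 0.5 * (M + M.T)
print(np.sort(eigvals3_analytic(M)), np.sort(np.linalg.eigvals(M).real))
```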
Bhushan, Ravi; Sen, Arijit
2017-04-01
Very few Indian studies exist on the evaluation of pre-analytical variables affecting prothrombin time, the most commonly performed coagulation assay. This study was performed in an Indian tertiary care setting with the aim of quantitatively assessing the prevalence of pre-analytical variables and their effects on prothrombin time results, and hence on patient safety. The study also evaluated whether intervention corrected the results. We first evaluated the prevalence of the various pre-analytical variables detected in samples sent for prothrombin time testing. Samples with detected variables were tested wherever possible and the results noted. Samples from the same patients were then recollected and retested after ensuring that no pre-analytical variable was present, and the results were noted again to check the difference produced by the intervention. The study evaluated 9989 samples received for PT/INR over a period of 18 months. The overall prevalence of pre-analytical variables was 862 samples (8.63%). The variables detected were haemolysed samples 515 (5.16%), overfilled vacutainers 62 (0.62%), underfilled vacutainers 39 (0.39%), low values 205 (2.05%), clotted samples 11 (0.11%), wrong labelling 4 (0.04%), wrong vacutainer use 2 (0.02%), chylous samples 7 (0.07%), and samples with more than one variable 17 (0.17%). The percentage of samples showing errors was compared for four of the variables, since these could be tested both with and without the variable in place. The reduction in error percentage after intervention was 91.5%, 69.2%, 81.5%, and 95.4% for haemolysed, overfilled, underfilled, and excess-pressure-at-phlebotomy samples, respectively. Correcting the variables reduced the error percentage to a great extent for these four variables; hence, these variables affect prothrombin time testing and can compromise patient safety.
A mild alkali treated jute fibre controlling the hydration behaviour of greener cement paste
Jo, Byung-Wan; Chakraborty, Sumit
2015-01-01
To reduce the antagonistic effect of jute fibre on the setting and hydration of jute reinforced cement, modified jute fibre reinforcement would be a unique approach. The present investigation deals with the effectiveness of mild alkali treated (0.5%) jute fibre on the setting and hydration behaviour of cement. Setting time measurement, hydration test and analytical characterizations of the hardened samples (viz., FTIR, XRD, DSC, TGA, and free lime estimation) were used to evaluate the effect of alkali treated jute fibre. From the hydration test, the time (t) required to reach maximum temperature for the hydration of control cement sample is estimated to be 860 min, whilst the time (t) is measured to be 1040 min for the hydration of a raw jute reinforced cement sample. However, the time (t) is estimated to be 1020 min for the hydration of an alkali treated jute reinforced cement sample. Additionally, from the analytical characterizations, it is determined that fibre-cement compatibility is increased and hydration delaying effect is minimized by using alkali treated jute fibre as fibre reinforcement. Based on the analyses, a model has been proposed to explain the setting and hydration behaviour of alkali treated jute fibre reinforced cement composite. PMID:25592665
On the Use of Accelerated Test Methods for Characterization of Advanced Composite Materials
NASA Technical Reports Server (NTRS)
Gates, Thomas S.
2003-01-01
A rational approach to the problem of accelerated testing for material characterization of advanced polymer matrix composites is discussed. The experimental and analytical methods provided should be viewed as a set of tools useful in the screening of material systems for long-term engineering properties in aerospace applications. Consideration is given to long-term exposure in extreme environments that include elevated temperature, reduced temperature, moisture, oxygen, and mechanical load. Analytical formulations useful for predictive models that are based on the principles of time-based superposition are presented. The need for reproducible mechanisms, indicator properties, and real-time data are outlined as well as the methodologies for determining specific aging mechanisms.
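The "time-based superposition" mentioned above is commonly embodied as time-temperature superposition, in which short-term data measured at elevated temperature are shifted onto the time scale of a lower service temperature. The sketch below is only an illustration of that general idea under an assumed Arrhenius shift factor; the activation energy, temperatures, and test times are hypothetical, not values from the report.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def log_shift_factor(T, T_ref, Ea):
    """Arrhenius horizontal shift factor log10(a_T) for time-temperature superposition."""
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T_ref)

def reduced_time(t, T, T_ref, Ea):
    """Shift test times measured at temperature T onto the reference-temperature time scale."""
    return t / 10.0 ** log_shift_factor(T, T_ref, Ea)

# Hypothetical accelerated creep test: times at 120 C mapped to a 23 C service temperature
Ea = 120e3                       # assumed activation energy, J/mol
t_test = np.logspace(0, 3, 4)    # test durations in hours
print(reduced_time(t_test, T=393.15, T_ref=296.15, Ea=Ea))
```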
Revisitation of the dipole tracer test for heterogeneous porous formations
NASA Astrophysics Data System (ADS)
Zech, Alraune; D'Angelo, Claudia; Attinger, Sabine; Fiori, Aldo
2018-05-01
In this paper, a new analytical solution for interpreting dipole tests in heterogeneous media is derived by associating the shape of the tracer breakthrough curve with the log-conductivity variance. Three illustrative examples show how the solution can be used to interpret dipole field tests for geostatistical aquifer characterization. The analytical solution for the tracer breakthrough curve at the pumping well in a dipole tracer test is developed by considering a perfectly stratified formation. The analysis makes use of the travel time of a generic solute particle from the injection to the pumping well. Injection conditions are adapted to different possible field settings; solutions are presented for resident and flux-proportional injection modes as well as for an instantaneous pulse of solute and continuous solute injection. The analytical form of the solution allows a detailed investigation of the impact of heterogeneity, the tracer input conditions, and ergodicity conditions at the well. The impact of heterogeneity manifests as a significant spreading of solute particles that amplifies the natural tendency to spreading induced by the dipole setup. Furthermore, with increasing heterogeneity the number of layers needed to reach ergodic conditions becomes larger. Thus, dipole tests in highly heterogeneous aquifers might take place under non-ergodic conditions, so that the log-conductivity variance is underestimated. The method is a promising geostatistical analysis tool, being the first analytical solution for dipole tracer test interpretation that takes the heterogeneity of hydraulic conductivity into account.
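A toy Monte Carlo version of the stratified picture shows qualitatively how a larger log-conductivity variance broadens the breakthrough curve: each layer is assigned an independent travel time inversely proportional to its conductivity and the breakthrough curve is the fraction of layers that have arrived. This is a didactic simplification with hypothetical parameters, not the analytical solution derived in the paper.

```python
import numpy as np

def breakthrough_curve(t, sigma2_lnK, n_layers=2000, tau_gm=1.0, seed=0):
    """Toy BTC for a perfectly stratified dipole: fraction of layers arrived by time t.

    Each layer's travel time is tau_gm * K_g / K_i with ln K ~ N(ln K_g, sigma2_lnK),
    so a larger log-conductivity variance spreads the arrival times.
    """
    rng = np.random.default_rng(seed)
    lnK = rng.normal(0.0, np.sqrt(sigma2_lnK), n_layers)   # ln(K_i / K_g)
    tau = tau_gm * np.exp(-lnK)                             # high-K layers arrive early
    return np.array([(tau <= ti).mean() for ti in t])

t = np.linspace(0.01, 10.0, 200)
for s2 in (0.1, 0.5, 2.0):          # increasing heterogeneity
    btc = breakthrough_curve(t, s2)
    print(s2, btc[t <= 1.0][-1])    # fraction recovered by t = tau_gm
```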
Assessment in Professional Education.
ERIC Educational Resources Information Center
Elman, Sandra E.; Lynton, Ernest A.
The assessment of professional programs at the undergraduate level is discussed (i.e., engineering, business, education, nursing, and other career-oriented fields). Presently, assessment in professional education relies almost exclusively on written or oral testing of a predetermined set of cognitive and analytical skills. This is followed by…
Materials Compatibility Testing in Concentrated Hydrogen Peroxide
NASA Technical Reports Server (NTRS)
Boxwell, R.; Bromley, G.; Mason, D.; Crockett, D.; Martinez, L.; McNeal, C.; Lyles, G. (Technical Monitor)
2000-01-01
Materials test methods from the 1960's have been used as a starting point in evaluating materials for today's space launch vehicles. These established test methods have been modified to incorporate today's analytical laboratory equipment. The Orbital test objective was to test a wide range of materials to incorporate the revolution in polymer and composite materials that has occurred since the 1960's. Testing is accomplished in 3 stages from rough screening to detailed analytical tests. Several interesting test observations have been made during this testing and are included in the paper. A summary of the set-up, test and evaluation of long-term storage sub-scale tanks is also included. This sub-scale tank test lasted for a 7-month duration prior to being stopped due to a polar boss material breakdown. Chemical evaluations of the hydrogen peroxide and residue left on the polar boss surface identify the material breakdown quite clearly. The paper concludes with recommendations for future testing and a specific effort underway within the industry to standardize the test methods used in evaluating materials.
Does overgeneral autobiographical memory result from poor memory for task instructions?
Yanes, Paula K; Roberts, John E; Carlos, Erica L
2008-10-01
Considerable previous research has shown that retrieval of overgeneral autobiographical memories (OGM) is elevated among individuals suffering from various emotional disorders and those with a history of trauma. Although previous theories suggest that OGM serves the function of regulating acute negative affect, it is also possible that OGM results from difficulties in keeping the instruction set for the Autobiographical Memory Test (AMT) in working memory, or what has been coined "secondary goal neglect" (Dalgleish, 2004). The present study tested whether OGM is associated with poor memory for the task's instruction set, and whether an instruction set reminder would improve memory specificity over repeated trials. Multilevel modelling data-analytic techniques demonstrated a significant relationship between poor recall of instruction set and probability of retrieving OGMs. Providing an instruction set reminder for the AMT relative to a control task's instruction set improved memory specificity immediately afterward.
System identification of analytical models of damped structures
NASA Technical Reports Server (NTRS)
Fuh, J.-S.; Chen, S.-Y.; Berman, A.
1984-01-01
A procedure is presented for identifying linear, nonproportionally damped systems. The system damping is assumed to be representable by a real symmetric matrix. Analytical mass, stiffness, and damping matrices which constitute an approximate representation of the system are assumed to be available, along with an incomplete set of measured natural frequencies, damping ratios, and complex mode shapes of the structure, normally obtained from test data. A method is developed to find the smallest changes in the analytical model such that the improved model exactly predicts the measured modal parameters. The method uses the orthogonality relationship to improve the mass and damping matrices and the dynamic equation to find the improved stiffness matrix.
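One classical orthogonality-based correction of this kind is the Berman-Nagy minimum-change mass-matrix update, shown below only as an illustration of a "smallest change" improvement; the paper's extension to damping and stiffness is not reproduced here, and the matrices in the example are hypothetical.

```python
import numpy as np

def update_mass(Ma, Phi):
    """Minimum-change mass-matrix update so the measured modes Phi become mass-orthonormal.

    Classical Berman-Nagy form: M = Ma + Ma Phi m^-1 (I - m) m^-1 Phi^T Ma, with m = Phi^T Ma Phi.
    After the update, Phi^T M Phi equals the identity exactly.
    """
    m = Phi.T @ Ma @ Phi
    m_inv = np.linalg.inv(m)
    I = np.eye(m.shape[0])
    return Ma + Ma @ Phi @ m_inv @ (I - m) @ m_inv @ Phi.T @ Ma

# Hypothetical 4-DOF analytical mass matrix and two "measured" mode shapes
Ma = np.diag([2.0, 1.5, 1.0, 0.5])
Phi = np.array([[0.3, 0.5], [0.5, 0.2], [0.6, -0.4], [0.4, -0.7]])
M = update_mass(Ma, Phi)
print(np.round(Phi.T @ M @ Phi, 6))   # identity (to rounding) after the update
```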
Model verification of large structural systems
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1977-01-01
A methodology was formulated, and a general computer code implemented for processing sinusoidal vibration test data to simultaneously make adjustments to a prior mathematical model of a large structural system, and resolve measured response data to obtain a set of orthogonal modes representative of the test model. The derivation of estimator equations is shown along with example problems. A method for improving the prior analytic model is included.
Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.
Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F
1995-02-01
Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of the small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and the Raman spectra of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit the local processing of inputs generally performed better than other networks on all the testing data sets, and better than the CLS method of curve fitting on novel spectra of some neurotransmitters.
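The classical least-squares benchmark used above amounts to expressing a mixture spectrum as a linear combination of pure-component spectra and solving for the coefficients. A minimal sketch with synthetic Gaussian "Raman bands" follows; all band positions, component names, and concentrations are hypothetical.

```python
import numpy as np

def cls_fit(pure_spectra, mixture):
    """Classical least squares: solve mixture ~ concentrations @ pure_spectra (rows = components)."""
    coeffs, *_ = np.linalg.lstsq(pure_spectra.T, mixture, rcond=None)
    return coeffs

# Synthetic pure spectra with Gaussian bands at hypothetical Raman shifts (cm^-1)
shift = np.linspace(200, 1800, 800)
band = lambda c, w=15.0: np.exp(-0.5 * ((shift - c) / w) ** 2)
pure = np.vstack([band(750) + 0.6 * band(1250),     # "component A"
                  band(1004) + 0.4 * band(1600),    # "component B"
                  band(880) + 0.8 * band(1440)])    # "component C"

true_conc = np.array([0.2, 0.5, 0.3])
mixture = true_conc @ pure + np.random.default_rng(5).normal(0, 0.01, shift.size)
print(np.round(cls_fit(pure, mixture), 3))          # recovers approximately [0.2, 0.5, 0.3]
```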
NASA Technical Reports Server (NTRS)
Judd, M.; Wolf, S. W. D.; Goodyer, M. J.
1976-01-01
A method has been developed for accurately computing the imaginary flow fields outside a flexible walled test section, applicable to lifting and non-lifting models. The tolerances in the setting of the flexible walls introduce only small levels of aerodynamic interference at the model. While it is not possible to apply corrections for the interference effects, they may be reduced by improving the setting accuracy of the portions of wall immediately above and below the model. Interference effects of the truncation of the length of the streamlined portion of a test section are brought to an acceptably small level by the use of a suitably long test section with the model placed centrally.
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
Shephard, Mark DS; Gill, Janice P
2006-01-01
Type 2 diabetes mellitus and its major complication, renal disease, represent one of the most significant contemporary health problems facing Australia’s Indigenous Aboriginal People. The Australian Government-funded Quality Assurance for Aboriginal Medical Services Program (QAAMS) provides a framework by which on-site point-of-care testing (POCT) for haemoglobin A1c (HbA1c) and now urine albumin:creatinine ratio (ACR) can be performed to facilitate better diabetes management in Aboriginal medical services. This paper provides updated evidence for the analytical quality of POCT in the QAAMS Program. The median imprecision for point-of-care (POC) HbA1c and urine ACR quality assurance (QA) testing has continually improved over the past six and half years, stabilising at approximately 3% for both analytes and proving analytically sound in Aboriginal hands. For HbA1c, there was no statistical difference between the imprecision achieved by QAAMS and laboratory users of the Bayer DCA 2000 since the QAAMS program commenced (QAAMS CV 3.6% ± 0.52, laboratory CV 3.4% ± 0.42; p = 0.21, paired t-test). The Western Pacific Island of Tonga recently joined the QAAMS HbA1c Program indicating that the QAAMS model can also be applied internationally in other settings where the prevalence of diabetes is high. PMID:17581642
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamann, F., E-mail: franck.hamann@cea.fr; Combis, P.; Videau, L.
The one-dimensional magnetohydrodynamics of a cylindrical plasma liner is addressed in the case of a two-component magnetic field. The azimuthal component is responsible for the implosion of the liner, and the axial field is compressed inside the liner. A complete set of analytical profiles for the magnetic field components, the density, and the local velocity is proposed at the scale of the liner thickness. Numerical simulations are also presented to test the validity of the analytical formulas.
Monitoring Affect States during Effortful Problem Solving Activities
ERIC Educational Resources Information Center
D'Mello, Sidney K.; Lehman, Blair; Person, Natalie
2010-01-01
We explored the affective states that students experienced during effortful problem solving activities. We conducted a study where 41 students solved difficult analytical reasoning problems from the Law School Admission Test. Students viewed videos of their faces and screen captures and judged their emotions from a set of 14 states (basic…
Surprises and insights from long-term aquatic datasets and experiments
Walter K. Dodds; Christopher T. Robinson; Evelyn E. Gaiser; Gretchen J.A. Hansen; Heather Powell; Joseph M. Smith; Nathaniel B. Morse; Sherri L. Johnson; Stanley V. Gregory; Tisza Bell; Timothy K. Kratz; William H. McDowell
2012-01-01
Long-term research on freshwater ecosystems provides insights that can be difficult to obtain from other approaches. Widespread monitoring of ecologically relevant water-quality parameters spanning decades can facilitate important tests of ecological principles. Unique long-term data sets and analytical tools are increasingly available, allowing for powerful and...
Statistical approaches to optimize detection of MIB off-flavor in aquaculture raised channel catfish
USDA-ARS?s Scientific Manuscript database
The catfish industry prides itself on preventing inadvertent sale of off-flavor fish. Typically, several fish are taste tested over several weeks before pond harvest to confirm good fish flavor quality. We collected several data sets of analytically measured off-flavor concentrations in catfish to...
Wills, A J; Lea, Stephen E G; Leaver, Lisa A; Osthaus, Britta; Ryan, Catriona M E; Suret, Mark B; Bryant, Catherine M L; Chapman, Sue J A; Millar, Louise
2009-11-01
Pigeons (Columba livia), gray squirrels (Sciurus carolinensis), and undergraduates (Homo sapiens) learned discrimination tasks involving multiple mutually redundant dimensions. First, pigeons and undergraduates learned conditional discriminations between stimuli composed of three spatially separated dimensions, after first learning to discriminate the individual elements of the stimuli. When subsequently tested with stimuli in which one of the dimensions took an anomalous value, the majority of both species categorized test stimuli by their overall similarity to training stimuli. However, some individuals of both species categorized them according to a single dimension. In a second set of experiments, squirrels, pigeons, and undergraduates learned go/no-go discriminations using multiple simultaneous presentations of stimuli composed of three spatially integrated, highly salient dimensions. The tendency to categorize test stimuli including anomalous dimension values unidimensionally was higher than in the first set of experiments and did not differ significantly between species. The authors conclude that unidimensional categorization of multidimensional stimuli is not diagnostic for analytic cognitive processing, and that any differences between humans' and pigeons' behavior in such tasks are not due to special features of avian visual cognition.
Boeing Smart Rotor Full-scale Wind Tunnel Test Data Report
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Hagerty, Brandon; Salazar, Denise
2016-01-01
A full-scale helicopter smart material actuated rotor technology (SMART) rotor test was conducted in the USAF National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel at NASA Ames. The SMART rotor system is a five-bladed MD 902 bearingless rotor with active trailing-edge flaps. The flaps are actuated using piezoelectric actuators. Rotor performance, structural loads, and acoustic data were obtained over a wide range of rotor shaft angles of attack, thrust, and airspeeds. The primary test objective was to acquire unique validation data for the high-performance computing analyses developed under the Defense Advanced Research Projects Agency (DARPA) Helicopter Quieting Program (HQP). Other research objectives included quantifying the ability of the on-blade flaps to achieve vibration reduction, rotor smoothing, and performance improvements. This data set of rotor performance and structural loads can be used for analytical and experimental comparison studies with other full-scale rotor systems and for analytical validation of computer simulation models. The purpose of this final data report is to document a comprehensive, high-quality data set that includes only data points where the flap was actively controlled and each of the five flaps behaved in a similar manner.
DEFLECTION OF A HETEROGENEOUS WIDE-BEAM UNDER UNIFORM PRESSURE LOAD
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. V. Holschuh; T. K. Howard; W. R. Marcum
2014-07-01
Oregon State University (OSU) and the Idaho National Laboratory (INL) are currently collaborating on a test program which entails hydro-mechanical testing of a generic plate type fuel element, or generic test plate assembly (GTPA), for the purpose of qualitatively demonstrating the mechanical integrity of uranium-molybdenum monolithic plates, as compared to that of uranium-aluminum dispersion and aluminum fuel plates, when subjected to hydraulic forces. This test program supports ongoing work conducted for/by the Global Threat Reduction Initiative (GTRI) Fuels Development Program. This study's focus supports the ongoing collaborative effort by detailing the derivation of an analytic solution for deflection of a heterogeneous plate under a uniform, distributed load in order to predict the deflection of test plates in the GTPA. The resulting analytical solutions for three specific boundary condition sets are then presented against several test cases of a homogeneous plate. In all test cases considered, the results for both homogeneous and heterogeneous plates are numerically identical to one another, demonstrating correct derivation of the heterogeneous solution. Two additional problems are presented herein that provide a representative deflection profile for the plates under consideration within the GTPA. Furthermore, qualitative observations are made about the influence of a more rigid internal fuel-meat region on the overall deflection profile of a plate. Present work is being directed to experimentally confirm the analytical solution's results using select materials.
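As a homogeneous baseline for the heterogeneous solution described above, the cylindrical-bending deflection of a simply supported plate strip under uniform pressure has a standard closed form; the sketch below evaluates it with hypothetical dimensions and properties that are not those of the GTPA test plates.

```python
import numpy as np

def deflection_uniform_ss(x, q, L, E, h, nu):
    """Simply supported wide beam (plate strip) under uniform pressure q, cylindrical bending.

    w(x) = q x (L^3 - 2 L x^2 + x^3) / (24 D), with plate rigidity D = E h^3 / (12 (1 - nu^2))
    taken per unit width; the maximum deflection 5 q L^4 / (384 D) occurs at midspan.
    """
    D = E * h ** 3 / (12.0 * (1.0 - nu ** 2))
    return q * x * (L ** 3 - 2.0 * L * x ** 2 + x ** 3) / (24.0 * D)

# Hypothetical aluminum plate strip: 600 mm span, 6 mm thick, 2 kPa uniform pressure
L, h, E, nu, q = 0.6, 6e-3, 69e9, 0.33, 2e3
x = np.linspace(0.0, L, 7)
print(deflection_uniform_ss(x, q, L, E, h, nu) * 1e3)   # deflection in mm
```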
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinsky, Benjamin A.; Sahoo, Malaya K.; Sandlund, Johanna
2015-11-12
The recently developed Xpert® Ebola Assay is a novel nucleic acid amplification test for simplified detection of Ebola virus (EBOV) in whole blood and buccal swab samples. The assay targets sequences in two EBOV genes, lowering the risk for new variants to escape detection in the test. The objective of this report is to present analytical characteristics of the Xpert® Ebola Assay on whole blood samples. Our study evaluated the assay's analytical sensitivity, analytical specificity, inclusivity and exclusivity performance in whole blood specimens. EBOV RNA, inactivated EBOV, and infectious EBOV were used as targets. The dynamic range of the assay, the inactivation of virus, and specimen stability were also evaluated. The lower limit of detection (LoD) for the assay using inactivated virus was estimated to be 73 copies/mL (95% CI: 51–97 copies/mL). The LoD for infectious virus was estimated to be 1 plaque-forming unit/mL, and for RNA to be 232 copies/mL (95% CI 163–302 copies/mL). The assay correctly identified five different Ebola viruses, Yambuku-Mayinga, Makona-C07, Yambuku-Ecran, Gabon-Ilembe, and Kikwit-956210, and correctly excluded all non-EBOV isolates tested. The conditions used by Xpert® Ebola for inactivation of infectious virus reduced EBOV titer by ≥6 logs. In conclusion, we found the Xpert® Ebola Assay to have high analytical sensitivity and specificity for the detection of EBOV in whole blood. It offers ease of use, fast turnaround time, and remote monitoring. The test has an efficient viral inactivation protocol, fulfills inclusivity and exclusivity criteria, and has specimen stability characteristics consistent with the need for decentralized testing. The simplicity of the assay should enable testing in a wide variety of laboratory settings, including remote laboratories that are not capable of performing highly complex nucleic acid amplification tests, and during outbreaks where time to detection is critical.
Verification and application of the Iosipescu shear test method
NASA Technical Reports Server (NTRS)
Walrath, D. E.; Adams, D. F.
1984-01-01
Finite element models were used to study the effects of notch angle variations on the stress state within an Iosipescu shear test specimen. These analytical results were also studied to determine the feasibility of using strain gage rosettes and a modified extensometer to measure shear strains in this test specimen. Analytical results indicate that notch angle variations produced only small differences in simulated shear properties. Both strain gage rosettes and the modified extensometer were shown to be feasible shear strain transducers for the test method. The Iosipescu shear test fixture was redesigned to incorporate several improvements. These improvements include accommodation of a 50 percent larger specimen for easier measurement of shear strain, a clamping mechanism to relax strict tolerances on specimen width, and a self-contained alignment tool for use during specimen installation. A set of in-plane and interlaminar shear properties was measured for three graphite fabric/epoxy composites of T300/934 material. The three weave patterns were Oxford, 5-harness satin, and 8-harness satin.
NASA Astrophysics Data System (ADS)
Chen, Jui-Sheng; Li, Loretta Y.; Lai, Keng-Hsin; Liang, Ching-Ping
2017-11-01
A novel solution method is presented which leads to an analytical model for the advective-dispersive transport in a semi-infinite domain involving a wide spectrum of boundary inputs, initial distributions, and zero-order productions. The novel solution method applies the Laplace transform in combination with the generalized integral transform technique (GITT) to obtain the generalized analytical solution. Based on this generalized analytical expression, we derive a comprehensive set of special-case solutions for some time-dependent boundary distributions and zero-order productions, described by the Dirac delta, constant, Heaviside, exponentially-decaying, or periodically sinusoidal functions, as well as some position-dependent initial conditions and zero-order productions specified by the Dirac delta, constant, Heaviside, or exponentially-decaying functions. The developed solutions are tested against an analytical solution from the literature. The excellent agreement between the analytical solutions confirms that the new model can serve as an effective tool for investigating transport behaviors under different scenarios. Several examples of applications are given to explore transport behaviors which are rarely noted in the literature. The results show that the concentration waves resulting from the periodically sinusoidal input are sensitive to the dispersion coefficient. The implication of this new finding is that a tracer test with a periodic input may provide additional information for identifying the dispersion coefficients. Moreover, the solution strategy presented in this study can be extended to derive analytical models for handling more complicated problems of solute transport in multi-dimensional media subjected to sequential decay chain reactions, for which analytical solutions are not currently available.
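One of the best-known special cases covered by such generalized solutions, a constant-concentration (Heaviside) inlet with no production, has the classic Ogata-Banks closed form. The sketch below evaluates it as a familiar reference point only; it is not the generalized solution derived in the paper, and the transport parameters are hypothetical.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, C0=1.0):
    """Ogata-Banks solution: 1-D advection-dispersion, semi-infinite domain, constant inlet C0.

    C/C0 = 0.5 * [erfc((x - v t)/(2 sqrt(D t))) + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]
    Note: the second term can overflow for strongly advection-dominated cases (large v x / D).
    """
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * C0 * (erfc((x - v * t) / s) + np.exp(v * x / D) * erfc((x + v * t) / s))

# Breakthrough at x = 1 m for a pore velocity of 0.5 m/d and dispersion of 0.1 m^2/d
t = np.linspace(0.05, 10.0, 6)
print(np.round(ogata_banks(1.0, t, v=0.5, D=0.1), 4))
```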
The use of an analytic Hamiltonian matrix for solving the hydrogenic atom
NASA Astrophysics Data System (ADS)
Bhatti, Mohammad
2001-10-01
The non-relativistic Hamiltonian corresponding to the Schrödinger equation is converted into an analytic Hamiltonian matrix using kth-order B-spline functions. The Galerkin method is applied to the solution of the Schrödinger equation for bound states of hydrogen-like systems. The program Mathematica is used to create analytic matrix elements; exact integration is performed over the knot sequence of the B-splines, and the resulting generalized eigenvalue problem is solved on a specified numerical grid. The complete basis set and the energy spectrum are obtained for the Coulomb potential for hydrogenic systems with Z less than 100 using B-splines of order eight. Another application is given to test the Thomas-Reiche-Kuhn sum rule for hydrogenic systems.
NASA Astrophysics Data System (ADS)
Liang, Ching-Ping; Hsu, Shao-Yiu; Chen, Jui-Sheng
2016-09-01
It is recommended that an in-situ infiltration tracer test be considered for simultaneously determining the longitudinal and transverse dispersion coefficients in soil. Analytical solutions for two-dimensional advective-dispersive transport in a radial geometry have been derived in the literature and can be used for interpreting the results of such a tracer test. However, these solutions were developed for a transport domain with an unbounded radial extent and an infinitely thick vadose zone, which might not realistically represent the actual solute transport during a field infiltration tracer test. In particular, the assumption of an infinitely thick vadose zone is invalid for infiltration tracer tests conducted in soil with a shallow groundwater table. This paper describes an analytical model for interpreting the results of an infiltration tracer test based on an improved transport domain with a bounded radial extent and a finite vadose zone thickness. The analytical model is obtained by the successive application of appropriate integral transforms and their corresponding inverse transforms. A comparison of the newly derived analytical solution against previous analytical solutions, in which two distinct sets of radial extent and vadose zone thickness are considered, is conducted to determine the influence of the radial and exit boundary conditions on the solute transport. The results show that both the radial and exit boundary conditions substantially affect the trailing segment of the breakthrough curves for a soil medium with large dispersion coefficients. Previous solutions derived for a transport domain with an unbounded radial extent and an infinite vadose zone thickness give lower concentration predictions than the proposed solution at late times. Moreover, the differences between the two solutions are amplified when the observation positions are near the groundwater table. In addition, we compare our solution against the approximate solutions that were derived from the previous analytical solution and have been suggested as fast tools for simultaneously estimating the longitudinal and transverse dispersion coefficients. The results indicate that the approximate solutions offer predictions that are markedly distinct from our solution for the entire range of dispersion coefficient values. Thus, it is not appropriate to use the approximate solutions for interpreting the results of an infiltration tracer test.
Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays
Hsieh, Helen V.; Dantzler, Jeffrey L.; Weigl, Bernhard H.
2017-01-01
Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy-to-use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor’s office in Manhattan to a rural medical clinic in low-resource settings. The simplicity of the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we review conventional optimization and then focus on the latter, outlining analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness. PMID:28555034
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindberg, Michael J.
2010-09-28
Between October 14, 2009 and February 22, 2010 sediment samples were received from the 100-BC Decision Unit for geochemical studies. This is an analytical data report for sediments received from CHPRC at the 100 BC 5 OU. The analyses for this project were performed at the 325 building located in the 300 Area of the Hanford Site. The analyses were performed according to Pacific Northwest National Laboratory (PNNL) approved procedures and/or nationally recognized test procedures. The data sets include the sample identification numbers, analytical results, estimated quantification limits (EQL), and quality control data. The preparatory and analytical quality control requirements, calibration requirements, acceptance criteria, and failure actions are defined in the on-line QA plan 'Conducting Analytical Work in Support of Regulatory Programs' (CAW). This QA plan implements the Hanford Analytical Services Quality Assurance Requirements Documents (HASQARD) for PNNL.
Lemasson, Elise; Bertin, Sophie; Hennig, Philippe; Boiteux, Hélène; Lesellier, Eric; West, Caroline
2015-08-21
Impurity profiling of organic products that are synthesized as possible drug candidates requires complementary analytical methods to ensure that all impurities are identified. Supercritical fluid chromatography (SFC) is a very useful tool to achieve this objective, as an adequate selection of stationary phases can provide orthogonal separations so as to maximize the chances to see all impurities. In this series of papers, we have developed a method for achiral SFC-MS profiling of drug candidates, based on a selection of 160 analytes provided by Servier Research Laboratories. In the first part of this study, focusing on mobile phase selection, a gradient elution with carbon dioxide and methanol comprising 2% water and 20 mM ammonium acetate proved to be the best in terms of chromatographic performance, while also providing good MS response [1]. The objective of this second part was the selection of an orthogonal set of ultra-high performance stationary phases, which was carried out in two steps. Firstly, a reduced set of analytes (20) was used to screen 23 columns. The columns selected were all 1.7-2.5μm fully porous or 2.6-2.7μm superficially porous particles, with a variety of stationary phase chemistries. Derringer desirability functions were used to rank the columns according to retention window, column efficiency evaluated with peak width of selected analytes, and the proportion of analytes successfully eluted with good peak shapes. The columns providing the worst performances were thus eliminated and a shorter selection of columns (11) was obtained. Secondly, based on 160 tested analytes, the 11 columns were ranked again. The retention data obtained on these columns were then compared to define a reduced set of the best columns providing the greatest orthogonality, to maximize the chances to see all impurities within a limited number of runs. Two high-performance columns were thus selected: ACQUITY UPC(2) HSS C18 SB and Nucleoshell HILIC. Copyright © 2015 Elsevier B.V. All rights reserved.
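The abstract does not give the desirability formulas. As a minimal sketch of how Derringer-type desirability functions can fold several column metrics into a single ranking score, the snippet below combines three hypothetical, pre-normalized metrics (retention window, efficiency, fraction of analytes eluted with good peak shape) with illustrative bounds and equal weights; these names and numbers are assumptions, not the values used in the study.

```python
import numpy as np

def desirability_larger_is_better(y, low, high, s=1.0):
    """Derringer one-sided desirability: 0 below `low`, 1 above `high`,
    power-scaled ramp in between."""
    d = np.clip((y - low) / (high - low), 0.0, 1.0)
    return d ** s

def overall_desirability(metrics, bounds, weights=None):
    """Weighted geometric mean of individual desirabilities.
    `metrics` maps metric name -> value, `bounds` maps name -> (low, high)."""
    names = list(metrics)
    d = np.array([desirability_larger_is_better(metrics[n], *bounds[n]) for n in names])
    w = np.ones(len(names)) if weights is None else np.array([weights[n] for n in names])
    if np.any(d == 0):
        return 0.0  # any unacceptable metric vetoes the column
    return float(np.exp(np.sum(w * np.log(d)) / np.sum(w)))

# Hypothetical column scores (illustrative only)
column = {"retention_window": 0.62, "efficiency": 0.80, "eluted_ok": 0.90}
bounds = {"retention_window": (0.2, 0.8), "efficiency": (0.5, 1.0), "eluted_ok": (0.5, 1.0)}
print(overall_desirability(column, bounds))
```

Columns can then be ranked by their overall desirability, with the lowest-scoring columns eliminated, mirroring the two-step screening described above.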
Integrating DNA strand displacement circuitry to the nonlinear hybridization chain reaction.
Zhang, Zhuo; Fan, Tsz Wing; Hsing, I-Ming
2017-02-23
Programmable and modular attributes of DNA molecules allow one to develop versatile sensing platforms that can be operated isothermally and enzyme-free. In this work, we present an approach to integrate upstream DNA strand displacement circuits that can be turned on by a sequence-specific microRNA analyte with a downstream nonlinear hybridization chain reaction for a cascading hyperbranched nucleic acid assembly. This system provides a two-step amplification strategy for highly sensitive detection of the miRNA analyte, conducive to multiplexed detection. Multiple miRNA analytes were tested with our integrated circuitry using the same downstream signal amplification setting, showing the decoupling of the nonlinear self-assembly from the analyte sequence. Compared with the reported methods, our signal amplification approach provides an additional control module for higher-order DNA self-assembly and could be developed into a promising platform for the detection of critical nucleic-acid based biomarkers.
NASA Astrophysics Data System (ADS)
Quinta-Nova, Luis; Fernandez, Paulo; Pedro, Nuno
2017-12-01
This work focuses on developing a decision support system based on multicriteria spatial analysis to assess the potential for generating biomass residues from forestry sources in a region of Portugal (Beira Baixa). A set of environmental, economic and social criteria was defined, evaluated and weighted in the context of Saaty’s analytic hierarchies. The best alternatives were obtained after applying the Analytic Hierarchy Process (AHP). The model was applied to the central region of Portugal, where forest and agriculture are the most representative land uses. Finally, a sensitivity analysis of the set of factors and their associated weights was performed to test the robustness of the model. The proposed evaluation model provides a valuable reference for decision makers in establishing a standardized means of selecting the optimal location for new biomass plants.
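As a minimal sketch of the AHP weighting step, priority weights are the normalized principal eigenvector of a pairwise comparison matrix, and the consistency ratio checks whether the judgements are acceptably coherent. The comparison values below are hypothetical and purely illustrative; the actual criteria, judgements and weights are those elicited in the study.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix (Saaty 1-9 scale) for
# environmental, economic and social criteria; values are illustrative only.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # AHP priority weights

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)     # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
CR = CI / RI                             # consistency ratio (< 0.10 is acceptable)
print("weights:", w.round(3), "CR:", round(CR, 3))
```

The weighted criteria maps are then overlaid in the GIS to score candidate locations, which is the spatial multicriteria step the abstract refers to.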
Synthetic aperture radar images of ocean waves, theories of imaging physics and experimental tests
NASA Technical Reports Server (NTRS)
Vesecky, J. F.; Durden, S. L.; Smith, M. P.; Napolitano, D. A.
1984-01-01
The physical mechanism for the synthetic aperture radar (SAR) imaging of ocean waves is investigated through the use of analytical models. The models are tested by comparison with data sets from the SEASAT mission and airborne SARs. Dominant ocean wavelengths from SAR estimates are biased towards longer wavelengths. The quasispecular scattering mechanism agrees with experimental data. The Doppler shift for ship wakes is that of the mean sea surface.
Grade Repetition in Honduran Primary Schools
ERIC Educational Resources Information Center
Marshall, Jeffery H.
2003-01-01
This paper looks at several dimensions of the grade failure issue in Honduras using a unique data set compiled by the UMCE evaluation project in 1998 and 1999. The analytical framework incorporates econometric analysis of standardized tests and teacher pass/fail decisions for roughly 13,000 second and fourth grade students. The results show that…
ERIC Educational Resources Information Center
Crowson, H. Michael; Brandes, Joyce A.
2014-01-01
This study addressed predictors of pre-service teachers' opposition toward the practice of educating students with disabilities in mainstream classroom settings--a practice known as inclusion. We tested a hypothesized path model that incorporated social dominance orientation (SDO) and contact as distal predictors, and intergroup anxiety,…
Model verification of large structural systems. [space shuttle model response
NASA Technical Reports Server (NTRS)
Lee, L. T.; Hasselman, T. K.
1978-01-01
A computer program for the application of parameter identification to the structural dynamic models of the space shuttle and other large models with hundreds of degrees of freedom is described. Finite element, dynamic, analytic, and modal models are used to represent the structural system. The interface with math models is such that output from any structural analysis program applied to any structural configuration can be used directly. Processed data from either sine-sweep tests or resonant dwell tests are directly usable. The program uses measured modal data to condition the prior analytic model so as to improve the frequency match between model and test. A Bayesian estimator generates an improved analytical model, and a linear estimator is used in an iterative fashion on highly nonlinear equations. Mass and stiffness scaling parameters are generated for an improved finite element model, and the optimum set of parameters is obtained in one step.
Analytical challenges for conducting rapid metabolism characterization for QIVIVE.
Tolonen, Ari; Pelkonen, Olavi
2015-06-05
For quantitative in vitro-in vivo extrapolation (QIVIVE) of metabolism for the purposes of toxicokinetics prediction, a precise and robust analytical technique for identifying and measuring a chemical and its metabolites is an absolute prerequisite. Currently, high-resolution mass spectrometry (HR-MS) is the tool of choice for a majority of organic, relatively lipophilic molecules, linked with an LC separation tool and simultaneous UV detection. However, additional techniques, such as gas chromatography, radiometric measurements and NMR, are required to cover the whole spectrum of chemical structures. To accumulate enough reliable and robust data for the validation of QIVIVE, there are some partially opposing needs: detailed delineation of the in vitro test system to produce a reliable toxicokinetic measure for a studied chemical, and a throughput capacity of the in vitro set-up and the analytical tool that is as high as possible. We discuss current analytical challenges for the identification and quantification of chemicals and their metabolites, both stable and reactive, focusing especially on LC-MS techniques, but simultaneously attempting to pinpoint factors associated with sample preparation, testing conditions, and the strengths and weaknesses of a particular technique available for a particular task. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Paper-based analytical devices for environmental analysis.
Meredith, Nathan A; Quinn, Casey; Cate, David M; Reilly, Thomas H; Volckens, John; Henry, Charles S
2016-03-21
The field of paper-based microfluidics has experienced rapid growth over the past decade. Microfluidic paper-based analytical devices (μPADs), originally developed for point-of-care medical diagnostics in resource-limited settings, are now being applied in new areas, such as environmental analyses. Low-cost paper sensors show great promise for on-site environmental analysis; the theme of ongoing research complements existing instrumental techniques by providing high spatial and temporal resolution for environmental monitoring. This review highlights recent applications of μPADs for environmental analysis along with technical advances that may enable μPADs to be more widely implemented in field testing.
Dynamic Capacity and Surface Fatigue Life for Spur and Helical Gears
NASA Technical Reports Server (NTRS)
Coy, J. J.; Townsend, D. P.; Zaretsky, E. V.
1975-01-01
A mathematical model for the surface fatigue life of a gear, pinion, or entire meshing gear train is given. The theory is based on a previous statistical approach for rolling-element bearings. Equations are presented which give the dynamic capacity of the gear set. The dynamic capacity is the transmitted tangential load which gives a 90 percent probability of survival of the gear set for one million pinion revolutions. The analytical results are compared with test data for a set of AISI 9310 spur gears operating at a maximum Hertz stress of 1.71 billion N/sq m and 10,000 rpm. The theoretical life predictions are shown to be good when material constants obtained from rolling-element bearing tests were used in the gear life model.
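The capacity-life relation itself is not reproduced in the abstract; in Lundberg-Palmgren-type models of this kind it takes the generic load-life form below, where the symbols and the unspecified exponent p are illustrative rather than the authors' exact expression.

```latex
% Generic Lundberg-Palmgren-type load-life relation (illustrative notation)
L_{10} \;=\; \left(\frac{C_D}{P_t}\right)^{p}
```

Here C_D is the dynamic capacity (the tangential load giving a 90 percent probability of survival for one million pinion revolutions), P_t is the transmitted tangential load, and L_10 is the 90-percent-reliability life in millions of pinion revolutions.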
The Generalized Higher Criticism for Testing SNP-Set Effects in Genetic Association Studies
Barnett, Ian; Mukherjee, Rajarshi; Lin, Xihong
2017-01-01
It is of substantial interest to study the effects of genes, genetic pathways, and networks on the risk of complex diseases. These genetic constructs each contain multiple SNPs, which are often correlated and function jointly, and might be large in number. However, only a sparse subset of SNPs in a genetic construct is generally associated with the disease of interest. In this article, we propose the generalized higher criticism (GHC) to test for the association between an SNP set and a disease outcome. The higher criticism is a test traditionally used in high-dimensional signal detection settings when marginal test statistics are independent and the number of parameters is very large. However, these assumptions do not always hold in genetic association studies, due to linkage disequilibrium among SNPs and the finite number of SNPs in an SNP set in each genetic construct. The proposed GHC overcomes the limitations of the higher criticism by allowing for arbitrary correlation structures among the SNPs in an SNP set, while performing accurate analytic p-value calculations for any finite number of SNPs in the SNP set. We obtain the detection boundary of the GHC test. Using simulations, we empirically compared the power of the GHC method with that of existing SNP-set tests over a range of genetic regions with varied correlation structures and signal sparsity. We apply the proposed methods to analyze the CGEM breast cancer genome-wide association study. Supplementary materials for this article are available online. PMID:28736464
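For context, a minimal sketch of the classical higher criticism statistic that the GHC generalizes follows; this is the independent-case building block (in the spirit of Donoho and Jin), not the GHC itself, which additionally handles correlated SNPs and exact analytic p-value computation.

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Classical higher criticism statistic for a set of marginal p-values.
    Only illustrates the independent-case building block that the GHC extends."""
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    keep = i <= int(alpha0 * n)   # conventionally restricted to the smallest p-values
    return float(np.max(hc[keep]))

# toy example: mostly null p-values with a few small signals
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(size=95), rng.uniform(0, 1e-3, size=5)])
print(higher_criticism(pvals))
```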
Expressivism, Relativism, and the Analytic Equivalence Test
Frápolli, Maria J.; Villanueva, Neftalí
2015-01-01
The purpose of this paper is to show that, pace (Field, 2009), MacFarlane’s assessment relativism and expressivism should be sharply distinguished. We do so by arguing that relativism and expressivism exemplify two very different approaches to context-dependence. Relativism, on the one hand, shares with other contemporary approaches a bottom-up, building-block model, while expressivism is part of a different tradition, one that might include Lewis’ epistemic contextualism and Frege’s content individuation, with which it shares an organic model to deal with context-dependence. The building-block model and the organic model, and thus relativism and expressivism, are set apart with the aid of a particular test: only the building-block model is compatible with the idea that there might be analytically equivalent, and yet different, propositions. PMID:26635690
Online Updating of Statistical Inference in the Big Data Setting.
Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui
2016-01-01
We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
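A minimal sketch of the core online-updating idea for the linear model case follows; it is illustrative only, the class and variable names are hypothetical, and the paper's estimators additionally cover estimating equations, rank deficiency handling, and the predictive residual tests mentioned above.

```python
import numpy as np

class OnlineLS:
    """Accumulate the sufficient statistics X'X and X'y over data blocks arriving
    in a stream, so historical data never need to be stored or re-accessed."""
    def __init__(self, p):
        self.xtx = np.zeros((p, p))
        self.xty = np.zeros(p)
        self.n = 0

    def update(self, X, y):
        self.xtx += X.T @ X
        self.xty += X.T @ y
        self.n += len(y)

    def coef(self):
        # pinv tolerates (near-)singular X'X arising from rare-event covariates
        return np.linalg.pinv(self.xtx) @ self.xty

rng = np.random.default_rng(1)
beta = np.array([2.0, -1.0, 0.5])
model = OnlineLS(p=3)
for _ in range(100):                       # 100 streaming data blocks
    X = rng.normal(size=(50, 3))
    y = X @ beta + rng.normal(scale=0.1, size=50)
    model.update(X, y)
print(model.coef().round(3))               # recovers beta without storing the stream
```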
Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M
2015-01-01
Objective: Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. Setting: All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Primary and secondary outcome measures: Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, with the laboratory reference standard, was the primary outcome measure, analysed by calculating sensitivity, specificity, positive and negative predictive value, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. Results: The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied between good and very good, and for the erythrocyte test between fair and good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice. Conclusions: The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice. PMID:25986635
NASA Technical Reports Server (NTRS)
Gaonkar, G.
1986-01-01
For flap-lag stability of isolated rotors, experimental and analytical investigations are conducted in hover and forward flight on the adequacy of a linear quasisteady aerodynamics theory with dynamic inflow. Forward flight effects on the lag regressing mode are emphasized. Accordingly, a soft inplane hingeless rotor with three blades is tested at advance ratios as high as 0.55 and at shaft angles as high as 20 degrees. The 1.62 m model rotor is untrimmed with an essentially unrestricted tilt of the tip path plane. In combination with lag natural frequencies, collective pitch settings and flap-lag coupling parameters, the data base comprises nearly 1200 test points (damping and frequency) in forward flight and 200 test points in hover. By computerized symbolic manipulation, a linear analytical model is developed in substall to predict stability margins with mode identification. To help explain the correlation between theory and data, it also predicts substall and stall regions of the rotor disk from equilibrium values. The correlation shows both the strengths and weaknesses of the theory in substall.
NASA Astrophysics Data System (ADS)
Phanikumar, Mantha S.; McGuire, Jennifer T.
2010-08-01
Push-pull tests are a popular technique to investigate various aquifer properties and microbial reaction kinetics in situ. Most previous studies have interpreted push-pull test data using approximate analytical solutions to estimate (generally first-order) reaction rate coefficients. Though useful, these analytical solutions may not be able to describe important complexities in rate data. This paper reports the development of a multi-species, radial coordinate numerical model (PPTEST) that includes the effects of sorption, reaction lag time and arbitrary reaction order kinetics to estimate rates in the presence of mixing interfaces such as those created between injected "push" water and native aquifer water. The model has the ability to describe an arbitrary number of species and user-defined reaction rate expressions including Monod/Michaelis-Menten kinetics. The FORTRAN code uses a finite-difference numerical model based on the advection-dispersion-reaction equation and was developed to describe the radial flow and transport during a push-pull test. The accuracy of the numerical solutions was assessed by comparing numerical results with analytical solutions and field data available in the literature. The model described the observed breakthrough data for tracers (chloride and iodide-131) and reactive components (sulfate and strontium-85) well and was found to be useful for testing hypotheses related to the complex set of processes operating near mixing interfaces.
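For contrast with the numerical model, here is a minimal sketch of the kind of approximate first-order analysis mentioned above, under the usual simplifying assumption that the reactive solute is transported identically to the conservative tracer and is consumed by first-order kinetics; all numbers below are hypothetical.

```python
import numpy as np

# Relative concentrations (C/C0) recovered during the "pull" phase.
# Transport and dilution effects are assumed to cancel in the reactant/tracer
# ratio, so the log of the ratio declines linearly with elapsed time at rate -k.
t = np.array([2.0, 4.0, 8.0, 12.0, 24.0])            # hours since injection
tracer = np.array([0.80, 0.62, 0.41, 0.30, 0.12])     # conservative tracer
reactant = np.array([0.70, 0.48, 0.26, 0.16, 0.04])   # reactive solute

y = np.log(reactant / tracer)
slope, intercept = np.polyfit(t, y, 1)
print(f"estimated first-order rate k = {-slope:.3f} per hour")
```

A numerical model such as the one described above is needed precisely when this log-linear behavior breaks down, for example near mixing interfaces or with non-first-order kinetics.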
NASA Technical Reports Server (NTRS)
Meitner, P. L.; Glassman, A. J.
1980-01-01
An off-design performance loss model for a radial turbine with pivoting, variable-area stators is developed through a combination of analytical modeling and experimental data analysis. A viscous loss model is used for the variation in stator loss with setting angle, and stator vane end-clearance leakage effects are predicted by a clearance flow model. The variation of rotor loss coefficient with stator setting angle is obtained by means of an analytical matching of experimental data for a rotor that was tested with six stators, having throat areas from 20 to 144% of the design area. An incidence loss model is selected to obtain best agreement with experimental data. The stator vane end-clearance leakage model predicts increasing mass flow and decreasing efficiency as a result of end-clearances, with changes becoming significantly larger with decreasing stator area.
Kaushik, Karishma S.; Kessel, Ashley; Ratnayeke, Nalin; Gordon, Vernita D.
2015-01-01
We have developed a hands-on experimental module that combines biology experiments with a physics-based analytical model in order to characterize antimicrobial compounds. To understand antibiotic resistance, participants perform a disc diffusion assay to test the antimicrobial activity of different compounds and then apply a diffusion-based analytical model to gain insights into the behavior of the active antimicrobial component. In our experience, this module was robust, reproducible, and cost-effective, suggesting that it could be implemented in diverse settings such as undergraduate research, STEM (science, technology, engineering, and math) camps, school programs, and laboratory training workshops. By providing valuable interdisciplinary research experience in science outreach and education initiatives, this module addresses the paucity of structured training or education programs that integrate diverse scientific fields. Its low-cost requirements make it especially suitable for use in resource-limited settings. PMID:25602254
Parker, Andrew M.; Stone, Eric R.
2013-01-01
One of the most common findings in behavioral decision research is that people have unrealistic beliefs about how much they know. However, demonstrating that misplaced confidence exists does not necessarily mean that there are costs to it. This paper contrasts two approaches toward answering whether misplaced confidence is good or bad, which we have labeled the overconfidence and unjustified confidence approach. We first consider conceptual and analytic issues distinguishing these approaches. Then, we provide findings from a set of simulations designed to determine when the approaches produce different conclusions across a range of possible confidence-knowledge-outcome relationships. Finally, we illustrate the main findings from the simulations with three empirical examples drawn from our own data. We conclude that the unjustified confidence approach is typically the preferred approach, both because it is appropriate for testing a larger set of psychological mechanisms as well as for methodological reasons. PMID:25309037
Druka, Arnis; Druka, Ilze; Centeno, Arthur G; Li, Hongqiang; Sun, Zhaohui; Thomas, William T B; Bonar, Nicola; Steffenson, Brian J; Ullrich, Steven E; Kleinhofs, Andris; Wise, Roger P; Close, Timothy J; Potokina, Elena; Luo, Zewei; Wagner, Carola; Schweizer, Günther F; Marshall, David F; Kearsey, Michael J; Williams, Robert W; Waugh, Robbie
2008-11-18
A typical genetical genomics experiment results in four separate data sets; genotype, gene expression, higher-order phenotypic data and metadata that describe the protocols, processing and the array platform. Used in concert, these data sets provide the opportunity to perform genetic analysis at a systems level. Their predictive power is largely determined by the gene expression dataset where tens of millions of data points can be generated using currently available mRNA profiling technologies. Such large, multidimensional data sets often have value beyond that extracted during their initial analysis and interpretation, particularly if conducted on widely distributed reference genetic materials. Besides quality and scale, access to the data is of primary importance as accessibility potentially allows the extraction of considerable added value from the same primary dataset by the wider research community. Although the number of genetical genomics experiments in different plant species is rapidly increasing, none to date has been presented in a form that allows quick and efficient on-line testing for possible associations between genes, loci and traits of interest by an entire research community. Using a reference population of 150 recombinant doubled haploid barley lines we generated novel phenotypic, mRNA abundance and SNP-based genotyping data sets, added them to a considerable volume of legacy trait data and entered them into the GeneNetwork http://www.genenetwork.org. GeneNetwork is a unified on-line analytical environment that enables the user to test genetic hypotheses about how component traits, such as mRNA abundance, may interact to condition more complex biological phenotypes (higher-order traits). Here we describe these barley data sets and demonstrate some of the functionalities GeneNetwork provides as an easily accessible and integrated analytical environment for exploring them. By integrating barley genotypic, phenotypic and mRNA abundance data sets directly within GeneNetwork's analytical environment we provide simple web access to the data for the research community. In this environment, a combination of correlation analysis and linkage mapping provides the potential to identify and substantiate gene targets for saturation mapping and positional cloning. By integrating datasets from an unsequenced crop plant (barley) in a database that has been designed for an animal model species (mouse) with a well established genome sequence, we prove the importance of the concept and practice of modular development and interoperability of software engineering for biological data sets.
Sher, Mazhar; Zhuang, Rachel; Demirci, Utkan; Asghar, Waseem
2017-01-01
Introduction: There is a significant interest in developing inexpensive portable biosensing platforms for various applications including disease diagnostics, environmental monitoring, food safety, and water testing at the point-of-care (POC) settings. Current diagnostic assays available in the developed world require sophisticated laboratory infrastructure and expensive reagents. Hence, they are not suitable for resource-constrained settings with limited financial resources, basic health infrastructure, and few trained technicians. Cellulose and flexible transparency paper-based analytical devices have demonstrated enormous potential for developing robust, inexpensive and portable devices for disease diagnostics. These devices offer promising solutions to disease management in resource-constrained settings where the vast majority of the population cannot afford expensive and highly sophisticated treatment options. Areas covered: In this review, the authors describe currently developed cellulose and flexible transparency paper-based microfluidic devices, device fabrication techniques, and sensing technologies that are integrated with these devices. The authors also discuss the limitations and challenges associated with these devices and their potential in clinical settings. Expert commentary: In recent years, cellulose and flexible transparency paper-based microfluidic devices have demonstrated the potential to become future healthcare options despite a few limitations such as low sensitivity and reproducibility. PMID:28103450
NASA Astrophysics Data System (ADS)
Alejandro Juárez-Reyes, Salvador; Sosa-Sánchez, Citlalli Teresa; Silva-Ortigoza, Gilberto; de Jesús Cabrera-Rosas, Omar; Espíndola-Ramos, Ernesto; Ortega-Vidals, Paula
2018-03-01
Among the best known non-interferometric optical tests are the wire test, the Foucault test, and the Ronchi test with a low-frequency grating. Since the wire test is the seed for understanding the other ones, the aim of the present work is to carry out a thorough study of this test for a lens with symmetry of revolution, for any configuration of the object and detection planes in which the two planes could intersect two, one, or no branches of the caustic region (including the marginal and paraxial foci). To this end, we calculated the vectorial representation of the caustic region and found the analytical expression for the pattern; we report that the analytical pattern explicitly depends on the magnitude of a branch of the caustic. With the analytical pattern we computed a set of simulations of a dynamical adaptation of the optical wire test. From those simulations, we have carried out a thorough analysis of the topological structure of the pattern, explaining how the multiple image formation process and the image collapse process take place for each configuration, in particular when both the wire and the detection planes are placed inside the caustic region, which has not been studied before. For the first time, we remark that not only the intersections of the object and detection planes with the caustic are important in the change of pattern topology, but also the projection of the intersection between the caustic and the object plane mapped onto the detection plane, and the virtual projection of the intersection between the caustic and the detection plane mapped onto the object plane. We show that for the new configurations of the optical system, the wire image consists of curves of the Tschirnhausen cubic, piriform, and deformed eight-curve types.
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing the SRD applicability contained the results reported during one of the proficiency tests (PTs) organized by the EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, the SRD was also tested as a discriminant method alternative to existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores, the most commonly used PT performance statistics. SRD was further developed to handle the same rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, as well as revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking), SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly. The SRD technique is general in nature, i.e., it can be applied to any experimental problem in which multianalyte results obtained either by several analytical procedures, analysts, instruments, or laboratories need to be compared.
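A minimal sketch of the basic SRD computation for one laboratory against a reference column follows; the data are toy values, whereas the actual study ranks many laboratories over the full PAH set and validates the ranking by randomization and cross-validation.

```python
import numpy as np
from scipy.stats import rankdata

def srd(column, reference):
    """Sum of ranking differences for one method/laboratory: rank the objects
    (e.g. PAH analytes) by their values in each column, then sum the absolute
    rank differences to the reference ranking. Ties get average ranks."""
    return float(np.sum(np.abs(rankdata(column) - rankdata(reference))))

# Hypothetical concentrations of 5 analytes reported by two labs, plus the
# assigned reference values (numbers illustrative only).
reference = np.array([1.2, 3.4, 0.8, 2.1, 5.0])
lab_a     = np.array([1.1, 3.6, 0.9, 2.0, 4.8])
lab_b     = np.array([2.0, 1.0, 4.0, 0.5, 3.0])
print(srd(lab_a, reference), srd(lab_b, reference))  # smaller SRD = closer to reference
```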
ASVCP guidelines: quality assurance for point-of-care testing in veterinary medicine.
Flatland, Bente; Freeman, Kathleen P; Vap, Linda M; Harr, Kendal E
2013-12-01
Point-of-care testing (POCT) refers to any laboratory testing performed outside the conventional reference laboratory and implies close proximity to patients. Instrumental POCT systems consist of small, handheld or benchtop analyzers. These have potential utility in many veterinary settings, including private clinics, academic veterinary medical centers, the community (eg, remote area veterinary medical teams), and for research applications in academia, government, and industry. Concern about the quality of veterinary in-clinic testing has been expressed in published veterinary literature; however, little guidance focusing on POCT is available. Recognizing this void, the ASVCP formed a subcommittee in 2009 charged with developing quality assurance (QA) guidelines for veterinary POCT. Guidelines were developed through literature review and a consensus process. Major recommendations include (1) taking a formalized approach to POCT within the facility, (2) use of written policies, standard operating procedures, forms, and logs, (3) operator training, including periodic assessment of skills, (4) assessment of instrument analytical performance and use of both statistical quality control and external quality assessment programs, (5) use of properly established or validated reference intervals, and (6) ensuring accurate reporting of patient results. Where possible, given instrument analytical performance, use of a validated 13s control rule for interpretation of control data is recommended. These guidelines are aimed at veterinarians and veterinary technicians seeking to improve management of POCT in their clinical or research setting, and address QA of small chemistry and hematology instruments. These guidelines are not intended to be all-inclusive; rather, they provide a minimum standard for maintenance of POCT instruments in the veterinary setting. © 2013 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
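As a minimal sketch of the 13s control rule recommended above: a run is rejected if any control measurement falls outside the target mean plus or minus three standard deviations. The control values and QC limits below are hypothetical; a real implementation would draw the mean and SD from the instrument's established quality control data.

```python
import numpy as np

def rule_13s(control_values, target_mean, target_sd):
    """13s control rule: reject the run if any control measurement lies
    outside mean +/- 3 SD of the established control limits."""
    z = (np.asarray(control_values, dtype=float) - target_mean) / target_sd
    return bool(np.any(np.abs(z) > 3.0))   # True -> run is out of control

print(rule_13s([101.0, 98.5, 112.0], target_mean=100.0, target_sd=3.5))  # True
print(rule_13s([101.0, 98.5, 104.0], target_mean=100.0, target_sd=3.5))  # False
```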
Versatile electrophoresis-based self-test platform.
Guijt, Rosanne M
2015-03-01
Lab on a Chip technology offers the possibility to extract chemical information from a complex sample in a simple, automated way without the need for a laboratory setting. In the health care sector, this chemical information could be used as a diagnostic tool, for example to inform dosing. In this issue, the research underpinning a family of electrophoresis-based point-of-care devices for self-testing of ionic analytes in various sample matrices is described [Electrophoresis 2015, 36, 712-721.]. Hardware, software, and methodological changes made to improve the overall analytical performance in terms of accuracy, precision, detection limit, and reliability are discussed. In addition to the main focus of lithium monitoring, new applications are included, such as the use of the platform for veterinary purposes and for sodium and creatinine measurements. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pinsky, Benjamin A.; Sahoo, Malaya K.; Sandlund, Johanna; Kleman, Marika; Kulkarni, Medha; Grufman, Per; Nygren, Malin; Kwiatkowski, Robert; Baron, Ellen Jo; Tenover, Fred; Denison, Blake; Higuchi, Russell; Van Atta, Reuel; Beer, Neil Reginald; Carrillo, Alda Celena; Naraghi-Arani, Pejman; Mire, Chad E.; Ranadheera, Charlene; Grolla, Allen; Lagerqvist, Nina; Persing, David H.
2015-01-01
Background: The recently developed Xpert® Ebola Assay is a novel nucleic acid amplification test for simplified detection of Ebola virus (EBOV) in whole blood and buccal swab samples. The assay targets sequences in two EBOV genes, lowering the risk for new variants to escape detection in the test. The objective of this report is to present analytical characteristics of the Xpert® Ebola Assay on whole blood samples. Methods and Findings: This study evaluated the assay’s analytical sensitivity, analytical specificity, inclusivity and exclusivity performance in whole blood specimens. EBOV RNA, inactivated EBOV, and infectious EBOV were used as targets. The dynamic range of the assay, the inactivation of virus, and specimen stability were also evaluated. The lower limit of detection (LoD) for the assay using inactivated virus was estimated to be 73 copies/mL (95% CI: 51–97 copies/mL). The LoD for infectious virus was estimated to be 1 plaque-forming unit/mL, and for RNA to be 232 copies/mL (95% CI 163–302 copies/mL). The assay correctly identified five different Ebola viruses, Yambuku-Mayinga, Makona-C07, Yambuku-Ecran, Gabon-Ilembe, and Kikwit-956210, and correctly excluded all non-EBOV isolates tested. The conditions used by Xpert® Ebola for inactivation of infectious virus reduced EBOV titer by ≥6 logs. Conclusion: In summary, we found the Xpert® Ebola Assay to have high analytical sensitivity and specificity for the detection of EBOV in whole blood. It offers ease of use, fast turnaround time, and remote monitoring. The test has an efficient viral inactivation protocol, fulfills inclusivity and exclusivity criteria, and has specimen stability characteristics consistent with the need for decentralized testing. The simplicity of the assay should enable testing in a wide variety of laboratory settings, including remote laboratories that are not capable of performing highly complex nucleic acid amplification tests, and during outbreaks where time to detection is critical. PMID:26562786
NASA Technical Reports Server (NTRS)
Purves, L.; Strang, R. F.; Dube, M. P.; Alea, P.; Ferragut, N.; Hershfeld, D.
1983-01-01
The software and procedures of a system of programs used to generate a report of the statistical correlation between NASTRAN modal analysis results and physical test results from modal surveys are described. Topics discussed include: a mathematical description of statistical correlation, a user's guide for generating a statistical correlation report, a programmer's guide describing the organization and functions of individual programs leading to a statistical correlation report, and a set of examples including complete listings of programs and input and output data.
Review of design and operational characteristics of the 0.3-meter transonic cryogenic tunnel
NASA Technical Reports Server (NTRS)
Ray, E. J.; Ladson, C. L.; Adcock, J. B.; Lawing, P. L.; Hall, R. M.
1979-01-01
The fundamentals of cryogenic testing are validated both analytically and experimentally employing the 0.3-m transonic cryogenic tunnel. The tunnel with its unique Reynolds number capability has been used for a wide variety of aerodynamic tests. Techniques regarding real-gas effects have been developed and cryogenic tunnel conditions are set and maintained accurately. It is shown that cryogenic cooling, by injecting nitrogen directly into the tunnel circuit, imposes no problems with temperature distribution or dynamic response characteristics.
NASA Astrophysics Data System (ADS)
Ribera, Javier; Tahboub, Khalid; Delp, Edward J.
2015-03-01
Video surveillance systems are widely deployed for public safety. Real-time monitoring and alerting are some of the key requirements for building an intelligent video surveillance system. Real-life settings introduce many challenges that can impact the performance of real-time video analytics. Video analytics are desired to be resilient to adverse and changing scenarios. In this paper we present various approaches to characterize the uncertainty of a classifier and incorporate crowdsourcing at the times when the method is uncertain about making a particular decision. Incorporating crowdsourcing when a real-time video analytic method is uncertain about making a particular decision is known as online active learning from crowds. We evaluate our proposed approach by testing a method we developed previously for crowd flow estimation. We present three different approaches to characterize the uncertainty of the classifier in the automatic crowd flow estimation method and test them by introducing video quality degradations. Criteria to aggregate crowdsourcing results are also proposed and evaluated. An experimental evaluation is conducted using a publicly available dataset.
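The abstract does not specify the three uncertainty measures. One plausible example of such a criterion is the predictive entropy of the classifier's output distribution, sketched below with a hypothetical threshold for routing a decision to the crowd; the function and threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a classifier's predicted class distribution,
    one common way to characterize decision uncertainty."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    p = p / p.sum()
    return float(-np.sum(p * np.log(p)))

def should_crowdsource(probs, threshold=0.8):
    """Route the frame to human annotators when the classifier is too uncertain."""
    return predictive_entropy(probs) > threshold

print(should_crowdsource([0.34, 0.33, 0.33]))   # near-uniform -> ask the crowd
print(should_crowdsource([0.95, 0.03, 0.02]))   # confident    -> automatic decision
```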
Shum, Bennett O V; Henner, Ilya; Belluoccio, Daniele; Hinchcliffe, Marcus J
2017-07-01
The sensitivity and specificity of next-generation sequencing laboratory developed tests (LDTs) are typically determined by an analyte-specific approach. Analyte-specific validations use disease-specific controls to assess an LDT's ability to detect known pathogenic variants. Alternatively, a methods-based approach can be used for LDT technical validations. Methods-focused validations do not use disease-specific controls but use benchmark reference DNA that contains known variants (benign, variants of unknown significance, and pathogenic) to assess variant calling accuracy of a next-generation sequencing workflow. Recently, four whole-genome reference materials (RMs) from the National Institute of Standards and Technology (NIST) were released to standardize methods-based validations of next-generation sequencing panels across laboratories. We provide a practical method for using NIST RMs to validate multigene panels. We analyzed the utility of RMs in validating a novel newborn screening test that targets 70 genes, called NEO1. Despite the NIST RM variant truth set originating from multiple sequencing platforms, replicates, and library types, we discovered a 5.2% false-negative variant detection rate in the RM truth set genes that were assessed in our validation. We developed a strategy using complementary non-RM controls to demonstrate 99.6% sensitivity of the NEO1 test in detecting variants. Our findings have implications for laboratories or proficiency testing organizations using whole-genome NIST RMs for testing. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption
ERIC Educational Resources Information Center
Ferguson, Rebecca; Macfadyen, Leah P.; Clow, Doug; Tynan, Belinda; Alexander, Shirley; Dawson, Shane
2014-01-01
A core goal for most learning analytic projects is to move from small-scale research towards broader institutional implementation, but this introduces a new set of challenges because institutions are stable systems, resistant to change. To avoid failure and maximize success, implementation of learning analytics at scale requires explicit and…
Flood, James G; Khaliq, Tahira; Bishop, Kenneth A; Griggs, David A
2016-05-01
We implemented oral fluid (OF) as an alternative specimen type to urine for detection of cocaine (COC) and opiate abuse in outpatient addiction medicine clinics. We implemented a 2-μg/L limit of quantification OF LC-MS/MS assay and compiled and reviewed all findings from a 22-month collection period for COC, benzoylecgonine (BZE), codeine (COD), 6-acetylmorphine (MAM), and morphine (MOR). We also compared the results of our clinical samples at different OF cutoffs and analytes specified in the new 2015 SAMHSA OF guidelines. Of 3608 OF samples, COC and BZE were positive in 593 and 508, respectively. COC or BZE was positive in 662 samples. Importantly and unexpectedly, 154 samples were COC positive and BZE negative, with 125 having COC 2.0-7.9 μg/L. A simulation with the new guideline cutoffs confirmed 65% (430 of 662) of all COC- or BZE-positive data set samples. Similarly, the new guidelines confirmed 44% (263 of 603) of data set samples positive for MOR or COD. Simulation found that the new, lower MAM guideline cutoffs detected 89% of the 382 MAM-positive samples in the data set, 104 of which the new guidelines had identified as negative for MOR and COD. COC (not BZE) is the dominant low-concentration OF analyte in an addiction medicine setting. This information will aid OF test interpretation. It also illustrates the importance of the 2015 guideline's new immunoassay cross-reactivity requirements and the likely improvement in detection of heroin use stemming from the new, lower MAM cutoffs. © 2016 American Association for Clinical Chemistry.
Zietze, Stefan; Müller, Rainer H; Brecht, René
2008-03-01
In order to set up a batch-to-batch-consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements on analytical testing in early clinical drug development by use of a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was performed by clearly defined standard operating procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated and differences observed in consistency analysis could be separated into significant and incidental ones.
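As a minimal sketch of one common way LOD and LOQ can be derived from a calibration line (ICH-style 3.3·sigma/slope and 10·sigma/slope estimates), the concentrations, responses and the particular estimator below are illustrative assumptions, not the validated method's actual data.

```python
import numpy as np

# Hypothetical calibration data for a labelled glycan standard
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])        # e.g. pmol injected
resp = np.array([10.2, 19.8, 50.9, 101.5, 198.7])   # fluorescence peak area

slope, intercept = np.polyfit(conc, resp, 1)
resid = resp - (slope * conc + intercept)
sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # residual standard deviation

print(f"LOD = {3.3 * sigma / slope:.2f}, LOQ = {10 * sigma / slope:.2f}")
```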
NASA Astrophysics Data System (ADS)
Joh, Daniel Y.; Hucknall, Angus M.; Wei, Qingshan; Mason, Kelly A.; Lund, Margaret L.; Fontes, Cassio M.; Hill, Ryan T.; Blair, Rebecca; Zimmers, Zackary; Achar, Rohan K.; Tseng, Derek; Gordan, Raluca; Freemark, Michael; Ozcan, Aydogan; Chilkoti, Ashutosh
2017-08-01
The ELISA is the mainstay for sensitive and quantitative detection of protein analytes. Despite its utility, ELISA is time-consuming, resource-intensive, and infrastructure-dependent, limiting its availability in resource-limited regions. Here, we describe a self-contained immunoassay platform (the “D4 assay”) that converts the sandwich immunoassay into a point-of-care test (POCT). The D4 assay is fabricated by inkjet printing assay reagents as microarrays on nanoscale polymer brushes on glass chips, so that all reagents are “on-chip,” and these chips show durable storage stability without cold storage. The D4 assay can interrogate multiple analytes from a drop of blood, is compatible with a smartphone detector, and displays analytical figures of merit that are comparable to standard laboratory-based ELISA in whole blood. These attributes of the D4 POCT have the potential to democratize access to high-performance immunoassays in resource-limited settings without sacrificing their performance.
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav
2010-01-01
Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.
Freye, Chris E; Moore, Nicholas R; Synovec, Robert E
2018-02-16
The complementary information provided by tandem ionization time-of-flight mass spectrometry (TI-TOFMS) is investigated for comparative discovery-based analysis, when coupled with comprehensive two-dimensional gas chromatography (GC × GC). The TI conditions implemented were a hard ionization energy (70 eV) concurrently collected with a soft ionization energy (14 eV). Tile-based Fisher ratio (F-ratio) analysis is used to analyze diesel fuel spiked with twelve analytes at a nominal concentration of 50 ppm. F-ratio analysis is a supervised discovery-based technique that compares two different sample classes, in this case spiked and unspiked diesel, to reduce the complex GC × GC-TI-TOFMS data into a hit list of class-distinguishing analyte features. Hit lists of the 70 eV and 14 eV data sets, and the single hit list produced when the two data sets are fused together, are all investigated. For the 70 eV hit list, eleven of the twelve analytes were found in the top thirteen hits. For the 14 eV hit list, nine of the twelve analytes were found in the top nine hits, with the other three analytes either not found or well down the hit list. As expected, the F-ratios per m/z used to calculate each average F-ratio per hit were generally smaller fragment ions for the 70 eV data set, while the larger fragment ions were emphasized in the 14 eV data set, supporting the notion that complementary information was provided. The discovery rate was improved when F-ratio analysis was performed on the fused data sets, resulting in eleven of the twelve analytes being at the top of the single hit list. Using PARAFAC, analytes that were "discovered" were deconvoluted in order to obtain their identification via match values (MV). The locations of the analytes and the "F-ratio spectra" obtained from F-ratio analysis were used to guide the deconvolution. Eight of the twelve analytes were successfully deconvoluted and identified using the in-house library for the 70 eV data set. PARAFAC deconvolution of the two separate data sets provided increased confidence in identification of "discovered" analytes. Herein, we explore the limit of analyte discovery and limit of analyte identification, and demonstrate a general workflow for the investigation of key chemical features in complex samples. Copyright © 2018 Elsevier B.V. All rights reserved.
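As a minimal sketch of the two-class Fisher ratio underlying the tile-based workflow, the snippet below computes the between-class to within-class variance ratio for a single feature; the signals are toy values, and the published method additionally handles tile alignment, null distributions, and the fusion of the 70 eV and 14 eV channels.

```python
import numpy as np

def fisher_ratio(class_a, class_b):
    """Two-class Fisher ratio for one chromatographic feature:
    between-class variance over pooled within-class variance."""
    a, b = np.asarray(class_a, float), np.asarray(class_b, float)
    grand = np.concatenate([a, b]).mean()
    n_a, n_b = a.size, b.size
    between = n_a * (a.mean() - grand) ** 2 + n_b * (b.mean() - grand) ** 2  # df = 1
    within = (np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)) / (n_a + n_b - 2)
    return between / within

spiked   = [105.0, 98.0, 110.0, 102.0]   # hypothetical signal, spiked diesel
unspiked = [52.0, 49.0, 55.0, 50.0]      # hypothetical signal, unspiked diesel
print(round(fisher_ratio(spiked, unspiked), 1))      # large value -> class-distinguishing
```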
Test of the Semi-Analytical Case 1 and Gelbstoff Case 2 SeaWiFS Algorithm with a Global Data Set
NASA Technical Reports Server (NTRS)
Carder, Kendall L.
1997-01-01
The algorithm-development activities at USF during the second half of 1997 have concentrated on data collection and theoretical modeling. Six abstracts were submitted for presentation at the AGU conference in San Diego, California during February 9-13, 1998. Four papers were submitted to JGR and Applied Optics for publication.
ERIC Educational Resources Information Center
McCollum, Daniel L.; Kajs, Lawrence T.
2009-01-01
The goal orientation theory of motivation posits sets of beliefs people hold regarding their goals. The 2 x 2 model of goal orientations has received almost no attention in the domain of educational leadership. The present researchers used a confirmatory factor analysis to test a measure based on the hypothesized 2 x 2 model in educational…
[The requirements of standard and conditions of interchangeability of medical articles].
Men'shikov, V V; Lukicheva, T I
2013-11-01
The article deals with the possibility of applying specific approaches to evaluating the interchangeability of medical articles for laboratory analysis. In developing standardized analytical technologies for laboratory medicine and formulating standard requirements addressed to manufacturers of medical articles, clinically validated requirements are to be followed. These requirements include the sensitivity and specificity of techniques, the accuracy and precision of research results, and the stability of reagents' quality under particular conditions of transportation and storage. The validity of the requirements formulated in standards and addressed to manufacturers of medical articles can be proved using a reference system, which includes master forms and standard samples, reference techniques and reference laboratories. This approach is supported by data from the evaluation of testing systems for measurement of the level of thyrotrophic hormone, thyroid hormones and glycated hemoglobin HbA1c. Versions of testing systems can be considered interchangeable only if their results correspond to, and are comparable with, the results of the reference technique. In the absence of a functioning reference system, the resources of the Joint Committee for Traceability in Laboratory Medicine make it possible for manufacturers of reagent sets to apply certified reference materials in developing the manufacture of sets for a large list of analytes.
Garcia Hejl, Carine; Ramirez, Jose Manuel; Vest, Philippe; Chianea, Denis; Renard, Christophe
2014-09-01
Laboratories working towards accreditation under the International Organization for Standardization (ISO) 15189 standard are required to demonstrate the validity of their analytical methods. The different guidelines set by various accreditation organizations make it difficult to provide objective evidence that an in-house method is fit for the intended purpose. Besides, the required performance characteristics tests and acceptance criteria are not always detailed. The laboratory must choose the most suitable validation protocol and set the acceptance criteria. Therefore, we propose a validation protocol to evaluate the performance of an in-house method. As an example, we validated the process for the detection and quantification of lead in whole blood by electrothermal atomic absorption spectrometry. The fundamental parameters tested were selectivity, calibration model, precision, accuracy (and uncertainty of measurement), contamination, stability of the sample, reference interval, and analytical interference. We have developed a protocol that has been applied successfully to quantify lead in whole blood by electrothermal atomic absorption spectrometry (ETAAS). In particular, our method is selective, linear, accurate, and precise, making it suitable for use in routine diagnostics.
Fatigue and fracture assessment of cracks in steel elements using acoustic emission
NASA Astrophysics Data System (ADS)
Nemati, Navid; Metrovich, Brian; Nanni, Antonio
2011-04-01
Single edge notches provide a well-defined load and fatigue crack size and shape environment for estimating the stress intensity factor K, which is not found in welded elements. ASTM SE(T) specimens do not appear to provide ideal boundary conditions for proper recording of acoustic wave propagation and the crack growth behavior observed in steel bridges, but they do provide standard fatigue crack growth rate data. A modified version of the SE(T) specimen has been examined to provide small-scale specimens with improved acoustic emission (AE) characteristics while still maintaining the accuracy of fatigue crack growth rate (da/dN) versus stress intensity factor range (ΔK). The specimens are intended to represent a steel beam flange subjected to pure tension, with a surface crack growing transverse to a uniform stress field. Fatigue tests were conducted at a low R ratio. Analytical and numerical studies of the stress intensity factor were developed for single edge notch test specimens consistent with the experimental program. The ABAQUS finite element software was utilized for stress analysis of the crack tips. Analytical, experimental, and numerical results were compared to assess the ability of AE to capture a growing crack.
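As a rough illustration of the kind of stress intensity calculation involved, the sketch below evaluates K for a single-edge notch under remote tension using the widely tabulated Brown-Srawley geometry-factor polynomial. This is a generic textbook expression, not the authors' modified SE(T) geometry, and the example dimensions are made up.

```python
import math

def sif_single_edge_tension(sigma, a, W):
    """Stress intensity factor K for a single-edge notch under remote tension.

    Uses the standard Brown-Srawley polynomial for the geometry factor
    F(a/W), valid roughly for a/W <= 0.6. sigma in MPa, a and W in m;
    K is returned in MPa*sqrt(m).
    """
    r = a / W
    F = 1.12 - 0.231 * r + 10.55 * r**2 - 21.72 * r**3 + 30.39 * r**4
    return F * sigma * math.sqrt(math.pi * a)

def delta_k(delta_sigma, a, W):
    """Stress intensity factor range for a constant-amplitude stress range."""
    return sif_single_edge_tension(delta_sigma, a, W)

# Hypothetical example: 3 mm crack in a 50 mm wide strip under a 100 MPa stress range
print(delta_k(100.0, 0.003, 0.050))
```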
Koscho, Michael E; Grubbs, Robert H; Lewis, Nathan S
2002-03-15
Arrays of vapor detectors have been formed through addition of varying mass fractions of the plasticizer diethylene glycol dibenzoate to carbon black-polymer composites of poly(vinyl acetate) (PVAc) or of poly(N-vinylpyrrolidone). Addition of plasticizer in 5% mass fraction increments produced 20 compositionally different detectors from each polymer composite. Differences in vapor sorption and permeability that effected changes in the dc electrical resistance response of these compositionally different detectors allowed identification and classification of various test analytes using standard chemometric methods. Glass transition temperatures, Tg, were measured using differential scanning calorimetry for plasticized polymers having a mass fraction of 0, 0.10, 0.20, 0.30, 0.40, or 0.50 of plasticizer in the composite. The plasticized PVAc composites with Tg < 25 degrees C showed rapid responses at room temperature to all of the test analyte vapors studied in this work, whereas composites with Tg > 25 degrees C showed response times that were highly dependent on the polymer/analyte combination. These composites showed a discontinuity in the temperature dependence of their resistance, and this discontinuity provided a simple method for determining the Tg of the composite and for determining the temperature or plasticizer mass fraction above which rapid resistance responses could be obtained for all members of the test set of analyte vapors. The plasticization approach provides a method for achieving rapid detector response times as well as for producing a large number of chemically different vapor detectors from a limited number of initial chemical feedstocks.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Vrooman, H A; Maliepaard, C; van der Linden, L P; Jessurun, E R; Ludwig, J W; Plokker, H W; Schalij, M J; Weeda, H W; Laufer, J L; Huysmans, H A; Reiber, J H
1997-09-01
The authors developed an analytic software package for the objective and reproducible assessment of a single leg separation (SLS) in the outlet strut of Björk-Shiley convexoconcave (BSCC) prosthetic heart valves. The radiographic cinefilm recordings of 18 phantom valves (12 intact and 6 SLS) and of 43 patient valves were acquired. After digitization of regions of interest in a cineframe, several processing steps were carried out to obtain a one-dimensional corrected and averaged density profile along the central axis of each strut leg. To characterize the degree of possible separation, two quantitative measures were introduced: the normalized pit depth (NPD) and the depth-sigma ratio (DSR). The group of 43 patient studies was divided into a learning set (25 patients) and a test set (18 patients). All phantom valves with an SLS were detected (sensitivity, 100%) at a specificity of 100%. The threshold values for the NPD and the DSR to decide whether a fracture was present or not were 3.6 and 2.5, respectively. On the basis of the visual interpretations of the 25 patient studies (learning set) by an expert panel, it was concluded that none of the patients had an SLS. To achieve a 100% specificity by quantitative analysis, the threshold values for the NPD and the DSR were set at 5.8 and 2.5, respectively, for the patient data. Based on these threshold values, the analysis of patient data from the test set resulted in one false-negative detection and three false-positive detections. An analytic software package for the detection of an SLS was developed. Phantom data showed excellent sensitivity (100%) and specificity (100%). Further research and software development is needed to increase the sensitivity and specificity for patient data.
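A minimal sketch of how the two quantitative measures could be turned into a decision rule is shown below. The patient-data thresholds (5.8 and 2.5) are taken from the abstract; the assumption that both measures must exceed their thresholds to flag a separation is illustrative, since the abstract does not state how the two criteria are combined.

```python
def classify_strut(npd, dsr, npd_threshold=5.8, dsr_threshold=2.5):
    """Flag a possible single leg separation (SLS) when both the normalized
    pit depth (NPD) and the depth-sigma ratio (DSR) exceed their thresholds.
    Thresholds default to the patient-data values reported in the text; the
    joint (AND) decision rule is an assumption for illustration."""
    return npd > npd_threshold and dsr > dsr_threshold

# Hypothetical strut-leg measurements
print(classify_strut(npd=6.1, dsr=3.0))   # flagged
print(classify_strut(npd=4.2, dsr=3.0))   # not flagged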
Propellant Readiness Level: A Methodological Approach to Propellant Characterization
NASA Technical Reports Server (NTRS)
Bossard, John A.; Rhys, Noah O.
2010-01-01
A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or a propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants or propellant combinations; liquid, solid, and gaseous propellants as well as monopropellants and propellant combinations are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.
Analytical and Clinical Performance of Blood Glucose Monitors
Boren, Suzanne Austin; Clarke, William L.
2010-01-01
Background The objective of this study was to understand the level of performance of blood glucose monitors as assessed in the published literature. Methods Medline from January 2000 to October 2009 and reference lists of included articles were searched to identify eligible studies. Key information was abstracted from eligible studies: blood glucose meters tested, blood sample, meter operators, setting, sample of people (number, diabetes type, age, sex, and race), duration of diabetes, years using a glucose meter, insulin use, recommendations followed, performance evaluation measures, and specific factors affecting the accuracy evaluation of blood glucose monitors. Results Thirty-one articles were included in this review. Articles were categorized as review articles of blood glucose accuracy (6 articles), original studies that reported the performance of blood glucose meters in laboratory settings (14 articles) or clinical settings (9 articles), and simulation studies (2 articles). A variety of performance evaluation measures were used in the studies. The authors did not identify any studies that demonstrated a difference in clinical outcomes. Examples of analytical tools used in the description of accuracy (e.g., correlation coefficient, linear regression equations, and International Organization for Standardization standards) and how these traditional measures can complicate the achievement of target blood glucose levels for the patient were presented. The benefits of using error grid analysis to quantify the clinical accuracy of patient-determined blood glucose values were discussed. Conclusions When examining blood glucose monitor performance in the real world, it is important to consider if an improvement in analytical accuracy would lead to improved clinical outcomes for patients. There are several examples of how analytical tools used in the description of self-monitoring of blood glucose accuracy could be irrelevant to treatment decisions. PMID:20167171
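To make the contrast between analytical criteria and clinical relevance concrete, the sketch below checks meter readings against ISO 15197-style accuracy limits. The specific cut-offs used here follow the commonly cited 2013-edition form and should be treated as assumptions, since the limits differ between editions of the standard; clinical tools such as error grid analysis are not reproduced.

```python
def within_iso_limits(reference, meter, low_cut=100.0, abs_tol=15.0, rel_tol=0.15):
    """True if a meter reading falls within an ISO 15197-style accuracy limit
    (+/-15 mg/dL below the cut-off, +/-15% at or above it); values are
    illustrative and vary by edition of the standard."""
    if reference < low_cut:
        return abs(meter - reference) <= abs_tol
    return abs(meter - reference) <= rel_tol * reference

def fraction_within_limits(pairs):
    """Fraction of (reference, meter) pairs meeting the criterion; the
    standard asks for at least 95% of results within the limits."""
    hits = sum(within_iso_limits(r, m) for r, m in pairs)
    return hits / len(pairs)

# Hypothetical paired readings in mg/dL
print(fraction_within_limits([(85, 95), (150, 160), (200, 250)]))
```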
Comparison of mine waste assessment methods at the Rattler mine site, Virginia Canyon, Colorado
Hageman, Phil L.; Smith, Kathleen S.; Wildeman, Thomas R.; Ranville, James F.
2005-01-01
In a joint project, the mine waste-piles at the Rattler Mine near Idaho Springs, Colorado, were sampled and analyzed by scientists from the U.S. Geological Survey (USGS) and the Colorado School of Mines (CSM). Separate sample collection, sample leaching, and leachate analyses were performed by both groups and the results were compared. For the study, both groups used the USGS sampling procedure and the USGS Field Leach Test (FLT). The leachates generated from these tests were analyzed for a suite of elements using ICP-AES (CSM) and ICP-MS (USGS). Leachate geochemical fingerprints produced by the two groups for composites collected from the same mine waste showed good agreement. In another set of tests, CSM collected another set of Rattler mine waste composite samples using the USGS sampling procedure. This set of composite samples was leached using the Colorado Division of Minerals and Geology (CDMG) leach test, and a modified Toxicity Characteristic Leaching Procedure (TCLP) leach test. Leachate geochemical fingerprints produced using these tests showed a variation of more than a factor of two from the geochemical fingerprints produced using the USGS FLT leach test. We have concluded that the variation in the results is due to the different parameters of the leaching tests and not due to the sampling or analytical methods.
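A simple way to screen two leachate "geochemical fingerprints" for the factor-of-two variation mentioned above is sketched here. The dictionary layout (element name to concentration) and the element-wise ratio screen are illustrative assumptions, not the comparison procedure used in the study.

```python
def fingerprint_agreement(leachate_a, leachate_b, factor=2.0):
    """Compare two leachate fingerprints (dicts of element -> concentration)
    and return the elements whose concentrations differ by more than the
    given factor. The factor-of-two screen mirrors the variation reported
    in the text; the data layout is an illustrative assumption."""
    flagged = {}
    for element in leachate_a.keys() & leachate_b.keys():
        a, b = leachate_a[element], leachate_b[element]
        if a <= 0 or b <= 0:
            continue  # skip non-detects for this simple screen
        ratio = max(a / b, b / a)
        if ratio > factor:
            flagged[element] = ratio
    return flagged

# Hypothetical leachate concentrations (mg/L)
usgs = {"Zn": 12.0, "Cu": 3.0, "Pb": 0.8}
cdmg = {"Zn": 30.0, "Cu": 3.5, "Pb": 0.7}
print(fingerprint_agreement(usgs, cdmg))
```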
NASA Technical Reports Server (NTRS)
Graf, John
2015-01-01
NASA has been developing and testing two different types of oxygen separation systems. One type uses pressure swing technology; the other uses a solid electrolyte electrochemical oxygen separation cell. Both development systems have been subjected to long-term testing and to performance testing under a variety of environmental and operational conditions. Testing these two systems revealed that measuring the product purity of oxygen, and determining whether an oxygen separation device meets Aviator's Breathing Oxygen (ABO) specifications, is a subtle and sometimes difficult analytical chemistry job. Verifying the product purity of cryogenically produced oxygen presents a different set of analytical chemistry challenges. This presentation describes some of the sample acquisition and analytical chemistry challenges involved in verifying oxygen produced by an oxygen separator and oxygen produced by cryogenic separation processes. The primary contaminant that causes gas samples to fail ABO requirements is water; the maximum amount of water vapor allowed is 7 ppmv. The principal challenge of verifying oxygen produced by an oxygen separator is that the oxygen is produced relatively slowly and at comparatively low temperatures. A short-term failure lasting just a few minutes in the course of a one-week run could cause an entire tank to be rejected, so continuous monitoring of oxygen purity and water vapor can identify problems as soon as they occur. Long-term oxygen separator tests were instrumented with an oxygen analyzer and a hygrometer, a GE Moisture Monitor Series 35, which uses an aluminum oxide sensor. The user's manual does not report this, but long-term exposure to pure oxygen causes the aluminum oxide sensor head to bias dry. Oxygen product that exceeded the 7 ppmv specification was improperly accepted because the sensor had biased. The bias is permanent - exposure to air does not return the sensor to its original response - but it can be accounted for by recalibrating the sensor. After this issue was found, continuous measurements of water vapor in the oxygen product were made using an FTIR. The FTIR cell is relatively large, so the response time is slow, but the moisture measurements were repeatable and accurate. Verifying ABO compliance for oxygen produced by commercial cryogenic processes has a different set of sample acquisition and analytical chemistry challenges. Customers want analytical chemists to conserve as much of the product as possible. Hygrometers are not exposed to hours of continuous oxygen flow, so they do not bias, but small amounts of contamination in valves can cause a "fail". K-bottles are periodically cleaned and recertified, and after cleaning, residual moisture can cause a "fail". If operators let the bottle pressure drop to room pressure, outside air is introduced into the bottle and the subsequent fill will "fail". Outside storage of K-bottles has allowed enough in-leakage that the contents "fail".
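A minimal sketch of the water-vapor compliance check is given below, using the 7 ppmv limit cited in the text. The simple additive correction for a hygrometer that has drifted dry is an assumption for illustration; a real recalibration would follow the instrument's own procedure.

```python
PPMV_WATER_LIMIT = 7.0  # ABO limit on water vapor cited in the text

def corrected_water_ppmv(raw_ppmv, bias_ppmv):
    """Apply a recalibration offset to a hygrometer reading that has biased
    dry after long oxygen exposure. The additive form of the correction is
    an assumption, not the instrument's documented procedure."""
    return raw_ppmv + bias_ppmv

def passes_abo_water_spec(raw_ppmv, bias_ppmv=0.0):
    """True if the bias-corrected water content meets the 7 ppmv limit."""
    return corrected_water_ppmv(raw_ppmv, bias_ppmv) <= PPMV_WATER_LIMIT

# Hypothetical reading of 5.5 ppmv from a sensor known to read 3 ppmv dry
print(passes_abo_water_spec(5.5, bias_ppmv=3.0))  # fails once corrected
```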
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neymark, J.; Kennedy, M.; Judkoff, R.
This report documents a set of diagnostic analytical verification cases for testing the ability of whole building simulation software to model the air distribution side of typical heating, ventilating and air conditioning (HVAC) equipment. These cases complement the unitary equipment cases included in American National Standards Institute (ANSI)/American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, which test the ability to model the heat-transfer fluid side of HVAC equipment.
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
Dispersion relations with crossing symmetry for ππ D- and F-wave amplitudes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaminski, R.
A set of once subtracted dispersion relations with imposed crossing symmetry condition for the ππ D- and F-wave amplitudes is derived and analyzed. An example of numerical calculations in the effective two-pion mass range from the threshold to 1.1 GeV is presented. It is shown that these new dispersion relations impose quite strong constraints on the analyzed ππ interactions and are very useful tools to test the ππ amplitudes. One of the goals of this work is to provide a complete set of equations required for easy use. Full analytical expressions are presented. Along with the well-known dispersion relations successful in testing the ππ S- and P-wave amplitudes, those presented here for the D and F waves give a complete set of tools for analyses of the ππ interactions.
Mirmohseni, A; Abdollahi, H; Rostamizadeh, K
2007-02-28
A net analyte signal (NAS)-based method called HLA/GO was applied for the selective determination of a binary mixture of ethanol and water with a quartz crystal nanobalance (QCN) sensor. A full factorial design was applied for the formation of the calibration and prediction sets in the concentration ranges 5.5-22.2 microg mL(-1) for ethanol and 7.01-28.07 microg mL(-1) for water. An optimal time range was selected by a procedure based on the calculation of the net analyte signal regression plot in each considered time window for each test sample. A moving window strategy was used to search for the region with maximum linearity of the NAS regression plot (minimum error indicator) and minimum PRESS value. On the basis of the results obtained, the differences in the adsorption profiles in the time range between 1 and 600 s were used to determine mixtures of both compounds by the HLA/GO method. The calculation of the net analyte signal using the HLA/GO method allows determination of several figures of merit, such as selectivity, sensitivity, analytical sensitivity, and limit of detection, for each component. To check the ability of the proposed method to select linear regions of the adsorption profile, a test for detecting non-linear regions of the adsorption profile data in the presence of methanol was also described. The results showed that the method was successfully applied to the determination of ethanol and water.
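The core of any NAS-based method is a projection of the measured response onto the space orthogonal to the other components' profiles. The sketch below shows that generic projection only; the HLA/GO moving-window search and PRESS-based window selection described above are not reproduced, and the sorption profiles in the example are invented.

```python
import numpy as np

def net_analyte_signal(response, interferent_profiles):
    """Project a measured response vector onto the space orthogonal to the
    interferent profiles (columns of interferent_profiles), giving the net
    analyte signal for the analyte of interest."""
    S = np.atleast_2d(interferent_profiles)
    if S.shape[0] != response.shape[0]:
        S = S.T                        # ensure rows = time points
    P = S @ np.linalg.pinv(S)          # projector onto the interferent space
    return (np.eye(len(response)) - P) @ response

# Hypothetical QCN sorption profiles over 1-600 s
t = np.linspace(0, 600, 200)
water = np.exp(-t / 300.0)                 # assumed water sorption profile
ethanol = 1.0 - np.exp(-t / 150.0)         # assumed ethanol sorption profile
mixture = 0.7 * ethanol + 1.2 * water
print(np.linalg.norm(net_analyte_signal(mixture, water[:, None])))
```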
Analytical modeling and experimental validation of a magnetorheological mount
NASA Astrophysics Data System (ADS)
Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad
2009-03-01
Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension system of several high performance vehicles has been equipped with MR fluid based dampers and research is ongoing to develop MR fluid based mounts for engine and powertrain isolation. MR fluid based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode. Each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters. Furthermore, in order to produce the actual prototype, the analytical model was used to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model in predicting the response of the MR mount. Based on the observed response of the mount, a suitable controller can be designed for it. However, the control scheme is not addressed in this study.
LOLAweb: a containerized web server for interactive genomic locus overlap enrichment analysis.
Nagraj, V P; Magee, Neal E; Sheffield, Nathan C
2018-06-06
The past few years have seen an explosion of interest in understanding the role of regulatory DNA. This interest has driven large-scale production of functional genomics data and analytical methods. One popular analysis is to test for enrichment of overlaps between a query set of genomic regions and a database of region sets. In this way, new genomic data can be easily connected to annotations from external data sources. Here, we present an interactive interface for enrichment analysis of genomic locus overlaps using a web server called LOLAweb. LOLAweb accepts a set of genomic ranges from the user and tests it for enrichment against a database of region sets. LOLAweb renders results in an R Shiny application to provide interactive visualization features, enabling users to filter, sort, and explore enrichment results dynamically. LOLAweb is built and deployed in a Linux container, making it scalable to many concurrent users on our servers and also enabling users to download and run LOLAweb locally.
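Region-set enrichment analyses of this kind typically rest on a contingency-table test of overlap counts. The sketch below uses Fisher's exact test on a simplified 2x2 layout; the exact universe construction and table definition that LOLAweb uses internally are not reproduced here, so treat the arguments as assumptions.

```python
from scipy.stats import fisher_exact

def overlap_enrichment(query_hits, query_total, db_hits, db_total):
    """One-sided Fisher's exact test for enrichment of overlaps between a
    query region set and one database region set, counted over a common
    universe of regions. The table layout is a simplified assumption."""
    table = [[query_hits, query_total - query_hits],
             [db_hits, db_total - db_hits]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Hypothetical counts: 40 of 200 query regions overlap, vs 300 of 10000 in the universe
print(overlap_enrichment(40, 200, 300, 10000))
```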
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Jesse E.; Baptista, António M.
A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and treatment for hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.
NASA Astrophysics Data System (ADS)
Ovchinnikov, M. Yu.; Ivanov, D. S.; Ivlev, N. A.; Karpenko, S. O.; Roldugin, D. S.; Tkachev, S. S.
2014-01-01
Design, analytical investigation, laboratory and in-flight testing of the attitude determination and control system (ADCS) of a microsatellite are considered. The system consists of three pairs of reaction wheels, three magnetorquers, a set of Sun sensors, a three-axis magnetometer, and a control unit. The ADCS is designed for a small 10-50 kg LEO satellite. System development is accomplished in several steps: a preliminary study of satellite dynamics using asymptotical and numerical techniques, hardware and software design, and laboratory testing of each actuator and sensor and of the whole ADCS. Laboratory verification is carried out on a specially designed test bench. In-flight ADCS exploitation results onboard the Russian microsatellite "Chibis-M" are presented. The satellite was developed, designed, and manufactured by the Institute of Space Research of RAS. "Chibis-M" was launched by the "Progress-13M" cargo vehicle on January 25, 2012, after undocking from the International Space Station (ISS). This paper assesses both the satellite and the ADCS mock-up dynamics. Analytical, numerical, and laboratory study results are in good correspondence with the in-flight data.
Big Data Analytics for a Smart Green Infrastructure Strategy
NASA Astrophysics Data System (ADS)
Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana
2017-08-01
As is well known, Big Data is a term for data sets so large or complex that traditional data processing applications are not sufficient to process them. The term "Big Data" often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data that satellites constantly orbiting the Earth transmit daily.
Fulga, Netta
2013-06-01
Quality management and accreditation in the analytical laboratory setting are developing rapidly and becoming the standard worldwide. Quality management refers to all the activities used by organizations to ensure product or service consistency. Accreditation is a formal recognition by an authoritative regulatory body that a laboratory is competent to perform examinations and report results. The Motherisk Drug Testing Laboratory is licensed to operate at the Hospital for Sick Children in Toronto, Ontario. The laboratory performs toxicology tests of hair and meconium samples for research and clinical purposes. Most of the samples are involved in chain-of-custody cases. Establishing a quality management system and achieving accreditation have been mandatory by legislation for all Ontario clinical laboratories since 2003. The Ontario Laboratory Accreditation program is based on ISO 15189, Medical laboratories - Particular requirements for quality and competence, an international standard that has been adopted as a national standard in Canada. The implementation of a quality management system involves management commitment, planning and staff education, documentation of the system, validation of processes, and assessment against the requirements. The maintenance of a quality management system requires control and monitoring of the entire laboratory path of workflow. The process of transforming a research/clinical laboratory into an accredited laboratory, and the benefits of maintaining an effective quality management system, are presented in this article.
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze the analytical performance appropriately, and here partial order methodology can be helpful. In the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects according to their data profile (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data such as those provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example; however, the presented scheme is likewise applicable to the comparison of analytical methods or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided in order to evaluate the analytical performance, taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation, and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the reference laboratory, and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
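The dominance relation that underlies this kind of partial ordering is easy to state in code. The sketch below compares laboratories on indicator profiles where smaller is better (for instance |mean - reference|, standard deviation, |skewness|); the profiles and the orientation of the indicators are illustrative assumptions, not the paper's actual data.

```python
def dominates(profile_a, profile_b):
    """True if lab A is at least as good as lab B on every indicator and
    strictly better on at least one (all indicators oriented so that
    smaller values are better)."""
    at_least_as_good = all(a <= b for a, b in zip(profile_a, profile_b))
    strictly_better = any(a < b for a, b in zip(profile_a, profile_b))
    return at_least_as_good and strictly_better

def incomparable_pairs(profiles):
    """Pairs of laboratories that the partial order leaves unranked."""
    labs = list(profiles)
    pairs = []
    for i, x in enumerate(labs):
        for y in labs[i + 1:]:
            if not dominates(profiles[x], profiles[y]) and not dominates(profiles[y], profiles[x]):
                pairs.append((x, y))
    return pairs

# Hypothetical indicator profiles: (|mean - reference|, sd, |skewness|)
profiles = {"Lab1": (0.2, 0.05, 0.1), "Lab2": (0.3, 0.04, 0.2), "Lab3": (0.5, 0.08, 0.4)}
print(incomparable_pairs(profiles))
```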
Internally insulated thermal storage system development program
NASA Technical Reports Server (NTRS)
Scott, O. L.
1980-01-01
A cost effective thermal storage system for a solar central receiver power system using molten salt stored in internally insulated carbon steel tanks is described. Factors discussed include: testing of internal insulation materials in molten salt; preliminary design of storage tanks, including insulation and liner installation; optimization of the storage configuration; and definition of a subsystem research experiment to demonstrate the system. A thermal analytical model and analysis of a thermocline tank was performed. Data from a present thermocline test tank was compared to gain confidence in the analytical approach. A computer analysis of the various storage system parameters (insulation thickness, number of tanks, tank geometry, etc.,) showed that (1) the most cost-effective configuration was a small number of large cylindrical tanks, and (2) the optimum is set by the mechanical constraints of the system, such as soil bearing strength and tank hoop stress, not by the economics.
Internally insulated thermal storage system development program
NASA Astrophysics Data System (ADS)
Scott, O. L.
1980-03-01
A cost effective thermal storage system for a solar central receiver power system using molten salt stored in internally insulated carbon steel tanks is described. Factors discussed include: testing of internal insulation materials in molten salt; preliminary design of storage tanks, including insulation and liner installation; optimization of the storage configuration; and definition of a subsystem research experiment to demonstrate the system. A thermal analytical model and analysis of a thermocline tank was performed. Data from a present thermocline test tank was compared to gain confidence in the analytical approach. A computer analysis of the various storage system parameters (insulation thickness, number of tanks, tank geometry, etc.,) showed that (1) the most cost-effective configuration was a small number of large cylindrical tanks, and (2) the optimum is set by the mechanical constraints of the system, such as soil bearing strength and tank hoop stress, not by the economics.
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
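The synthetic likelihood idea (in the sense of Wood) can be summarised in a few lines: simulate summary statistics under the model, fit a multivariate normal to them, and evaluate the observed summary under that normal. The sketch below shows that generic step only; the model-specific simulator, the summaries derived from the cancellation axioms, and the importance sampler are stand-ins the reader would supply.

```python
import numpy as np

def synthetic_log_likelihood(observed_summary, simulate_summaries, n_sims=500):
    """Wood-style synthetic log-likelihood of an observed summary vector.

    simulate_summaries(n) must return an (n, d) array of summary statistics
    drawn from the model at the current parameter value; it stands in for
    the problem-specific simulator, which is an assumption here.
    """
    sims = simulate_summaries(n_sims)
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False) + 1e-9 * np.eye(sims.shape[1])  # regularized
    diff = observed_summary - mu
    _, logdet = np.linalg.slogdet(cov)
    d = len(observed_summary)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet + d * np.log(2 * np.pi))
```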
L.U.St: a tool for approximated maximum likelihood supertree reconstruction.
Akanni, Wasiu A; Creevey, Christopher J; Wilkinson, Mark; Pisani, Davide
2014-06-12
Supertrees combine disparate, partially overlapping trees to generate a synthesis that provides a high-level perspective that cannot be attained from the inspection of individual phylogenies. Supertrees can be seen as meta-analytical tools that can be used to make inferences based on the results of previous scientific studies. Their meta-analytical application has increased in popularity since it was realised that the power of statistical tests for the study of evolutionary trends critically depends on the use of taxon-dense phylogenies. Further to that, supertrees have found applications in phylogenomics, where they are used to combine gene trees and recover species phylogenies based on genome-scale data sets. Here, we present the L.U.St package, a python tool for approximate maximum likelihood supertree inference, and illustrate its application using a genomic data set for the placental mammals. L.U.St allows the calculation of the approximate likelihood of a supertree given a set of input trees, performs heuristic searches to look for the supertree of highest likelihood, and performs statistical tests of two or more supertrees. To this end, L.U.St implements a winning-sites test allowing ranking of a collection of a priori selected hypotheses, given as a collection of input supertree topologies. It also outputs a file of input-tree-wise likelihood scores that can be used as input to CONSEL for calculation of standard tests of two trees (e.g. the Kishino-Hasegawa, Shimodaira-Hasegawa and Approximately Unbiased tests). This is the first fully parametric implementation of a supertree method; it has clearly understood properties and provides several advantages over currently available supertree approaches. It is easy to implement and works on any platform that has python installed. bitBucket page - https://afro-juju@bitbucket.org/afro-juju/l.u.st.git. Davide.Pisani@bristol.ac.uk.
Schønning, Kristian; Pedersen, Martin Schou; Johansen, Kim; Landt, Bodil; Nielsen, Lone Gilmor; Weis, Nina; Westh, Henrik
2017-10-01
Chronic hepatitis C virus (HCV) infection can be effectively treated with directly acting antiviral (DAA) therapy. Measurement of HCV RNA is used to evaluate patient compliance and virological response during and after treatment. To compare the analytical performance of the Aptima HCV Quant Dx Assay (Aptima) and the COBAS Ampliprep/COBAS TaqMan HCV Test v2.0 (CAPCTMv2) for the quantification of HCV RNA in plasma samples, and compare the clinical utility of the two tests in patients undergoing treatment with DAA therapy. Analytical performance was evaluated on two sets of plasma samples: 125 genotyped samples and 172 samples referred for quantification of HCV RNA. Furthermore, performance was evaluated using dilutions series of four samples containing HCV genotype 1a, 2b, 3a, and 4a, respectively. Clinical utility was evaluated on 118 plasma samples obtained from 13 patients undergoing treatment with DAAs. Deming regression of results from 187 plasma samples with HCV RNA >2 Log IU/mL indicated that the Aptima assay quantified higher than the CAPCTMv2 test for HCV RNA >4.9 Log IU/mL. The linearity of the Aptima assay was excellent across dilution series of four HCV genotypes (slope of the regression line: 1.00-1.02). The Aptima assay detected significantly more replicates below targeted 2 Log IU/mL than the CAPCTMv2 test, and yielded clearly interpretable results when used to analyze samples from patients treated with DAAs. The analytical performance of the Aptima assay makes it well suited for monitoring patients with chronic HCV infection undergoing antiviral treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
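Deming regression, the method-comparison analysis named above, has a simple closed form. The sketch below implements the standard estimator; the error-variance ratio lam = 1 (equal imprecision of the two assays) is an illustrative default, not a value taken from the study.

```python
import numpy as np

def deming_regression(x, y, lam=1.0):
    """Deming regression of y on x with error-variance ratio lam
    (variance of errors in y divided by variance of errors in x).
    Returns (slope, intercept). lam = 1.0 is an assumed default."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical paired log IU/mL results from two assays
capctm = np.array([2.1, 3.0, 4.2, 5.5, 6.3])
aptima = np.array([2.2, 3.1, 4.3, 5.8, 6.7])
print(deming_regression(capctm, aptima))
```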
Standardizing in vitro diagnostics tasks in clinical trials: a call for action.
Lippi, Giuseppe; Simundic, Ana-Maria; Rodriguez-Manas, Leocadio; Bossuyt, Patrick; Banfi, Giuseppe
2016-05-01
Translational research is defined as the process of applying ideas, insights and discoveries generated through basic scientific inquiry to treatment or prevention of human diseases. Although precise information is lacking, several lines of evidence attest that up to 95% of early-phase studies may not translate into tangible outcomes for improving clinical management. Major theoretical hurdles exist in the translational process, but it is also undeniable that many studies may have failed for practical reasons, such as the use of inappropriate diagnostic testing for evaluating the efficacy, effectiveness or safety of a given medical intervention, or poor quality in laboratory testing. This can generate biased test results and misconceptions during data interpretation, eventually leading to no clinical benefit, possible harm, and a waste of valuable resources. From a genuine economic perspective, it can be estimated that over 10 million euros of funding may be lost each year in clinical trials in the European Union due to preanalytical and analytical problems. These problems are mostly attributable to the heterogeneity of current guidelines and recommendations for the testing process, to the poor evidence base for basic pre-analytical, analytical and post-analytical requirements in clinical trials, and to the failure to thoughtfully integrate the perspectives of clinicians, patients, nurses and diagnostic companies into laboratory best practices. The most rational means for filling the gap between what we know and what we practice in clinical trials cannot discount the development of multidisciplinary teams including research scientists, clinicians, nurses, patient associations and representatives of in vitro diagnostic (IVD) companies, who should actively interact and collaborate with laboratory professionals to adapt and disseminate evidence-based recommendations about biospecimen collection and management into the research settings, from preclinical to phase III studies.
Determination of Uncertainties for the New SSME Model
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Hawk, Clark W.
1996-01-01
This report discusses the uncertainty analysis performed in support of a new test analysis and performance prediction model for the Space Shuttle Main Engine. The new model utilizes uncertainty estimates for experimental data and for the analytical model to obtain the most plausible operating condition for the engine system. This report discusses the development of the data sets and uncertainty estimates used in the development of the new model. It also presents the application of uncertainty analysis to analytical models, including the uncertainty analysis for the conservation of mass and energy balance relations. A new methodology for the assessment of the uncertainty associated with linear regressions is presented.
Modeling damaged wings: Element selection and constraint specification
NASA Technical Reports Server (NTRS)
Stronge, W. J.
1975-01-01
The NASTRAN analytical program was used for structural design, and no problems were anticipated in applying this program to a damaged structure as long as the deformations were small and the strains remained within the elastic range. In this context, NASTRAN was used to test three-dimensional analytical models of a damaged aircraft wing under static loads. A comparison was made of calculated and experimentally measured strains on primary structural components of an RF-84F wing. This comparison brought out two sensitive areas in modeling semimonocoque structures. The calculated strains were strongly affected by the type of elements used adjacent to the damaged region and by the choice of multipoint constraints sets on the damaged boundary.
Study designs appropriate for the workplace.
Hogue, C J
1986-01-01
Carlo and Hearn have called for "refinement of old [epidemiologic] methods and an ongoing evaluation of where methods fit in the overall scheme as we address the multiple complexities of reproductive hazard assessment." This review is an attempt to bring together the current state-of-the-art methods for problem definition and hypothesis testing available to the occupational epidemiologist. For problem definition, meta-analysis can be utilized to narrow the field of potential causal hypotheses. Passive and active surveillance may further refine issues for analytic research. Within analytic epidemiology, several methods may be appropriate for the workplace setting. Those discussed here may be used to estimate the risk ratio in either a fixed or dynamic population.
Extending Climate Analytics-as-a-Service to the Earth System Grid Federation
NASA Astrophysics Data System (ADS)
Tamkin, G.; Schnase, J. L.; Duffy, D.; McInerney, M.; Nadeau, D.; Li, J.; Strong, S.; Thompson, J. H.
2015-12-01
We are building three extensions to prior-funded work on climate analytics-as-a-service that will benefit the Earth System Grid Federation (ESGF) as it addresses the Big Data challenges of future climate research: (1) We are creating a cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables from six major reanalysis data sets. This near real-time capability will enable advanced technologies like the Cloudera Impala-based Structured Query Language (SQL) query capabilities and Hadoop-based MapReduce analytics over native NetCDF files while providing a platform for community experimentation with emerging analytic technologies. (2) We are building a full-featured Reanalysis Ensemble Service comprising monthly means data from six reanalysis data sets. The service will provide a basic set of commonly used operations over the reanalysis collections. The operations will be made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services (CDS) API. (3) We are establishing an Open Geospatial Consortium (OGC) WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation ESGF capabilities. The CDS API will be extended to accommodate the new WPS Web service endpoints as well as ESGF's Web service endpoints. These activities address some of the most important technical challenges for server-side analytics and support the research community's requirements for improved interoperability and improved access to reanalysis data.
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation such as a recently proposed procedure called Repeated Random Sampling (RSS) is also expected to result in heavily biased estimates, which in turn translates into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed indicating that the method in its present form cannot be directly applied to small data sets.
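For orientation, the sketch below implements the repeated random split scheme whose between-design-set variance estimate the abstract shows to be biased for small test sets; the RIDT bias model and the analytical bias expression are not reproduced. The fit and predict callables are user-supplied stand-ins, and X and y are assumed to be NumPy arrays.

```python
import numpy as np

def error_rate_variance(X, y, fit, predict, n_repeats=50, test_fraction=0.3, seed=0):
    """Mean and variance of a classifier's test error over repeated random
    splits of one fixed bag of samples. fit(X, y) -> model and
    predict(model, X) -> labels are assumed callables provided by the user."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_test = max(1, int(round(test_fraction * n)))
    errors = []
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        test_idx, design_idx = perm[:n_test], perm[n_test:]
        model = fit(X[design_idx], y[design_idx])
        errors.append(np.mean(predict(model, X[test_idx]) != y[test_idx]))
    return np.mean(errors), np.var(errors, ddof=1)
```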
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
42 CFR 493.959 - Immunohematology.
Code of Federal Regulations, 2011 CFR
2011-10-01
... challenges per testing event a program must provide for each analyte or test procedure is five. Analyte or... Compatibility testing Antibody identification (d) Evaluation of a laboratory's analyte or test performance. HHS... program must compare the laboratory's response for each analyte with the response that reflects agreement...
Stability analysis of magnetized neutron stars - a semi-analytic approach
NASA Astrophysics Data System (ADS)
Herbrik, Marlene; Kokkotas, Kostas D.
2017-04-01
We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme on polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.
Role of the laboratory in the evaluation of suspected drug abuse.
Gold, M S; Dackis, C A
1986-01-01
Despite the high incidence of substance abuse, it remains a common cause of misdiagnosis. In patients who have abused or who are currently abusing drugs, symptoms of a psychiatric illness may be mimicked by either the drug's presence or absence. The laboratory can aid in making a differential diagnosis and eliminating drugs from active consideration as a cause of psychosis, depression, mania, and personality changes. Treatment planning and prevention of serious medical consequences often rest on the accuracy of the admission drug screen. Testing is widely used to assess improvement in substance abuse in both inpatient and outpatient settings. In occupational settings, testing has been used as an early indicator that a problem exists and as a successful prevention tool. The appropriate use of analytic technology in drug abuse testing requires an understanding of available test methodologies. These include drug screens by thin-layer chromatography, comprehensive testing using enzyme immunoassay, and computer-assisted gas chromatography-mass spectrometry (GC-MS). Testing for specific drugs considered likely causes or precipitants of "psychiatric" complaints is available with enzyme assays, radioimmunoassay, or definitive forensic-quality testing using GC-MS.
Testing and Analysis of Sensor Ports
NASA Technical Reports Server (NTRS)
Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.
2016-01-01
This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.
Druka, Arnis; Druka, Ilze; Centeno, Arthur G; Li, Hongqiang; Sun, Zhaohui; Thomas, William TB; Bonar, Nicola; Steffenson, Brian J; Ullrich, Steven E; Kleinhofs, Andris; Wise, Roger P; Close, Timothy J; Potokina, Elena; Luo, Zewei; Wagner, Carola; Schweizer, Günther F; Marshall, David F; Kearsey, Michael J; Williams, Robert W; Waugh, Robbie
2008-01-01
Background A typical genetical genomics experiment results in four separate data sets: genotype, gene expression, higher-order phenotypic data and metadata that describe the protocols, processing and the array platform. Used in concert, these data sets provide the opportunity to perform genetic analysis at a systems level. Their predictive power is largely determined by the gene expression dataset, where tens of millions of data points can be generated using currently available mRNA profiling technologies. Such large, multidimensional data sets often have value beyond that extracted during their initial analysis and interpretation, particularly if conducted on widely distributed reference genetic materials. Besides quality and scale, access to the data is of primary importance as accessibility potentially allows the extraction of considerable added value from the same primary dataset by the wider research community. Although the number of genetical genomics experiments in different plant species is rapidly increasing, none to date has been presented in a form that allows quick and efficient on-line testing for possible associations between genes, loci and traits of interest by an entire research community. Description Using a reference population of 150 recombinant doubled haploid barley lines we generated novel phenotypic, mRNA abundance and SNP-based genotyping data sets, added them to a considerable volume of legacy trait data and entered them into GeneNetwork. GeneNetwork is a unified on-line analytical environment that enables the user to test genetic hypotheses about how component traits, such as mRNA abundance, may interact to condition more complex biological phenotypes (higher-order traits). Here we describe these barley data sets and demonstrate some of the functionalities GeneNetwork provides as an easily accessible and integrated analytical environment for exploring them. Conclusion By integrating barley genotypic, phenotypic and mRNA abundance data sets directly within GeneNetwork's analytical environment we provide simple web access to the data for the research community. In this environment, a combination of correlation analysis and linkage mapping provides the potential to identify and substantiate gene targets for saturation mapping and positional cloning. By integrating datasets from an unsequenced crop plant (barley) in a database that has been designed for an animal model species (mouse) with a well-established genome sequence, we prove the importance of the concept and practice of modular development and interoperability of software engineering for biological data sets. PMID:19017390
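The "correlation analysis and linkage mapping" combination mentioned in the conclusion can be illustrated very simply. The sketch below correlates a higher-order trait with transcript abundances across lines and runs a crude single-marker scan; the data shapes, the 0/1 genotype coding, and the mean-difference statistic are illustrative assumptions rather than GeneNetwork's actual algorithms.

```python
import numpy as np

def correlate_trait_with_expression(trait, expression):
    """Pearson correlation of a higher-order trait with each transcript's
    mRNA abundance across the mapping population; expression is an
    (n_lines, n_transcripts) array and trait has length n_lines."""
    t = (trait - trait.mean()) / trait.std()
    e = (expression - expression.mean(axis=0)) / expression.std(axis=0)
    return (e * t[:, None]).mean(axis=0)

def single_marker_scan(trait, genotypes):
    """Crude single-marker scan for a doubled-haploid population with
    genotypes coded 0/1 (one column per marker): the difference in trait
    means between genotype classes stands in for a proper QTL statistic."""
    diffs = []
    for m in range(genotypes.shape[1]):
        g = genotypes[:, m]
        diffs.append(trait[g == 1].mean() - trait[g == 0].mean())
    return np.array(diffs)
```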
Organic materials able to detect analytes
NASA Technical Reports Server (NTRS)
Swager, Timothy M. (Inventor); Zhu, Zhengguo (Inventor); Bulovic, Vladimir (Inventor); Rose, Aimee (Inventor); Madigan, Conor Francis (Inventor)
2012-01-01
The present invention generally relates to polymers with lasing characteristics that allow the polymers to be useful in detecting analytes. In one aspect, the polymer, upon an interaction with an analyte, may exhibit a change in a lasing characteristic that can be determined in some fashion. For example, interaction of an analyte with the polymer may affect the ability of the polymer to reach an excited state that allows stimulated emission of photons to occur, which may be determined, thereby determining the analyte. In another aspect, the polymer, upon interaction with an analyte, may exhibit a change in stimulated emission that is at least 10 times greater with respect to a change in the spontaneous emission of the polymer upon interaction with the analyte. The polymer may be a conjugated polymer in some cases. In one set of embodiments, the polymer includes one or more hydrocarbon side chains, which may be parallel to the polymer backbone in some instances. In another set of embodiments, the polymer may include one or more pendant aromatic rings. In yet another set of embodiments, the polymer may be substantially encapsulated in a hydrocarbon. In still another set of embodiments, the polymer may be substantially resistant to photobleaching. In certain aspects, the polymer may be useful in the detection of explosive agents, such as 2,4,6-trinitrotoluene (TNT) and 2,4-dinitrotoluene (DNT).
Merging OLTP and OLAP - Back to the Future
NASA Astrophysics Data System (ADS)
Lehner, Wolfgang
When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need for separating data and workload for operational transactional-style processing and decision-making implying complex analytical queries over large and historic data sets. Large data warehouse infrastructures have been set up to cope with the special requirements of analytical query answering for multiple reasons: For example, analytical thinking heavily relies on predefined navigation paths to guide the user through the data set and to provide different views on different aggregation levels. Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.
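The star join and roll-up pattern described above can be shown with a tiny in-memory example. The schema, table names, and figures below are invented for illustration; real warehouses would issue the equivalent SQL against a fact table and its dimensions.

```python
import pandas as pd

# Tiny star schema: a fact table plus one dimension carrying a product hierarchy.
sales = pd.DataFrame({"product_id": [1, 1, 2, 2, 3],
                      "region": ["EU", "US", "EU", "US", "EU"],
                      "revenue": [100.0, 120.0, 80.0, 60.0, 40.0]})
products = pd.DataFrame({"product_id": [1, 2, 3],
                         "category": ["Audio", "Audio", "Video"]})

# Star join followed by a roll-up to a coarser aggregation level, the kind of
# operation a predefined navigation path would issue against a warehouse.
joined = sales.merge(products, on="product_id")
by_category_region = joined.groupby(["category", "region"])["revenue"].sum()
print(by_category_region)
```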
Negations in syllogistic reasoning: evidence for a heuristic-analytic conflict.
Stupple, Edward J N; Waterhouse, Eleanor F
2009-08-01
An experiment utilizing response time measures was conducted to test dominant processing strategies in syllogistic reasoning with the expanded quantifier set proposed by Roberts (2005). Through adding negations to existing quantifiers it is possible to change problem surface features without altering logical validity. Biases based on surface features such as atmosphere, matching, and the probability heuristics model (PHM; Chater & Oaksford, 1999; Wetherick & Gilhooly, 1995) would not be expected to show variance in response latencies, but participant responses should be highly sensitive to changes in the surface features of the quantifiers. In contrast, according to analytic accounts such as mental models theory and mental logic (e.g., Johnson-Laird & Byrne, 1991; Rips, 1994) participants should exhibit increased response times for negated premises, but not be overly impacted upon by the surface features of the conclusion. Data indicated that the dominant response strategy was based on a matching heuristic, but also provided evidence of a resource-demanding analytic procedure for dealing with double negatives. The authors propose that dual-process theories offer a stronger account of these data whereby participants employ competing heuristic and analytic strategies and fall back on a heuristic response when analytic processing fails.
Goldstein, Elizabeth; Farquhar, Marybeth; Crofton, Christine; Darby, Charles; Garfinkel, Steven
2005-12-01
To describe the developmental process for the CAHPS Hospital Survey. A pilot was conducted in three states with 19,720 hospital discharges. A rigorous, multi-step process was used to develop the CAHPS Hospital Survey. It included a public call for measures, multiple Federal Register notices soliciting public input, a review of the relevant literature, meetings with hospitals, consumers and survey vendors, cognitive interviews with consumers, a large-scale pilot test in three states, consumer testing, and numerous small-scale field tests. The current version of the CAHPS Hospital Survey has survey items in seven domains, two overall ratings of the hospital and five items used for adjusting for the mix of patients across hospitals and for analytical purposes. The CAHPS Hospital Survey is a core set of questions that can be administered as a stand-alone questionnaire or combined with a broader set of hospital-specific items.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
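For illustration of the distribution-free variant described above, the following minimal Python sketch scans candidate thresholds for the one minimizing an expected decision cost and bootstraps its sampling uncertainty. The marker distributions, cost weights, and prevalence are hypothetical stand-ins, not values from the study.

    import numpy as np

    rng = np.random.default_rng(0)
    # Simulated continuous marker: non-diseased ~ N(0,1), diseased ~ N(1.5,1) (assumed toy distributions).
    x_healthy = rng.normal(0.0, 1.0, 200)
    x_diseased = rng.normal(1.5, 1.0, 200)

    # Assumed decision costs: a false negative is taken to be 4x as costly as a false positive.
    cost_fp, cost_fn = 1.0, 4.0
    prevalence = 0.2  # assumed disease prevalence in the target population

    def expected_cost(threshold, healthy, diseased):
        fpr = np.mean(healthy >= threshold)   # P(test positive | healthy)
        fnr = np.mean(diseased < threshold)   # P(test negative | diseased)
        return (1 - prevalence) * cost_fp * fpr + prevalence * cost_fn * fnr

    # Empirical optimisation: evaluate the cost at every observed marker value.
    candidates = np.sort(np.concatenate([x_healthy, x_diseased]))
    costs = np.array([expected_cost(t, x_healthy, x_diseased) for t in candidates])
    best = candidates[np.argmin(costs)]
    print(f"empirical optimum threshold: {best:.3f}, expected cost: {costs.min():.4f}")

    # Bootstrap the sampling uncertainty of the threshold estimate.
    boot = []
    for _ in range(200):
        bh = rng.choice(x_healthy, x_healthy.size, replace=True)
        bd = rng.choice(x_diseased, x_diseased.size, replace=True)
        cand = np.sort(np.concatenate([bh, bd]))
        c = [expected_cost(t, bh, bd) for t in cand]
        boot.append(cand[int(np.argmin(c))])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"bootstrap 95% interval for the threshold: ({lo:.3f}, {hi:.3f})")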
Applications of nuclear analytical techniques to environmental studies
NASA Astrophysics Data System (ADS)
Freitas, M. C.; Pacheco, A. M. G.; Marques, A. P.; Barros, L. I. C.; Reis, M. A.
2001-07-01
A few examples of application of nuclear-analytical techniques to biological monitors—natives and transplants—are given herein. Parmelia sulcata Taylor transplants were set up in a heavily industrialized area of Portugal—the Setúbal peninsula, about 50 km south of Lisbon—where indigenous lichens are rare. The whole area was 10×15 km around an oil-fired power station, and a 2.5×2.5 km grid was used. In north-western Portugal, native thalli of the same epiphytes (Parmelia spp., mostly Parmelia sulcata Taylor) and bark from olive trees (Olea europaea) were sampled across an area of 50×50 km, using a 10×10 km grid. This area is densely populated and features a blend of rural, urban-industrial and coastal environments, together with the country's second-largest metro area (Porto). All biomonitors have been analyzed by INAA and PIXE. Results were put through nonparametric tests and factor analysis for trend significance and emission sources, respectively.
NASA Astrophysics Data System (ADS)
Pekşen, Ertan; Yas, Türker; Kıyak, Alper
2014-09-01
We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider a horizontally layered anisotropic earth response with respect to the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities, and thickness. Particle swarm optimization is a naturally inspired meta-heuristic algorithm. The proposed method finds model parameters quite successfully based on synthetic and field data. However, adding 5 % Gaussian noise to the synthetic data increases the ambiguity of the value of the model parameters. For this reason, the results should be controlled by a number of statistical tests. In this study, we use the probability density function within a 95 % confidence interval, the parameter variation at each iteration and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
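The inversion step lends itself to a short sketch. The Python below implements a plain global-best particle swarm search over layer parameters; the forward model is a smooth stand-in (an assumption for the sketch), not the digital-filter solution used in the paper, and all parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in forward model (assumption): maps (rho_h, rho_v, thickness of layer 1;
    # rho_h, rho_v of half-space) to synthetic "apparent resistivities" at a set of spacings.
    # A real implementation would use the digital-filter solution described in the paper.
    spacings = np.logspace(0, 3, 20)
    def forward(m):
        rho1 = np.sqrt(m[0] * m[1])                  # effective resistivity of layer 1
        rho2 = np.sqrt(m[3] * m[4])                  # effective resistivity of half-space
        w = 1.0 / (1.0 + (spacings / m[2]) ** 2)     # smooth transition controlled by thickness
        return w * rho1 + (1.0 - w) * rho2

    m_true = np.array([100.0, 400.0, 50.0, 10.0, 40.0])
    data = forward(m_true) * (1.0 + 0.05 * rng.standard_normal(spacings.size))  # 5 % noise

    def misfit(m):
        return np.sqrt(np.mean((forward(m) - data) ** 2))

    # Plain global-best particle swarm optimization within box bounds.
    lower = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
    upper = np.array([1000.0, 1000.0, 500.0, 1000.0, 1000.0])
    n_particles, n_iter = 40, 300
    pos = rng.uniform(lower, upper, (n_particles, 5))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)
        val = np.array([misfit(p) for p in pos])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    # Note: in this toy forward model only the product rho_h*rho_v is constrained,
    # which echoes the equivalence/ambiguity issue discussed in the abstract.
    print("recovered parameters:", np.round(gbest, 1), " misfit:", round(misfit(gbest), 3))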
Defining Higher-Order Turbulent Moment Closures with an Artificial Neural Network and Random Forest
NASA Astrophysics Data System (ADS)
McGibbon, J.; Bretherton, C. S.
2017-12-01
Unresolved turbulent advection and clouds must be parameterized in atmospheric models. Modern higher-order closure schemes depend on analytic moment closure assumptions that diagnose higher-order moments in terms of lower-order ones. These are then tested against Large-Eddy Simulation (LES) higher-order moment relations. However, these relations may not be neatly analytic in nature. Rather than rely on an analytic higher-order moment closure, can we use machine learning on LES data itself to define a higher-order moment closure? We assess the ability of a deep artificial neural network (NN) and random forest (RF) to perform this task using a set of observationally-based LES runs from the MAGIC field campaign. By training on a subset of 12 simulations and testing on the remaining simulations, we avoid over-fitting the training data. Performance of the NN and RF will be assessed and compared to the Analytic Double Gaussian 1 (ADG1) closure assumed by Cloudy Layers Unified By Binormals (CLUBB), a higher-order turbulence closure currently used in the Community Atmosphere Model (CAM). We will show that the RF outperforms the NN and the ADG1 closure for the MAGIC cases within this diagnostic framework. Progress and challenges in using a diagnostic machine learning closure within a prognostic cloud and turbulence parameterization will also be discussed.
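As a rough illustration of the machine-learning closure idea, the Python sketch below fits a random forest to map lower-order moments to a higher-order moment and holds out whole simulations for testing. The data here are synthetic stand-ins; in the study the inputs would be coarse-grained LES fields from the MAGIC runs.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)

    # Stand-in data (assumption): lower-order moments as predictors, one higher-order
    # moment as the target; in the study these would come from coarse-grained LES fields.
    n = 5000
    X = rng.standard_normal((n, 4))                      # e.g. means, variances, fluxes
    y = X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2 + 0.1 * rng.standard_normal(n)
    sim_id = rng.integers(0, 16, n)                      # pretend the rows come from 16 LES runs

    # Hold out whole simulations (not random rows) to avoid over-fitting to a given case.
    train = sim_id < 12
    rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
    rf.fit(X[train], y[train])
    print("held-out R^2 on unseen simulations:", round(rf.score(X[~train], y[~train]), 3))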
Extracting laboratory test information from biomedical text
Kang, Yanna Shen; Kayaalp, Mehmet
2013-01-01
Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058
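To give a flavor of the symbolic (rule-based) extraction that the abstract contrasts with machine learning, here is a minimal Python sketch that pulls quantities with units of measure and detection limits from a sentence. The patterns and example text are illustrative assumptions, not the SIE system's actual rules.

    import re

    text = ("Total cholesterol is reported at 180 mg/dL on serum specimens; "
            "the detection limit of the troponin assay is 0.5 ng/mL.")

    # Illustrative patterns only; a production symbolic system would rely on a curated
    # unit lexicon and document-structure cues rather than two regular expressions.
    unit_pattern = re.compile(r"\b\d+(?:\.\d+)?\s*(?:mg/dL|ng/mL|mmol/L|IU/L)\b")
    limit_pattern = re.compile(r"detection limit[^.;]*?(\d+(?:\.\d+)?\s*(?:mg/dL|ng/mL|mmol/L))", re.I)

    print("quantities with units:", unit_pattern.findall(text))
    print("detection limits:", limit_pattern.findall(text))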
NASA Technical Reports Server (NTRS)
Berry, R. L.; Tegart, J. R.; Demchak, L. J.
1979-01-01
Thirty sets of test data selected from the 89 low-g aircraft tests flown aboard the NASA KC-135 zero-g aircraft are listed in tables with their accompanying test conditions. The data for each test consist of the time history plots of digitized data (in engineering units) and the time history plots of the load cell data transformed to the tank axis system. The transformed load cell data were developed for future analytical comparisons; therefore, these data were transformed and plotted from the time at which the aircraft Z-axis acceleration passed through 1-g. There are 14 time history plots per test condition. The contents of each plot are shown in a table.
Dynamic testing and analysis of extension-twist-coupled composite tubular spars
NASA Astrophysics Data System (ADS)
Lake, Renee C.; Izapanah, Amir P.; Baucon, Robert M.
The results from a study aimed at improving the dynamic and aerodynamic characteristics of composite rotor blades through the use of extension-twist elastic coupling are presented. A set of extension-twist-coupled composite tubular spars, representative of the primary load carrying structure within a helicopter rotor blade, was manufactured using four plies of woven graphite/epoxy cloth 'prepreg.' These spars were non-circular in cross section design and were therefore subject to warping deformations. Three cross-sectional geometries were developed: square, D-shape, and flattened ellipse. Results from free-free vibration tests of the spars were compared with results from normal modes and frequency analyses of companion shell-finite-element models developed in MSC/NASTRAN. Five global or 'non-shell' modes were identified within the 0-2000 Hz range for each spar. The frequencies and associated mode shapes for the D-shape spar were correlated with analytical results, showing agreement within 13.8 percent. Frequencies corresponding to the five global mode shapes for the square spar agreed within 9.5 percent of the analytical results. Five global modes were similarly identified for the elliptical spar and agreed within 4.9 percent of the respective analytical results.
Dynamic testing and analysis of extension-twist-coupled composite tubular spars
NASA Technical Reports Server (NTRS)
Lake, Renee C.; Izapanah, Amir P.; Baucon, Robert M.
1992-01-01
The results from a study aimed at improving the dynamic and aerodynamic characteristics of composite rotor blades through the use of extension-twist elastic coupling are presented. A set of extension-twist-coupled composite tubular spars, representative of the primary load carrying structure within a helicopter rotor blade, was manufactured using four plies of woven graphite/epoxy cloth 'prepreg.' These spars were non-circular in cross section design and were therefore subject to warping deformations. Three cross-sectional geometries were developed: square, D-shape, and flattened ellipse. Results from free-free vibration tests of the spars were compared with results from normal modes and frequency analyses of companion shell-finite-element models developed in MSC/NASTRAN. Five global or 'non-shell' modes were identified within the 0-2000 Hz range for each spar. The frequencies and associated mode shapes for the D-shape spar were correlated with analytical results, showing agreement within 13.8 percent. Frequencies corresponding to the five global mode shapes for the square spar agreed within 9.5 percent of the analytical results. Five global modes were similarly identified for the elliptical spar and agreed within 4.9 percent of the respective analytical results.
Multi-center evaluation of analytical performance of the Beckman Coulter AU5822 chemistry analyzer.
Zimmerman, M K; Friesen, L R; Nice, A; Vollmer, P A; Dockery, E A; Rankin, J D; Zmuda, K; Wong, S H
2015-09-01
Our three academic institutions, Indiana University, Northwestern Memorial Hospital, and Wake Forest, were among the first in the United States to implement the Beckman Coulter AU5822 series chemistry analyzers. We undertook this post-hoc multi-center study by merging our data to determine performance characteristics and the impact of methodology changes on analyte measurement. We independently completed performance validation studies including precision, linearity/analytical measurement range, method comparison, and reference range verification. Complete data sets were available from at least one institution for 66 analytes with the following groups: 51 from all three institutions, and 15 from 1 or 2 institutions for a total sample size of 12,064. Precision was similar among institutions. Coefficients of variation (CV) were <10% for 97%. Analytes with CVs >10% included direct bilirubin and digoxin. All analytes exhibited linearity over the analytical measurement range. Method comparison data showed slopes between 0.900-1.100 for 87.9% of the analytes. Slopes for amylase, tobramycin and urine amylase were <0.8; the slope for lipase was >1.5, due to known methodology or standardization differences. Consequently, reference ranges of amylase, urine amylase and lipase required only minor or no modification. The four AU5822 analyzers independently evaluated at three sites showed consistent precision, linearity, and correlation results. Since installations, the test results had been well received by clinicians from all three institutions. Copyright © 2015. Published by Elsevier Inc.
Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.
1987-01-01
The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)
Cadamuro, Janne; Mrazek, Cornelia; Leichtle, Alexander B.; Kipman, Ulrike; Felder, Thomas K.; Wiedemann, Helmut; Oberkofler, Hannes; Fiedler, Georg M.; Haschke-Becher, Elisabeth
2017-01-01
Introduction Although centrifugation is performed in almost every blood sample, recommendations on duration and g-force are heterogeneous and mostly based on expert opinions. In order to unify this step in a fully automated laboratory, we aimed to evaluate different centrifugation settings and their influence on the results of routine clinical chemistry analytes. Materials and methods We collected blood from 41 healthy volunteers into BD Vacutainer PST II-heparin-gel- (LiHepGel), BD Vacutainer SST II-serum-, and BD Vacutainer Barricor heparin-tubes with a mechanical separator (LiHepBar). Tubes were centrifuged at 2000xg for 10 minutes and 3000xg for 7 and 5 minutes, respectively. Subsequently 60 and 21 clinical chemistry analytes were measured in plasma and serum samples, respectively, using a Roche COBAS instrument. Results High-sensitivity Troponin T, pregnancy-associated plasma protein A, β human chorionic gonadotropin and rheumatoid factor had to be excluded from statistical evaluation as many of the respective results were below the measuring range. Except for free haemoglobin (fHb) measurements, no analyte result was altered by the use of shorter centrifugation times at higher g-forces. Comparing LiHepBar to LiHepGel tubes at different centrifugation settings, we found higher lactate-dehydrogenase (LD) (P = 0.003 to < 0.001) and lower bicarbonate values (P = 0.049 to 0.008) in the latter. Conclusions Serum and heparin samples may be centrifuged at higher speed (3000xg) for a shorter amount of time (5 minutes) without alteration of the analytes tested in this study. When using LiHepBar tubes for blood collection, a separate LD reference value might be needed. PMID:29187797
Cadamuro, Janne; Mrazek, Cornelia; Leichtle, Alexander B; Kipman, Ulrike; Felder, Thomas K; Wiedemann, Helmut; Oberkofler, Hannes; Fiedler, Georg M; Haschke-Becher, Elisabeth
2018-02-15
Although centrifugation is performed in almost every blood sample, recommendations on duration and g-force are heterogeneous and mostly based on expert opinions. In order to unify this step in a fully automated laboratory, we aimed to evaluate different centrifugation settings and their influence on the results of routine clinical chemistry analytes. We collected blood from 41 healthy volunteers into BD Vacutainer PST II-heparin-gel- (LiHepGel), BD Vacutainer SST II-serum-, and BD Vacutainer Barricor heparin-tubes with a mechanical separator (LiHepBar). Tubes were centrifuged at 2000xg for 10 minutes and 3000xg for 7 and 5 minutes, respectively. Subsequently 60 and 21 clinical chemistry analytes were measured in plasma and serum samples, respectively, using a Roche COBAS instrument. High-sensitivity Troponin T, pregnancy-associated plasma protein A, β human chorionic gonadotropin and rheumatoid factor had to be excluded from statistical evaluation as many of the respective results were below the measuring range. Except for free haemoglobin (fHb) measurements, no analyte result was altered by the use of shorter centrifugation times at higher g-forces. Comparing LiHepBar to LiHepGel tubes at different centrifugation settings, we found higher lactate-dehydrogenase (LD) (P = 0.003 to < 0.001) and lower bicarbonate values (P = 0.049 to 0.008) in the latter. Serum and heparin samples may be centrifuged at higher speed (3000xg) for a shorter amount of time (5 minutes) without alteration of the analytes tested in this study. When using LiHepBar tubes for blood collection, a separate LD reference value might be needed.
Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan
2016-01-01
Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements. PMID:27112127
Pradines, Joël R; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan
2016-04-26
Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.
NASA Astrophysics Data System (ADS)
Pradines, Joël R.; Beccati, Daniela; Lech, Miroslaw; Ozug, Jennifer; Farutin, Victor; Huang, Yongqing; Gunay, Nur Sibel; Capila, Ishan
2016-04-01
Complex mixtures of molecular species, such as glycoproteins and glycosaminoglycans, have important biological and therapeutic functions. Characterization of these mixtures with analytical chemistry measurements is an important step when developing generic drugs such as biosimilars. Recent developments have focused on analytical methods and statistical approaches to test similarity between mixtures. The question of how much uncertainty on mixture composition is reduced by combining several measurements still remains mostly unexplored. Mathematical frameworks to combine measurements, estimate mixture properties, and quantify remaining uncertainty, i.e. a characterization extent, are introduced here. Constrained optimization and mathematical modeling are applied to a set of twenty-three experimental measurements on heparan sulfate, a mixture of linear chains of disaccharides having different levels of sulfation. While this mixture has potentially over two million molecular species, mathematical modeling and the small set of measurements establish the existence of nonhomogeneity of sulfate level along chains and the presence of abundant sulfate repeats. Constrained optimization yields not only estimations of sulfate repeats and sulfate level at each position in the chains but also bounds on these levels, thereby estimating the extent of characterization of the sulfation pattern which is achieved by the set of measurements.
RBioCloud: A Light-Weight Framework for Bioconductor and R-based Jobs on the Cloud.
Varghese, Blesson; Patel, Ishan; Barker, Adam
2015-01-01
Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimal effort, and how both the resources and the data required by a job can be managed. An open-source light-weight framework for executing R-scripts using Bioconductor packages, referred to as `RBioCloud', is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com.
Piérard, Gérald E; Courtois, Justine; Ritacco, Caroline; Humbert, Philippe; Fanian, Ferial; Piérard-Franchimont, Claudine
2015-01-01
Background In cosmetic science, noninvasive sampling of the upper part of the stratum corneum is conveniently performed using strippings with adhesive-coated discs (SACD) and cyanoacrylate skin surface strippings (CSSSs). Methods Under controlled conditions, it is possible to scrutinize SACD and CSSS with objectivity using appropriate methods of analytical morphology. These procedures apply to a series of clinical conditions including xerosis grading, comedometry, corneodynamics, corneomelametry, corneosurfametry, corneoxenometry, and dandruff assessment. Results With any of the analytical evaluations, SACD and CSSS provide specific salient information that is useful in the field of cosmetology. In particular, both methods appear valuable and complementary in assessing the human skin compatibility of personal skincare products. Conclusion A set of quantitative analytical methods applicable to the minimally invasive and low-cost SACD and CSSS procedures allow for a sound assessment of cosmetic effects on the stratum corneum. Under regular conditions, both methods are painless and do not induce adverse events. Globally, CSSS appears more precise and informative than the regular SACD stripping. PMID:25767402
The analytic setting today: using the couch or the chair?
Wiener, Jan
2015-09-01
This paper re-visits Murray Jackson's 1961 paper in the Journal of Analytical Psychology, 'Chair, couch and countertransference', with the aim of exploring the role of the couch for Jungian analysts in clinical practice today. Within the Society of Analytical Psychology (SAP) and some other London-based societies, there has been an evolution of practice from face-to-face sessions with the patient in the chair, as was Jung's preference, to a mode of practice where patients use the couch with the analyst sitting to the side rather than behind, as has been the tradition in psychoanalysis. Fordham was the founding member of the SAP and it was because of his liaison with psychoanalysis and psychoanalysts that this cultural shift came about. Using clinical examples, the author explores the couch/chair question in terms of her own practice and the internal setting as a structure in her mind. With reference to Bleger's (2013) paper 'Psychoanalysis of the psychoanalytic setting', the author discusses how the analytic setting, including use of the couch or the chair, can act as a silent container for the most primitive aspects of the patient's psyche which will only emerge in analysis when the setting changes or is breached. © 2015, The Society of Analytical Psychology.
Xu, Gaolian; Zhao, Hang; Cooper, Jonathan M; Reboud, Julien
2016-10-06
We demonstrate a multiplexed loop-mediated isothermal amplification (LAMP) assay for infectious disease diagnostics, in which the analytical processing of the target pathogens' genomic DNA is performed manually by moving magnetic beads through a series of plugs in a capillary. Heat is provided by a water bath and the results are read by the naked eye, enabling applications in low-resource settings.
Predicting Sasang Constitution Using Body-Shape Information
Jang, Eunsu; Do, Jun-Hyeong; Jin, HeeJeong; Park, KiHyun; Ku, Boncho; Lee, Siwoo; Kim, Jong Yeol
2012-01-01
Objectives. Body measurement plays a pivotal role not only in the diagnosis of disease but also in the classification of typology. Sasang constitutional medicine, which is one of the forms of Traditional Korean Medicine, is considered to be strongly associated with body shape. We attempted to determine whether a Sasang constitutional analytic tool based on body shape information (SCAT-B) could predict Sasang constitution (SC). Methods. After surveying 23 Oriental medical clinics, 2,677 subjects were recruited and body shape information was collected. The SCAT-Bs for males and females were developed using multinomial logistic regression. Stepwise forward-variable selection was applied using the score statistic and Wald's test. Results. The predictive rates of the SCAT-B for Tae-eumin (TE), Soeumin (SE), and Soyangin (SY) types in males and females were 80.2%, 56.9%, and 37.7% (males) and 69.3%, 38.9%, and 50.0% (females) in the training set and were 74%, 70.1%, and 35% (males), and 67.4%, 66.3%, and 53.7% (females) in the test set, respectively. Higher constitutional probability scores showed a trend for association with higher predictability. Conclusions. This study shows that the Sasang constitutional analytic tool, which is based on body shape information, may be relatively highly predictive of TE type but may be less predictive when used for SY type. PMID:22792124
[Quantitative spectrum analysis of characteristic gases of spontaneous combustion coal].
Liang, Yun-Tao; Tang, Xiao-Jun; Luo, Hai-Zhu; Sun, Yong
2011-09-01
Aimed at the characteristics of spontaneous combustion gases, such as the variety of gases involved, the low limits of detection required, and the critical safety requirements, Fourier transform infrared (FTIR) spectral analysis is presented to analyze the characteristic gases of spontaneous combustion. In this paper, the analysis method is first introduced by combining the characteristics of the analytes' absorption spectra with the analysis requirements. Parameter setting, sample preparation, feature variable extraction and analysis model building are taken into consideration. The methods of sample preparation, feature extraction and analysis modelling are introduced in detail. Eleven kinds of gases were then tested with a Tensor 27 spectrometer: CH4, C2H6, C3H8, iC4H10, nC4H10, C2H4, C3H6, C2H2, SF6, CO and CO2. The optical path length was 10 cm and the spectral resolution was set to 1 cm(-1). The test results show that the detection limit of all analytes is less than 2 x 10(-6). All detection limits meet the measurement requirements for spontaneous combustion gases, which means that FTIR may be an ideal instrument and the analysis method used in this paper is suitable for on-line measurement of spontaneous combustion gases.
Gray, Stephen J; Gallo, David A
2016-02-01
Belief in paranormal psychic phenomena is widespread in the United States, with over a third of the population believing in extrasensory perception (ESP). Why do some people believe, while others are skeptical? According to the cognitive differences hypothesis, individual differences in the way people process information about the world can contribute to the creation of psychic beliefs, such as differences in memory accuracy (e.g., selectively remembering a fortune teller's correct predictions) or analytical thinking (e.g., relying on intuition rather than scrutinizing evidence). While this hypothesis is prevalent in the literature, few have attempted to empirically test it. Here, we provided the most comprehensive test of the cognitive differences hypothesis to date. In 3 studies, we used online screening to recruit groups of strong believers and strong skeptics, matched on key demographics (age, sex, and years of education). These groups were then tested in laboratory and online settings using multiple cognitive tasks and other measures. Our cognitive testing showed that there were no consistent group differences on tasks of episodic memory distortion, autobiographical memory distortion, or working memory capacity, but skeptics consistently outperformed believers on several tasks tapping analytical or logical thinking as well as vocabulary. These findings demonstrate cognitive similarities and differences between these groups and suggest that differences in analytical thinking and conceptual knowledge might contribute to the development of psychic beliefs. We also found that psychic belief was associated with greater life satisfaction, demonstrating benefits associated with psychic beliefs and highlighting the role of both cognitive and noncognitive factors in understanding these individual differences.
Safety and Suitability for Service Assessment Testing for Surface and Underwater Launched Munitions
2014-12-05
test efficiency that tend to associate the Analytical S3 Test Approach with large, complex munition systems and the Empirical S3 Test Approach with... the smaller, less complex munition systems. 8.1 ANALYTICAL S3 TEST APPROACH. The Analytical S3 test approach, as shown in Figure 3, evaluates... assets than the Analytical S3 Test approach to establish the safety margin of the system. This approach is generally applicable to small munitions
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We also describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce them.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because often, the untargeted acquisition is followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits, without missing analytes. The obtained parameter settings from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization lead to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
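A minimal sketch of the screening step of such a design-of-experiments approach is shown below: a two-level full factorial over three processing parameters, with the main effect on hit count used to flag crucial parameters. Parameter names and the scoring function are hypothetical, not the vendor software's.

    import itertools
    import numpy as np

    # Hypothetical processing parameters and their low/high settings (assumptions,
    # not the actual parameter names of any specific software).
    factors = {
        "mass_tolerance_ppm": (2, 10),
        "min_peak_intensity": (1e3, 1e5),
        "isotope_fit_score": (0.5, 0.9),
    }

    def run_screening(settings):
        # Stand-in response: number of hits produced for a spiked test set.
        # In practice this would be the count returned by the data processing software.
        rng = np.random.default_rng(hash(tuple(settings.items())) % (2**32))
        base = 5000 / settings["min_peak_intensity"] ** 0.3
        return base * settings["mass_tolerance_ppm"] * (1.2 - settings["isotope_fit_score"]) + rng.normal(0, 5)

    # Full 2^3 factorial design.
    names = list(factors)
    runs, hits = [], []
    for levels in itertools.product(*[(-1, +1)] * len(names)):
        setting = {n: factors[n][0] if lv < 0 else factors[n][1] for n, lv in zip(names, levels)}
        runs.append(levels)
        hits.append(run_screening(setting))

    # Main effect of each factor = mean(high) - mean(low); large effects mark crucial parameters.
    runs, hits = np.array(runs), np.array(hits)
    for i, n in enumerate(names):
        effect = hits[runs[:, i] > 0].mean() - hits[runs[:, i] < 0].mean()
        print(f"{n:>22s}: main effect on hit count = {effect:8.1f}")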
NASA Technical Reports Server (NTRS)
Wilkenfeld, J. M.; Harlacher, B. L.; Mathews, D.
1982-01-01
A combined experimental and analytical program to develop system electrical test procedures for the qualification of spacecraft against damage produced by space-electron-induced discharges (EID) occurring on spacecraft dielectric outer surfaces is described. A review and critical evaluation of possible approaches to qualify spacecraft against EID is presented. A variety of possible schemes to simulate EID electromagnetic effects produced in spacecraft was studied. These techniques form the principal element of a provisional, recommended set of test procedures for the EID qualification of spacecraft. Significant gaps in our knowledge about EID which impact the final specification of an electrical test to qualify spacecraft against EID are also identified.
Bonanno, Lisa M.; Kwong, Tai C.; DeLouise, Lisa A.
2010-01-01
In this work we evaluate for the first time the performance of a label-free porous silicon (PSi) immunosensor assay in a blind clinical study designed to screen authentic patient urine specimens for a broad range of opiates. The PSi opiate immunosensor achieved 96% concordance with liquid chromatography-mass spectrometry/tandem mass spectrometry (LC-MS/MS) results on samples that underwent standard opiate testing (n=50). In addition, successful detection of a commonly abused opiate, oxycodone, resulted in 100% qualitative agreement between the PSi opiate sensor and LC-MS/MS. In contrast, a commercial broad opiate immunoassay technique (CEDIA®) achieved 65% qualitative concordance with LC-MS/MS. Evaluation of important performance attributes including precision, accuracy, and recovery was completed on blank urine specimens spiked with test analytes. Variability of morphine detection as a model opiate target was < 9% both within-run and between-day at and above the cutoff limit of 300 ng ml−1. This study validates the analytical screening capability of label-free PSi opiate immunosensors in authentic patient samples and is the first semi-quantitative demonstration of the technology’s successful clinical use. These results motivate future development of PSi technology to reduce complexity and cost of diagnostic testing particularly in a point-of-care setting. PMID:21062030
Analysis of structural dynamic data from Skylab. Volume 2: Skylab analytical and test model data
NASA Technical Reports Server (NTRS)
Demchak, L.; Harcrow, H.
1976-01-01
The orbital configuration test modal data, analytical test correlation modal data, and analytical flight configuration modal data are presented. Tables showing the generalized mass contributions (GMCs) for each of the thirty test modes are given along with the two-dimensional mode shape plots and tables of GMCs for the test-correlated analytical modes. The two-dimensional mode shape plots for the analytical modes and the uncoupled and coupled modes of the orbital flight configuration at three development phases of the model are included.
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
NASA Technical Reports Server (NTRS)
Kreshock, Andrew R.; Thornburgh, Robert P.; Wilbur, Matthew L.
2017-01-01
This paper presents the results from an ongoing effort to produce improved correlation between analytical hub force and moment prediction and those measured during wind-tunnel testing on the Aeroelastic Rotor Experimental System (ARES), a conventional rotor testbed commonly used at the Langley Transonic Dynamics Tunnel (TDT). A frequency-dependent transformation between loads at the rotor hub and outputs of the testbed balance is produced from frequency response functions measured during vibration testing of the system. The resulting transformation is used as a dynamic calibration of the balance to transform hub loads predicted by comprehensive analysis into predicted balance outputs. In addition to detailing the transformation process, this paper also presents a set of wind-tunnel test cases, with comparisons between the measured balance outputs and transformed predictions from the comprehensive analysis code CAMRAD II. The modal response of the testbed is discussed and compared to a detailed finite-element model. Results reveal that the modal response of the testbed exhibits a number of characteristics that make accurate dynamic balance predictions challenging, even with the use of the balance transformation.
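The core of the dynamic calibration can be sketched in a few lines: at each analysis frequency, the measured frequency response function (FRF) matrix maps predicted hub load amplitudes to predicted balance outputs. Dimensions and numerical values below are illustrative placeholders, not ARES data.

    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative dimensions: 6 hub load components, 6 balance output channels,
    # evaluated at a few rotor harmonics (frequencies and values are made up for the sketch).
    freqs_hz = np.array([4.3, 8.6, 12.9, 17.2])
    H = rng.standard_normal((freqs_hz.size, 6, 6)) \
        + 1j * rng.standard_normal((freqs_hz.size, 6, 6))   # measured FRFs: hub loads -> balance outputs

    # Complex amplitudes of analytically predicted hub loads at the same frequencies
    # (in practice, Fourier coefficients from the comprehensive analysis).
    hub_loads = rng.standard_normal((freqs_hz.size, 6)) + 1j * rng.standard_normal((freqs_hz.size, 6))

    # Dynamic calibration: predicted balance output = FRF matrix @ predicted hub load,
    # applied frequency by frequency.
    balance_pred = np.einsum("fij,fj->fi", H, hub_loads)
    for f, b in zip(freqs_hz, np.abs(balance_pred)):
        print(f"{f:5.1f} Hz  |balance channels| = " + " ".join(f"{v:6.2f}" for v in b))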
Modal testing of a rotating wind turbine
NASA Astrophysics Data System (ADS)
Carne, T. G.; Nord, A. R.
1982-11-01
A testing technique was developed to measure the modes of vibration of a rotating vertical-axis wind turbine. This technique was applied to the Sandia Two-Meter Turbine, where the changes in individual modal frequencies as a function of the rotational speed were tracked from 0 rpm (parked) to 600 rpm. During rotational testing, the structural response was measured using a combination of strain gages and accelerometers, passing the signals through slip rings. Excitation of the turbine structure was provided by a scheme which suddenly released a pretensioned cable, thus plucking the turbine as it was rotating at a set speed. In addition to calculating the real modes of the parked turbine, the modes of the rotating turbine were also determined at several rotational speeds. The modes of the rotating system proved to be complex due to centrifugal and Coriolis effects. The modal data for the parked turbine were used to update a finite-element model. Also, the measured modal parameters for the rotating turbine were compared to the analytical results, thus verifying the analytical procedures used to incorporate the effects of the rotating coordinate system.
Larson, Nicholas B; McDonnell, Shannon; Cannon Albright, Lisa; Teerlink, Craig; Stanford, Janet; Ostrander, Elaine A; Isaacs, William B; Xu, Jianfeng; Cooney, Kathleen A; Lange, Ethan; Schleutker, Johanna; Carpten, John D; Powell, Isaac; Bailey-Wilson, Joan E; Cussenot, Olivier; Cancel-Tassin, Geraldine; Giles, Graham G; MacInnis, Robert J; Maier, Christiane; Whittemore, Alice S; Hsieh, Chih-Lin; Wiklund, Fredrik; Catalona, William J; Foulkes, William; Mandal, Diptasri; Eeles, Rosalind; Kote-Jarai, Zsofia; Ackerman, Michael J; Olson, Timothy M; Klein, Christopher J; Thibodeau, Stephen N; Schaid, Daniel J
2017-05-01
Next-generation sequencing technologies have afforded unprecedented characterization of low-frequency and rare genetic variation. Due to low power for single-variant testing, aggregative methods are commonly used to combine observed rare variation within a single gene. Causal variation may also aggregate across multiple genes within relevant biomolecular pathways. Kernel-machine regression and adaptive testing methods for aggregative rare-variant association testing have been demonstrated to be powerful approaches for pathway-level analysis, although these methods tend to be computationally intensive at high-variant dimensionality and require access to complete data. An additional analytical issue in scans of large pathway definition sets is multiple testing correction. Gene set definitions may exhibit substantial genic overlap, and the impact of the resultant correlation in test statistics on Type I error rate control for large agnostic gene set scans has not been fully explored. Herein, we first outline a statistical strategy for aggregative rare-variant analysis using component gene-level linear kernel score test summary statistics as well as derive simple estimators of the effective number of tests for family-wise error rate control. We then conduct extensive simulation studies to characterize the behavior of our approach relative to direct application of kernel and adaptive methods under a variety of conditions. We also apply our method to two case-control studies, respectively, evaluating rare variation in hereditary prostate cancer and schizophrenia. Finally, we provide open-source R code for public use to facilitate easy application of our methods to existing rare-variant analysis results. © 2017 WILEY PERIODICALS, INC.
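The multiple-testing step can be illustrated with a generic eigenvalue-based estimate of the effective number of independent tests (in the spirit of Li and Ji, 2005) computed from the correlation matrix of gene-set statistics; this choice is an assumption for the sketch and not necessarily the estimator derived in the paper.

    import numpy as np

    def effective_number_of_tests(corr):
        # Generic eigenvalue-based estimate of the effective number of independent
        # tests for correlated test statistics (Li & Ji-style heuristic).
        lam = np.abs(np.linalg.eigvalsh(corr))
        return float(np.sum((lam >= 1).astype(float) + (lam - np.floor(lam))))

    # Toy correlation structure for 5 overlapping gene-set statistics (illustrative values).
    R = np.array([
        [1.0, 0.8, 0.6, 0.1, 0.0],
        [0.8, 1.0, 0.5, 0.1, 0.0],
        [0.6, 0.5, 1.0, 0.2, 0.1],
        [0.1, 0.1, 0.2, 1.0, 0.3],
        [0.0, 0.0, 0.1, 0.3, 1.0],
    ])
    m_eff = effective_number_of_tests(R)
    alpha = 0.05
    print(f"effective number of tests: {m_eff:.2f}")
    print(f"Sidak-adjusted per-test threshold: {1 - (1 - alpha) ** (1 / m_eff):.4f}")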
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
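The verification idea, comparing a numerical solution with a closed-form analytical one and asserting an error tolerance, can be shown with a minimal example that is not a PFLOTRAN benchmark: explicit finite differences for one-dimensional diffusion checked against the erfc solution.

    import numpy as np
    from math import erfc, sqrt

    # 1-D diffusion into a semi-infinite domain with a fixed-concentration boundary.
    # Analytical solution: C(x,t) = C0 * erfc(x / (2*sqrt(D*t))).
    D, C0 = 1e-9, 1.0                      # diffusivity (m^2/s), boundary concentration
    L, nx = 0.05, 201                      # domain length (m), grid points
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx * dx / D                 # below the explicit stability limit 0.5*dx^2/D
    t_end = 86400.0                        # one day

    C = np.zeros(nx)
    C[0] = C0                              # fixed inlet; far boundary stays ~0 (semi-infinite)
    t = 0.0
    while t < t_end:
        C[1:-1] += D * dt / dx**2 * (C[2:] - 2.0 * C[1:-1] + C[:-2])
        t += dt

    exact = np.array([C0 * erfc(xi / (2.0 * sqrt(D * t))) for xi in x])
    err = np.max(np.abs(C - exact))
    print(f"max absolute error vs analytical solution: {err:.3e}")
    assert err < 1e-2, "verification benchmark failed"   # loose tolerance for this coarse sketch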
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
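For context, the standard single-parameter baseline against which such finite-range results are usually compared is the generalized Wigner surmise for the terrace-width distribution; this is background only, not the extended expression derived in the paper:

    P(s) = a_{\varrho}\, s^{\varrho}\, e^{-b_{\varrho} s^{2}}, \qquad
    \int_{0}^{\infty} P(s)\, ds = 1, \qquad \int_{0}^{\infty} s\, P(s)\, ds = 1 .

Here \varrho is set by the dimensionless strength of the nearest-neighbor step-step repulsion, and a_{\varrho}, b_{\varrho} follow from normalization and unit mean terrace width.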
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Moitra, Stuti
1996-01-01
Various tridiagonal solvers have been proposed in recent years for different parallel platforms. In this paper, the performance of three tridiagonal solvers, namely, the parallel partition LU algorithm, the parallel diagonal dominant algorithm, and the reduced diagonal dominant algorithm, is studied. These algorithms are designed for distributed-memory machines and are tested on Intel Paragon and IBM SP2 machines. Measured results are reported in terms of execution time and speedup. Analytical studies are conducted for different communication topologies and for different tridiagonal systems. The measured results match the analytical results closely. In addition to addressing implementation issues, performance considerations such as problem sizes and models of speedup are also discussed.
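For reference, the serial baseline that such parallel partition methods decompose is the Thomas algorithm; the Python sketch below shows it and checks the result against a dense solve. It is a baseline illustration only, not the parallel algorithms studied in the paper.

    import numpy as np

    def thomas(a, b, c, d):
        # Solve a tridiagonal system with sub-diagonal a (len n-1), diagonal b (len n),
        # super-diagonal c (len n-1) and right-hand side d (len n). Serial O(n) baseline;
        # parallel partition/diagonal-dominant algorithms split this recurrence across processors.
        n = len(b)
        cp, dp = np.empty(n - 1), np.empty(n)
        cp[0] = c[0] / b[0]
        dp[0] = d[0] / b[0]
        for i in range(1, n):
            m = b[i] - a[i - 1] * cp[i - 1]
            if i < n - 1:
                cp[i] = c[i] / m
            dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Quick check against a dense solve on a small diagonally dominant system.
    n = 8
    rng = np.random.default_rng(4)
    a, c = rng.random(n - 1), rng.random(n - 1)
    b = 4.0 + rng.random(n)
    d = rng.random(n)
    A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
    print("max diff vs dense solve:", np.max(np.abs(thomas(a, b, c, d) - np.linalg.solve(A, d))))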
Pumping tests in non-uniform aquifers - the linear strip case
Butler, J.J.; Liu, W.Z.
1991-01-01
Many pumping tests are performed in geologic settings that can be conceptualized as a linear infinite strip of one material embedded in a matrix of differing flow properties. A semi-analytical solution is presented to aid the analysis of drawdown data obtained from pumping tests performed in settings that can be represented by such a conceptual model. Integral transform techniques are employed to obtain a solution in transform space that can be numerically inverted to real space. Examination of the numerically transformed solution reveals several interesting features of flow in this configuration. If the transmissivity of the strip is much higher than that of the matrix, linear and bilinear flow are the primary flow regimes during a pumping test. If the contrast between matrix and strip properties is not as extreme, then radial flow should be the primary flow mechanism. Sensitivity analysis is employed to develop insight into the controls on drawdown in this conceptual model and to demonstrate the importance of temporal and spatial placement of observations. Changes in drawdown are sensitive to the transmissivity of the strip for a limited time duration. After that time, only the total drawdown remains a function of strip transmissivity. In the case of storativity, both the total drawdown and changes in drawdown are sensitive to the storativity of the strip for a time of quite limited duration. After that time, essentially no information can be gained about the storage properties of the strip from drawdown data. An example analysis is performed using data previously presented in the literature to demonstrate the viability of the semi-analytical solution and to illustrate a general procedure for analysis of drawdown data in complex geologic settings. This example reinforces the importance of observation well placement and the time of data collection in constraining parameter correlation, a major source of the uncertainty that arises in the parameter estimation procedure. © 1991.
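A useful point of reference is the uniform-aquifer, radial-flow limit to which such a strip solution should reduce when strip and matrix properties coincide; the Theis solution below gives that baseline drawdown, with scipy's exponential integral supplying the well function. Parameter values are illustrative only.

    import numpy as np
    from scipy.special import exp1

    def theis_drawdown(r, t, Q, T, S):
        # Drawdown (m) at radius r (m) and time t (s) for pumping rate Q (m^3/s)
        # in a uniform confined aquifer of transmissivity T (m^2/s) and storativity S.
        # Radial-flow baseline for comparison with non-uniform (e.g. strip) settings.
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    # Illustrative values only.
    Q, T, S = 0.01, 5e-4, 1e-4
    for t in [60.0, 600.0, 3600.0, 86400.0]:
        print(f"t = {t:8.0f} s   s(r = 30 m) = {theis_drawdown(30.0, t, Q, T, S):.3f} m")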
WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL
The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. The WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...
Benchmarking an unstructured grid sediment model in an energetic estuary
Lopez, Jesse E.; Baptista, António M.
2016-12-14
A sediment model coupled to the hydrodynamic model SELFE is validated against a benchmark combining a set of idealized tests and an application to a field-data-rich energetic estuary. After sensitivity studies, model results for the idealized tests largely agree with previously reported results from other models in addition to analytical, semi-analytical, or laboratory results. Results of suspended sediment in an open channel test with fixed bottom are sensitive to turbulence closure and to the treatment of the hydrodynamic bottom boundary. Results for the migration of a trench are very sensitive to critical stress and erosion rate, but largely insensitive to turbulence closure. The model is able to qualitatively represent sediment dynamics associated with estuarine turbidity maxima in an idealized estuary. Applied to the Columbia River estuary, the model qualitatively captures sediment dynamics observed by fixed stations and shipborne profiles. Representation of the vertical structure of suspended sediment degrades when stratification is underpredicted. Across all tests, skill metrics of suspended sediments lag those of hydrodynamics even when qualitatively representing dynamics. The benchmark is fully documented in an openly available repository to encourage unambiguous comparisons against other models.
Routine development of objectively derived search strategies.
Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael
2012-02-29
Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies
Liu, Zhonghua; Lin, Xihong
2017-01-01
Summary We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391
Multiple phenotype association tests using summary statistics in genome-wide association studies.
Liu, Zhonghua; Lin, Xihong
2018-03-01
We study in this article jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. © 2017, The International Biometric Society.
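A minimal sketch of the general idea, assuming per-phenotype z-scores for a variant and a between-phenotype correlation matrix estimated from genome-wide summary statistics. The two statistics shown (a one-degree-of-freedom common-mean test and a K-degree-of-freedom omnibus test) are standard summary-statistic tests, not the authors' exact mixed-model statistic that jointly tests a common mean and a variance component.

```python
# z: per-phenotype z-scores for one variant; R: between-phenotype correlation matrix,
# e.g. estimated from z-scores of approximately null variants across the genome.
import numpy as np
from scipy import stats

def multi_phenotype_tests(z, R):
    z = np.asarray(z, dtype=float)
    Rinv = np.linalg.inv(R)
    k = len(z)
    ones = np.ones(k)
    # common-mean (burden-style) test: one degree of freedom
    t_mean = (ones @ Rinv @ z) ** 2 / (ones @ Rinv @ ones)
    p_mean = stats.chi2.sf(t_mean, df=1)
    # omnibus test: K degrees of freedom
    t_omni = z @ Rinv @ z
    p_omni = stats.chi2.sf(t_omni, df=k)
    return p_mean, p_omni

R = np.array([[1.0, 0.4, 0.2],
              [0.4, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
print(multi_phenotype_tests([2.1, 1.8, -0.5], R))
```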
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this Part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
40 CFR 136.6 - Method modifications and analytical requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) GUIDELINES ESTABLISHING TEST PROCEDURES FOR THE ANALYSIS OF POLLUTANTS § 136.6 Method... person or laboratory using a test procedure (analytical method) in this part. (2) Chemistry of the method means the reagents and reactions used in a test procedure that allow determination of the analyte(s) of...
Novel predictive models for metabolic syndrome risk: a "big data" analytic approach.
Steinberg, Gregory B; Church, Bruce W; McCall, Carol J; Scott, Adam B; Kalis, Brian P
2014-06-01
We applied a proprietary "big data" analytic platform--Reverse Engineering and Forward Simulation (REFS)--to dimensions of metabolic syndrome extracted from a large data set compiled from Aetna's databases for 1 large national customer. Our goals were to accurately predict subsequent risk of metabolic syndrome and its various factors on both a population and individual level. The study data set included demographic, medical claim, pharmacy claim, laboratory test, and biometric screening results for 36,944 individuals. The platform reverse-engineered functional models of systems from diverse and large data sources and provided a simulation framework for insight generation. The platform interrogated data sets from the results of 2 Comprehensive Metabolic Syndrome Screenings (CMSSs) as well as complete coverage records; complete data from medical claims, pharmacy claims, and lab results for 2010 and 2011; and responses to health risk assessment questions. The platform predicted subsequent risk of metabolic syndrome, both overall and by risk factor, on population and individual levels, with ROC/AUC varying from 0.80 to 0.88. We demonstrated that improving waist circumference and blood glucose yielded the largest benefits on subsequent risk and medical costs. We also showed that adherence to prescribed medications and, particularly, adherence to routine scheduled outpatient doctor visits, reduced subsequent risk. The platform generated individualized insights using available heterogeneous data within 3 months. The accuracy and short time to insight with this type of analytic platform allowed Aetna to develop targeted cost-effective care management programs for individuals with or at risk for metabolic syndrome.
Expanding the analyte set of the JPL Electronic Nose to include inorganic compounds
NASA Technical Reports Server (NTRS)
Ryan, M. A.; Homer, M. L.; Zhou, H.; Mannat, K.; Manfreda, A.; Kisor, A.; Shevade, A.; Yen, S. P. S.
2005-01-01
An array-based sensing system based on 32 polymer/carbon composite conductometric sensors is under development at JPL. Until the present phase of development, the analyte set has focused on organic compounds and a few selected inorganic compounds, notably ammonia and hydrazine.
ERIC Educational Resources Information Center
Tuttle, Christina Clark; Teh, Bing-ru; Nichols-Barrer, Ira; Gill, Brian P.; Gleason, Philip
2010-01-01
In this set of four supplemental tables, the authors compare the baseline test scores of the treatment and matched control group samples observed in each year after KIPP entry (outcome years 1 to 4). As discussed in Chapter III, the authors used an iterative propensity score estimation procedure to calculate each student's probability of entering…
2013-01-01
settings that cover the range of environmental conditions in which the rations are expected to function. These vitally important state-of-the-art ... Warfighter Nutrition Modeling (continued from page 1). (Health Affairs) and the Joint Culinary Center of ... Food Pilot Plant for production and testing of food to facilitate state-of-the-art ration development. The Analytical Microbiology and Food
Andersson, Maria; Stephanson, Nikolai; Ohman, Inger; Terzuoli, Tommy; Lindh, Jonatan D; Beck, Olof
2014-04-01
Opiates comprise a class of abused drugs that is of primary interest in clinical and forensic urine drug testing. Determination of heroin, codeine, or a multi-drug ingestion is complicated since both heroin and codeine can lead to urinary excretion of free and conjugated morphine. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) offers advantages over gas chromatography-mass spectrometry by simplifying sample preparation but increases the number of analytes. A method based on direct injection of five-fold diluted urine for confirmation of morphine, morphine-3-glucuronide, morphine-6-glucuronide, codeine, codeine-6-glucuronide and 6-acetylmorphine was validated using LC-MS/MS in positive electrospray mode monitoring two transitions using selected reaction monitoring. The method was applied for the analysis of 3155 unknown urine samples which were positive for opiates in immunochemical screening. A linear response was observed for all compounds in the calibration curves covering more than three orders of magnitude. The cut-off was set to 2 ng/ml for 6-acetylmorphine and 150 ng/ml for the other analytes. 6-Acetylmorphine was found to be effective (sensitivity 82%) in identifying samples indicating heroin intake. Morphine-3-glucuronide and codeine-6-glucuronide were the predominant components of total morphine and codeine, 84% and 93%, respectively. The authors have validated a robust LC-MS/MS method for rapid qualitative and quantitative analysis of opiates in urine. 6-Acetylmorphine has been demonstrated as a sensitive and important parameter for heroin intake. A possible interpretation strategy to determine the source of detected analytes was proposed. The method might be further developed by reducing the number of analytes to morphine-3-glucuronide, codeine-6-glucuronide and 6-acetylmorphine without compromising test performance. Copyright © 2013 John Wiley & Sons, Ltd.
The HARPS-N archive through a Cassandra, NoSQL database suite?
NASA Astrophysics Data System (ADS)
Molinari, Emilio; Guerra, Jose; Harutyunyan, Avet; Lodi, Marcello; Martin, Adrian
2016-07-01
The TNG-INAF is developing the science archive for the WEAVE instrument. The underlying architecture of the archive is based on a non-relational database, more precisely, on an Apache Cassandra cluster, which uses a NoSQL technology. In order to test and validate the use of this architecture, we created a local archive which we populated with all the HARPS-N spectra collected at the TNG since the instrument's start of operations in mid-2012, and developed tools for the analysis of this data set. The HARPS-N data set is two orders of magnitude smaller than WEAVE, but we want to demonstrate the ability to walk through a complete data set and produce scientific output as valuable as that produced by an ordinary pipeline, though without directly accessing the FITS files. The analytics are performed with Apache Solr and Spark and on a relational PostgreSQL database. As an example, we produce observables like metallicity indexes for the targets in the archive and compare the results with the ones coming from the HARPS-N regular data reduction software. The aim of this experiment is to explore the viability of a high-availability cluster and distributed NoSQL database as a platform for complex scientific analytics on a large data set, which will then be ported to the WEAVE Archive System (WAS) which we are developing for the WEAVE multi-object fiber spectrograph.
The impact of surface area, volume, curvature, and Lennard-Jones potential to solvation modeling.
Nguyen, Duc D; Wei, Guo-Wei
2017-01-05
This article explores the impact of surface area, volume, curvature, and Lennard-Jones (LJ) potential on solvation free energy predictions. Rigidity surfaces are utilized to generate robust analytical expressions for maximum, minimum, mean, and Gaussian curvatures of solvent-solute interfaces, and define a generalized Poisson-Boltzmann (GPB) equation with a smooth dielectric profile. Extensive correlation analysis is performed to examine the linear dependence of surface area, surface enclosed volume, maximum curvature, minimum curvature, mean curvature, and Gaussian curvature for solvation modeling. It is found that surface area and surface enclosed volume are highly correlated with each other, and poorly correlated with the various curvatures, for six test sets of molecules. Different curvatures are weakly correlated to each other across the six test sets of molecules, but are strongly correlated to each other within each test set of molecules. Based on correlation analysis, we construct twenty-six nontrivial nonpolar solvation models. Our numerical results reveal that the LJ potential plays a vital role in nonpolar solvation modeling, especially for molecules involving strong van der Waals interactions. It is found that curvatures are at least as important as surface area or surface enclosed volume in nonpolar solvation modeling. In conjunction with the GPB model, various curvature-based nonpolar solvation models are shown to offer some of the best solvation free energy predictions for a wide range of test sets. For example, root mean square errors from a model comprising surface area, volume, mean curvature, and LJ potential are less than 0.42 kcal/mol for all test sets. © 2016 Wiley Periodicals, Inc.
Wang, Xuefeng; Lee, Seunggeun; Zhu, Xiaofeng; Redline, Susan; Lin, Xihong
2013-12-01
Family-based genetic association studies of related individuals provide opportunities to detect genetic variants that complement studies of unrelated individuals. Most statistical methods for family association studies for common variants are single-marker based, testing one SNP at a time. In this paper, we consider testing the effect of an SNP set, e.g., SNPs in a gene, in family studies, for both continuous and discrete traits. Specifically, we propose a generalized estimating equations (GEE) based kernel association test, a variance component based testing method, to test for the association between a phenotype and multiple variants in an SNP set jointly using family samples. The proposed approach allows for both continuous and discrete traits, where the correlation among family members is taken into account through the use of an empirical covariance estimator. We derive the theoretical distribution of the proposed statistic under the null and develop analytical methods to calculate the P-values. We also propose an efficient resampling method for correcting for small sample size bias in family studies. The proposed method allows for easily incorporating covariates and SNP-SNP interactions. Simulation studies show that the proposed method properly controls for type I error rates under both random and ascertained sampling schemes in family studies. We demonstrate through simulation studies that our approach has superior performance for association mapping compared to the single-marker-based minimum P-value GEE test for an SNP-set effect over a range of scenarios. We illustrate the application of the proposed method using data from the Cleveland Family GWAS Study. © 2013 WILEY PERIODICALS, INC.
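A minimal sketch of a variance-component (kernel) score statistic for an SNP set. For simplicity it uses ordinary least-squares residuals and a permutation p-value, so it ignores the within-family correlation and the analytical null distribution that the proposed GEE-based test handles; all variable names and the toy data are illustrative.

```python
import numpy as np

def kernel_score_test(y, X, G, n_perm=2000, seed=0):
    """Permutation p-value for the kernel score statistic Q = r' K r."""
    rng = np.random.default_rng(seed)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # null model: covariates only
    r = y - X @ beta
    K = G @ G.T                                    # linear kernel over the SNP set
    q_obs = r @ K @ r
    count = 0
    for _ in range(n_perm):
        rp = rng.permutation(r)                    # note: permutation breaks family structure
        if rp @ K @ rp >= q_obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# toy example: 200 samples, intercept + 1 covariate, 10 SNPs in the set
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
G = rng.binomial(2, 0.3, size=(200, 10)).astype(float)
y = 0.4 * G[:, 0] + rng.normal(size=200)
print(kernel_score_test(y, X, G))
```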
Aarsand, Aasne K; Villanger, Jørild H; Støle, Egil; Deybach, Jean-Charles; Marsden, Joanne; To-Figueras, Jordi; Badminton, Mike; Elder, George H; Sandberg, Sverre
2011-11-01
The porphyrias are a group of rare metabolic disorders whose diagnosis depends on identification of specific patterns of porphyrin precursor and porphyrin accumulation in urine, blood, and feces. Diagnostic tests for porphyria are performed by specialized laboratories in many countries. Data regarding the analytical and diagnostic performance of these laboratories are scarce. We distributed 5 sets of multispecimen samples from different porphyria patients accompanied by clinical case histories to 18-21 European specialist porphyria laboratories/centers as part of a European Porphyria Network organized external analytical and postanalytical quality assessment (EQA) program. The laboratories stated which analyses they would normally have performed given the case histories and reported results of all porphyria-related analyses available, interpretative comments, and diagnoses. Reported diagnostic strategies initially showed considerable diversity, but the number of laboratories applying adequate diagnostic strategies increased during the study period. We found an average interlaboratory CV of 50% (range 12%-152%) for analytes in absolute concentrations. Result normalization by forming ratios to the upper reference limits did not reduce this variation. Sixty-five percent of reported results were within biological variation-based analytical quality specifications. Clinical interpretation of the obtained analytical results was accurate, and most laboratories established the correct diagnosis in all distributions. Based on a case-based EQA scheme, variations were apparent in analytical and diagnostic performance between European specialist porphyria laboratories. Our findings reinforce the use of EQA schemes as an essential tool to assess both analytical and diagnostic processes and thereby to improve patient care in rare diseases.
Influence versus intent for predictive analytics in situation awareness
NASA Astrophysics Data System (ADS)
Cui, Biru; Yang, Shanchieh J.; Kadar, Ivan
2013-05-01
Predictive analytics in situation awareness requires an element to comprehend and anticipate potential adversary activities that might occur in the future. Most work in high level fusion or predictive analytics utilizes machine learning, pattern mining, Bayesian inference, and decision tree techniques to predict future actions or states. The emergence of social computing in broader contexts has drawn interest in bringing the hypotheses and techniques from social theory to algorithmic and computational settings for predictive analytics. This paper aims at answering the question of how the influence and attitude (the latter sometimes interpreted as intent) of adversarial actors can be formulated and computed algorithmically, as a higher level fusion process to provide predictions of future actions. The challenges in this interdisciplinary endeavor include drawing on existing understanding of influence and attitude in both the social science and computing fields, as well as the mathematical and computational formulation for the specific context of the situation to be analyzed. The study of `influence' has resurfaced in recent years due to the emergence of social networks in the virtualized cyber world. Theoretical analysis and techniques developed in this area are discussed in this paper in the context of predictive analysis. Meanwhile, the notion of intent, or `attitude' in social theory terminology, is a relatively uncharted area in the computing field. Note that a key objective of predictive analytics is to identify impending/planned attacks so their `impact' and `threat' can be prevented. In this spirit, indirect and direct observables are drawn and derived to infer the influence network and attitude to predict future threats. This work proposes an integrated framework that jointly assesses adversarial actors' influence network and their attitudes as a function of past actions and action outcomes. A preliminary set of algorithms is developed and tested using the Global Terrorism Database (GTD). Our results reveal the benefits of performing joint predictive analytics with both attitude and influence. At the same time, we discover significant challenges in deriving influence and attitude from indirect observables for diverse adversarial behavior. These observations warrant further investigation of the optimal use of influence and attitude for predictive analytics, as well as the potential inclusion of other environmental or capability elements for the actors.
García-Blanco, Ana; Peña-Bautista, Carmen; Oger, Camille; Vigor, Claire; Galano, Jean-Marie; Durand, Thierry; Martín-Ibáñez, Nuria; Baquero, Miguel; Vento, Máximo; Cháfer-Pericás, Consuelo
2018-07-01
Lipid peroxidation plays an important role in Alzheimer Disease, so corresponding metabolites found in urine samples could be potential biomarkers. The aim of this work is to develop a reliable ultra-performance liquid chromatography-tandem mass spectrometry analytical method to determine a new set of lipid peroxidation compounds in urine samples. Excellent sensitivity was achieved with limits of detection between 0.08 and 17 nmol L⁻¹, which renders this method suitable to monitor analyte concentrations in real samples. The method's precision was satisfactory with coefficients of variation around 5-17% (intra-day) and 8-19% (inter-day). The accuracy of the method was assessed by analysis of spiked urine samples, obtaining recoveries between 70% and 120% for most of the analytes. The utility of the described method was tested by analyzing urine samples from patients early diagnosed with mild cognitive impairment or mild dementia Alzheimer Disease following the clinical standard criteria. As preliminary results, some analytes (17(RS)-10-epi-SC-Δ¹⁵-11-dihomo-IsoF, PGE₂) and total parameters (Neuroprostanes, Isoprostanes, Isofurans) show differences between the control and the clinical groups. Thus, these analytes could be potential early Alzheimer Disease biomarkers assessing the patients' pro-oxidant condition. Copyright © 2018 Elsevier B.V. All rights reserved.
Mons, M N; Heringa, M B; van Genderen, J; Puijker, L M; Brand, W; van Leeuwen, C J; Stoks, P; van der Hoek, J P; van der Kooij, D
2013-03-15
Ongoing pollution and improving analytical techniques reveal more and more anthropogenic substances in drinking water sources, and incidentally in treated water as well. In fact, complete absence of any trace pollutant in treated drinking water is an illusion, as current analytical techniques are capable of detecting very low concentrations. Most of the substances detected lack toxicity data to derive safe levels and have not yet been regulated. Although the concentrations in treated water usually do not have adverse health effects, their presence is still undesired because of customer perception. This leads to the question of how sensitive analytical methods need to become for water quality screening, at what levels water suppliers need to take action, and how effective treatment methods need to be designed to remove contaminants sufficiently. Therefore, in the Netherlands a clear and consistent approach called 'Drinking Water Quality for the 21st century (Q21)' has been developed within the joint research program of the drinking water companies. Target values for anthropogenic drinking water contaminants were derived by using the recently introduced Threshold of Toxicological Concern (TTC) approach. The target values for individual genotoxic and steroid endocrine chemicals were set at 0.01 μg/L. For all other organic chemicals the target values were set at 0.1 μg/L. The target values for the total sum of genotoxic chemicals, the total sum of steroid hormones and the total sum of all other organic compounds were set at 0.01, 0.01 and 1.0 μg/L, respectively. The Dutch Q21 approach is further supplemented by the standstill-principle and effect-directed testing. The approach is helpful in defining the goals and limits of future treatment process designs and of analytical methods to further improve and ensure the quality of drinking water, without going to unnecessary extents. Copyright © 2013 Elsevier Ltd. All rights reserved.
Effectiveness of a Risk Screener in Identifying Hepatitis C Virus in a Primary Care Setting
Litwin, Alain H.; Smith, Bryce D.; Koppelman, Elisa A.; McKee, M. Diane; Christiansen, Cindy L.; Gifford, Allen L.; Weinbaum, Cindy M.; Southern, William N.
2012-01-01
Objectives. We evaluated an intervention designed to identify patients at risk for hepatitis C virus (HCV) through a risk screener used by primary care providers. Methods. A clinical reminder sticker prompted physicians at 3 urban clinics to screen patients for 12 risk factors and order HCV testing if any risks were present. Risk factor data were collected from the sticker; demographic and testing data were extracted from electronic medical records. We used the t test, χ2 test, and rank-sum test to compare patients who had and had not been screened and developed an analytic model to identify the incremental value of each element of the screener. Results. Among screened patients, 27.8% (n = 902) were identified as having at least 1 risk factor. Of screened patients with risk factors, 55.4% (n = 500) were tested for HCV. Our analysis showed that 7 elements (injection drug use, intranasal drug use, elevated alanine aminotransferase, transfusions before 1992, ≥ 20 lifetime sex partners, maternal HCV, existing liver disease) accounted for all HCV infections identified. Conclusions. A brief risk screener with a paper-based clinical reminder was effective in increasing HCV testing in a primary care setting. PMID:22994166
NASA Astrophysics Data System (ADS)
Fewtrell, Timothy; Bates, Paul; Horritt, Matthew
2010-05-01
This abstract describes the development of a new set of equations derived from 1D shallow water theory for use in 2D storage cell inundation models. The new equation set is designed to be solved explicitly at very low computational cost, and is here tested against a suite of four analytical and numerical test cases of increasing complexity. In each case the predicted water depths compare favourably to analytical solutions or to benchmark results from the optimally stable diffusive storage cell code of Hunter et al. (2005). For the most complex test, involving the fine spatial resolution simulation of flow in a topographically complex urban area, the Root Mean Squared Difference between the new formulation and the model of Hunter et al. is ~1 cm. However, unlike diffusive storage cell codes where the stable time step scales with (1/Δx)², the new equation set developed here represents shallow water wave propagation and so the stability is controlled by the Courant-Friedrichs-Lewy condition, such that the stable time step instead scales with 1/Δx. This allows use of a stable time step that is 1-3 orders of magnitude greater for typical cell sizes than that possible with diffusive storage cell models and results in commensurate reductions in model run times. The maximum speed-up achieved over a diffusive storage cell model was 1120x in these tests, although the actual value seen will depend on model resolution, water depth and surface gradient. Solutions using the new equation set are shown to be relatively grid-independent for the conditions considered, given the numerical diffusion likely at coarse model resolution. In addition, the inertial formulation appears to have an intuitively correct sensitivity to friction; however, small instabilities and increased errors in predicted depth were noted when Manning's n = 0.01. These small instabilities are likely to be a result of the numerical scheme employed, whereby friction acts to stabilise the solution, although this scheme is still widely used in practice. The new equations are likely to find widespread application in many types of flood inundation modelling and should provide a useful additional tool, alongside more established model formulations, for a variety of flood risk management studies.
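A minimal sketch of the kind of explicit inertial flux update and CFL-type time-step limit described above, for a single 1D cell interface. This follows the general form of an inertial simplification of the shallow water equations rather than the authors' actual 2D storage cell code, and all parameter values are illustrative.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def interface_flux(q_old, h_left, h_right, z_left, z_right, dx, dt, n_mann):
    """Update unit-width discharge q [m^2/s] across one cell interface."""
    # effective flow depth between the two cells
    h_flow = max(h_left + z_left, h_right + z_right) - max(z_left, z_right)
    if h_flow <= 0.0:
        return 0.0
    slope = ((h_right + z_right) - (h_left + z_left)) / dx   # water-surface slope
    # explicit inertial update with semi-implicit Manning friction term
    return (q_old - g * h_flow * dt * slope) / \
           (1.0 + g * h_flow * dt * n_mann ** 2 * abs(q_old) / h_flow ** (10.0 / 3.0))

def stable_dt(dx, h_max, alpha=0.7):
    """CFL-type limit: dt scales with dx, not dx**2 as in diffusive storage cells."""
    return alpha * dx / math.sqrt(g * max(h_max, 0.01))

dt = stable_dt(dx=10.0, h_max=1.0)
print(interface_flux(0.0, 1.0, 0.5, 0.0, 0.0, 10.0, dt, n_mann=0.03))
```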
Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro
2016-01-01
Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221
Block, Darci R; Algeciras-Schimnich, Alicia
2013-01-01
Requests for testing various analytes in serous fluids (e.g., pleural, peritoneal, pericardial effusions) are submitted daily to clinical laboratories. Testing of these fluids deviates from assay manufacturers' specifications, as most laboratory assays are optimized for testing blood or urine specimens. These requests add a burden to clinical laboratories, which need to validate assay performance characteristics in these fluids to exclude matrix interferences (given the different composition of body fluids) while maintaining regulatory compliance. Body fluid testing for a number of analytes has been reported in the literature; however, understanding the clinical utility of these analytes is critical because laboratories must address the analytic and clinical validation requirements, while educating clinicians on proper test utilization. In this article, we review the published data to evaluate the clinical utility of testing for numerous analytes in body fluid specimens. We also highlight the pre-analytic and analytic variables that need to be considered when reviewing published studies in body fluid testing. Finally, we provide guidance on how published studies might (or might not) guide interpretation of test results in today's clinical laboratories.
NASA Technical Reports Server (NTRS)
Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.
1983-01-01
Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the database indicates a significant improvement in predictive capability.
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
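The harmonic normal coordinate analysis step mentioned above reduces to diagonalizing a mass-weighted Hessian. The toy sketch below does this for a one-dimensional diatomic with an assumed force constant; translations and rotations are not projected out, and PyVCI's actual treatment is more complete.

```python
import numpy as np

k = 575.0            # N/m, approximate H2 bond force constant (assumed value)
m = 1.6735575e-27    # kg, mass of a hydrogen atom
c = 2.99792458e10    # speed of light in cm/s

# 1D Cartesian Hessian for two atoms on a line joined by a harmonic bond
H = np.array([[ k, -k],
              [-k,  k]])
masses = np.array([m, m])
M = np.sqrt(np.outer(masses, masses))

eigvals = np.linalg.eigvalsh(H / M)                    # mass-weighted Hessian, units s^-2
freqs_cm = np.sqrt(np.clip(eigvals, 0, None)) / (2 * np.pi * c)
print(freqs_cm)   # one ~0 mode (translation) and one stretch near 4400 cm^-1
```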
NASA Astrophysics Data System (ADS)
Hylton, L. D.; Mihelc, M. S.; Turner, E. R.; Nealy, D. A.; York, R. E.
1983-05-01
Three airfoil data sets were selected for use in evaluating currently available analytical models for predicting airfoil surface heat transfer distributions in a 2-D flow field. Two additional airfoils, representative of highly loaded, low solidity airfoils currently being designed, were selected for cascade testing at simulated engine conditions. Some 2-D analytical methods were examined and a version of the STAN5 boundary layer code was chosen for modification. The final form of the method utilized a time dependent, transonic inviscid cascade code coupled to a modified version of the STAN5 boundary layer code featuring zero order turbulence modeling. The boundary layer code is structured to accommodate a full spectrum of empirical correlations addressing the coupled influences of pressure gradient, airfoil curvature, and free-stream turbulence on airfoil surface heat transfer distribution and boundary layer transitional behavior. Comparison of predictions made with the model to the database indicates a significant improvement in predictive capability.
Aziz, Nazneen; Zhao, Qin; Bry, Lynn; Driscoll, Denise K; Funke, Birgit; Gibson, Jane S; Grody, Wayne W; Hegde, Madhuri R; Hoeltge, Gerald A; Leonard, Debra G B; Merker, Jason D; Nagarajan, Rakesh; Palicki, Linda A; Robetorye, Ryan S; Schrijver, Iris; Weck, Karen E; Voelkerding, Karl V
2015-04-01
The higher throughput and lower per-base cost of next-generation sequencing (NGS) as compared to Sanger sequencing has led to its rapid adoption in clinical testing. The number of laboratories offering NGS-based tests has also grown considerably in the past few years, despite the fact that specific Clinical Laboratory Improvement Amendments of 1988/College of American Pathologists (CAP) laboratory standards had not yet been developed to regulate this technology. The objective was to develop a checklist for clinical testing using NGS technology that sets standards for the analytic wet bench process and for bioinformatics or "dry bench" analyses. As NGS-based clinical tests are new to diagnostic testing and are of much greater complexity than traditional Sanger sequencing-based tests, there is an urgent need to develop new regulatory standards for laboratories offering these tests. To develop the necessary regulatory framework for NGS and to facilitate appropriate adoption of this technology for clinical testing, CAP formed a committee in 2011, the NGS Work Group, to deliberate upon the contents to be included in the checklist. A total of 18 laboratory accreditation checklist requirements for the analytic wet bench process and bioinformatics analysis processes have been included within CAP's molecular pathology checklist (MOL). This report describes the important issues considered by the CAP committee during the development of the new checklist requirements, which address documentation, validation, quality assurance, confirmatory testing, exception logs, monitoring of upgrades, variant interpretation and reporting, incidental findings, data storage, version traceability, and data transfer confidentiality.
Dimech, Wayne; Karakaltsas, Marina; Vincini, Giuseppe A
2018-05-25
A general trend towards conducting infectious disease serology testing in centralized laboratories means that quality control (QC) principles used for clinical chemistry testing are applied to infectious disease testing. However, no systematic assessment of methods used to establish QC limits has been applied to infectious disease serology testing. A total of 103 QC data sets, obtained from six different infectious disease serology analytes, were parsed through standard methods for establishing statistical control limits, including guidelines from Public Health England, the USA Clinical and Laboratory Standards Institute (CLSI), the German Richtlinien der Bundesärztekammer (RiliBÄK) and Australian QConnect. The percentage of QC results failing each method was compared. The number of data sets having more than 20% of QC results failing Westgard rules when the first 20 results were used to calculate the mean ± 2 standard deviations (SD) ranged from 3 (2.9%) for the R4S rule to 66 (64.1%) for the 10X rule, whereas the number ranged from 0 (0%) for R4S to 32 (40.5%) for 10X when the first 100 results were used to calculate the mean ± 2 SD. By contrast, the number of data sets with >20% of results failing the RiliBÄK control limits was 25 (24.3%). Only two data sets (1.9%) had more than 20% of results outside the QConnect Limits. These failure rates indicate that QConnect Limits are more applicable for monitoring infectious disease serology testing than the Public Health England, CLSI and RiliBÄK approaches, as the alternatives to QConnect Limits reported an unacceptably high percentage of failures across the 103 data sets.
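A minimal sketch of the kind of comparison described: control limits derived as the mean ± 2 SD of an initial baseline of QC results, with later results flagged by two simple Westgard-style rules (1-2s and 10x). The baseline length, rules, and data are illustrative and this is not the QConnect, RiliBÄK, or CLSI algorithm.

```python
import numpy as np

def qc_failure_rate(results, n_baseline=20):
    """Fraction of post-baseline QC results flagged by the 1-2s or 10x rule."""
    baseline = np.asarray(results[:n_baseline], dtype=float)
    mean, sd = baseline.mean(), baseline.std(ddof=1)
    monitored = results[n_baseline:]
    n_fail, side_run, prev_side = 0, 0, 0
    for x in monitored:
        side = 1 if x > mean else (-1 if x < mean else 0)
        side_run = side_run + 1 if (side == prev_side and side != 0) else 1
        prev_side = side
        rule_1_2s = abs(x - mean) > 2 * sd   # single result outside mean +/- 2 SD
        rule_10x = side_run >= 10            # 10 consecutive results on one side of the mean
        if rule_1_2s or rule_10x:
            n_fail += 1
    return n_fail / len(monitored) if monitored else 0.0

rng = np.random.default_rng(3)
qc = list(rng.normal(1.00, 0.05, 120))       # e.g. signal-to-cutoff ratios for one QC sample
print(f"{qc_failure_rate(qc):.1%} of monitored QC results flagged")
```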
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
42 CFR 493.17 - Test categorization.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., analytic or postanalytic phases of the testing. (2) Training and experience—(i) Score 1. (A) Minimal training is required for preanalytic, analytic and postanalytic phases of the testing process; and (B... necessary for analytic test performance. (3) Reagents and materials preparation—(i) Score 1. (A) Reagents...
The modern rotor aerodynamic limits survey: A report and data survey
NASA Technical Reports Server (NTRS)
Cross, J.; Brilla, J.; Kufeld, R.; Balough, D.
1993-01-01
The first phase of the Modern Technology Rotor Program, the Modern Rotor Aerodynamic Limits Survey, was a flight test conducted by the United States Army Aviation Engineering Flight Activity for NASA Ames Research Center. The test was performed using a United States Army UH-60A Black Hawk aircraft and the United States Air Force HH-60A Night Hawk instrumented main-rotor blade. The primary purpose of this test was to gather high-speed, steady-state, and maneuvering data suitable for correlation purposes with analytical prediction tools. All aspects of the data base, flight-test instrumentation, and test procedures are presented and analyzed. Because of the high volume of data, only select data points are presented. However, access to the entire data set is available upon request.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin ( Reln )-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
Barreda-García, Susana; Miranda-Castro, Rebeca; de-Los-Santos-Álvarez, Noemí; Miranda-Ordieres, Arturo J; Lobo-Castañón, María Jesús
2018-01-01
Highly sensitive testing of nucleic acids is essential to improve the detection of pathogens, which pose a major threat to public health worldwide. Currently available molecular assays, mainly based on PCR, have a limited utility in point-of-need control or resource-limited settings. Consequently, there is a strong interest in developing cost-effective, robust, and portable platforms for early detection of these harmful microorganisms. Since its description in 2004, isothermal helicase-dependent amplification (HDA) has been successfully applied in the development of novel molecular-based technologies for rapid, sensitive, and selective detection of viruses and bacteria. In this review, we highlight relevant analytical systems using this simple nucleic acid amplification methodology, which takes place at a constant temperature and is readily compatible with microfluidic technologies. Different strategies for monitoring HDA amplification products are described. In addition, we present technological advances for integrating sample preparation, HDA amplification, and detection. Future perspectives and challenges toward point-of-need use, not only for clinical diagnosis but also in food safety testing and environmental monitoring, are also discussed. Graphical Abstract: Expanding the analytical toolbox for the detection of DNA sequences specific to pathogens with isothermal helicase-dependent amplification (HDA).
Analytic family of post-merger template waveforms
NASA Astrophysics Data System (ADS)
Del Pozzo, Walter; Nagar, Alessandro
2017-06-01
Building on the analytical description of the post-merger (ringdown) waveform of coalescing, nonprecessing, spinning binary black holes introduced by Damour and Nagar [Phys. Rev. D 90, 024054 (2014), 10.1103/PhysRevD.90.024054], we propose an analytic, closed-form, time-domain representation of the ℓ = m = 2 gravitational radiation mode emitted after merger. This expression is given as a function of the component masses and dimensionless spins (m_{1,2}, χ_{1,2}) of the two inspiraling objects, as well as of the mass M_BH and (complex) frequency σ_1 of the fundamental quasinormal mode of the remnant black hole. Our proposed template is obtained by fitting the post-merger waveform part of several publicly available numerical relativity simulations from the Simulating eXtreme Spacetimes (SXS) catalog and then suitably interpolating over (symmetric) mass ratio and spins. We show that this analytic expression accurately reproduces (~0.01 rad) the phasing of the post-merger data of other data sets not used in its construction. This is notably the case of the spin-aligned run SXS:BBH:0305, whose intrinsic parameters are consistent with the 90% credible intervals reported in the parameter-estimation follow-up of GW150914 by B. P. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016), 10.1103/PhysRevLett.116.241102]. Using SXS waveforms as "experimental" data, we further show that our template could be used on the actual GW150914 data to perform a new measurement of the complex frequency of the fundamental quasinormal mode so as to exploit the complete (high signal-to-noise-ratio) post-merger waveform. We assess the usefulness of our proposed template by analyzing, in a realistic setting, SXS full inspiral-merger-ringdown waveforms and constructing posterior probability distribution functions for the central frequency and damping time of the first overtone of the fundamental quasinormal mode, as well as for the physical parameters of the systems. We also briefly explore the possibility opened by our waveform model to test the second law of black hole dynamics. Our model will help improve current tests of general relativity, in particular the general-relativistic no-hair theorem, and allow for novel tests, such as that of the area theorem.
Lima, Manoel J A; Fernandes, Ridvan N; Tanaka, Auro A; Reis, Boaventura F
2016-02-01
This paper describes a new technique for the determination of captopril in pharmaceutical formulations, implemented by employing multicommuted flow analysis. The analytical procedure was based on the reaction between hypochlorite and captopril. The remaining hypochlorite oxidized luminol that generated electromagnetic radiation detected using a homemade luminometer. To the best of our knowledge, this is the first time that this reaction has been exploited for the determination of captopril in pharmaceutical products, offering a clean analytical procedure with minimal reagent usage. The effectiveness of the proposed procedure was confirmed by analyzing a set of pharmaceutical formulations. Application of the paired t-test showed that there was no significant difference between the data sets at a 95% confidence level. The useful features of the new analytical procedure included a linear response for captopril concentrations in the range 20.0-150.0 µmol/L (r = 0.997), a limit of detection (3σ) of 2.0 µmol/L, a sample throughput of 164 determinations per hour, reagent consumption of 9 µg luminol and 42 µg hypochlorite per determination and generation of 0.63 mL of waste. A relative standard deviation of 1% (n = 6) for a standard solution containing 80 µmol/L captopril was also obtained. Copyright © 2015 John Wiley & Sons, Ltd.
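The paired t-test comparison mentioned above can be sketched as follows; the concentration values are illustrative stand-ins, not the paper's data.

```python
# Paired t-test: the same formulations analysed by the proposed and a reference method.
from scipy import stats

proposed  = [81.2, 79.5, 120.4, 99.8, 60.3, 140.1]   # µmol/L captopril, illustrative
reference = [80.9, 80.1, 119.8, 100.5, 59.8, 139.4]

t_stat, p_value = stats.ttest_rel(proposed, reference)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference at 95%
```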
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists of minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the method proposed is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively.
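A minimal sketch of the underlying idea, minimizing the quadratic difference between an experimental spectrum and an analytical model by optimizing its parameters. The Gaussian-peaks-plus-exponential-background model and all numbers are illustrative simplifications; POEMA's physical spectrum model is far more detailed.

```python
import numpy as np
from scipy.optimize import least_squares

energies = np.linspace(0.5, 10.0, 500)                # keV

def model(params, e):
    """Toy spectrum: exponential background plus two Gaussian characteristic lines."""
    a, b, c1, mu1, c2, mu2, sigma = params
    background = a * np.exp(-b * e)
    peaks = c1 * np.exp(-(e - mu1) ** 2 / (2 * sigma ** 2)) + \
            c2 * np.exp(-(e - mu2) ** 2 / (2 * sigma ** 2))
    return background + peaks

# synthetic "experimental" spectrum with Poisson counting noise (illustrative lines
# near the Si K and Fe K energies)
true = [500, 0.3, 800, 1.74, 300, 6.40, 0.07]
observed = np.random.default_rng(0).poisson(model(true, energies))

fit = least_squares(lambda p: model(p, energies) - observed,
                    x0=[400, 0.2, 500, 1.7, 200, 6.4, 0.1])
print(fit.x)   # fitted peak intensities relate to elemental concentrations
```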
Spackman, Erica; Ip, Hon S.; Suarez, D.L.; Slemons, R.D.; Stallknecht, D.E.
2008-01-01
A real-time reverse transcription polymerase chain reaction test for the identification of the H7 subtype in North American avian influenza viruses (AIVs) was first reported in 2002; however, recent AIV surveillance efforts in wild birds and H7 outbreaks in poultry demonstrated that the 2002 test did not detect all H7 AIVs present in North and South America. Therefore, a new test, the 2008 Pan-American H7 test, was developed by using recently available H7 nucleotide sequences. The analytical specificity of the new assay was characterized with an RNA panel composed of 19 H7 viruses from around the world and RNA from all hemagglutinin subtypes except H16. Specificity for North and South American lineage H7 viruses was observed. Assay limits of detection were determined to be between 10^3 and 10^4 gene copies per reaction with in vitro transcribed RNA, and between 10^0.0 and 10^0.8 50% egg infectious doses per reaction. The 2008 Pan-American H7 test also was shown to perform similarly to the 2002 test with specimens from chickens experimentally exposed to A/Chicken/British Columbia/314514-2/04 H7N3 highly pathogenic AIV. Furthermore, the 2008 test was able to detect 100% (n = 27) of the H7 AIV isolates recovered from North American wild birds in a 2006-2007 sample set (none of which were detected by the 2002 H7 test).
ERIC Educational Resources Information Center
Wise, Alyssa Friend; Vytasek, Jovita Maria; Hausknecht, Simone; Zhao, Yuting
2016-01-01
This paper addresses a relatively unexplored area in the field of learning analytics: how analytics are taken up and used as part of teaching and learning processes. Initial steps are taken towards developing design knowledge for this "middle space," with a focus on students as analytics users. First, a core set of challenges for…
Development of a research ethics knowledge and analytical skills assessment tool.
Taylor, Holly A; Kass, Nancy E; Ali, Joseph; Sisson, Stephen; Bertram, Amanda; Bhan, Anant
2012-04-01
The goal of this project was to develop and validate a new tool to evaluate learners' knowledge and skills related to research ethics. A core set of 50 questions from existing computer-based online teaching modules were identified, refined and supplemented to create a set of 74 multiple-choice, true/false and short answer questions. The questions were pilot-tested and item discrimination was calculated for each question. Poorly performing items were eliminated or refined. Two comparable assessment tools were created. These assessment tools were administered as a pre-test and post-test to a cohort of 58 Indian junior health research investigators before and after exposure to a new course on research ethics. Half of the investigators were exposed to the course online, the other half in person. Item discrimination was calculated for each question and Cronbach's α for each assessment tool. A final version of the assessment tool that incorporated the best questions from the pre-/post-test phase was used to assess retention of research ethics knowledge and skills 3 months after course delivery. The final version of the REKASA includes 41 items and had a Cronbach's α of 0.837. The results illustrate, in one sample of learners, the successful, systematic development and use of a knowledge and skills assessment tool in research ethics capable of not only measuring basic knowledge in research ethics and oversight but also assessing learners' ability to apply ethics knowledge to the analytical task of reasoning through research ethics cases, without reliance on essay or discussion-based examination. These promising preliminary findings should be confirmed with additional groups of learners.
Niosh analytical methods for Set G
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1976-12-01
Industrial Hygiene sampling and analytical monitoring methods validated under the joint NIOSH/OSHA Standards Completion Program for Set G are contained herein. Monitoring methods for the following compounds are included: butadiene, heptane, ketene, methyl cyclohexane, octachloronaphthalene, pentachloronaphthalene, petroleum distillates, propylene dichloride, turpentine, dioxane, hexane, LPG, naphtha (coal tar), octane, pentane, propane, and Stoddard solvent.
Preliminary Evaluation of MapReduce for High-Performance Climate Data Analysis
NASA Technical Reports Server (NTRS)
Duffy, Daniel Q.; Schnase, John L.; Thompson, John H.; Freeman, Shawn M.; Clune, Thomas L.
2012-01-01
MapReduce is an approach to high-performance analytics that may be useful for data-intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. We are particularly interested in the potential of MapReduce to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we are prototyping a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. Our initial focus has been on averaging operations over arbitrary spatial and temporal extents within Modern Era Retrospective-Analysis for Research and Applications (MERRA) data. Preliminary results suggest this approach can improve efficiencies within data-intensive analytic workflows.
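A minimal sketch of the canonical averaging operation in map/reduce style: the map phase emits (key, (sum, count)) pairs for records inside a spatial box and the reduce phase combines partial sums into means. The record layout is an illustrative placeholder, not the MERRA file format or the actual prototype.

```python
from collections import defaultdict

records = [
    # (year, month, lat, lon, temperature_K) -- illustrative records
    (2010, 1, 40.0, -75.0, 272.1),
    (2010, 1, 41.0, -74.0, 271.4),
    (2010, 2, 40.5, -75.5, 275.9),
]

def map_phase(recs, lat_range=(35, 45), lon_range=(-80, -70)):
    """Emit ((year, month), (value, 1)) for records inside the requested box."""
    for year, month, lat, lon, value in recs:
        if lat_range[0] <= lat <= lat_range[1] and lon_range[0] <= lon <= lon_range[1]:
            yield (year, month), (value, 1)

def reduce_phase(pairs):
    """Combine partial (sum, count) pairs per key into a mean."""
    acc = defaultdict(lambda: [0.0, 0])
    for key, (s, c) in pairs:
        acc[key][0] += s
        acc[key][1] += c
    return {key: s / c for key, (s, c) in acc.items()}

print(reduce_phase(map_phase(records)))   # spatial mean per (year, month)
```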
Helios: Understanding Solar Evolution Through Text Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Randazzese, Lucien
This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or "breakthroughs," in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.
A Dynamic Calibration Method for Experimental and Analytical Hub Load Comparison
2017-03-01
minimal differences were noted. As discussed above, a "dummy" four-bladed hub was fabricated to permit application of shaker loads to the ARES testbed ... experimental data used for comparison was from wind-tunnel testing of a set of Active-Twist Rotor (ATR) blades, which had undergone extensive bench ... experimental measurements, one low-speed and the other high-speed. Although these blades are capable of actively twisting during flight, in both of these
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions, variance homogeneity and normality, that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transforming only the response variable is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
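A minimal sketch of one way to apply the Box-Cox idea in a nonlinear concentration-response fit: estimate a transformation from the responses and fit the model on the transformed scale. The data, the log-logistic model, and the two-step handling of lambda are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

conc     = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)     # mg/L, illustrative
response = np.array([95, 90, 75, 50, 20, 8, 3], dtype=float)    # e.g. frond counts

y_bc, lam = stats.boxcox(response)            # data-driven Box-Cox transformation

def transformed_loglogistic(x, lower, upper, ec50, slope):
    """Log-logistic prediction, mapped through the same Box-Cox transform as the data."""
    pred = lower + (upper - lower) / (1.0 + (x / ec50) ** slope)
    return stats.boxcox(pred, lmbda=lam)

popt, _ = curve_fit(transformed_loglogistic, conc, y_bc,
                    p0=[1.0, 100.0, 4.0, 2.0],
                    bounds=([0.1, 50.0, 0.1, 0.1], [50.0, 200.0, 50.0, 10.0]))
print(f"lambda = {lam:.2f}, EC50 = {popt[2]:.2f} mg/L")
```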
Park, Jin-A; Abd El-Aty, A M; Zheng, Weijia; Kim, Seong-Kwan; Choi, Jeong-Min; Hacımüftüoğlu, Ahmet; Shim, Jae-Han; Shin, Ho-Chul
2018-06-01
In this work, a method was developed for the simultaneous determination of residual metoserpate, buquinolate and diclofenac in pork, milk, and eggs. Samples were extracted with 0.1% formic acid in acetonitrile, defatted with n-hexane, and filtered prior to analysis using liquid chromatography-tandem mass spectrometry. The analytes were separated on a C18 column using 0.1% acetic acid and methanol as the mobile phase. The matrix-matched calibration curves showed good linearity over a concentration range of 5-50 ng/g with coefficients of determination (R²) ≥ 0.991. The intra- and inter-day accuracies (expressed as recovery percentage values) calculated using three spiking levels (5, 10, and 20 μg/kg) were 80-108.65 and 74.06-107.15%, respectively, and the precisions (expressed as relative standard deviation) were 2.86-13.67 and 0.05-11.74%, respectively, for the tested drugs determined in various matrices. The limits of quantification (1 and 2 μg/kg) were below the uniform residual level (0.01 mg/kg) set for compounds that have no specific maximum residue limit (MRL). The developed method was tested using market samples and none of the target analytes was detected in any of the samples. The validated method proved to be practicable for detection of the tested analytes in pork, milk, and eggs. Copyright © 2018 John Wiley & Sons, Ltd.
No need for external orthogonality in subsystem density-functional theory.
Unsleber, Jan P; Neugebauer, Johannes; Jacob, Christoph R
2016-08-03
Recent reports on the necessity of using externally orthogonal orbitals in subsystem density-functional theory (SDFT) [Annu. Rep. Comput. Chem., 8, 2012, 53; J. Phys. Chem. A, 118, 2014, 9182] are re-investigated. We show that in the basis-set limit, supermolecular Kohn-Sham-DFT (KS-DFT) densities can be represented exactly as a sum of subsystem densities, even if the subsystem orbitals are not externally orthogonal. This is illustrated using both an analytical example and basis-set-free numerical calculations for an atomic test case. We further show that even with finite basis sets, SDFT calculations using accurate reconstructed potentials can closely approach the supermolecular KS-DFT density, and that the deviations between SDFT and KS-DFT decrease as the basis-set limit is approached. Our results demonstrate that formally, there is no need to enforce external orthogonality in SDFT, even though this might be a useful strategy when developing projection-based DFT embedding schemes.
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: (i) free Brownian motion of the tracer; (ii) hop diffusion of the tracer in a periodic meshwork of squares; and (iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
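A minimal sketch of the simulation-based testing idea for the simplest of the three situations (free Brownian motion): step lengths are simulated on a grid of candidate diffusion coefficients and each setting is scored against the measured steps with a nonparametric two-sample test. The grid, the Kolmogorov-Smirnov test, and all numbers are illustrative choices, not the authors' exact procedure.

```python
# Hedged sketch: p-values over a parameter grid from Monte Carlo simulations
# of free Brownian motion, compared to "measured" steps by a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
dt = 0.01                                      # s, time lag between observations
D_true = 0.25                                  # um^2/s, stand-in for the experimental truth

def step_lengths(D, n):
    # 2D Brownian steps: each coordinate ~ N(0, sqrt(2*D*dt))
    dxy = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 2))
    return np.hypot(dxy[:, 0], dxy[:, 1])

experimental = step_lengths(D_true, 2000)      # stand-in for measured trajectories

D_grid = np.linspace(0.05, 0.6, 23)            # candidate diffusion coefficients
p_values = np.array([ks_2samp(experimental, step_lengths(D, 2000)).pvalue
                     for D in D_grid])

best = D_grid[np.argmax(p_values)]
print(f"most consistent D ~ {best:.2f} um^2/s")
```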
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2017-01-01
Two new, simple, and specific green analytical methods are proposed: zero-crossing first-derivative and chemometric-based spectrophotometric artificial neural network (ANN). The proposed methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, coenzyme Q10 (Q10) and vitamin E, in their mixtures and pharmaceutical preparations. The first method is based on the handling of spectrophotometric data with the first-derivative technique, in which both nutraceuticals were determined in ethanol, each at the zero crossing of the other. The amplitudes of the first-derivative spectra for Q10 and vitamin E were recorded at 285 and 235 nm, respectively, and correlated with their concentrations. The linearity ranges of Q10 and vitamin E were 10-60 and 5.6-70 μg·mL⁻¹, respectively. The second method, ANN, is a multivariate calibration method and it was developed and applied for the simultaneous determination of both analytes. A training set of 90 different synthetic mixtures containing Q10 and vitamin E in the ranges of 0-100 and 0-556 μg·mL⁻¹, respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230-300 nm. By relating the concentration sets (x-block) with their corresponding absorption data (y-block), gradient-descent back-propagation ANN calibration could be computed. To validate the proposed network, a set of 45 synthetic mixtures of the two drugs was used. Both proposed methods were successfully applied for the assay of Q10 and vitamin E in their laboratory-prepared mixtures and in their pharmaceutical tablets with excellent recovery. These methods offer advantages over other methods because of low-cost equipment, time-saving measures, and environmentally friendly materials. In addition, no chemical separation prior to analysis was needed. The ANN method was superior to the derivative technique because ANN can determine both drugs under nonlinear experimental conditions. Consequently, ANN would be the method of choice in the routine analysis of Q10 and vitamin E tablets. No interference from common pharmaceutical additives was observed. Student's t-test and the F-test were used to compare the two methods. No significant difference was recorded.
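A minimal sketch of ANN multivariate calibration in the same spirit, using synthetic two-analyte mixture spectra; the band positions, the added nonlinearity, and the network settings are assumptions for illustration, not the published method.

```python
# Hedged sketch: neural-network calibration of two analytes from mixture spectra.
# Synthetic Gaussian bands with a mild nonlinearity stand in for real absorbance data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
wl = np.linspace(230, 300, 71)                       # nm, spectral axis
band_a = np.exp(-0.5 * ((wl - 285) / 8) ** 2)        # pseudo analyte-A band
band_b = np.exp(-0.5 * ((wl - 235) / 8) ** 2)        # pseudo analyte-B band

n = 135
conc = rng.uniform([0, 0], [100, 556], size=(n, 2))  # illustrative concentration ranges
A = np.outer(conc[:, 0], band_a) / 100 + np.outer(conc[:, 1], band_b) / 556
A = A + 0.05 * A ** 2 + rng.normal(0, 0.002, A.shape)  # mild nonlinearity plus noise

X_tr, X_te, y_tr, y_te = train_test_split(A, conc, test_size=45, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("validation R^2:", round(ann.score(X_te, y_te), 3))
```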
High resolution x-ray CMT: Reconstruction methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, J.K.
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
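A minimal sketch of the iterative idea in its simplest algebraic form (a Kaczmarz/ART sweep over a toy system p = Ax); the system matrix and sizes are illustrative, not a CT geometry.

```python
# Hedged sketch: the imaging process is modelled as p = A x and the image
# estimate is refined row by row (Kaczmarz / ART). Toy matrix, not a CT code.
import numpy as np

rng = np.random.default_rng(3)
x_true = rng.random(16)            # "image" of 16 unknowns (4x4 phantom, flattened)
A = rng.random((40, 16))           # toy projection model (rows = ray sums)
p = A @ x_true                     # measured projections

x = np.zeros(16)
for sweep in range(50):            # iterate to improve the estimate
    for a_i, p_i in zip(A, p):
        x += (p_i - a_i @ x) / (a_i @ a_i) * a_i

print("max reconstruction error:", np.abs(x - x_true).max())
```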
Thinking within the box: The relational processing style elicited by counterfactual mind-sets.
Kray, Laura J; Galinsky, Adam D; Wong, Elaine M
2006-07-01
By comparing reality to what might have been, counterfactuals promote a relational processing style characterized by a tendency to consider relationships and associations among a set of stimuli. As such, counterfactual mind-sets were expected to improve performance on tasks involving the consideration of relationships and associations but to impair performance on tasks requiring novel ideas that are uninfluenced by salient associations. The authors conducted several experiments to test this hypothesis. In Experiments 1a and 1b, the authors determined that counterfactual mind-sets increase mental states and preferences for thinking styles consistent with relational thought. Experiment 2 demonstrated a facilitative effect of counterfactual mind-sets on an analytic task involving logical relationships; Experiments 3 and 4 demonstrated that counterfactual mind-sets structure thought and imagination around salient associations and therefore impaired performance on creative generation tasks. In Experiment 5, the authors demonstrated that the detrimental effect of counterfactual mind-sets is limited to creative tasks involving novel idea generation; in a creative association task involving the consideration of relationships between task stimuli, counterfactual mind-sets improved performance. Copyright 2006 APA, all rights reserved.
42 CFR 493.859 - Standard; ABO group and D (Rho) typing.
Code of Federal Regulations, 2013 CFR
2013-10-01
... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...
42 CFR 493.859 - Standard; ABO group and D (Rho) typing.
Code of Federal Regulations, 2012 CFR
2012-10-01
... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...
42 CFR 493.859 - Standard; ABO group and D (Rho) typing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... attain a score of at least 100 percent of acceptable responses for each analyte or test in each testing event is unsatisfactory analyte performance for the testing event. (b) Failure to attain an overall.... (2) For any unacceptable analyte or unsatisfactory testing event score, remedial action must be taken...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parfenov, O.G.
1994-12-25
We discuss three results. The first exhibits the order of decrease of the s-values as a function of the CR-dimension of a compact set on which we approximate the class of analytic functions being studied. The second is an asymptotic formula for the case when the domain of analyticity and the compact set are Reinhardt domains. The third is the computation of the s-values of a special operator that is of interest for approximation theory on one-dimensional manifolds.
Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H
2013-02-05
An essential part of calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
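A minimal sketch of the general idea, under stated assumptions: the regression vector is asked to respond to the analyte pure-component spectrum while staying nearly orthogonal to nonanalyte spectra, with ridge-type shrinkage. The augmented least-squares stacking and the two weights below are illustrative choices, not the published PCTR equations.

```python
# Hedged sketch: calibration from a pure-component spectrum plus nonanalyte spectra.
# We ask s^T b ~ 1 (respond to the analyte) and N b ~ 0 (ignore nonanalyte content),
# with a small ridge penalty; the stacking and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
nwl = 120
s = np.exp(-0.5 * ((np.arange(nwl) - 60) / 6) ** 2)              # analyte pure spectrum
N = np.array([np.exp(-0.5 * ((np.arange(nwl) - c) / 10) ** 2)
              for c in (20, 45, 95)])                             # nonanalyte spectra

lam, eta = 10.0, 0.1                                              # trade-off weights
A = np.vstack([s, np.sqrt(lam) * N, np.sqrt(eta) * np.eye(nwl)])
y = np.concatenate([[1.0], np.zeros(N.shape[0] + nwl)])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Prediction for a mixture containing the analyte at level 0.3 plus interferents.
mix = 0.3 * s + 0.8 * N[0] + 0.5 * N[2] + rng.normal(0, 0.001, nwl)
print("prediction for analyte level 0.3:", round(float(mix @ b), 3))
```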
On precisely modelling surface deformation due to interacting magma chambers and dykes
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jurgen; Rivalta, Eleonora
2014-01-01
Combined data sets of InSAR and GPS allow us to observe surface deformation in volcanic settings. However, at the vast majority of volcanoes, a detailed 3-D structure that could guide the modelling of deformation sources is not available, due to the lack of tomography studies, for example. Therefore, volcano ground deformation due to magma movement in the subsurface is commonly modelled using simple point (Mogi) or dislocation (Okada) sources, embedded in a homogeneous, isotropic and elastic half-space. When data sets are too complex to be explained by a single deformation source, the magmatic system is often represented by a combination of these sources and their displacement fields are simply summed. By doing so, the assumption of homogeneity in the half-space is violated and the resulting interaction between sources is neglected. We have quantified the errors of such a simplification and investigated the limits in which the combination of analytical sources is justified. We have calculated the vertical and horizontal displacements for analytical models with adjacent deformation sources and have tested them against the solutions of corresponding 3-D finite element models, which account for the interaction between sources. We have tested various double-source configurations with either two spherical sources representing magma chambers, or a magma chamber and an adjacent dyke, modelled by a rectangular tensile dislocation or pressurized crack. For a tensile Okada source (representing an opening dyke) aligned or superposed to a Mogi source (magma chamber), we find the discrepancies with the numerical models to be insignificant (<5 per cent) independently of the source separation. However, if a Mogi source is placed side by side with an Okada source (in the strike-perpendicular direction), we find the discrepancies to become significant for a source separation less than four times the radius of the magma chamber. For horizontally or vertically aligned pressurized sources, the discrepancies are up to 20 per cent, which translates into surprisingly large errors when inverting deformation data for source parameters such as depth and volume change. Beyond 8 radii, however, we demonstrate that the summation of analytical sources represents adjacent magma chambers correctly.
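A minimal sketch of the simplification the paper quantifies: vertical surface displacements of two point-pressure (Mogi) sources are computed from one common form of the analytical solution and simply summed. Parameter values are illustrative, and the source interaction discussed in the paper is, by construction, absent.

```python
# Hedged sketch (not the paper's models): one common form of the Mogi point-source
# solution for vertical surface displacement, with two sources simply summed.
# Parameter values are illustrative assumptions.
import numpy as np

def mogi_uz(r, depth, dP_a3, mu=3e9, nu=0.25):
    """Vertical displacement of a point pressure source; dP_a3 = ΔP * a^3 (Pa·m^3)."""
    return (1 - nu) * (dP_a3 / mu) * depth / (r**2 + depth**2) ** 1.5

x = np.linspace(-10e3, 10e3, 401)                      # surface profile, metres
# chamber 1 below x = 0 km at 4 km depth; chamber 2 below x = 3 km at 2 km depth
uz = mogi_uz(np.abs(x - 0.0), 4e3, 1e16) + mogi_uz(np.abs(x - 3e3), 2e3, 3e15)
print("peak uplift along the profile (m):", round(float(uz.max()), 3))
```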
ERIC Educational Resources Information Center
McDougall, Dennis; Ornelles, Cecily; Mersberg, Kawika; Amona, Kekama
2015-01-01
In this meta-analytic review, we critically evaluate procedures and outcomes from nine intervention studies in which students used tactile-cued self-monitoring in educational settings. Findings suggest that most tactile-cued self-monitoring interventions have moderate to strong effects, have emerged only recently, and have not yet achieved the…
Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.
Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli
2018-03-13
The pre-pre-analytical and pre-analytical phases account for a major share of the errors in a laboratory. The process considered here is a very common procedure, the oral glucose tolerance test, used to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability, and help in the decision making of laboratory personnel. The aim of this research is to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. This observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested knowledge of patient preparation. The quality indicator for appropriateness of the test result (QI-1) showed the most errors. Although QI-5 for sample collection had a low error rate, it is a very important indicator, as any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely, on a yearly basis, to identify errors, take corrective action, and facilitate the gradual introduction of such evaluations into routine practice.
NASA Astrophysics Data System (ADS)
Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.
2017-12-01
NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) A full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations. The operations are made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) A cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables. This near real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) A WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following: - New API that supports full temporal, spatial, and grid-based resolution services with sample queries - A Docker-ready RES application to deploy across platforms - Extended capabilities that enable single- and multiple-reanalysis area average, vertical average, re-gridding, standard deviation, and ensemble averages - Convenient, one-stop shopping for commonly used data products from multiple reanalyses including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly) - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55, and NOAA/ESRL 20CR… - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management - Supporting analytic services for NASA GMAO Forward Processing datasets - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g., reanalysis, observational, visualization) - The ability to compute and visualize multiple reanalyses for ease of intercomparison - Automated tools to retrieve and prepare data collections for analytic processing
Using business analytics to improve outcomes.
Rivera, Jose; Delaney, Stephen
2015-02-01
Orlando Health has brought its hospital and physician practice revenue cycle systems into better balance using four sets of customized analytics: Physician performance analytics gauge the total net revenue for every employed physician. Patient-pay analytics provide financial risk scores for all patients on both the hospital and physician practice sides. Revenue management analytics bridge the gap between the back-end central business office and front-end physician practice managers and administrators. Enterprise management analytics allow the hospitals and physician practices to share important information about common patients.
NASA Technical Reports Server (NTRS)
Smith, D. R.; Leslie, F. W.
1984-01-01
The Purdue Regional Objective Analysis of the Mesoscale (PROAM) is a successive correction type scheme for the analysis of surface meteorological data. The scheme is subjected to a series of experiments to evaluate its performance under a variety of analysis conditions. The tests include use of a known analytic temperature distribution to quantify error bounds for the scheme. Similar experiments were conducted using actual atmospheric data. Results indicate that the multiple pass technique increases the accuracy of the analysis. Furthermore, the tests suggest appropriate values for the analysis parameters in resolving disturbances for the data set used in this investigation.
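A minimal sketch of a successive-correction objective analysis, assuming generic Cressman-type weights and shrinking influence radii rather than PROAM's actual parameters.

```python
# Hedged sketch: grid values are repeatedly corrected by distance-weighted
# observation increments, with the influence radius shrinking each pass.
# Weights, radii, and nearest-grid interpolation are generic illustrative choices.
import numpy as np

rng = np.random.default_rng(5)
xg, yg = np.meshgrid(np.linspace(0, 100, 51), np.linspace(0, 100, 51))
truth = 20 + 0.1 * xg - 0.05 * yg                        # analytic "temperature" field
xo, yo = rng.uniform(0, 100, 80), rng.uniform(0, 100, 80)
obs = 20 + 0.1 * xo - 0.05 * yo + rng.normal(0, 0.2, 80)

analysis = np.full_like(xg, obs.mean())                  # first guess
for R in (40.0, 25.0, 15.0):                             # shrinking influence radius
    ia = np.clip(np.rint(yo / 2).astype(int), 0, 50)     # nearest grid row (spacing = 2)
    ja = np.clip(np.rint(xo / 2).astype(int), 0, 50)     # nearest grid column
    incr = obs - analysis[ia, ja]                        # observation increments
    d2 = (xg[..., None] - xo) ** 2 + (yg[..., None] - yo) ** 2
    w = np.clip((R**2 - d2) / (R**2 + d2), 0, None)      # Cressman weight, zero outside R
    analysis += (w * incr).sum(-1) / np.maximum(w.sum(-1), 1e-12)

print("RMS error of analysis:", round(float(np.sqrt(((analysis - truth) ** 2).mean())), 3))
```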
Experimental operation of a sodium heat pipe
NASA Astrophysics Data System (ADS)
Holtz, R. E.; McLennan, G. A.; Koehl, E. R.
1985-05-01
This report documents the operation of a 28 in. long sodium heat pipe in the Heat Pipe Test Facility (HPTF) installed at Argonne National Laboratory. Experimental data were collected to simulate conditions prototypic of both a fluidized bed coal combustor application and a space environment application. Both sets of experimental data show good agreement with the heat pipe analytical model. The heat transfer performance of the heat pipe proved reliable over a substantial period of operation and repeated thermal cycling. Additional testing of longer heat pipes under controlled laboratory conditions will be necessary to determine performance limitations and to complete the design code validation.
An Analysis of Rocket Propulsion Testing Costs
NASA Technical Reports Server (NTRS)
Ramirez-Pagan, Carmen P.; Rahman, Shamim A.
2009-01-01
The primary mission at NASA Stennis Space Center (SSC) is rocket propulsion testing. Such testing is generally performed within two arenas: (1) Production testing for certification and acceptance, and (2) Developmental testing for prototype or experimental purposes. The customer base consists of NASA programs, DOD programs, and commercial programs. Resources in place to perform on-site testing include both civil servants and contractor personnel, hardware and software including data acquisition and control, and 6 test stands with a total of 14 test positions/cells. For several business reasons there is a need to augment understanding of the test costs for all the various types of test campaigns. Historical propulsion test data were evaluated and analyzed in many different ways with the intent to find any correlation or statistics that could help produce more reliable and accurate cost estimates and projections. The analytical efforts included timeline trends, statistical curve fitting, average cost per test, cost per test second, test cost timeline, and test cost envelopes. Further, the analytical effort included examining the test cost from the perspective of thrust level and test article characteristics. Some of the analytical approaches did not produce evidence strong enough for further analysis. Other analytical approaches yielded promising results and are candidates for further development and focused study. Information was organized into three elements: a Project Profile, a Test Cost Timeline, and a Cost Envelope. The Project Profile is a snapshot of the project life cycle in timeline fashion, which includes various statistical analyses. The Test Cost Timeline shows the cumulative average test cost, for each project, at each month where there was test activity. The Test Cost Envelope shows a range of cost for a given number of test(s). The supporting information upon which this study was performed came from diverse sources, and thus it was necessary to build several intermediate databases in order to understand, validate, and manipulate the data. These intermediate databases (validated historical accounts of schedule, test activity, and cost) are by themselves of great value and utility. For example, for the Project Profile, we were able to merge schedule, cost, and test activity. This kind of historical account conveys important information about the sequence of events, lead time, and opportunities for improvement in future propulsion test projects. The Product Requirement Document (PRD) file is a collection of data extracted from each project PRD (technical characteristics, test requirements, and projections of cost, schedule, and test activity). This information could help expedite the development of future PRDs (or equivalent documents) on similar projects and could also, when compared to the actual results, help improve projections of cost and schedule. Also, this file can be sorted by the parameter of interest to perform a visual review of potential common themes or trends. The process of searching, collecting, and validating propulsion test data encountered many difficulties, which led to a set of recommendations for improvement to facilitate future data gathering and analysis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROCEDURES Equipment, Fuel, and Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and analytical gases. This section addresses emission sampling and analytical equipment, test fuels, and analytical gases. (b) The provisions of 40 CFR part 1065...
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROCEDURES Equipment, Fuel, and Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and analytical gases. This section addresses emission sampling and analytical equipment, test fuels, and analytical gases. (b) The provisions of 40 CFR part 1065...
Analytical solutions for efficient interpretation of single-well push-pull tracer tests
Single-well push-pull tracer tests have been used to characterize the extent, fate, and transport of subsurface contamination. Analytical solutions provide one alternative for interpreting test results. In this work, an exact analytical solution to two-dimensional equations descr...
Proxy-SU(3) symmetry in heavy deformed nuclei
NASA Astrophysics Data System (ADS)
Bonatsos, Dennis; Assimakis, I. E.; Minkov, N.; Martinou, Andriana; Cakirli, R. B.; Casten, R. F.; Blaum, K.
2017-06-01
Background: Microscopic calculations of heavy nuclei face considerable difficulties due to the sizes of the matrices that need to be solved. Various approximation schemes have been invoked, for example by truncating the spaces, imposing seniority limits, or appealing to various symmetry schemes such as pseudo-SU(3). This paper proposes a new symmetry scheme also based on SU(3). This proxy-SU(3) can be applied to well-deformed nuclei, is simple to use, and can yield analytic predictions. Purpose: To present the new scheme and its microscopic motivation, and to test it using a Nilsson model calculation with the original shell model orbits and with the new proxy set. Method: We invoke an approximate, analytic, treatment of the Nilsson model, that allows the above vetting and yet is also transparent in understanding the approximations involved in the new proxy-SU(3). Results: It is found that the new scheme yields a Nilsson diagram for well-deformed nuclei that is very close to the original Nilsson diagram. The specific levels of approximation in the new scheme are also shown, for each major shell. Conclusions: The new proxy-SU(3) scheme is a good approximation to the full set of orbits in a major shell. Being able to replace a complex shell model calculation with a symmetry-based description now opens up the possibility to predict many properties of nuclei analytically and often in a parameter-free way. The new scheme works best for heavier nuclei, precisely where full microscopic calculations are most challenged. Some cases in which the new scheme can be used, often analytically, to make specific predictions, are shown in a subsequent paper.
Analytic Methods Used in Quality Control in a Compounding Pharmacy.
Allen, Loyd V
2017-01-01
Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force B cos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_{2n−1} cos[(2n−1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then letting the coefficients of the resulting lowest-order harmonic be zero, one can obtain approximate coefficients of ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome this difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. With this new method, the iteration procedure starts with a₁cos(ωx) + b₁sin(ωx), and the accuracy may be improved gradually as the new coefficients a₃, a₅, … are produced automatically in a one-by-one manner. At every stage of the calculation, we need only solve a cubic equation. Using this new algorithm, we have developed a Mathematica program, which demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and it efficiently produces a highly accurate solution with an analytical expression. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of parameters, it can have two modes with complex displacement and one with real displacement; in other cases, it can have three modes, all with real displacement. Therefore, we can divide the parameters into two classes according to the solution property: those for which there is only one mode with real displacement, and those for which there are three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator and can provide a high-accuracy approximate analytical solution for testing the error behavior of newly developed numerical methods over a wide range of parameters.
Program summary: Title of program: AnalyDuffing.nb; Catalogue identifier: ADWR_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0; Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; Licensing provisions: none; Computer for which the program is designed and others on which it has been tested: designed for and tested on a microcomputer; Computers: IBM PC; Operating systems under which the program has been tested: Windows XP; Programming language used: Mathematica 4.2, 5.0 and 5.1; No. of lines in distributed program, including test data, etc.: 23 663; No. of bytes in distributed program, including test data, etc.: 152 321; Distribution format: tar.gz; Memory required to execute with typical data: 51 712 bytes; No. of processors used: 1; Has the code been vectorized?: no; Peripherals used: none; Program Library subprograms used: none. Nature of physical problem: To find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with a periodic driving force when the fundamental frequency is identical to that of the driving force. Method of solution: In the frame of the general HB method, a new iteration algorithm is used to calculate the coefficients of the Fourier series, giving an approximate analytical solution with high accuracy and efficiency. Restrictions on the complexity of the problem: For problems with a large driving frequency, convergence may be somewhat slow because more iterations are needed. Typical running time: several seconds. Unusual features of the program: For an undamped Duffing equation, it can provide all the solutions or oscillation modes with real displacement for any parameters of interest, to the required accuracy, efficiently. The program can be used to study the dynamically periodic behavior of a nonlinear oscillator, and can provide a high-accuracy approximate analytical solution for developing high-accuracy numerical methods.
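A minimal sketch of the lowest-order harmonic-balance step described above: substituting y ≈ a·cos(ωx) into y'' + c1·y + c3·y³ = B·cos(ωx) and balancing the cos(ωx) terms leaves the cubic (3/4)·c3·a³ + (c1 − ω²)·a − B = 0. The parameter values are illustrative, and this is only the first step of the iterative scheme, not the AnalyDuffing.nb algorithm itself.

```python
# Hedged sketch: first harmonic-balance step for the undamped driven Duffing
# equation y'' + c1*y + c3*y^3 = B*cos(w*x); balancing cos(w*x) terms gives a
# cubic in the amplitude a. Parameters are illustrative only.
import numpy as np

c1, c3, B, omega = 1.0, 1.0, 0.5, 1.5
roots = np.roots([0.75 * c3, 0.0, c1 - omega**2, -B])    # (3/4)c3 a^3 + (c1-w^2) a - B = 0
real_amps = roots[np.abs(roots.imag) < 1e-9].real
print("real first-harmonic amplitudes a:", np.round(real_amps, 4))
```

With these illustrative parameters the cubic has three real roots, which is one way to picture the three oscillation modes mentioned in the abstract.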
DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROTECTION
A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...
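A minimal sketch of the superposition idea behind an analytic element model: the discharge potential of regional uniform flow plus pumping wells is a sum of closed-form terms. The aquifer values and well positions are illustrative, and this is the textbook construction, not the WhAEM code.

```python
# Hedged sketch: superposition of closed-form analytic elements (uniform flow
# plus logarithmic well terms) to evaluate heads in a simple confined aquifer.
# All values are illustrative assumptions.
import numpy as np

T = 500.0                                    # transmissivity, m^2/day
q0 = 0.002                                   # regional uniform flow, m^2/day per m width
wells = [((3.0, 7.0), 800.0), ((311.0, 152.0), 400.0)]   # (location, pumping rate m^3/day)

def potential(x, y):
    # uniform flow in +x plus a logarithmic element per well
    phi = -q0 * x
    for (xw, yw), Q in wells:
        r = np.hypot(x - xw, y - yw)
        phi += Q / (2 * np.pi) * np.log(r)
    return phi

# head relative to a reference point, via h = Φ / T
x, y = np.meshgrid(np.linspace(-1000, 1000, 201), np.linspace(-1000, 1000, 201))
head = (potential(x, y) - potential(2000.0, 0.0)) / T
print("head range (m):", round(float(head.min()), 2), "to", round(float(head.max()), 2))
```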
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2010 CFR
2010-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2011 CFR
2011-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
10 CFR 26.168 - Blind performance testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... analyte and must be certified by immunoassay and confirmatory testing; (2) Drug positive. These samples must contain a measurable amount of the target drug or analyte in concentrations ranging between 150... performance test sample must contain a measurable amount of the target drug or analyte in concentrations...
Measurement of latent cognitive abilities involved in concept identification learning.
Thomas, Michael L; Brown, Gregory G; Gur, Ruben C; Moore, Tyler M; Patt, Virginie M; Nock, Matthew K; Naifeh, James A; Heeringa, Steven; Ursano, Robert J; Stein, Murray B
2015-01-01
We used cognitive and psychometric modeling techniques to evaluate the construct validity and measurement precision of latent cognitive abilities measured by a test of concept identification learning: the Penn Conditional Exclusion Test (PCET). Item response theory parameters were embedded within classic associative- and hypothesis-based Markov learning models and were fitted to 35,553 Army soldiers' PCET data from the Army Study to Assess Risk and Resilience in Servicemembers (Army STARRS). Data were consistent with a hypothesis-testing model with multiple latent abilities: abstraction and set shifting. Latent abstraction ability was positively correlated with the number of concepts learned, and latent set-shifting ability was negatively correlated with the number of perseverative errors, supporting the construct validity of the two parameters. Abstraction was most precisely assessed for participants with abilities ranging from 1.5 standard deviations below the mean to the mean itself. Measurement of set shifting was acceptably precise only for participants making a high number of perseverative errors. The PCET precisely measures latent abstraction ability in the Army STARRS sample, especially within the range of mildly impaired to average ability. This precision pattern is ideal for a test developed to measure cognitive impairment as opposed to cognitive strength. The PCET also measures latent set-shifting ability, but reliable assessment is limited to the impaired range of ability, reflecting that perseverative errors are rare among cognitively healthy adults. Integrating cognitive and psychometric models can provide information about construct validity and measurement precision within a single analytical framework.
Hutchinson, Joseph P; Li, Jianfeng; Farrell, William; Groeber, Elizabeth; Szucs, Roman; Dicinoski, Greg; Haddad, Paul R
2011-03-25
The responses of four different types of aerosol detectors have been evaluated and compared to establish their potential use as a universal detector in conjunction with ultra high pressure liquid chromatography (UHPLC). Two charged-aerosol detectors, namely Corona CAD and Corona Ultra, and also two different types of light-scattering detectors (an evaporative light scattering detector, and a nano-quantity analyte detector [NQAD]) were evaluated. The responses of these detectors were systematically investigated under changing experimental and instrumental parameters, such as the mobile phase flow-rate, analyte concentration, mobile phase composition, nebulizer temperature, evaporator temperature, evaporator gas flow-rate and instrumental signal filtering after detection. It was found that these parameters exerted non-linear effects on the responses of the aerosol detectors and must therefore be considered when designing analytical separation conditions, particularly when gradient elution is performed. Identical reversed-phase gradient separations were compared on all four aerosol detectors and further compared with UV detection at 200 nm. The aerosol detectors were able to detect all 11 analytes in a test set comprising species having a variety of physicochemical properties, whilst UV detection was applicable only to those analytes containing chromophores. The reproducibility of the detector response for 11 analytes over 10 consecutive separations was found to be approximately 5% for the charged-aerosol detectors and approximately 11% for the light-scattering detectors. The tested analytes included semi-volatile species which exhibited a more variable response on the aerosol detectors. Peak efficiencies were generally better on the aerosol detectors in comparison to UV detection and particularly so for the light-scattering detectors which exhibited efficiencies of around 110,000 plates per metre. Limits of detection were calculated using different mobile phase compositions and the NQAD detector was found to be the most sensitive (LOD of 10 ng/mL), followed by the Corona CAD (76 ng/mL), then UV detection at 200 nm (178 ng/mL) using an injection volume of 25 μL. Copyright © 2011 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelles, A.; Corstanje, A.; Enriquez, J.E.
2015-05-01
The pattern of the radio emission of air showers is finely sampled with the Low-Frequency ARray (LOFAR). A set of 382 measured air showers is used to test a fast, analytic parameterization of the distribution of pulse powers. Using this parameterization we are able to reconstruct the shower axis and give estimators for the energy of the air shower as well as the distance to the shower maximum.
NASA Technical Reports Server (NTRS)
Colwell, R. N. (Principal Investigator)
1983-01-01
The three types of LANDSAT 4 film products generally accessible to the user community were analyzed, and attempts were made to acquire a data set consisting of a variety of TM and MSS image products for the Sacramento and San Francisco Bay Area test sites. On request, the EDC developed an interim TM analytical film by using a laser beam recorder to produce black and white masters from which natural and false color composites were created.
NASA Technical Reports Server (NTRS)
Deckert, J. C.
1983-01-01
The details are presented of an onboard digital computer algorithm designed to reliably detect and isolate the first failure in a duplex set of flight control sensors aboard the NASA F-8 digital fly-by-wire aircraft. The algorithm's successful flight test program is summarized, and specific examples are presented of algorithm behavior in response to software-induced signal faults, both with and without aircraft parameter modeling errors.
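A minimal sketch of one generic duplex failure detection and isolation scheme, under stated assumptions: a failure is detected when the two like sensors miscompare beyond a threshold and isolated by comparing each against an analytic estimate reconstructed from other measurements. Thresholds, the fault model, and the estimate are illustrative; this is not the flight algorithm.

```python
# Hedged sketch: generic duplex sensor failure detection (miscompare) and
# isolation (comparison against an analytic estimate). Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.02)
truth = np.sin(0.5 * t)                           # true signal, e.g. a pitch rate
sensor_a = truth + rng.normal(0, 0.01, t.size)
sensor_b = truth + rng.normal(0, 0.01, t.size)
sensor_b[t > 6] += 0.3                            # inject a bias failure in sensor B
estimate = truth + rng.normal(0, 0.05, t.size)    # analytic estimate from other sensors

DETECT, ISOLATE = 0.1, 0.15
for k in range(t.size):
    if abs(sensor_a[k] - sensor_b[k]) > DETECT:   # miscompare: one of the pair has failed
        failed = "A" if abs(sensor_a[k] - estimate[k]) > ISOLATE else "B"
        print(f"failure detected at t = {t[k]:.2f} s, isolated to sensor {failed}")
        break
```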
Ultrasonic imaging of textured alumina
NASA Technical Reports Server (NTRS)
Stang, David B.; Salem, Jonathan A.; Generazio, Edward R.
1989-01-01
Ultrasonic images representing the bulk attenuation and velocity of a set of alumina samples were obtained by a pulse-echo contact scanning technique. The samples were taken from larger bodies that were chemically similar but were processed by extrusion or isostatic processing. The crack growth resistance and fracture toughness of the larger bodies were found to vary with processing method and test orientation. The results presented here demonstrate that differences in texture that contribute to variations in structural performance can be revealed by analytic ultrasonic techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelms, Benjamin; Stambaugh, Cassandra; Hunt, Dylan
2015-08-15
Purpose: The authors designed data, methods, and metrics that can serve as a standard, independent of any software package, to evaluate dose-volume histogram (DVH) calculation accuracy and detect limitations. The authors use simple geometrical objects at different orientations combined with dose grids of varying spatial resolution with linear 1D dose gradients; when combined, ground truth DVH curves can be calculated analytically in closed form to serve as the absolute standards. Methods: DICOM RT structure sets containing a small sphere, cylinder, and cone were created programmatically with axial plane spacing varying from 0.2 to 3 mm. Cylinders and cones were modeled in two different orientations with respect to the IEC 1217 Y axis. The contours were designed to stringently but methodically test voxelation methods required for DVH. Synthetic RT dose files were generated with 1D linear dose gradient and with grid resolution varying from 0.4 to 3 mm. Two commercial DVH algorithms—PINNACLE (Philips Radiation Oncology Systems) and PlanIQ (Sun Nuclear Corp.)—were tested against analytical values using custom, noncommercial analysis software. In Test 1, axial contour spacing was constant at 0.2 mm while dose grid resolution varied. In Tests 2 and 3, the dose grid resolution was matched to varying subsampled axial contours with spacing of 1, 2, and 3 mm, and difference analysis and metrics were employed: (1) histograms of the accuracy of various DVH parameters (total volume, Dmax, Dmin, and doses to % volume: D99, D95, D5, D1, D0.03 cm³) and (2) volume errors extracted along the DVH curves were generated and summarized in tabular and graphical forms. Results: In Test 1, PINNACLE produced 52 deviations (15%) while PlanIQ produced 5 (1.5%). In Test 2, PINNACLE and PlanIQ differed from analytical by >3% in 93 (36%) and 18 (7%) times, respectively. Excluding Dmin and Dmax as least clinically relevant would result in 32 (15%) vs 5 (2%) scored deviations for PINNACLE vs PlanIQ in Test 1, while Test 2 would yield 53 (25%) vs 17 (8%). In Test 3, statistical analyses of volume errors extracted continuously along the curves show PINNACLE to have more errors and higher variability (relative to PlanIQ), primarily due to PINNACLE’s lack of sufficient 3D grid supersampling. Another major driver for PINNACLE errors is an inconsistency in implementation of the “end-capping”; the additional volume resulting from expanding superior and inferior contours halfway to the next slice is included in the total volume calculation, but dose voxels in this expanded volume are excluded from the DVH. PlanIQ had fewer deviations, and most were associated with a rotated cylinder modeled by rectangular axial contours; for coarser axial spacing, the limited number of cross-sectional rectangles hinders the ability to render the true structure volume. Conclusions: The method is applicable to any DVH-calculating software capable of importing DICOM RT structure set and dose objects (the authors' examples are available for download). It includes a collection of tests that probe the design of the DVH algorithm, measure its accuracy, and identify failure modes. Merits and applicability of each test are discussed.
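A minimal sketch in the spirit of the synthetic benchmarks described above: a cumulative DVH is computed for a sphere embedded in a linear 1D dose gradient using a simple voxel-centre inclusion test. Sizes are illustrative, and the supersampling and end-capping subtleties the paper probes are ignored.

```python
# Hedged sketch: cumulative DVH of a sphere in a linear 1D dose gradient,
# using a plain voxel-centre inclusion test. Illustrative sizes only.
import numpy as np

voxel = 1.0                                    # mm, isotropic dose-grid resolution
axis = np.arange(-25, 25, voxel) + voxel / 2   # voxel centres
z, y, x = np.meshgrid(axis, axis, axis, indexing="ij")

dose = 50.0 + 1.0 * z                          # linear gradient: 1 Gy per mm along z
inside = x**2 + y**2 + z**2 <= 10.0**2         # sphere of radius 10 mm
struct_dose = dose[inside]

# cumulative DVH: fraction of structure volume receiving at least dose D
levels = np.linspace(struct_dose.min(), struct_dose.max(), 200)
dvh = [(struct_dose >= d).mean() * 100 for d in levels]

volume_cc = inside.sum() * voxel**3 / 1000.0   # each voxel is 1 mm^3
d95 = np.percentile(struct_dose, 5)            # dose received by 95% of the volume
print(f"volume = {volume_cc:.2f} cm^3 (analytic 4/3*pi*r^3 = 4.19), D95 = {d95:.1f} Gy")
print(f"volume at or above 55 Gy: {np.interp(55.0, levels, dvh):.1f}%")
```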
Zhang, Hong-guang; Lu, Jian-gang
2016-02-01
To overcome the problems of significant differences among samples and nonlinearity between the property and spectra of samples in spectral quantitative analysis, a local regression algorithm is proposed in this paper. In this algorithm, a net analyte signal (NAS) method was first used to obtain the net analyte signal of the calibration samples and the unknown samples; then the Euclidean distance between the net analyte signal of an unknown sample and that of the calibration samples was calculated and used as a similarity index. According to the defined similarity index, a local calibration set was individually selected for each unknown sample. Finally, a local PLS regression model was built on the local calibration set of each unknown sample. The proposed method was applied to a set of near infrared spectra of meat samples. The results demonstrate that the prediction precision and model complexity of the proposed method are superior to those of the global PLS regression method and a conventional local regression algorithm based on spectral Euclidean distance.
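A minimal sketch of the local-regression loop: for each unknown sample, the most similar calibration samples are selected by a distance index and a local PLS model is fitted on just those neighbours. For brevity the index below is the plain spectral Euclidean distance (a stand-in for the paper's NAS-based index), and the data are synthetic.

```python
# Hedged sketch: local PLS calibration with neighbour selection by a distance
# index. Synthetic spectra; the NAS-based index of the paper is not reproduced.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(11)
wl = np.linspace(900, 1700, 100)                        # pseudo NIR axis, nm

def spectrum(c):                                        # response to property value c
    return (c * np.exp(-0.5 * ((wl - 1200) / 60) ** 2)
            + 0.2 * np.sin(wl / 90) + rng.normal(0, 0.01, wl.size))

y_cal = rng.uniform(0, 10, 150)
X_cal = np.array([spectrum(c) for c in y_cal])
y_unk = rng.uniform(0, 10, 5)
X_unk = np.array([spectrum(c) for c in y_unk])

k = 30                                                  # local calibration-set size
for xu, yu in zip(X_unk, y_unk):
    idx = np.argsort(np.linalg.norm(X_cal - xu, axis=1))[:k]   # similarity index
    local = PLSRegression(n_components=3).fit(X_cal[idx], y_cal[idx])
    print(f"reference {yu:5.2f}  predicted {local.predict(xu[None, :]).item():5.2f}")
```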
Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten
2012-11-01
In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations, the output rate of the test gas devices can easily be predicted based only on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for those compounds which are typically not listed in physico-chemical handbooks or internet databases, the adjustment of the test gas source to the concentration range required for the individual analytical application is straightforward. The agreement of the predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds, and for different dimensions of the diffusion capillaries. The limits of the predictability of the output rates are explored; the output rates are underpredicted when very thin capillaries are used. It is demonstrated that pressure variations are responsible for the observed deviation of the output rates. To overcome the influence of pressure variations and at the same time to establish a suitable test gas source for highly volatile compounds, the usability of permeation sources is also explored, for example for the generation of molecular bromine test gases.
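A minimal sketch of predicting a diffusion-capillary output from easily available inputs, under stated assumptions: the vapour pressure is estimated from the boiling point via Trouton's rule and the Clausius-Clapeyron relation, and one common form of the diffusion-tube equation is used. The diffusion coefficient is simply assumed, and none of this reproduces the paper's actual equation set or calibration.

```python
# Hedged sketch: rough diffusion-tube output estimate. Trouton's rule +
# Clausius-Clapeyron give the vapour pressure from the boiling point; one
# common diffusion-tube equation then gives the mass emission rate. The
# diffusion coefficient D and all values are illustrative assumptions.
import numpy as np

R = 8.314            # J mol^-1 K^-1
T = 303.15           # K, source temperature
P = 101325.0         # Pa, ambient pressure
M = 92.14e-3         # kg/mol (toluene, as an example analyte)
Tb = 383.8           # K, boiling point of toluene
D = 8.5e-6           # m^2/s, assumed vapour diffusion coefficient in air at T
r, L = 0.5e-3, 0.05  # capillary radius (m) and diffusion path length (m)

dHvap = 88.0 * Tb                                   # Trouton's rule, J/mol
p_vap = P * np.exp(-dHvap / R * (1 / T - 1 / Tb))   # Clausius-Clapeyron from Tb
A = np.pi * r**2
rate = D * M * P * A / (R * T * L) * np.log(P / (P - p_vap))   # kg/s
print(f"vapour pressure ~ {p_vap:.0f} Pa, output ~ {rate * 1e9 * 60:.2f} ug/min")
```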
Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration
NASA Technical Reports Server (NTRS)
Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.
1993-01-01
Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.
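A minimal sketch of one standard test/analysis correlation metric, the modal assurance criterion (MAC), between analytical and test mode shapes sampled at common degrees of freedom; the toy shapes are illustrative and the paper's own correlation procedure is not reproduced here.

```python
# Hedged sketch: MAC matrix between analytical and test mode shapes.
# Toy sine shapes stand in for real finite element and modal test vectors.
import numpy as np

def mac(phi_a, phi_t):
    """MAC matrix between columns of analytical (phi_a) and test (phi_t) mode shapes."""
    num = np.abs(phi_a.T @ phi_t) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0), np.sum(phi_t * phi_t, axis=0))
    return num / den

x = np.linspace(0, 1, 12)                               # 12 measurement locations
analytical = np.column_stack([np.sin(np.pi * x), np.sin(2 * np.pi * x), np.sin(3 * np.pi * x)])
rng = np.random.default_rng(4)
test = analytical + rng.normal(0, 0.05, analytical.shape)   # "measured" shapes with noise

np.set_printoptions(precision=2, suppress=True)
print(mac(analytical, test))    # near-identity matrix indicates good correlation
```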
40 CFR 600.108-08 - Analytical gases.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.108-08 Analytical gases. The analytical gases for all fuel economy testing...
40 CFR 600.108-08 - Analytical gases.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ECONOMY AND GREENHOUSE GAS EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy and Carbon-Related Exhaust Emission Test Procedures § 600.108-08 Analytical gases. The analytical gases for all fuel economy testing...
Diagnosing breast cancer using Raman spectroscopy: prospective analysis
NASA Astrophysics Data System (ADS)
Haka, Abigail S.; Volynskaya, Zoya; Gardecki, Joseph A.; Nazemi, Jon; Shenk, Robert; Wang, Nancy; Dasari, Ramachandra R.; Fitzmaurice, Maryann; Feld, Michael S.
2009-09-01
We present the first prospective test of Raman spectroscopy in diagnosing normal, benign, and malignant human breast tissues. Prospective testing of spectral diagnostic algorithms allows clinicians to accurately assess the diagnostic information contained in, and any bias of, the spectroscopic measurement. In previous work, we developed an accurate, internally validated algorithm for breast cancer diagnosis based on analysis of Raman spectra acquired from fresh-frozen in vitro tissue samples. We currently evaluate the performance of this algorithm prospectively on a large ex vivo clinical data set that closely mimics the in vivo environment. Spectroscopic data were collected from freshly excised surgical specimens, and 129 tissue sites from 21 patients were examined. Prospective application of the algorithm to the clinical data set resulted in a sensitivity of 83%, a specificity of 93%, a positive predictive value of 36%, and a negative predictive value of 99% for distinguishing cancerous from normal and benign tissues. The performance of the algorithm in different patient populations is discussed. Sources of bias in the in vitro calibration and ex vivo prospective data sets, including disease prevalence and disease spectrum, are examined and analytical methods for comparison provided.
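A minimal arithmetic check of how the reported sensitivity and specificity combine with disease prevalence to give predictive values; the prevalence below is an assumed value, not a figure from the paper.

```python
# Hedged sketch: predictive values from sensitivity, specificity, and an
# assumed prevalence, illustrating why PPV is low despite good sens/spec.
sens, spec = 0.83, 0.93
prevalence = 0.05                                     # assumed fraction of cancerous sites

ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = spec * (1 - prevalence) / ((1 - sens) * prevalence + spec * (1 - prevalence))
print(f"PPV ~ {ppv:.0%}, NPV ~ {npv:.0%}")            # low PPV despite good sens/spec
```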
DEMONSTRATION OF THE ANALYTIC ELEMENT METHOD FOR WELLHEAD PROJECTION - PROJECT SUMMARY
A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a ground-water fl...
NHEXAS PHASE I REGION 5 STUDY--METALS IN DUST ANALYTICAL RESULTS
This data set includes analytical results for measurements of metals in 1,906 dust samples. Dust samples were collected to assess potential residential sources of dermal and inhalation exposures and to examine relationships between analyte levels in dust and in personal and bioma...
Features Students Really Expect from Learning Analytics
ERIC Educational Resources Information Center
Schumacher, Clara; Ifenthaler, Dirk
2016-01-01
In higher education settings more and more learning is facilitated through online learning environments. To support and understand students' learning processes better, learning analytics offers a promising approach. The purpose of this study was to investigate students' expectations toward features of learning analytics systems. In a first…
Centaur Standard Shroud (CSS) static ultimate load structural tests
NASA Technical Reports Server (NTRS)
1975-01-01
A series of tests were conducted on the jettisonable metallic shroud used on the Titan/Centaur launch vehicle to verify its structural capabilities and to evaluate its structural interaction with the Centaur stage. A flight-configured shroud and the interfacing Titan/Centaur structural assemblies were subjected to tests consisting of combinations of applied axial and shear loads to design ultimate values, including a set of tests under thermal conditions and two dynamic response tests to verify the analytical stiffness model. The strength capabilities were demonstrated at ultimate (125 percent of design limit) loads. It was also verified that the spring rate of the flight-configured shroud-to-Centaur forward interface, derived from structural deflections of the specimen, became nonlinear, as expected, above limit load values. This test series qualification program verified that the Titan/Centaur shroud and the Centaur and Titan interface components are structurally qualified at design ultimate loads.
Discharge reliability in ablative pulsed plasma thrusters
NASA Astrophysics Data System (ADS)
Wu, Zhiwen; Sun, Guorui; Yuan, Shiyue; Huang, Tiankun; Liu, Xiangyang; Xie, Kan; Wang, Ningfei
2017-08-01
Discharge reliability is typically neglected in low-ignition-cycle ablative pulsed plasma thrusters (APPTs). In this study, the discharge reliability of an APPT is assessed analytically and experimentally. The goals of this study are to better understand the ignition characteristics and to assess the accuracy of the analytical method. For each of six sets of operating conditions, 500 tests of a parallel-plate APPT with a coaxial semiconductor spark plug are conducted. The discharge voltage and current are measured with a high-voltage probe and a Rogowski coil, respectively, to determine whether the discharge is successful. Generally, the discharge success rate increases as the discharge voltage increases, and it decreases as the electrode gap and the number of ignitions increase. The theoretical analysis and the experimental results are reasonably consistent. This approach provides a reference for designing APPTs and improving their stability.
Hey, Jody; Nielsen, Rasmus
2007-01-01
In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231
Development and Preliminary Evaluation of a Multivariate Index Assay for Ovarian Cancer
Chen, Tzong-Hao; Bergstrom, Katharine J.; Zhao, Jinghua; Seshaiah, Partha; Yip, Ping; Mansfield, Brian C.
2009-01-01
Background: Most women with a clinical presentation consistent with ovarian cancer have benign conditions. Therefore methods to distinguish women with ovarian cancer from those with benign conditions would be beneficial. We describe the development and preliminary evaluation of a serum-based multivariate assay for ovarian cancer. This hypothesis-driven study examined whether an informative pattern could be detected in stage I disease that persists through later stages. Methodology/Principal Findings: Sera, collected under uniform protocols from multiple institutions, representing 176 cases and 187 controls from women presenting for surgery, were examined using high-throughput, multiplexed immunoassays. All stages and common subtypes of epithelial ovarian cancer, and the most common benign ovarian conditions, were represented. A panel of 104 antigens, 44 autoimmune and 56 infectious disease markers were assayed and informative combinations identified. Using a training set of 91 stage I data sets, representing 61 individual samples, and an equivalent number of controls, an 11-analyte profile, composed of CA-125, CA 19-9, EGF-R, C-reactive protein, myoglobin, apolipoprotein A1, apolipoprotein CIII, MIP-1α, IL-6, IL-18 and tenascin C, was identified and appears informative for all stages and common subtypes of ovarian cancer. Using a testing set of 245 samples, approximately twice the size of the model building set, the classifier had 91.3% sensitivity and 88.5% specificity. While these preliminary results are promising, further refinement and extensive validation of the classifier in a clinical trial is necessary to determine if the test has clinical value. Conclusions/Significance: We describe a blood-based assay using 11 analytes that can distinguish women with ovarian cancer from those with benign conditions. Preliminary evaluation of the classifier suggests it has the potential to offer approximately 90% sensitivity and 90% specificity. While promising, the performance needs to be assessed in a blinded clinical validation study. PMID:19240799
A survey of tools for variant analysis of next-generation genome sequencing data
Pabinger, Stephan; Dander, Andreas; Fischer, Maria; Snajder, Rene; Sperk, Michael; Efremova, Mirjana; Krabichler, Birgit; Speicher, Michael R.; Zschocke, Johannes
2014-01-01
Recent advances in genome sequencing technologies provide unprecedented opportunities to characterize individual genomic landscapes and identify mutations relevant for diagnosis and therapy. Specifically, whole-exome sequencing using next-generation sequencing (NGS) technologies is gaining popularity in the human genetics community due to the moderate costs, manageable data amounts and straightforward interpretation of analysis results. While whole-exome and, in the near future, whole-genome sequencing are becoming commodities, data analysis still poses significant challenges and has led to the development of a plethora of tools supporting specific parts of the analysis workflow or providing a complete solution. Here, we surveyed 205 tools for whole-genome/whole-exome sequencing data analysis supporting five distinct analytical steps: quality assessment, alignment, variant identification, variant annotation and visualization. We report an overview of the functionality, features and specific requirements of the individual tools. We then selected 32 programs for variant identification, variant annotation and visualization, which were subjected to hands-on evaluation using four data sets: one set of exome data from two patients with a rare disease for testing identification of germline mutations, two cancer data sets for testing variant callers for somatic mutations, copy number variations and structural variations, and one semi-synthetic data set for testing identification of copy number variations. Our comprehensive survey and evaluation of NGS tools provides a valuable guideline for human geneticists working on Mendelian disorders, complex diseases and cancers. PMID:23341494
White, P Lewis; Barnes, Rosemary A; Springer, Jan; Klingspor, Lena; Cuenca-Estrella, Manuel; Morton, C Oliver; Lagrou, Katrien; Bretagne, Stéphane; Melchers, Willem J G; Mengoli, Carlo; Donnelly, J Peter; Heinz, Werner J; Loeffler, Juergen
2015-09-01
Aspergillus PCR testing of serum provides technical simplicity but potentially reduced sensitivity compared to whole-blood testing. For diseases in which screening to exclude disease represents the optimal strategy, sensitivity is paramount. The associated analytical study confirmed that DNA concentrations were greater in plasma than in serum. The aim of the current investigation was to confirm the analytical findings by comparing the performance of Aspergillus PCR testing of plasma and serum in the clinical setting. Standardized Aspergillus PCR was performed on plasma and serum samples concurrently obtained from hematology patients in a multicenter retrospective anonymous case-control study, with cases diagnosed according to European Organization for Research and Treatment of Cancer/Invasive Fungal Infections Cooperative Group and the National Institute of Allergy and Infectious Diseases Mycoses Study Group (EORTC/MSG) consensus definitions (19 proven/probable cases and 42 controls). Clinical performance and clinical utility (time to positivity) were calculated for both kinds of samples. The sensitivity and specificity for Aspergillus PCR when testing serum were 68.4% and 76.2%, respectively, and for plasma, they were 94.7% and 83.3%, respectively. Eighty-five percent of serum and plasma PCR results were concordant. On average, plasma PCR was positive 16.8 days before diagnosis and was the earliest indicator of infection in 13 cases, combined with other biomarkers in five cases. On average, serum PCR was positive 10.8 days before diagnosis and was the earliest indicator of infection in six cases, combined with other biomarkers in three cases. These results confirm the analytical finding that the sensitivity of Aspergillus PCR using plasma is superior to that using serum. PCR positivity occurs earlier when testing plasma and provides sufficient sensitivity for the screening of invasive aspergillosis while maintaining methodological simplicity. Copyright © 2015 White et al.
Experimental performance and acoustic investigation of modern, counterrotating blade concepts
NASA Technical Reports Server (NTRS)
Hoff, G. E.
1990-01-01
The aerodynamic, acoustic, and aeromechanical performance of counterrotating blade concepts was evaluated both theoretically and experimentally. Analytical methods development and design are addressed. Utilizing the analytical methods which evolved during the conduct of this work, aerodynamic and aeroacoustic predictions were developed, which were compared to NASA and GE wind tunnel test results. The detailed mechanical design and fabrication of five different composite shell/titanium spar counterrotating blade set configurations are presented. Design philosophy, analysis methods, and material geometry are addressed, as well as the influence of aerodynamics, aeromechanics, and aeroacoustics on the design procedures. Blade fabrication and quality control procedures are detailed; bench testing procedures and results of blade integrity verification are presented; and instrumentation associated with the bench testing also is identified. Additional hardware to support specialized testing is described, as are operating blade instrumentation and the associated stress limits. The five counterrotating blade concepts were scaled to a tip diameter of 2 feet, so they could be incorporated into MPS (model propulsion simulators). Aerodynamic and aeroacoustic performance testing was conducted in the NASA Lewis 8 x 6 supersonic and 9 x 15 V/STOL (vertical or short takeoff and landing) wind tunnels and in the GE freejet anechoic test chamber (Cell 41) to generate an experimental data base for these counterrotating blade designs. Test facility and MPS vehicle matrices are provided, and test procedures are presented. Effects on performance of rotor-to-rotor spacing, angle-of-attack, pylon proximity, blade number, reduced-diameter aft blades, and mismatched rotor speeds are addressed. Counterrotating blade and specialized aeromechanical hub stability test results are also furnished.
ERIC Educational Resources Information Center
Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette
2017-01-01
Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…
Modeling of classical swirl injector dynamics
NASA Astrophysics Data System (ADS)
Ismailov, Maksud M.
The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even experimental observation has been limited to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function which is strongly dependent on the so called artificial viscosity factor. This causes an uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using the linear small perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components, which are the phenomena of disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies that are coincident with those resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by analytical methods and BEM. It should be noted, however, that Bazarov's theory does not predict the resonant peaks. Overall, this methodology provides a clearer understanding of the injector dynamics than Bazarov's. Even though the exact value of response is not possible to obtain at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from where the theoretical description of reflection/refraction, resonance, and their interaction with each other may be refined to higher order to obtain a more precise value.
Analytic Cognitive Style Predicts Religious and Paranormal Belief
ERIC Educational Resources Information Center
Pennycook, Gordon; Cheyne, James Allan; Seli, Paul; Koehler, Derek J.; Fugelsang, Jonathan A.
2012-01-01
An analytic cognitive style denotes a propensity to set aside highly salient intuitions when engaging in problem solving. We assess the hypothesis that an analytic cognitive style is associated with a history of questioning, altering, and rejecting (i.e., unbelieving) supernatural claims, both religious and paranormal. In two studies, we examined…
WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL (EPA/600/SR-94/210)
A new computer program has been developed to determine time-of-travel capture zones in relatively simple geohydrological settings. The WhAEM package contains an analytic element model that uses superposition of (many) closed form analytical solutions to generate a groundwater flo...
NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDES IN SPIKE SAMPLES
The Pesticides in Spikes data set contains the analytical results of measurements of up to 17 pesticides in 12 control samples (spikes) from 11 households. Measurements were made in samples of blood serum. Controls were used to assess recovery of target analytes from a sample m...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2014 CFR
2014-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2012 CFR
2012-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
42 CFR 493.1236 - Standard: Evaluation of proficiency testing performance.
Code of Federal Regulations, 2013 CFR
2013-10-01
... the following: (1) Any analyte or subspecialty without analytes listed in subpart I of this part that is not evaluated or scored by a CMS-approved proficiency testing program. (2) Any analyte, specialty...
Space Launch System Vibration Analysis Support
NASA Technical Reports Server (NTRS)
Johnson, Katie
2016-01-01
The ultimate goal for my efforts during this internship was to help prepare for the Space Launch System (SLS) integrated modal test (IMT) with Rodney Rocha. In 2018, the Structural Engineering Loads and Dynamics Team will have 10 days to perform the IMT on the SLS Integrated Launch Vehicle. After that 10-day period, we will have about two months to analyze the test data and determine whether the integrated vehicle modes/frequencies are adequate for launching the vehicle. Because of the time constraints, NASA must have newly developed post-test analysis methods proven well and with technical confidence before testing. NASA civil servants, with help from rotational interns, are working with novel techniques developed and applied outside Johnson Space Center (JSC) to uncover issues in applying this technique to much larger scales than ever before. We intend to use modal decoupling methods to separate the entangled vibrations coming from the SLS and its support structure during the IMT. This new approach is still under development. The primary goal of my internship was to learn the basics of structural dynamics and physical vibrations. I was able to accomplish this by working on two experimental test setups, the Simple Beam and TAURUS-T, and by doing some light analytical and post-processing work. Within the Simple Beam project, my role involved changing the data acquisition system, reconfiguring the test setup, calibrating transducers, collecting data, recovering data files, and performing post-processing analysis. Within the TAURUS-T project, my duties included cataloging and removing the 30+ triaxial accelerometers; coordinating the removal of the structure from the current rolling cart to a sturdy billet for further testing; preparing the accelerometers for remounting; accurately calibrating, mounting, and mapping all accelerometer channels; and some testing. Hammer and shaker tests will be performed to easily visualize mode shapes at low frequencies. Short analytical projects using MATLAB were also assigned to aid in research efforts. These included integration of acceleration data for comparison to measured displacement data. Laplace and Fourier transforms were also investigated to determine their viability as a method of modal decoupling. In addition to these projects, I was also able to contribute work that would benefit future interns and the division as a whole. I gave a short presentation and answered questions to aid in the recruitment of subsequent interns and co-ops for the division. I also assisted in revisions and additions to the Intern/Co-Op Handbook to provide incoming employees with background information on the organization they are about to work for. I further developed a tutorial on the Pulse software, which was used for data acquisition for both experiments and will be helpful to interns and engineers who may be unfamiliar with the software. I gained a diverse range of experience throughout my internship. I was introduced to advanced dynamics and analytical techniques through new experience with both hands-on experimentation and analytical post-processing methods. I was exposed to the benefits of interdepartmental collaboration and developed stronger skills in time management by coordinating two different tests at once. This internship provided an excellent opportunity to see how engineering theories apply to real-life scenarios, and an introduction to how NASA/JSC solves technical problems.
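The MATLAB exercise mentioned above (integrating acceleration records for comparison with measured displacement) can be illustrated with a minimal Python sketch; the signal, sample rate, and detrending choices below are hypothetical stand-ins, not the intern's actual data or scripts:

```python
import numpy as np

def cumtrapz(y, dt):
    """Cumulative trapezoidal integral of a uniformly sampled signal."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
    return out

# Hypothetical accelerometer record: 100 Hz sampling, 1.5 Hz sine stand-in
dt = 0.01
t = np.arange(0.0, 10.0, dt)
accel = np.sin(2 * np.pi * 1.5 * t)      # m/s^2

accel = accel - accel.mean()             # remove DC bias before integrating
vel = cumtrapz(accel, dt)
vel = vel - vel.mean()                   # remove residual offset to limit drift
disp = cumtrapz(vel, dt)                 # displacement to compare against measurement
```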
Kavsak, Peter A; Ainsworth, Craig; Arnold, Donald M; Scott, Terry; Clark, Lorna; Ivica, Josko; Mackett, Katharine; Whitlock, Richard; Worster, Andrew
2018-05-08
Elevated and non-changing high-sensitivity cardiac troponin (hs-cTn) concentrations may suggest a process other than acute injury, possibly due to chronic condition(s) causing the elevation, an analytical error/interference or the formation of macrocomplexes. Heart-type fatty acid binding protein (H-FABP) might be useful in this setting to identify the etiology of abnormally high and non-changing cTn concentrations, which could aid clinical decision making in the hospital setting. We analytically validated the H-FABP assay (Randox) on the Abbott ARCHITECT c8000 platform, testing imprecision, linearity, stability, and matrix comparison. Over the 2-month analytical validation, EDTA plasma samples from patients with a hospital visit with persistently elevated and stable cTnI concentrations (Abbott hs-cTnI ≥52 ng/L, or 2x the 99th percentile upper limit of normal (ULN = 26 ng/L), with change between results <20%) were collected and frozen (-20 °C). These samples were tested with the H-FABP assay and polyethylene glycol (PEG) precipitation, with the lowest estimated glomerular filtration rate (eGFR) during the hospital visit also obtained for these patients. The H-FABP assay was linear, with concentrations stable after 4 freeze/thaw cycles, up to 150 h at room temperature, and comparable between lithium heparin and EDTA plasma. During the validation there were 6 patients with eGFR ≥60 ml/min/1.73 m2 identified (total population screened n = 917) with high and non-changing hs-cTnI concentrations. All 6 patients had H-FABP <2x ULN, with 3 patients having a macrocomplex and a final diagnosis other than ACS. Testing of H-FABP in patients with an eGFR ≥60 ml/min/1.73 m2 with persistently high and stable cTn elevations may help to confirm prior cardiac injury or the presence of macrocomplexes as the source of these elevations. Copyright © 2017. Published by Elsevier Inc.
Coedo, A G; Padilla, I; Dorado, M T
2004-12-01
This paper describes a study designed to determine the possibility of using a dried aerosol solution for calibration in laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The relative sensitivities of tested materials mobilized by laser ablation and by aqueous nebulization were established, and the experimentally determined relative sensitivity factors (RSFs) were used in conjunction with aqueous calibration for the analysis of solid steel samples. For this purpose, a set of CRM carbon steel samples (SS-451/1 to SS-460/1) was introduced into an ICP-MS instrument by solution nebulization using a microconcentric nebulizer with membrane desolvation (D-MCN) and by laser ablation (LA). Both systems were applied with the same ICP-MS operating parameters and the analyte signals were compared. The RSF (desolvated aerosol response/ablated solid response) values were close to 1 for the analytes Cr, Ni, Co, V, and W, about 1.3 for Mo, and 1.7 for As, P, and Mn. Complementary tests were carried out using CRM SS-455/1 as a solid standard for one-point calibration, applying LAMTRACE software for data reduction and quantification. The analytical results are in good agreement with the certified values in all cases, showing that dried aerosol solution is a good alternative calibration approach for laser ablation sampling.
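As a loose illustration of how an RSF of the kind reported (desolvated aerosol response divided by ablated solid response) might be combined with an aqueous calibration to quantify a laser-ablation signal, one could write something like the sketch below; the function name and all numerical values are placeholders, not values from the study:

```python
def quantify_la_signal(counts_la, aqueous_sensitivity, rsf):
    """
    counts_la           : background-corrected LA-ICP-MS intensity for the analyte
    aqueous_sensitivity : aqueous (desolvated-aerosol) calibration sensitivity,
                          counts per concentration unit
    rsf                 : desolvated-aerosol response / ablated-solid response
    """
    # Scale the solid-sampling response to the aqueous response level,
    # then apply the aqueous calibration to get a concentration estimate.
    return counts_la * rsf / aqueous_sensitivity

# Placeholder values for illustration only
print(quantify_la_signal(counts_la=5.2e4, aqueous_sensitivity=1.0e3, rsf=1.3))
```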
Dried Blood Spots - Preparing and Processing for Use in Immunoassays and in Molecular Techniques
Grüner, Nico; Stambouli, Oumaima; Ross, R. Stefan
2015-01-01
The idea of collecting blood on a paper card and subsequently using the dried blood spots (DBS) for diagnostic purposes originated a century ago. Since then, DBS testing has for decades remained predominantly focused on the diagnosis of infectious diseases, especially in resource-limited settings, or the systematic screening of newborns for inherited metabolic disorders, and only recently have a variety of new and innovative DBS applications begun to emerge. For many years, pre-analytical variables were given insufficient consideration in the field of DBS testing, and even today, with the exception of newborn screening, the entire pre-analytical phase, which comprises the preparation and processing of DBS for their final analysis, has not been standardized. Given this background, a comprehensive step-by-step protocol, which covers all the essential phases, is proposed, i.e., collection of blood; preparation of blood spots; drying of blood spots; storage and transportation of DBS; elution of DBS, and finally analyses of DBS eluates. The effectiveness of this protocol was first evaluated with 1,762 coupled serum/DBS pairs for detecting markers of hepatitis B virus, hepatitis C virus, and human immunodeficiency virus infections on an automated analytical platform. In a second step, the protocol was utilized during a pilot study, which was conducted on active drug users in the German cities of Berlin and Essen. PMID:25867233
Use of evidence in a categorization task: analytic and holistic processing modes.
Greco, Alberto; Moretti, Stefania
2017-11-01
Category learning performance can be influenced by many contextual factors, but the effects of these factors are not the same for all learners. The present study suggests that these differences can be due to the different ways evidence is used, according to two main basic modalities of processing information, analytically or holistically. In order to test the impact of the information provided, an inductive rule-based task was designed, in which feature salience and comparison informativeness between examples of two categories were manipulated during the learning phases, by introducing and progressively reducing some perceptual biases. To gather data on processing modalities, we devised the Active Feature Composition task, a production task that does not require classifying new items but rather reproducing them by combining features. At the end, an explicit rating task was performed, which entailed assessing the accuracy of a set of possible categorization rules. A combined analysis of the data collected with these two different tests enabled us to profile participants with regard to the kind of processing modality, the structure of representations, and the quality of categorial judgments. Results showed that although the information provided was the same for all participants, those who adopted analytic processing better exploited evidence and performed more accurately, whereas with holistic processing categorization was perfectly possible but inaccurate. Finally, the cognitive implications of the proposed procedure, with regard to the processes and representations involved, are discussed.
CMEs' Speed, Travel Time, and Temperature: A Thermodynamic Approach
NASA Astrophysics Data System (ADS)
Durand-Manterola, Hector J.; Flandes, Alberto; Rivera, Ana Leonor; Lara, Alejandro; Niembro, Tatiana
2017-12-01
Due to their important role in space weather, coronal mass ejections (CMEs) have been thoroughly studied in order to forecast their speed and transit time from the Sun to the Earth. We present a thermodynamic analytical model that describes the dynamics of CMEs. The thermodynamic approach has some advantages with respect to the hydrodynamic approach. First, it deals with the energy involved, which is a scalar quantity. Second, one may calculate the work done by the different forces separately and sum all contributions to determine the changes in speed, which simplifies the problem and allows us to obtain fully rigorous results. Our model considers the drag force, which dominates the dynamics of CMEs, and the solar gravitational force, which has a much smaller effect but is still relevant enough to be considered. We derive an explicit analytical expression for the speed of a CME in terms of its most relevant parameters and obtain an analytical expression for the CME temperature. The model is tested with a CME observed at three different heliocentric distances with three different spacecraft (SOHO, ACE, and Ulysses); also with a set of 11 CMEs observed with the SOHO, Wind, and ACE spacecraft; and finally with two events observed with the STEREO spacecraft. In all cases, we find consistent agreement between the theoretical and the observed speeds and transit times. Additionally, for the set of 11 events, we estimate their temperatures at their departure position from their temperatures measured near the orbit of the Earth.
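The paper's closed-form expressions are not reproduced in the abstract, so the sketch below is only a generic numerical illustration of the two forces named (a drag term pulling the CME toward the ambient solar-wind speed plus solar gravity), not the thermodynamic model itself; all parameter values are hypothetical:

```python
GM_SUN = 1.327e20          # m^3 s^-2
R_SUN = 6.96e8             # m

def propagate_cme(v0, r0, w=400e3, gamma=2e-11, dt=60.0, r_end=1.496e11):
    """Toy drag + gravity kinematics: dv/dt = -gamma*(v-w)*|v-w| - GM/r^2."""
    r, v, t = r0, v0, 0.0
    while r < r_end:
        a = -gamma * (v - w) * abs(v - w) - GM_SUN / r**2
        v += a * dt
        r += v * dt
        t += dt
    return v, t / 3600.0   # arrival speed (m/s) and transit time (hours)

v_arr, t_hours = propagate_cme(v0=900e3, r0=20 * R_SUN)
print(f"arrival speed ~{v_arr/1e3:.0f} km/s after ~{t_hours:.0f} h")
```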
Enhance your team-based qualitative research.
Fernald, Douglas H; Duclos, Christine W
2005-01-01
Qualitative research projects often involve the collaborative efforts of a research team. Challenges inherent in teamwork include changes in membership and differences in analytical style, philosophy, training, experience, and skill. This article discusses teamwork issues and the tools and techniques used to improve team-based qualitative research. We drew on our experiences in working on numerous projects of varying size, duration, and purpose. Through trials of different tools and techniques, expert consultation, and review of the literature, we learned to improve how we build teams, manage information, and disseminate results. Attention given to team members and team processes is as important as choosing appropriate analytical tools and techniques. Attentive team leadership, commitment to early and regular team meetings, and discussion of roles, responsibilities, and expectations all help build more effective teams and establish clear norms. As data are collected and analyzed, it is important to anticipate potential problems arising from differing skills and styles and from how information and files are managed. Discuss analytical preferences and biases and set clear guidelines and practices for how data will be analyzed and handled. As emerging ideas and findings disperse across team members, common tools (such as summary forms and data grids), coding conventions, intermediate goals or products, and regular documentation help capture essential ideas and insights. In a team setting, little should be left to chance. This article identifies ways to improve team-based qualitative research with a more considered and systematic approach. Qualitative researchers will benefit from further examination and discussion of effective, field-tested, team-based strategies.
PAREMD: A parallel program for the evaluation of momentum space properties of atoms and molecules
NASA Astrophysics Data System (ADS)
Meena, Deep Raj; Gadre, Shridhar R.; Balanarayan, P.
2018-03-01
The present work describes a code for evaluating the electron momentum density (EMD), its moments and the associated Shannon information entropy for a multi-electron molecular system. The code works specifically for electronic wave functions obtained from traditional electronic structure packages such as GAMESS and GAUSSIAN. For the momentum space orbitals, the general expression for Gaussian basis sets in position space is analytically Fourier transformed to momentum space Gaussian basis functions. The molecular orbital coefficients of the wave function are taken as an input from the output file of the electronic structure calculation. The analytic expressions of EMD are evaluated over a fine grid and the accuracy of the code is verified by a normalization check and a numerical kinetic energy evaluation which is compared with the analytic kinetic energy given by the electronic structure package. Apart from electron momentum density, electron density in position space has also been integrated into this package. The program is written in C++ and is executed through a Shell script. It is also tuned for multicore machines with shared memory through OpenMP. The program has been tested for a variety of molecules and correlated methods such as CISD, Møller-Plesset second order (MP2) theory and density functional methods. For correlated methods, the PAREMD program uses natural spin orbitals as an input. The program has been benchmarked for a variety of Gaussian basis sets for different molecules showing a linear speedup on a parallel architecture.
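The normalization and numerical kinetic-energy checks described can be illustrated for the simplest possible case, a single normalized s-type Gaussian whose momentum-space form is known analytically; this is only a sketch of the idea, with an arbitrary exponent, not the PAREMD code itself:

```python
import numpy as np

alpha = 0.8                      # arbitrary Gaussian exponent (atomic units)

# Momentum-space counterpart of phi(r) = (2*alpha/pi)**(3/4) * exp(-alpha*r**2)
def phi_p(p):
    return (2.0 * np.pi * alpha) ** (-0.75) * np.exp(-p**2 / (4.0 * alpha))

def radial_integral(f, p):
    """Trapezoidal integral of 4*pi*p^2*f(p) over the radial momentum grid."""
    y = 4.0 * np.pi * p**2 * f
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(p))

p = np.linspace(0.0, 40.0 * np.sqrt(alpha), 20001)
emd = phi_p(p) ** 2                               # spherically symmetric momentum density

norm = radial_integral(emd, p)                    # should be ~1 (normalization check)
kinetic = radial_integral(0.5 * p**2 * emd, p)    # should be ~3*alpha/2

print(norm, kinetic, 1.5 * alpha)
```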
Complement system biomarkers in epilepsy.
Kopczynska, Maja; Zelek, Wioleta M; Vespa, Simone; Touchard, Samuel; Wardle, Mark; Loveless, Samantha; Thomas, Rhys H; Hamandi, Khalid; Morgan, B Paul
2018-05-24
To explore whether complement dysregulation occurs in a routinely recruited clinical cohort of epilepsy patients, and whether complement biomarkers have potential to be used as markers of disease severity and seizure control. Plasma samples from 157 epilepsy cases (106 with focal seizures, 46 generalised seizures, 5 unclassified) and 54 controls were analysed. Concentrations of 10 complement analytes (C1q, C3, C4, factor B [FB], terminal complement complex [TCC], iC3b, factor H [FH], Clusterin [Clu], Properdin, C1 Inhibitor [C1Inh]) plus C-reactive protein [CRP] were measured using enzyme-linked immunosorbent assay (ELISA). Univariate and multivariate statistical analyses were used to test whether combinations of complement analytes were predictive of epilepsy diagnoses and seizure occurrence. Correlation between the number and type of anti-epileptic drugs (AEDs) and complement analytes was also examined. We found: CONCLUSION: This study adds to evidence implicating complement in the pathogenesis of epilepsy and may allow the development of better therapeutics and prognostic markers in the future. Replication in a larger sample set is needed to validate the findings of the study. Copyright © 2018. Published by Elsevier Ltd.
Labour Market Driven Learning Analytics
ERIC Educational Resources Information Center
Kobayashi, Vladimer; Mol, Stefan T.; Kismihók, Gábor
2014-01-01
This paper briefly outlines a project about integrating labour market information in a learning analytics goal-setting application that provides guidance to students in their transition from education to employment.
A 'range test' for determining scatterers with unknown physical properties
NASA Astrophysics Data System (ADS)
Potthast, Roland; Sylvester, John; Kusiak, Steven
2003-06-01
We describe a new scheme for determining the convex scattering support of an unknown scatterer when the physical properties of the scatterer are not known. The convex scattering support is a subset of the scatterer and provides information about its location and estimates for its shape. For convex polygonal scatterers the scattering support coincides with the scatterer and we obtain full shape reconstructions. The method is formulated for the reconstruction of the scatterer from the far field pattern for one or a few incident waves. The method is non-iterative in nature and belongs to the class of recently derived generalized sampling schemes such as the 'no response test' of Luke-Potthast. The range test operates by testing whether it is possible to analytically continue a far field to the exterior of any test domain Ω_test. By intersecting the convex hulls of various test domains we can produce a minimal convex set, the convex scattering support, which must be contained in the convex hull of the support of any scatterer that produces that far field. The convex scattering support is calculated by testing the range of special integral operators for a sampling set of test domains. The numerical results can be used as an approximation for the support of the unknown scatterer. We prove convergence and regularity of the scheme and show numerical examples for sound-soft, sound-hard and medium scatterers. We can apply the range test to non-convex scatterers as well. We can conclude that a test domain Ω_test which passes the range test has a non-empty intersection with the infinity-support (the complement of the unbounded component of the complement of the support) of the true scatterer, but the test cannot find a minimal set which must be contained therein.
Kopcinovic, Lara Milevoj; Vogrinc, Zeljka; Kocijan, Irena; Culej, Jelena; Aralica, Merica; Jokic, Anja; Antoncic, Dragana; Bozovic, Marija
2016-10-15
We hypothesized that extravascular body fluid (EBF) analysis in Croatia is not harmonized and aimed to investigate the preanalytical, analytical and postanalytical procedures used in EBF analysis in order to identify key aspects that should be addressed in future harmonization attempts. An anonymous online survey created to explore laboratory testing of EBF was sent to secondary, tertiary and private health care Medical Biochemistry Laboratories (MBLs) in Croatia. Statements were designed to address preanalytical, analytical and postanalytical procedures for cerebrospinal, pleural, peritoneal (ascites), pericardial, seminal, synovial and amniotic fluid and sweat. Participants were asked to declare the strength of their agreement with the proposed statements using a Likert scale. Mean scores for corresponding separate statements, divided according to health care setting, were calculated and compared. The survey response rate was 0.64 (58/90). None of the participating private MBLs reported analysing EBF. We report a mean score of 3.45 for all statements evaluated. Deviations from desirable procedures were demonstrated in all EBF testing phases. Minor differences in the procedures used for EBF analysis were found between secondary and tertiary health care MBLs. The lowest scores were obtained for statements regarding quality control procedures in EBF analysis, participation in proficiency testing programmes and provision of interpretative comments on EBF test reports. Although good laboratory EBF practice is present in Croatia, procedures for EBF analysis should be further harmonized to improve the quality of EBF testing and patient safety.
Coproducing Aboriginal patient journey mapping tools for improved quality and coordination of care.
Kelly, Janet; Dwyer, Judith; Mackean, Tamara; O'Donnell, Kim; Willis, Eileen
2016-12-08
This paper describes the rationale and process for developing a set of Aboriginal patient journey mapping tools with Aboriginal patients, health professionals, support workers, educators and researchers in the Managing Two Worlds Together project between 2008 and 2015. Aboriginal patients and their families from rural and remote areas, and healthcare providers in urban, rural and remote settings, shared their perceptions of the barriers and enablers to quality care in interviews and focus groups, and individual patient journey case studies were documented. Data were thematically analysed. In the absence of suitable existing tools, a new analytical framework and mapping approach was developed. The utility of the tools in other settings was then tested with health professionals, and the tools were further modified for use in quality improvement in health and education settings in South Australia and the Northern Territory. A central set of patient journey mapping tools with flexible adaptations, a workbook, and five sets of case studies describing how staff adapted and used the tools at different sites are available for wider use.
Pastor, Dena A; Lazowski, Rory A
2018-01-01
The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
NASA Astrophysics Data System (ADS)
Lake, Renee C.; Izadpanah, Amir P.; Baucom, Robert M.
1993-02-01
The results from a study aimed at improving the dynamic and aerodynamic characteristics of composite rotor blades through the use of extension-twist coupling are presented. A set of extension-twist-coupled composite spars was manufactured with four plies of graphite-epoxy cloth prepreg. These spars were noncircular in cross-section design and were therefore subject to warping deformations. Three different cross-sectional geometries were developed: D-shape, square, and flattened ellipse. Three spars of each type were fabricated to assess the degree of repeatability in the manufacturing process of extension-twist-coupled structures. Results from free-free vibration tests of the spars were compared with results from normal modes and frequency analyses of companion shell-finite-element models. Five global modes were identified within the frequency range from 0 to 2000 Hz for each spar. The experimental results for only one D-shape spar could be determined, however, and agreed within 13.8 percent of the analytical results. Frequencies corresponding to the five global modes for the three square spars agreed within 9.5, 11.6, and 8.5 percent of the respective analytical results and for the three elliptical spars agreed within 4.9, 7.7, and 9.6 percent of the respective analytical results.
NASA Technical Reports Server (NTRS)
Lake, Renee C.; Izadpanah, Amir P.; Baucom, Robert M.
1993-01-01
The results from a study aimed at improving the dynamic and aerodynamic characteristics of composite rotor blades through the use of extension-twist coupling are presented. A set of extension-twist-coupled composite spars was manufactured with four plies of graphite-epoxy cloth prepreg. These spars were noncircular in cross-section design and were therefore subject to warping deformations. Three different cross-sectional geometries were developed: D-shape, square, and flattened ellipse. Three spars of each type were fabricated to assess the degree of repeatability in the manufacturing process of extension-twist-coupled structures. Results from free-free vibration tests of the spars were compared with results from normal modes and frequency analyses of companion shell-finite-element models. Five global modes were identified within the frequency range from 0 to 2000 Hz for each spar. The experimental results for only one D-shape spar could be determined, however, and agreed within 13.8 percent of the analytical results. Frequencies corresponding to the five global modes for the three square spars agreed within 9.5, 11.6, and 8.5 percent of the respective analytical results and for the three elliptical spars agreed within 4.9, 7.7, and 9.6 percent of the respective analytical results.
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP) which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.0083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The ability of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree with the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
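The abstract does not disclose SBM's internals, so the following is only a loose sketch of the general idea behind similarity-based estimation (reconstruct the current multivariate observation from a memory of normal-state exemplars and flag large residuals); the variables, kernel, and thresholds are invented for illustration:

```python
import numpy as np

def similarity_estimate(x, exemplars, bandwidth=1.0):
    """Kernel-weighted reconstruction of observation x from rows of `exemplars`."""
    d2 = np.sum((exemplars - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return w @ exemplars / np.sum(w)

def residual_alarm(x, exemplars, threshold):
    est = similarity_estimate(x, exemplars)
    residual = np.abs(x - est)
    return residual, np.any(residual > threshold)

# Hypothetical training data: multivariate "normal physiology" snapshots (HR, SBP, SpO2)
rng = np.random.default_rng(0)
normal_states = rng.normal([80.0, 120.0, 98.0], [5.0, 8.0, 1.0], size=(500, 3))

# A new observation that is still inside normal clinical ranges but jointly atypical
x_now = np.array([95.0, 100.0, 97.0])
res, alarm = residual_alarm(x_now, normal_states, threshold=np.array([8.0, 12.0, 2.0]))
print(res, alarm)
```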
BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.
Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren
2016-01-01
Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We treat bundles as first-class objects and add a new layer, "in-between", to contain these bundle objects. On this basis, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
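For context, the dimensionality reduction the paper starts from can be sketched as follows: collapsing the four angles of an isotropic BRDF to three variables and forming the "halfway" vector. This is generic reflectance geometry, not the paper's barycentric coordinates:

```python
import numpy as np

def spherical_to_dir(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def isotropic_coords(theta_i, phi_i, theta_o, phi_o):
    """Isotropy: only the azimuth difference matters, so 3 variables replace 4."""
    return theta_i, theta_o, (phi_o - phi_i) % (2.0 * np.pi)

def halfway_vector(theta_i, phi_i, theta_o, phi_o):
    wi = spherical_to_dir(theta_i, phi_i)
    wo = spherical_to_dir(theta_o, phi_o)
    h = wi + wo
    return h / np.linalg.norm(h)

print(isotropic_coords(0.3, 0.1, 0.7, 1.2))
print(halfway_vector(0.3, 0.1, 0.7, 1.2))
```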
Trace metal speciation in natural waters: Computational vs. analytical
Nordstrom, D. Kirk
1996-01-01
Improvements in the field sampling, preservation, and determination of trace metals in natural waters have made many analyses more reliable and less affected by contamination. The speciation of trace metals, however, remains controversial. Chemical model speciation calculations do not necessarily agree with voltammetric, ion exchange, potentiometric, or other analytical speciation techniques. When metal-organic complexes are important, model calculations are not usually helpful and on-site analytical separations are essential. Many analytical speciation techniques have serious interferences and only work well for a limited subset of water types and compositions. A combined approach to the evaluation of speciation could greatly reduce these uncertainties. The approach proposed would be to (1) compare and contrast different analytical techniques with each other and with computed speciation, (2) compare computed trace metal speciation with reliable measurements of solubility, potentiometry, and mean activity coefficients, and (3) compare different model calculations with each other for the same set of water analyses, especially where supplementary data on speciation already exist. A comparison and critique of analytical with chemical model speciation for a range of water samples would delineate the useful range and limitations of these different approaches to speciation. Both model calculations and analytical determinations have useful and different constraints on the range of possible speciation such that they can provide much better insight into speciation when used together. Major discrepancies in the thermodynamic databases of speciation models can be evaluated with the aid of analytical speciation, and when the thermodynamic models are highly consistent and reliable, the sources of error in the analytical speciation can be evaluated. Major thermodynamic discrepancies also can be evaluated by simulating solubility and activity coefficient data and testing various chemical models for their range of applicability. Until a comparative approach such as this is taken, trace metal speciation will remain highly uncertain and controversial.
Ab Initio and Analytic Intermolecular Potentials for Ar-CF₄
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vayner, Grigoriy; Alexeev, Yuri; Wang, Jiangping
2006-03-09
Ab initio calculations at the CCSD(T) level of theory are performed to characterize the Ar + CF₄ intermolecular potential. Extensive calculations, with and without a correction for basis set superposition error (BSSE), are performed with the cc-pVTZ basis set. Additional calculations are performed with other correlation consistent (cc) basis sets to extrapolate the Ar-CF₄ potential energy minimum to the complete basis set (CBS) limit. Both the size of the basis set and BSSE have substantial effects on the Ar + CF₄ potential. Calculations with the cc-pVTZ basis set and without a BSSE correction appear to give a good representation of the potential at the CBS limit and with a BSSE correction. In addition, MP2 theory is found to give potential energies in very good agreement with those determined by the much higher level CCSD(T) theory. Two analytic potential energy functions were determined for Ar + CF₄ by fitting the cc-pVTZ calculations both with and without a BSSE correction. These analytic functions were written as a sum of two-body potentials, and excellent fits to the ab initio potentials were obtained by representing each two-body interaction as a Buckingham potential.
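The two-body form referred to at the end of the abstract is the standard Buckingham potential; a sketch of a pairwise sum over Ar-C and Ar-F interactions follows, with placeholder parameters and geometry rather than the fitted values from the paper:

```python
import numpy as np

def buckingham(r, A, B, C):
    """Standard Buckingham two-body potential: A*exp(-B*r) - C/r^6."""
    return A * np.exp(-B * r) - C / r**6

def ar_cf4_energy(r_ar_atoms, params):
    """Sum two-body terms over the Ar-C and Ar-F distances of one configuration."""
    return sum(buckingham(r, *params[atom]) for atom, r in r_ar_atoms)

# Placeholder parameters (per atom type) and distances, for illustration only
params = {"C": (5.0e4, 3.5, 400.0), "F": (3.0e4, 3.8, 250.0)}
r_ar_atoms = [("C", 4.0), ("F", 3.4), ("F", 3.9), ("F", 4.3), ("F", 4.6)]
print(ar_cf4_energy(r_ar_atoms, params))
```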
Real-time sensor data validation
NASA Technical Reports Server (NTRS)
Bickmore, Timothy W.
1994-01-01
This report describes the status of an on-going effort to develop software capable of detecting sensor failures on rocket engines in real time. This software could be used in a rocket engine controller to prevent the erroneous shutdown of an engine due to sensor failures which would otherwise be interpreted as engine failures by the control software. The approach taken combines analytical redundancy with Bayesian belief networks to provide a solution which has well defined real-time characteristics and well-defined error rates. Analytical redundancy is a technique in which a sensor's value is predicted by using values from other sensors and known or empirically derived mathematical relations. A set of sensors and a set of relations among them form a network of cross-checks which can be used to periodically validate all of the sensors in the network. Bayesian belief networks provide a method of determining if each of the sensors in the network is valid, given the results of the cross-checks. This approach has been successfully demonstrated on the Technology Test Bed Engine at the NASA Marshall Space Flight Center. Current efforts are focused on extending the system to provide a validation capability for 100 sensors on the Space Shuttle Main Engine.
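A stripped-down sketch of one analytical-redundancy cross-check (a sensor predicted from other sensors through a known relation, with the residual feeding a validity decision) is shown below; the relation, coefficients, and tolerance are invented for illustration and are not the engine relations used in the demonstration:

```python
# Hypothetical relation: chamber pressure should track pump speed and fuel flow
def predict_pc(pump_speed, fuel_flow, k1=2.0e-3, k2=5.0e-2):
    return k1 * pump_speed + k2 * fuel_flow

def cross_check(pc_measured, pump_speed, fuel_flow, tol=50.0):
    """Return the residual and a pass/fail flag for one analytical cross-check."""
    residual = pc_measured - predict_pc(pump_speed, fuel_flow)
    return residual, abs(residual) < tol

# One validation cycle over a network of such checks would combine several flags,
# e.g. through a belief network, to decide which sensor (if any) has failed.
res, ok = cross_check(pc_measured=2950.0, pump_speed=1.2e6, fuel_flow=11000.0)
print(res, ok)
```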
A Pilot Study to Examine Maturation of Body Temperature Control in Preterm Infants
Knobel, Robin B.; Levy, Janet; Katz, Laurence; Guenther, Bob; Holditch-Davis, Diane
2013-01-01
Objective To test instrumentation and develop analytic models to use in a larger study to examine developmental trajectories of body temperature and peripheral perfusion from birth in extremely low birth weight (ELBW) infants. Design A case study design. Setting The study took place in a level four neonatal intensive care unit (NICU) in North Carolina. Participants Four ELBW infants, less than 29 weeks gestational age at birth. Methods Physiologic data were measured every minute for the first 5 days of life: peripheral perfusion using the perfusion index by Masimo and body temperature using thermistors. Body temperature was also measured using infrared thermal imaging. Stimulation and care events were recorded over the first 5 days using video, which was coded with Noldus Observer software. Novel analytical models using the state space approach to time series analysis were developed to explore maturation of neural control over central and peripheral body temperature. Results/Conclusion Results from this pilot study confirmed the feasibility of using multiple instruments to measure temperature and perfusion in ELBW infants. This approach added rich data to our case study design and set a clinical context with which to interpret longitudinal physiological data. PMID:24004312
A WPS Based Architecture for Climate Data Analytic Services (CDAS) at NASA
NASA Astrophysics Data System (ADS)
Maxwell, T. P.; McInerney, M.; Duffy, D.; Carriere, L.; Potter, G. L.; Doutriaux, C.
2015-12-01
Faced with unprecedented growth in the Big Data domain of climate science, NASA has developed the Climate Data Analytic Services (CDAS) framework. This framework enables scientists to execute trusted and tested analysis operations in a high performance environment close to the massive data stores at NASA. The data is accessed in standard (NetCDF, HDF, etc.) formats in a POSIX file system and processed using trusted climate data analysis tools (ESMF, CDAT, NCO, etc.). The framework is structured as a set of interacting modules allowing maximal flexibility in deployment choices. The current set of module managers includes: Staging Manager: runs the computation locally on the WPS server or remotely using tools such as Celery or SLURM. Compute Engine Manager: runs the computation serially or distributed over nodes using a parallelization framework such as Celery or Spark. Decomposition Manager: manages strategies for distributing the data over nodes. Data Manager: handles the import of domain data from long-term storage and manages the in-memory and disk-based caching architectures. Kernel Manager: a kernel is an encapsulated computational unit which executes a processor's compute task; each kernel is implemented in Python exploiting existing analysis packages (e.g., CDAT) and is compatible with all CDAS compute engines and decompositions. CDAS services are accessed via a WPS API being developed in collaboration with the ESGF Compute Working Team to support server-side analytics for ESGF. The API can be executed using direct web service calls, a Python script or application, or a JavaScript-based web application. Client packages in Python or JavaScript contain everything needed to make CDAS requests. The CDAS architecture brings together the tools, data storage, and high-performance computing required for timely analysis of large-scale data sets, where the data resides, to ultimately produce societal benefits. It is currently deployed at NASA in support of the Collaborative REAnalysis Technical Environment (CREATE) project, which centralizes numerous global reanalysis datasets onto a single advanced data analytics platform. This service permits decision makers to investigate climate changes around the globe, inspect model trends, compare multiple reanalysis datasets, and examine variability.
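WPS services of this kind are typically exercised with simple HTTP requests; the sketch below shows a standard WPS 1.0.0 GetCapabilities call, with a hypothetical endpoint URL rather than the actual CDAS deployment:

```python
import requests

# Hypothetical WPS endpoint; a real deployment publishes its own URL and process names.
WPS_URL = "https://example.nasa.gov/cdas/wps"

# Discover the available processes (standard WPS 1.0.0 GetCapabilities request)
caps = requests.get(WPS_URL, params={
    "service": "WPS",
    "request": "GetCapabilities",
    "version": "1.0.0",
})
print(caps.status_code)

# An Execute request would then name a kernel (e.g. a temporal average) and a data
# domain, with the reduced result streamed back instead of the full dataset.
```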
The Case for Adopting Server-side Analytics
NASA Astrophysics Data System (ADS)
Tino, C.; Holmes, C. P.; Feigelson, E.; Hurlburt, N. E.
2017-12-01
The standard method for accessing Earth and space science data relies on a scheme developed decades ago: data residing in one or many data stores must be parsed out and shipped via internet lines or physical transport to the researcher, who in turn locally stores the data for analysis. The analysis tasks are varied and include visualization, parameterization, and comparison with or assimilation into physics models. In many cases this process is inefficient and unwieldy as the data sets become larger and demands on the analysis tasks become more sophisticated and complex. For about a decade, several groups have explored a new paradigm as an alternative to this model. The names applied to the paradigm include "data analytics", "climate analytics", and "server-side analytics". The general concept is that in close network proximity to the data store there will be a tailored processing capability appropriate to the type and use of the data served. The user of the server-side analytics will operate on the data with numerical procedures. The procedures can be accessed via canned code, a scripting processor, or an analysis package such as Matlab, IDL or R. Results of the analytics processes will then be relayed via the internet to the user. In practice, these results will be much smaller in volume, easier to transport and store locally, and easier for the user to combine with data sets from other remote data stores. The user can also iterate on the processing call to tailor the results as needed. A major component of server-side analytics could be to provide sets of tailored results to end users in order to eliminate the repetitive preconditioning that is often required with these data sets and that drives much of the throughput challenge. NASA's Big Data Task Force studied this issue. This paper will present the results of this study, including examples of server-side analytics (SSAs) that are being developed and demonstrated, and suggestions for architectures that might be developed for future applications.
Importance of implementing an analytical quality control system in a core laboratory.
Marques-Garcia, F; Garcia-Codesal, M F; Caro-Narros, M R; Contreras-SanFeliciano, T
2015-01-01
The aim of the clinical laboratory is to provide useful information for the screening, diagnosis and monitoring of disease. The laboratory should ensure the quality of the extra-analytical and analytical processes, based on set criteria. To do this, it develops and implements a system of internal quality control designed to detect errors, and compares its data with those of other laboratories through external quality control. In this way it has a tool to verify that the objectives set are being fulfilled and, in case of errors, to take corrective actions and ensure the reliability of the results. This article describes the design and implementation of an internal quality control protocol, as well as its periodic assessment at 6-month intervals to determine compliance with pre-determined specifications (Stockholm Consensus(1)). A total of 40 biochemical and 15 immunochemical methods were evaluated using three different control materials. Next, a standard operating procedure was developed for a system of internal quality control that included calculating the error of the analytical process, setting quality specifications, and verifying compliance. The quality control data were then summarized statistically as means, standard deviations, and coefficients of variation, as well as systematic, random, and total errors. The quality specifications were then fixed and the operational rules to apply in the analytical process were calculated. Finally, our data were compared with those of other laboratories through an external quality assurance program. The development of an analytical quality control system is a highly structured process. It should be designed to detect errors that compromise the stability of the analytical process. The laboratory should review its quality indicators (systematic, random and total error) at regular intervals, in order to ensure that they are meeting pre-determined specifications and, if not, apply the appropriate corrective actions. Copyright © 2015 SECA. Published by Elsevier Espana. All rights reserved.
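The imprecision, bias, and total-error statistics mentioned can be sketched for a single control level as follows; the control results and target value are invented, and the 1.65 multiplier is one common convention for combining bias and imprecision:

```python
import numpy as np

control = np.array([5.02, 4.97, 5.10, 4.88, 5.05, 4.99, 5.08, 4.93])  # measured control results
target = 5.00                                                          # assigned control value

mean = control.mean()
sd = control.std(ddof=1)
cv = 100.0 * sd / mean                        # random error (imprecision), %
bias = 100.0 * (mean - target) / target       # systematic error, %
total_error = abs(bias) + 1.65 * cv           # one common total-error formulation, %

print(f"CV={cv:.2f}%  bias={bias:.2f}%  TE={total_error:.2f}%")
```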
Ishibashi, Midori
2015-01-01
Cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and of drug efficacy. Providing high-quality laboratory tests is therefore mandatory. To provide adequate quality assurance in laboratory tests, quality control in the three fields of the pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference intervals are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, Jon D.
1990-01-01
Uncertainty of frequency response using the fuzzy set method and on-orbit response prediction using laboratory test data to refine an analytical model are emphasized with respect to large space structures. Two aspects of the fuzzy set approach were investigated relative to its application to large structural dynamics problems: (1) minimizing the number of parameters involved in computing possible intervals; and (2) the treatment of extrema which may occur in the parameter space enclosed by all possible combinations of the important parameters of the model. Extensive printer graphics were added to the SSID code to help facilitate model verification, and an application of this code to the LaRC Ten Bay Truss is included in the appendix to illustrate this graphics capability.
Ab initio relativistic effective potentials with spin-orbit operators. III. Rb through Xe
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaJohn, L.A.; Christiansen, P.A.; Ross, R.B.
A refined version of the "shape consistent" effective potential procedure of Christiansen, Lee, and Pitzer was used to compute averaged relativistic effective potentials (AREP) and spin-orbit operators for the elements Rb through Xe. Particular attention was given to the partitioning of the core and valence space and, where appropriate, more than one set of potentials is provided. These are tabulated in analytic form. Gaussian basis sets with contraction coefficients for the lowest energy state of each atom are given. The reliability of the transition metal AREPs was examined by comparing computed atomic excitation energies with accurate all-electron relativistic values. The spin-orbit operators were tested in calculations on selected atoms.
NASA Astrophysics Data System (ADS)
Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.
2016-09-01
Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real time and presenting the results in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and gives insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor are simulated, and we use big data technologies, predictive algorithms, and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site, and it builds on a big data architecture that has previously proven valuable in detecting anomalies. This paper outlines our methodology for implementing an operational analytics solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.
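As a toy stand-in for the predictive step described above (not the authors' architecture), the sketch below fits a linear trend to simulated sensor readings and estimates when the degradation metric would cross a critical threshold; the drift rate, noise level, threshold, and alert window are all invented.

```python
import numpy as np

def days_to_threshold(days, readings, threshold):
    """Fit a linear trend to recent sensor readings and estimate when the
    degradation metric will cross a critical threshold (a simple stand-in
    for the predictive step of an operational-analytics pipeline)."""
    slope, intercept = np.polyfit(days, readings, 1)
    if slope <= 0:
        return None  # no degrading trend detected
    return (threshold - intercept) / slope

rng = np.random.default_rng(1)
days = np.arange(60)
# Notional sensor: slow drift plus noise (rates and threshold are illustrative).
readings = 0.02 * days + rng.normal(0.0, 0.05, days.size)
crossing = days_to_threshold(days, readings, threshold=2.0)
if crossing is None:
    print("No degrading trend detected")
elif crossing - days[-1] < 30:
    print(f"ALERT: threshold predicted near day {crossing:.0f}")
else:
    print(f"No near-term degradation predicted (crossing ~ day {crossing:.0f})")
```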
Dharan, Nila J; Blakemore, Robert; Sloutsky, Alex; Kaur, Devinder; Alexander, Richard C; Ghajar, Minoo; Musser, Kimberlee A; Escuyer, Vincent E; Rowlinson, Marie-Claire; Crowe, Susanne; Laniado-Laborin, Rafael; Valli, Eloise; Nabeta, Pamela; Johnson, Pamela; Alland, David
2016-12-20
The Xpert® MTB/RIF (Xpert) assay is a rapid PCR-based assay for the detection of Mycobacterium tuberculosis complex DNA (MTBc) and mutations associated with rifampin resistance (RIF). An updated version introduced in 2011, the G4 Xpert, included modifications to probe B and updated analytic software. An analytical study was performed to assess Xpert detection of mutations associated with rifampin resistance in rifampin-susceptible and -resistant isolates. A clinical study was performed in which specimens from US and non-US persons suspected of tuberculosis (TB) were tested to determine Xpert performance characteristics. All specimens underwent smear microscopy, mycobacterial culture, conventional drug-susceptibility testing and Xpert testing; DNA from isolates with discordant rifampin resistance results was sequenced. Among 191 laboratory-prepared isolates in the analytical study, Xpert sensitivity for detection of rifampin resistance-associated mutations was 97.7% and specificity was 90.8%, which increased to 99.0% after DNA sequencing analysis of the discordant samples. Of the 1,096 subjects in the four clinical studies, 49% were from the US. Overall, Xpert detected MTBc in 439 of 468 culture-positive specimens for a sensitivity of 93.8% (95% confidence interval [CI]: 91.2%-95.7%) and did not detect MTBc in 620 of 628 culture-negative specimens for a specificity of 98.7% (95% CI: 97.5%-99.4%). Sensitivity was 99.7% among smear-positive cases, and 76.1% among smear-negative cases. Non-determinate MTBc detection and false-positive RIF resistance results were low (1.2% and 0.9%, respectively). The updated Xpert assay retained the high sensitivity and specificity of the previous assay versions and demonstrated low rates of non-determinate and false-positive RIF resistance results.
A computer program (MACPUMP) for interactive aquifer-test analysis
Day-Lewis, F. D.; Person, M.A.; Konikow, Leonard F.
1995-01-01
This report introduces MACPUMP (Version 1.0), an aquifer-test-analysis package for use with Macintosh computers. The report outlines the input-data format, describes the solutions encoded in the program, explains the menu items, and offers a tutorial illustrating the use of the program. The package reads list-directed aquifer-test data from a file, plots the data to the screen, generates and plots type curves for several different test conditions, and allows mouse-controlled curve matching. MACPUMP features pull-down menus, a simple text viewer for displaying data files, and optional on-line help windows. This version includes the analytical solutions for nonleaky and leaky confined aquifers, using both type-curve and straight-line methods, and for the analysis of single-well slug tests using type curves. An executable version of the code and sample input data sets are included on an accompanying floppy disk.
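The nonleaky confined-aquifer type curve mentioned above is the classical Theis solution; a minimal Python sketch of that solution (not MACPUMP code, and with invented parameter values) is:

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1(u) equals the Theis W(u)

def theis_drawdown(r, t, Q, T, S):
    """Theis solution for drawdown in a nonleaky confined aquifer, the classic
    type-curve solution used in aquifer-test analysis (consistent units):
    s = Q / (4*pi*T) * W(u),  with  u = r^2 * S / (4*T*t)."""
    u = (r ** 2) * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative parameters (not from the report): Q in m^3/d, T in m^2/d.
times = np.array([0.01, 0.1, 1.0, 10.0])   # days
print(theis_drawdown(r=30.0, t=times, Q=500.0, T=200.0, S=1e-4))
```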
Sargsyan, Ori
2012-05-25
Hitchhiking and severe bottleneck effects influence the dynamics of genetic diversity of a population by inducing homogenization at a single locus and at the genome-wide scale, respectively. As a result, identifying and differentiating the signatures of such events from DNA sequence data at a single locus is challenging. This study develops an analytical framework for identifying and differentiating recent homogenization events at multiple neutral loci in low-recombination regions. The dynamics of genetic diversity at a locus after a recent homogenization event is modeled according to the infinite-sites mutation model and the Wright-Fisher model of reproduction with constant population size. In this setting, I derive analytical expressions for the distribution, mean, and variance of the number of polymorphic sites in a random sample of DNA sequences from a locus affected by a recent homogenization event. Based on this framework, three likelihood-ratio-based tests are presented for identifying and differentiating recent homogenization events at multiple loci. Lastly, I apply the framework to two data sets. First, I consider human DNA sequences from four non-coding loci on different chromosomes for inferring the evolutionary history of modern human populations. The results suggest, in particular, that recent homogenization events at the loci are identifiable when the effective human population size is 50,000 or greater in contrast to 10,000, and the estimates of the recent homogenization events agree with the "Out of Africa" hypothesis. Second, I use HIV DNA sequences from HIV-1-infected patients to infer the times of HIV seroconversions. The estimates are contrasted with other estimates derived as the mid-time point between the last HIV-negative and first HIV-positive screening tests. Finally, the results show that significant discrepancies can exist between the estimates.
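For orientation only, the sketch below gives the familiar neutral-model baseline for the expected number of polymorphic (segregating) sites under the infinite-sites model; it is not the post-homogenization distribution derived in the study, and the parameter values are illustrative.

```python
def expected_segregating_sites(n, theta):
    """Standard neutral-model (infinite-sites) expectation for the number of
    polymorphic sites in a sample of n sequences:
    E[S] = theta * sum_{i=1}^{n-1} 1/i.
    This is only the familiar baseline, not the post-homogenization
    distribution derived in the study."""
    return theta * sum(1.0 / i for i in range(1, n))

# Illustrative values: 20 sequences, theta = 4*N*mu per locus (assumed).
print(expected_segregating_sites(n=20, theta=5.0))
```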
Kim, SungHwan; Lin, Chien-Wei; Tseng, George C
2016-07-01
Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of a single expression profile, performance usually degrades greatly in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical value of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods in simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The results showed superior cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases the robustness and accuracy of the classification model, which will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package MetaKTSP is available online. (http://tsenglab.biostat.pitt.edu/software.htm). ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
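A minimal sketch of the rank-based TSP score and of the score-averaging idea (one of the two aggregation frameworks described above) is given below; it is an illustration, not the MetaKTSP package, and the toy data are invented.

```python
import numpy as np

def tsp_score(expr, labels, gene_i, gene_j):
    """Rank-based top-scoring-pair score for one gene pair in one study:
    |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)|."""
    expr, labels = np.asarray(expr, float), np.asarray(labels)
    lt = expr[gene_i, :] < expr[gene_j, :]
    return abs(lt[labels == 0].mean() - lt[labels == 1].mean())

def mean_tsp_score(studies, gene_i, gene_j):
    """One of the two aggregation ideas: average the per-study TSP scores."""
    return np.mean([tsp_score(e, y, gene_i, gene_j) for e, y in studies])

# Toy example: two studies, 3 genes x 8 samples each (values illustrative).
rng = np.random.default_rng(2)
studies = []
for _ in range(2):
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    e = rng.normal(size=(3, 8))
    e[0, y == 1] += 2.0          # gene 0 up in class 1 -> pair (0, 1) informative
    studies.append((e, y))
print(mean_tsp_score(studies, gene_i=0, gene_j=1))
```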
Ranking Bias in Association Studies
Jeffries, Neal O.
2009-01-01
Background: It is widely appreciated that genomewide association studies often yield overestimates of the association of a marker with disease when attention focuses upon the marker showing the strongest relationship. For example, in a case-control setting the largest (in absolute value) estimated odds ratio has been found to typically overstate the association as measured in a second, independent set of data. The most common reason given for this observation is that the choice of the most extreme test statistic is often conditional upon first observing a significant p value associated with the marker. A second, less appreciated reason is described here. Under common circumstances it is the multiple testing of many markers and subsequent focus upon those with most extreme test statistics (i.e. highly ranked results) that leads to bias in the estimated effect sizes. Conclusions: This bias, termed ranking bias, is separate from that arising from conditioning on a significant p value and may often be a more important factor in generating bias. An analytic description of this bias, simulations demonstrating its extent, and identification of some factors leading to its exacerbation are presented. PMID:19172085
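A short simulation makes the mechanism concrete: among many (near-)null markers, the top-ranked estimate is systematically larger than an independent replicate estimate of the same marker. The sketch below is only illustrative; the effect sizes, standard errors, and marker counts are made up.

```python
import numpy as np

def ranking_bias_demo(n_markers=10000, n_reps=200, true_beta=0.0, se=0.1):
    """Simulate many null (or near-null) markers, pick the one with the most
    extreme estimate, and compare it with an independent replicate estimate
    of the same marker; the gap illustrates ranking (winner's-curse) bias."""
    rng = np.random.default_rng(3)
    selected, replicated = [], []
    for _ in range(n_reps):
        stage1 = true_beta + rng.normal(0.0, se, n_markers)
        top = np.argmax(np.abs(stage1))              # most extreme test statistic
        stage2 = true_beta + rng.normal(0.0, se)     # fresh, independent estimate
        selected.append(abs(stage1[top]))
        replicated.append(abs(stage2))
    return np.mean(selected), np.mean(replicated)

sel, rep = ranking_bias_demo()
print(f"mean |estimate| of top-ranked marker: {sel:.3f}; on replication: {rep:.3f}")
```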
Load sharing in distributed real-time systems with state-change broadcasts
NASA Technical Reports Server (NTRS)
Shin, Kang G.; Chang, Yi-Chieh
1989-01-01
A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those with fewer than 10 nodes, and that the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
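As a rough illustration (not the authors' Markov-chain model), the toy discrete-time simulation below hands arrivals at a fully loaded node to an underloaded buddy and reports how often a task finds an underloaded node; the arrival rate, threshold, and node counts are invented.

```python
import random

def simulate(n_nodes=30, buddy_size=10, ticks=20000, arrival_p=0.85,
             full_threshold=3, seed=4):
    """Toy discrete-time sketch of buddy-set load sharing: an overloaded node
    hands a new arrival to an underloaded buddy (buddy state is assumed known,
    standing in for the state-change broadcasts). Returns the fraction of
    arrivals that found their own node, or a buddy, underloaded."""
    rng = random.Random(seed)
    queues = [0] * n_nodes
    buddies = [[(i + k) % n_nodes for k in range(1, buddy_size + 1)]
               for i in range(n_nodes)]
    ok = total = 0
    for _ in range(ticks):
        for i in range(n_nodes):
            if rng.random() < arrival_p:          # a task arrives at node i
                total += 1
                target = i
                if queues[i] >= full_threshold:   # node i is fully loaded
                    under = [b for b in buddies[i] if queues[b] < full_threshold]
                    if under:
                        target = rng.choice(under)
                if queues[target] < full_threshold:
                    ok += 1
                queues[target] += 1
        queues = [max(q - 1, 0) for q in queues]  # each node serves one task/tick
    return ok / total

for size in (5, 10, 15):
    print(size, round(simulate(buddy_size=size), 4))
```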
Designing a Marketing Analytics Course for the Digital Age
ERIC Educational Resources Information Center
Liu, Xia; Burns, Alvin C.
2018-01-01
Marketing analytics is receiving great attention because of evolving technology and the radical changes in the marketing environment. This study aims to assist the design and implementation of a marketing analytics course. We assembled a rich data set from four sources: business executives, 400 employers' job postings, one million tweets about…
NHEXAS PHASE I REGION 5 STUDY--QA ANALYTICAL RESULTS FOR METALS IN SPIKES
This data set includes analytical results for measurements of metals in 49 field control samples (spikes). Measurements were made for up to 11 metals in samples of water, blood, and urine. Field controls were used to assess recovery of target analytes from a sample media during s...
NHEXAS PHASE I MARYLAND STUDY--QA ANALYTICAL RESULTS FOR PESTICIDE METABOLITES IN SPIKE SAMPLES
The Pesticides in Spikes data set contains the analytical results of measurements of up to 17 pesticides in 12 control samples (spikes) from 11 households. Measurements were made in samples of blood serum. Controls were used to assess recovery of target analytes from a sample m...
OMV: A simplified mathematical model of the orbital maneuvering vehicle
NASA Technical Reports Server (NTRS)
Teoh, W.
1984-01-01
A model of the orbital maneuvering vehicle (OMV) is presented which contains several simplifications. A set of hand controller signals may be used to control the motion of the OMV. Model verification is carried out using a sequence of tests. The dynamic variables generated by the model are compared, whenever possible, with the corresponding analytical variables. The results of the tests show conclusively that the present model is behaving correctly. Further, this model interfaces properly with the state vector transformation module (SVX) developed previously. Correct command sentence sequences are generated by the OMV and SVX system, and these command sequences can be used to drive the flat floor simulation system at MSFC.
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using experimental results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Ichihara, Kiyoshi; Ozarda, Yesim; Barth, Julian H; Klee, George; Shimizu, Yoshihisa; Xia, Liangyu; Hoffmann, Mariza; Shah, Swarup; Matsha, Tandi; Wassung, Janette; Smit, Francois; Ruzhanskaya, Anna; Straseski, Joely; Bustos, Daniel N; Kimura, Shogo; Takahashi, Aki
2017-04-01
The intent of this study, based on a global multicenter study of reference values (RVs) for serum analytes, was to explore biological sources of variation (SVs) of the RVs among 12 countries around the world. As described in the first part of this paper, RVs of 50 major serum analytes from 13,396 healthy individuals living in 12 countries were obtained. Analyzed in this study were 23 clinical chemistry analytes and 8 analytes measured by immunoturbidimetry. Multiple regression analysis was performed for each gender, country by country and analyte by analyte, by setting four major SVs (age, BMI, and levels of drinking and smoking) as a fixed set of explanatory variables. For analytes with skewed distributions, log-transformation was applied. The association of each source of variation with RVs was expressed as the partial correlation coefficient (rp). Obvious gender- and age-related changes in the RVs were observed in many analytes, almost consistently between countries. Compilation of age-related variations of RVs after adjusting for between-country differences revealed peculiar patterns specific to each analyte. Judged from the rp, BMI-related changes were observed for many nutritional and inflammatory markers in almost all countries. However, the slope of the linear regression of BMI vs. RV differed greatly among countries for some analytes. Alcohol- and smoking-related changes were observed less conspicuously and in a limited number of analytes. The features of sex-, age-, alcohol-, and smoking-related changes in RVs of the analytes were largely comparable worldwide. The finding of differences in BMI-related changes among countries in some analytes is quite relevant to understanding ethnic differences in susceptibility to nutritionally related diseases. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Major advances in testing of dairy products: milk component and dairy product attribute testing.
Barbano, D M; Lynch, J M
2006-04-01
Milk component analysis is relatively unusual in the field of quantitative analytical chemistry because an analytical test result determines the allocation of very large amounts of money between buyers and sellers of milk. Therefore, there is high incentive to develop and refine these methods to achieve a level of analytical performance rarely demanded of most methods or laboratory staff working in analytical chemistry. In the last 25 yr, well-defined statistical methods to characterize and validate analytical method performance combined with significant improvements in both the chemical and instrumental methods have allowed achievement of improved analytical performance for payment testing. A shift from marketing commodity dairy products to the development, manufacture, and marketing of value added dairy foods for specific market segments has created a need for instrumental and sensory approaches and quantitative data to support product development and marketing. Bringing together sensory data from quantitative descriptive analysis and analytical data from gas chromatography olfactometry for identification of odor-active compounds in complex natural dairy foods has enabled the sensory scientist and analytical chemist to work together to improve the consistency and quality of dairy food flavors.
Analytic cognitive style predicts religious and paranormal belief.
Pennycook, Gordon; Cheyne, James Allan; Seli, Paul; Koehler, Derek J; Fugelsang, Jonathan A
2012-06-01
An analytic cognitive style denotes a propensity to set aside highly salient intuitions when engaging in problem solving. We assess the hypothesis that an analytic cognitive style is associated with a history of questioning, altering, and rejecting (i.e., unbelieving) supernatural claims, both religious and paranormal. In two studies, we examined associations of God beliefs, religious engagement (attendance at religious services, praying, etc.), conventional religious beliefs (heaven, miracles, etc.) and paranormal beliefs (extrasensory perception, levitation, etc.) with performance measures of cognitive ability and analytic cognitive style. An analytic cognitive style negatively predicted both religious and paranormal beliefs when controlling for cognitive ability as well as religious engagement, sex, age, political ideology, and education. Participants more willing to engage in analytic reasoning were less likely to endorse supernatural beliefs. Further, an association between analytic cognitive style and religious engagement was mediated by religious beliefs, suggesting that an analytic cognitive style negatively affects religious engagement via lower acceptance of conventional religious beliefs. Results for types of God belief indicate that the association between an analytic cognitive style and God beliefs is more nuanced than mere acceptance and rejection, but also includes adopting less conventional God beliefs, such as Pantheism or Deism. Our data are consistent with the idea that two people who share the same cognitive ability, education, political ideology, sex, age and level of religious engagement can acquire very different sets of beliefs about the world if they differ in their propensity to think analytically. Copyright © 2012 Elsevier B.V. All rights reserved.
MALBEC: a new CUDA-C ray-tracer in general relativity
NASA Astrophysics Data System (ADS)
Quiroga, G. D.
2018-06-01
A new CUDA-C code for tracing orbits around non-charged black holes is presented. This code, named MALBEC, takes advantage of graphics processing units and the CUDA platform for tracking null and timelike test particles in the Schwarzschild and Kerr metrics. Also, a new general set of equations describing the closed circular orbits of any timelike test particle in the equatorial plane is derived. These equations are extremely important in order to compare the analytical behavior of the orbits with the numerical results and verify the correct implementation of the Runge-Kutta algorithm in MALBEC. Finally, other numerical tests are performed, demonstrating that MALBEC is able to reproduce some well-known results in these metrics in a faster and more efficient way than a conventional CPU implementation.
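The circular-orbit check described above can be illustrated with the standard Schwarzschild effective potential (G = c = 1): a circular equatorial orbit at radius r has L² = M r² / (r − 3M), the radius at which the effective potential is stationary. The Python sketch below verifies this numerically; it is not MALBEC code, and the chosen radius is arbitrary.

```python
import numpy as np

def v_eff(r, L, M=1.0):
    """Effective potential for timelike geodesics in Schwarzschild (G=c=1):
    V(r) = (1 - 2M/r) * (1 + L^2/r^2)."""
    return (1.0 - 2.0 * M / r) * (1.0 + L ** 2 / r ** 2)

def circular_L(r, M=1.0):
    """Analytic angular momentum of a circular equatorial orbit at radius r:
    L^2 = M r^2 / (r - 3M), valid for r > 3M."""
    return np.sqrt(M * r ** 2 / (r - 3.0 * M))

# Check: at r = 8M the analytic L should make dV/dr vanish (finite difference).
M, r = 1.0, 8.0
L = circular_L(r, M)
h = 1e-6
dVdr = (v_eff(r + h, L, M) - v_eff(r - h, L, M)) / (2.0 * h)
print(f"L = {L:.6f}, dV/dr at r = 8M: {dVdr:.2e}  (should be ~0)")
```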
betaFIT: A computer program to fit pointwise potentials to selected analytic functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Pashov, Asen
2017-01-01
This paper describes program betaFIT, which performs least-squares fits of sets of one-dimensional (or radial) potential function values to four different types of sophisticated analytic potential energy functional forms. These families of potential energy functions are: the Expanded Morse Oscillator (EMO) potential [J Mol Spectrosc 1999;194:197], the Morse/Long-Range (MLR) potential [Mol Phys 2007;105:663], the Double Exponential/Long-Range (DELR) potential [J Chem Phys 2003;119:7398], and the "Generalized Potential Energy Function (GPEF)" form introduced by Šurkus et al. [Chem Phys Lett 1984;105:291], which includes a wide variety of polynomial potentials, such as the Dunham [Phys Rev 1932;41:713], Simons-Parr-Finlan [J Chem Phys 1973;59:3229], and Ogilvie-Tipping [Proc R Soc A 1991;378:287] polynomials, as special cases. This code will be useful for providing the realistic sets of potential function shape parameters that are required to initiate direct fits of selected analytic potential functions to experimental data, and for providing better analytical representations of sets of ab initio results.
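As a small illustration of fitting pointwise potential values to an analytic form (not the betaFIT code itself), the sketch below fits the Morse function, i.e. the EMO form with a constant exponent coefficient, to synthetic data; all values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, De, beta, re):
    """Morse form V(r) = De * (1 - exp(-beta*(r - re)))^2, the special case of
    the Expanded Morse Oscillator in which the exponent coefficient is constant."""
    return De * (1.0 - np.exp(-beta * (r - re))) ** 2

# Synthetic pointwise potential (stand-in for ab initio values), arbitrary units.
r = np.linspace(1.5, 6.0, 40)
rng = np.random.default_rng(5)
v_points = morse(r, De=4.5, beta=1.2, re=2.3) + rng.normal(0.0, 0.01, r.size)

popt, pcov = curve_fit(morse, r, v_points, p0=(4.0, 1.0, 2.0))
print("fitted (De, beta, re):", np.round(popt, 4))
```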
Polyatomic molecular Dirac-Hartree-Fock calculations with Gaussian basis sets
NASA Technical Reports Server (NTRS)
Dyall, Kenneth G.; Faegri, Knut, Jr.; Taylor, Peter R.
1990-01-01
Numerical methods have been used successfully in atomic Dirac-Hartree-Fock (DHF) calculations for many years. Some DHF calculations using numerical methods have been done on diatomic molecules, but while these serve a useful purpose for calibration, the computational effort in extending this approach to polyatomic molecules is prohibitive. An alternative more in line with traditional quantum chemistry is to use an analytical basis set expansion of the wave function. This approach fell into disrepute in the early 1980's due to problems with variational collapse and intruder states, but has recently been put on firm theoretical foundations. In particular, the problems of variational collapse are well understood, and prescriptions for avoiding the most serious failures have been developed. Consequently, it is now possible to develop reliable molecular programs using basis set methods. This paper describes such a program and reports results of test calculations to demonstrate the convergence and stability of the method.
Social Context of First Birth Timing in a Rapidly Changing Rural Setting
Ghimire, Dirgha J.
2016-01-01
This article examines the influence of social context on the rate of first birth. Drawing on socialization models, I develop a theoretical framework to explain how different aspects of social context (i.e., neighbors), may affect the rate of first birth. Neighbors, who in the study setting comprise individuals’ immediate social context, have an important influence on the rate of first birth. To test my hypotheses, I leverage a setting, measures and analytical techniques designed to study the impact of macro-level social contexts on micro-level individual behavior. The results show that neighbors’ age at first birth, travel to the capital city and media exposure tend to reduce the first birth rate, while neighbors’ non-family work experience increases first birth rate. These effects are independent of neighborhood characteristics and are robust against several key variations in model specifications. PMID:27886737
Pantano, Flaminia; Brauneis, Stefano; Forneris, Alexandre; Pacifici, Roberta; Marinelli, Enrico; Kyriakou, Chrystalla; Pichini, Simona; Busardò, Francesco Paolo
2017-08-28
Oxycodone is a narcotic drug widely used to alleviate moderate and severe acute and chronic pain. Variability in analgesic efficacy could be explained by inter-subject variations in plasma concentrations of the parent drug and its active metabolite, oxymorphone. To evaluate patient compliance and to set up therapeutic drug monitoring (TDM), an ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) assay was developed and validated for the parent drug and its major metabolites noroxycodone and oxymorphone. Extraction of analytes from plasma and urine samples was obtained by simple liquid-liquid extraction. The chromatographic separation was achieved with a reversed-phase column using a linear gradient elution with two solvents: acetic acid 1% in water and methanol. The separated analytes were detected with a triple quadrupole mass spectrometer operated in multiple reaction monitoring (MRM) mode via positive electrospray ionization (ESI). Separation of analytes was obtained in less than 5 min. Linear calibration curves for all the analytes under investigation in urine and plasma samples showed determination coefficients (r2) equal to or higher than 0.990. Mean absolute analytical recoveries were always above 86%. Intra- and inter-assay precision (measured as coefficient of variation, CV%) and accuracy (measured as % error) values were always better than 13%. Limits of detection of 0.06 and 0.15 ng/mL and limits of quantification of 0.2 and 0.5 ng/mL for plasma and urine samples, respectively, were adequate for the purpose of the present study. Rapid extraction, identification and quantification of oxycodone and its metabolites in both urine and plasma by the UHPLC-MS/MS assay was tested for its feasibility in clinical samples and provided excellent results for rapid and effective drug testing in patients under oxycodone treatment.
Jonker, Willem; Clarijs, Bas; de Witte, Susannah L; van Velzen, Martin; de Koning, Sjaak; Schaap, Jaap; Somsen, Govert W; Kool, Jeroen
2016-09-02
Gas chromatography (GC) is a superior separation technique for many compounds. However, fractionation of a GC eluate for analyte isolation and/or post-column off-line analysis is not straightforward, and existing platforms are limited in the number of fractions that can be collected. Moreover, aerosol formation may cause serious analyte losses. Previously, our group developed a platform that resolved these limitations of GC fractionation by post-column infusion of a trap solvent prior to continuous small-volume fraction collection in a 96-well plate (Pieke et al., 2013 [17]). Still, this GC fractionation set-up lacked a chemical detector for the on-line recording of chromatograms, and the introduction of trap solvent resulted in extensive peak broadening for late-eluting compounds. This paper reports advancements to the fractionation platform allowing flame ionization detection (FID) parallel to high-resolution collection of a full GC chromatogram in up to 384 nanofractions of 7 s each. To this end, a post-column split was incorporated which directs part of the eluate towards FID. Furthermore, a solvent heating device was developed for stable delivery of preheated/vaporized trap solvent, which significantly reduced the band broadening caused by post-column infusion. In order to achieve optimal analyte trapping, several solvents were tested at different flow rates. The repeatability of the optimized GC fraction collection process was assessed, demonstrating the possibility of up-concentration of isolated analytes by repetitive analyses of the same sample. The feasibility of the improved GC fractionation platform for bioactivity screening of toxic compounds was studied by the analysis of a mixture of test pesticides, which after fractionation were subjected to a post-column acetylcholinesterase (AChE) assay. Fractions showing AChE inhibition could be unambiguously correlated with peaks from the parallel-recorded FID chromatogram. Copyright © 2016 Elsevier B.V. All rights reserved.
Campi-Azevedo, Ana Carolina; Peruhype-Magalhães, Vanessa; Coelho-Dos-Reis, Jordana Grazziela; Costa-Pereira, Christiane; Yamamura, Anna Yoshida; Lima, Sheila Maria Barbosa de; Simões, Marisol; Campos, Fernanda Magalhães Freire; de Castro Zacche Tonini, Aline; Lemos, Elenice Moreira; Brum, Ricardo Cristiano; de Noronha, Tatiana Guimarães; Freire, Marcos Silva; Maia, Maria de Lourdes Sousa; Camacho, Luiz Antônio Bastos; Rios, Maria; Chancey, Caren; Romano, Alessandro; Domingues, Carla Magda; Teixeira-Carvalho, Andréa; Martins-Filho, Olindo Assis
2017-09-01
Technological innovations in vaccinology have recently contributed to bring about novel insights for the vaccine-induced immune response. While the current protocols that use peripheral blood samples may provide abundant data, a range of distinct components of whole blood samples are required and the different anticoagulant systems employed may impair some properties of the biological sample and interfere with functional assays. Although the interference of heparin in functional assays for viral neutralizing antibodies such as the functional plaque-reduction neutralization test (PRNT), considered the gold-standard method to assess and monitor the protective immunity induced by the Yellow fever virus (YFV) vaccine, has been well characterized, the development of pre-analytical treatments is still required for the establishment of optimized protocols. The present study intended to optimize and evaluate the performance of pre-analytical treatment of heparin-collected blood samples with ecteola-cellulose (ECT) to provide accurate measurement of anti-YFV neutralizing antibodies, by PRNT. The study was designed in three steps, including: I. Problem statement; II. Pre-analytical steps; III. Analytical steps. Data confirmed the interference of heparin on PRNT reactivity in a dose-responsive fashion. Distinct sets of conditions for ECT pre-treatment were tested to optimize the heparin removal. The optimized protocol was pre-validated to determine the effectiveness of heparin plasma:ECT treatment to restore the PRNT titers as compared to serum samples. The validation and comparative performance was carried out by using a large range of serum vs heparin plasma:ECT 1:2 paired samples obtained from unvaccinated and 17DD-YFV primary vaccinated subjects. Altogether, the findings support the use of heparin plasma:ECT samples for accurate measurement of anti-YFV neutralizing antibodies. Copyright © 2017 Elsevier B.V. All rights reserved.
Shah, S N R; Sulong, N H Ramli; Shariati, Mahdi; Jumaat, M Z
2015-01-01
Steel pallet rack (SPR) beam-to-column connections (BCCs) are largely responsible for preventing sway failure of frames in the down-aisle direction. The overall geometry of beam end connectors commercially used in SPR BCCs differs and does not allow a generalized analytic approach for all types of beam end connectors; however, identifying the effects of the configuration, profile and sizes of the connection components could be a suitable approach for practical design engineers in order to predict the generalized behavior of any SPR BCC. This paper describes the experimental behavior of SPR BCCs tested using a double cantilever test set-up. Eight sets of specimens were identified based on the variation in column thickness, beam depth and number of tabs in the beam end connector in order to investigate the most influential factors affecting the connection performance. Four tests were performed for each set to bring uniformity to the results, taking the total number of tests to thirty-two. The moment-rotation (M-θ) behavior, load-strain relationship, major failure modes and the influence of the selected parameters on connection performance were investigated. A comparative study to calculate the connection stiffness was carried out using the initial stiffness method, the slope to half-ultimate moment method and the equal area method. In order to find the most appropriate method, the mean stiffness of all the tested connections and the variance in the values of mean stiffness according to all three methods were calculated. The calculation of connection stiffness by means of the initial stiffness method is considered to overestimate the values when compared to the other two methods. The equal area method provided more consistent values of stiffness and the lowest variance in the data set as compared to the other two methods.
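The three stiffness definitions compared above can be sketched as follows; the exact definitions the authors used may differ, so the formulas here (initial slope, secant at half the ultimate moment, and an equal-area line through the origin) are the common textbook interpretations, and the moment-rotation curve is invented.

```python
import numpy as np

def stiffness_estimates(theta, moment, initial_fraction=0.1):
    """Three simple stiffness estimates from a moment-rotation curve."""
    theta, moment = np.asarray(theta, float), np.asarray(moment, float)
    m_u, th_u = moment[-1], theta[-1]

    # 1) Initial stiffness: slope fitted over the first part of the curve.
    n0 = max(2, int(initial_fraction * theta.size))
    k_init = np.polyfit(theta[:n0], moment[:n0], 1)[0]

    # 2) Secant through the point where M first reaches half the ultimate moment.
    i_half = int(np.argmax(moment >= 0.5 * m_u))
    k_half = moment[i_half] / theta[i_half]

    # 3) Equal-area: line k*theta through the origin enclosing the same area as
    #    the curve up to the ultimate rotation -> k = 2*Int(M dtheta) / theta_u^2.
    k_area = 2.0 * np.trapz(moment, theta) / th_u ** 2
    return k_init, k_half, k_area

# Illustrative softening M-theta curve (made-up numbers: rad and kN*mm).
theta = np.linspace(1e-4, 0.08, 200)
moment = 4.0e3 * (1.0 - np.exp(-60.0 * theta))
print([round(k, 1) for k in stiffness_estimates(theta, moment)])
```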
Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators
NASA Technical Reports Server (NTRS)
Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)
2002-01-01
Ground based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of adding analytical verification segments to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold operation optical throughput are supplemented by segments for analytical verification of specific structural, thermal, and optical parameters. Drawing on integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly-resolution lessons gleaned by the authors from similar analytical verification support of a previous large space telescope, and closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.
Turner, Terry D.; Beller, Laurence S.; Clark, Michael L.; Klingler, Kerry M.
1997-01-01
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus are also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container.
Turner, T.D.; Beller, L.S.; Clark, M.L.; Klingler, K.M.
1997-10-14
A method of processing a test sample to concentrate an analyte in the sample from a solvent in the sample includes: (a) boiling the test sample containing the analyte and solvent in a boiling chamber to a temperature greater than or equal to the solvent boiling temperature and less than the analyte boiling temperature to form a rising sample vapor mixture; (b) passing the sample vapor mixture from the boiling chamber to an elongated primary separation tube, the separation tube having internal sidewalls and a longitudinal axis, the longitudinal axis being angled between vertical and horizontal and thus having an upper region and a lower region; (c) collecting the physically transported liquid analyte on the internal sidewalls of the separation tube; and (d) flowing the collected analyte along the angled internal sidewalls of the separation tube to and past the separation tube lower region. The invention also includes passing a turbulence inducing wave through a vapor mixture to separate physically transported liquid second material from vaporized first material. Apparatus is also disclosed for effecting separations. Further disclosed is a fluidically powered liquid test sample withdrawal apparatus for withdrawing a liquid test sample from a test sample container and for cleaning the test sample container. 8 figs.
Analytical investigation of aerodynamic characteristics of highly swept wings with separated flow
NASA Technical Reports Server (NTRS)
Reddy, C. S.
1980-01-01
Many modern aircraft designed for supersonic speeds employ highly swept-back and low-aspect-ratio wings with sharp or thin edges. Flow separation occurs near the leading and tip edges of such wings at moderate to high angles of attack. Attempts have been made over the years to develop analytical methods for predicting the aerodynamic characteristics of such aircraft. Before any method can really be useful, it must be tested against a standard set of data to determine its capabilities and limitations. The present work undertakes such an investigation. Three methods are considered: the free-vortex-sheet method (Weber et al., 1975), the vortex-lattice method with suction analogy (Lamar and Gloss, 1975), and the quasi-vortex lattice method of Mehrotra (1977). Both flat and cambered wings of different configurations, for which experimental data are available, are studied and comparisons made.
NASA Astrophysics Data System (ADS)
Hong, Y.; Curteza, A.; Zeng, X.; Bruniaux, P.; Chen, Y.
2016-06-01
Material selection is the most difficult step in the customized garment product design and development process. This study aims to create a hierarchical framework for material selection. The analytic hierarchy process and fuzzy set theories have been applied to capture the diverse requirements of the customer and the inherent interactions/interdependencies among these requirements. Sensory evaluation ensures a quick and effective selection, without complex laboratory tests such as KES and FAST, by using the professional knowledge of the designers. A real empirical application for physically disabled people is carried out to demonstrate the proposed method. Both the theoretical and practical background of this paper indicate that the fuzzy analytic network process can capture expert knowledge existing in the form of incomplete, ambiguous and vague information about the mutual influence of the attributes and criteria of material selection.
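The crisp core of the analytic hierarchy process, deriving priority weights from a pairwise-comparison matrix and checking consistency, can be sketched as below; this is not the authors' fuzzy analytic network formulation, and the criteria and comparison values are hypothetical.

```python
import numpy as np

# Saaty's Random Index values for the consistency ratio, indexed by matrix size.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def ahp_weights(A):
    """Priority weights from a pairwise-comparison matrix via the principal
    eigenvector, plus Saaty's consistency ratio CR = CI / RI."""
    A = np.asarray(A, float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
    return w, ci / RI[n]

# Hypothetical comparison of three material criteria: comfort, durability, cost.
A = [[1.0, 3.0, 5.0],
     [1/3., 1.0, 2.0],
     [1/5., 1/2., 1.0]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```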
Analytical solutions for the profile of two-dimensional droplets with finite-length precursor films
NASA Astrophysics Data System (ADS)
Perazzo, Carlos Alberto; Mac Intyre, J. R.; Gomba, J. M.
2017-12-01
By means of the lubrication approximation we obtain the full family of static bidimensional profiles of a liquid resting on a substrate under partial-wetting conditions imposed by a disjoining-conjoining pressure. We show that for a set of quite general disjoining-conjoining pressure potentials, the free surface can adopt only five nontrivial static patterns; in particular, we find solutions in which the height goes to zero that satisfactorily describe the complete free surface for a finite amount of fluid deposited on a substrate. To test the extent of the applicability of our solutions, we compare them with those obtained when the lubrication approximation is not employed and under conditions where the lubrication hypotheses are not strictly valid, and also with axisymmetric solutions. For a given disjoining-conjoining potential, we report a new analytical solution that accounts for all five possible solutions.
Analytical interatomic potential for modeling nonequilibrium processes in the W-C-H system
NASA Astrophysics Data System (ADS)
Juslin, N.; Erhart, P.; Träskelin, P.; Nord, J.; Henriksson, K. O. E.; Nordlund, K.; Salonen, E.; Albe, K.
2005-12-01
A reactive interatomic potential based on an analytical bond-order scheme is developed for the ternary system W-C-H. The model combines Brenner's hydrocarbon potential with parameter sets for W-W, W-C, and W-H interactions and is adjusted to materials properties of reference structures with different local atomic coordinations including tungsten carbide, W-H molecules, as well as H dissolved in bulk W. The potential has been tested in various scenarios, such as surface, defect, and melting properties, none of which were considered in the fitting. The intended area of application is simulations of hydrogen and hydrocarbon interactions with tungsten, which have a crucial role in fusion reactor plasma-wall interactions. Furthermore, this study shows that the angular-dependent bond-order scheme can be extended to second nearest-neighbor interactions, which are relevant in body-centered-cubic metals. Moreover, it provides a possibly general route for modeling metal carbides.
MODFLOW equipped with a new method for the accurate simulation of axisymmetric flow
NASA Astrophysics Data System (ADS)
Samani, N.; Kompani-Zare, M.; Barry, D. A.
2004-01-01
Axisymmetric flow to a well is an important topic of groundwater hydraulics, the simulation of which depends on accurate computation of head gradients. Groundwater numerical models with conventional rectilinear grid geometry such as MODFLOW (in contrast to analytical models) generally have not been used to simulate aquifer test results at a pumping well because they are not designed or expected to closely simulate the head gradient near the well. A scaling method is proposed based on mapping the governing flow equation from cylindrical to Cartesian coordinates, and vice versa. A set of relationships and scales is derived to implement the conversion. The proposed scaling method is then embedded in MODFLOW 2000. To verify the accuracy of the method, steady and unsteady flows in confined and unconfined aquifers with fully or partially penetrating pumping wells are simulated and compared with the corresponding analytical solutions. In all cases a high degree of accuracy is achieved.
Influence of Wake Models on Calculated Tiltrotor Aerodynamics
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
The tiltrotor aircraft configuration has the potential to revolutionize air transportation by providing an economical combination of vertical take-off and landing capability with efficient, high-speed cruise flight. To achieve this potential it is necessary to have validated analytical tools that will support future tiltrotor aircraft development. These analytical tools must calculate tiltrotor aeromechanical behavior, including performance, structural loads, vibration, and aeroelastic stability, with an accuracy established by correlation with measured tiltrotor data. The recent test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. This paper will examine the influence of wake models on calculated tiltrotor aerodynamics, comparing calculations of performance and airloads with TRAM DNW measurements. The calculations will be performed using the comprehensive analysis CAMRAD II.
A Comparison of Analytical and Data Preprocessing Methods for Spectral Fingerprinting
LUTHRIA, DEVANAND L.; MUKHOPADHYAY, SUDARSAN; LIN, LONG-ZE; HARNLY, JAMES M.
2013-01-01
Spectral fingerprinting, as a method of discriminating between plant cultivars and growing treatments for a common set of broccoli samples, was compared for six analytical instruments. Spectra were acquired for finely powdered solid samples using Fourier transform infrared (FT-IR) and Fourier transform near-infrared (NIR) spectrometry. Spectra were also acquired for unfractionated aqueous methanol extracts of the powders using molecular absorption in the ultraviolet (UV) and visible (VIS) regions and mass spectrometry with negative (MS−) and positive (MS+) ionization. The spectra were analyzed using nested one-way analysis of variance (ANOVA) and principal component analysis (PCA) to statistically evaluate the quality of discrimination. All six methods showed statistically significant differences between the cultivars and treatments. The significance of the statistical tests was improved by the judicious selection of spectral regions (IR and NIR), masses (MS+ and MS−), and derivatives (IR, NIR, UV, and VIS). PMID:21352644
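A minimal sketch of the PCA step on synthetic "spectra" (the nested ANOVA part is omitted, and all data are invented) might look like the following; the point is only that group-specific spectral features separate along the leading components.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic spectra: two groups (e.g., cultivars) of 10 samples x 200 channels,
# differing slightly in one spectral region (all values are made up).
rng = np.random.default_rng(6)
base = np.sin(np.linspace(0, 6 * np.pi, 200))
group_a = base + rng.normal(0, 0.05, (10, 200))
group_b = base + rng.normal(0, 0.05, (10, 200))
group_b[:, 40:60] += 0.15                     # group-specific feature

X = np.vstack([group_a, group_b])
scores = PCA(n_components=2).fit_transform(X)
# The two groups should separate in their mean PC1 scores.
print("PC1 means:", scores[:10, 0].mean().round(3), scores[10:, 0].mean().round(3))
```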
Suitability of analytical methods to measure solubility for the purpose of nanoregulation.
Tantra, Ratna; Bouwmeester, Hans; Bolea, Eduardo; Rey-Castro, Carlos; David, Calin A; Dogné, Jean-Michel; Jarman, John; Laborda, Francisco; Laloy, Julie; Robinson, Kenneth N; Undas, Anna K; van der Zande, Meike
2016-01-01
Solubility is an important physicochemical parameter in nanoregulation. If nanomaterial is completely soluble, then from a risk assessment point of view, its disposal can be treated much in the same way as "ordinary" chemicals, which will simplify testing and characterisation regimes. This review assesses potential techniques for the measurement of nanomaterial solubility and evaluates the performance against a set of analytical criteria (based on satisfying the requirements as governed by the cosmetic regulation as well as the need to quantify the concentration of free (hydrated) ions). Our findings show that no universal method exists. A complementary approach is thus recommended, to comprise an atomic spectrometry-based method in conjunction with an electrochemical (or colorimetric) method. This article shows that although some techniques are more commonly used than others, a huge research gap remains, related with the need to ensure data reliability.
Estimates of effects of residual acceleration on USML-1 experiments
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
The purpose of this study effort was to develop analytical models to describe the effects of residual accelerations on the experiments to be carried on the first U.S. Microgravity Lab mission (USML-1) and to test the accuracy of these models by comparing the pre-flight predicted effects with the post-flight measured effects. After surveying the experiments to be performed on USML-1, it became evident that the anticipated residual accelerations during the USML-1 mission were well below the threshold for most of the primary experiments and all of the secondary (Glovebox) experiments, and that the only set of experiments that could provide quantifiable effects, and thus a definitive test of the analytical models, was the three melt growth experiments using the Bridgman-Stockbarger type Crystal Growth Furnace (CGF). This class of experiments is by far the most sensitive to low-level quasi-steady accelerations that are unavoidable on spacecraft operating in low Earth orbit. Because of this, they have been the drivers for the acceleration requirements imposed on the Space Station. Therefore, it is appropriate that the models on which these requirements are based are tested experimentally. Also, since solidification proceeds directionally over a long period of time, the solidified ingot provides a more or less continuous record of the effects of acceleration disturbances.
Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette
2018-05-10
Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.
NASA Astrophysics Data System (ADS)
Cucu, Daniela; Woods, Mike
2008-08-01
The paper aims to present a practical approach for testing laboratories to ensure the quality of their test results. It is based on the experience gained in assessing a large number of testing laboratories, discussing with management and staff, reviewing results obtained in national and international PTs and ILCs, and exchanging information in the EA laboratory committee. According to EN ISO/IEC 17025, an accredited laboratory has to implement a programme to ensure the quality of its test results for each measurand. Pre-analytical, analytical and post-analytical measures shall be applied in a systematic manner. They shall include both quality control and quality assurance measures. When designing the quality assurance programme, a laboratory should consider pre-analytical activities (such as personnel training, selection and validation of test methods, and qualifying equipment), analytical activities ranging from sampling and sample preparation to instrumental analysis, and post-analytical activities (such as decoding, calculation, use of statistical tests or packages, and management of results). Designed at different levels (analyst, quality manager and technical manager) and including a variety of measures, the programme shall ensure the validity and accuracy of test results, demonstrate the adequacy of the management system, prove the laboratory's competence in performing tests under accreditation and, last but not least, show the comparability of test results. Laboratory management should establish performance targets and periodically review QC/QA results against them, implementing appropriate measures in case of non-compliance.
Mining functionally relevant gene sets for analyzing physiologically novel clinical expression data.
Turcan, Sevin; Vetter, Douglas E; Maron, Jill L; Wei, Xintao; Slonim, Donna K
2011-01-01
Gene set analyses have become a standard approach for increasing the sensitivity of transcriptomic studies. However, analytical methods incorporating gene sets require the availability of pre-defined gene sets relevant to the underlying physiology being studied. For novel physiological problems, relevant gene sets may be unavailable or existing gene set databases may bias the results towards only the best-studied of the relevant biological processes. We describe a successful attempt to mine novel functional gene sets for translational projects where the underlying physiology is not necessarily well characterized in existing annotation databases. We choose targeted training data from public expression data repositories and define new criteria for selecting biclusters to serve as candidate gene sets. Many of the discovered gene sets show little or no enrichment for informative Gene Ontology terms or other functional annotation. However, we observe that such gene sets show coherent differential expression in new clinical test data sets, even if derived from different species, tissues, and disease states. We demonstrate the efficacy of this method on a human metabolic data set, where we discover novel, uncharacterized gene sets that are diagnostic of diabetes, and on additional data sets related to neuronal processes and human development. Our results suggest that our approach may be an efficient way to generate a collection of gene sets relevant to the analysis of data for novel clinical applications where existing functional annotation is relatively incomplete.
Kuan, Da-Han; Wang, I-Shun; Lin, Jiun-Rue; Yang, Chao-Han; Huang, Chi-Hsien; Lin, Yen-Hung; Lin, Chih-Ting; Huang, Nien-Tsu
2016-08-02
The hemoglobin-A1c test, measuring the ratio of glycated hemoglobin (HbA1c) to hemoglobin (Hb) levels, has been a standard assay in diabetes diagnosis that removes the day-to-day glucose level variation. Currently, the HbA1c test is restricted to hospitals and central laboratories due to the laborious, time-consuming whole blood processing and bulky instruments. In this paper, we have developed a microfluidic device integrating dual CMOS polysilicon nanowire sensors (MINS) for on-chip whole blood processing and simultaneous detection of multiple analytes. The micromachined polymethylmethacrylate (PMMA) microfluidic device consisted of a serpentine microchannel with multiple dam structures designed for non-lysed cells or debris trapping, uniform plasma/buffer mixing and dilution. The CMOS-fabricated polysilicon nanowire sensors integrated with the microfluidic device were designed for the simultaneous, label-free electrical detection of multiple analytes. Our study first measured the Hb and HbA1c levels in 11 clinical samples via these nanowire sensors. The results were compared with those of standard Hb and HbA1c measurement methods (Hb: the sodium lauryl sulfate hemoglobin detection method; HbA1c: cation-exchange high-performance liquid chromatography) and showed comparable outcomes. Finally, we successfully demonstrated the efficacy of the MINS device's on-chip whole blood processing followed by simultaneous Hb and HbA1c measurement in a clinical sample. Compared to current Hb and HbA1c sensing instruments, the MINS platform is compact and can simultaneously detect two analytes with only 5 μL of whole blood, which corresponds to a 300-fold blood volume reduction. The total assay time, including the in situ sample processing and analyte detection, was just 30 minutes. Based on its on-chip whole blood processing and simultaneous multiple analyte detection functionalities with a lower sample volume requirement and shorter process time, the MINS device can be effectively applied to real-time diabetes diagnostics and monitoring in point-of-care settings.
75 FR 5722 - Procedures for Transportation Workplace Drug and Alcohol Testing Programs
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-04
... drugs in a DOT drug test. You must not test ``DOT specimens'' for any other drugs. (a) Marijuana... test analyte concentration analyte concentration Marijuana metabolites 50 ng/mL THCA \\1\\ 15 ng/mL...
ERIC Educational Resources Information Center
Ifenthaler, Dirk; Widanapathirana, Chathuranga
2014-01-01
Interest in collecting and mining large sets of educational data on student background and performance to conduct research on learning and instruction has developed as an area generally referred to as learning analytics. Higher education leaders are recognizing the value of learning analytics for improving not only learning and teaching but also…
The generation of criteria for selecting analytical tools for landscape management
Marilyn Duffey-Armstrong
1979-01-01
This paper presents an approach to generating criteria for selecting the analytical tools used to assess visual resources for various landscape management tasks. The approach begins by first establishing the overall parameters for the visual assessment task, and follows by defining the primary requirements of the various sets of analytical tools to be used. Finally,...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
42 CFR 493.807 - Condition: Reinstatement of laboratories performing nonwaived testing.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., subspecialties, analyte or test, or voluntarily withdraws its certification under CLIA for the failed specialty, subspecialty, or analyte, the laboratory must then demonstrate sustained satisfactory performance on two... reinstatement for certification and Medicare or Medicaid approval in that specialty, subspecialty, analyte or...
Meier, D C; Benkstein, K D; Hurst, W S; Chu, P M
2017-05-01
Performance standard specifications for point chemical vapor detectors are established in ASTM E 2885-13 and ASTM E 2933-13. The performance evaluation of the detectors requires the accurate delivery of known concentrations of the chemical target to the system under test. Referee methods enable the analyte test concentration and associated uncertainties in the analyte test concentration to be validated by independent analysis, which is especially important for reactive analytes. This work extends the capability of a previously demonstrated method for using Fourier transform infrared (FT-IR) absorption spectroscopy for quantitatively evaluating the composition of vapor streams containing hazardous materials at Acute Exposure Guideline Levels (AEGL) to include test conditions colder than laboratory ambient temperatures. The described method covers the use of primary reference spectra to establish analyte concentrations, the generation of secondary reference spectra suitable for measuring analyte concentrations under specified testing environments, and the use of additional reference spectra and spectral profile strategies to mitigate the uncertainties due to impurities and water condensation within the low-temperature (7 °C, -5 °C) test cell. Important benefits of this approach include verification of the test analyte concentration with characterized uncertainties by in situ measurements co-located with the detector under test, near-real-time feedback, and broad applicability to toxic industrial chemicals.
Meier, D.C.; Benkstein, K.D.; Hurst, W.S.; Chu, P.M.
2016-01-01
Performance standard specifications for point chemical vapor detectors are established in ASTM E 2885-13 and ASTM E 2933-13. The performance evaluation of the detectors requires the accurate delivery of known concentrations of the chemical target to the system under test. Referee methods enable the analyte test concentration and associated uncertainties in the analyte test concentration to be validated by independent analysis, which is especially important for reactive analytes. This work extends the capability of a previously demonstrated method for using Fourier transform infrared (FT-IR) absorption spectroscopy for quantitatively evaluating the composition of vapor streams containing hazardous materials at Acute Exposure Guideline Levels (AEGL) to include test conditions colder than laboratory ambient temperatures. The described method covers the use of primary reference spectra to establish analyte concentrations, the generation of secondary reference spectra suitable for measuring analyte concentrations under specified testing environments, and the use of additional reference spectra and spectral profile strategies to mitigate the uncertainties due to impurities and water condensation within the low-temperature (7 °C, −5 °C) test cell. Important benefits of this approach include verification of the test analyte concentration with characterized uncertainties by in situ measurements co-located with the detector under test, near-real-time feedback, and broad applicability to toxic industrial chemicals. PMID:28090126
A modeling approach to compare ΣPCB concentrations between congener-specific analyses
Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.
2017-01-01
Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time.
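A minimal sketch of the kind of conversion regression described above, using made-up paired measurements rather than the study's data:
```python
# Illustrative sketch (not the authors' code): fit a linear conversion model
# between summed PCB concentrations from two congener-specific analyses.
import numpy as np
from scipy import stats

# Hypothetical paired samples analyzed by both methods (ng/g dry weight).
sum_119 = np.array([12.1, 30.5, 48.0, 75.2, 110.4])   # low-resolution, 119 congeners
sum_209 = np.array([13.0, 32.8, 51.5, 81.0, 118.9])   # high-resolution, all 209 congeners

# Ordinary least-squares regression of the full-congener sum on the subset sum.
slope, intercept, r_value, p_value, stderr = stats.linregress(sum_119, sum_209)
print(f"Sigma209 ~= {slope:.2f} * Sigma119 + {intercept:.2f} (r^2 = {r_value**2:.3f})")

# Apply the conversion to a sample measured only with the 119-congener method.
new_sum_119 = 60.0
print("Estimated Sigma209PCB:", slope * new_sum_119 + intercept)
```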
NASA Technical Reports Server (NTRS)
Chuang, Shun-Lien
1987-01-01
Two sets of coupled-mode equations for multiwaveguide systems are derived using a generalized reciprocity relation; one set for a lossless system, and the other for a general lossy or lossless system. The second set of equations also reduces to those of the first set in the lossless case under the condition that the transverse field components are chosen to be real. Analytical relations between the coupling coefficients are shown and applied to the coupling of mode equations. It is shown analytically that these results satisfy exactly both the reciprocity theorem and power conservation. New orthogonal relations between the supermodes are derived in matrix form, with the overlap integrals taken into account.
NASA Technical Reports Server (NTRS)
Gregorek, Gerald; Dresse, John J.; LaNoe, Karine; Ratvasky, Thomas (Technical Monitor)
2000-01-01
The need for fundamental research in Ice Contaminated Tailplane Stall (ICTS) was established through three international conferences sponsored by the FAA. A joint NASA/FAA Tailplane Icing Program was formed in 1994 with the Ohio State University playing a critical role for wind tunnel and analytical research. Two entries of a full-scale 2-dimensional tailplane airfoil model of a DHC-6 Twin Otter were made in The Ohio State University 7x10 ft wind tunnel. This report describes the second test entry that examined additional ice shapes and roughness, as well as airfoil section differences. The additional data obtained in this test fortified the original database of aerodynamic coefficients that permits a detailed analysis of flight test results with an OSU-developed analytical program. The testing encompassed a full range of angles of attack and elevator deflections at flight Reynolds number conditions. Aerodynamic coefficients, C(L), C(M), and C(He), were obtained by integrating static pressure coefficient, C(P), values obtained from surface taps. Comparisons of clean and iced airfoil results show a significant decrease in the tailplane aeroperformance (decreased C(Lmax), decreased stall angle, increased C(He)) for all ice shapes, with the grit having the least effect and the LEWICE shape having the greatest effect. All results were consistent with observed tailplane stall phenomena and constitute an effective set of data for comprehensive analysis of ICTS.
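As a rough illustration of how sectional coefficients are obtained from surface-tap pressure data (the tap layout and pressure distributions below are placeholders, not the OSU measurements):
```python
# Minimal sketch: estimate a sectional normal-force coefficient by integrating
# Cp from upper- and lower-surface taps; for small angles CL is close to Cn.
import numpy as np

x_c = np.linspace(0.0, 1.0, 25)                   # chordwise tap locations, x/c
cp_upper = -1.2 * (1 - x_c) * np.exp(-3 * x_c)    # placeholder upper-surface Cp
cp_lower = 0.4 * (1 - x_c)                        # placeholder lower-surface Cp

# Cn = integral over x/c of (Cp_lower - Cp_upper), evaluated by the trapezoidal rule.
dcp = cp_lower - cp_upper
cn = float(np.sum(0.5 * (dcp[:-1] + dcp[1:]) * np.diff(x_c)))
print(f"Sectional normal-force coefficient Cn ~= {cn:.3f}")
```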
Hagger, Martin S.; Gucciardi, Daniel F.; Chatzisarantis, Nikos L. D.
2017-01-01
Tests of social cognitive theories provide informative data on the factors that relate to health behavior, and the processes and mechanisms involved. In the present article, we contend that tests of social cognitive theories should adhere to the principles of nomological validity, defined as the degree to which predictions in a formal theoretical network are confirmed. We highlight the importance of nomological validity tests to ensure theory predictions can be disconfirmed through observation. We argue that researchers should be explicit on the conditions that lead to theory disconfirmation, and identify any auxiliary assumptions on which theory effects may be conditional. We contend that few researchers formally test the nomological validity of theories, or outline conditions that lead to model rejection and the auxiliary assumptions that may explain findings that run counter to hypotheses, raising potential for ‘falsification evasion.’ We present a brief analysis of studies (k = 122) testing four key social cognitive theories in health behavior to illustrate deficiencies in reporting theory tests and evaluations of nomological validity. Our analysis revealed that few articles report explicit statements suggesting that their findings support or reject the hypotheses of the theories tested, even when findings point to rejection. We illustrate the importance of explicit a priori specification of fundamental theory hypotheses and associated auxiliary assumptions, and identification of the conditions which would lead to rejection of theory predictions. We also demonstrate the value of confirmatory analytic techniques, meta-analytic structural equation modeling, and Bayesian analyses in providing robust converging evidence for nomological validity. We provide a set of guidelines for researchers on how to adopt and apply the nomological validity approach to testing health behavior models. PMID:29163307
Paper analytical devices for detection of low-quality pharmaceuticals
NASA Astrophysics Data System (ADS)
Weaver, A.; Lieberman, M.
2014-03-01
There is currently no global screening system to detect low quality pharmaceuticals, despite widespread recognition of the public health problems caused by substandard and falsified medicines. In order to fill this void, we designed a rapid field screening test that is interfaced with the mobile phone network. The user scrapes a pill over several reaction areas on a paper test card, and then dips one edge of the card into water to activate dried reagents stored on the paper. These reagents carry out multiple color tests and result in a pattern of colored stripes that give information about the chemical content of the pill. The test cards are inexpensive and instrument-free, and we think they will be a scalable testing option in low resource settings. Studies on falsified drugs archived at the FDA show that the test cards are effective at detecting a wide variety of low-quality formulations of many classes of pharmaceuticals, and field tests are currently under way in Kenya.
Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis
Hensen, B.; Kalb, N.; Blok, M. S.; Dréau, A. E.; Reiserer, A.; Vermeulen, R. F. L.; Schouten, R. N.; Markham, M.; Twitchen, D. J.; Goodenough, K.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R.
2016-01-01
The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator. PMID:27509823
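One simple way to pool two such runs is an inverse-variance weighted mean; the sketch below assumes an illustrative value for the first run, since only the second-run result is quoted above:
```python
# Sketch of an inverse-variance weighted combination of two CHSH estimates.
# The first-run value is assumed for illustration; only the second run
# (2.35 +/- 0.18) is quoted in the abstract above.
s_values = [2.42, 2.35]        # CHSH S for run 1 (assumed) and run 2
sigmas   = [0.20, 0.18]        # corresponding 1-sigma uncertainties (run 1 assumed)

weights = [1.0 / s**2 for s in sigmas]
s_comb = sum(w * s for w, s in zip(weights, s_values)) / sum(weights)
sigma_comb = (1.0 / sum(weights)) ** 0.5
print(f"Combined S = {s_comb:.2f} +/- {sigma_comb:.2f}")   # roughly 2.38 +/- 0.13
```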
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.Rex is used to explore tabular data sets containing up to ten million records to help rapidly understand a previously unknown data set. Analysis can quickly identify patterns of interest and the records and fields that capture those patterns. T.Rex contains a growing set of deep analytical tools and supports robust export capabilities so that selected data can be incorporated into other specialized tools for further analysis. T.Rex is flexible in ingesting different types and formats of data, allowing the user to interactively experiment and perform trial-and-error guesses on the structure of the data; it also has a variety of linked visual analytic tools that enable exploration of the data to find relevant content, relationships among content, and trends within the content, and to capture knowledge about the content. Finally, T.Rex has a rich export capability to extract relevant subsets of a larger data source for further analysis in other analytic tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2016-06-08
T.Rex is used to explore tabular data sets containing up to ten million records to help rapidly understand a previously unknown data set. Analysis can quickly identify patterns of interest and the records and fields that capture those patterns. T.Rex contains a growing set of deep analytical tools and supports robust export capabilities so that selected data can be incorporated into other specialized tools for further analysis. T.Rex is flexible in ingesting different types and formats of data, allowing the user to interactively experiment and perform trial-and-error guesses on the structure of the data; it also has a variety of linked visual analytic tools that enable exploration of the data to find relevant content, relationships among content, and trends within the content, and to capture knowledge about the content. Finally, T.Rex has a rich export capability to extract relevant subsets of a larger data source for further analysis in other analytic tools.
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory.
Horowitz, Gary L; Zaman, Zahur; Blanckaert, Norbert J C; Chan, Daniel W; Dubois, Jeffrey A; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W; Nilsen, Olaug L; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality.
MODULAR ANALYTICS: A New Approach to Automation in the Clinical Laboratory
Zaman, Zahur; Blanckaert, Norbert J. C.; Chan, Daniel W.; Dubois, Jeffrey A.; Golaz, Olivier; Mensi, Noury; Keller, Franz; Stolz, Herbert; Klingler, Karl; Marocchi, Alessandro; Prencipe, Lorenzo; McLawhon, Ronald W.; Nilsen, Olaug L.; Oellerich, Michael; Luthe, Hilmar; Orsonneau, Jean-Luc; Richeux, Gérard; Recio, Fernando; Roldan, Esther; Rymo, Lars; Wicktorsson, Anne-Charlotte; Welch, Shirley L.; Wieland, Heinrich; Grawitz, Andrea Busse; Mitsumaki, Hiroshi; McGovern, Margaret; Ng, Katherine; Stockmann, Wolfgang
2005-01-01
MODULAR ANALYTICS (Roche Diagnostics) (MODULAR ANALYTICS, Elecsys and Cobas Integra are trademarks of a member of the Roche Group) represents a new approach to automation for the clinical chemistry laboratory. It consists of a control unit, a core unit with a bidirectional multitrack rack transportation system, and three distinct kinds of analytical modules: an ISE module, a P800 module (44 photometric tests, throughput of up to 800 tests/h), and a D2400 module (16 photometric tests, throughput up to 2400 tests/h). MODULAR ANALYTICS allows customised configurations for various laboratory workloads. The performance and practicability of MODULAR ANALYTICS were evaluated in an international multicentre study at 16 sites. Studies included precision, accuracy, analytical range, carry-over, and workflow assessment. More than 700 000 results were obtained during the course of the study. Median between-day CVs were typically less than 3% for clinical chemistries and less than 6% for homogeneous immunoassays. Median recoveries for nearly all standardised reference materials were within 5% of assigned values. Method comparisons versus current existing routine instrumentation were clinically acceptable in all cases. During the workflow studies, the work from three to four single workstations was transferred to MODULAR ANALYTICS, which offered over 100 possible methods, with reduction in sample splitting, handling errors, and turnaround time. Typical sample processing time on MODULAR ANALYTICS was less than 30 minutes, an improvement from the current laboratory systems. By combining multiple analytic units in flexible ways, MODULAR ANALYTICS met diverse laboratory needs and offered improvement in workflow over current laboratory situations. It increased overall efficiency while maintaining (or improving) quality. PMID:18924721
Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-380
NASA Technical Reports Server (NTRS)
Kramer, W. H.; Smith, J. J.
1978-01-01
The performance testing and analytical teardown of CF6-6D engine serial number 451-380, which was recently removed from a DC-10 aircraft, is summarized. The investigative test program was conducted inbound prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as-received deteriorated hardware. Aspects discussed include the analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both specific fuel consumption and exhaust gas temperature.
Long-term CF6 engine performance deterioration: Evaluation of engine S/N 451-479
NASA Technical Reports Server (NTRS)
Kramer, W. H.; Smith, J. J.
1978-01-01
The performance testing and analytical teardown of CF6-6D engine is summarized. This engine had completed its initial installation on DC-10 aircraft. The investigative test program was conducted inbound prior to normal overhaul/refurbishment. The performance testing included an inbound test, a test following cleaning of the low pressure turbine airfoils, and a final test after leading edge rework and cleaning the stage one fan blades. The analytical teardown consisted of detailed disassembly inspection measurements and airfoil surface finish checks of the as received deteriorated hardware. Included in this report is a detailed analysis of the test cell performance data, a complete analytical teardown report with a detailed description of all observed hardware distress, and an analytical assessment of the performance loss (deterioration) relating measured hardware conditions to losses in both SFC (specific fuel consumption) and EGT (exhaust gas temperature).
NASA Technical Reports Server (NTRS)
Sadunas, J. A.; French, E. P.; Sexton, H.
1973-01-01
A 1/25 scale model S-2 stage base region thermal environment test is presented. Analytical results are included which reflect the effect of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented. A tabulation of the test data is provided.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory has been indicated when the laboratory mean for that analyte is shown to be significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate have been compared with the colorimetric methods that were also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for means produced by the two methods. Analyte precision for each laboratory has been estimated by calculating a pooled variance for each analyte. Estimated analyte precisions have been compared using F-tests, and differences in analyte precisions for laboratory pairs have been reported. (USGS)
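A hedged sketch of this style of interlaboratory comparison, using hypothetical data; Duncan's multiple range test itself requires a dedicated post-hoc package and is not reproduced here:
```python
# One-way ANOVA across laboratories for a single analyte, plus a pooled
# within-lab variance as a simple precision estimate.
import numpy as np
from scipy import stats

# Hypothetical sulfate results (mg/L) from four labs on identical samples.
labs = {
    "A": np.array([2.10, 2.15, 2.08, 2.12]),
    "B": np.array([2.25, 2.28, 2.22, 2.26]),
    "C": np.array([2.11, 2.09, 2.14, 2.10]),
    "D": np.array([2.30, 2.35, 2.28, 2.33]),
}

f_stat, p_value = stats.f_oneway(*labs.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pooled (within-lab) variance, weighted by degrees of freedom.
dofs = [len(v) - 1 for v in labs.values()]
pooled_var = sum(d * np.var(v, ddof=1) for d, v in zip(dofs, labs.values())) / sum(dofs)
print(f"Pooled within-lab variance: {pooled_var:.5f}")
```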
Sierra/Aria 4.48 Verification Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal Fluid Development Team
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite and the results of the test checked under mesh refinement against the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
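A minimal sketch of the mesh-refinement check such a verification suite performs, with placeholder error values, estimating the observed order of convergence against an analytic solution:
```python
# Compare code output to an analytic solution at two refinement levels and
# estimate the observed order of convergence. Error values are placeholders.
import math

h_coarse, h_fine = 0.02, 0.01           # element sizes for two refinement levels
err_coarse, err_fine = 3.2e-4, 8.1e-5   # assumed L2 errors vs. the analytic solution

observed_order = math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)
print(f"Observed order of convergence: {observed_order:.2f}")  # ~2 for a 2nd-order scheme
```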
van't Hoog, Anna H; Cobelens, Frank; Vassall, Anna; van Kampen, Sanne; Dorman, Susan E; Alland, David; Ellner, Jerrold
2013-01-01
High costs are a limitation to scaling up the Xpert MTB/RIF assay (Xpert) for the diagnosis of tuberculosis in resource-constrained settings. A triaging strategy in which a sensitive but not necessarily highly specific rapid test is used to select patients for Xpert may result in a more affordable diagnostic algorithm. To inform the selection and development of particular diagnostics as a triage test we explored combinations of sensitivity, specificity and cost at which a hypothetical triage test will improve affordability of the Xpert assay. In a decision analytical model parameterized for Uganda, India and South Africa, we compared a diagnostic algorithm in which a cohort of patients with presumptive TB received Xpert to a triage algorithm whereby only those with a positive triage test were tested by Xpert. A triage test with sensitivity equal to Xpert, 75% specificity, and costs of US$5 per patient tested reduced total diagnostic costs by 42% in the Uganda setting, and by 34% and 39% respectively in the India and South Africa settings. When exploring triage algorithms with lower sensitivity, the use of an example triage test with 95% sensitivity relative to Xpert, 75% specificity and test costs $5 resulted in similar cost reduction, and was cost-effective by the WHO willingness-to-pay threshold compared to Xpert for all in Uganda, but not in India and South Africa. The gain in affordability of the examined triage algorithms increased with decreasing prevalence of tuberculosis among the cohort. A triage test strategy could potentially improve the affordability of Xpert for TB diagnosis, particularly in low-income countries and with enhanced case-finding. Tests and markers with lower accuracy than desired of a diagnostic test may fall within the ranges of sensitivity, specificity and cost required for triage tests and be developed as such.
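The affordability argument can be reproduced with simple expected-cost arithmetic; the unit costs and prevalence below are illustrative assumptions, not the study's parameter values:
```python
# Expected per-patient cost of a triage-then-Xpert algorithm vs. Xpert for all.
def cost_per_patient(prevalence, triage_sens, triage_spec, triage_cost, xpert_cost):
    """Expected diagnostic cost per presumptive-TB patient under a triage algorithm."""
    # Fraction sent on to Xpert: triage true positives plus triage false positives.
    frac_xpert = prevalence * triage_sens + (1 - prevalence) * (1 - triage_spec)
    return triage_cost + frac_xpert * xpert_cost

xpert_cost = 20.0          # assumed cost of one Xpert test (US$)
print("Xpert for all:", xpert_cost)
print("Triage then Xpert:",
      round(cost_per_patient(prevalence=0.15, triage_sens=0.95,
                             triage_spec=0.75, triage_cost=5.0,
                             xpert_cost=xpert_cost), 2))
```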
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which spectral loadings of the analytes are already known, spectral filter functions may be constructed which allow the scores of mixtures of analytes to be determined in on-the-fly fashion directly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed using mathematics borrowed from genetic algorithm work, as a means of optimizing said functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, proving the linearity of the method.
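An illustrative sketch of compressive detection with known loadings, using a simple binary split of Hadamard rows rather than the genetic-algorithm-optimized fourfold combinations described above:
```python
# Scores are recovered from a few filtered measurements instead of a full spectrum.
import numpy as np
from scipy.linalg import hadamard

n_channels = 64
H = hadamard(n_channels).astype(float)          # Walsh-Hadamard basis (+1/-1 entries)
filters = (H[1:5] + 1) / 2                      # four binary (0/1) filter functions

# Hypothetical loadings for four overlapped analytes and a known mixture.
rng = np.random.default_rng(0)
loadings = np.abs(rng.normal(size=(4, n_channels)))
true_scores = np.array([0.8, 0.3, 1.2, 0.5])
spectrum = true_scores @ loadings

measurements = filters @ spectrum               # what the spectrometer records
# Solve (filters @ loadings.T) s = measurements for the analyte scores.
A = filters @ loadings.T
est_scores, *_ = np.linalg.lstsq(A, measurements, rcond=None)
print("Estimated scores:", np.round(est_scores, 3))
```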
Nariya, Maulik K; Kim, Jae Hyun; Xiong, Jian; Kleindl, Peter A; Hewarathna, Asha; Fisher, Adam C; Joshi, Sangeeta B; Schöneich, Christian; Forrest, M Laird; Middaugh, C Russell; Volkin, David B; Deeds, Eric J
2017-11-01
There is growing interest in generating physicochemical and biological analytical data sets to compare complex mixture drugs, for example, products from different manufacturers. In this work, we compare various crofelemer samples prepared from a single lot by filtration with varying molecular weight cutoffs combined with incubation for different times at different temperatures. The 2 preceding articles describe experimental data sets generated from analytical characterization of fractionated and degraded crofelemer samples. In this work, we use data mining techniques such as principal component analysis and mutual information scores to help visualize the data and determine discriminatory regions within these large data sets. The mutual information score identifies chemical signatures that differentiate crofelemer samples. These signatures, in many cases, would likely be missed by traditional data analysis tools. We also found that supervised learning classifiers robustly discriminate samples with around 99% classification accuracy, indicating that mathematical models of these physicochemical data sets are capable of identifying even subtle differences in crofelemer samples. Data mining and machine learning techniques can thus identify fingerprint-type attributes of complex mixture drugs that may be used for comparative characterization of products. Copyright © 2017 American Pharmacists Association®. All rights reserved.
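A hedged sketch of this type of workflow (PCA for visualization, mutual information to rank discriminatory features, a supervised classifier scored by cross-validation), on simulated data rather than the crofelemer measurements:
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))          # 60 samples x 500 spectral/chemical features
y = np.repeat([0, 1, 2], 20)            # three sample classes (e.g., preparation groups)
X[y == 1, :10] += 1.5                   # plant a subtle class difference in 10 features

scores_2d = PCA(n_components=2).fit_transform(X)        # low-dimensional view for plotting
mi = mutual_info_classif(X, y, random_state=0)          # per-feature discriminatory power
top_features = np.argsort(mi)[::-1][:10]
print("Most discriminatory features:", top_features)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```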
The pathway not taken: understanding 'omics data in the perinatal context.
Edlow, Andrea G; Slonim, Donna K; Wick, Heather C; Hui, Lisa; Bianchi, Diana W
2015-07-01
'Omics analysis of large datasets has an increasingly important role in perinatal research, but understanding gene expression analyses in the fetal context remains a challenge. We compared the interpretation provided by a widely used systems biology resource (ingenuity pathway analysis [IPA]) with that from gene set enrichment analysis (GSEA) with functional annotation curated specifically for the fetus (Developmental FunctionaL Annotation at Tufts [DFLAT]). Using amniotic fluid supernatant transcriptome datasets previously produced by our group, we analyzed 3 different developmental perturbations: aneuploidy (Trisomy 21 [T21]), hemodynamic (twin-twin transfusion syndrome [TTTS]), and metabolic (maternal obesity) vs sex- and gestational age-matched control subjects. Differentially expressed probe sets were identified with the use of paired t-tests with the Benjamini-Hochberg correction for multiple testing (P < .05). Functional analyses were performed with IPA and GSEA/DFLAT. Outputs were compared for biologic relevance to the fetus. Compared with control subjects, there were 414 significantly dysregulated probe sets in T21 fetuses, 2226 in TTTS recipient twins, and 470 in fetuses of obese women. Each analytic output was unique but complementary. For T21, both IPA and GSEA/DFLAT identified dysregulation of brain, cardiovascular, and integumentary system development. For TTTS, both analytic tools identified dysregulation of cell growth/proliferation, immune and inflammatory signaling, brain, and cardiovascular development. For maternal obesity, both tools identified dysregulation of immune and inflammatory signaling, brain and musculoskeletal development, and cell death. GSEA/DFLAT identified substantially more dysregulated biologic functions in fetuses of obese women (1203 vs 151). For all 3 datasets, GSEA/DFLAT provided more comprehensive information about brain development. IPA consistently provided more detailed annotation about cell death. IPA produced many dysregulated terms that pertained to cancer (14 in T21, 109 in TTTS, 26 in maternal obesity); GSEA/DFLAT did not. Interpretation of the fetal amniotic fluid supernatant transcriptome depends on the analytic program, which suggests that >1 resource should be used. Within IPA, physiologic cellular proliferation in the fetus produced many "false positive" annotations that pertained to cancer, which reflects its bias toward adult diseases. This study supports the use of gene annotation resources with a developmental focus, such as DFLAT, for 'omics studies in perinatal medicine. Copyright © 2015 Elsevier Inc. All rights reserved.
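A minimal sketch of the multiple-testing step mentioned above, paired t-tests per probe set followed by a Benjamini-Hochberg correction, on simulated data:
```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
n_probes, n_pairs = 1000, 8
case = rng.normal(size=(n_probes, n_pairs))
control = rng.normal(size=(n_probes, n_pairs))
case[:50] += 1.0                          # 50 truly dysregulated probe sets

t_stat, p_vals = stats.ttest_rel(case, control, axis=1)   # paired t-test per probe set
reject, p_adj, *_ = multipletests(p_vals, alpha=0.05, method="fdr_bh")
print("Probe sets significant after BH correction:", int(reject.sum()))
```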
NASA Astrophysics Data System (ADS)
Nunes, Josane C.
1991-02-01
This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments and iodine-131 is used to test thyroid function. More recent medical procedures under development that involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever there is handling of radioactive material. Determination of absorbed doses in the examples of the preceding paragraph requires considering the boundaries at interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated. Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate experimental results to within 30%. This is encouraging but it remains to be determined whether this method performs equally well for other source energies.
NASA Astrophysics Data System (ADS)
Priore, Ryan J.; Jacksen, Niels
2016-05-01
Infrared hyperspectral imagers (HSI) have been fielded for the detection of hazardous chemical and biological compounds, tag detection (friend versus foe detection) and other defense critical sensing missions over the last two decades. Low Size/Weight/Power/Cost (SWaPc) methods for identifying chemical compounds by spectroscopy have been a long-term goal for handheld applications. We describe a new HSI concept for low cost / high performance InGaAs SWIR camera chemical identification for military, security, industrial and commercial end user applications. Multivariate Optical Elements (MOEs) are thin-film devices that encode a broadband, spectroscopic pattern allowing a simple broadband detector to generate a highly sensitive and specific detection for a target analyte. MOEs can be matched 1:1 to a discrete analyte or class prediction. Additionally, MOE filter sets are capable of sensing an orthogonal projection of the original sparse spectroscopic space, enabling a small set of MOEs to discriminate a multitude of target analytes. This paper identifies algorithms and broadband optical filter designs that have been demonstrated to identify chemical compounds using high performance InGaAs VGA detectors. It shows how some of the initial models have been reduced to simple spectral designs and tested to produce positive identification of such chemicals. We are also developing pixelated MOE compressed-detection sensors for the detection of a multitude of chemical targets in challenging backgrounds/environments for both commercial and defense/security applications. This MOE based, real-time HSI sensor will exhibit superior sensitivity and specificity as compared to currently fielded HSI systems.
NASA Astrophysics Data System (ADS)
Bartholomeusz, Daniel A.; Davies, Rupert H.; Andrade, Joseph D.
2006-02-01
A centrifugal-based microfluidic device was built with lyophilized bioluminescent reagents for measuring multiple metabolites from a sample of less than 15 μL. Microfluidic channels, reaction wells, and valves were cut in adhesive vinyl film using a knife plotter with features down to 30 μm and transferred to metalized polycarbonate compact disks (CDs). The fabrication method was simple enough to test over 100 prototypes within a few months. It also allowed enzymes to be packaged in microchannels without exposure to heat or chemicals. The valves were rendered hydrophobic using liquid phase deposition. Microchannels were patterned using soft lithography to make them hydrophilic. Reagents and calibration standards were deposited and lyophilized in different wells before being covered with another adhesive film. Sample delivery was controlled by a modified CD-ROM. The CD was capable of distributing 200 nL sample aliquots to 36 channels, each with a different set of reagents that mixed with the sample before initiating the luminescent reactions. Reflection of light from the metalized layer and lens configuration allowed for 20% of the available light to be collected from each channel. ATP was detected down to 0.1 μM. Creatinine, glucose, and galactose were also measured in micromolar and millimolar ranges. Other optical-based analytical assays can easily be incorporated into the device design. The minimal sample size needed and expandability of the device make it easier to simultaneously measure a variety of clinically relevant analytes in point-of-care settings.
Hasslacher, Christoph; Kulozik, Felix; Platten, Isabel
2014-05-01
We investigated the analytical accuracy of 27 glucose monitoring systems (GMS) in a clinical setting, using the new ISO accuracy limits. In addition to measuring accuracy at blood glucose (BG) levels < 100 mg/dl and > 100 mg/dl, we also analyzed device performance with respect to these criteria at 5 specific BG level ranges, making it possible to further differentiate between devices with regard to overall performance. Carbohydrate meals and insulin injections were used to induce an increase or decrease in BG levels in 37 insulin-dependent patients. Capillary blood samples were collected at 10-minute intervals, and BG levels determined simultaneously using GMS and a laboratory-based method. Results obtained via both methods were analyzed according to the new ISO criteria. Only 12 of 27 devices tested met overall requirements of the new ISO accuracy limits. When accuracy was assessed at BG levels < 100 mg/dl and > 100 mg/dl, criteria were met by 14 and 13 devices, respectively. A more detailed analysis involving 5 different BG level ranges revealed that 13 (48.1%) devices met the required criteria at BG levels between 50 and 150 mg/dl, whereas 19 (70.3%) met these criteria at BG levels above 250 mg/dl. The overall frequency of outliers was low. The assessment of analytical accuracy of GMS at a number of BG level ranges made it possible to further differentiate between devices with regard to overall performance, a process that is of particular importance given the user-centered nature of the devices' intended use. © 2014 Diabetes Technology Society.
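A sketch of an accuracy check against commonly cited ISO 15197:2013-style limits (±15 mg/dl below 100 mg/dl, ±15% at or above, with at least 95% of results required within limits); the abstract does not restate the exact thresholds used in the study, so treat these as assumptions:
```python
# Check paired meter/laboratory readings against assumed ISO-style accuracy limits.
def within_iso_limits(meter, reference):
    if reference < 100:
        return abs(meter - reference) <= 15            # mg/dl band below 100 mg/dl
    return abs(meter - reference) <= 0.15 * reference  # percentage band at/above 100 mg/dl

pairs = [(92, 88), (110, 118), (250, 265), (55, 75), (180, 178)]  # (meter, lab) in mg/dl
ok = [within_iso_limits(m, r) for m, r in pairs]
print(f"{100 * sum(ok) / len(ok):.0f}% of readings within limits")
```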
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade the model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually interference can be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was performed over the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement over full-spectrum estimation methods and was comparable with other state-of-the-art methods.
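A simplified sketch of correcting an additive plus multiplicative interference once an interference-dominant region is known; the IDR search itself (the two-step method above) is not reproduced, and the spectra are synthetic:
```python
import numpy as np

wl = np.linspace(1100, 2500, 700)                        # wavelengths, nm
baseline = 0.5 + 2e-4 * (wl - 1100)                      # smooth background
peak = np.exp(-((wl - 1500) / 40) ** 2)                  # analyte absorption band

c_true = 0.4                                             # analyte level in this sample
sample = baseline + c_true * peak                        # interference-free spectrum
measured = 1.25 * sample + 0.08                          # multiplicative + additive interference

mean_spectrum = baseline + 1.0 * peak                    # reference (mean) spectrum
idr = wl > 2000                                          # assumed interference-dominant region

# Regress measured on the reference over the IDR only to estimate the
# multiplicative (b) and additive (a) factors, then correct the full range.
b, a = np.polyfit(mean_spectrum[idr], measured[idr], 1)
corrected = (measured - a) / b
print("Max residual vs. interference-free spectrum:",
      round(float(np.abs(corrected - sample).max()), 4))
```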
Post-analytical Issues in Hemostasis and Thrombosis Testing.
Favaloro, Emmanuel J; Lippi, Giuseppe
2017-01-01
Analytical concerns within hemostasis and thrombosis testing are continuously decreasing. This is essentially attributable to modern instrumentation, improvements in test performance and reliability, as well as the application of appropriate internal quality control and external quality assurance measures. Pre-analytical issues are also being dealt with in some newer instrumentation, which can detect hemolysis, icterus and lipemia, and, in some cases, other issues related to sample collection such as tube under-filling. Post-analytical issues are generally related to appropriate reporting and interpretation of test results, and these are the focus of the current overview, which provides a brief description of these events, as well as guidance for their prevention or minimization. In particular, we propose several strategies for improved post-analytical reporting of hemostasis assays and advise that this may provide the final opportunity to prevent serious clinical errors in diagnosis.
Xu, Shen; Rogers, Toby; Fairweather, Elliot; Glenn, Anthony; Curran, James; Curcin, Vasa
2018-01-01
Data provenance is a technique that describes the history of digital objects. In health data settings, it can be used to deliver auditability and transparency, and to achieve trust in a software system. However, implementing data provenance in analytics software at an enterprise level presents a different set of challenges from the research environments where data provenance was originally devised. In this paper, the challenges of reporting provenance information to the user are presented. Provenance captured from analytics software can be large and complex, and visualizing a series of tasks over a long period can be overwhelming even for a domain expert, requiring visual aggregation mechanisms that fit with the complex human cognitive activities involved in the process. This research studied how provenance-based reporting can be integrated into health data analytics software, using the example of the Atmolytics visual reporting tool. PMID:29888084
Analytical formulation of cellular automata rules using data models
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.
2009-05-01
We present a unique method for converting traditional cellular automata (CA) rules into analytical function form. CA rules have been successfully used for morphological image processing and volumetric shape recognition and classification. Further, the use of CA rules as analog models to the physical and biological sciences can be significantly extended if analytical (as opposed to discrete) models could be formulated. We show that such transformations are possible. We use as our example John Horton Conway's famous "Game of Life" rule set. We show that using Data Modeling, we are able to derive both polynomial and bi-spectrum models of the IF-THEN rules that yield equivalent results. Further, we demonstrate that the "Game of Life" rule set can be modeled using the multi-fluxion, yielding a closed form nth order derivative and integral. All of the demonstrated analytical forms of the CA rule are general and applicable to real-time use.
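For reference, the discrete IF-THEN form of the "Game of Life" rule that the paper converts to analytical form looks like the following; this is the standard formulation, not the authors' polynomial or bi-spectrum Data Model:
```python
import numpy as np

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(life_step(glider))
```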
NASA Astrophysics Data System (ADS)
Zuiani, Federico; Vasile, Massimiliano
2015-03-01
This paper presents a set of analytical formulae for the perturbed Keplerian motion of a spacecraft under the effect of a constant control acceleration. The proposed set of formulae can treat control accelerations that are fixed in either a rotating or inertial reference frame. Moreover, the contribution of the zonal harmonic is included in the analytical formulae. It will be shown that the proposed analytical theory allows for the fast computation of long, multi-revolution spirals while maintaining good accuracy. The combined effect of different perturbations and of the shadow regions due to solar eclipse is also included. Furthermore, a simplified control parameterisation is introduced to optimise thrusting patterns with two thrust arcs and two coast arcs per revolution. This simple parameterisation is shown to ensure enough flexibility to describe complex low thrust spirals. The accuracy and speed of the proposed analytical formulae are compared against a full numerical integration with different integration schemes. An averaging technique is then proposed as an application of the analytical formulae. Finally, the paper presents an example of design of an optimal low-thrust spiral to transfer a spacecraft from an elliptical to a circular orbit around the Earth.
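As a back-of-the-envelope companion to such formulae, the classic continuous-tangential-thrust estimate for a coplanar circular-to-circular spiral (delta-v equal to the difference of circular speeds) gives transfer cost and time; this is a textbook approximation, not the paper's perturbed-Keplerian expansion, and the thrust level below is an assumption:
```python
import math

mu = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
a0, a1 = 6878e3, 42164e3     # roughly a 500-km LEO radius to GEO radius, m
accel = 1e-4                 # assumed constant thrust acceleration, m/s^2

# Delta-v for a continuous tangential-thrust circular spiral, and transfer time.
dv = abs(math.sqrt(mu / a0) - math.sqrt(mu / a1))
t_days = dv / accel / 86400.0
print(f"Spiral delta-v ~= {dv / 1000:.2f} km/s, transfer time ~= {t_days:.0f} days")
```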
Point-of-care testing in an organ procurement organization donor management setting.
Baier, K A; Markham, L E; Flaigle, S P; Nelson, P W; Shield, C F; Muruve, N A; Aeder, M I; Murillo, D; Bryan, C F
2003-01-01
Our organ procurement organization (OPO) evaluated the clinical and financial efficacy of point-of-care testing (POCT) in management of our deceased organ donors. Before we implemented point-of-care testing with the i-STAT into routine clinical donor management, we compared the i-STAT result with the result from the respective donor hospital lab (DHL) for certain analytes on 15 consecutive donors in our OPO from 26 March to 14 May 2001. The financial impact was studied by reviewing 77 donors from July 2001 to March 2002. There was a strong correlation for each analyte between the POC and DHL test results with r-values as follows: pH = 0.86; PCO2 = 0.96; PO2 = 0.98; sodium = 0.98; potassium = 0.95; chloride = 0.94; BUN = 0.98; glucose = 0.92; haematocrit = 0.87; and creatinine = 0.95. Since our OPO coordinators began using i-STAT in their routine clinical management of organ donors, they can now more quickly maximize oxygenation and fluid management of the donor and make extra-renal placement calls sooner. Finally, since we are no longer being billed for the testing performed on the i-STAT, average financial savings to our OPO are US dollars 733 per case. Point-of-care testing in management of our OPO donors provides a result that is equivalent to that of the donor hospital lab, and has a quicker turn-around time than the donor hospital laboratory, allowing more immediate clinical management decisions to be made so that extra-renal offers may begin sooner.
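The agreement check described above amounts to a Pearson correlation on paired results; a sketch with made-up potassium values:
```python
from scipy import stats

potassium_istat = [3.9, 4.4, 5.1, 3.6, 4.8, 4.1]   # hypothetical i-STAT results (mmol/L)
potassium_dhl   = [4.0, 4.5, 5.0, 3.7, 4.9, 4.2]   # hypothetical donor hospital lab results

r, p = stats.pearsonr(potassium_istat, potassium_dhl)
print(f"r = {r:.2f} (p = {p:.4f})")
```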
Kopcinovic, Lara Milevoj; Vogrinc, Zeljka; Kocijan, Irena; Culej, Jelena; Aralica, Merica; Jokic, Anja; Antoncic, Dragana; Bozovic, Marija
2016-01-01
Introduction We hypothesized that extravascular body fluid (EBF) analysis in Croatia is not harmonized and aimed to investigate preanalytical, analytical and postanalytical procedures used in EBF analysis in order to identify key aspects that should be addressed in future harmonization attempts. Materials and methods An anonymous online survey created to explore laboratory testing of EBF was sent to secondary, tertiary and private health care Medical Biochemistry Laboratories (MBLs) in Croatia. Statements were designed to address preanalytical, analytical and postanalytical procedures of cerebrospinal, pleural, peritoneal (ascites), pericardial, seminal, synovial, amniotic fluid and sweat. Participants were asked to declare the strength of agreement with proposed statements using a Likert scale. Mean scores for corresponding separate statements divided according to health care setting were calculated and compared. Results The survey response rate was 0.64 (58 / 90). None of the participating private MBLs declared to analyse EBF. We report a mean score of 3.45 obtained for all statements evaluated. Deviations from desirable procedures were demonstrated in all EBF testing phases. Minor differences in procedures used for EBF analysis comparing secondary and tertiary health care MBLs were found. The lowest scores were obtained for statements regarding quality control procedures in EBF analysis, participation in proficiency testing programmes and provision of interpretative comments on EBF’s test reports. Conclusions Although good laboratory EBF practice is present in Croatia, procedures for EBF analysis should be further harmonized to improve the quality of EBF testing and patient safety. PMID:27812307
Beloglazova, N V; Goryacheva, I Yu; Rusanova, T Yu; Yurasov, N A; Galve, R; Marco, M-P; De Saeger, S
2010-07-05
A new rapid method which allows simultaneous one step detection of two analytes of different nature (2,4,6,-trichlorophenol (TCP) and ochratoxin A (OTA)) in red wine was developed. It was based on a column test with three separate immunolayers: two test layers and one control layer. Each layer consisted of sepharose gel with immobilized anti-OTA (OTA test layer), anti-TCP (TCP test layer) or anti-HRP (control layer) antibodies. Analytes bind to the antibodies in the corresponding test layer while sample flows through the column. Then a mixture of OTA-HRP and TCP-HRP in appropriate dilutions was used, followed by the application of chromogenic substrate. Colour development of the test layer occurred when the corresponding analyte was absent in the sample. HRP-conjugates bound to anti-HRP antibody in the control layer independently of presence or absence of analytes and a blue colour developed in the control layer. Cut-off values for both analytes were 2 microg L(-1). The described method was applied to the simultaneous detection of TCP and OTA in wine samples. To screen the analytes in red wine samples, clean-up columns were used for sample pre-treatment in combination with the test column. Results were confirmed by chromatographic methods. Copyright 2010 Elsevier B.V. All rights reserved.
Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce
2018-05-30
Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50-75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity for each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each of the assays' dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability to sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with original CRC and AA calls was 87% and 92% respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. The results provide the analytical evidence to support the implementation of the novel multi-marker test as a clinical test for evaluating CRC and AA risk in symptomatic individuals. Copyright © 2018 Elsevier B.V. All rights reserved.
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrie, Michael; Shadwick, B. A.
2016-01-04
Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the non-relativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampere equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org
2016-01-15
We present a time-implicit numerical method to solve the relativistic Vlasov–Ampere system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.
In-orbit evaluation of the control system/structural mode interactions of the OSO-8 spacecraft
NASA Technical Reports Server (NTRS)
Slafer, L. I.
1979-01-01
The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests was conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. The paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests were used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments, and have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system.
Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.
2001-01-01
A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface mounted patch actuators and a composite box beam with surface mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical to mechanical effectiveness of the actuators, producing anti-resonance errors.
Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei
2013-01-01
Given a single index, the receiver operating characteristic (ROC) curve analysis is routinely utilized for characterizing performance in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis.
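The logical combination rules described above lend themselves to a compact illustration. The following Python sketch (hypothetical data, not the AD data sets used in the study) shows how an "at least n" rule over binarized indices maps onto sensitivity and specificity estimates; n = 1 corresponds to "OR" and n equal to the number of indices corresponds to "AND".

```python
import numpy as np

def at_least_n_positive(binary_calls, n):
    """Combine per-index binary calls (rows = subjects, cols = indices)
    with the 'at least n' rule: positive if >= n indices are positive."""
    return (binary_calls.sum(axis=1) >= n).astype(int)

def sensitivity_specificity(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))
    fn = np.sum((pred == 0) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 3 indices thresholded into binary calls for 6 subjects
calls = np.array([[1, 0, 1],
                  [1, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0],
                  [1, 1, 0],
                  [0, 1, 0]])
truth = np.array([1, 1, 1, 0, 0, 0])

for n in (1, 2, 3):          # n=1 is "OR", n=3 is "AND"
    se, sp = sensitivity_specificity(at_least_n_positive(calls, n), truth)
    print(f"at least {n}: sensitivity={se:.2f}, specificity={sp:.2f}")
```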
Wu, Xia; Li, Juan; Ayutyanont, Napatkamon; Protas, Hillary; Jagust, William; Fleisher, Adam; Reiman, Eric; Yao, Li; Chen, Kewei
2014-01-01
Given a single index, the receiver operating characteristic (ROC) curve analysis is routinely utilized for characterizing performance in distinguishing two conditions/groups in terms of sensitivity and specificity. Given the availability of multiple data sources (referred to as multi-indices), such as multimodal neuroimaging data sets, cognitive tests, and clinical ratings and genomic data in Alzheimer’s disease (AD) studies, the single-index-based ROC underutilizes all available information. For a long time, a number of algorithmic/analytic approaches combining multiple indices have been widely used to simultaneously incorporate multiple sources. In this study, we propose an alternative for combining multiple indices using logical operations, such as “AND,” “OR,” and “at least n” (where n is an integer), to construct multivariate ROC (multiV-ROC) and characterize the sensitivity and specificity statistically associated with the use of multiple indices. With and without the “leave-one-out” cross-validation, we used two data sets from AD studies to showcase the potentially increased sensitivity/specificity of the multiV-ROC in comparison to the single-index ROC and linear discriminant analysis (an analytic way of combining multi-indices). We conclude that, for the data sets we investigated, the proposed multiV-ROC approach is capable of providing a natural and practical alternative with improved classification accuracy as compared to univariate ROC and linear discriminant analysis. PMID:23702553
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.
Miniaturised wireless smart tag for optical chemical analysis applications.
Steinberg, Matthew D; Kassal, Petar; Tkalčec, Biserka; Murković Steinberg, Ivana
2014-01-01
A novel miniaturised photometer has been developed as an ultra-portable and mobile analytical chemical instrument. The low-cost photometer presents a paradigm shift in mobile chemical sensor instrumentation because it is built around a contactless smart card format. The photometer tag is based on the radio-frequency identification (RFID) smart card system, which provides short-range wireless data and power transfer between the photometer and a proximal reader, and which allows the reader to also energise the photometer by near field electromagnetic induction. RFID is set to become a key enabling technology of the Internet-of-Things (IoT), hence devices such as the photometer described here will enable numerous mobile, wearable and vanguard chemical sensing applications in the emerging connected world. In the work presented here, we demonstrate the characterisation of a low-power RFID wireless sensor tag with an LED/photodiode-based photometric input. The performance of the wireless photometer has been tested through two different model analytical applications. The first is photometry in solution, where colour intensity as a function of dye concentration was measured. The second is an ion-selective optode system in which potassium ion concentrations were determined by using previously well characterised bulk optode membranes. The analytical performance of the wireless photometer smart tag is clearly demonstrated by these optical absorption-based analytical experiments, with excellent data agreement to a reference laboratory instrument. © 2013 Elsevier B.V. All rights reserved.
Electrocardiographic interpretation skills of cardiology residents: are they competent?
Sibbald, Matthew; Davies, Edward G; Dorian, Paul; Yu, Eric H C
2014-12-01
Achieving competency at electrocardiogram (ECG) interpretation among cardiology subspecialty residents has traditionally focused on interpreting a target number of ECGs during training. However, there is little evidence to support this approach. Further, there are no data documenting the competency of ECG interpretation skills among cardiology residents, who become de facto the gold standard in their practice communities. We tested 29 Cardiology residents from all 3 years in a large training program using a set of 20 ECGs collected from a community cardiology practice over a 1-month period. Residents interpreted half of the ECGs using a standard analytic framework, and half using their own approach. Residents were scored on the number of correct and incorrect diagnoses listed. Overall diagnostic accuracy was 58%. Of 6 potentially life-threatening diagnoses, residents missed 36% (123 of 348) including hyperkalemia (81%), long QT (52%), complete heart block (35%), and ventricular tachycardia (19%). Residents provided additional inappropriate diagnoses on 238 ECGs (41%). Diagnostic accuracy was similar between ECGs interpreted using an analytic framework vs ECGs interpreted without an analytic framework (59% vs 58%; F(1,1333) = 0.26; P = 0.61). Cardiology resident proficiency at ECG interpretation is suboptimal. Despite the use of an analytic framework, there remain significant deficiencies in ECG interpretation among Cardiology residents. A more systematic method of addressing these important learning gaps is urgently needed. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
Rogstad, Sarah; Pang, Eric; Sommers, Cynthia; Hu, Meng; Jiang, Xiaohui; Keire, David A; Boyne, Michael T
2015-11-01
Glatiramer acetate (GA) is a mixture of synthetic copolymers consisting of four amino acids (glutamic acid, lysine, alanine, and tyrosine) with a labeled molecular weight range of 5000 to 9000 Da. GA is marketed as Copaxone™ by Teva for the treatment of multiple sclerosis. Here, the agency has evaluated the structure and composition of GA and a commercially available comparator, Copolymer-1. Modern analytical technologies which can characterize these complex mixtures are desirable for analysis of their comparability and structural "sameness." In the studies herein, a molecular fingerprinting approach is taken using mass-accurate mass spectrometry (MS) analysis, nuclear magnetic resonance (NMR) (1D-(1)H-NMR, 1D-(13)C-NMR, and 2D NMR), and asymmetric field flow fractionation (AFFF) coupled with multi-angle light scattering (MALS) for an in-depth characterization of three lots of the marketplace drug and a formulated sample of the comparator. Statistical analyses were applied to the MS and AFFF-MALS data to assess these methods' ability to detect analytical differences in the mixtures. The combination of multiple orthogonal measurements by liquid chromatography coupled with MS (LC-MS), AFFF-MALS, and NMR on the same sample set was found to be fit for the intended purpose of distinguishing analytical differences between these complex mixtures of peptide chains.
Annual banned-substance review: Analytical approaches in human sports drug testing.
Thevis, Mario; Kuuranne, Tiia; Geyer, Hans
2018-01-01
Several high-profile revelations concerning anti-doping rule violations over the past 12 months have outlined the importance of tackling prevailing challenges and reducing the limitations of the current anti-doping system. At this time, the necessity to enhance, expand, and improve analytical test methods in response to the substances outlined in the World Anti-Doping Agency's (WADA) Prohibited List represents an increasingly crucial task for modern sports drug-testing programs. The ability to improve analytical testing methods often relies on the expedient application of novel information regarding superior target analytes for sports drug-testing assays, drug elimination profiles, alternative test matrices, together with recent advances in instrumental developments. This annual banned-substance review evaluates literature published between October 2016 and September 2017 offering an in-depth evaluation of developments in these arenas and their potential application to substances reported in WADA's 2017 Prohibited List. Copyright © 2017 John Wiley & Sons, Ltd.
Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Castellano, Timothy
1991-01-01
The nonlinear behavior of many practical systems and unavailability of quantitative data regarding the input-output relations makes the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers which do not require analytical models have demonstrated a number of successful applications such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy Logic Control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set. Crisp sets only allow full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
Quality of Big Data in health care.
Sukumar, Sreenivas R; Natarajan, Ramachandran; Ferrell, Regina K
2015-01-01
The current trend in Big Data analytics and in particular health information technology is toward building sophisticated models, methods and tools for business, operational and clinical intelligence. However, the critical issue of data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics. The insights presented in this paper are the results of analytics work that was done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers accessed through partnerships with health care claims processing entities under health privacy protected guidelines. Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and the pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and knowledge expected to be derived. Automation in the form of data handling, storage, entry and processing technologies is to be viewed as a double-edged sword. At one level, automation can be a good solution, while at another level it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance. The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so very specific that organizations might have to build their own custom software or data quality rule engines. Today, data quality issues are diagnosed and addressed in a piece-meal fashion. The authors recommend a data lifecycle approach and provide a road map that is better suited to the dimensions of Big Data and fits the different stages of the analytical workflow.
Proactive Supply Chain Performance Management with Predictive Analytics
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605
Maggio, Rubén M; Damiani, Patricia C; Olivieri, Alejandro C
2011-01-30
Liquid chromatographic-diode array detection data recorded for aqueous mixtures of 11 pesticides show the combined presence of strongly coeluting peaks, distortions in the time dimension between experimental runs, and the presence of potential interferents not modeled by the calibration phase in certain test samples. Due to the complexity of these phenomena, data were processed by a second-order multivariate algorithm based on multivariate curve resolution and alternating least-squares, which allows one to successfully model both the spectral and retention time behavior for all sample constituents. This led to the accurate quantitation of all analytes in a set of validation samples: aldicarb sulfoxide, oxamyl, aldicarb sulfone, methomyl, 3-hydroxy-carbofuran, aldicarb, propoxur, carbofuran, carbaryl, 1-naphthol and methiocarb. Limits of detection in the range 0.1-2 μg mL(-1) were obtained. Additionally, the second-order advantage for several analytes was achieved in samples containing several uncalibrated interferences. The limits of detection for all analytes were decreased by solid phase pre-concentration to values compatible with those officially recommended, i.e., in the order of 5 ng mL(-1). Copyright © 2010 Elsevier B.V. All rights reserved.
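As an illustration of the core of such a second-order approach, the sketch below implements a bare-bones multivariate curve resolution alternating least-squares (MCR-ALS) loop on simulated coeluting peaks. It is a minimal sketch with non-negativity imposed by simple clipping and a synthetic two-component system; it is not the constrained algorithm or the pesticide data used in the paper.

```python
import numpy as np

def mcr_als(D, S0, n_iter=200):
    """Minimal MCR-ALS loop: D (times x wavelengths) ~ C @ S.T,
    alternating least-squares updates of C and S with clipping to >= 0."""
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0, None)    # concentration profiles
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0, None)  # spectral profiles
    return C, S

# Hypothetical two-component chromatographic run with coeluting peaks
t = np.linspace(0, 10, 200)[:, None]
w = np.linspace(0, 1, 50)[None, :]
C_true = np.hstack([np.exp(-(t - 4)**2), np.exp(-(t - 5)**2)])
S_true = np.vstack([np.exp(-(w - 0.3)**2 / 0.01), np.exp(-(w - 0.6)**2 / 0.01)])
D = C_true @ S_true + 0.01 * np.random.randn(200, 50)

C_fit, S_fit = mcr_als(D, S0=S_true.T + 0.1 * np.random.rand(50, 2))
```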
Proactive supply chain performance management with predictive analytics.
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents the supply chain modelling approach based on the specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.
Petersen, Per Hyltoft; Sandberg, Sverre; Fraser, Callum G
2011-04-01
The Stockholm conference held in 1999 on "Strategies to set global analytical quality specifications (AQS) in laboratory medicine" reached a consensus and advocated the ubiquitous application of a hierarchical structure of approaches to setting AQS. This approach has been widely used over the last decade, although several issues remain unanswered. A number of new suggestions have been recently proposed for setting AQS. One of these recommendations is described by Haeckel and Wosniok in this issue of Clinical Chemistry and Laboratory Medicine. Their concept is to estimate the increase in false-positive results using conventional population-based reference intervals, the delta false-positive rate due to analytical imprecision and bias, and relate the results directly to the current analytical quality attained. Thus, the actual estimates in the laboratory for imprecision and bias are compared to the AQS. These values are classified in a ranking system according to the closeness to the AQS, and this combination is the new idea of the proposal. Other new ideas have been proposed recently. We wait, with great interest, as should others, to see if these newer approaches become widely used and worthy of incorporation into the hierarchy.
FDA perspective on specifications for biotechnology products--from IND to PLA.
Murano, G
1997-01-01
Quality standards are obligatory throughout development, approval and post-marketing phases of biotechnology-derived products, thus assuring product identity, purity, and potency/strength. The process of developing and setting specifications should be based on sound science and should represent a logical progression of actions based on the use of experiential data spanning manufacturing process validation, consistency in production, and characterization of relevant product properties/attributes, by multiple analytical means. This interactive process occurs in phases, varying in rigour. It is best described as encompassing a framework which starts with the implementation of realistic/practical operational quality limits, progressing to the establishment/adoption of more stringent specifications. The historical database is generated from preclinical, toxicology and early clinical lots. This supports the clinical development programme which, as it progresses, allows for further assay method validation/refinement, adoption/addition due to relevant or newly recognized product attributes or rejection due to irrelevance. In the next phase, (licensing/approval) specifications are set through extended experience and validation of both the preparative and analytical processes, to include availability of suitable reference standards and extensive product characterization throughout its proposed dating period. Subsequent to product approval, the incremental database of test results serves as a natural continuum for further evolving/refining specifications. While there is considerable latitude in the kinds of testing modalities finally adopted to establish product quality on a routine basis, for both drugs and drug products, it is important that the selection takes into consideration relevant (significant) product characteristics that appropriately reflect on identity, purity and potency.
NASA Astrophysics Data System (ADS)
Dhakal, S.; Bhandary, N. P.; Yatabe, R.; Kinoshita, N.
2012-04-01
In a previous companion paper, we presented a three-tier modelling of a particular type of rockfall protective cable-net structure (barrier), newly developed in Japan. Therein, we developed a three-dimensional, finite-element-based, nonlinear numerical model, calibrated/back-calculated and verified against element- and structure-level physical tests. Moreover, using a very simple, lumped-mass, single-degree-of-freedom, equivalently linear analytical model, a global-displacement-predictive correlation was devised by modifying the basic equation - obtained by combining the principles of conservation of linear momentum and energy - based on the back-analysis of the tests on the numerical model. In this paper, we use the developed models to explore the performance enhancement potential of the structure in terms of (a) the control of global displacement - possibly the major performance criterion for the proposed structure owing to a narrow space available in the targeted site, and (b) the increase in energy dissipation by the existing U-bolt-type Friction-brake Devices - which have been identified as performing weakly when integrated into the structure. A set of parametric investigations has revealed correlations to achieve the first objective in terms of the structure's mass, particularly by manipulating the wire-net's characteristics, and has additionally disclosed the effects of the impacting-block's parameters. Towards achieving the second objective, another set of parametric investigations has led to a proposal of a few innovative improvements in the constitutive behaviour (model) of the studied brake device (dissipator), in addition to an important recommendation of careful handling of the device based on the identified potential flaw.
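The "basic equation" obtained by combining conservation of linear momentum and energy is not spelled out in the abstract. One plausible reading for an equivalently linear single-degree-of-freedom idealization is sketched below; the block mass m and impact speed v, effective participating structure mass M, and equivalent stiffness k are symbols assumed here for illustration, not taken from the paper.

```latex
% One plausible reading (not the authors' exact formula): plastic impact
% followed by an energy balance for an equivalent linear SDOF system.
\begin{align}
  v' &= \frac{m\,v}{m + M}
      && \text{(conservation of linear momentum)} \\
  \tfrac{1}{2}(m + M)\,v'^2 &= \tfrac{1}{2}\,k\,\delta_{\max}^2
      && \text{(kinetic energy converted to strain energy)} \\
  \delta_{\max} &= \frac{m\,v}{\sqrt{k\,(m + M)}}
      && \text{(global displacement estimate)}
\end{align}
```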
Development of dynamic Bayesian models for web application test management
NASA Astrophysics Data System (ADS)
Azarnova, T. V.; Polukhin, P. V.; Bondarenko, Yu V.; Kashirina, I. L.
2018-03-01
The mathematical apparatus of dynamic Bayesian networks is an effective and technically proven tool that can be used to model complex stochastic dynamic processes. According to the results of the research, mathematical models and methods of dynamic Bayesian networks provide a high coverage of stochastic tasks associated with error testing in multiuser software products operated in a dynamically changing environment. Formalized representation of the discrete test process as a dynamic Bayesian model allows us to organize the logical connection between individual test assets for multiple time slices. This approach gives an opportunity to present testing as a discrete process with set structural components responsible for the generation of test assets. Dynamic Bayesian network-based models allow us to combine in one management area individual units and testing components with different functionalities and a direct influence on each other in the process of comprehensive testing of various groups of computer bugs. The application of the proposed models provides an opportunity to use a consistent approach to formalize test principles and procedures, methods used to treat situational error signs, and methods used to produce analytical conclusions based on test results.
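As a minimal illustration of the kind of inference a dynamic Bayesian model supports, the following hand-rolled Python sketch propagates a belief about a latent defect across time slices and conditions it on successive test outcomes. The two-state structure and all probabilities are assumptions for illustration, not the networks developed in the paper.

```python
import numpy as np

# Minimal two-slice dynamic Bayesian model (illustrative): hidden state is a
# latent defect absent/present in a build, observation is a test outcome.
T = np.array([[0.90, 0.10],    # P(state_t | state_{t-1}); rows: absent, present
              [0.20, 0.80]])
E = np.array([[0.95, 0.05],    # P(observation | state); columns: pass, fail
              [0.30, 0.70]])
prior = np.array([0.8, 0.2])

def forward_filter(observations):
    """Return P(defect state | evidence so far) after each test outcome."""
    belief = prior.copy()
    history = []
    for obs in observations:           # obs: 0 = pass, 1 = fail
        belief = belief @ T            # predict across the time slice
        belief = belief * E[:, obs]    # condition on the observed test result
        belief /= belief.sum()
        history.append(belief.copy())
    return np.array(history)

print(forward_filter([0, 1, 1, 0]))    # belief trajectory over four test runs
```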
Big data analytics in healthcare: promise and potential.
Raghupathi, Wullianallur; Raghupathi, Viju
2014-01-01
To describe the promise and potential of big data analytics in healthcare. The paper describes the nascent field of big data analytics in healthcare, discusses the benefits, outlines an architectural framework and methodology, describes examples reported in the literature, briefly discusses the challenges, and offers conclusions. The paper provides a broad overview of big data analytics for healthcare researchers and practitioners. Big data analytics in healthcare is evolving into a promising field for providing insight from very large data sets and improving outcomes while reducing costs. Its potential is great; however, there remain challenges to overcome.
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
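A degree-dependent probabilistic selection of the kind discussed above can be sketched in a few lines with networkx. The selection probability and the repair step below are illustrative assumptions, not the exact strategies proposed in the paper.

```python
import networkx as nx
import numpy as np

def probabilistic_dominating_set(G, alpha=1.0, rng=None):
    """Degree-dependent probabilistic selection (illustrative): include node i
    with probability proportional to its degree, then add any undominated node."""
    rng = np.random.default_rng(rng)
    degrees = np.array([G.degree(v) for v in G.nodes()], dtype=float)
    p = np.minimum(1.0, alpha * degrees / degrees.max())
    selected = {v for v, pi in zip(G.nodes(), p) if rng.random() < pi}
    # Repair step: every node must be in the set or adjacent to a member.
    for v in G.nodes():
        if v not in selected and not any(u in selected for u in G.neighbors(v)):
            selected.add(v)
    return selected

G = nx.barabasi_albert_graph(1000, 3, seed=1)   # scale-free test network
ds = probabilistic_dominating_set(G, alpha=0.5, rng=1)
print(len(ds), "nodes in the (probabilistic) dominating set")
```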
NASA Astrophysics Data System (ADS)
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2015-04-01
Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach with two examples from gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions. This includes low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
NASA Technical Reports Server (NTRS)
Hohenemser, K. H.; Crews, S. T.
1972-01-01
A two-bladed 16-inch hingeless rotor model was built and tested outside and inside a 24 by 24 inch wind tunnel test section at collective pitch settings up to 5 deg and rotor advance ratios up to 0.4. The rotor model has a simple eccentric mechanism to provide progressing or regressing cyclic pitch excitation. The flapping responses were compared to analytically determined responses which included flap-bending elasticity but excluded rotor wake effects. Substantial systematic deviations of the measured responses from the computed responses were found, which were interpreted as the effects of interaction of the blades with a rotating asymmetrical wake.
When Science Replaces Religion: Science as a Secular Authority Bolsters Moral Sensitivity
2015-01-01
Scientific and religious thinking compete with each other on several levels. For example, activating one generally weakens the other. Since priming religion is known to increase moral behaviour and moral sensitivity, priming science might be expected to have the opposite effect. However, it was recently demonstrated that, on the contrary, science priming increases moral sensitivity as well. The present set of studies sought to replicate this effect and test two explanations for it. Study 1 used a sentence unscrambling task for implicitly priming the concept of science but failed to replicate its effect on moral sensitivity, presumably due to a ceiling effect. Study 2 replicated the effect with a new measure of moral sensitivity. Study 3 tested whether science-related words create this effect by activating the idea of secular authority or by activating analytic thinking. It was demonstrated that words related to secular authority, but not words related to analytic thinking, produced a similar increase in moral sensitivity. Religiosity level of the participants did not influence this basic finding. The results are consistent with the hypothesis that science as a secular institution has overtaken some of the functions of religion in modern societies. PMID:26360826
When Science Replaces Religion: Science as a Secular Authority Bolsters Moral Sensitivity.
Yilmaz, Onurcan; Bahçekapili, Hasan G
2015-01-01
Scientific and religious thinking compete with each other on several levels. For example, activating one generally weakens the other. Since priming religion is known to increase moral behaviour and moral sensitivity, priming science might be expected to have the opposite effect. However, it was recently demonstrated that, on the contrary, science priming increases moral sensitivity as well. The present set of studies sought to replicate this effect and test two explanations for it. Study 1 used a sentence unscrambling task for implicitly priming the concept of science but failed to replicate its effect on moral sensitivity, presumably due to a ceiling effect. Study 2 replicated the effect with a new measure of moral sensitivity. Study 3 tested whether science-related words create this effect by activating the idea of secular authority or by activating analytic thinking. It was demonstrated that words related to secular authority, but not words related to analytic thinking, produced a similar increase in moral sensitivity. Religiosity level of the participants did not influence this basic finding. The results are consistent with the hypothesis that science as a secular institution has overtaken some of the functions of religion in modern societies.
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2003-01-01
A wind tunnel experiment for characterizing the aerodynamic and propulsion forces and moments acting on a research model airplane is described. The model airplane, called the Free-flying Airplane for Sub-scale Experimental Research (FASER), is a modified off-the-shelf radio-controlled model airplane with a 7-ft wingspan, a tractor propeller driven by an electric motor, and aerobatic capability. FASER was tested in the NASA Langley 12-foot Low-Speed Wind Tunnel, using a combination of traditional sweeps and modern experiment design. Power level was included as an independent variable in the wind tunnel test, to allow characterization of power effects on aerodynamic forces and moments. A modeling technique that employs multivariate orthogonal functions was used to develop accurate analytic models for the aerodynamic and propulsion force and moment coefficient dependencies from the wind tunnel data. Efficient methods for generating orthogonal modeling functions, expanding the orthogonal modeling functions in terms of ordinary polynomial functions, and analytical orthogonal blocking were developed and discussed. The resulting models comprise a set of smooth, differentiable functions for the non-dimensional aerodynamic force and moment coefficients in terms of ordinary polynomials in the independent variables, suitable for nonlinear aircraft simulation.
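The orthogonalization step described above can be illustrated with a QR decomposition standing in for Gram-Schmidt: candidate polynomial regressors are orthogonalized, coefficients are estimated in the decoupled basis, and the model is then expanded back into ordinary polynomial terms. The data, variables, and model terms below are hypothetical, not the FASER wind tunnel results.

```python
import numpy as np

# Hypothetical: model a force coefficient as a polynomial in alpha (angle of
# attack) and delta (control deflection), using orthogonalized regressors.
rng = np.random.default_rng(0)
alpha = rng.uniform(-10, 10, 200)
delta = rng.uniform(-5, 5, 200)
y = 0.02 + 0.11 * alpha + 0.005 * alpha**2 + 0.03 * delta + rng.normal(0, 0.01, 200)

# Candidate ordinary-polynomial modeling functions
X = np.column_stack([np.ones_like(alpha), alpha, alpha**2, delta, alpha * delta])

# Orthogonalize the candidate regressors (QR plays the role of Gram-Schmidt);
# coefficients in the orthogonal basis are decoupled, so terms can be kept or
# dropped independently before expanding back to ordinary polynomials.
Q, R = np.linalg.qr(X)
g = Q.T @ y                      # coefficients in the orthogonal basis
theta = np.linalg.solve(R, g)    # expanded ordinary polynomial coefficients
print(theta)                     # approx [0.02, 0.11, 0.005, 0.03, ~0]
```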
Analytical performance of a bronchial genomic classifier.
Hu, Zhanzhi; Whitney, Duncan; Anderson, Jessica R; Cao, Manqiu; Ho, Christine; Choi, Yoonha; Huang, Jing; Frink, Robert; Smith, Kate Porta; Monroe, Robert; Kennedy, Giulia C; Walsh, P Sean
2016-02-26
The current standard practice of lung lesion diagnosis often leads to inconclusive results, requiring additional diagnostic follow up procedures that are invasive and often unnecessary due to the high benign rate in such lesions (Chest 143:e78S-e92, 2013). The Percepta bronchial genomic classifier was developed and clinically validated to provide more accurate classification of lung nodules and lesions that are inconclusive by bronchoscopy, using bronchial brushing specimens (N Engl J Med 373:243-51, 2015, BMC Med Genomics 8:18, 2015). The analytical performance of the Percepta test is reported here. Analytical performance studies were designed to characterize the stability of RNA in bronchial brushing specimens during collection and shipment; analytical sensitivity defined as input RNA mass; analytical specificity (i.e. potentially interfering substances) as tested on blood and genomic DNA; and assay performance studies including intra-run, inter-run, and inter-laboratory reproducibility. RNA content within bronchial brushing specimens preserved in RNAprotect is stable for up to 20 days at 4 °C with no changes in RNA yield or integrity. Analytical sensitivity studies demonstrated tolerance to variation in RNA input (157 ng to 243 ng). Analytical specificity studies utilizing cancer positive and cancer negative samples mixed with either blood (up to 10 % input mass) or genomic DNA (up to 10 % input mass) demonstrated no assay interference. The test is reproducible from RNA extraction through to Percepta test result, including variation across operators, runs, reagent lots, and laboratories (standard deviation of 0.26 for scores on > 6 unit scale). Analytical sensitivity, analytical specificity and robustness of the Percepta test were successfully verified, supporting its suitability for clinical use.
On-orbit evaluation of the control system/structural mode interactions on OSO-8
NASA Technical Reports Server (NTRS)
Slafer, L. I.
1980-01-01
The Orbiting Solar Observatory-8 experienced severe structural mode/control loop interaction problems during the spacecraft development. Extensive analytical studies, using the hybrid coordinate modeling approach, and comprehensive ground testing were carried out in order to achieve the system's precision pointing performance requirements. A recent series of flight tests was conducted with the spacecraft in which a wide bandwidth, high resolution telemetry system was utilized to evaluate the on-orbit flexible dynamics characteristics of the vehicle along with the control system performance. This paper describes the results of these tests, reviewing the basic design problem, analytical approach taken, ground test philosophy, and on-orbit testing. Data from the tests were used to determine the primary mode frequency, damping, and servo coupling dynamics for the on-orbit condition. Additionally, the test results have verified analytically predicted differences between the on-orbit and ground test environments. The test results have led to a validation of both the analytical modeling and servo design techniques used during the development of the control system, and also verified the approach taken to vehicle and servo ground testing.
NASA Technical Reports Server (NTRS)
Mason, P. W.; Harris, H. G.; Zalesak, J.; Bernstein, M.
1974-01-01
The methods and procedures used in the analysis and testing of the scale model are reported together with the correlation of the analytical and experimental results. The model, the NASTRAN finite element analysis, and results are discussed. Tests and analytical investigations are also reported.
Krleza, Jasna Lenicek; Dorotic, Adrijana; Grzunov, Ana
2017-02-15
Proper standardization of laboratory testing requires assessment of performance after the tests are performed, known as the post-analytical phase. A nationwide external quality assessment (EQA) scheme implemented in Croatia in 2014 includes a questionnaire on post-analytical practices, and the present study examined laboratory responses in order to identify current post-analytical phase practices and identify areas for improvement. In four EQA exercises between September 2014 and December 2015, 145-174 medical laboratories across Croatia were surveyed using the Module 11 questionnaire on the post-analytical phase of testing. Based on their responses, the laboratories were evaluated on four quality indicators: turnaround time (TAT), critical values, interpretative comments and procedures in the event of abnormal results. Results were presented as absolute numbers and percentages. Just over half of laboratories (56.3%) monitored TAT. Laboratories varied substantially in how they dealt with critical values. Most laboratories (65-97%) issued interpretative comments with test results. One third of medical laboratories (30.6-33.3%) issued abnormal test results without confirming them in additional testing. Our results suggest that the nationwide post-analytical EQA scheme launched in 2014 in Croatia has yet to be implemented to the full. To close the gaps between existing recommendations and laboratory practice, laboratory professionals should focus on ensuring that TAT is monitored and lists of critical values are established within laboratories. Professional bodies/institutions should focus on clarifying and harmonizing rules so that practices for adding interpretative comments to laboratory test results and for dealing with abnormal test results are standardized.
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2013 CFR
2013-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2012 CFR
2012-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2010 CFR
2010-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2011 CFR
2011-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
42 CFR 493.801 - Condition: Enrollment and testing of samples.
Code of Federal Regulations, 2014 CFR
2014-10-01
... subpart. (2)(i) Designate the program(s) to be used for each specialty, subspecialty, and analyte or test... procedures, in accordance with § 493.1236(c)(1). (3) For each specialty, subspecialty and analyte or test...
NASA Astrophysics Data System (ADS)
Samborski, Sylwester; Valvo, Paolo S.
2018-01-01
The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
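For reference, the nodal VCCT estimate of the mode II strain energy release rate commonly used with three-dimensional finite element models has the textbook form below; the symbols are generic and the expression is not quoted from the paper or from the Abaqus implementation.

```latex
% Generic VCCT estimate of the mode II SERR at a delamination-front node
% (textbook form; symbols are illustrative):
%   F_x        : shear nodal force at the crack-tip node
%   \Delta u_x : relative sliding displacement of the node pair behind the tip
%   \Delta a   : length of the crack-tip element, b : width attributed to the node
\begin{equation}
  G_{II} \;=\; \frac{F_x \,\Delta u_x}{2\, b\, \Delta a}
\end{equation}
```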
Exploring Teacher-Student Interactions and Moral Reasoning Practices in Drama Classrooms
ERIC Educational Resources Information Center
Freebody, Kelly
2010-01-01
The research reported here brings together three settings of conceptual and methodological inquiry: the sociological setting of socio-economic theory; the curricular/pedagogic setting of educational drama; and the analytic setting of ethnomethodologically informed analyses of conversation analysis and membership categorisation analysis. Students…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moore, Murray E.
Objective: Develop a set of peer-reviewed and verified analytical methods to adjust HEPA filter performance to different flow rates, temperatures and altitudes. Experimental testing will measure HEPA filter flow rate, pressure drop and efficiency to verify the analytical approach. Nuclear facilities utilize HEPA (High Efficiency Particulate Air) filters to purify air flow for workspace ventilation. However, the ASME AG-1 technical standard (Code on Nuclear Air and Gas Treatment) does not adequately describe air flow measurement units for HEPA filter systems. Specifically, the AG-1 standard does not differentiate between volumetric air flow in ACFM (actual cubic feet per minute) compared to mass flow measured in SCFM (standard cubic feet per minute). More importantly, the AG-1 standard has an overall deficiency for using HEPA filter devices at different air flow rates, temperatures, and altitudes. Technical Approach: The collection efficiency and pressure drops of 18 different HEPA filters will be measured over a range of flow rates, temperatures and altitudes. The experimental results will be compared to analytical scoping calculations. Three manufacturers have allocated six HEPA filters each for this effort. The 18 filters will be tested at two different flow rates, two different temperatures and two different altitudes. The 36 total tests will be conducted at two different facilities: the ATI Test facilities (Baltimore MD) and the Los Alamos National Laboratory (Los Alamos NM). The Radiation Protection RP-SVS group at Los Alamos has an aerosol wind tunnel that was originally designed to evaluate small air samplers. In 2010, modifications were started to convert the wind tunnel for HEPA filter testing. (Extensive changes were necessary for the required aerosol generators, HEPA test fixtures, temperature control devices and measurement capabilities.) To date, none of these modification activities have been funded through a specific DOE or NNSA program. This is expected to require six months of time, after receipt of funding. Benefits: US DOE facilities that use HEPA filters will benefit from access to the new operational measurement methods. Uncertainty and guesswork will be removed from HEPA filter operations.
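The ACFM/SCFM distinction the proposal highlights reduces, under an ideal-gas assumption, to a pressure and temperature correction. The short Python sketch below illustrates that conversion; the standard reference conditions chosen here are an assumption, since conventions vary between industries.

```python
def acfm_to_scfm(acfm, p_actual_kpa, t_actual_c,
                 p_std_kpa=101.325, t_std_c=21.1):
    """Ideal-gas conversion from actual to standard volumetric flow.
    The standard conditions are an assumption (conventions differ)."""
    t_actual_k = t_actual_c + 273.15
    t_std_k = t_std_c + 273.15
    return acfm * (p_actual_kpa / p_std_kpa) * (t_std_k / t_actual_k)

# Example: 1000 ACFM measured at roughly Los Alamos altitude (~77 kPa) and 30 degC
print(acfm_to_scfm(1000, p_actual_kpa=77.0, t_actual_c=30.0))  # about 738 SCFM
```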
ERIC Educational Resources Information Center
Nic Giolla Mhichíl, Mairéad; van Engen, Jeroen; Ó Ciardúbháin, Colm; Ó Cléircín, Gearóid; Appel, Christine
2014-01-01
This paper sets out to construct and present the evolving conceptual framework of the SpeakApps projects to consider the application of learning analytics to facilitate synchronous and asynchronous oral language skills within this CALL context. Drawing from both the CALL and wider theoretical and empirical literature of learner analytics, the…
The analytical representation of viscoelastic material properties using optimization techniques
NASA Technical Reports Server (NTRS)
Hill, S. A.
1993-01-01
This report presents a technique to model viscoelastic material properties with a function of the form of the Prony series. Generally, the method employed to determine the function constants requires assuming values for the exponential constants of the function and then resolving the remaining constants through linear least-squares techniques. The technique presented here allows all the constants to be analytically determined through optimization techniques. This technique is employed in a computer program named PRONY and makes use of a commercially available optimization tool developed by VMA Engineering, Inc. The PRONY program was utilized to compare the technique against previously determined models for solid rocket motor TP-H1148 propellant and V747-75 Viton fluoroelastomer. In both cases, the optimization technique generated functions that modeled the test data with at least an order of magnitude better correlation. This technique has demonstrated the capability to use small or large data sets and to use data sets that have uniformly or nonuniformly spaced data pairs. The reduction of experimental data to accurate mathematical models is a vital part of most scientific and engineering research. This technique of regression through optimization can be applied to other mathematical models that are difficult to fit to experimental data through traditional regression techniques.
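A present-day equivalent of the regression-through-optimization idea is easy to sketch with scipy: all Prony series constants, including the exponential time constants, are fitted in a single bounded least-squares problem. The series length, synthetic data, and bounds below are assumptions for illustration, not the TP-H1148 or Viton data, and scipy stands in for the commercial optimizer mentioned in the report.

```python
import numpy as np
from scipy.optimize import least_squares

def prony(t, params, n_terms=3):
    """G(t) = G_inf + sum_i G_i * exp(-t / tau_i)."""
    g_inf = params[0]
    g = params[1:1 + n_terms]
    tau = params[1 + n_terms:]
    return g_inf + np.sum(g[:, None] * np.exp(-t[None, :] / tau[:, None]), axis=0)

# Synthetic relaxation data (stand-in for measured test data)
t = np.logspace(-2, 3, 60)
true = np.array([1.0, 5.0, 3.0, 2.0, 0.1, 10.0, 200.0])
data = prony(t, true) * (1 + 0.01 * np.random.default_rng(0).normal(size=t.size))

# Fit all constants, including the exponential (tau) terms, in one optimization
x0 = np.array([0.5, 1.0, 1.0, 1.0, 0.05, 5.0, 500.0])
res = least_squares(lambda p: prony(t, p) - data, x0,
                    bounds=(1e-6, np.inf))   # keep moduli and time constants positive
print(res.x)
```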
Modeling of soil water retention from saturation to oven dryness
Rossi, Cinzia; Nimmo, John R.
1994-01-01
Most analytical formulas used to model moisture retention in unsaturated porous media have been developed for the wet range and are unsuitable for applications in which low water contents are important. We have developed two models that fit the entire range from saturation to oven dryness in a practical and physically realistic way with smooth, continuous functions that have few parameters. Both models incorporate a power law and a logarithmic dependence of water content on suction, differing in how these two components are combined. In one model, functions are added together (model “sum”); in the other they are joined smoothly together at a discrete point (model “junction”). Both models also incorporate recent developments that assure a continuous derivative and force the function to reach zero water content at a finite value of suction that corresponds to oven dryness. The models have been tested with seven sets of water retention data that each cover nearly the entire range. The three-parameter sum model fits all data well and is useful for extrapolation into the dry range when data for it are unavailable. The two-parameter junction model fits most data sets almost as well as the sum model and has the advantage of being analytically integrable for convenient use with capillary-bundle models to obtain the unsaturated hydraulic conductivity.
Ferreira, Ana P; Tobyn, Mike
2015-01-01
In the pharmaceutical industry, chemometrics is rapidly establishing itself as a tool that can be used at every step of product development and beyond: from early development to commercialization. This set of multivariate analysis methods allows the extraction of information contained in large, complex data sets, thus contributing to increased product and process understanding, which is at the core of the Food and Drug Administration's Process Analytical Technology (PAT) Guidance for Industry and the International Conference on Harmonisation's Pharmaceutical Development guideline (Q8). This review is aimed at providing pharmaceutical industry professionals with an introduction to multivariate analysis and how it is being adopted and implemented by companies in the transition from "quality-by-testing" to "quality-by-design". It starts with an introduction to multivariate analysis and the two methods most commonly used: principal component analysis and partial least squares regression, their advantages, common pitfalls and requirements for their effective use. That is followed by an overview of the diverse areas of application of multivariate analysis in the pharmaceutical industry: from the development of real-time analytical methods to definition of the design space and control strategy, from formulation optimization during development to the application of quality-by-design principles to improve manufacture of existing commercial products.
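For readers new to the two workhorse methods named above, the following scikit-learn sketch shows the typical division of labour: PCA for an unsupervised overview of batch-to-batch variability, PLS regression for relating the same variables to a quality attribute. The data are simulated and the component counts arbitrary; in practice both would be chosen by cross-validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

# Hypothetical batch data: rows = batches, columns = process/spectral variables
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=50)   # e.g. an assay value

# PCA: unsupervised overview of batch-to-batch variability
scores = PCA(n_components=2).fit_transform(X)

# PLS: relate the same variables to the quality attribute y
pls = PLSRegression(n_components=3).fit(X, y)
print(pls.score(X, y))   # R^2 on the training data (cross-validate in practice)
```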
Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.
Koutny, Tomas
2016-09-01
We developed a new model of glucose dynamics. The model calculates blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments. We used an analytical method to determine the model parameters. In this study, we validate the model with subjects with type 1 diabetes. In addition, we combine the analytic method with meta-differential evolution. To validate the model with human patients, we obtained a data set from a type 1 diabetes study coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from continuously measured interstitial fluid glucose level. We used 6 different scenarios to ensure robust validation of the calculation. Over 96% of calculated blood glucose levels fit the A+B zones of the Clarke Error Grid. No data set required any correction of model parameters during the time course of measuring. We successfully verified the possibility of calculating a continuous blood glucose level of subjects with type 1 diabetes. This study signals a successful transition of our research from an animal experiment to a human patient. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
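The paper's meta-differential evolution is not described in the abstract; as a stand-in, the sketch below calibrates a toy interstitial-lag model with scipy's ordinary differential_evolution to show the general parameter-estimation pattern. The model form, rate constant, and bounds are assumptions, not the authors' glucose dynamics model or algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for a transcapillary transport model: interstitial glucose i(t)
# lags blood glucose b(t) with rate constant k (illustrative only).
t = np.linspace(0, 120, 121)                     # minutes
b_true = 6 + 2 * np.sin(t / 25)                  # known blood glucose trace (mmol/L)
k_true = 0.08
i_meas = np.empty_like(t)
i_meas[0] = b_true[0]
for j in range(1, t.size):                       # Euler integration of di/dt = k (b - i)
    i_meas[j] = i_meas[j - 1] + k_true * (b_true[j - 1] - i_meas[j - 1])
i_meas += np.random.default_rng(0).normal(0, 0.05, t.size)

def loss(params):
    k, i0 = params
    i = np.empty_like(t)
    i[0] = i0
    for j in range(1, t.size):
        i[j] = i[j - 1] + k * (b_true[j - 1] - i[j - 1])
    return np.mean((i - i_meas) ** 2)

result = differential_evolution(loss, bounds=[(0.001, 1.0), (3.0, 10.0)], seed=1)
print(result.x)   # recovered rate constant and initial interstitial value
```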
ANALYTiC: An Active Learning System for Trajectory Classification.
Soares Junior, Amilcar; Renso, Chiara; Matwin, Stan
2017-01-01
The increasing availability and use of positioning devices has resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process.
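A minimal uncertainty-sampling loop of the kind that underlies such active learning tools can be sketched with scikit-learn; the features, labels, classifier, and query budget below are placeholders rather than real trajectory descriptors or the ANALYTiC implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Uncertainty sampling (illustrative): features would be trajectory descriptors
# such as speed or duration, labels the semantic annotations from an expert.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # hypothetical ground truth

# Seed the labeled set with a few examples of each class
labeled = list(np.concatenate([np.where(y == 1)[0][:5], np.where(y == 0)[0][:5]]))
pool = [i for i in range(500) if i not in labeled]

for _ in range(20):                                # 20 annotation rounds
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain trajectory
    labeled.append(query)                          # the "expert" supplies its label
    pool.remove(query)

print(clf.score(X, y))   # accuracy after the annotation rounds
```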
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
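For orientation, the classical Lorenz system whose attractor is at issue is, in its standard form,

    \dot{x} = \sigma (y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z,

with the classical attractor observed numerically near \sigma = 10, \beta = 8/3, \rho = 28. The extended Lorenz model in the Yudovich-Morioka-Shimizu form studied in the paper modifies this system; its exact parametrization is not reproduced here.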
Creating Web Area Segments with Google Analytics
Segments allow you to quickly access data for a predefined set of Sessions or Users, such as government or education users, or sessions in a particular state. You can then apply this segment to any report within the Google Analytics (GA) interface.
NHEXAS PHASE I REGION 5 STUDY--METALS IN BLOOD ANALYTICAL RESULTS
This data set includes analytical results for measurements of metals in 165 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood samples were collected by venipun...
NHEXAS PHASE I REGION 5 STUDY--VOCS IN BLOOD ANALYTICAL RESULTS
This data set includes analytical results for measurements of VOCs (volatile organic compounds) in 145 blood samples. These samples were collected to examine the relationships between personal exposure measurements, environmental measurements, and body burden. Venous blood sample...
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROCEDURES Equipment, Measurement Instruments, Fuel, and Analytical Gas Specifications § 1066.101 Overview. (a) This subpart addresses equipment related to emission testing, as well as test fuels and... specifications for fuels, engine fluids, and analytical gases; these specifications apply for testing under this...
MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION
Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...
Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D
2006-08-01
A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for the analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets used to develop the FT-midIR general equations, comprising 75% (102 samples) and 25% (36 samples) of the samples remaining after removal of spectral outliers, were built from 140 commercial and home-made bakery products. The analyte concentrations in the samples used for this study are within the typical range found in these kinds of products. The two sets were independent; the validation set was used only for testing the equations. The criterion used for selecting the validation set was to choose samples with the highest number of neighbours and the greatest separation between them (H < 0.6). Partial least squares regression and cross validation were used for multivariate calibration. The FT-midIR method requires no post-extraction manipulation and gives information about the fatty acid profile in two minutes. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision and other fatty acids with good precision according to the Shenk criteria (R² ≥ 0.90 with SEP = 1-1.5 SEL, and R² = 0.70-0.89 with SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional GC-MS method. At the 95% significance level, the differences between the values obtained for the different fatty acids were within experimental error.
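As a hedged illustration of the multivariate calibration step described above (partial least squares regression assessed by cross validation), the Python sketch below uses scikit-learn on hypothetical FT-midIR spectra; the data, component count, and response are placeholders, not the published calibration.

    # Hedged PLS calibration sketch on hypothetical FT-midIR spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(102, 600))                  # hypothetical training spectra (absorbances)
    y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=102)   # stand-in fatty acid content

    pls = PLSRegression(n_components=8)
    y_cv = cross_val_predict(pls, X, y, cv=10)       # cross-validated predictions

    sep = np.sqrt(np.mean((y_cv.ravel() - y) ** 2))  # standard error of prediction
    r2 = 1 - np.sum((y - y_cv.ravel()) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"cross-validated R^2 = {r2:.3f}, SEP = {sep:.3f}")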
Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis
NASA Astrophysics Data System (ADS)
Ledoux, Yann; Sergent, Alain; Arrieux, Robert
2007-05-01
Finite element simulation is a very useful tool in the deep drawing industry, used in particular for the development and validation of new stamping tools, where it reduces the cost and time of tooling design and set-up. One of the main difficulties in obtaining good agreement between the simulation and the real process lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, ...) and of the parameters that model the material behavior. Indeed, on the press shop floor, a variation of the formed part geometry is often observed when the sheet batch changes, reflecting the variability of material properties between batches. This variability is probably one of the main sources of process deviation once the process has been set up, which is why it is important to study the influence of material data variation on the geometry of a classical stamped part. An omega-shaped part was chosen for its simplicity, because it is representative of automotive parts (car body reinforcements), and because it shows significant springback deviations. An isotropic behaviour law is assumed. The impact of statistical deviations of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested. A Gaussian distribution is assumed, and the resulting geometry variation is studied by FE simulation. A second approach is also considered, in which the process variability is represented by a mathematical model: as a function of the input parameter variability, an analytical model is defined that yields the part geometry variability around the nominal shape. These two approaches make it possible to predict the process capability as a function of the material parameter variability.
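One way to realize the statistical study described above is a Monte Carlo propagation of the Gaussian parameter scatter through a surrogate of the FE model; the sketch below is a purely hypothetical illustration (linear surrogate with made-up nominal values, standard deviations, and sensitivities), not the authors' analytical model.

    # Hedged Monte Carlo sketch: propagate Gaussian scatter of material and friction
    # parameters through a hypothetical surrogate of the springback response.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Nominal values and standard deviations are illustrative placeholders.
    K      = rng.normal(500.0, 15.0, n)   # hardening coefficient (MPa)
    n_exp  = rng.normal(0.20, 0.01, n)    # hardening exponent
    sigma0 = rng.normal(180.0, 8.0, n)    # initial yield stress (MPa)
    mu     = rng.normal(0.10, 0.01, n)    # friction coefficient

    # Hypothetical linear surrogate for a springback angle (degrees) around the nominal shape.
    springback = 2.0 + 0.004 * (K - 500) - 8.0 * (n_exp - 0.20) \
                 + 0.01 * (sigma0 - 180) - 5.0 * (mu - 0.10)

    print("mean springback: %.2f deg, std: %.2f deg" % (springback.mean(), springback.std()))
    # The spread of this distribution, compared against geometric tolerances,
    # gives an estimate of the process capability discussed in the abstract.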
Najat, Dereen
2017-01-01
Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results.
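The between-hospital comparison above relies on a proportional Z test; a minimal sketch with hypothetical rejection counts is shown below, using the proportions_ztest function from statsmodels. The counts are illustrative placeholders, not figures from the study.

    # Hedged sketch of a two-sample proportion Z test on hypothetical rejection counts.
    from statsmodels.stats.proportion import proportions_ztest

    rejected = [120, 95]      # hypothetical rejected samples in two hospitals
    observed = [550, 560]     # hypothetical samples observed in each hospital

    z_stat, p_value = proportions_ztest(count=rejected, nobs=observed)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
    # A small p-value would indicate that the rejection rates of the two
    # hospitals differ beyond what sampling variation alone would explain.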
Solubility Limits of Dibutyl Phosphoric Acid in Uranium Solutions at SRS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, M.C.; Pierce, R.A.; Ray, R.J.
1998-06-01
The Savannah River Site has enriched uranium (EU) solution which has been stored for almost 10 years since being purified in the second uranium cycle of the H area solvent extraction process. The concentrations in solution are ~6 g/L U and about 0.1 M nitric acid. Residual tributylphosphate in the solutions has slowly hydrolyzed to form dibutyl phosphoric acid (HDBP) at concentrations averaging 50 mg/L. Uranium is known to form compounds with DBP which have limited solubility. The potential to form uranium-DBP solids raises a nuclear criticality safety issue. SRTC tests have shown that U-DBP solids will precipitate at concentrations potentially attainable during storage of enriched uranium solutions. Evaporation of the existing EUS solution without additional acidification could result in the precipitation of U-DBP solids if the DBP concentration in the resulting solution exceeds 110 ppm at ambient temperature. The same potential exists for evaporation of unwashed 1CU solutions. The most important variables of interest for present plant operations are the HNO3 and DBP concentrations. Temperature is also an important variable controlling precipitation. The data obtained in these tests can be used to set operating and safety limits for the plant. It is recommended that the data for 0 degrees C with 0.5 M HNO3 be used for setting the limits. The limit would be 80 mg/L, which is 3 standard deviations below the average of 86 mg/L observed in the tests. The data show that super-saturation can occur when the DBP concentration is as much as 50 percent above the solubility limit. However, super-saturation cannot be relied on for maintaining nuclear criticality safety. The analytical method for determining DBP concentration in U solutions was improved so that analyses for a solution are accurate to within 10 percent. However, the overall uncertainty of results for periodic samples of the existing EUS solutions was only reduced slightly. Thus, sampling appears to be the largest portion of the uncertainty for EUS sample results, although the number of samples analyzed here is low, which could contribute to higher uncertainty. The analytical method can be transferred to the plant analytical labs for more routine analysis of samples.
NASA Technical Reports Server (NTRS)
Akle, W.
1983-01-01
This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS) and to scale the resulting data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna and encompasses a significant range of LSS technology issues in the areas of structural/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna, with special emphasis placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems applicable to LSS were selected from existing or on-going technology developments; laser ranging and angulation systems, presently in breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A one-third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground to orbit, as well as size scaling; other benefits are cost and the ability to perform reasonable ground tests. Detailed costing of the various tests and measurement systems was derived and is included in the report.
NASA Technical Reports Server (NTRS)
Young, Roy
2006-01-01
The Solar Sail Propulsion investment area has been one of the three highest priorities within the In-Space Propulsion Technology (ISPT) Project. In the fall of 2003, the NASA Headquarters' Science Mission Directorate provided funding and direction to mature the technology as far as possible through ground research and development from TRL 3 to 6 in three years. A group of experts from government, industry, and academia convened in Huntsville, Alabama to define technology gaps between what was needed for science missions to the inner solar system and the current state of the art in ultralightweight materials and gossamer structure design. This activity set the roadmap for development. The centerpiece of the development would be the ground demonstration of scalable solar sail systems including masts, sails, deployment mechanisms, and attitude control hardware and software. In addition, new materials would be subjected to anticipated space environments to quantify effects and assure mission life. Also, because solar sails are huge structures, and it is not feasible to validate the technology by ground test at full scale, a multi-discipline effort was established to develop highly reliable analytical models to serve as mission assurance evidence in future flight program decision-making. Two separate contractor teams were chosen to develop the SSP System Ground Demonstrator (SGD). After a three month conceptual mission/system design phase, the teams developed a ten meter diameter pathfinder set of hardware and subjected it to thermal vacuum tests to compare analytically predicted structural behavior with measured characteristics. This process developed manufacturing and handling techniques and refined the basic design. In 2005, both contractor teams delivered 20 meter, four quadrant sail systems to the largest thermal vacuum chamber in the world in Plum Brook, Ohio, and repeated the tests. Also demonstrated was the deployment and articulation of attitude control mechanisms. In conjunction with these tests, the stowed sails were subjected to launch vibration and ascent vent tests. Other investments studied radiation effects on the solar sail materials, investigated spacecraft charging issues, developed shape measuring techniques and instruments, produced advanced trajectory modeling capabilities, and identified and resolved gossamer structure dynamics issues. Technology validation flight and application to a Heliophysics science mission is on the horizon.
Fluorescence In Situ Hybridization Probe Validation for Clinical Use.
Gu, Jun; Smith, Janice L; Dowling, Patricia K
2017-01-01
In this chapter, we provide a systematic overview of the published guidelines and validation procedures for fluorescence in situ hybridization (FISH) probes for clinical diagnostic use. FISH probes, which are classified as molecular probes or analyte-specific reagents (ASRs), have been extensively used in vitro for both clinical diagnosis and research. Most commercially available FISH probes in the United States are strictly regulated by the U.S. Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), the Centers for Medicare & Medicaid Services (CMS), the Clinical Laboratory Improvement Amendments (CLIA), and the College of American Pathologists (CAP). Although home-brewed FISH probes (defined as probes made in-house or acquired from a source that does not supply them to other laboratories) are not regulated by these agencies, they too must undergo the same individual validation process prior to clinical use as their commercial counterparts. Validation of a FISH probe involves initial validation and ongoing verification of the test system. Initial validation includes assessment of a probe's technical specifications, establishment of its standard operational procedure (SOP), determination of its clinical sensitivity and specificity, development of its cutoff, baseline, and normal reference ranges, gathering of analytics, confirmation of its applicability to a specific research or clinical setting, testing of samples with or without the abnormalities that the probe is meant to detect, staff training, and report building. Ongoing verification of the test system involves testing additional normal and abnormal samples using the same method employed during the initial validation of the probe.
Analytical procedure validation and the quality by design paradigm.
Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno
2015-01-01
Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it a control strategy can be set.
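As a hedged illustration of the probabilistic statements mentioned above, the sketch below estimates by simple Monte Carlo, under assumed bias and precision values, the probability that a future reportable result falls within acceptance limits; the acceptance limits, bias, and precision figures are placeholders, not those of any validated procedure.

    # Hedged Monte Carlo sketch: probability that a future measurement falls within
    # +/-5% acceptance limits, given assumed (hypothetical) bias and precision.
    import numpy as np

    rng = np.random.default_rng(0)
    true_value = 100.0
    bias = 1.0                     # assumed systematic error (hypothetical)
    intermediate_precision = 2.0   # assumed standard deviation (hypothetical)

    future_results = rng.normal(true_value + bias, intermediate_precision, size=100_000)
    within_limits = np.mean(np.abs(future_results - true_value) <= 5.0)
    print(f"P(result within +/-5% of target) ~ {within_limits:.3f}")
    # In a QbD setting, a probability statement of this kind, rather than point
    # estimates of bias and precision alone, supports the fitness-for-purpose decision.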
42 CFR 493.1289 - Standard: Analytic systems quality assessment.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 5 2010-10-01 2010-10-01 false Standard: Analytic systems quality assessment. 493... Nonwaived Testing Analytic Systems § 493.1289 Standard: Analytic systems quality assessment. (a) The... through 493.1283. (b) The analytic systems quality assessment must include a review of the effectiveness...
NASA Astrophysics Data System (ADS)
Győrffy, Werner; Knizia, Gerald; Werner, Hans-Joachim
2017-12-01
We present the theory and algorithms for computing analytical energy gradients for explicitly correlated second-order Møller-Plesset perturbation theory (MP2-F12). The main difficulty in F12 gradient theory arises from the large number of two-electron integrals for which effective two-body density matrices and integral derivatives need to be calculated. For efficiency, the density fitting approximation is used for evaluating all two-electron integrals and their derivatives. The accuracies of various previously proposed MP2-F12 approximations [3C, 3C(HY1), 3*C(HY1), and 3*A] are demonstrated by computing equilibrium geometries for a set of molecules containing first- and second-row elements, using double-ζ to quintuple-ζ basis sets. Generally, the convergence of the bond lengths and angles with respect to the basis set size is strongly improved by the F12 treatment, and augmented triple-ζ basis sets are sufficient to closely approach the basis set limit. The results obtained with the different approximations differ only very slightly. This paper is the first step towards analytical gradients for coupled-cluster singles and doubles with perturbative treatment of triple excitations, which will be presented in the second part of this series.
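For context, the conventional (non-F12) MP2 correlation energy that these gradients extend can be written in spin-orbital form as

    E^{(2)}_{\mathrm{MP2}} = \frac{1}{4} \sum_{ijab} \frac{|\langle ij \| ab \rangle|^{2}}{\varepsilon_{i} + \varepsilon_{j} - \varepsilon_{a} - \varepsilon_{b}},

where i, j label occupied and a, b virtual spin orbitals and the \varepsilon are orbital energies. The F12 ansatz augments the doubles with explicitly correlated geminal terms; the paper derives the analytical derivatives of the resulting energy with respect to nuclear displacements, which is where the large number of two-electron integral derivatives noted above enters.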
Failure of Standard Training Sets in the Analysis of Fast-Scan Cyclic Voltammetry Data.
Johnson, Justin A; Rodeberg, Nathan T; Wightman, R Mark
2016-03-16
The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.
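Principal component regression, as discussed above, combines principal component analysis of the training voltammograms with regression of concentrations onto the retained scores. The Python sketch below illustrates that construction with a scikit-learn pipeline on hypothetical data; the scan matrix, response, and component count are placeholders, not the authors' electrochemical data or processing chain.

    # Hedged principal component regression sketch on hypothetical voltammetric training data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n_scans, n_potentials = 40, 1000
    currents = rng.normal(size=(n_scans, n_potentials))        # hypothetical background-subtracted scans
    concentration = currents[:, 100] * 2.0 + rng.normal(scale=0.05, size=n_scans)  # stand-in analyte level

    pcr = make_pipeline(PCA(n_components=3), LinearRegression())
    pcr.fit(currents, concentration)

    # Predictions for new scans are only reliable if the training scans were acquired
    # under the same electrode and instrument conditions, which is the central point of the paper.
    print("training R^2:", pcr.score(currents, concentration))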
NASA Technical Reports Server (NTRS)
Rader, W. P.; Barrett, S.; Payne, K. R.
1975-01-01
Data measurement and interpretation techniques were defined for application to the first few space shuttle flights, so that the dynamic environment could be sufficiently well established to be used to reduce the cost of future payloads through more efficient design and environmental test techniques. It was concluded that: (1) initial payloads must be given comprehensive instrumentation coverage to obtain detailed definition of acoustics, vibration, and interface loads, (2) analytical models of selected initial payloads must be developed and verified by modal surveys and flight measurements, (3) acoustic tests should be performed on initial payloads to establish realistic test criteria for components and experiments in order to minimize unrealistic failures and retest requirements, (4) permanent data banks should be set up to establish statistical confidence in the data to be used, (5) a more unified design/test specification philosophy is needed, (6) additional work is needed to establish a practical testing technique for simulation of vehicle transients.